In Pentagonal Number Theory, I touched on the topic of generating functions, but now I'll give examples of generating functions being used to find explicit solutions for recurrence relations. I think this was their primary purpose; Abraham de Moivre invented them when he tried to find the exact formula for …
Yesterday, I stumbled upon this very surprising identity while reading on nested radicals: \[ \sqrt[3]{2\pm\sqrt{5}} = \frac{1\pm\sqrt{5}}{2} \] While it is easy to prove the fact after seeing it, I will try to prove this from the perspective of someone who is not …
Today, I'll prove Euler's Pentagonal Number Theorem and show how he used it to find recurrence formulae for the sum of \(n\)'s positive divisors and the partitions of \(n\). This post will be based on two papers I read last week: “An Observation on the Sums of Divisors” and …
Envelopes kept cropping up in my doodles but I never gave them much attention. Up until the day I started drawing line segments of a constant length, say \(1\), from one side of a piece of paper to an adjacent side (See Fig 1). It appeared that these lines were …
How should one go about solving this problem?
Use identities to find the exact values at $\alpha$ for the remaining five trigonometric functions.
$\cos\alpha = -\sqrt{2}/4$ and $\alpha$ is in quadrant III.
Thanks
For $\sin\alpha$, use the Pythagorean identity, $$\sin^2\alpha+\cos^2\alpha=1,$$ to obtain $\sin\alpha=\pm\sqrt{14}/4.$ Because $\alpha$ is in QIII, its sine must be negative.
For the other four, use $\tan\alpha=\frac{\sin\alpha}{\cos\alpha},$ $\sec\alpha=\frac{1}{\cos\alpha},$ $\csc\alpha=\frac{1}{\sin\alpha},$ $\cot\alpha=\frac{\cos\alpha}{\sin\alpha}.$
Outline:
In QIII, both sine and cosine are negative.
Use the Pythagorean identity: $\sin^2 \alpha+\cos^2 \alpha = 1$ to find the value of $\sin\alpha$.
Then use quotient and/or reciprocal identities to find the other trig values of $\alpha$.
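The arithmetic can be sanity-checked numerically; here is a small Python sketch (the exact values being $\sin\alpha=-\sqrt{14}/4$, $\tan\alpha=\sqrt{7}$, $\sec\alpha=-2\sqrt{2}$, $\csc\alpha=-4/\sqrt{14}$, $\cot\alpha=1/\sqrt{7}$):

```python
import math

cos_a = -math.sqrt(2) / 4
sin_a = -math.sqrt(1 - cos_a**2)   # Pythagorean identity; sine is negative in QIII

tan_a = sin_a / cos_a              # quotient identity
sec_a = 1 / cos_a                  # reciprocal identities
csc_a = 1 / sin_a
cot_a = cos_a / sin_a
```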
Ok, let me see if I can shed light on some of the questions raised.
"What fails if I try the construction with non-smooth stuff?" This question is a bit unspecific, a lot of things can fail, depending on what you want. For instance, if you want higher Chow groups to come out of the construction, you need homotopy invariance which is false for non-smooth schemes.
For the construction of categories of motives starting from a site of (possibly non-smooth) schemes, you would usually use some topology like $h$-, $qfh$- or $cdh$-topology. In these topologies, schemes will be locally smooth because some sort of resolution of singularities gives you coverings. In some cases (best with rational coefficients), there are comparison results to the Nisnevich topology.
The Lefschetz motive questions: If you take Voevodsky's embedding from Chow motives into $DM_{gm}$, the image of the Lefschetz motive is the Tate motive $\mathbb{Z}(1)[2]$. This is the reduced motive of $\mathbb{G}_m$ (or of $\mathbb{P}^1$, appropriately shifted), i.e., $\ker\left(C_\ast\mathbb{Z}_{tr}(\mathbb{G}_{m,S})\to C_\ast\mathbb{Z}_{tr}(S)\right)$.
The embedding from Chow motives to Voevodsky motives is given by mapping a variety $X$ to $M(X)=C_\ast\mathbb{Z}_{tr}(X)$. Since Voevodsky's category is idempotent complete and has transfers, this assignment factors through Chow motives. Hence, the embedding takes a Chow motive $(X,p)$ to the summand of $M(X)$ given by the projector $p$. I should think that this is discussed in the red book (Friedlander, Suslin, Voevodsky: Cycles, transfers and motivic homology theories), at least the book contains the theorem that Chow motives embed as full subcategory of Voevodsky motives.
I would say that there is no geometric meaning to $\mathbb{Z}(-1)$ (unless you consider virtual bundles as something geometric). It is simply constructed as the $\otimes$-inverse of $\mathbb{Z}(1)$, in the same way as spectra arise from CW-complexes by inverting $S^1$ w.r.t. the smash product. Somehow, all this technical stabilization business is necessary because there is no geometric meaning to $\mathbb{Z}(-1)$ (although this is more like a philosophical belief).
The construction of motives over more general bases is currently being studied. Check out the works of Joseph Ayoub, cf. his homepage where you can find his papers. His ICM paper gives a nice overview of the state of the art. I guess the easiest construction of motives over arbitrary bases is via étale sheaves made $\mathbb{A}^1$-invariant and $\mathbb{P}^1$-stable. For the resulting category, a full six functor formalism is available...
There are also constructions using Nisnevich sheaves with transfers, cf. the work of Cisinski-Déglise, but the construction of these categories is a bit complicated in full generality because of difficulties with the intersection theory. With rational coefficients, these categories can be compared to the étale version; with integral coefficients, there are problems with the localization triangle.
Representability of smash products: I do not know of any results, but I would guess that the smash product is more likely not represented by a scheme. I am more confident saying that the smash product is most likely not represented by a smooth scheme: there is a conjecture of Asok stating that the spheres $S^{p,q}\cong S^p\wedge\mathbb{G}_m^{\wedge q}$ are not $\mathbb{A}^1$-homotopic to smooth schemes unless $p=2q$ or $p=2q-1$. Admittedly, that's a bit different from the corresponding question about motives... The right way to think about the smash product is: it's the same thing as in topology, same categorical properties etc.
Finally, I would again say that there is no geometric meaning to shifting $\mathbb{Z}(q)$. The topological analogues are exactly the Eilenberg-Mac Lane spaces, one should think of $\mathbb{Z}(q)[n]$ as the motive of $K(\mathbb{Z}(q),n)$. In topology as well, there is no real geometric relation between $K(\mathbb{Z},n)$ and $K(\mathbb{Z},n-1)$...
I would also like to point out that $\mathbb{Z}_{tr}\left(\bigwedge^n\mathbb{G}_m\right)$ does not really have a meaning, because $\mathbb{Z}_{tr}$ takes a scheme and turns it into a sheaf with transfers - you should take $\mathbb{Z}(1)$ and then take its smash powers, not the other way round.
Many accounts of the history of quantum physics explain how Planck resorted to quantizing energy in an "act of desperation" while attempting to solve blackbody radiation, only to discover by surprise that a nonzero value of $h$ in $E=nh\nu$ reproduced experimental results.
What was Planck's motivation behind the $\nu$ dependence in this expression? Did classical physics provide any hints for this frequency dependence?
Einstein used this same relation to help explain the photoelectric effect, but that came later.
Finally, to emphasize why I have this question, consider these seemingly contradictory facts:
Planck was treating the quantized EM waves as harmonic oscillators. However, the relation between energy and frequency for a classical harmonic oscillator has a square dependence: $E=\frac{1}{2}m\omega^2A^2=2\pi^2m\nu^2A^2$, where $A$ is the amplitude.

In classical electromagnetic theory, the average energy density of a plane wave in vacuum has no frequency dependence: $u=\frac{1}{2}\epsilon_oE_o^2$, where $E_o$ is the amplitude of the electric field part of the wave.

It's easy to imagine postulating $E=nh$ as a first attempt to quantize energy. The $n$ part of this expression is the quantization piece, which was a new idea that I can understand as a hopeful guess or mathematical trick, but the $\nu$ part seems a priori unmotivated, and this isn't addressed in any of the sources I looked through.
The description of the movement of bodies by their position, velocity, acceleration (and possibly higher time derivatives, such as, jerk) without concern for the underlying dynamics/forces/causes.
When to Use this Tag
Use kinematics to discuss the movement of a body in terms of position, velocity, acceleration (or, in principle, higher derivatives thereof, such as, jerk) without concern for the forces/dynamics causing this movement.
Introduction
The classical description of the movement of a (point-like) body consists of three vector quantities, defined in a suitable background coordinate system (usually $\mathbb{R}^n$ for n-dimensional problems).
The position of the body, usually denoted by either \(\vec x(t)\) or \(\vec q(t)\), as a function of the time \(t\).

The velocity, defined to be the first total time derivative of the position: \(\vec v(t) \equiv \frac{\mathrm{d}\vec x(t)}{\mathrm{d}t}\).

The acceleration, defined to be the second total time derivative of the position: \(\vec a(t) \equiv \frac{\mathrm{d}\vec v(t)}{\mathrm{d}t} = \frac{\mathrm{d}^2\vec x(t)}{\mathrm{d}t^2}\).

Special Cases

Constant Velocity
Problems in which some body travels with a constant velocity are common introductory exercises and can be solved with the difference version of the definition of velocity:
$$ \vec v = \frac{\Delta \vec x}{\Delta t} = \frac{\vec x - \vec x_0}{t - 0}\quad,$$
where we take the body to be at position $\vec x_0$ at time $t = 0$.
Constant acceleration
In some problems, the acceleration of the body is a constant $\vec a_0$, for example $\vec g$ during a free fall close to the surface of Earth. In this case, it is easy to integrate twice to calculate the position $\vec x$. With initial conditions $\vec x(0) = \vec x_0$ and $\vec v(0) = \vec v_0$, we have:
\begin{eqnarray} \vec a(t) & = & \vec a_0 \\ \vec v(t) & = & \vec a_0 t + \vec v_0 \\ \vec x(t) & = & \frac{1}{2} \vec a_0 t^2 + \vec v_0 t + \vec x_0 \end{eqnarray}
Constant Jolt
Jolt, or jerk, is the rate of change of acceleration with respect to time; i.e. $\vec j=\frac{\mathrm{d}\vec a}{\mathrm{d}t}$. In the case of a constant jolt, one may apply the Taylor expansion (or integrate directly) to find that:
\begin{eqnarray} \vec j(t) & = & \vec j_0 \\ \vec a(t) & = & \vec j_0 t + \vec a_0 \\ \vec v(t) & = & \frac{1}{2} \vec j_0 t^2 + \vec a_0 t + \vec v_0 \\ \vec x(t) & = & \frac{1}{6} \vec j_0 t^3 + \frac{1}{2} \vec a_0 t^2 + \vec v_0 t + \vec x_0 \end{eqnarray}
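These closed forms can be verified numerically; a small standalone Python sketch (the coefficients below are arbitrary illustrative values), using the fact that the third forward difference of a cubic polynomial recovers the jolt exactly:

```python
def x_const_jerk(t, x0=1.0, v0=2.0, a0=-0.5, j0=0.3):
    """1-D version of the constant-jolt position formula above."""
    return x0 + v0 * t + 0.5 * a0 * t**2 + j0 * t**3 / 6.0

# The third forward difference of a cubic is exactly j0 * h^3,
# so dividing by h^3 recovers the (constant) jolt.
h, t = 0.01, 1.7
d3 = (x_const_jerk(t + 3 * h) - 3 * x_const_jerk(t + 2 * h)
      + 3 * x_const_jerk(t + h) - x_const_jerk(t)) / h**3
```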
There are formulae, derived from first principles, that you can use to calculate the strength of interactions between two molecules. You can find an overview on Chemistry LibreTexts. The two relevant ones are:
$$U_{\mathrm{dipole}} = -\frac{2}{3kT} \frac{p^4}{(4\pi\varepsilon_0)^2r^6}$$
$$U_{\mathrm{dispersion}} = -\frac{3}{4} \frac{\alpha^2 I}{(4\pi\varepsilon_0)^2r^6}$$
$p$ is the molecular dipole moment; $r$ is the average separation between neighbouring molecules; $\alpha$ is the polarisability of the molecule, which, in a nutshell, measures how easy it is to create an induced dipole in the molecule; and $I$ is the first ionisation energy of the molecule.
If you plug all the numbers in for the hydrogen halides $\ce{HX}$, you will find that the magnitude of the dispersion forces is much larger than the magnitude of the dipole-dipole attractions, i.e. $\left|U_{\mathrm{dispersion}}\right| \gg \left|U_{\mathrm{dipole}}\right|$.
Unfortunately, I don't have all the necessary data on hand, but I can quote some results that were given in first-year lecture notes at Oxford (which should be legitimate). For $\ce{HCl}$, dispersion forces contribute $86\%$ to the intermolecular attractions, and for $\ce{HI}$, they contribute $99\%$.
Considering this fact, it is not surprising that variations in the magnitude of dispersion forces affect the boiling point much more than variations in the magnitude of the dipole-dipole attractions. If you had all the data required and evaluated the interactions using the two formulae above, you would reach the same conclusion quantitatively.
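As an illustration, here is a rough back-of-the-envelope evaluation of the two formulae for $\ce{HCl}$ in Python. All molecular parameters below (dipole moment, polarisability volume, ionisation energy, separation, temperature) are approximate values chosen for illustration, not authoritative data:

```python
# Back-of-the-envelope comparison of the two energies above for HCl.
# Illustrative inputs: p ~ 1.08 D, polarisability volume ~ 2.6 A^3,
# I ~ 12.7 eV, r ~ 4 A, T = 298 K.
k_B = 1.380649e-23          # Boltzmann constant, J/K
four_pi_eps0 = 1.11265e-10  # 4*pi*eps_0, C^2 J^-1 m^-1
debye = 3.33564e-30         # 1 debye in C m
eV = 1.602177e-19           # 1 eV in J

p = 1.08 * debye
alpha = four_pi_eps0 * 2.6e-30   # polarisability in SI units (C m^2 V^-1)
I = 12.7 * eV
r = 4.0e-10
T = 298.0

U_dipole = -(2 / (3 * k_B * T)) * p**4 / (four_pi_eps0**2 * r**6)
U_dispersion = -(3 / 4) * alpha**2 * I / (four_pi_eps0**2 * r**6)

fraction_dispersion = U_dispersion / (U_dispersion + U_dipole)
```

With these rough inputs the dispersion term comes out several times larger than the dipole-dipole term, broadly consistent with the ~86% figure quoted above.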
In fact, this trend is hardly unique to the hydrogen halides. The hydrides of the Group 15 and Group 16 elements behave exactly the same way:
$$\begin{array}{|cc|cc|}\hline\text{Species} & \text{Boiling point / }\mathrm{^\circ C} & \text{Species} & \text{Boiling point / }\mathrm{^\circ C} \\\hline\ce{PH3} & -87.5 & \ce{H2S} & -60.3 \\\ce{AsH3} & -62.4 & \ce{H2Se} & -41.3 \\\ce{SbH3} & -18.4 & \ce{H2Te} & -4 \\\hline\end{array}$$
(source: Greenwood & Earnshaw, Chemistry of the Elements 2nd ed., pp 557, 767)
Of course, the first-row hydrides are left out of the discussion because of hydrogen bonding, which makes their boiling points anomalously high.
Newform invariants
Coefficients of the \(q\)-expansion are expressed in terms of \(\beta = \sqrt{2}\). We also show the integral \(q\)-expansion of the trace form.
For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below.
For more information on an embedded modular form you can click on its label.
This newform does not admit any (nontrivial) inner twists.
$$\begin{array}{|c|c|}\hline p & \text{Sign} \\\hline 3 & -1 \\ 5 & +1 \\ 7 & +1 \\\hline\end{array}$$
This newform can be constructed as the intersection of the kernels of the following linear operators acting on \(S_{2}^{\mathrm{new}}(\Gamma_0(945))\):
\(T_{2}^{2} + 2 T_{2} - 1\) and \(T_{11}^{2} - 2 T_{11} - 1\)
Our new book (NAT)
Nonabelian algebraic topology: filtered spaces, crossed complexes, cubical homotopy groupoids, EMS Tracts in Mathematics vol 15
uses mainly cubical, rather than simplicial, sets. The reasons are explained in the Introduction: in strict cubical higher categories we can easily express
algebraic inverse to subdivision,
a simple intuition which I have found difficult to express in simplicial terms. Thus cubes are useful for local-to-global problems. This intuition is crucial for our Higher Homotopy Seifert-van Kampen Theorem, which enables new calculations of some homotopy types, and suggests a new foundation for algebraic topology at the border between homotopy and homology.
A further reason for the connections is that they enabled an equivalence between crossed modules and certain double groupoids, and later, crossed complexes and strict cubical $\omega$-groupoids.
Also cubes have a nice tensor product, and this is crucial in the book for obtaining some homotopy classification results. See Chapter 15.
I have found that with cubes I have been able to conjecture and in the end prove theorems which have enabled new nonabelian calculations in homotopy theory, e.g. of second relative homotopy groups. So I have been happy to use cubes until someone comes up with something better. ($n$-simplicial methods, in conjunction with cubical ideas, turned out, however, to be necessary for proofs in the work with J.-L. Loday.)
See also some beamer presentations available on my preprint page.
Here is a further emphasis on the above point on algebraic structures: consider the following diagram:
From left to right pictures subdivision; from right to left pictures composition. The composition idea is well formulated in terms of double categories, and that idea is easily generalised to $n$-fold categories, and is expressed well in a cubical context. In that context one can conjecture, and eventually prove, higher dimensional Seifert-van Kampen Theorems, which allow new calculations in algebraic topology. Such multiple compositions are difficult to handle in globular or simplicial terms.
The further advantage of cubes, as mentioned in above answers, is that the formula $$I^m \times I^n \cong I^{m+n}$$ makes cubes very helpful in considering monoidal and monoidal closed structures. Most of the major results of the EMS book required cubical methods for their conjecture and proof. The main results of Chapter 15 of NAT have not been done simplicially. See for example Theorem 15.6.1, on a convenient dense subcategory closed under tensor product.
Sept 5, 2015: The paper by Vezzani (arXiv:1405.4548) shows a use of cubical, rather than simplicial, methods in motivic theory; while the paper by I. Patchkoria (arXiv:1011.4870, Homology Homotopy Appl. Volume 14, Number 1 (2012), 133-158) gives a "Comparison of Cubical and Simplicial Derived Functors".
In all these cases the use of connections in cubical methods is crucial. There is more discussion on this elsewhere on mathoverflow. For us, connections arose in order to define commutative cubes in higher cubical categories: compare this paper.
See also this 2014 presentation The intuition for cubical methods in algebraic topology.
April 13, 2016. I should add some further information from Alberto Vezzani:
The cubical theory was better suited than the simplicial theory when dealing with (motives of) perfectoid spaces in characteristic 0. For example: degeneracy maps of the simplicial complex $\Delta$ in algebraic geometry are defined by sending one coordinate $x_i$ to the sum of two coordinates $y_j+y_{j+1}$. When one considers the perfectoid algebras obtained by taking all $p$-th roots of the coordinates, such maps are no longer defined, as $y_j+y_{j+1}$ doesn't have $p$-th roots in general. The cubical complex, on the contrary, is easily generalized to the perfectoid world.
November 29, 2016: There is more information in this paper on Modelling and Computing Homotopy Types: I, which can serve as an introduction to the NAT book.
I'm trying to follow Andrew Ng's notes on Support Vector Machines and had the following question.
In his notes, Ng transforms the following optimization problem [using the notion of geometric margin] of the SVM
into the following equivalent problem [using the notion of functional margin]
My question is this: how are the conditions the same? I understand how $\gamma = \frac{\hat{\gamma}}{\Vert w\Vert}$, but what is the proof of the equivalence of the "s.t." conditions?
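In the standard SVM derivation, the equivalence hinges on the scale-invariance of the geometric margin: rescaling \((w, b)\) by any \(c > 0\) multiplies the functional margin \(\hat\gamma\) by \(c\) but leaves \(\gamma = \hat\gamma/\Vert w\Vert\) (and the decision boundary) unchanged, so one is free to fix the scale of \(w\). A small numerical sketch of that invariance, on made-up toy data rather than anything from Ng's notes:

```python
import math
import random

random.seed(0)

# Made-up, linearly separable toy data: the labels are generated from the
# hyperplane itself, so every functional margin is strictly positive.
X = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20)]
w, b = (1.5, -2.0), 0.3
Y = [1 if w[0] * x1 + w[1] * x2 + b > 0 else -1 for (x1, x2) in X]

def functional_margin(w, b):
    """min_i y_i (w . x_i + b)"""
    return min(y * (w[0] * x1 + w[1] * x2 + b) for (x1, x2), y in zip(X, Y))

def geometric_margin(w, b):
    return functional_margin(w, b) / math.hypot(w[0], w[1])

# Rescaling (w, b) -> (c w, c b): functional margin scales by c,
# geometric margin is unchanged.
c = 7.0
w_scaled, b_scaled = (c * w[0], c * w[1]), c * b
```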
The Schrödinger equation describes the energy and time-evolution of a particle or system of particles, and is one of the fundamental building blocks of modern physics. In its general form, the (time-independent) Schrödinger equation looks like this: [1]

\[ \frac{-\hbar^2}{2m} \frac{\partial^2}{\partial x^2}\psi(x) + V(x)\psi(x) = E\psi(x) \]

There are relatively few situations in which the Schrödinger equation can be solved analytically, and numerical methods and approximations are one way around that analytical limitation. To demonstrate how this is possible and how a numerical solution works, what better way than to solve a system which can be solved analytically and compare the results?

In solving the Schrödinger equation, we will start with one of the simplest interesting quantum mechanical systems, the quantum mechanical harmonic oscillator. [2]

Let's first define our quantum harmonic oscillator. The general form of the Schrödinger equation for a one-dimensional harmonic oscillator reads thus:
\begin{equation}
\label{eq:sch} \frac{-\hbar^2}{2m} \frac{\partial^2}{\partial z^2}\psi(z) + \frac{m\omega^2 z^2}{2} \psi(z) = E\psi(z) \end{equation}
We will make use of the Numerov algorithm, which is particularly suited to solving second-order differential equations of the form \(y''(x) + k(x)y(x)=0\). You can read more about it elsewhere, including its derivation etc., but its form is quite simple and easy to code:
\begin{equation}
\label{eq:numerov} \left(1+\frac{1}{12}h^2k_{n+1}\right)y_{n+1} = 2\left(1-\frac{5}{12}h^2k_n\right)y_n -\left(1+\frac{1}{12}h^2k_{n-1}\right)y_{n-1}+O(h^6) \end{equation}
As you can see, it provides 6th-order accuracy, which is pretty impressive for such a simple algorithm. In the above equation, \(h\) is the step size between each iteration, and \(n\) is the index of iteration; \(y\) and \(k\) relate to those in the formula in the paragraph above.
Thus we need to manipulate \eqref{eq:sch} into a (dimensionless) form which the Numerov algorithm can solve: using a substitution \(E=\varepsilon \hbar \omega\) and \(z=x\sqrt{\frac{\hbar}{m\omega}}\) we can rearrange \eqref{eq:sch} into the form:
\begin{equation}
\label{eq:solve} \psi''(x) + (2\varepsilon - x^2)\psi(x) = 0 \end{equation}
Now the Schrödinger equation is in the correct form, we can simply plug it into the Numerov algorithm and see what comes out.
Finding the Eigenvalues Numerically
To determine the eigenvalues, the program scans a range of energies, denoted by the Greek letter \(\varepsilon\) in the above equations, and tests for when the tail of the graph changes from \(+\infty\) to \(-\infty\) or vice versa. When that happens, the tail must have crossed zero, and therefore it must have stepped over a solution. [3] The program then goes back and repeats with increased resolution, homing in until it finds all of the solutions we want.
Given the substitution above, we should expect the eigenvalues to be \(\varepsilon = n + \frac 12\), where \(n\) is an integer from zero (representing the ground state) upwards. [4]

Hit F12 to pull up the web console before you run the simulation to see what results you actually get.
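For readers who want to reproduce the numbers offline, here is a minimal standalone Python sketch (an independent reimplementation, not this post's in-browser simulation) of the scan-and-bisect procedure: integrate the dimensionless equation with the Numerov recurrence and bisect on the sign of the tail.

```python
def numerov_tail(eps, x0=-6.0, x1=6.0, n=1200):
    """Integrate psi'' + (2*eps - x^2) psi = 0 left-to-right with the
    Numerov recurrence; return psi at the right edge."""
    h = (x1 - x0) / n
    k = [2 * eps - (x0 + i * h) ** 2 for i in range(n + 1)]
    f = [1 + h * h * ki / 12 for ki in k]   # the (1 + h^2 k / 12) factors
    psi_prev, psi = 0.0, 1e-6               # psi ~ 0 deep inside the barrier
    for i in range(1, n):
        psi_next = (2 * (1 - 5 * h * h * k[i] / 12) * psi
                    - f[i - 1] * psi_prev) / f[i + 1]
        psi_prev, psi = psi, psi_next
    return psi

def find_eigenvalue(lo, hi, tol=1e-10):
    """Bisect on the sign of the tail; assumes (lo, hi) brackets exactly
    one sign change, i.e. one eigenvalue."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if numerov_tail(lo) * numerov_tail(mid) < 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

ground = find_eigenvalue(0.3, 0.7)   # expect ~0.5 for the ground state
```

Widening the scanned range and collecting every sign change yields the higher eigenvalues \(1.5, 2.5, \ldots\) as well.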
Note: On a smartphone? Possibly wait till you're on a more powerful machine (unless you're hard; let me know in the comments how it worked out). It works fine! Likewise, using an old Internet Explorer? Well, don't blame me if it crashes or just nothing happens.

[1] where \(m\) is the mass of the particle, \(x\) is the position, \(\hbar\) is the Planck constant, \(V(x)\) is the potential the particle is in, \(E\) is the particle's energy, and \(\psi(x)\) is the wavefunction.

[2] A harmonic oscillator is simply an object which is moving in a constant field of some kind, for example a gravitational or electric field. A good example is a pendulum, or a ball bouncing on a piece of elastic. Harmonic oscillators are vitally important in physics and physical chemistry, because they can be used to model the complex vibrations and oscillations of molecules, atoms, and sub-atomic particles to a reasonable degree of accuracy, and are really rather simple to solve.

[3] This is because wavefunctions have to be normalizable, and so a wavefunction which goes to infinity as \(x\) increases is not a physically relevant one.

[4] Don't confuse this \(n\) with the index \(n\) from the definition of the Numerov algorithm!
Two-graphs¶
A two-graph on \(n\) points is a family \(T \subset \binom{[n]}{3}\) of \(3\)-sets, such that any \(4\)-set \(S \subset [n]\) contains an even number of elements of \(T\). Any graph \(([n], E)\) gives rise to a two-graph \(T(E)=\{t \in \binom{[n]}{3} : \left| \binom{t}{2} \cap E \right| \text{ odd}\}\), and any two graphs with the same two-graph can be obtained one from the other by Seidel switching. This defines an equivalence relation on the graphs on \([n]\), called Seidel switching equivalence. Conversely, given a two-graph \(T\), one can construct a graph \(\Gamma\) in the corresponding Seidel switching class with an isolated vertex \(w\). The graph \(\Gamma \setminus w\) is called the descendant of \(T\) w.r.t. \(w\).

\(T\) is called regular if each two-subset of \([n]\) is contained in the same number \(\alpha\) of triples of \(T\).
This module implements a direct construction of a two-graph from a list of triples, construction of descendant graphs, regularity checking, and other things such as constructing the complement two-graph, cf. [BH12].
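The two definitions above (the parity condition and the graph-to-two-graph map \(T(E)\)) are simple enough to sketch directly in plain Python; the function names below mirror the module's but this is an illustrative standalone reimplementation, not Sage's code:

```python
from itertools import combinations

def graph_to_twograph(points, edges):
    """T(E): the triples t containing an odd number of edges of E."""
    E = {frozenset(e) for e in edges}
    return {frozenset(t) for t in combinations(points, 3)
            if sum(1 for pair in combinations(t, 2) if frozenset(pair) in E) % 2 == 1}

def is_twograph(points, triples):
    """Check the defining parity condition: every 4-set of points
    contains an even number of triples of T."""
    T = {frozenset(t) for t in triples}
    return all(sum(1 for t in combinations(S, 3) if frozenset(t) in T) % 2 == 0
               for S in combinations(points, 4))
```

For example, every triple of \(K_4\) contains three edges, so \(T(E)\) is all four triples, which passes the check; a single triple on four points fails it.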
AUTHORS:
Dima Pasechnik (Aug 2015)

Index¶
This module's methods are the following:

is_regular_twograph(): tests if self is a regular two-graph, i.e. a 2-design

complement(): returns the complement of self

descendant(): returns the descendant graph at \(w\)

This module's functions are the following:

taylor_twograph(): constructs Taylor's two-graph for \(U_3(q)\)

is_twograph(): checks that the incidence system is a two-graph

twograph_descendant(): returns the descendant graph w.r.t. a given vertex of the two-graph of a given graph

Methods¶

class sage.combinat.designs.twographs.TwoGraph(points=None, blocks=None, incidence_matrix=None, name=None, check=False, copy=True)¶
Two-graphs class.
A two-graph on \(n\) points is a 3-uniform hypergraph, i.e. a family \(T \subset \binom{[n]}{3}\) of \(3\)-sets, such that any \(4\)-set \(S \subset [n]\) contains an even number of elements of \(T\). For more information, see the documentation of the twographs module.
complement()¶

The two-graph which is the complement of self.
EXAMPLES:
sage: p = graphs.CompleteGraph(8).line_graph().twograph()
sage: pc = p.complement(); pc
Incidence structure with 28 points and 1260 blocks
descendant(v)¶

The descendant graph at \(v\).

The switching class of graphs corresponding to self contains a graph \(D\) with \(v\) its own connected component; removing \(v\) from \(D\), one obtains the descendant graph of self at \(v\), which is constructed by this method.
INPUT:

v: an element of ground_set()
EXAMPLES:
sage: p = graphs.PetersenGraph().twograph().descendant(0)
sage: p.is_strongly_regular(parameters=True)
(9, 4, 1, 2)
is_regular_twograph(alpha=False)¶

Test if the TwoGraph is regular, i.e. is a 2-design.

Namely, each pair of elements of ground_set() is contained in exactly alpha triples.

INPUT:

alpha: (optional, default is False) return the value of alpha, if possible
EXAMPLES:
sage: p = graphs.PetersenGraph().twograph()
sage: p.is_regular_twograph(alpha=True)
4
sage: p.is_regular_twograph()
True
sage: p = graphs.PathGraph(5).twograph()
sage: p.is_regular_twograph(alpha=True)
False
sage: p.is_regular_twograph()
False
sage.combinat.designs.twographs.is_twograph(T)¶

Checks that the incidence system \(T\) is a two-graph.

INPUT:

T: an incidence structure
EXAMPLES:
a two-graph from a graph:
sage: from sage.combinat.designs.twographs import (is_twograph, TwoGraph)
sage: p = graphs.PetersenGraph().twograph()
sage: is_twograph(p)
True
a non-regular 2-uniform hypergraph which is a two-graph:
sage: is_twograph(TwoGraph([[1,2,3],[1,2,4]])) True
sage.combinat.designs.twographs.taylor_twograph(q)¶

Constructs Taylor's two-graph for \(U_3(q)\), \(q\) an odd prime power.
The Taylor's two-graph \(T\) has the \(q^3+1\) points of the projective plane over \(F_{q^2}\) singular w.r.t. the non-degenerate Hermitean form \(S\) preserved by \(U_3(q)\) as its ground set; the triples are \(\{x,y,z\}\) satisfying the condition that \(S(x,y)S(y,z)S(z,x)\) is square (respectively non-square) if \(q \equiv 1 \bmod 4\) (respectively if \(q \equiv 3 \bmod 4\)). See §7E of [BvL84].
There is also a \(2-(q^3+1,q+1,1)\)-design on these \(q^3+1\) points, known as the unital of order \(q\), also invariant under \(U_3(q)\).
INPUT:
q: a power of an odd prime
EXAMPLES:
sage: from sage.combinat.designs.twographs import taylor_twograph
sage: T = taylor_twograph(3); T
Incidence structure with 28 points and 1260 blocks
sage.combinat.designs.twographs.twograph_descendant(G, v, name=None)¶

Returns the descendant graph w.r.t. vertex \(v\) of the two-graph of \(G\).

In the switching class of \(G\), construct a graph \(\Delta\) with \(v\) an isolated vertex, and return the subgraph \(\Delta \setminus v\). It is equivalent to, although much faster than, computing TwoGraph.descendant() of the two-graph of \(G\), as the intermediate two-graph is not constructed.
INPUT:

G: a graph

v: a vertex of G

name: (optional) None for no name; otherwise the name is derived from the construction
EXAMPLES:
One of the s.r.g.'s from the database:

sage: from sage.combinat.designs.twographs import twograph_descendant
sage: A = graphs.strongly_regular_graph(280,135,70) # optional - gap_packages internet
sage: twograph_descendant(A, 0).is_strongly_regular(parameters=True) # optional - gap_packages internet
(279, 150, 85, 75)
Semimonomial transformation group¶
The semimonomial transformation group of degree \(n\) over a ring \(R\) is the semidirect product of the monomial transformation group of degree \(n\) (also known as the complete monomial group over the group of units \(R^{\times}\) of \(R\)) and the group of ring automorphisms.
The multiplication of two elements \((\phi, \pi, \alpha)(\psi, \sigma, \beta)\), with

\(\phi, \psi \in {R^{\times}}^n\),

\(\pi, \sigma \in S_n\) (with the multiplication \(\pi\sigma\) done from left to right, as in GAP; that is, \((\pi\sigma)(i) = \sigma(\pi(i))\) for all \(i\)), and

\(\alpha, \beta \in Aut(R)\),

is defined by

\[(\phi, \pi, \alpha)(\psi, \sigma, \beta) = (\phi \cdot \psi^{\pi, \alpha}, \pi\sigma, \alpha \circ \beta)\]

where \(\psi^{\pi, \alpha} = (\alpha(\psi_{\pi(1)-1}), \ldots, \alpha(\psi_{\pi(n)-1}))\) and the multiplication of vectors is defined elementwise. (The indexing of vectors is \(0\)-based here, so \(\psi = (\psi_0, \psi_1, \ldots, \psi_{n-1})\).)
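To make the product rule concrete, here is a small standalone Python sketch of it, using the unit group \(\{1, i, -1, -i\}\) of the Gaussian integers with complex conjugation as the ring automorphism. This is an illustrative choice for the sketch; the Sage class itself works over finite fields:

```python
def apply_aut(c, x):
    """Apply the automorphism conj^c (c in {0, 1}) to a unit x."""
    return x.conjugate() if c % 2 else x

def compose_perm(pi, sigma):
    """Left-to-right composition (pi*sigma)(i) = sigma(pi(i)); permutations
    are stored one-based as tuples with pi[i-1] = pi(i)."""
    return tuple(sigma[pi[i] - 1] for i in range(len(pi)))

def twisted(psi, pi, c):
    """psi^{pi, alpha} with the 0-based vector indexing of the docs."""
    return tuple(apply_aut(c, psi[pi[i] - 1]) for i in range(len(psi)))

def semimonomial_mul(g, h):
    """(phi, pi, alpha)(psi, sigma, beta)
       = (phi * psi^{pi, alpha}, pi*sigma, alpha o beta)."""
    (phi, pi, a), (psi, sigma, b) = g, h
    vec = tuple(p * q for p, q in zip(phi, twisted(psi, pi, a)))
    return (vec, compose_perm(pi, sigma), (a + b) % 2)
```

Associativity of the semidirect product, and the identity element \(((1,\ldots,1), \mathrm{id}, \mathrm{id})\), can be checked directly on sample elements.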
Todo
Up to now, this group is only implemented for finite fields because of the limited support of automorphisms for arbitrary rings.
AUTHORS:
Thomas Feulner (2012-11-15): initial version
EXAMPLES:
sage: S = SemimonomialTransformationGroup(GF(4, 'a'), 4)
sage: G = S.gens()
sage: G[0]*G[1]
((a, 1, 1, 1); (1,2,3,4), Ring endomorphism of Finite Field in a of size 2^2 Defn: a |--> a)
class sage.groups.semimonomial_transformations.semimonomial_transformation_group.SemimonomialActionMat(G, M, check=True)¶

The left action of SemimonomialTransformationGroup on matrices over the same ring whose number of columns is equal to the degree. See SemimonomialActionVec for the definition of the action on the row vectors of such a matrix.
class sage.groups.semimonomial_transformations.semimonomial_transformation_group.SemimonomialActionVec(G, V, check=True)¶
The natural left action of the semimonomial group on vectors.
The action is defined by: \((\phi, \pi, \alpha)*(v_0, \ldots, v_{n-1}) := (\alpha(v_{\pi(1)-1}) \cdot \phi_0^{-1}, \ldots, \alpha(v_{\pi(n)-1}) \cdot \phi_{n-1}^{-1})\). (The indexing of vectors is \(0\)-based here, so \(v = (v_0, v_1, \ldots, v_{n-1})\).)
class sage.groups.semimonomial_transformations.semimonomial_transformation_group.SemimonomialTransformationGroup(R, len)¶
A semimonomial transformation group over a ring.
The semimonomial transformation group of degree \(n\) over a ring \(R\) is the semidirect product of the monomial transformation group of degree \(n\) (also known as the complete monomial group over the group of units \(R^{\times}\) of \(R\)) and the group of ring automorphisms.
The multiplication of two elements \((\phi, \pi, \alpha)(\psi, \sigma, \beta)\), with

\(\phi, \psi \in {R^{\times}}^n\),

\(\pi, \sigma \in S_n\) (with the multiplication \(\pi\sigma\) done from left to right, as in GAP; that is, \((\pi\sigma)(i) = \sigma(\pi(i))\) for all \(i\)), and

\(\alpha, \beta \in Aut(R)\),

is defined by

\[(\phi, \pi, \alpha)(\psi, \sigma, \beta) = (\phi \cdot \psi^{\pi, \alpha}, \pi\sigma, \alpha \circ \beta)\]

where \(\psi^{\pi, \alpha} = (\alpha(\psi_{\pi(1)-1}), \ldots, \alpha(\psi_{\pi(n)-1}))\) and the multiplication of vectors is defined elementwise. (The indexing of vectors is \(0\)-based here, so \(\psi = (\psi_0, \psi_1, \ldots, \psi_{n-1})\).)
Todo
Up to now, this group is only implemented for finite fields because of the limited support of automorphisms for arbitrary rings.
EXAMPLES:
sage: F.<a> = GF(9)
sage: S = SemimonomialTransformationGroup(F, 4)
sage: g = S(v = [2, a, 1, 2])
sage: h = S(perm = Permutation('(1,2,3,4)'), autom=F.hom([a**3]))
sage: g*h
((2, a, 1, 2); (1,2,3,4), Ring endomorphism of Finite Field in a of size 3^2 Defn: a |--> 2*a + 1)
sage: h*g
((2*a + 1, 1, 2, 2); (1,2,3,4), Ring endomorphism of Finite Field in a of size 3^2 Defn: a |--> 2*a + 1)
sage: S(g)
((2, a, 1, 2); (), Ring endomorphism of Finite Field in a of size 3^2 Defn: a |--> a)
sage: S(1)
((1, 1, 1, 1); (), Ring endomorphism of Finite Field in a of size 3^2 Defn: a |--> a)
Element¶

base_ring()¶

Returns the underlying ring of self.
EXAMPLES:
sage: F.<a> = GF(4)
sage: SemimonomialTransformationGroup(F, 3).base_ring() is F
True
degree()¶

Returns the degree of self.
EXAMPLES:
sage: F.<a> = GF(4)
sage: SemimonomialTransformationGroup(F, 3).degree()
3
gens()
Return a tuple of generators of self.
EXAMPLES:
sage: F.<a> = GF(4)
sage: SemimonomialTransformationGroup(F, 3).gens()
[((a, 1, 1); (), Ring endomorphism of Finite Field in a of size 2^2
    Defn: a |--> a),
 ((1, 1, 1); (1,2,3), Ring endomorphism of Finite Field in a of size 2^2
    Defn: a |--> a),
 ((1, 1, 1); (1,2), Ring endomorphism of Finite Field in a of size 2^2
    Defn: a |--> a),
 ((1, 1, 1); (), Ring endomorphism of Finite Field in a of size 2^2
    Defn: a |--> a + 1)]
order()
Returns the number of elements of self.
EXAMPLES:
sage: F.<a> = GF(4)
sage: SemimonomialTransformationGroup(F, 5).order() == (4-1)**5 * factorial(5) * 2
True
Find
$$\lim_{x\to1^-}\log_2(1-x)+x+x^2+x^4+x^8+\cdots$$
I have found $1-\dfrac{1}{\ln2}$ as a lower bound, but have not gotten further than that.
Let $$ f(x)=\log_2(1-x)+\sum_{k=0}^\infty x^{2^k}\tag1 $$ then $f(0)=0$ and $$ f\!\left(x^2\right)=\log_2\left(1-x^2\right)+\sum_{k=1}^\infty x^{2^k}\tag2 $$ and therefore, $$ f(x)-f\!\left(x^2\right)=x-\log_2(1+x)\tag3 $$ Thus, for $x\in(0,1)$, $$ \begin{align} f(1) &=f(1)-f(0)\\[12pt] &=\sum_{k=-\infty}^\infty\left[f\!\left(x^{2^k}\right)-f\!\left(x^{2^{k+1}}\right)\right]\\ &=\sum_{k=-\infty}^\infty\left(x^{2^k}-\log_2\left(1+x^{2^k}\right)\right)\tag4 \end{align} $$ Expanding $\log(1+x)$ into its Taylor Series in $x$, we get $$ \begin{align} \int_0^1x^{a-1}\log(1+x)\,\mathrm{d}x &=\int_0^1\sum_{k=1}^\infty\frac{(-1)^{k-1}x^{a-1+k}}k\,\mathrm{d}x\\ &=\sum_{k=1}^\infty\frac{(-1)^{k-1}}{k(k+a)}\\ &=\frac1a\sum_{k=1}^\infty(-1)^{k-1}\left(\frac1k-\frac1{k+a}\right)\\ &=\frac1a\left(\sum_{k=1}^\infty\left(\frac1k-\frac1{k+a}\right)-2\sum_{k=1}^\infty\left(\frac1{2k}-\frac1{2k+a}\right)\right)\\ &=\frac1a\left(\sum_{k=1}^\infty\left(\frac1k-\frac1{k+a}\right)-\sum_{k=1}^\infty\left(\frac1k-\frac1{k+a/2}\right)\right)\\[3pt] &=\frac{H(a)-H(a/2)}a\tag5 \end{align} $$ where $H(a)$ are the Extended Harmonic Numbers. Apply $(5)$ to get $$ \int_0^1\log_2\left(1+x^{2^k}\right)\,\mathrm{d}x=\frac{H\!\left(2^{-k}\right)-H\!\left(2^{-k-1}\right)}{\log(2)}\tag6 $$ Integration of a monomial gives $$ \int_0^1x^{2^k}\,\mathrm{d}x=\frac1{2^k+1}\tag7 $$ Integrating $(4)$ over $[0,1]$ and using $(6)$ and $(7)$ yields $$ \begin{align} f(1) &=\lim_{n\to\infty}\left[\sum_{k=-n}^n\frac1{2^k+1}-\frac{H\!\left(2^n\right)-H\!\left(2^{-n-1}\right)}{\log(2)}\right]\\ &=\lim_{n\to\infty}\left[\frac12+n-\frac{\gamma+n\log(2)+O\!\left(2^{-n}\right)}{\log(2)}\right]\\[3pt] &=\frac12-\frac{\gamma}{\log(2)}\tag8 \end{align} $$
Problem with the Use of Equation $\boldsymbol{(3)}$
As pointed out by Michael, the use of equation $(3)$ above ignores the fact that $$ g(x)-g\!\left(x^2\right)=0\tag9 $$ does not mean $g(x)=0$. In fact, for any $1$-periodic $h$, i.e. $h(x)=h(x+1)$, $$ g(x)=h\!\left(\log_2(-\log(x))\right)\tag{10} $$ satisfies $(9)$. I have encountered this misbehavior before in Does the family of series have a limit? and Find $f'(0)$ if $f(x)+f(2x)=x\space\space\forall x$.
Thus, the value given in $(8)$ is an average of the values of $f(1)$ given by $(4)$.
The function given in $(4)$ for $x=2^{-2^{-t}}$ has period $1$ in $t$. I have computed $f(1)$ from $(4)$ for $x\in\left[\frac14,\frac12\right]$; that is, the full period $t\in[-1,0]$. I get a plot very similar to that of Michael:
which oscillates between $-0.33274775$ and $-0.33274460$. The horizontal line is $$ \frac12-\frac\gamma{\log(2)}=-0.33274618 $$ which is pretty close to the average of the minimum and maximum.
I am still looking for an a priori method to compute this oscillation.
$$f(x)+\log_2(1+x)-x=f(x^2)\qquad(0<x<1)$$
$$f(\exp(y))+\log_2(1+\exp(y))-\exp(y)=f(\exp(2y))\qquad(-\infty<y<0)$$
$$g(y)+\log_2(1+\exp(y))-\exp(y)=g(2y)\qquad(-\infty<y<0)$$
$$g(-2^z)+\log_2(1+\exp(-2^z))-\exp(-2^z)=g(-2^{z+1})\qquad(-\infty<z<\infty)$$
$$h(z)+\log_2(1+\exp(-2^z))-\exp(-2^z)=h(z+1)\qquad(-\infty<z<\infty)$$
$$f(1)-f(0)=-\int_{-\infty}^{\infty}dh=\int_{-\infty}^\infty\left[\exp(-2^z)-\log_2(1+\exp(-2^z))\right]dz\approx -0.332746$$
By the change of variable $x=\exp(-2^z)$ (so that $dz=\frac{dx}{x\ln x\ln 2}$), it becomes $$\int_0^1 \frac{\log_2(1+x)-x}{x\ln x\ln 2}\,dx$$ which the Inverse Symbolic Calculator gives as $$\frac12-\frac\gamma{\ln2}\approx -0.3327461772769$$ As pointed out by Somos, I took an approximation when I replaced $\sum h(z+1)-h(z)$ by $\int dh$. The value seems to vary in the sixth decimal place as $x$ varies from $x_0$ to $x_0^2$.
Let $\ f(x) := \log_2(1-x) + \sum_{n=0}^\infty x^{2^n}, \ $ $\ g(x) := f(e^{-x}) = \log_2(1-e^{-x}) + \sum_{n=0}^\infty e^{-x2^n}, \ $ and $\ a_k := g(2^{-k}) = b_k + \sum_{n=0}^\infty e^{-2^{n-k}} \ $ where $\ b_k := \log_2(1-e^{-2^{-k}}) \approx -k - 2^{-1-k}/\log(2). \ $ Now $\ \sum_{n=0}^\infty e^{-2^{n-k}} = \sum_{n=1}^k e^{-2^{-n}} + B \ $ where $\ B := \sum_{n=0}^\infty e^{-2^n} \approx 0.521865938459879089046726. \ $ But $ c_k :=\! -k \!+\! \sum_{n=1}^k e^{-2^{-n}}\! = \sum_{n=1}^k \big(e^{-2^{-n}}\!-\!1 \big) \ $ and $\ c_k \to C \ $ where $\ C \approx -0.8546133208927. \ $ Finally, $\ \lim_{x\to 1^-} f(x) = \lim_{x\to 0^+} g(x) = \lim_{k\to\infty} a_k = B+C \approx -0.3327473824328992250.\ $ The digits of $1$ minus this number is OEIS sequence A158468. $f(\exp(-2^{-30})) \approx -0.3327473822.$
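Both $B$ and $C$ converge fast enough that a few dozen terms reproduce the quoted digits; a stdlib-only sketch (variable names are mine) also shows the gap between the subsequence limit $B+C$ and the mean value $\frac12-\gamma/\log 2$ in the sixth decimal place:

```python
import math

# B = sum_{n>=0} exp(-2^n): the terms vanish extremely fast
B = sum(math.exp(-2.0**n) for n in range(64))

# C = sum_{n>=1} (exp(-2^-n) - 1), computed with expm1 for accuracy
C = sum(math.expm1(-2.0**-n) for n in range(1, 64))

limit_along_2adic = B + C   # the limit of a_k = g(2^{-k}) along k -> infinity

# the mean value from the accepted answer, using gamma = 0.5772156649...
mean_value = 0.5 - 0.5772156649015329 / math.log(2)

print(limit_along_2adic)    # ≈ -0.33274738...
print(mean_value)           # ≈ -0.33274618...
```

The two values differ by about $1.2\times 10^{-6}$, consistent with the oscillation magnitude reported above.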
EDIT: Unfortunately, it seems that the function $\ f(x) \ $ oscillates as it gets close to $1$ from below. That is, $\ g(2^{-x}) \ $ approaches a period $1$ function with mean value $\ 1/2 - \gamma/\log(2) \ $ with oscillations of magnitude $\ \approx 1.57315\times 10^{-6} \ $ as Michael shows. Thus, the limit does not exist. It was obvious that the infinite sum in $\ f(x) \ $ has radius of convergence $1.$ What was not obvious was the limiting behavior as $\ x\to 1^-. \ $ We now know that the series has a logarithmic singularity and $\ f(x) \ $ is what remains. That $\ f(x) \ $ has interesting oscillatory behavior is nice information.
The Schrödinger equation is the basis for understanding quantum mechanics, but how can one derive it? I asked my instructor, but he told me that it came from the experience of Schrödinger and his experiments. My question is,
can one derive the Schrödinger equation mathematically?
Be aware that a "mathematical derivation" of a physical principle is, in general, not possible. Mathematics does not concern the real world, we always need empirical input to decide which mathematical frameworks correspond to the real world.
However, the Schrödinger equation can be seen arising naturally from classical mechanics through the process of quantization. More precisely, we can motivate quantum mechanics from classical mechanics purely through Lie theory, as is discussed here, yielding the quantization prescription
$$ \{\dot{},\dot{}\} \mapsto \frac{1}{\mathrm{i}\hbar}[\dot{},\dot{}]$$
for the classical Poisson bracket. Now, the classical evolution of observables on the phase space is
$$ \frac{\mathrm{d}}{\mathrm{d}t} f = \{f,H\} + \partial_t f$$
and so its quantization is the operator equation
$$ \frac{\mathrm{d}}{\mathrm{d}t} f = \frac{\mathrm{i}}{\hbar}[H,f] + \partial_t f$$
which is the equation of motion in the Heisenberg picture. Since the Heisenberg and Schrödinger picture are unitarily equivalent, this is a "derivation" of the Schrödinger equation from classical phase space mechanics.
(Note: not all steps can be included here, it would be too long to remain in the context of a forum-discussion-answer.)
In the path integral formalism, each path is attributed a wavefunction $\Phi[x(t)]$ that contributes to the total amplitude to go from, say, $a$ to $b$. The $\Phi$'s have the same magnitude but differing phases, given by the classical action $S$ as defined in the Lagrangian formalism of classical mechanics. So far we have: $$ S[x(t)]= \int_{t_a}^{t_b} L(\dot{x},x,t) dt $$ and $$\Phi[x(t)]=e^{(i/\hbar) S[x(t)]}$$
Denote the total amplitude by $K(a,b)$: $$K(a,b) = \sum_{\text{paths } a \to b}\Phi[x(t)]$$
To approach the wave equation, which describes the wavefunction as a function of time, we start by dividing the time interval between $a$ and $b$ into $N$ small intervals of length $\epsilon$. For better notation, let's use $x_k$ for a given path between $a$ and $b$, and denote the full amplitude, including its time dependence, as $\psi(x_k,t)$ ($x_k$ taken over a region $R$):
$$\psi(x_k,t)=\lim_{\epsilon \to 0} \int_{R} \exp\left[\frac{i}{\hbar}\sum_{i=-\infty}^{k-1}S(x_{i+1},x_i)\right]\frac{dx_{k-1}}{A} \frac{dx_{k-2}}{A}\cdots$$
Now consider the above equation if we want to know the amplitude at the next instant in time $t+\epsilon$:
$$\psi(x_{k+1},t+\epsilon)=\int_{R} \exp\left[\frac{i}{\hbar}\sum_{i=-\infty}^{k}S(x_{i+1},x_i)\right]\frac{dx_{k}}{A} \frac{dx_{k-1}}{A}... $$
The above is similar to the equation preceding it; the difference is that the added factor $\exp[(i/\hbar)S(x_{k+1},x_k)]$ does not involve any of the variables $x_i$ with $i<k$, so it can be factored out of the integrations over those variables, which then reproduce $\psi(x_k,t)$. All this reduces the last equation to:
$$\psi(x_{k+1},t+\epsilon)=\int_{R} \exp\left[\frac{i}{\hbar}S(x_{k+1},x_k)\right]\psi(x_k,t)\frac{dx_{k}}{A}$$
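This one-step recursion can be sanity-checked numerically. The sketch below (my own illustration, not from Feynman's paper) uses the imaginary-time analogue of the free particle, where the one-step kernel becomes a real Gaussian of variance $\hbar\epsilon/m$; convolving a Gaussian packet with it must return a Gaussian with the variances added. Units and grid parameters are arbitrary assumptions.

```python
import numpy as np

hbar, m, eps = 1.0, 1.0, 0.1        # illustrative units
s2 = 0.5                            # initial packet variance
sig2 = hbar * eps / m               # kernel variance for one step

dy = 1e-3
y = np.arange(-12.0, 12.0, dy)
psi = np.exp(-y**2 / (2 * s2))      # psi(y, t), a Gaussian packet

def step(x):
    # one step of the (Euclidean) recursion: psi(x, t+eps) = int K(x,y) psi(y,t) dy
    K = np.exp(-m * (x - y)**2 / (2 * hbar * eps)) / np.sqrt(2 * np.pi * hbar * eps / m)
    return np.sum(K * psi) * dy

x0 = 0.5
exact = np.sqrt(s2 / (s2 + sig2)) * np.exp(-x0**2 / (2 * (s2 + sig2)))
assert abs(step(x0) - exact) < 1e-9   # Gaussian convolution: variances add
```

In the real-time (quantum) case the kernel is the same expression with $\epsilon \to i\epsilon$, which is exactly the $\exp[(i/\hbar)S]/A$ factor above for the free action.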
Now a quote from Feynman's original paper, regarding the above result:
This relation giving the development of $\psi$ with time will be shown, for simple examples, with suitable choice of $A$, to be equivalent to Schroedinger's equation. Actually, the above equation is not exact, but is only true in the limit $\epsilon \to 0$ and we shall derive the Schroedinger equation by assuming this equation is valid to first order in $\epsilon$. The above need only be true for small $\epsilon$ to the first order in $\epsilon.$
In his original paper, following up the calculations for 2 more pages, from where we left things, he then shows that:
Canceling $\psi(x,t)$ from both sides, and comparing terms to first order in $\epsilon$ and multiplying by $-\hbar/i$ one obtains
$$-\frac{\hbar}{i}\frac{\partial \psi}{\partial t}=\frac{1}{2m}\left(\frac{\hbar}{i}\frac{\partial}{\partial x}\right)^2 \psi + V(x) \psi$$ which is Schroedinger's equation.
I would strongly encourage you to read his original paper, don't worry it is really well written and readable.
References: Space-Time Approach to Non-Relativistic Quantum Mechanics by R. P. Feynman, April 1948.
Feynman Path Integrals in Quantum Mechanics, by Christian Egli
According to Richard Feynman in his lectures on Physics, volume 3, and paraphrased "The Schrodinger Equation Cannot be Derived". According to Feynman it was imagined by Schrodinger, and it just happens to provide the predictions of quantum behavior.
Fundamental laws of physics cannot be derived (turtles all the way down and all that).
However, they can be motivated in various ways. Direct experimental evidence aside, you can argue by analogy - in case of the Schrödinger equation, comparisons to Hamiltonian mechanics and the Hamilton-Jacobi equation, fluid dynamics, Brownian motion and optics have been made.
Another approach is arguing by mathematical 'beauty' or necessity: You can look at various ways to model the system and go with the most elegant approach consistent with constraints you imposed (ie reasoning in the vein of 'quantum mechanics is the only way to do X' for 'natural' or experimentally necessary values of X).
While it is in general impossible to derive the laws of physics in the mathematical sense of the word, a strong motivation or rationale can be given most of the time. Such impossibility arises from the very nature of the physical sciences, which attempt to stretch the all-too-imperfect logic of the human mind onto the natural phenomena around us. In doing so, we often make connections or intuitive hunches which happen to be successful at explaining the phenomena in question. However, if one had to point out which logical sequence was used in producing the hunch, one would be at a loss; more often than not, such a logical sequence simply does not exist.
"Derivation" of the Schroedinger equation and its successful performance at explaining various quantum phenomena is one of the best (read audacious, mind-boggling and successful) examples of the intuitive thinking and hypothesizing which led to great success. What many people miss is that Schroedinger simply took the ideas of Louis de Broglie further to their bold conclusion.
In 1924 de Broglie suggested that every moving particle could have a wave phenomenon associated with it. Note that he didn't say that every particle was a wave or vice versa. Instead, he was simply trying to wrap his mind around the weird experimental results which were produced at the time. In many of these experiments, things which were typically expected to behave like particles also exhibited a wave behavior. It is this conundrum which led de Broglie to produce his famous hypothesis of $\lambda = \frac{h}{p}$. In turn, Schroedinger used this hypothesis as well as the result from Planck and Einstein ($E = h\nu$) to produce his eponymous equation.
It is my understanding that Schroedinger originally worked using Hamilton-Jacobi formalism of classical mechanics to get his equation. In this, he followed de Broglie himself who also used this formalism to produce some of his results. If one knows this formalism, he can truly follow the steps of the original thinking. However, there is a simpler, more direct way to produce the equation.
Namely, consider a basic harmonic phenomenon:
$ y = A \sin (\omega t - \delta)$
for a particle moving along the $x$-axis,
$ y = A \sin \left[\frac{2\pi v}{\lambda} \left(t - \frac{x}{v}\right)\right] $
Suppose we have a particle moving along the $x$-axis. Let's call the wave function (similar to the electric field of a photon) associated with it $\psi (x,t)$. We know nothing about this function at the moment. We simply gave a name to the phenomenon which experimentalists were observing and are following de Broglie's hypothesis.
The most basic wave function has the following form: $\psi = A e^{-i\omega(t - \frac{x}{v})}$, where $v$ is the velocity of the particle associated with this wave phenomenon.
This function can be re-written as
$\psi = A e^{-i 2 \pi \nu (t - \frac{x}{\nu\lambda})} = A e^{-i 2 \pi (\nu t - \frac{x}{\lambda})}$, where $\nu$ is the frequency of oscillation and $E = h \nu$. We see that $\nu = \frac{E}{2 \pi \hbar}$; the latter is, of course, the result from Einstein and Planck.
Let's bring the de Broglie's result into this thought explicitly:
$\lambda = \frac{h}{p} = \frac{2\pi \hbar}{p}$
Let's substitute the values from de Broglie's and Einstein's results into the wave function formula.
$\psi = A e^{-i 2 \pi (\frac{E t}{2 \pi \hbar} - \frac{x p}{2 \pi \hbar})} = A e^{- \frac{i}{\hbar}(Et - xp)} (*)$
This is the wave function associated with the motion of an unrestricted particle of total energy $E$ and momentum $p$, moving in the positive $x$-direction.
We know from classical mechanics that the energy is the sum of kinetic and potential energies.
$E = K.E. + P.E. = \frac{m v^2}{2} + V = \frac{p^2}{2 m} + V$
Multiply the energy by the wave function to obtain the following:
$E\psi = \frac{p^2}{2m} \psi + V\psi$
Next, the rationale is to obtain something resembling the wave equation from electrodynamics. Namely, we need a combination of space and time derivatives which can be tied back into the expression for the energy.
Let's now differentiate $(*)$ with respect to $x$.
$\frac{\partial \psi}{\partial x} = A (\frac{ip}{\hbar}) e^{\frac{-i}{\hbar}(Et - xp)}$
$\frac{\partial^2 \psi}{\partial x^2} = -A (\frac{p^2}{\hbar^2}) e^{\frac{-i}{\hbar}(Et - xp)} = -\frac{p^2}{\hbar^2} \psi$
Hence, $p^2 \psi = -\hbar^2 \frac{\partial^2 \psi}{\partial x^2}$
The time derivative is as follows:
$\frac{\partial \psi}{\partial t} = - A \frac{iE}{\hbar} e^{\frac{-i}{\hbar}(Et - xp)} = \frac{-iE}{\hbar}\psi$
Hence, $E \psi = \frac{-\hbar}{i} \frac{\partial \psi}{\partial t}$
The expression for energy we obtained above was $E\psi = \frac{p^2}{2m} \psi + V\psi$
Substituting the results involving time and space derivatives into the energy expression, we obtain
$-\frac{\hbar}{i} \frac{\partial \psi}{\partial t} = \frac{- \hbar ^2}{2m} \frac{\partial ^2 \psi}{\partial x^2} + V\psi$
This, of course, became better known as the Schroedinger equation.
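One can check symbolically that the plane wave $(*)$ satisfies this equation. A SymPy sketch (assuming a constant potential $V$, since $(*)$ only solves the equation when $V$ does not depend on $x$), with the energy relation $E = \frac{p^2}{2m} + V$ substituted at the end:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
A, E, p, hbar, m, V = sp.symbols('A E p hbar m V', positive=True)

psi = A * sp.exp(-sp.I / hbar * (E * t - x * p))        # the plane wave (*)

lhs = -hbar / sp.I * sp.diff(psi, t)                     # this equals E*psi
rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2) + V * psi  # (p^2/2m + V) psi

# the residual vanishes exactly once E = p^2/(2m) + V is imposed
residual = sp.simplify((lhs - rhs).subs(E, p**2 / (2 * m) + V))
assert residual == 0
```

This is just the derivative bookkeeping from the text done by machine; the physics is entirely in the substitution $E = \frac{p^2}{2m} + V$.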
There are several interesting things in this "derivation." One is that both Einstein's quantization and de Broglie's wave-matter hypothesis were used explicitly. Without them, it would be very tough to come to this equation intuitively in the manner of Schroedinger. What's more, the resulting equation differs in form from the standard wave equation so well known from classical electrodynamics: the orders of the partial derivatives with respect to the space and time variables are different. Had Schrodinger been trying to match the form of the classical wave equation, he would probably have gotten nowhere.
However, since he looked for something containing $p^2\psi$ and $E\psi$, the correct order of derivatives was essentially pre-determined for him.
Note: I am not claiming that this derivation follows Schroedinger's work. However, the spirit, thinking and the intuition of the times are more or less preserved.
In Mathematics you derive theorems from axioms and the existing theorems.
In Physics you derive laws and models from existing laws, models and observations.
In this case we can start from the observations of the photoelectric effect to get the relation between photon energy and frequency. Then continue with special relativity, where we observe that the speed of light is constant in all reference frames. From this, generalizing the kinetic energy, we get the mass-energy equivalence. Combining the two, we can assign a mass to the photon, and consequently obtain the momentum of a photon as a function of the wavenumber.
Generalizing the energy-frequency and the momentum-wavenumber relations, we have the de Broglie relations, which are applicable to any particle.
Assume that a particle has zero energy when it stands still (you can do this; it doesn't cause too much trouble if you leave the constant term there, since in later steps you can simply move it to the left side of the equation). Then we can deal with just the kinetic energy. Substituting the non-relativistic kinetic energy into the relation and reordering, we have the following dispersion relation:
$$\omega = \frac{\hbar k^2}{2m}$$
The wave equation can be derived from the dispersion relation of the matter waves using the way I mentioned in that answer.
In this case we will need the Laplacian and the first time derivative:
$$\nabla^2 \Psi + \partial_t \Psi = -k^2\Psi - \frac{i \hbar k^2}{2m}\Psi$$
Multiplying the time derivative with $-\frac{2m}{i\hbar}$, we can zero the right side:
$$\nabla^2 \Psi - \frac{2m}{i\hbar} \partial_t \Psi = -k^2\Psi + k^2\Psi = 0$$
We can reorder it to obtain the time-dependent Schrödinger equation of a free particle:
$$ \partial_t \Psi = \frac{i\hbar}{2m} \nabla^2 \Psi$$
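A quick symbolic check (a SymPy sketch, with names I chose) that a plane wave obeying the dispersion relation above solves this equation:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
k, hbar, m = sp.symbols('k hbar m', positive=True)

omega = hbar * k**2 / (2 * m)              # the dispersion relation above
Psi = sp.exp(sp.I * (k * x - omega * t))   # free-particle plane wave

# residual of  d/dt Psi = (i hbar / 2m) Laplacian(Psi)  in one dimension
residual = sp.simplify(sp.diff(Psi, t) - sp.I * hbar / (2 * m) * sp.diff(Psi, x, 2))
assert residual == 0
```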
To my mind there are two senses in which we can "derive" a result in physics. New theories try to address the shortcomings of older ones by upgrading what we already have, giving new results. They also recover old results. I suppose we can call both derivations.
For example, the TISE and TDSE were first obtained because quantum mechanics said that, where classical mechanics would imply $f=0$, we should have $\hat{f}\left|\psi\right\rangle = 0$, with $\hat{f}$ the operator promotion of $f$, which in this case is $f=E-\frac{p^2}{2m}-V$ with operators $E=i\hbar\partial_t,\,\mathbf{p}=-i\hbar\boldsymbol{\nabla}$. (Some results become the weaker $\left\langle\psi\right|\hat{f}\left|\psi\right\rangle = 0$, e.g. with $f=\frac{d\mathbf{p}}{dt}+\boldsymbol{\nabla}V$, so I'm not being entirely honest here. But we expect $\hat{E}$-eigenstates are important because the probability distribution of $E$ is conserved.)
Note that the above paragraph summarises how Schrödinger was derived in the first sense, and its ending parenthesis hints at how Newton's second law was "derived" in my second sense. And everyone talking about path integrals is hinting at a type-2 derivation for both results (path integrals obtain a transition amplitude in terms of $e^{iS}$ with $S$ the classical action now miraculously coming out of a hat, so technically our direct recovery is of Lagrangian mechanics rather than the equivalent Newtonian formulation).
I'll leave people to fight over which, if either, type of derivation is "valid" or "better", but physical insight requires frequent doses of both. I think it's worth distinguishing them in a discussion like this.
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior derivative: why would someone come up with something like that? I mean, it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms.
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, and backwards along $Y$ for the same time, leaves you at a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and taking a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$; the value of $\omega$ on the two $Y$ edges, whose difference (again after a limit) is $X \omega(Y)$; and on the truncation edge it's $\omega([X, Y])$
Carefully keeping track of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I^2$ is the little truncated square I described and taking $\text{vol}(I^2) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
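For $k = 1$ on $\Bbb R^2$ the invariant formula can be verified mechanically. A SymPy sketch with arbitrarily chosen $\omega = f\,dx + g\,dy$ and vector fields $X$, $Y$ (all choices and names here are mine):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# an arbitrary concrete 1-form omega = f dx + g dy and two vector fields
f, g = x**2 * y, sp.sin(x) + y
X = (y, x * y)              # X = y d/dx + xy d/dy
Y = (x + y, sp.cos(x))

def apply_vf(V, h):
    # the vector field V acting on a function h
    return V[0] * sp.diff(h, x) + V[1] * sp.diff(h, y)

def omega(V):
    # omega evaluated on a vector field
    return f * V[0] + g * V[1]

# the Lie bracket [X, Y], computed componentwise
bracket = tuple(apply_vf(X, Y[i]) - apply_vf(Y, X[i]) for i in range(2))

# d(omega) = (dg/dx - df/dy) dx ^ dy, evaluated on the pair (X, Y)
lhs = (sp.diff(g, x) - sp.diff(f, y)) * (X[0] * Y[1] - X[1] * Y[0])
# the invariant formula
rhs = apply_vf(X, omega(Y)) - apply_vf(Y, omega(X)) - omega(bracket)

assert sp.expand(lhs - rhs) == 0
```

Every term here is exactly one of the edge contributions from the truncated-square picture above.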
Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor and (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's pointwise defined in the first factor. This means that to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok first but basically think of it as currying. Making a billinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphsm $TM \to E$ but contracting $s$ in $\nabla s$ only gave as a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking the derivative of a section of $E$ with respect to a vector field on $M$ means: applying the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\ldots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{\left(\frac{X_{1}+\cdots+X_{n}}{n}\right)^2}{2}\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ?
Uh apparenty there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a $2$-cycle, $3$-cycle, and $4$-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seem very elegant. Is there a better way?
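For what it's worth, the case-by-case claim can be brute-forced in a few lines. A throwaway sketch (`closure` is a helper I made up; permutations are 0-based tuples, so e.g. $(1,2,3,4)$ becomes `(1, 2, 3, 0)`):

```python
def compose(p, q):
    # (p*q)(i) = p[q[i]]; any fixed composition convention works for closures
    return tuple(p[i] for i in q)

def closure(*gens):
    # subgroup of S4 generated by the given permutations (finite, so
    # closure under multiplication suffices)
    elems = {tuple(range(4))} | set(gens)
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

# one subgroup for each divisor of 24 (1-based cycle notation in comments)
orders = {
    1: closure(),
    2: closure((1, 0, 2, 3)),                  # <(12)>
    3: closure((1, 2, 0, 3)),                  # <(123)>
    4: closure((1, 2, 3, 0)),                  # <(1234)>
    6: closure((1, 0, 2, 3), (1, 2, 0, 3)),    # <(12),(123)>, an S3
    8: closure((1, 2, 3, 0), (2, 1, 0, 3)),    # <(1234),(13)>, a 2-Sylow D4
    12: closure((1, 2, 0, 3), (1, 3, 2, 0)),   # <(123),(124)> = A4
    24: closure((1, 0, 2, 3), (1, 2, 3, 0)),   # <(12),(1234)> = S4
}
assert all(len(H) == d for d, H in orders.items())
```

Of course this only confirms the examples rather than making them more elegant.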
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say |
Is there any reference which studies sets of random variables as independence systems, a type of combinatorial object (see below)?
Motivation: In particular, since independence systems are abstract simplicial complexes, this would allow one to apply homology theory (I think) to study families of random variables. Moreover, one could cross-apply intuition from other types of independence systems to better understand statistical independence.
For example, two facts that I have had difficulty understanding intuitively, namely that statistical independence is not transitive and that $(n-1)$-wise independence doesn't necessarily imply $n$-wise independence, have geometrically easy-to-understand analogs in linear algebra.
Note: The closest thing I can think of having seen are treatments of random variables (with finite support) as standard simplices, since each point of the simplex corresponds to a probability distribution such that $p_1 + \dots + p_n = 1$. Also, this. However, these are not what I am looking for: in what I am describing, only the vertices represent random variables -- the higher-order faces do not represent random variables, they represent relationships of statistical independence between random variables (e.g. a $2$-face connecting three vertices corresponds to the statement that the three random variables symbolized by the vertices are mutually statistically independent).
I also don't think I am looking for a treatise on the theory of random graphs or random matroids or random simplicial complexes -- whether or not two given random variables $X$ and $Y$ are statistically independent is supposed to be a deterministic relationship (at least in simple models).
This question on Math.SE comes the closest to discussing the type of phenomenon I am talking about (the statistical independence of a family of random variables being modeled by an independence system, in this case the independence system of linearly independent vectors).
These two questions on Math.SE are also related, perhaps the first more so than the second: (1) (2). These two questions on MathOverflow also seem possibly related, although to be honest I don't feel I understand them well enough to discern that accurately: (1) (2).
Example 2.1 (Statistical Independence). Independence occurs in multiple contexts, including linear independence of a collection of vectors or linear independence of solutions to linear differential equations. More subtle examples include statistical independence of random variables: recall that the random variables $\mathcal{X}=\{X_i\}_1^n$ are statistically independent if their probability densities $f_{X_i}$ are jointly multiplicative, i.e., the probability density $f_{\mathcal{X}}$ of the combined random variable $(X_1, \dots, X_n)$ satisfies $f_{\mathcal{X}} = \prod_i f_{X_i}$. The independence complex of a collection of random variables compactly encodes statistical dependencies.
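For variables with finite support, the independence complex from Example 2.1 can be computed directly from a joint pmf. The sketch below is my own illustration (the helper names are invented, not from the book); it uses the classic XOR triple, which is pairwise but not mutually independent, so the resulting complex contains all three edges but not the top 2-face, a concrete instance of the $(n-1)$-wise-but-not-$n$-wise phenomenon mentioned above:

```python
from itertools import combinations, product

# Joint pmf of (X, Y, Z) where X, Y are fair bits and Z = X xor Y.
pmf = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}

def marginal(pmf, idx):
    """Marginal pmf of the variables at positions idx."""
    out = {}
    for outcome, p in pmf.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def mutually_independent(pmf, idx, tol=1e-12):
    """Check that the joint marginal factors into a product of singles."""
    joint = marginal(pmf, idx)
    singles = [marginal(pmf, (i,)) for i in idx]
    for combo in product(*(list(m.items()) for m in singles)):
        key = tuple(k[0] for k, _ in combo)
        prod = 1.0
        for _, p in combo:
            prod *= p
        if abs(joint.get(key, 0.0) - prod) > tol:
            return False
    return True

# Faces of the independence complex beyond the vertices:
n = 3
faces = [s for k in range(2, n + 1)
         for s in combinations(range(n), k)
         if mutually_independent(pmf, s)]
print(faces)   # every edge appears; the 2-face (0, 1, 2) does not
```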
In section 7.8, p.151 of the same book this idea is referenced once again:
One clever application of this result [use of discrete Morse theory to study evasiveness in decision tree algorithms, as in e.g. the paper Morse Theory and Evasiveness by Forman] is to independence tests for random variables. Let $\mathcal{X} = \{X_i\}$ be a collection of random variables and recall from Example 2.1 the independence complex $\mathcal{J}_{\mathcal{X}} \subset \Delta^n$ of $\mathcal{X}$. Given an unknown subcollection $\sigma \subset \mathcal{X}$ of the random variables, how many trials of the form "Is $X_i$ a member of $\sigma$?" are required to determine if the collection is statistically independent? According to the results cited above, statistical independence is evasive if $\mathcal{J}_{\mathcal{X}}$ is not acyclic: any nontrivial homology class in $\tilde{H}_{\bullet}(\mathcal{J}_{\mathcal{X}})$ is an obstruction to evasiveness of statistical independence. How many such evasive collections of random variables are there? It is at least twice the total dimension of $\tilde{H}_{\bullet}(\mathcal{J}_{\mathcal{X}})$.
Unfortunately, despite the fact that most of the book has ample citations and references to further reading, these examples are not accompanied by any references, so I am not sure how to explore this (to me) interesting idea further. Perhaps the reason why is that there are no references discussing this topic, and it is an original idea of the author.
Pages 4-5 here discussing log-linear models seem like they might be related, although it is hard for me to tell based on what is written and it is not expounded upon in much depth. In any case, like an aforementioned related question on Math.SE, it also suggests a relationship between this topic and Bayesian networks and/or algebraic statistics. However, it is worth noting that the edges in a Bayesian network denote conditional dependence, not independence, so they cannot be directly extended to the type of independence/abstract simplicial complex I am referring to. On the other hand, if one forms a new graph by connecting each node with the complement of its Markov blanket, then perhaps this might work. In other words, Bayesian networks may encode the necessary information about conditional independence relationships for this to work; on the other hand, it might still not work, since "conditional independence" refers to a different (conditional) probability measure for each pair of nodes, while the standard example assumes a single choice of probability measure on the probability space on which all of the random variables are defined (I think). The answer is probably obvious but I need to think about it more.
This other document by Robert Ghrist also seems like it might be relevant, but if it is I can't tell for certain (honestly I don't think so but it's better than nothing so I'm including the link anyway).
I was thinking that perhaps one could use log-likelihoods to construct chain complexes and then homologies on these abstract simplicial complexes, although each time I try to work out the details I hit a roadblock/realize I was misunderstanding something. In any case, using the log-likelihood instead of the likelihood seems analogous to the use of the exponential to turn a (geometric/not abstract) simplex into a vector space as outlined in this blog post. Probably a more viable method for calculating the homologies of these complexes would be to use discrete Morse theory, as implied by Professor Ghrist on p.151 of his aforementioned book.
Also I wonder if there is a relation between all of this and information theory: the phenomenon of information gain in decision tree algorithms is well known and is even used to motivate the entire concept of entropy/information in some books, for example. In particular, Ghrist's remarks on p.151 of his book about Forman's paper seem like they could possibly be interpreted in terms of information theory, and being particularly bold, one could imagine that the homologies of these statistical independence complexes have an information-theoretic interpretation.
Disclaimer: This is a revised version of my now-deleted week-old unanswered question on Math.SE -- perhaps this is a current topic of research, which perhaps is why it was unanswered on Math.SE and why it might be on-topic here. If you disagree, please let me know. |
I'm considering implementing (just for simplicity) the unconstrained implicit optimization based integration for the Material Point Method, as described in Chenfanfu Jiang's thesis on MPM (the minimization algorithm starts on page 101, "A.3.1 Unconstrained minimization").
I have difficulties understanding how would be the algorithm implemented.
Are the statements below (in)correct?
The goal of the algorithm is obtaining new grid velocities in response to external and internal forces of the material represented by material points.
The function to be minimized is $E(\mathbf{v}) = \sum_i\frac{1}{2}m_{\mathbf{i}} \lVert \mathbf v_i - \mathbf {v_i}^n \rVert^2 + \Phi(\mathbf {x_i}^n + \Delta t\mathbf v_i)$ (from Jiang's thesis; note the squared norm, consistent with the derivative below).
Its derivative is $\mathbf h(\mathbf v) = \mathbf M\mathbf v - \Delta t \mathbf f(\mathbf x^n + \Delta t \mathbf v) - \mathbf M\mathbf v^n$.
Based on the thesis I assume the approach to minimizing $E$ is by the Newton method. The Newton method requires inverse of hessian $\mathbf{\mathit{H}}$. However we're in fact interested in $\Delta \mathbf v$ that would minimize (the linearization of) $E$, so we can instead solve $\mathbf{\mathit{H}}\Delta \mathbf v=\mathbf h$.
When solving the system via Conjugate gradient, we only need to know the product $\mathbf{\mathit{H}}\Delta \mathbf v$, which can be thought of as differential $\delta \mathbf h = \mathbf M\Delta \mathbf v - \Delta t^2 \delta \mathbf f(\mathbf x^n + \Delta t \mathbf v) = \mathbf{\mathit{H}}\Delta \mathbf v$.
From [Stomakhin et al.'s MPM snow paper] we know how to compute $\delta\mathbf{f_i}$ (page 6, "6 Stress-based forces and linearization"), i.e. the force differential for the grid velocity at node $\mathbf i$.

My assumptions:

1. When solving $\mathbf{\mathit{H}}\Delta \mathbf v=\mathbf h$ via Conjugate Gradient (step 4 of the algorithm), ${\Delta \mathbf v}$ is a column vector consisting of component-wise grid velocity differentials, $[{\Delta v}_{1x}, {\Delta v}_{1y}, {\Delta v}_{1z},\,\,\, {\Delta v}_{2x}, {\Delta v}_{2y}, {\Delta v}_{2z}, \,\,\,\ldots]$; the length of $\Delta \mathbf v$ is therefore $3n$, where $n$ is the number of grid nodes (containing velocity).
2. When multiplying $\mathbf{\mathit{H}}\Delta \mathbf v$, we are in fact always processing 3 rows at a time, i.e. we process one grid node at a time (consisting of its x, y, z velocity components).
3. When computing the residual $\mathbf r = \mathbf h - \mathbf{\mathit{H}}\Delta \mathbf v$, we can compute its 3 consecutive components corresponding to grid node $\mathbf i$ (as in assumption 1) by writing $\mathbf{r_i} = \left(m_{\mathbf i}\mathbf v_{\mathbf i} - \Delta t \mathbf f_{\mathbf i}(\mathbf x^n_{\mathbf i} + \Delta t \mathbf v_{\mathbf i}) - m_{\mathbf i}\mathbf {v_{\mathbf i}}^n\right) - \left(m_{\mathbf i}\Delta\mathbf v_{\mathbf i} - \Delta t^2 \delta \mathbf {f_i}(\mathbf x^n_{\mathbf i} + \Delta t \mathbf v_{\mathbf i})\right)$.
4. When computing $\mathbf r \cdot \mathbf r$, I ignore the fact that each 3 consecutive elements of the vector $\mathbf r$ correspond to 1 grid node, and just compute the dot product like with any other vector.

Additional question: Is the line search (step 7 of the algorithm) necessary?
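As a sanity check of the assumptions above: a matrix-free conjugate gradient indeed only ever needs the product $\mathbf{\mathit{H}}\Delta\mathbf v$ and plain dot products over the flat $3n$-vector. The sketch below is my own illustration (not code from the thesis); the true MPM Hessian-vector product $\mathbf M\Delta\mathbf v - \Delta t^2\,\delta\mathbf f$ is mocked by an explicit SPD matrix:

```python
import numpy as np

def conjugate_gradient(apply_H, h, tol=1e-10, max_iter=500):
    """Solve H dv = h using only the matrix-vector product apply_H(dv)."""
    dv = np.zeros_like(h)
    r = h - apply_H(dv)           # residual h - H dv
    p = r.copy()
    rs = r @ r                    # plain dot product over all 3n components
    for _ in range(max_iter):
        Hp = apply_H(p)
        alpha = rs / (p @ Hp)
        dv += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return dv

# Mock Hessian: an SPD stand-in for M - dt^2 * (force Jacobian),
# acting on a flat vector of 3n grid-velocity components.
rng = np.random.default_rng(1)
n_nodes = 5
B = rng.standard_normal((3 * n_nodes, 3 * n_nodes))
H = B @ B.T + 3 * np.eye(3 * n_nodes)
h = rng.standard_normal(3 * n_nodes)

dv = conjugate_gradient(lambda w: H @ w, h)
assert np.linalg.norm(H @ dv - h) < 1e-6
```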
I read that every chemical reaction is theoretically in equilibrium in an old textbook. If this is true how can a reaction be one way?
Yes, every chemical reaction can theoretically be in equilibrium. Every reaction is reversible. See my answer to chem.SE question 43258 for more details.
This includes even precipitation reactions and reactions that release gases. Equilibrium isn't just for liquids! Multiphase equilibria exist.
The only thing that stops chemical reactions from being "in equilibrium" is the lack of the proper number of molecules. For a reaction to be in equilibrium, the concentrations of reactants and products must be related by the equilibrium constant.
$$ \ce{ A <=> B} $$ $$ K = \frac{[B]}{[A]} $$
When equilibrium constants are extremely large or small, then extremely large numbers of molecules are required to satisfy this equation. If $K = 10^{30}$, then at equilibrium there will be $10^{30}$ molecules of B for every molecule of A. Another way to look at this is that for equilibrium to happen, there need to be at least $10^{30}$ molecules of B, i.e. more than one million moles of B, in the system for there to be "enough" B to guarantee an equilibrium, i.e. to guarantee that there will be a well-defined "equilibrium" concentration of A.
When this many molecules are not present, then there is no meaningful equilibrium. For very large (or very small) equilibrium constants, it will be very difficult to obtain an equilibrium. In addition to needing a megamole-sized system (or bigger), the system will have to be well-mixed, isothermal, and isobaric. That's not easy to achieve on such large scales!
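The megamole figure follows directly from Avogadro's number; a one-line back-of-envelope check (my own illustration, not part of the answer):

```python
# How many moles of B are needed so that K = 1e30 can be satisfied with
# at least one molecule of A remaining? (illustrative back-of-envelope)
N_A = 6.022e23            # Avogadro's number, molecules per mole
K = 1e30
min_molecules_B = K       # K molecules of B per single molecule of A
min_moles_B = min_molecules_B / N_A
print(f"{min_moles_B:.2e} mol")   # ~1.66e+06 mol: over a million moles
```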
Update: Commenters suggest that "irreversible" reactions do not have an equilibrium. This is true, but tautological. In the real world, all reactions are reversible, at least to a (perhaps vanishingly small) degree. To say otherwise would violate microscopic reversibility. A reaction that was 100% irreversible would have an equilibrium constant of infinity. But if $K= \infty$, then $\Delta G^{\circ} = -RT \ln{K}$ would turn into $\Delta G^{\circ} = -\infty$. So to get infinite energy we would just have to use 100% irreversible reactions! Hopefully the problems with the idea of "irreversible" reactions are becoming apparent.
Equilibrium can only apply to a closed system.
Reactions which form insoluble precipitates or gases which escape do not exhibit the behavior of a closed system. Therefore, these reactions may not be in equilibrium. However, these claims are pragmatic rather than real.
As it turns out, in the above answer, barium sulphate has a $K_{\mathrm{sp}}$ of $1.1 \times 10^{-10}$, so formally there is some small equilibrium, corresponding to roughly $1.05 \times 10^{-5}$ mol/L of dissolved barium sulphate.
As gases escape from solution, they may be readsorbed, and thus there would be some small equilibrium for processes like that.
But pragmatically, these reactions are not at equilibrium.
Yes, every reaction is an equilibrium. A complete reaction is an equilibrium with a high equilibrium constant. If you write the expression for the equilibrium constant, you will find that a high equilibrium constant implies that the concentration of the products is very high, i.e. the reaction has reached completion.
$$ \ce{BaCl2 (aq) + H2SO4 (aq) -> BaSO4 (s) + 2 HCl (aq)} $$ Let's just think about what (aq) means; it means you have ions floating about in there which are in equilibrium with their solids.
If you start from thinking there is no precipitate, just ions dissolved, we have $\ce{Ba^{2+}}$, $\ce{H+}$, $\ce{Cl-}$, and $\ce{SO4^{2-}}$. Then you consider the $K_{\mathrm{s}}$ of the different salts, which are $\ce{BaCl2}$, $\ce{HCl}$, $\ce{BaSO4}$, and $\ce{H2SO4}$. They will all go to $K_{\mathrm{s}}$, so all the salts would be forming and be dissolved unless blocked, e.g. things can become supersaturated. $\ce{BaSO4}$ has an extremely low $K_{\mathrm{s}}$ so most will precipitate at the same time. $\ce{BaCl2}$ will go to $K_{\mathrm{s}}$ with the ions $\ce{Ba^{2+}}$ and $\ce{Cl-}$ which are still in solution, and $\ce{H2SO4}$ would also go to $K_{\mathrm{s}}$, which means $\ce{BaCl2}$ is being formed, and therefore there is a reverse reaction.
Note if I had $\ce{BaSO4}$ in water, which would be in equilibrium (so tiny dissolved/tiny bit of $\ce{Ba^{2+}}$ ions and $\ce{SO4^{2-}}$ ions), and I added $\ce{Cl-}$ ions, a negligible amount more $\ce{BaSO4}$ will dissolve, as $\ce{BaCl2}$ would go to equilibrium, reducing $\ce{Ba^{2+}}$ ions, leading to negligible amounts of $\ce{BaSO4}$ dissolving to remain at $K_{\mathrm{s}}$. This also shows Le Chatelier's principle.
No, every reaction isn't in equilibrium with its products. Consider the following irreversible reaction: $$\ce{BaCl2(aq) + H2SO4(aq) -> BaSO4(ppt) + 2HCl(aq)}$$.
By definition, if the reaction is irreversible then there is no equilibrium for that reaction. If there were an "equilibrium" for the reaction then the equation would be something like: $$K_{\mathrm{eq}} = \dfrac{\ce{[BaSO4][HCl]^2}}{ \ce{[BaCl2][H2SO4]}}$$ and such an equilibrium just doesn't exist, since when the barium sulfate precipitates there could be a microgram or a kilogram as the product. Think of it another way: adding $\ce{HCl}$ (in dilute solution) or $\ce{BaSO4}$ won't shift the reaction to the left. (Adding more HCl would shift $\ce{HSO4^{-} <-> H+ + SO4^{2-}}$ in concentrated solutions, which is beside the point I'm trying to make.) There is a solubility product for barium sulfate, but the solubility product doesn't depend on the amount of barium sulfate precipitate, nor the concentration of HCl. So the solubility product isn't for the overall reaction but rather for part of the system:
$$\ce{[Ba^{2+}][SO4^{2-}]} = K_{\mathrm{sp}}$$
(Full disclosure - Theoretically the barium sulfate solubility product wouldn't depend on the HCl concentration, but really that isn't quite true. The barium sulfate solubility product really depends on the activity of the barium and sulfate ions, so the ionic strength of the solution matters.) |
Consider $\mathbb{R}^n$ equipped with the standard dot product $\langle \cdot, \cdot \rangle$ and $m$ vectors there: $v_1, v_2, \ldots, v_m$. We want to build a data structure that allows queries of the following format: given $x \in \mathbb{R}^n$ output $\min_i \langle x, v_i \rangle$.
Is it possible to go beyond the trivial $O(nm)$ query time? For example if $n = 2$, then it is immediate to get $O(\log^2 m)$.
The only thing I can come up with is the following. It is an immediate consequence of Johnson-Lindenstrauss lemma that for every $\varepsilon > 0$ and a distribution $\mathcal{D}$ on $\mathbb{R}^n$ there is a linear mapping $f \colon \mathbb{R}^n \to \mathbb{R}^{O(\log m)}$ (which can be
evaluated in $O(n \log m)$ time) such that $\mathrm{Pr}_{x \sim \mathcal{D}}\left[\forall i \quad \langle x, v_i \rangle - \varepsilon (\|x\| + \|v_i\|)^2 \leq \langle f(x), f(v_i)\rangle \leq \langle x, v_i \rangle + \varepsilon (\|x\| + \|v_i\|)^2 \right] \geq 1 - \varepsilon$. So, in time $O((n + m) \log m)$ we can compute something that is in some sense close to $\min_i \langle x, v_i \rangle$ for most $x$'s (at least if the norms $\|x\|$ and $\|v_i\|$ are small).

UPD: The above-mentioned bound can be somewhat sharpened to query time $O(n + m)$ if we use locality-sensitive hashing. More precisely, we choose $k := O(\frac{1}{\varepsilon^2})$ independent Gaussian vectors $r_1, r_2, \ldots, r_k$. Then we map $\mathbb{R}^n$ to $\{0,1\}^k$ as follows: $v \mapsto (\langle r_1, v \rangle \geq 0, \langle r_2, v \rangle \geq 0, \ldots, \langle r_k, v \rangle \geq 0)$. Then we can estimate the angle between two vectors within an additive error $\varepsilon$ by computing the $\ell_1$-distance in the image of this mapping. Thus, we can estimate dot products within an additive error $\varepsilon \|x\| \|v_i\|$ in $O(\frac{1}{\varepsilon^2})$ time.
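The sign-sketch idea can be prototyped in a few lines. This is my own toy sketch, not a tuned implementation: since the probability that a random Gaussian hyperplane separates two vectors equals their angle divided by $\pi$, the normalized $\ell_1$ (Hamming) distance between the $\{0,1\}^k$ images estimates the angle:

```python
import numpy as np

rng = np.random.default_rng(42)
n, k = 50, 20_000                 # ambient dimension, sketch length ~ O(1/eps^2)
R = rng.standard_normal((k, n))   # k Gaussian directions r_1, ..., r_k

def sketch(v):
    """Map v to {0,1}^k via the sign tests <r_j, v> >= 0."""
    return R @ v >= 0

x = rng.standard_normal(n)
y = rng.standard_normal(n)

# Pr[signs differ] = angle(x, y) / pi, so the normalized Hamming (l1)
# distance between the sketches estimates the angle.
true_angle = np.arccos(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
est_angle = np.pi * np.mean(sketch(x) != sketch(y))
assert abs(est_angle - true_angle) < 0.1

# The dot product then follows, up to additive error eps * |x| * |y|:
est_dot = np.linalg.norm(x) * np.linalg.norm(y) * np.cos(est_angle)
```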
I'm struggling to understand the reasoning in question $b)$. Basically, I had to write Newton's Second Law and do a change of variables to put the equation of motion of a mass $m$ in a specific form, which will give the driven harmonic oscillator time function once solved.
My Physics assistant was unable to answer my question/problem. It's about the change of variables itself, what it depicts/means, that I've done wrong because of my understanding of the problem and concepts (I think).
Here is the wording of the problem :
"A car, viewed as a point of mass $m$, is moving on a bumpy road with a constant horizontal velocity $\vec v_x$. The mass is linked to a spring device of spring constant $k$ and rest length $l_0$. At the end of the device, there is a massless wheel of negligible radius that follows the road (see the diagram here https://i.imgur.com/4RthHYD.png ).
We suppose that the device which keeps the spring straight doesn't affect the movement of the mass. The values of the parameters of this problem are such that the wheel never lifts off the ground and the car never hits the wheel. The road has a sinusoidal shape of height $H$ and length $L$."
a) Express the vertical position of the wheel $h(t)$ as a function of the time.
I've already done that one. With a vertical axis $y$ pointing up, whose origin is set at $H/2$, we get: $$h(t) = \frac {H}{2} \sin \left(\frac{2 \pi v_x}{L} t\right) = \frac {H}{2} \sin(\omega t)$$
b) Using $h(t)$, find the equation of motion of the car in the vertical direction $y$. Indication: put the equation of motion in the form $\ddot u + \omega_0^2 u = \alpha_0 \sin(\omega t)$; a change of variables $y \rightarrow u$ may be needed.

Firstly, I wrote down Newton's Lex Secunda:
$$-mg -k(y -h(t) - l_0) = m\ddot y$$ where $(y -h(t) - l_0)$ is the extension/compression of the spring as a function of the position $y$ of the mass along the $y$-axis. Indeed, the position of the mass minus the position of the wheel minus the rest length of the spring gives us the extension/compression of the spring.
Now, my attempt was to do the change of variables immediately. I defined an axis $u$ pointing up, whose origin is set at the equilibrium position, and then I expressed the extension/compression of the spring like this: $\Delta u = u - d$, where $d$ is the compression of the spring when the system is at the equilibrium position (as if it were a simple harmonic system with a mass), such that $-mg + kd = 0$.
Therefore I have the equality of the extension/compression :
$$y -h(t) - l_0 = u-d \Rightarrow y = h(t) + l_0 + u - d$$
So I substituted $y$ into the lex secunda and got:
$$- mg - k(u-d) = m \ddot y$$ But I know that $-mg + kd = 0$ and that $\ddot y = \ddot h(t) + \ddot u$, so I finally got: $$\ddot u + \frac{k}{m} u = \frac{H}{2} \omega^2 \sin (\omega t)$$ i.e. $$\ddot u + \omega_0^2 u = \frac{H}{2} \omega^2 \sin (\omega t)$$
But the problem is that apparently my change of variables is wrong. In the corrected version, they first developed the lex secunda equation and got:
$m \ddot y = -mg -k(y -h(t) - l_0) = -mg - ky + kl_0 + k \frac{H}{2}sin(\frac{2\pi v_x}{L}t)$
$\ddot y = -g - \frac {k}{m}y + \frac {k}{m}l_0 + \frac {k}{m} \frac{H}{2}sin(\frac{2\pi v_x}{L}t)$
$\ddot y = -g - \omega_0^2y + \omega_0^2l_0 + \omega_0^2 \frac{H}{2}sin(\omega t)$
$\ddot y + \omega_0^2 (\frac{g}{\omega_0^2} + y - l_0) = + \omega_0^2 \frac{H}{2}sin(\omega t)$, where $\frac{g}{\omega_0^2} = d$.
Here is what I don't understand :
In the corrected version, they set, without explanation, $u = (d + y - l_0)$ and $\ddot u = \ddot y$ and substituted them into the equation of motion. However, I don't understand how $u$ can express the position of the spring relative to an axis $u$ whose origin is set at the equilibrium position (since we then want to find the driven harmonic oscillator function $u(t)$, which gives the position of the mass with respect to the equilibrium position of the system). I don't understand it because if $u$ is the position of the mass with respect to the equilibrium position, then $u - d = y -l_0$ is the extension of the spring, but we saw above that the extension is $y - h(t) - l_0$. So it makes no sense to me.
In brief, my problem is that I don't understand how $u - d = (y - l_0)$ is the extension/compression of the spring. |
The relationship between 3d Chern-Simons theory on the product of the disk and the real line ($D\times \mathbb{R}$) and the chiral WZW model on $S^1\times \mathbb{R}$ was shown in Elitzur et al Nucl.Phys. B326 (1989) 108 (the main details can be found in the top answer to this question, whose notation we follow).
The essential point is that the Chern-Simons gauge field $\tilde{a}$ on the disk can be shown to obey the flatness constraint $\tilde{f}=d\tilde{a}+\tilde{a}\wedge\tilde{a}=0$, which is solved by $$\tilde{a}=-\tilde{d}UU^{-1}. \tag{1}\label{1}$$ Substituting this into the action leads to the chiral WZW model on $S^1\times \mathbb{R}$.
For the quantum theory, one also ought to show that the path integral measure does not have a nontrivial Jacobian, i.e., $$\int \mathcal{D}\tilde{a}~\delta{(\tilde{f})}=\int \mathcal{D}U, \tag{2}\label{2}$$ where $\int \mathcal{D}U$ comes from the Haar measure. My question is, how do we show this?
My attempt: In 1, one way to show this was explained on page 111. They claim that for the change of variables $$\tilde{a}=-\tilde{d}UU^{-1}+\epsilon \tag{3}\label{3}$$ (where $\epsilon$ is a small variation transverse to the space of flat gauge fields), the path integral Jacobian is proportional to $|\textrm{det}(\partial_z+\partial_zU U^{-1})|^2$, and they further claim that this cancels the factor obtained in converting $\delta(\tilde{f})$ to $\delta(\epsilon)$. Here, $z$ is a complex coordinate on $S^1\times \mathbb{R}$.
Here is my attempt to show this. Firstly, by varying \eqref{3}, I obtain $$\delta \tilde{a}=-\tilde{d}(\delta U) U^{-1}+\tilde{d}UU^{-1}\delta U U^{-1}+\delta\epsilon .$$ If the Jacobian is understood to be $|\textrm{det}\frac{\delta\tilde{a}}{\delta U}|$ this seems to imply that $$\int \mathcal{D}\tilde{a}_z\int \mathcal{D}\tilde{a}_{\bar{z}}=|\textrm{det}(\partial_z-\partial_zU U^{-1})U^{-1}|^2\int\mathcal{D}U,$$ which is not exactly what we want.
Next, to show that $$\delta(\tilde{f})\propto \frac{\delta(\epsilon)}{|\textrm{det}(\partial_z+\partial_zU U^{-1})|^2},$$ it seems that we should use a formula of the form $\delta(f(x)) = \delta(x-x_0)/|f'(x_0)|$, together with the explicit form of $|\textrm{det}\frac{\delta \tilde{f}}{\delta \epsilon_z}|$. I was able to show that $$\tilde{f}=\partial_z\epsilon_{\bar{z}}-[\partial_z U U^{-1},\epsilon_{\bar{z}}]-\partial_{\bar{z}}\epsilon_{{z}}+[\partial_{\bar{z}} U U^{-1},\epsilon_{{z}}],$$ but I am not sure how to generalize $\delta(f(x)) = \delta(x-x_0)/|f'(x_0)|$ appropriately. |
Consider a rational function \( R(z)= \frac {P(z)}{Q(z)} \), where \(P\) and \(Q\) are polynomials. There is some theory about fixed points.
Theorem:
Let \( \rho \) be a fixed point of the map \(R\) and let \(g\) be a Möbius map. Then \( gRg^{-1} \) has the same number of fixed points at \( g(\rho) \) as \( R \) has at \( \rho \).
Theorem :
If \( d \geq 1 \), a rational map of degree \(d\) has precisely \( d+1 \) fixed points in \( \mathbb{C}_{\infty} \), counted with multiplicity.
To each fixed point \( \rho \) of a rational map \(R\), we associate a complex number which we call the multiplier \( m(R , \rho) \) of \(R\) at \( \rho \):
$$ m(R, \rho) = \begin{cases} R'(\rho) & \text{if } \rho \neq \infty, \\ \dfrac{1}{R'(\rho)} & \text{if } \rho = \infty. \end{cases} $$
Now we dive into the classification of fixed points. This is a purely local matter: it applies to any analytic function and, in particular, to the local inverse (when it exists) of a rational map.
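A concrete check of the fixed-point count and the multipliers, for \( R(z) = z^2 \) (my own illustration): the finite fixed points solve \( P(z) - zQ(z) = 0 \), and \( \infty \) is also fixed, giving \( d+1 = 3 \) fixed points in total:

```python
import numpy as np

# R(z) = z^2: here P(z) = z^2, Q(z) = 1, so the degree is d = 2.
# Finite fixed points solve P(z) - z*Q(z) = z^2 - z = 0.
finite_fixed = np.roots([1, -1, 0])
assert np.allclose(sorted(finite_fixed.real), [0.0, 1.0])

# R also fixes infinity, so there are d + 1 = 3 fixed points on the
# Riemann sphere, as the theorem predicts.
total_fixed = len(finite_fixed) + 1
assert total_fixed == 3

# Multipliers at the finite fixed points: m(R, rho) = R'(rho) = 2*rho,
# so 0 is superattracting (multiplier 0) and 1 is repelling (multiplier 2).
multipliers = sorted(2 * z.real for z in finite_fixed)
assert np.allclose(multipliers, [0.0, 2.0])
```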
Normal Forms
James Murdock (2006), Scholarpedia, 1(10):1902. doi:10.4249/scholarpedia.1902, revision #91592
A normal form of a mathematical object, broadly speaking, is a simplified form of the object obtained by applying a transformation (often a change of coordinates) that is considered to preserve the essential features of the object. For instance, a matrix can be brought into Jordan normal form by applying a similarity transformation. This article focuses on normal forms for autonomous systems of differential equations (vector fields or flows) near an equilibrium point. Similar ideas can be used for discrete-time dynamical systems (diffeomorphisms) near a fixed point, or for flows near a periodic orbit.
Basic Definitions
The starting point is a smooth system of differential equations with an equilibrium (rest point) at the origin, expanded as a power series \[\dot x = Ax + a_1(x) + a_2(x) +\cdots,\] where \(x\in{\mathbb R}^n\) or \({\mathbb C}^n\ ,\) \(A\) is an \(n\times n\) real or complex matrix, and \(a_j(x)\) is a homogeneous polynomial of degree \(j+1\) (for instance, \(a_1(x)\) is quadratic). The expansion is taken to some finite order \(k\) and truncated there, or else is taken to infinity but is treated formally (the convergence or divergence of the series is ignored). The purpose is to obtain an approximation to the (unknown) solution of the original system that will be valid over an extended range in time. The linear term \(Ax\) is assumed to be already in the desired normal form, usually the Jordan or a real canonical form. A transformation to new variables \(y\) is applied, having the form \[x=y+u_1(y)+u_2(y)+\cdots,\] where \(u_j\) is homogeneous of degree \(j+1\ .\) This results in a new system \[\dot y = Ay + b_1(y) + b_2(y) +\cdots,\] having the same general form as the original system. The goal is to make a careful choice of the \(u_j\ ,\) so that the \(b_j\) are "simpler" in some sense than the \(a_j\ .\) "Simpler" may mean only that some terms have been eliminated, but in the best cases one hopes to achieve a system that has additional symmetries that were not present in the original system. (If the normal form possesses a symmetry to all orders, then the original system had a hidden approximate symmetry with transcendentally small error.)
Among many historical references in the development of normal form theory, two significant ones are Birkhoff (1996) and Bruno (1989). As the Birkhoff reference shows, the early stages of the theory were confined to Hamiltonian systems, and the normalizing transformations were canonical (now called symplectic). The Bruno reference treats in detail the convergence and divergence of normalizing transformations.
An Example
A basic example is the nonlinear oscillator with \(n=2\) and \[A=\left[\begin{matrix} 0 & -1 \\ 1 & 0\end{matrix} \right].\] In this case it is possible (no matter what the original \(a_j\) may be) to achieve \(b_j=0\) for \(j\) odd and to eliminate all but two coefficients from each \(b_j\) with \(j\) even. More precisely, writing \(r^2=y_1^2+y_2^2\ ,\) a normal form in this case is \[\dot y = Ay + \sum_{i=1}^{\infty} \alpha_ir^{2i}y + \beta_ir^{2i}Ay\ .\] In polar coordinates this becomes \[\dot r = \alpha_1 r^3 + \alpha_2r^5+\cdots\] \[\dot\theta = 1 + \beta_1r^2 + \beta_2r^4+\cdots\ .\] The first nonzero \(\alpha_i\) determines the stability of the origin, and the \(\beta_i\) control the dependence of frequency on amplitude. Also the normalized system has achieved symmetry (more technically, equivariance) under rotation about the origin. Although the classical (or level-one) approach to normal forms stops with the form obtained above for this example, it is important to note that neither the coefficients \(\alpha_i\) and \(\beta_i\) in the equation, nor the transformation terms \(u_j\) used to achieve the equation, are uniquely determined by the original \(a_j\ .\) In fact, by a more careful choice of the \(u_j\ ,\) it is possible to put the nonlinear oscillator into a hypernormal form (also called a unique, higher-level, or simplest normal form) in which all but finitely many of the coefficients \(\alpha_i\) and \(\beta_i\) are zero. Hypernormal forms are difficult to calculate, and from here on we speak only of classical normal forms.

Asymptotic Consequences of Normal Forms
For some systems, the normal form (truncated at a given degree) is simple enough to become solvable. In this case it is of interest to ask whether this solution gives rise to a good approximation (an asymptotic approximation in some specific sense) to a solution of the original equation (say, with the same initial condition). The answer is "sometimes yes". ("Gives rise to" means that the solution of the truncated normal form usually must be fed back through the transformation to normal form.) Some popular books, such as Nayfeh (1993), present the subject entirely from this point of view, without proving any error estimates or noticing that there are cases in which asymptotic validity cannot hold. Several theorems and open questions in this regard are given in chapter 5 of Murdock (2003). The most basic theorem states that an asymptotic error estimate with respect to a small parameter holds if (a) the parameter is introduced correctly, (b) the matrix of the linear term is semisimple (see below) and has all its eigenvalues on the imaginary axis, and (c) the semisimple normal form style (see below) is used. Although the asymptotic use of normal forms is important when it is true, and has many practical applications, the primary importance of normal forms is as a preparatory step towards the study of qualitative dynamics, unfoldings, and bifurcations.
Geometrical Consequences of the Normal Form
It has already been pointed out that a normal form can decide stability questions and establish hidden symmetries. Computing the normal form up to degree \(k\) also automatically computes (to degree \(k\)) the stable, unstable, and center manifolds, the center manifold reduction, and the fibration of the center-stable and center-unstable manifolds over the center manifold. The common practice of computing the center manifold reduction first, and then computing the normal form only for this reduced system, seems to save work but loses many of these results. See chapter 5 of Murdock (2003).
On occasion, the truncation of a normal form produces a simple system that is topologically equivalent to the original system in a neighborhood of the equilibrium, called a topological normal form. For instance, in the example above, truncating after the first nonvanishing \(\alpha_i\) will accomplish this, but if all \(\alpha_i\) are zero, the topological behavior is probably determined by a transcendentally small effect that is not captured by the normal form.

The Homological Equation and Normal Form Styles
In the general case, we define the Lie derivative operator \(L_A\) associated with the matrix \(A\) by \((L_A v)(x)=v'(x)Ax-Av(x)\ ,\) where \(v\) is a vector field and \(v'\) is its matrix of partial derivatives. Then \(L_A\) maps the vector space \(\mathcal{V}_j\) of homogeneous vector fields of degree \(j+1\) into itself. The relation between the \(a_j\ ,\) \(b_j\ ,\) and \(u_j\) is determined recursively by the homological equations \[L_A u_j = K_j - b_j\ ,\] where \(K_1=a_1\) and \(K_j\) equals \(a_j\) plus a correction term computed from \(a_1,\dots,a_{j-1}\) and \(u_1,\dots,u_{j-1}\ .\) Let \(\mathcal{N}_j\) be any choice of a complementary subspace to the image of \(L_A\) in \(\mathcal{V}_j\ ;\) then it is possible to choose the \(u_j\) so that each \(b_j\in \mathcal{N}_j\ .\) (Take \(b_j=P_j K_j\ ,\) where \(P_j:\mathcal{V}_j\rightarrow\mathcal{N}_j\) is the projection map, and note that the homological equation can then be solved, nonuniquely, for \(u_j\ .\)) The choice of \(\mathcal{N}_j\) is called a normal form style, and represents the preference of the user as to what is considered "simple". The purpose of this procedure is to ensure that the higher-order correction terms, \(u_j\ ,\) are bounded, so that the approximation to the solution, \(x(t)\ ,\) is valid over an extended range in time.

The Semisimple Case; Resonant Monomials
The theory breaks into two cases according to whether \(A\) is semisimple (diagonalizable) or not. The semisimple case, illustrated by the nonlinear oscillator above, is the easiest, and there is only one useful style (in which \(\mathcal{N}_j\) is the kernel of \(L_A\)), ultimately due to Poincaré. It is easy to describe the semisimple normal form if \(A\) is diagonal with diagonal entries \(\lambda_1,\dots,\lambda_n\) (which usually requires introducing complex variables with reality conditions): the \(r\)th equation (for \(\dot y_r\)) of the normalized system will contain only monomials \(y_1^{m_1}\cdots y_n^{m_n}\) satisfying \[m_1\lambda_1+\cdots+m_n\lambda_n-\lambda_r=0\ .\] Such monomials are called resonant because, for pure imaginary eigenvalues, this equation becomes a resonance among frequencies in the usual sense. An elementary treatment of normal forms in the semisimple case only is by Kahn and Zarmi (1998).

The Nonsemisimple Case
In the nonsemisimple case there are two important styles, the inner product normal form, originally due to Belitskii but popularized by Elphick et al. (1987), and the sl(2) normal form due to Cushman and Sanders. In the inner product style, \(\mathcal{N}_j\) is the kernel of \(L_{A^*}\ ,\) \(A^*\) being the adjoint or conjugate transpose of \(A\ .\) In the sl(2) style, \(\mathcal{N}_j\) is the kernel of an operator defined from \(A\) using the theory of the Lie algebra sl(2). The inner product style is more popular at this time, but the sl(2) style has a much richer mathematical structure, with deep connections to sl(2) representation theory and to the classical invariant theory of Cayley, Sylvester, and others. Because of this, the sl(2) style has computational algorithms that are not available for the inner product style. There is also a simplified normal form style that is derived from the inner product style by changing the projection.
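The structure of \(L_A\) is easy to verify directly in the semisimple case (this check is my own illustration, not from the article): for diagonal \(A\), each monomial vector field is an eigenvector of \(L_A\) with eigenvalue \(m_1\lambda_1+\cdots+m_n\lambda_n-\lambda_r\), so the kernel of \(L_A\) is spanned by exactly the resonant monomials. A small sympy check:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
lam = (1, 2)                       # diagonal entries of A (chosen for illustration)
A = sp.diag(*lam)
X = sp.Matrix([x1, x2])

def lie_derivative(v):
    """(L_A v)(x) = v'(x) A x - A v(x)."""
    return v.jacobian(X) * (A * X) - A * v

# A monomial vector field: x1^2 in the first component (m = (2, 0), r = 1)
v = sp.Matrix([x1**2, 0])
Lv = sp.simplify(lie_derivative(v))

# Eigenvalue predicted by the resonance expression: m1*lam1 + m2*lam2 - lam_r
mu = 2 * lam[0] + 0 * lam[1] - lam[0]
assert sp.simplify(Lv - mu * v) == sp.zeros(2, 1)
```

A monomial is resonant exactly when this eigenvalue is zero, which is why the semisimple style's complement \(\mathcal{N}_j = \ker L_A\) consists of the resonant terms.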
A modern introduction to normal form theory, containing all the styles mentioned here with references and historical remarks, may be found in the monograph by Murdock (2003). Some more recent developments are contained in the last few chapters of Sanders, Verhulst, and Murdock (2007).
References

Poincaré, H., New Methods of Celestial Mechanics (Am. Inst. of Physics, 1993).
Birkhoff, G.D., Dynamical Systems (Am. Math. Society, Providence, 1996).
Arnold, V.I., Geometrical Methods in the Theory of Ordinary Differential Equations (Springer-Verlag, New York, 1988).
Bruno, A.D., Local Methods in Nonlinear Differential Equations (Springer-Verlag, Berlin, 1989).
Elphick, C., Tirapegui, E., Brachet, M.E., Coullet, P., and Iooss, G., A simple global characterization for normal forms of singular vector fields, Physica D, 29:95-127 (1987).
Nayfeh, A.H., Method of Normal Forms (Wiley, New York, 1993).
Kahn, P.B. and Zarmi, Y., Nonlinear Dynamics: Exploration through Normal Forms (Wiley, New York, 1998).
Murdock, J., Normal Forms and Unfoldings for Local Dynamical Systems (Springer, New York, 2003).
Sanders, J., Verhulst, F., and Murdock, J., Averaging Methods in Nonlinear Dynamical Systems (Springer, New York, 2007).

Internal references

Jack Carr (2006) Center manifold. Scholarpedia, 1(12):1826.
Jeff Moehlis, Kresimir Josic, Eric T. Shea-Brown (2006) Periodic orbit. Scholarpedia, 1(7):1358.
Philip Holmes and Eric T. Shea-Brown (2006) Stability. Scholarpedia, 1(10):1838.
James Murdock (2006) Unfoldings. Scholarpedia, 1(12):1904.
This is the final part of this hands-on tutorial. I will assume from now on that you have read Part I, Part II, and Part III of this series.
As promised, this post will deal with:

- Some tweaks to the protocol presented in the previous posts.
- A complexity analysis of the protocol.
- A small optimization.
- A few words about modern ZK proving protocols.

Lior's Tweaks
A colleague at Starkware, Lior Goldberg, pointed out a caveat in the zero-knowledge aspect and suggested a nice simplification of the protocol. Here they are:
A ZK Fix
Now if the random query $i$ happens to be $1$, then the prover is required to reveal the second and third elements in the witness. If they happen to be 15 and 21, then the verifier knows immediately that 5 and 6 (from the problem instance) belong to the same side in the solution. This violates the zero-knowledge property that we wanted.
This happened because we chose uniformly at random from a very small range, and $r$ happened to be the maximal number in that range.
There are two ways to solve this. One is by choosing some arbitrary number and doing all computations modulo that number. A simpler way would be choosing $r$ from a huge domain, such as $0..2^{100}$, which makes the probability of getting a revealing $r$ negligible.
Simplify By Having A Cyclic List
Our witness originally had $n + 1$ elements such that the first was a random number, and the rest were partial sums of the problem and the assignment dot product (plus the initial random number).
This meant we had two types of queries: one to check that two consecutive elements in the list differ, in absolute value, by the corresponding element in the problem list, and another that just checked that the first and last elements are equal.
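In code, the two original query types might look like this (a simplified sketch of my own; the function name is hypothetical and the Merkle-path checks are omitted):

```python
def check_query(problem, witness, i):
    """Original protocol: witness has n+1 entries (r, then partial sums).

    - For i < n: consecutive entries must differ, in absolute value,
      by the i-th problem element.
    - For i == n: the first and last entries must be equal
      (i.e. the signed sum is zero).
    """
    n = len(problem)
    if i < n:
        return abs(witness[i + 1] - witness[i]) == problem[i]
    return witness[0] == witness[n]

problem = [4, 11, 8, 1]
witness = [7, 11, 0, 8, 7]   # r = 7, assignment (+, -, +, -)
assert all(check_query(problem, witness, i) for i in range(len(problem) + 1))
```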
As Lior pointed out, it is much more elegant to omit the last element from the witness entirely, and if $i = n$ - check that the first and last elements in the witness differ, in absolute value, by the last element in the problem instance. Essentially, this is like thinking of the witness as cyclic. The nice thing about this is that now we only have one type of query - a query about the difference between two consecutive elements, modulo $n$, in the witness.

Proof Size / Communication Complexity
We'd like to analyze the size of the proof that our code generates. This is often referred to as communication complexity, because the Fiat-Shamir Heuristic (that was described in Part III) transforms messages (from an interactive protocol) to a proof, making these two terms interchangeable in this context.
So, for each query, the proof stores:
- The value of $i$.
- The value of the $i$-th element in the witness and the $((i+1) \bmod n)$-th element.
- Authentication paths for both elements.
The authentication paths here are the heavy part. Each of them is a $\log(n)$-element long list of 256-bit values.
As was discussed in the last post, to get a decent soundness, the number of queries has to be roughly $100n$.
Putting these two together, the proof size will be dominated by the $~200 \cdot n \cdot \log(n)$ hashes that form the authentication paths.
So a proof that one knows an assignment to a Partition Problem instance with 1000 numbers will require roughly $2,000,000$ hashes, which translates to 64 megabytes of data.
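A quick back-of-the-envelope check of these numbers (assuming 32-byte hashes, as with the SHA-256 commitments used earlier in the series):

```python
import math

n = 1000                                          # numbers in the instance
queries = 100 * n                                 # queries for decent soundness
hashes_per_query = 2 * math.ceil(math.log2(n))    # two authentication paths
total_hashes = queries * hashes_per_query
size_mb = total_hashes * 32 / 10**6               # 32 bytes per hash

print(total_hashes, size_mb)   # 2000000 hashes, 64.0 MB
```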
Small Merkle Optimization
Since Merkle authentication paths, somewhat surprisingly, make up the vast majority of the proof, maybe we can reduce their number by a little.
Note that all queries (except for one) ask about consecutive leaves in the tree.
Consecutive leaves have their LCA (least common ancestor), on average, at height $\frac {\log n} {2}$. Up to the LCA, their authentication paths may differ, but from the LCA up to the root, their authentication paths are identical, so we're wasting space writing both in the proof.

Omitting the path from the LCA to the root from one of them will bring the proof size down to $150 \cdot n \cdot \log (n)$, which is a nice 25% improvement.
Implementing this optimization, as well as Lior's tweaks, is left - as they say in textbooks - as an exercise for the reader.
Modern Protocols
Modern ZK proving protocols, such as ZK-SNARKS, ZK-STARKS, Bulletproof, Ligero, Aurora, and others, are often compared along these four axes:
- What type of statements can be proved using the protocol.
- How much space the proof takes up.
- How long it takes to create a proof.
- How long it takes to verify a proof.
Often the topic of trusted setup is discussed, but we won't get into that here.
Let's see how our toy-protocol fares:
Which statements can be proved?
In the toy-protocol, only knowledge of a solution to a Partition Problem instance could be proved. This is in contrast with most protocols, where one can use the protocol to prove knowledge of an input that satisfies some arbitrary arithmetic circuit, or even that a specific program ran for $T$ steps and provided a specified output (this is what ZK-STARKs do).
Well, you may say, if you can prove one NP-complete problem (and the Partition Problem is one) - you can prove them all, due to polynomial time reductions. And theoretically speaking, you would be right. However, in the practical world of ZK-proofs, all these manipulations have costs of their own, and conversions often incur a blow-up of the problem, since "polynomial reduction" is a theoretical term that can translate to non-practical cost. For this reason, modern protocols make an effort to take as input more expressive forms (such as arithmetic circuits and statements about computer programs).
Space
As the analysis showed, our proof takes up $O(n \log (n))$ space, whereas in most modern protocols, the proof size is somewhere between constant and polylogarithmic in $n$ (e.g. $O(\log ^2 (n))$).
This huge gap is what makes the proposed protocol nothing more than a toy example, that - while demonstrating certain approaches and tricks - is useless for any real application.
You can trace this gap to the fact that we need a linear number of queries, each costing a logarithmic number of hashes (the Merkle authentication paths).
The approach I took was inspired by tricks from the ZK-STARK protocol, which is slightly more expensive than others in terms of proof size, but is expressive, requires relatively short prover time, and very short verifier time. In STARK, the lion's share of the proof is indeed comprised of Merkle authentication paths, but great care is taken so that the number of queries will be minuscule.
Prover Running Time
In our protocol it is roughly $O(n \log (n))$, which is not far from modern protocols.
Verifier Running Time
In our protocol it is linear in the proof size, so $O(n \log n)$ which is not so good. Recall that, at least in the context of blockchains, a proof is written once but verified many times (by miners for example). Modern protocols thus strive to make the verifier workload as small as they possibly can without impeding soundness.
This concludes what I hoped to cover in this tutorial. It was fun to write and code. Let's do it again sometime. :) |
I think there are two legitimate sources of complaint. For the first, I will give you the anti-poem that I wrote in complaint against both economists and poets. A poem, of course, packs meaning and emotion into pregnant words and phrases. An anti-poem removes all feeling and sterilizes the words so that they are clear. The fact that most English speaking humans cannot read this assures economists of continued employment. You cannot say that economists are not bright.
Live Long and Prosper-An Anti-Poem
May you be denoted as $k\in{I},I\in\mathbb{N}$, such that $I=1\dots{i}\dots{k}\dots{Z}$
where $Z$ denotes the most recently born human.
$\exists$ a fuzzy set $Y=\{y^i:\text{Human Mortality Expectations}\mapsto{y^i},\forall{i\in{I}}\},$
may $y^k\in\Omega,\Omega\in{Y}$ and $\Omega$ is denoted as "long"
and may $U(c)$, where c is the matrix of goods and services across your lifetime
$U$ is a function of $c$, where preferences are well-defined and $U$ is qualitative satisfaction,
be maximized $\forall{t}$, $t$ denoting time, subject to
$w^k=f'_t(L_t),$ where $f$ is your production function across time
and $L$ is the time vector of your amount of work,
and further subject to $w^i_tL^i_t+s^i_{t-1}=P_t^{'}c_t^i+s^i_t,\forall{i}$
where $P$ is the vector of prices and $s$ is a measure of personal savings across time.
May $\dot{f}\gg{0}.$
Let $W$ be the set $W=\{w^i_t:\forall{i,t}\text{ ranked ordinally}\}$
Let $Q$ be the fuzzy subset of $W$ such that $Q$ is denoted "high".
Let $w_t^k\in{Q},\forall{t}$
The second is mentioned above, which is the misuse of math and statistical methods. I would both agree and disagree with the critics on this. I believe that most economists are not aware of how fragile some statistical methods can be. To provide an example, I did a seminar for the students in the math club on how your probability axioms can completely determine the interpretation of an experiment.
I proved using real data that newborn babies will float out of their cribs unless nurses swaddle them. Indeed, using two different axiomatizations of probability, I had babies clearly floating away and obviously sleeping soundly and securely in their cribs. It wasn't the data that determined the result; it was axioms in use.
Now any statistician would clearly point out that I was abusing the method, except that I was abusing the method in a manner that is normal in the sciences. I didn't actually break any rules, I just followed a set of rules to their logical conclusion in a way that people do not consider because babies don't float. You can get significance under one set of rules and no effect at all under another. Economics is especially sensitive to this type of problem.
I do believe that there is an error of thought in the Austrian school, and maybe the Marxist school, about the use of statistics in economics, one that is based on a statistical illusion. I am hoping to publish a paper on a serious math problem in econometrics that nobody has seemed to notice before, and I think it is related to the illusion.
This image is the sampling distribution of Edgeworth's Maximum Likelihood estimator under Fisher's interpretation (blue) versus the sampling distribution of the Bayesian maximum a posteriori estimator (red) with a flat prior. It comes from a simulation of 1000 trials, each with 10,000 observations, so they should converge. The true value is approximately .99986. Since the MLE is also the OLS estimator in this case, it is also Pearson and Neyman's MVUE.
Note how relatively inaccurate the Frequency based estimator is compared to the Bayesian. Indeed, the relative efficiency of $\hat{\beta}$ under the two methods is 20:1. Although Leonard Jimmie Savage was certainly alive when the Austrian school left statistical methods behind, the computational ability to use them didn't exist. The first element of the illusion is inaccuracy.
The second part can better be seen with a kernel density estimate of the same graph.
In the region of the true value, there are almost no examples of the maximum likelihood estimator being observed, while the Bayesian maximum a posteriori estimator closely covers .999863. In fact, the average of the Bayesian estimators is .99987 whereas the frequency based solution is .9990. Remember this is with 10,000,000 data points overall.
Frequency-based estimators are averaged over the sample space. The missing implication is that an estimator can be unbiased, on average, over the entire space, but biased for any specific value of $\theta$. You also see this with the binomial distribution. The effect is even greater on the intercept.
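The post doesn't specify the simulated model, but the near-unit-root true value suggests an autoregression. As a hedged illustration of the general point - that a frequentist estimator can be systematically off for a specific parameter value even though its guarantees hold on average over the sample space - here is a small simulation of the well-known downward finite-sample bias of OLS in an AR(1) model (all parameter choices are mine):

```python
import random

random.seed(0)

def ols_ar1(y):
    """OLS slope of y[t] on y[t-1], no intercept."""
    num = sum(y[t - 1] * y[t] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

rho, T, trials = 0.99, 50, 500
estimates = []
for _ in range(trials):
    y = [0.0]
    for _ in range(T):
        y.append(rho * y[-1] + random.gauss(0, 1))
    estimates.append(ols_ar1(y))

mean_est = sum(estimates) / trials
# In small samples the OLS estimate of rho is biased toward zero,
# even though OLS is consistent as T grows.
assert mean_est < rho
```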
The red is the histogram of Frequentist estimates of the intercept, whose true value is zero, while the Bayesian is the spike in blue. The impact of these effects is worsened with small sample sizes because large samples pull the estimator to the true value.
I think the Austrians were seeing results that were inaccurate and didn't always make logical sense. When you add data mining into the mix, I think they were rejecting the practice.
The reason I believe the Austrians are incorrect is that their most serious objections are solved by Leonard Jimmie Savage's personalistic statistics. Savage's Foundations of Statistics fully covers their objections, but I think the split had effectively already happened, and so the two have never really met up.
Bayesian methods are generative methods while Frequency methods are sampling based methods. While there are circumstances where it may be inefficient or less powerful, if a second moment exists in the data, then the t-test is always a valid test for hypotheses regarding the location of the population mean. You do not need to know how the data was created in the first place. You need not care. You only need to know that the central limit theorem holds.
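This claim about the t-test is easy to illustrate (my own simulation, not the author's; the sample size and trial count are arbitrary choices): even for markedly non-normal data, the test's type-I error rate stays near its nominal level once the sample is large enough for the central limit theorem to do its work.

```python
import math
import random

random.seed(1)

def t_statistic(xs, mu0):
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return (mean - mu0) / math.sqrt(var / n)

# Exponential(1) data: skewed, but with a finite second moment.
n, trials, crit = 200, 2000, 1.96   # crit ~ two-sided 5% level for large n
rejections = sum(
    abs(t_statistic([random.expovariate(1.0) for _ in range(n)], mu0=1.0)) > crit
    for _ in range(trials)
)
rate = rejections / trials
# Under H0 (true mean is 1), the empirical rejection rate sits near 5%.
assert 0.02 < rate < 0.09
```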
Conversely, Bayesian methods depend entirely on how the data came into existence in the first place. For example, imagine you were watching English style auctions for a particular type of furniture. The high bids would follow a Gumbel distribution. The Bayesian solution for inference regarding the center of location would not use a t-test, but rather the joint posterior density of each of those observations with the Gumbel distribution as the likelihood function.
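To make that concrete, here is a sketch (my own construction, not the author's) of a grid-based Bayesian posterior for the Gumbel location parameter under a flat prior, using simulated winning bids:

```python
import math
import random

random.seed(2)

MU_TRUE, BETA = 3.0, 1.0

def gumbel_loglik(mu, xs, beta=BETA):
    """Log-likelihood of Gumbel(mu, beta) for observed high bids xs."""
    z = [(x - mu) / beta for x in xs]
    return sum(-zi - math.exp(-zi) - math.log(beta) for zi in z)

# Simulated winning bids: inverse-CDF sampling from a Gumbel distribution.
bids = [MU_TRUE - BETA * math.log(-math.log(random.random())) for _ in range(200)]

# Flat prior on a grid: the posterior is just the normalized likelihood.
grid = [1.0 + 0.01 * k for k in range(500)]           # mu in [1, 6)
logpost = [gumbel_loglik(mu, bids) for mu in grid]
m = max(logpost)
weights = [math.exp(lp - m) for lp in logpost]         # log-sum-exp trick
post_mean = sum(mu * w for mu, w in zip(grid, weights)) / sum(weights)
```

The posterior mean lands close to the true location, and the whole inference runs through the Gumbel likelihood rather than a t-statistic.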
The Bayesian idea of a parameter is broader than the Frequentist and can accommodate completely subjective constructions. As an example, Ben Roethlisberger of the Pittsburgh Steelers could be considered a parameter. He would also have parameters associated with him such as pass completion rates, but he could have a unique configuration and he would be a parameter in a sense similar to Frequentist model comparison methods. He might be thought of as a model.
The complexity rejection isn't valid under Savage's methodology and indeed cannot be. If there were no regularities in human behavior, it would be impossible to cross a street or take a test. Food would never be delivered. It may be the case, however, that "orthodox" statistical methods can give pathological results that have pushed some groups of economists away. |
Problem
I want to convert the general second order linear PDE problem \begin{align} \begin{cases} a(x,y)\frac{\partial^2 u}{\partial x^2}+b(x,y) \frac{\partial^2 u}{\partial y^2} +c(x,y)\frac{\partial^2 u}{\partial x \partial y}\\+d(x,y)\frac{\partial u}{\partial x}+e(x,y)\frac{\partial u}{\partial y}+f(x,y)u=g(x,y) & \text{in } R \text{ PDE} \\ u=u^* & \text{on } S_1 \text{ Dirichlet boundary condition} \\ \dfrac{\partial u}{\partial n}=q^* & \text{on } S_2 \text{ Neumann boundary condition} \\ \dfrac{\partial u}{\partial n}=r^*_1-r^*_2 u & \text{on } S_3 \text{ Robin boundary condition} \\ \end{cases} \end{align} into a weak form suitable for the finite element method. That is, into the weak bilinear form $B(u,v)=L(v)$ where $B$ is a bilinear, symmetric and positive definite functional and $L$ is a linear functional.
Work thus far
I know how to convert the following
\begin{align} \begin{cases} \dfrac{\partial^2 u}{\partial x^2}+\dfrac{\partial^2 u}{\partial y^2}+u=g(x,y) & \text{in } R \text{ PDE} \\ u=u^* & \text{on } S_1 \text{ Dirichlet boundary condition} \\ \dfrac{\partial u}{\partial n}=q^* & \text{on } S_2 \text{ Neumann boundary condition} \\ \dfrac{\partial u}{\partial n}=r^*_1-r^*_2 u & \text{on } S_3 \text{ Robin boundary condition} \\ \end{cases} \end{align} into the weak bilinear form $B(u,v)=L(v)$ where $B$ is bilinear, symmetric and positive definite and $L$ is linear. The steps are as follows (note that $v$ is our test function): \begin{align} \int \int_{R} \left(\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}+u \right) v \ dA &= \int \int_{R} g(x,y) v \ dA \end{align} Using the identity \begin{align} \int \int_{R} v \nabla^2 u\ dA &= \int_{S} v \frac{\partial u}{\partial n}\ ds-\int\int_{R} \nabla u \cdot \nabla v\ dA \end{align} we get \begin{align} \int \int_R -\nabla u \cdot \nabla v +uv \ dA &= \int \int_R g v \ dA - \int_{S} v \frac{\partial u}{\partial n}\ ds \\ \int \int_R -\nabla u \cdot \nabla v +uv \ dA &= \int \int_R g v \ dA - \int_{S_1} v \frac{\partial u}{\partial n}\ ds- \int_{S_2} v \frac{\partial u}{\partial n}\ ds - \int_{S_3} v \frac{\partial u}{\partial n}\ ds \\ \int \int_R -\nabla u \cdot \nabla v +uv \ dA &= \int \int_R g v \ dA - \int_{S_2} v q^* \ ds - \int_{S_3} v (r^*_1-r^*_2 u) \ ds \\ \int \int_R -\nabla u \cdot \nabla v +uv \ dA &= \int \int_R g v \ dA - \int_{S_2} v q^* \ ds - \int_{S_3} v r^*_1\ ds +\int_{S_3} r^*_2 uv \ ds \\ \int \int_R -\nabla u \cdot \nabla v +uv \ dA +\int_{S_3} r^*_2 uv \ ds &= \int \int_R g v \ dA - \int_{S_2} v q^* \ ds - \int_{S_3} v r^*_1\ ds \\ B(u,v)&=L(v) \end{align}

Where I am having trouble
I do not know what to do with the terms $$c(x,y)\frac{\partial^2 u}{\partial x \partial y}+d(x,y)\frac{\partial u}{\partial x}+e(x,y)\frac{\partial u}{\partial y}$$ as using the divergence theorem/integration by parts from the work thus far section leaves terms that are not symmetric and therefore do not satisfy the requirements for $B(u,v)$.
The other problem is the terms $$a(x,y)\frac{\partial^2 u}{\partial x^2}+b(x,y) \frac{\partial^2 u}{\partial y^2}$$ for which the identity that I used in the work thus far section does not work (I am probably wrong on this part).
I could really use some guidance on both of these problems.
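Not part of the original question, but the integration-by-parts identity used in the work thus far section is easy to sanity-check symbolically on a concrete case (the choices of $u$, $v$, and the unit-square region are mine):

```python
import sympy as sp

x, y = sp.symbols("x y")
u = x**2 * y          # arbitrary smooth test case
v = x * y**2

# Left side: integral of v * Laplacian(u) over the unit square R = [0,1]^2.
lap_u = sp.diff(u, x, 2) + sp.diff(u, y, 2)
lhs = sp.integrate(sp.integrate(v * lap_u, (x, 0, 1)), (y, 0, 1))

# Right side: boundary integral of v * du/dn minus integral of grad(u).grad(v).
bulk = sp.integrate(
    sp.integrate(sp.diff(u, x) * sp.diff(v, x) + sp.diff(u, y) * sp.diff(v, y),
                 (x, 0, 1)), (y, 0, 1))
boundary = (
    sp.integrate((v * sp.diff(u, x)).subs(x, 1), (y, 0, 1))    # side x=1, n=+x
    - sp.integrate((v * sp.diff(u, x)).subs(x, 0), (y, 0, 1))  # side x=0, n=-x
    + sp.integrate((v * sp.diff(u, y)).subs(y, 1), (x, 0, 1))  # side y=1, n=+y
    - sp.integrate((v * sp.diff(u, y)).subs(y, 0), (x, 0, 1))  # side y=0, n=-y
)
assert sp.simplify(lhs - (boundary - bulk)) == 0
```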
Notes

- This question is part of a much larger problem in which I have to use the finite element method. Once the problem is in a weak form in which the finite element/Galerkin method can be applied, I know what to do.
- From what I know, the symmetry of $B(u,v)$ is essential. If there is some other weak form that works with the finite element method (that is suitable for a numerical solution), that would be an acceptable answer to my problem.
- I have been following "Finite Elements: A Gentle Introduction". I could not find anything in the book that answered the problem. If you have any references that cover my problem that would be great (so far I have found nothing).
- If you have any questions feel free to ask.
- I originally posted this question on Math Stack Exchange. I reposted the question here as it is relevant and could bring more interest to my problem.

Notation

- $n$ is the vector normal to the boundary surface.
- $u(x,y)$ is the solution to the given PDE or ODE.
- $v(x,y)$ is a test function.
- $\int \int_{R} * \ dA$ is an integral over region $R$.
- $\int_{S} * ds$ is a surface integral over $S$.
- $u^*, q^*, r^*_1, r^*_2, r^*_3$ are either constants or functions used to define the boundary conditions.
- The surface ($S$) boundary conditions can be divided into Dirichlet, Neumann, and Robin boundary conditions. That is, $S=S_1\cup S_2 \cup S_3$.
I have used tcolorbox to highlight my equations; the problem I have is that the indentation somehow does not work properly. If I use regular equations, I just have to remove the blank lines before and after the equation to get no indentation. Nevertheless, with tcolorbox this does not seem to work.
Here is an example:
This is the definition for the colorbox:
\newtcolorbox{empheqboxed}[1][]{colback=gray!20,
  colframe=white,
  width=\textwidth,
  sharpish corners,
  top=-3mm, % default value 2mm
  bottom=3pt}
In the document:
... response of the medium can be considered linear, leading to the following expression for the polarizability:
%\begin{empheqboxed}
\begin{align}%
\mathbf{P}(\mathbf{r},t)=\varepsilon_0\int\limits_{-\infty}^{\infty}\!\!\mbox{d}\tau\int \!\mbox{d}^3r'\:\boldsymbol{\chi}(\mathbf{r},\mathbf{r}',\tau)\cdot\mathbf{E}(\mathbf{r},\mathbf{r}',-\tau) + \mathbf{P}_N(\mathbf{r},t)
\label{eq:langevin_1}
\end{align}
%\end{empheqboxed}%
here, $\boldsymbol{\chi}$ represents...
Which looks like this: |
I'm doing research in generalised inverse limits, and I'm trying to prove a result about circle-like plane continua.
Definitions
A continuum is a compact, connected metric space.
A plane continuum is a continuum that is homeomorphic to a subcontinuum of $\mathbb{R}^2$. This definition can just be interpreted as "a continuum which is a subset of $\mathbb{R}^2$", as the natural equivalence relation on continua is homeomorphism.
A cover $\mathscr{C}$ is a chain cover if there is an enumeration $\{C_1, C_2, \cdots, C_n\}$ so that $C_p \cap C_q = \emptyset$ if and only if $|p-q|>1$. Intuitively this is any cover that looks like an actual steel chain, but abstract.
A continuum is arc-like if for every $\varepsilon>0$, there is an $\varepsilon$ chain cover of the continuum. (By $\varepsilon$ chain cover I mean every subset in the cover is an $\varepsilon$ ball.)
A continuum is circle-like if it is not arc-like, but for every $\varepsilon>0$, there is an $\varepsilon$ cover $\mathscr{C}$ of the continuum such that $\mathscr{C}$ has an enumeration $\{C_1, C_2, \cdots, C_n\}$ in which $C_p \cap C_q = \emptyset$ if and only if $n-1>|p-q|>1$. Intuitively this looks like a steel chain where the first ring is connected to the last ring. It is the same as taking a chain cover, but then requiring the first and last subsets in the cover to intersect.
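These combinatorial conditions on an enumerated cover are easy to state in code. A small sketch of my own (not from the research itself), with cover elements modeled as finite sets of sample points:

```python
from itertools import combinations

def is_chain_cover(cover):
    """C_p and C_q are disjoint iff |p - q| > 1."""
    return all(
        (not cover[p] & cover[q]) == (abs(p - q) > 1)
        for p, q in combinations(range(len(cover)), 2)
    )

def is_circular_chain_cover(cover):
    """C_p and C_q are disjoint iff n - 1 > |p - q| > 1: a chain whose
    first and last links also intersect."""
    n = len(cover)
    return all(
        (not cover[p] & cover[q]) == (n - 1 > abs(p - q) > 1)
        for p, q in combinations(range(n), 2)
    )

chain = [{1, 2}, {2, 3}, {3, 4}, {4, 5}]
circle = [{1, 2}, {2, 3}, {3, 4}, {4, 1}]
assert is_chain_cover(chain) and not is_circular_chain_cover(chain)
assert is_circular_chain_cover(circle) and not is_chain_cover(circle)
```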
A metric space $X$ is path connected if given any two points $a, b \in X$, there is a path between them. In other words, there is a continuous function $\varphi: [0,1]\rightarrow X$ such that $\varphi(0)=a, \varphi(1)=b$.
A metric space is locally path connected if given any point in the space, there is a neighbourhood around it which is path connected.
Note that these imply connectedness and local connectedness, respectively.
A continuum is indecomposable if it is not the union of any two of its proper subcontinua. Examples include the Buckethandle set and the Pseudo-arc.
Problem
I'm trying to prove a result about generalised inverse limits, and I've encountered a problem dealing with path connected and locally path connected circle-like continua. I was simply going to try and prove results directly from these facts I know, but this led to a different question: Are there any "weird" circle-like path connected and locally path connected continua?
Conjecture
Every locally path connected and path connected circle-like continuum is homeomorphic to $S^1$, the circle.
If this is the case, the result I'm trying to prove would get pretty trivial. It's definitely closely related to whether or not every locally path connected and path connected arc-like continuum is an arc (a metric space homeomorphic to the closed unit interval).
The general question I'm asking here on MSE is "Are there any pathological locally path connected and path connected plane continua?" Every strange continuum I know has properties resulting from "infinite things" in such a way that they lose path connectedness and/or locally path connectedness. |
Mathematics > Differential Geometry

Title: Spectral sections, twisted rho invariants and positive scalar curvature
(Submitted on 23 Sep 2013 (v1), last revised 25 Apr 2014 (this version, v3))
Abstract: We had previously defined the rho invariant $\rho_{spin}(Y,E,H, g)$ for the twisted Dirac operator $\not\partial^E_H$ on a closed odd dimensional Riemannian spin manifold $(Y, g)$, acting on sections of a flat hermitian vector bundle $E$ over $Y$, where $H = \sum i^{j+1} H_{2j+1} $ is an odd-degree differential form on $Y$ and $H_{2j+1}$ is a real-valued differential form of degree ${2j+1}$. Here we show that it is a conformal invariant of the pair $(H, g)$. In this paper we express the defect integer $\rho_{spin}(Y,E,H, g) - \rho_{spin}(Y,E, g)$ in terms of spectral flows and prove that $\rho_{spin}(Y,E,H, g)\in \mathbb Q$, whenever $g$ is a Riemannian metric of positive scalar curvature. In addition, if the maximal Baum-Connes conjecture holds for $\pi_1(Y)$ (which is assumed to be torsion-free), then we show that $\rho_{spin}(Y,E,H, rg) =0$ for all $r\gg 0$, significantly generalizing our earlier results. These results are proved using the Bismut-Weitzenb\"ock formula, a scaling trick, the technique of noncommutative spectral sections, and the Higson-Roe approach.

Submission history
From: Varghese Mathai
[v1] Mon, 23 Sep 2013 09:48:25 GMT (23kb)
[v2] Sat, 2 Nov 2013 20:05:59 GMT (326kb,D)
[v3] Fri, 25 Apr 2014 20:53:53 GMT (313kb,D)
Warning: Boring technical stuff that’s only here because I needed it for the model in the Helicopter Money paper, and there are no other good references online.
All the existing resources on the internet that I’ve found are either vague or inconsistent with their σ/ρ notation (where \(ρ=\frac{σ-1}{σ}\) and σ is the elasticity of substitution, which makes the exponents in the utility function look cleaner, though the exponents in the demand function will be correspondingly messier), or don’t derive it with the coefficients. Almost none go step by step (with the exception of this video) or carry through the coefficients with exponents, and literally none derive a function for more than two goods.
We’re going to do all of these: a fully general derivation of demand functions from an n-good CES utility function, carrying through the actual elasticity of substitution as a parameter. I’ll use sum notation throughout, which you can easily expand to a definite number of goods. It’s worth noting, though, that the elasticity of substitution has to be the same between all pairs of goods; otherwise there’s no fully general form.
We start by writing our CES function this way, raising our coefficient to the \(1/σ\) and summing over a set of n goods. You might wonder why we’re raising our coefficient to an exponent too. It’ll make our demand function slightly cleaner in the end, and since it’s a parameter, you can just define \(α_n = β_n^{1/σ}\) and write the coefficient that way if you prefer.
(1) $$U=\left(\sum_n β_n^{1/σ}G_n^\frac{σ-1}{σ} \right)^\frac{σ}{σ-1} $$
A function of this form means that the elasticity of substitution between any pair of goods is
σ. Our budget constraint, then, is
(2) $$I=\sum_nP_nG_n$$
So we want to maximize (1) subject to (2). We set up our Lagrangian and differentiate with respect to each good plus λ, which gives us n+1 first-order conditions.
(3) $$\mathcal{L} = \left(\sum_n β_n^{1/σ}G_n^\frac{σ-1}{σ} \right)^\frac{σ}{σ-1} + λ\left(I-\sum_nP_nG_n\right)$$
Since both parts of the Lagrangian are sums, and the parameters of the various goods are all siloed into their own terms, this is actually fairly straightforward to differentiate with respect to any G variable. So we’ll pick any two goods, say a and b, and differentiate with respect to \(G_a\) and \(G_b\).
To get the derivative of the first part of the Lagrangian, remember the chain rule for differentiating f(g(x)): \(\frac{∂f}{∂x} = \frac{∂f}{∂g}\frac{∂g}{∂x}\). Our f(g) will be \(g^\frac{σ}{σ-1}\), and our g(x) will be the sum inside the parentheses. Carrying down the \(\frac{σ}{σ-1}\) exponent from \(\frac{∂f}{∂g}\) cancels the \(\frac{σ-1}{σ}\) exponent carried down from the \(G_n\) in \(\frac{∂g}{∂x}\). This gives us:
(4) $$\frac{∂\mathcal{L}}{∂G_a} = \left(\frac{β_a}{G_a}\right)^\frac{1}{σ} \left(\sum_n β_n^{1/σ}G_n^\frac{σ-1}{σ}\right)^\frac{1}{σ-1} - \lambda P_a = 0$$
(5) $$\frac{∂\mathcal{L}}{∂ G_b} = \left(\frac{β_b}{G_b}\right)^\frac{1}{σ} \left(\sum_n β_n^{1/σ}G_n^\frac{σ-1}{σ}\right)^\frac{1}{σ-1} - \lambda P_b = 0$$
(6) $$\frac{∂\mathcal{L}}{∂ \lambda} = I-\sum_nP_nG_n = 0$$
Rearranging (4) and (5), \(λP_a\) and \(λP_b\) each equal \((β/G)^{1/σ}\) times the same sum factor. From here, we want the relative price of \(P_a\) to \(P_b\): dividing (4) by (5) cancels both \(λ\) and the sum, leaving
(7) $$\frac{P_a}{P_b} = \frac{(β_a/G_a)^\frac{1}{σ}}{(β_b/G_b)^\frac{1}{σ}} = \left(\frac{β_a G_b}{β_b G_a}\right)^\frac{1}{σ}$$
The right hand side is the marginal rate of substitution between \(G_a\) and \(G_b\).
Solving for \(G_b\), we have
(8) $$G_b = \frac{β_bG_a}{β_a}\left(\frac{P_a}{P_b}\right)^σ$$
And similarly for \(G_c\), etc. What we’re aiming to do here is to replace every \(G\) in the budget constraint with an expression in \(G_a\) and prices.
So, substituting (8) and its brothers back into the budget constraint (2) gives us
(9) $$I=P_aG_a + \sum_{n\neq a}P_n\frac{β_nG_a}{β_a}\left(\frac{P_a}{P_n}\right)^σ$$
$$= G_a \sum_{n}\frac{β_n}{β_a}P_a^σ P_n^{1-σ}$$
We’ve substituted every \(G\) term of the sum, except the original \(G_a\) term, with an expression in terms of \(G_a\). (In the second line, the \(P_aG_a\) term is folded into the sum: the \(n=a\) term reduces to exactly \(P_aG_a\).)
All that remains is to solve for \(G_a\).
(10) $$ G_a = \frac{I}{\sum_{n}\frac{β_n}{β_a}P_a^σ P_n^{1-σ}} $$
$$= \frac{I P_a^{-σ}}{\sum_{n}\frac{β_n}{β_a}P_n^{1-σ}}$$
So there’s our demand function for \(G_a\), and by symmetry the same formula (with relabeled indices) gives the demand for every other good.
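As a sanity check on (10), we can maximize the utility (1) numerically subject to the budget (2) and compare against the closed form. This is only a sketch: the income, prices, preference weights, and σ below are made-up numbers, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.optimize import minimize

def ces_demand(I, P, beta, sigma):
    """Closed-form demand (10): G_a = I * P_a^(-sigma) / sum_n (beta_n/beta_a) P_n^(1-sigma)."""
    P, beta = np.asarray(P, float), np.asarray(beta, float)
    return np.array([I * P[a]**(-sigma) / np.sum((beta / beta[a]) * P**(1 - sigma))
                     for a in range(len(P))])

def ces_utility(G, beta, sigma):
    """CES utility (1), with coefficients raised to 1/sigma."""
    return np.sum(beta**(1 / sigma) * G**((sigma - 1) / sigma))**(sigma / (sigma - 1))

I, sigma = 100.0, 1.5                 # made-up income and elasticity
P = np.array([1.0, 2.0, 4.0])         # made-up prices
beta = np.array([0.5, 0.3, 0.2])      # made-up preference weights

G_closed = ces_demand(I, P, beta, sigma)

# Maximize (1) subject to the budget constraint (2), starting from a feasible point.
res = minimize(lambda G: -ces_utility(G, beta, sigma),
               x0=np.full(3, I / P.sum()),
               method='SLSQP',
               bounds=[(1e-9, None)] * 3,
               constraints={'type': 'eq', 'fun': lambda G: I - P @ G})
```

The closed form also exhausts the budget exactly, which is a quick independent check that the algebra in (9)-(10) went through.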
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class rather than do well in it
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to be doing more serious things, and that seems to be becoming a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values on its diagonal. What will that mean when it multiplies a given vector (x,y), and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
Generally, speaking, given. $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
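The shortcut suggested here can also be spot-checked mechanically with exact rational arithmetic. A minimal sketch (the choice δ = 5 is mine; any non-square rational works the same way):

```python
from fractions import Fraction as F

DELTA = F(5)  # δ for the field Q(√δ); elements are pairs (a, b) ~ a + b√δ

def mul(p, q):
    """(a + b√δ) ⊗ (c + d√δ) = (ac + bdδ) + (bc + ad)√δ, the rule quoted above."""
    (a, b), (c, d) = p, q
    return (a * c + b * d * DELTA, b * c + a * d)

# spot-check (α ⊗ β) ⊗ γ == α ⊗ (β ⊗ γ) over a few rational pairs
elems = [(F(1), F(2)), (F(-3, 2), F(1, 3)), (F(0), F(7))]
checks = all(mul(mul(p, q), r) == mul(p, mul(q, r))
             for p in elems for q in elems for r in elems)
```

Of course a finite spot-check is not a proof, but it catches transcription errors in the multiplication rule before you commit to the full three-variable expansion.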
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extensions is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest field possible
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH, or that imply CH, so if your set of axioms contains one of those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder whether one can show that is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put another way, an equivalent formulation of that (possibly open) problem is:
> Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = \prod_{k=0}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic
Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows:
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 such that $0<\left|x-\frac{p}{q}\right|<\frac{1}{q^n}$.
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test
therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework
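To make the finitist point concrete: every partial sum in that construction is an exact rational, computable with no completed infinity in sight. A sketch (the base b = 10 is the classic Liouville choice, picked here for illustration):

```python
from fractions import Fraction

def liouville_partial(b, M):
    """Exact partial sum  sum_{k=1}^{M} 1 / b^(k!), as a rational number."""
    total, fact = Fraction(0), 1
    for k in range(1, M + 1):
        fact *= k                      # k! built up incrementally
        total += Fraction(1, b ** fact)
    return total

# the partial sums increase monotonically and stay bounded,
# mirroring the convergence argument above
vals = [liouville_partial(10, M) for M in range(1, 6)]
```

Each `vals[M]` is a finite object; only the claim that the sequence has a limit invokes anything beyond finite arithmetic.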
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $f'(a) = \lim_{x \to a} f'(x)$.
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book before I can comment
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
> are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion |
The reason is that you want to space the raised-cosine pulses by the symbol interval, $T_p$. Consider the signal you want to create: $$s(t) = \sum_k a_k p(t-kT_p),$$ where $p(t)$ is your prototype raised-cosine pulse, and $a_k$ are the symbols. Notice that pulses are spaced $T_p$ seconds.
To re-create this with a filter, you can write the equation for the signal as follows: $$s(t)=\sum_k p(t) \ast a_k\delta(t-kT_p),$$ where $p(t)$ is interpreted as the filter's impulse response. Notice that the impulses are spaced $T_p$ seconds, and this results in the raised-cosine pulses being spaced by the same amount.
If you want to simulate this process in discrete time in Matlab or similar tools, then you can create a vector p for $p(t)$ and a vector d for $a_k\delta(t-kT_p)$. Let's say your symbols are stored in vector a. Then you can create d by upsampling a in such a way that there are $T_p$ seconds between symbols:
d = [ a(1), 0, ..., 0, a(2), 0, ..., 0, a(3), 0, ... ]
You need to insert as many zeros as necessary, according to your sampling frequency, to have $T_p$ seconds between symbols.
Then, you can generate the vector s corresponding to $s(t)$ with s = conv(p,d).
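The same upsample-and-convolve recipe in Python/NumPy, as a sketch: a length-`sps` rectangular pulse stands in for the raised cosine just to keep the example short, and `sps` plays the role of $T_p$ in samples.

```python
import numpy as np

def shape_pulses(symbols, pulse, sps):
    """Place one impulse per symbol, sps samples apart, then filter with the pulse."""
    d = np.zeros(len(symbols) * sps)
    d[::sps] = symbols                 # a_k * delta(t - k*T_p), with T_p = sps samples
    return np.convolve(pulse, d)       # s(t) = p(t) convolved with the impulse train

sps = 4
symbols = np.array([1.0, -1.0, 1.0])
pulse = np.ones(sps)                   # stand-in prototype pulse p(t)
s = shape_pulses(symbols, pulse, sps)
```

Swapping `pulse` for a sampled raised cosine gives the actual transmit waveform; the spacing logic is identical.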
Unfortunately, indicator constraints are only supported for linear constraints. The CPLEX docs say: The constraint must be linear; a quadratic constraint is not allowed to have an indicator constraint. A lazy constraint cannot have an indicator constraint. A user-defined cut cannot have an indicator constraint. Gurobi mentions: An indicator constraint \(y=f \rightarrow a^Tx \le b\) states that if the binary variable \(y\) has the value \(f\in \{0,1\}\) in a given solution then the linear constraint \(a^Tx \le b\) has to be satisfied. Xpress and SCIP say similar things: indicator constraints are only for linear constraints.
Somehow, I expected that I could use a (convex) quadratic constraint with an indicator. As none of the solvers has this implemented, I suspect there is a good reason why this is not possible.
Big-M and SOS1
Finally, as stated in the comments, an alternative formulation would be \[\begin{align} & d'_{i,k} \ge \sum_j \left( \bar{x}_{k,j} - p_{i,j} \right)^2 \\ & x_{i,k}=1 \Rightarrow d_{i,k} \ge d'_{i,k} \end{align}\] This comes at the expense of extra variables and constraints. Basically we added one indirection. With this formulation, we have a linear indicator constraint and a separate convex quadratic inequality. I assume the presolver will not presolve this away.
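As a plain-Python illustration (not a solver model) of how the indicator in that reformulation behaves once linearized with big-M, where \(M\) is a hypothetical valid upper bound on \(d'_{i,k} - d_{i,k}\):

```python
def indicator_bigM(x, d, xbar, p, M):
    """Big-M form of:  x = 1  =>  d >= d',  with  d' = sum_j (xbar_j - p_j)^2.

    When x = 1 this enforces d >= d'; when x = 0 it is vacuous,
    provided M is a valid upper bound on d' - d.
    """
    dprime = sum((xj - pj) ** 2 for xj, pj in zip(xbar, p))
    return d >= dprime - M * (1 - x)

# with xbar = (1, 2) and p = (0, 0), the quadratic term d' equals 5
ok_active = indicator_bigM(x=1, d=6.0, xbar=(1, 2), p=(0, 0), M=100.0)
bad_active = indicator_bigM(x=1, d=4.0, xbar=(1, 2), p=(0, 0), M=100.0)
ok_relaxed = indicator_bigM(x=0, d=0.0, xbar=(1, 2), p=(0, 0), M=100.0)
```

In a real model the solver would of course handle this; the point is only that the quadratic sits in its own convex constraint while the indicator (or its big-M surrogate) stays linear.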
Error Messages

GAMS has poor support for indicators in general, but is doing really poorly here. It does not give an error message, but just gives a wrong solution. This is really bad (I reported this, so it will be fixed). AMPL gives an error message that seems to come from outer space. CPLEX 12.9.0.0: logical constraint _slogcon[1] is not an indicator constraint. Let's blame the slogcons (???). Obviously, I have no constraint (or any other symbol) called slogcon. Cplex's interactive optimizer is doing a better job: CPLEX Error 1605: Line 5: Illegal quadratic term in a constraint. Cplex's OPL has a difficult time understanding the (convex) quadratic indicator constraint. It says: *** FATAL[ENGINE_002]: Exception from IBM ILOG CPLEX: CPLEX Error 5002: 'q1' is not convex. Obviously the constraint is convex, so there is a problem in how OPL generates the constraint (i.e. this is a bug). The error message just does not make sense. The message is also bad: I don't have a `q1` in the model.
A better message would be something like:
Equation `e2`: quadratic terms in an indicator constraint are not supported. Please reformulate this constraint.
Basically you want an error message to describe the problem in a way that is understandable (even when the user is slightly panicked). And secondly, it is a good idea to tell the user what to do now. In this case you need to reformulate (this tells the user there is no need to search for some Cplex option that can help here). And, never, never mention the slogcons.
Of course, it would be even better if solvers would support quadratic indicator constraints. In design there is this concept of orthogonality: it is best to have as few exceptions as possible.
Group of Tangent-Space Automorphism Fields
Given a differentiable manifold \(U\) and a differentiable map \(\Phi: U \rightarrow M\) to a differentiable manifold \(M\) (possibly \(U = M\) and \(\Phi=\mathrm{Id}_M\)), the group of tangent-space automorphism fields associated with \(U\) and \(\Phi\) is the general linear group \(\mathrm{GL}(\mathfrak{X}(U,\Phi))\) of the module \(\mathfrak{X}(U,\Phi)\) of vector fields along \(U\) with values on \(M\supset \Phi(U)\) (see VectorFieldModule). Note that \(\mathfrak{X}(U, \Phi)\) is a module over \(C^k(U)\), the algebra of differentiable scalar fields on \(U\). Elements of \(\mathrm{GL}(\mathfrak{X}(U, \Phi))\) are fields along \(U\) of automorphisms of tangent spaces to \(M\).
AUTHORS:
Eric Gourgoulhon (2015): initial version Travis Scrimshaw (2016): review tweaks
REFERENCES:
Chap. 15 of [God1968]

class sage.manifolds.differentiable.automorphismfield_group.AutomorphismFieldGroup(vector_field_module)
General linear group of the module of vector fields along a differentiable manifold \(U\) with values on a differentiable manifold \(M\).
Given a differentiable manifold \(U\) and a differentiable map \(\Phi: U \rightarrow M\) to a differentiable manifold \(M\) (possibly \(U = M\) and \(\Phi = \mathrm{Id}_M\)), the
group of tangent-space automorphism fields associated with \(U\) and \(\Phi\) is the general linear group \(\mathrm{GL}(\mathfrak{X}(U,\Phi))\) of the module \(\mathfrak{X}(U,\Phi)\) of vector fields along \(U\) with values on \(M \supset \Phi(U)\) (see VectorFieldModule). Note that \(\mathfrak{X}(U,\Phi)\) is a module over \(C^k(U)\), the algebra of differentiable scalar fields on \(U\). Elements of \(\mathrm{GL}(\mathfrak{X}(U,\Phi))\) are fields along \(U\) of automorphisms of tangent spaces to \(M\).
Note
If \(M\) is parallelizable, then AutomorphismFieldParalGroup must be used instead.
INPUT:
vector_field_module – VectorFieldModule; module \(\mathfrak{X}(U,\Phi)\) of vector fields along \(U\) with values on \(M\)
EXAMPLES:
Group of tangent-space automorphism fields of the 2-sphere:
sage: M = Manifold(2, 'M') # the 2-dimensional sphere S^2 sage: U = M.open_subset('U') # complement of the North pole sage: c_xy.<x,y> = U.chart() # stereographic coordinates from the North pole sage: V = M.open_subset('V') # complement of the South pole sage: c_uv.<u,v> = V.chart() # stereographic coordinates from the South pole sage: M.declare_union(U,V) # S^2 is the union of U and V sage: xy_to_uv = c_xy.transition_map(c_uv, (x/(x^2+y^2), y/(x^2+y^2)), ....: intersection_name='W', ....: restrictions1= x^2+y^2!=0, restrictions2= u^2+v^2!=0) sage: uv_to_xy = xy_to_uv.inverse() sage: G = M.automorphism_field_group() ; G General linear group of the Module X(M) of vector fields on the 2-dimensional differentiable manifold M
G is the general linear group of the vector field module \(\mathfrak{X}(M)\):
sage: XM = M.vector_field_module() ; XM Module X(M) of vector fields on the 2-dimensional differentiable manifold M sage: G is XM.general_linear_group() True
G is a non-abelian group:
sage: G.category() Category of groups sage: G in Groups() True sage: G in CommutativeAdditiveGroups() False
The elements of G are tangent-space automorphisms:
sage: a = G.an_element(); a Field of tangent-space automorphisms on the 2-dimensional differentiable manifold M sage: a.parent() is G True sage: a.restrict(U).display() 2 d/dx*dx + 2 d/dy*dy sage: a.restrict(V).display() 2 d/du*du + 2 d/dv*dv
The identity element of the group G:
sage: e = G.one() ; e Field of tangent-space identity maps on the 2-dimensional differentiable manifold M sage: eU = U.default_frame() ; eU Coordinate frame (U, (d/dx,d/dy)) sage: eV = V.default_frame() ; eV Coordinate frame (V, (d/du,d/dv)) sage: e.display(eU) Id = d/dx*dx + d/dy*dy sage: e.display(eV) Id = d/du*du + d/dv*dv
Element

base_module()

Return the vector-field module of which self is the general linear group.
OUTPUT:
EXAMPLES:
Base module of the group of tangent-space automorphism fields of the 2-sphere:
sage: M = Manifold(2, 'M') # the 2-dimensional sphere S^2 sage: U = M.open_subset('U') # complement of the North pole sage: c_xy.<x,y> = U.chart() # stereographic coordinates from the North pole sage: V = M.open_subset('V') # complement of the South pole sage: c_uv.<u,v> = V.chart() # stereographic coordinates from the South pole sage: M.declare_union(U,V) # S^2 is the union of U and V sage: xy_to_uv = c_xy.transition_map(c_uv, (x/(x^2+y^2), y/(x^2+y^2)), ....: intersection_name='W', restrictions1= x^2+y^2!=0, ....: restrictions2= u^2+v^2!=0) sage: uv_to_xy = xy_to_uv.inverse() sage: G = M.automorphism_field_group() sage: G.base_module() Module X(M) of vector fields on the 2-dimensional differentiable manifold M sage: G.base_module() is M.vector_field_module() True
one()

Return the identity element of self.
The group identity element is the field of tangent-space identity maps.
OUTPUT:
AutomorphismField representing the identity element
EXAMPLES:
Identity element of the group of tangent-space automorphism fields of the 2-sphere:
sage: M = Manifold(2, 'M') # the 2-dimensional sphere S^2 sage: U = M.open_subset('U') # complement of the North pole sage: c_xy.<x,y> = U.chart() # stereographic coordinates from the North pole sage: V = M.open_subset('V') # complement of the South pole sage: c_uv.<u,v> = V.chart() # stereographic coordinates from the South pole sage: M.declare_union(U,V) # S^2 is the union of U and V sage: xy_to_uv = c_xy.transition_map(c_uv, (x/(x^2+y^2), y/(x^2+y^2)), ....: intersection_name='W', restrictions1= x^2+y^2!=0, ....: restrictions2= u^2+v^2!=0) sage: uv_to_xy = xy_to_uv.inverse() sage: G = M.automorphism_field_group() sage: G.one() Field of tangent-space identity maps on the 2-dimensional differentiable manifold M sage: G.one().restrict(U)[:] [1 0] [0 1] sage: G.one().restrict(V)[:] [1 0] [0 1] class
sage.manifolds.differentiable.automorphismfield_group.AutomorphismFieldParalGroup(vector_field_module)
General linear group of the module of vector fields along a differentiable manifold \(U\) with values on a parallelizable manifold \(M\).
Given a differentiable manifold \(U\) and a differentiable map \(\Phi: U \rightarrow M\) to a parallelizable manifold \(M\) (possibly \(U = M\) and \(\Phi = \mathrm{Id}_M\)), the
group of tangent-space automorphism fields associated with \(U\) and \(\Phi\) is the general linear group \(\mathrm{GL}(\mathfrak{X}(U, \Phi))\) of the module \(\mathfrak{X}(U, \Phi)\) of vector fields along \(U\) with values on \(M \supset \Phi(U)\) (see VectorFieldFreeModule). Note that \(\mathfrak{X}(U, \Phi)\) is a free module over \(C^k(U)\), the algebra of differentiable scalar fields on \(U\). Elements of \(\mathrm{GL}(\mathfrak{X}(U, \Phi))\) are fields along \(U\) of automorphisms of tangent spaces to \(M\).
Note
If \(M\) is not parallelizable, the class AutomorphismFieldGroup must be used instead.
INPUT:
vector_field_module – VectorFieldFreeModule; free module \(\mathfrak{X}(U,\Phi)\) of vector fields along \(U\) with values on \(M\)
EXAMPLES:
Group of tangent-space automorphism fields of a 2-dimensional parallelizable manifold:
sage: M = Manifold(2, 'M') sage: X.<x,y> = M.chart() sage: XM = M.vector_field_module() ; XM Free module X(M) of vector fields on the 2-dimensional differentiable manifold M sage: G = M.automorphism_field_group(); G General linear group of the Free module X(M) of vector fields on the 2-dimensional differentiable manifold M sage: latex(G) \mathrm{GL}\left( \mathfrak{X}\left(M\right) \right)
G is nothing but the general linear group of the module \(\mathfrak{X}(M)\):
sage: G is XM.general_linear_group() True
G is a group:
sage: G.category() Category of groups sage: G in Groups() True
It is not an abelian group:
sage: G in CommutativeAdditiveGroups() False
The elements of G are tangent-space automorphisms:
sage: G.Element <class 'sage.manifolds.differentiable.automorphismfield.AutomorphismFieldParal'> sage: a = G.an_element() ; a Field of tangent-space automorphisms on the 2-dimensional differentiable manifold M sage: a.parent() is G True
As automorphisms of \(\mathfrak{X}(M)\), the elements of G map a vector field to a vector field:
sage: v = XM.an_element() ; v Vector field on the 2-dimensional differentiable manifold M sage: v.display() 2 d/dx + 2 d/dy sage: a(v) Vector field on the 2-dimensional differentiable manifold M sage: a(v).display() 2 d/dx - 2 d/dy
Indeed the matrix of a with respect to the frame \((\partial_x, \partial_y)\) is:
sage: a[X.frame(),:] [ 1 0] [ 0 -1]
The elements of G can also be considered as tensor fields of type \((1,1)\):
sage: a.tensor_type() (1, 1) sage: a.tensor_rank() 2 sage: a.domain() 2-dimensional differentiable manifold M sage: a.display() d/dx*dx - d/dy*dy
The identity element of the group G is:
sage: id = G.one() ; id Field of tangent-space identity maps on the 2-dimensional differentiable manifold M sage: id*a == a True sage: a*id == a True sage: a*a^(-1) == id True sage: a^(-1)*a == id True
Construction of an element by providing its components with respect to the manifold’s default frame (frame associated to the coordinates \((x,y)\)):
sage: b = G([[1+x^2,0], [0,1+y^2]]) ; b Field of tangent-space automorphisms on the 2-dimensional differentiable manifold M sage: b.display() (x^2 + 1) d/dx*dx + (y^2 + 1) d/dy*dy sage: (~b).display() # the inverse automorphism 1/(x^2 + 1) d/dx*dx + 1/(y^2 + 1) d/dy*dy
We check the group law on these elements:
sage: (a*b)^(-1) == b^(-1) * a^(-1) True
Invertible tensor fields of type \((1,1)\) can be converted to elements of G:
sage: t = M.tensor_field(1, 1, name='t') sage: t[:] = [[1+exp(y), x*y], [0, 1+x^2]] sage: t1 = G(t) ; t1 Field of tangent-space automorphisms t on the 2-dimensional differentiable manifold M sage: t1 in G True sage: t1.display() t = (e^y + 1) d/dx*dx + x*y d/dx*dy + (x^2 + 1) d/dy*dy sage: t1^(-1) Field of tangent-space automorphisms t^(-1) on the 2-dimensional differentiable manifold M sage: (t1^(-1)).display() t^(-1) = 1/(e^y + 1) d/dx*dx - x*y/(x^2 + (x^2 + 1)*e^y + 1) d/dx*dy + 1/(x^2 + 1) d/dy*dy
Since any automorphism field can be considered as a tensor field of type \((1,1)\) on M, there is a coercion map from G to the module \(T^{(1,1)}(M)\) of type-\((1,1)\) tensor fields:
sage: T11 = M.tensor_field_module((1,1)) ; T11 Free module T^(1,1)(M) of type-(1,1) tensors fields on the 2-dimensional differentiable manifold M sage: T11.has_coerce_map_from(G) True
An explicit call of this coercion map is:
sage: tt = T11(t1) ; tt Tensor field t of type (1,1) on the 2-dimensional differentiable manifold M sage: tt == t True
An implicit call of the coercion map is performed to subtract an element of G from an element of \(T^{(1,1)}(M)\):
sage: s = t - t1 ; s Tensor field t-t of type (1,1) on the 2-dimensional differentiable manifold M sage: s.parent() is T11 True sage: s.display() t-t = 0
as well as for the reverse operation:
sage: s = t1 - t ; s Tensor field t-t of type (1,1) on the 2-dimensional differentiable manifold M sage: s.display() t-t = 0
\(m\): Total invested capital
\(m_0\): Assets at \(t=0\)
\(I\): Rate of income
\(E\): Rate of expenses
\(r\): Rate of return on invested capital
\(r_w\): Rate of withdrawal in retirement
\(t_s\): Saving time
\(S\): Saving rate
The three most harmful addictions are heroin, carbohydrates, and a monthly salary. - Nassim Taleb in The Bed of Procrustes
The acronym FIRE (financial independence, retire early) is the name of a small movement whose members try to save large fractions of their income so that investment returns will cover their expenses. Philosophical reasons vary, but generally these people are not hoping to stop working completely; they are hoping to remove compensation from the decision-making process about what kind of work to do and whom to do it with. Articles in this section will discuss how to think about saving and investing.
I first read about financial independence on the Mr. Money Mustache blog. I always had a vague concept of it in mind as a goal but reading about it in this article made me want to give his ideas an explicit mathematical basis. That project began a series of inquiries by friends and family which eventually led to this blog.
At the end of the day, success in personal finance is purely behavior-based. A person's net worth (the sum of their bank accounts, assets, and credit liabilities) is a merciless, indifferent answer to one question: "Over a lifetime thus far, how much more was earned or received than was spent?" Playing with these equations is useful for setting goals but in the absence of actions to increase income and/or decrease spending, it does nothing to build wealth. The goal of presenting these equations is to inspire those actions among the kind of people who need more than handwaving arguments to be convinced.
As with loans, we begin with the familiar first order ODE.
[Rate of change of capital] \(=\) [Rate of capital accumulation] \(-\) [Rate of capital depletion]

\[ \frac{\partial m}{\partial t} = rm + I - E \]
Separate and integrate the first order ordinary differential equation, remembering the limits: \(m(0) = m_0\) and \(m(t_s) = E/r_w\). The end condition states that, for a withdrawal rate \(r_w\), we are looking for the interest on \(m\) to cover our expenses: \(m r_w = E\).
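Carrying out the separation explicitly (a reconstruction; the blog's original displayed equations did not survive extraction):

\[
\int_{m_0}^{E/r_w} \frac{dm}{rm + I - E} = \int_0^{t_s} dt
\quad\Longrightarrow\quad
t_s = \frac{1}{r}\,\ln\!\left(\frac{rE/r_w + I - E}{r\,m_0 + I - E}\right).
\]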
Now define the saving rate \(S = \frac{I-E}{I}\) and recognize \(1-S = \frac{E}{I}\).
And consider common simplifications.
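Substituting \(E = (1-S)I\) into the integrated solution and dividing through by \(I\) gives (again a reconstruction of the lost displayed equations), together with the most common simplification \(r = r_w\), \(m_0 = 0\):

\[
t_s = \frac{1}{r}\,\ln\!\left(\frac{(r/r_w)(1-S) + S}{r\,m_0/I + S}\right),
\qquad\text{so with } r = r_w,\; m_0 = 0:\quad
t_s = \frac{1}{r}\,\ln\frac{1}{S}.
\]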
This is how long it takes a person to save for retirement starting from some multiple of their income (\(\frac{m_0}{I}\)). There is nothing that can change the indisputable fact that the number of years a person must work for retirement is a function of how aggressively they can save their income and how productively they can invest those savings. You can suppose other factors, windfalls of various kinds either as one time events or as unreasonable increases in income later in life, but assuming consistent income (which is by no means guaranteed to rise) and expenses (which are all but guaranteed to rise in healthcare costs alone), this is how long it takes to be free of the need to work for money.
Use reasonable estimates of the rate of return on investment. There is a considerable difference between the returns in a good year (which can exceed 40%) and the expected returns over the long term (about 7-9%, depending on optimism). It is also important to be skeptical of those who promise unreasonable returns. Bernie Madoff promised 10%-12%, so let that be an upper bound: a guaranteed 10% before inflation is where people start going to jail for life. Inflation averages 1-3% per year depending on who is asked, and the stock market returns 6-8% per year. We can then conservatively say 4-5% is our safe withdrawal rate, but you can do the calculation with any rate you like.
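As a numerical companion, here is a sketch of the saving-time formula obtained by integrating the ODE above (the function name and defaults are mine, not the blog's):

```python
import math

def saving_time(S, r, r_w=None, m0_over_I=0.0):
    """Years until investment returns cover expenses.

    S: saving rate (0 < S <= 1); r: annual rate of return;
    r_w: withdrawal rate (defaults to r); m0_over_I: starting assets
    as a multiple of annual income.  Implements the integrated ODE
    dm/dt = r*m + I - E with end condition m = E / r_w.
    """
    if r_w is None:
        r_w = r
    numerator = (r / r_w) * (1.0 - S) + S
    denominator = m0_over_I * r + S
    return math.log(numerator / denominator) / r

# With r = r_w = 5% and no starting assets, a 50% saver
# needs ln(2)/0.05, roughly 13.9 years.
print(round(saving_time(0.5, 0.05), 1))
```

Sweeping `S` from 0.1 to 0.9 with this function reproduces the hyperbolic shape discussed below: tiny saving rates imply multi-decade horizons, while high rates shorten the horizon almost linearly.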
Bloggers in the early retirement world frequently own rental properties, which can produce returns in the 10-20% range. This happens in favorable circumstances, but like all high-reward strategies it is risky (there is no shortage of testimonials to that fact: the famed Mr. Money Mustache began early retirement with a $200,000 loss on an investment in real estate).
Let us reconsider the expression for the saving time \(t_s\).
For all variations shown, in the limit as \(S \rightarrow 0\) the saving time diverges, \(t_s \rightarrow \infty\): a person who saves nothing never accumulates capital.
In the limit as \(S \rightarrow 1\), things change depending on what simplifications are available.
The derivative, \(\frac{dt_s}{dS} = -\frac{1}{r_w S}\) in the simplified case \(r = r_w\), \(m_0 = 0\), is plotted on the graph above.
Notice that when the saving rate is small (near 0), small adjustments result in radical changes in the length of saving time. At high saving rates, incremental changes to \(S\) result in roughly linear changes in the saving time. If that fails to persuade low savers to change their habits, no logic-based argument will succeed.
Alternatively, if you want to solve for your saving rate given a certain retirement goal (\(t_s\) years away), we can solve for \(S\).
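In the simplified case \(r = r_w\), \(m_0 = 0\), the saving time is \(t_s = \frac{1}{r_w}\ln(1/S)\), which inverts to (a reconstruction, since the blog's displayed equation did not survive extraction):

\[
S = e^{-r_w t_s}.
\]

For example, a 5% withdrawal rate and a 20-year horizon require \(S = e^{-1} \approx 37\%\) of income saved.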
This equation may be interpreted graphically by swapping the x and y axes on the figure above. If your goal is to retire in \(t_s\) years at a target expense level \(E\), use the equation above to solve for the saving rate and then solve for \(I\).
IWOTA 2019
International Workshop on
Operator Theory and its Applications
As was discovered by A. Olofsson and O. Giselsson, the shift operator $S_n$ on the standard weighted Bergman space $A_n$ satisfies the identity $$(S_n^*S_n)^{-1}=\sum_{k=0}^{n-1}(-1)^k\left(\begin{array}{c}n \\ k+1\end{array}\right)S_n^kS_n^{*k},$$ which, under the extra pureness assumption, characterizes the Bergman shift up to unitary similarity.
We will discuss an extension of the Olofsson–Giselsson identity to the non-commutative setting of weighted Bergman Fock spaces and a related characterization of (right) shift operator tuples on these spaces.
We study jointly quasinormal and spherically quasinormal pairs of commuting operators on Hilbert space, as well as their powers. We first prove that, up to a constant multiple, the only jointly quasinormal $2$-variable weighted shift is the Helton-Howe shift. Second, we show that a left invertible subnormal operator $T$ whose square $T^2$ is quasinormal must be quasinormal. Third, we generalize a characterization of quasinormality for subnormal operators in terms of their normal extensions to the case of commuting subnormal $n$-tuples.
Fourth, we show that if a $2$-variable weighted shift $W_{(\alpha,\beta)}$ and its powers $W_{(\alpha,\beta)}^{(2,1)}$ and $W_{(\alpha,\beta)}^{(1,2)}$ are all spherically quasinormal, then $W_{(\alpha,\beta)}$ may not necessarily be jointly quasinormal. Moreover, it is possible for both $W_{(\alpha,\beta)}^{(2,1)}$ and $W_{(\alpha,\beta)}^{(1,2)}$ to be spherically quasinormal without $W_{(\alpha,\beta)}$ being spherically quasinormal. Finally, we prove that, for $2$-variable weighted shifts, the common fixed points of the toral and spherical Aluthge transforms are jointly quasinormal.
The talk is based on joint work with S. H. Lee and J. Yoon.
It is well known that the Sz.-Nagy dilation theorem and the von Neumann inequality on the unit disk are equivalent, and that this in turn implies a matricial form of the von Neumann inequality holds for matrix valued functions of all dimensions. Ando's theorem implies a similar result for the bidisk and pairs of commuting operators, and Arveson later proved a general principle that a tuple of commuting operators with spectrum in a compact set dilates to a commuting tuple of operators with spectrum on the boundary if and only if a matricial von Neumann inequality holds on the set. Since the complement of the set may have several components, one uses algebras of matrix valued rational functions with poles off of the set, and the dilation is referred to as a rational dilation. The rational dilation problem asks: does a scalar von Neumann inequality suffice for rational dilation to hold? The speaker, with McCullough and Jury, showed that, somewhat surprisingly, the answer is
no on the Neil parabola, a distinguished variety in the bidisk. We discuss here further work with Batzorig Undrakh which illustrates the ubiquity of the failure of rational dilation on distinguished varieties.
We give a survey on some recent results from multivariable operator theory on the unit ball in $\mathbb C^n$ including transfer function realizations of Bergman-inner functions, characterizations of m-shifts by Wold-type decompositions and characterizations of Toeplitz operators with pluriharmonic symbol in terms of higher order Brown-Halmos conditions. A part of the results is based on joint work with Sebastian Langendoerfer.
A subnormal weighted shift $W_\alpha$ with weight sequence $\alpha = (\alpha_n)_{n=0}^\infty$ is infinitely divisible if the weight sequence $\alpha^{(p)} = (\alpha_n^p)_{n=0}^\infty$ yields a subnormal shift for each $p > 0$. We exhibit several new necessary and sufficient conditions for infinite divisibility, produce new examples of subnormal and infinitely divisible shifts, consider Schur flows of shifts, and explore connections with completely hyperexpansive shifts. As well we provide new conditions for subnormality and exhibit an enlightening connection between the $k$-hyponormality ($k=1, 2, \ldots$) and $n$-contractivity ($n=1, 2, \ldots$) conditions.
(Joint work with Raul Curto and Chafiq Benhida)
In the talk we discuss the differential properties of multidimensional homeomorphic/open discrete mappings. Such mappings (multivariable operators) essentially generalize the well-known, customarily investigated classes of mappings such as quasiregular, quasiisometric, Lipschitzian, etc. But in contrast to these known classes, the definition of our mapping class does not involve any analytic restrictions. We also illustrate the regularity properties by several examples and present a collection of related open problems.
The talk is based on joint works with R. Salimov (Institute of Mathematics, Kyiv, Ukraine).
In classical complex analysis, analyticity of a complex function $f$ is equivalent to differentiability of its real and imaginary parts $u$ and $v$, respectively, together with the Cauchy-Riemann equations for the partial derivatives of $u$ and $v$. We extend this result to the context of free noncommutative functions on tuples of matrices of arbitrary size. In this context, the real and imaginary parts become so-called real noncommutative functions, which appeared recently in the context of Löwner's theorem in several noncommutative variables. Additionally, as part of our investigation of real noncommutative functions, we show that real noncommutative functions are in fact noncommutative functions.
A tetra-inner function is a holomorphic map $x=(x_1,x_2,x_3)$ from the unit disc $\mathbb{D}$ to the closed tetrablock $\overline{\mathcal{E}}$, whose boundary values at almost all points of the unit circle $\mathbb{T}$ belong to the distinguished boundary $b\overline{\mathcal{E}}$ of $\overline{\mathcal{E}}$. Here \[ \overline{\mathcal{E}}=\{x\in\mathbb{C}^3 : 1-x_1z-x_2w+x_3zw \neq 0 \quad\text{whenever}\quad |z|<1,\,|w|<1 \}. \]
There is a natural notion of the degree of a rational tetra-inner function $x$; it is simply the topological degree of the continuous map $x|_{\mathbb{T}}$ from $\mathbb{T}$ to $\overline{\mathcal{E}}$. In this talk we give a prescription for the construction of a general rational tetra-inner function of degree $n$. The prescription makes use of a known solution of an interpolation problem for finite Blaschke products of given degree in terms of a Pick matrix formed from the interpolation data. It turns out that a natural choice of data for the construction of a rational tetra-inner function $x=(x_1,x_2,x_3)$ consists of the points in $\overline{\mathbb{D}}$ for which $x_1x_2-x_3=0$ and the values of $x$ at these points.
The talk is based on joint work with my PhD student Hadi Alshammari.
In this note, we give further reverses of the Young inequalities for non-negative real scalars.
Making use of them, some matrix inequalities for the Hilbert-Schmidt norm and the trace norm are deduced.
A classical Julia-Carathéodory theorem states that if there is a sequence tending to $\tau$ in the boundary of a domain $D$ along which the Julia quotient is bounded, then the function $\phi$ can be extended to $\tau$ such that $\phi$ is nontangentially continuous and differentiable at $\tau$ and $\phi(\tau)$ is in the boundary of $\Omega$.
We develop a theory in the case of Pick functions where we consider sequences that approach the boundary in a controlled tangential way, yielding necessary and sufficient conditions for higher order regularity. In this talk, we discuss some of the technical details involved, including amortization of the Julia Quotient, $\gamma$-regularity, and $\gamma$-auguries.
In this talk, we present a generalization of the Beurling–Lax–Halmos-type theorem of McCullough and Trent for reproducing kernel Hilbert spaces whose kernel has a complete Nevanlinna–Pick factor. We also record factorization results for pairs of nested invariant subspaces.
This is joint work with Raphael Clouatre and Michael Hartz.
In this talk we discuss some results on the spectral properties of constrained absolutely continuous commuting row contractions in relation to the constraining ideal. This is joint work with Raphael Clouatre.
Let $N$ be a bounded normal operator on a separable Hilbert space and let $\mu$ stand for its scalar spectral measure. The spectral nature of the perturbation $T=N+K$, where $K$ is a sufficiently "smooth", finite rank compact operator, will be discussed. We are interested in the existence of invariant subspaces, decomposability and other questions.
We introduce the perturbation matrix-valued function of $T$, defined in the whole complex plane, except for a certain thin set, and explain its role. Our main tool is the construction of a certain quotient model of $T$, defined in terms of certain spaces of Cauchy integrals. We discuss the dependence of the answers on geometric properties of $\mu$.
The case when $\mu$ is absolutely continuous with respect to the area measure has been considered in [1] in 1993; this is the case of $\mu$ of "dimension" $2$. A quotient model for $T$, constructed in terms of certain vector-valued Sobolev classes of functions, was established in this work. (In that paper, infinite rank perturbations were also dealt with.) The case of a discrete measure $\mu$ (that is, of a diagonalizable operator $N$) has been studied more recently in a series of papers by C. Foias and his coauthors (see [2]). This can be seen as the case of zero dimension. The case when $\mu$ is of "dimension" $1$ (that is, when $\mu$ behaves like the arc length measure) seems to be the most difficult one, whereas many cases of fractional dimension are easier.
This is a joint work in progress with Mihai Putinar. |
Yes, $\vec \jmath(x,y,z)$ should be defined as $e$ times the Schrödinger probability current, \begin{equation*} \vec \jmath = \frac{e\hbar}{2mi}\left(\Psi^* \nabla\Psi - \left(\nabla \Psi^* \right)\Psi \right) , \quad e\lt 0. \end{equation*} That's possible to see explicitly in the formalism of quantum field theory. A definition via a velocity field, $\vec\jmath = \rho\,\vec v(x,y,z)$, would be no good because "the velocity of the electron at a particular point $(x,y,z)$" isn't well-defined, due to the uncertainty principle (if the position is given, the velocity is not).
One may be puzzled because the expression for $\vec\jmath$ above isn't an operator – it is quadratic in the wave function. But in quantum field theory, it
is an operator – an observable – because it is a function of the field operators $\Psi$.
If we consider non-relativistic quantum mechanics with fixed coordinates of particles and we still want to define $\vec\jmath(x,y,z)$ as a linear operator, an observable, we must appreciate that this operator is only nonzero if the particle is located in the infinitesimal vicinity of the point $(x,y,z)$. So we have $$\rho (x_0,y_0,z_0) = e\,\delta^{(3)}(\hat{\vec r}-\vec r_0) $$ and $$\vec\jmath (x_0,y_0,z_0) = \frac e2\{\delta^{(3)}(\hat{\vec r}-\vec r_0),\frac{\hat{\vec p}}{m} \}$$ I had to write one-half of the anticommutator with the velocity operator because functions of positions and velocities don't commute, but we still need a Hermitian operator.
If there are $N$ charged particles, the operators $\hat{\vec r}$ and $\hat{\vec p}$ acquire an extra index from $1$ to $N$ and $\rho(x_0,y_0,z_0)$ and $\vec\jmath(x_0,y_0,z_0)$ must be written as a sum of the expressions over this index.
One may verify that e.g. for wave packets, the integrals over $\vec r_0$ (some regions) give us what we would expect. |
A geometric way of looking at differential equations
In the literature for the h-principle (for example Gromov's
Partial differential relations or Eliashberg and Mishachev's Introduction to the h-principle), we often see the following (all objects smooth):
Given a fibre bundle $\pi:F\to M$ over some manifold $M$, denote by $F^{(r)}$ the associated $r$-jet bundle. A partial differential relation $\mathcal{R}$ is a subset of $F^{(r)}$.
$\mathcal{R}$ is said to be a partial differential equation if it is a submanifold of $F^{(r)}$ with codimension $\geq 1$.
A solution $\Phi$ to the relation $\mathcal{R}$ is a holonomic section of $F^{(r)}$ (holonomic in the sense that $\Phi = j^r\phi$ for some section $\phi$ of $F$) that lies in $\mathcal{R}$.
This description is very powerful in the context of the topologically motivated techniques of the h-principle. And for partial differential inequalities where $\mathcal{R}$ is an open submanifold, this allows the convenient setting for the Holonomic Approximation Theorem.
A geometric way of looking at differential operators
Here I recall the famous theorem of Peetre.
Let $E\to M$ and $F\to M$ be two vector bundles. Let $D$ be a linear operator mapping sections of $E$ to sections of $F$. Suppose $D$ is support-non-increasing, then $D$ is (locally) a linear differential operator.
where
A linear differential operator $D:\Gamma E\to\Gamma F$ is a composition $D := i\circ j$ where $j: E\to J^RE$ is the $R$-jet operator, and $i: J^RE \to F$ is a linear map of vector bundles.
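As a standard illustration (my example, not the poster's): the flat Laplacian on functions fits this pattern with $R = 2$. The $2$-jet of $\phi$ records $(\phi,\, \partial_\mu\phi,\, \partial_{\mu\nu}\phi)$, and $i$ is the fibrewise linear map extracting the trace of the second-order part:

\[
\Delta\phi = (i \circ j^2)(\phi), \qquad i\big(u,\, p_\mu,\, q_{\mu\nu}\big) = \sum_{\mu} q_{\mu\mu}.
\]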
Most of the time in applications, $F$ can be taken to be the same bundle as $E$. This way of phrasing things is also convenient for analysis. For example, we can easily define the
principal symbol of a linear differential operator in the following way.
Let $D_1$ and $D_2$ be two linear differential operators of order $r$ (that is, $r$ is the smallest natural number such that $j^r\phi = j^r\psi$ implies $D\phi = D\psi$). Let $\pi^r_{r-1}: J^rE \to J^{r-1}E$ be the natural projection. We say that $D_1$ and $D_2$ have the same principal part if the corresponding maps satisfy $(i_1 - i_2)|_{\ker \pi^{r}_{r-1}} \equiv 0$. (In words, their difference is a linear differential operator of lower order.) This defines an equivalence relation on linear differential operators of order $r$. Each equivalence class defines a unique linear map of vector bundles $P: \ker \pi^{r}_{r-1} \to F$.
Now, it is known (sec 12.10 in Kolar, Michor, Slovak's
Natural operations...) that $\ker \pi^{r}_{r-1}$ is canonically isomorphic to $E\otimes S^r(T^*M)$, where $S^r$ denotes the $r$-fold symmetric tensor product of a space with itself. So we naturally have an interpretation of $P$ as a section of $F\otimes E^* \otimes S^r(TM)$. In the case where $E$ and $F$ are just, say, the trivial $\mathbb{R}$ bundle over $M$, we recover the usual description of the principal symbol of a pseudo-differential operator being (fibre-wise) a homogeneous polynomial on the cotangent space.

Question
Given the above, another way of looking at partial differential equations is
perhaps the following.
Let $\pi_X: X\to M$ and $\pi_Y: Y\to M$ be fibre bundles. A
system of partial differential operators of order $r$ is defined to be a map $H: J^rX \to Y$ that commutes with the projections: $\pi_X \circ \pi^r_0 = \pi_Y \circ H$. And a system of partial differential equations is just the statement $H(j^r\phi) = \psi$, where $\phi\in\Gamma X$ and $\psi \in\Gamma Y$. Observe that by considering $H^{-1}(\psi) \subset J^rX$, we clearly have a partial differential relation in the sense defined in the first section. If we require that $H$ is a submersion, then $H^{-1}(\psi)$ is also a partial differential equation in the sense defined before.
On the other hand, if $\mathcal{R}$ is an embedded submanifold of $J^rX$, at least locally $\mathcal{R}$ can be written as the pre-image of a point of a submersion; though there may be problems making this a global description. So perhaps my definition is in fact more restrictive than the one given in the first part of this discussion.
My question is: is this last "definition" discussed anywhere in the literature? Perhaps with its pros and cons versus the partial differential relations definition given in the first part of the question? I am especially interested in references taking a more PDE (as opposed to differential geometry) point of view, but any reference at all would be welcome. Note: For the
reference-request part of this question, I would also appreciate pointers to whom I can ask/e-mail on these matters. |
It is known that the rational homotopy theory of spaces (e.g. simplicial sets) is equivalent in some sense to the homotopy theory of cdgas over $\mathbb{Q}$. This has been expressed in various forms in the literature. For instance, Felix-Halperin-Thomas show that homotopy classes of maps between simply connected rational spaces with finite-dimensional homology in each dimension can be computed via homotopy classes of cdga maps between Sullivan models for each of them. However, there is a stronger statement that I would like to be true, have heard asserted (without proof) that it is true, but haven't been able to figure out: I would like for that statement to be true without finite-dimensional hypothesis.
Consider the following two model categories:
First, we can take simplicial sets with the usual (Kan) model structure, and then left Bousfield localize at the class of "rational homology equivalences." In other words, the model structure is such that the cofibrations are the injections, weak equivalences are maps inducing isomorphisms on $H_*(\cdot, \mathbb{Q})$, and everything else is determined. That this model structure exists follows from a combination of the small object argument and the Bousfield-Smith cardinality argument (or one can probably appeal to general facts on existence of Bousfield localizations).
Second, we can take commutative dgas (nonnegatively graded) over $\mathbb{Q}$. The model structure is obtained by transfer from a slight variant of the model structure on nonnegatively graded chain complexes. In other words, a fibration of cdgas is a surjection, and a weak equivalence is a quasi-isomorphism. The cofibrations are thus determined; there is a standard generating set (basically, what one gets by applying the free functor to generating sets for chain complexes).
Edit: As Tyler Lawson observes below, this is not actually a model structure. I am not sure at the moment what the right one is. Perhaps we should relax "surjections" to "surjections in degrees $\geq 1$." Alternatively, we could consider the model category of all cdgas (in which case the functor below is obviously not anywhere near a Quillen equivalence).
Now, we have a Quillen adjunction between the first and the opposite of the second, which sends a simplicial set $X_\bullet$ to the "polynomial de Rham algebra" $A^{PL}(X_\bullet)$, a cdga which is quasi-isomorphic to the rational cochain algebra (and which, in particular, solves the commutative cochain problem).
Q1: Is this a Quillen equivalence? (As Tyler Lawson points out, there is a simple reason why this is not the case: namely, that elements in $H^0$ could be nilpotent. Is this, however, the "only" reason why it fails? For instance, what if one restricts to cdgas that are "connected," i.e. have only $\mathbb{Q}$ in degree zero?)
I have not seen this statement in what I've read (not that much). In fact, most authors seem to prefer to work only with simply connected spaces; the advantage is that there you can localize at the rational homotopy equivalences (which are the same as the rational homology equivalences by the mod $\mathcal{C}$ theory). Granted, simply connected spaces do not form a model category. Quillen dealt with it by working with 2-reduced simplicial sets (those with only one vertex and edge). Other authors (e.g. Felix-Halperin-Thomas) don't use the framework of model categories at all, but state something essentially equivalent to this in the case when one works with simply connected spaces with finite-dimensional homology groups.
Here's what I understand of the argument, and why I don't know how to extend it. The basic claim, as before, is that if $X_\bullet $ and $Y_\bullet$ are simplicial sets, with $Y_\bullet$ an abelian space and a rational Kan complex (if I'm not mistaken, these are fibrant in the model structure thus constructed), and $A, B$ are cofibrant (e.g. Sullivan) models in cdgas for $X_\bullet, Y_\bullet$, then homotopy classes of maps $[X_\bullet, Y_\bullet]$ are the same as homotopy classes of maps $ B \to A$. By using a Postnikov tower to express $Y_\bullet$ as a homotopy inverse limit under a whole bunch of fibrations with Eilenberg-MacLane spaces as fibers, we can assume that $Y_\bullet $ is a $K(V, n)$ for $V$ a vector space over $\mathbb{Q}$. (If I understand correctly, fibrations and homotopy inverse limits of spaces obtain good Sullivan models.) If $V$ is finite-dimensional, then we have an explicit Sullivan model (just a free (graded)-commutative algebra), and so we can compute homotopy classes of maps $B \to A$; they'll be the same as maps from $V$ into the cohomology of $X_\bullet$. But maps from $X_\bullet$ into an Eilenberg-MacLane space classify cohomology classes, so we're done.
But, this doesn't seem to work when $V$ is infinite-dimensional. I don't know what a good Sullivan model for a $K(V, n)$ is anymore. The cohomology in dimension $n$ is $V^\vee$, but, say, if $n$ is even, then the cohomology in dimension $2n$ should be $(V \otimes V)^{\vee}$, not what would be nice: $V^\vee \otimes V^\vee$. So, is this statement even true? At the very least, can we get some kind of equivalence of $\infty$-categories?
Quillen himself stated the result using a Quillen equivalence, but it's actually a somewhat complex series of them. He starts with reduced 2-simplicial sets, and then goes to reduced simplicial groups via the loop group construction, then takes the completed group ring dimensionwise to get a simplicial complete Hopf algebra, and then takes primitive elements to get a simplicial Lie algebra, and then applies the lax symmetric monoidal Dold-Kan functor to get dg-Lie algebras. So maybe dg-Lie algbras (or dg-coalgebras) are better than cdgas for describing rational homotopy theory.
Still, one thing that I would like to be true, but is not proved in Quillen's paper, is a direct Quillen equivalence starting with (localized) simplicial sets and ending in some algebraic category. The problem is, the Quillen equivalences I've described go in the opposite direction. The loop group is a left adjoint, but taking primitive elements is a right adjoint. So, while there is an honest Quillen equivalence between reduced simplicial sets and simplicial complete Hopf algebras, there is not proved (in this paper, as far as I can tell) the existence of a single Quillen equivalence (not a zig-zag) which ends in a category with no reference to topology.
Q2: Is there a direct Quillen equivalence of the rational homotopy category with a nice algebraic category?
For instance, it would be interesting if there was some coalgebra version of the de Rham complex. |
The best way to think of this is in terms of maps. A covector is a linear map that turns a vector into a number:
$$ W:\;V^\mu \mapsto W_\mu V^\mu. $$
So if a thing is a linear machine that maps a vector into a scalar we call it a covector. A gradient is an example of such a thing:
$$ \mathrm{d}\phi:\; V^\mu \mapsto \frac{\partial\phi}{\partial x^\mu} V^\mu = \frac{\mathrm{d}\phi}{\mathrm{d}\lambda}, $$
where $V^\mu = \mathrm{d}x^\mu/\mathrm{d}\lambda$. (We can always find a curve tangent to a vector field just by integrating $x^\mu(\lambda) = \int V^\mu(x) \mathrm{d}\lambda$.) But a general covector is not expressible as the gradient of a scalar function: an integrability condition must be satisfied for that.
So let's test $g_{\mu\nu} V^\nu$ to see if it's a covector. We are taking for granted that $g$ is a tensor and $V$ is a vector. Then we contract this into a vector $W^\mu$ and see what happens:
$$ (g_{\mu\nu} V^\nu) W^\mu = g_{\mu\nu} W^\mu V^\nu = W\cdot V, $$
where we recognise the invariant (i.e.
scalar) inner product of two vectors. (Recall that $g$ is defined so that this contraction is invariant.) So $g_{\mu\nu} V^\nu$ is a map taking vectors to scalars. You can easily test linearity. Hence $g_{\mu\nu} V^\nu$ is a covector.
If the above-mentioned integrability condition ($\partial_\sigma (g_{\mu\nu} V^\nu)=\partial_\mu (g_{\sigma\nu} V^\nu)$) is satisfied, then you can find a scalar field $\phi$ such that $g_{\mu\nu} V^\nu = \partial_\mu \phi$ by picking some curve $x^\mu(\lambda)$ and integrating $\phi = \int (g_{\mu\nu} V^\nu) (\mathrm{d}x^\mu/\mathrm{d}\lambda) \mathrm{d}\lambda$. The integrability condition is necessary because if it doesn't hold, the value of this integral will depend on the full curve and not just its end points; but no curve through the same end points is better than any other, so you can't assign a consistent $\phi$.
RE comment:
To get the integrability condition we differentiate $ V_\nu = \partial_\nu \phi $:
$$ \partial_\mu V_\nu = \partial_\mu \partial_\nu \phi = \partial_\nu \partial_\mu \phi = \partial_\nu V_\mu $$
Going the other way: given a covector $W_\mu$ we can make a vector $g^{\mu\nu} W_\nu$ (proof: basically a repeat of the argument given above), then from this vector construct a curve by integrating $$x^\mu(\lambda) = \int g^{\mu\nu} W_\nu \mathrm{d}\lambda$$ |
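As a concrete numerical check of these index manipulations (an illustrative sketch using NumPy and an example Minkowski metric; none of this is from the answer itself):

```python
import numpy as np

# An example metric g_{mu nu}: Minkowski diag(-1, 1, 1, 1).  Any
# nondegenerate symmetric matrix would do for this check.
g = np.diag([-1.0, 1.0, 1.0, 1.0])
g_inv = np.linalg.inv(g)          # g^{mu nu}

V = np.array([2.0, 1.0, 0.0, 3.0])    # vector components V^mu
W = np.array([1.0, -1.0, 2.0, 0.5])   # covector components W_mu

V_lower = g @ V        # g_{mu nu} V^nu : lowering the index gives a covector
W_upper = g_inv @ W    # g^{mu nu} W_nu : raising the index gives a vector

# The contraction (g_{mu nu} V^nu) W^mu is the invariant inner product.
assert np.isclose(V_lower @ W_upper, np.einsum('ij,j,i->', g, V, W_upper))

# Raising then lowering returns the original components.
assert np.allclose(g @ W_upper, W)
```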
View Full Version : Discussion: Fundamental Aspect of the ...
Fraser
2005-Apr-18, 06:53 PM
SUMMARY: Researchers from UC Berkeley have looked into the past to confirm that a fundamental aspect of the Universe - the fine structure constant, or alpha - has remained unchanged for at least 7 billion years. This constant shows up in many formulae dealing with electricity and magnetism, and helps describe how radiation is emitted by atoms. This conflicts with a recent announcement from Australian researchers that described a change in alpha over time.
View full article (http://www.universetoday.com/am/publish/light_aspect_unchanged.html) What do you think about this story? Post your comments below.
Don Alexander
2005-Apr-18, 07:35 PM
The REAL question is: If $\alpha$ is changing, then which fundamental constant that makes up $\alpha$ is changing over time??
After all, it is: $\alpha=\frac{e^2}{4\pi \varepsilon_0 \hbar c}$, with the fundamental constants e (the charge of the electron), h bar (Planck's constant h divided by 2 pi), c (the speed of light) and epsilon_0 (the vacuum permittivity constant). Changing any of these constants seriously screws with the workings of the universe. David Alexander Kann PhD Student, Gamma-Ray Burst Afterglow Collaboration at ESO Thueringer Landessternwarte Tautenburg, Germany
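(As a numerical aside, not part of the original thread: the dimensionless value $\alpha \approx 1/137$ can be checked directly from CODATA values of the constants.)

```python
import math

# CODATA values (SI units)
e = 1.602176634e-19       # elementary charge, C
hbar = 1.054571817e-34    # reduced Planck constant, J s
c = 2.99792458e8          # speed of light, m/s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(round(1 / alpha, 2))   # -> 137.04
```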
antoniseb
2005-Apr-18, 08:13 PM
Originally posted by Don Alexander@Apr 18 2005, 07:35 PM
The REAL question is: If $\alpha$ is changing, then which fundamental constant that makes up $\alpha$ is changing over time? Hi David, There is a paper on arXiv today that does a good job of showing that systematic errors and other factors easily explain the observations so far of a changing alpha.
Slicksister
2005-Apr-19, 12:53 PM
The article is confused about what an angstrom is. There are 10 angstroms per nanometer, not 10 nanometers per angstrom.
madman
2005-Apr-19, 05:02 PM
so when quasars are checked there seems to be a discrepancy in alpha...but with galaxies there is none.
more studies should be done to see if this is always the case...or just a mistake...rather than taking the galaxy result and saying "it's been sorted out".
Here is a construction of a very broad class of "Lie-like" algebras, and I want to know more about them.
Here is the main definition: Suppose $\mathfrak{g}$ is a complex semisimple Lie algebra and $\Gamma$ is a finite abelian group. Define a "hybrid algebra" over $(\mathfrak{g},\Gamma)$ as a pair $(V,\Phi)$ where $V$ is a collection of $\mathfrak{g}$-modules $V_a$ indexed by $\Gamma$ and $\Phi$ is a collection of intertwining maps $\phi_{a,b}:V_a\otimes V_b\rightarrow V_{a+b}$ indexed by $\Gamma\times\Gamma$.
What is known about these things? Here are a few things that I know. This is a generalization which encompasses semisimple Lie algebras and classical Lie superalgebras. Thus, for example, a Lie superalgebra $\mathfrak{g}$ is a particular type of hybrid algebra over the pair $(\mathfrak{g}_0,\mathbb{Z}/2\mathbb{Z})$, where $\mathfrak{g}_0$ is the even component of $\mathfrak{g}$. Similarly, one may realize the exceptional $52$-dimensional Lie algebra $\mathfrak{f}_4$ as a hybrid algebra over the pair $(\mathfrak{d}_4,V)$, where $V$ is the Klein four group, using the triality representations of $\mathfrak{d}_4\cong\mathfrak{so}(8)$. There are similar (and similarly elegant) constructions for other semisimple Lie algebras.
Beyond Lie algebras and Lie superalgebras, what are the interesting classes of hybrid algebras? What else can we say? |
I've been given this situation
"A surface contains $N$ identical atoms in a fixed position. Every atom can occupy one of two states with energies $E_1$ or $E_2$ and the temperature is $T$."
For the solution, I am uncertain what kind of ensemble to use. Seeing as it is a surface, I would imagine it to be a 2-dimensional classical continuous system in the form of a canonical ensemble, which would take the form
$Z=\frac{1}{h^2}\int d^2 r \int d^2 p\exp{(-\beta\cdot E(r,p))}$ For $h=\Delta x\Delta p$
However, the verbal communication with the professor indicates a solution on the form of $Z=Z_1^N=(e^{-\beta\cdot E_1}+e^{-\beta\cdot E_2})^N$
I do not really understand the professor's solution; the first partition function would give a different result than his answer. Can anyone help?
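Because the atoms are fixed in position, there is no kinetic (momentum) contribution, so the discrete two-state sum is the appropriate canonical partition function. It can be sanity-checked against a brute-force enumeration of all $2^N$ microstates for a small $N$; a minimal Python sketch (the values of $E_1$, $E_2$, $\beta$ and $N$ below are made up):

```python
import math
from itertools import product

E1, E2, beta, N = 0.0, 1.0, 1.0, 4  # illustrative values

# Professor's factorized form: N independent two-state atoms
Z_factorized = (math.exp(-beta * E1) + math.exp(-beta * E2)) ** N

# Brute-force sum over all 2^N microstates
Z_brute = sum(math.exp(-beta * sum(s)) for s in product([E1, E2], repeat=N))

print(abs(Z_factorized - Z_brute) < 1e-9)  # True: the two agree
```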
Background information:
I believe we can use Jensen's Inequality here
Show that if the payoff function $V(S_T)$ is a convex function on $S_T$, then the Markovian European contingent claim with payoff $V(S_T)$ has non-negative $\Gamma$, i.e. $V(\tau,S)$ is convex on $S$ for all $\tau$.
Attempted proof: Suppose we have a function $V(S_T)$ that is convex on $S_T$. If $p_1,\ldots,p_T$ are positive numbers that sum to 1, then $$V\left(\sum_{i=1}^{T}p_i S_i\right) \leq \sum_{i=1}^{T}p_i V(S_i)$$ Now, let $p_i = 1/T$; applying this with the convex function $-\ln S$ gives $$\ln\left(\frac{1}{T}\sum_{i=1}^{T} S_i\right) \geq \frac{1}{T}\sum_{i=1}^{T}\ln S_i$$ Through exponentiation we obtain the arithmetic mean-geometric mean inequality, $$\frac{S_1 + S_2 +\ldots + S_T}{T} \geq \sqrt[T]{S_1 S_2 \cdots S_T}$$ which is non-negative; thus the result follows.
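The Jensen inequality underlying the convexity claim, $\mathbb{E}[V(S_T)] \geq V(\mathbb{E}[S_T])$ for convex $V$, can also be illustrated numerically; the following is a toy Monte-Carlo sketch (the lognormal terminal price and every parameter are made-up assumptions, not part of the question):

```python
import numpy as np

rng = np.random.default_rng(42)

K = 100.0                                              # assumed strike
S_T = 100.0 * np.exp(rng.normal(-0.02, 0.2, 200_000))  # toy lognormal terminal price

lhs = max(S_T.mean() - K, 0.0)            # V(E[S_T]) for the convex call payoff
rhs = np.maximum(S_T - K, 0.0).mean()     # E[V(S_T)]
print(lhs <= rhs)  # True, as Jensen's inequality requires
```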
IWOTA 2019
International Workshop on
Operator Theory and its Applications
We study matrices whose entries are free or exchangeable noncommutative elements in some tracial W*-probability space. We provide quantitative estimates of their convergence to some operator-valued semicircular elements. Many random block matrices fit well in the framework of matrices with exchangeable entries. For instance, we obtain explicit rates of convergence for the limiting spectral distribution of self-adjoint Kronecker, Wigner and Wishart random matrices in independent or correlated blocks. Our approach relies on a noncommutative extension of the Lindeberg method and operator-valued Gaussian interpolation techniques.
Joint work with Guillaume Cébron.
We explore the interplay between free probability theory and other noncommutative probabilistic frameworks through the lens of noncommutative central limit theorems. We show how the original argument of Speicher can be adapted or generalized to produce a variety of known (and some new) noncommutative Gaussian laws, which have interesting connections to special functions, combinatorics, and mathematical physics. Parts of this talk are based on recent joint work with W. Ejsmont.
We adapt the theory of Loewner chains to non-commutative functions in the operator-valued upper half-plane over a $C^*$-algebra $\mathcal{A}$. We define an $\mathcal{A}$-valued Loewner chain as a subordination chain $(F_t)_{t \geq 0}$ of self-maps of the $\mathcal{A}$-valued upper half-plane, such that each $F_t$ is the reciprocal Cauchy transform of some $\mathcal{A}$-valued non-commutative law. Our first main result is that normalized Loewner chains which are Lipschitz with respect to $t$ correspond precisely to solutions of Loewner's evolution equation with respect to some vector field. This is a direct generalization of a theorem of Bauer in the scalar-valued setting. To achieve the bijection, we must interpret the differentiation with respect to time in a certain distributional sense, which is suitable for Lipschitz functions taking values in arbitrary Banach spaces. We also describe the relationship between Loewner chains and monotone independence in the operator-valued setting as Schleissinger has done in the scalar-valued case.
In the theory of free probability, an operator $a$ is called R-diagonal if its $*$-distribution coincides with the $*$-distribution of a product of the form $u\cdot p$, where the sets $\{u,u^*\}$ and $\{p,p^*\}$ are freely independent and $u$ is a unitary distributed according to the Haar measure on the circle. It is due to this free factorization property that the class of R-diagonal operators constitutes a particularly well-behaved class of non-normal operators and their distributions yield answers to maximization problems involving free entropy. Bi-free probability theory was originated by Voiculescu as an extension of the free setting and involves the simultaneous study of left and right action of algebras on reduced free product spaces. In this talk, by utilising the combinatorial description of bi-free probability developed by Charlesworth, Nelson and Skoufranis, we will present the bi-free analogue of R-diagonal operators, namely bi-R-diagonal pairs of operators, and discuss a number of their properties that are of interest within the bi-free framework.
The eigenvalue distributions of various random matrix models show a deterministic behavior when their size tends to infinity. Free probability theory teaches us that these limits can often be described as the spectral distributions of certain noncommutative random variables and provides a powerful machinery to deal with them.
Indeed, analytic tools of operator-valued free probability theory were in recent years combined successfully with algebraic linearization techniques in order to compute – at least numerically – the limit of eigenvalue distributions for noncommutative polynomials and rational functions in tuples of asymptotically free random matrices.
With this approach, however, it remains a challenging problem to detect regularity properties of those limiting distributions, such as absence of atoms, Hölder continuity of their cumulative distribution functions, and absolute continuity. These questions are of particular interest if one leaves the regime of asymptotic freeness and deals with more general distributions in the realm of Voiculescu's free analogues of entropy and Fisher information.
In my talk, which is based on joint work with Marwa Banna, Roland Speicher, and Sheng Yin, I will give a survey on those developments.
In 2012-2013, in a joint work with J. A. Mingo, we showed that unitarily invariant random matrices are asymptotically free from their transposes. The talk will present some recent developments of this result, with emphasis on GUE and Wishart ensembles.
Stochastic maps are a generalization of functions in that they assign to each point in the domain a probability measure on the codomain. In this talk we will discuss the category of stochastic maps. In particular, we will explore diagrammatically formulating the notion of a disintegration of a positive measure and transferring this to the category of $C^\ast$-algebras utilizing the existence of a contravariant functor. Disintegrations in the non-commutative setting are related to reversible processes in quantum information theory and conditional probabilities in non-commutative probability. This is joint work with Arthur Parzygnat (UConn).
In the context of non-commutative geometry, on the one hand, free spheres (Wang) and, more generally, partially commutative spheres (defined by Weber-Speicher) may be defined. In a similar way, related affine quantum isometry groups may be defined as well. On the other hand, Goswami and Bhowmick have defined the notion of a compact quantum group of Riemannian isometries, and have proved (with Banica) that, in the case of free spheres (in a suitable context), the affine quantum isometry group and the quantum group of Riemannian isometries match. The two purposes of this talk are the following: firstly, to introduce the definitions of quantum isometry groups and explain the context we are working with, and secondly, to present my results about the matching between affine and Riemannian quantum isometry groups in the context of a slightly enlarged class of partially commutative spheres.
We prove the existence and uniqueness of a discrete nonnegative harmonic function for a random walk satisfying finite range, centering and ellipticity conditions, killed when leaving a globally Lipschitz domain in $\mathbb{Z}^d$. Our method is based on a systematic use of comparison arguments and discrete potential-theoretical techniques.
This is a joint work with Sami Mustapha.
The notions of free entropy developed by Voiculescu have had a profound impact on free probability and operator algebras. In this talk, I will discuss joint work with Ian Charlesworth that generalizes the notion of non-microstate free entropy to the bi-free setting. In particular, the notions of free derivations, conjugate variables, Fisher information, and entropy are extended to handle pairs of algebras, from which many interesting results, complications, and questions arise.
The most general quantum measurements are described by quantum channels, and intrinsic uncertainties emerge from the state-channel interaction. In this talk, we give uncertainty relations for arbitrary quantum channels in terms of the generalized quasi-metric adjusted skew information, which was defined by the author for arbitrary operators. We illustrate the uncertainty relations by explicit examples.
This talk is based on a recent joint work with Tobias Mai and Roland Speicher. The free field is the skew field (aka division ring) which extends the noncommutative ring of polynomials in several formal variables with a universal property. In other words, it is the skew field of rational functions in non-commuting formal variables. In this talk, we address the question of when a tuple of operators in a finite von Neumann algebra can generate the free field. It turns out that the quantity $\Delta$ introduced by A. Connes and D. Shlyakhtenko in their paper "$L^2$-homology for von Neumann algebras" gives a description of such operators. Namely, for a tuple of operators, the maximality of the associated $\Delta$ is equivalent to the realization of the free field by these operators.
On one side, $\Delta$ is related to many concepts in free probability, for example free entropy dimension and dual systems; on the other side, the realization of the free field leads to an Atiyah property. Therefore, a lot of interesting consequences can be extracted from this equivalence of the maximality of $\Delta$ and the realization of the free field.
In 1629 Albert Girard gave formulae for the power sums of several commuting variables in terms of the elementary symmetric functions; his result was subsequently often attributed to Newton.
Over a century later Edward Waring proved that an arbitrary symmetric polynomial in finitely many commuting variables could be expressed as a polynomial in the elementary symmetric functions of those variables.
In 1939 Margarete Wolf showed that there is no finite algebraic basis for the algebra of symmetric functions in $d \gt 1$
non-commuting variables, so there is no finite set of ‘elementary symmetric functions’ in the non-commutative case.
Nevertheless, Jim Agler, John McCarthy and I have proved analogues of Girard's and Waring's theorems for symmetric functions in
two non-commuting variables. We find three free polynomials $f, g, h$ in two non-commuting indeterminates $x, y$ such that every symmetric polynomial in $x$ and $y$ can be written as a polynomial in $f, g, h$ and $1/g$. In particular, power sums can be written explicitly in terms of $f$, $g$ and $h$. To do this we developed the notion of a non-commutative manifold.
You are blindfolded and disoriented, standing exactly 1 mile from the Great Wall of China. How far must you walk to find the wall?
Assume the earth is flat and the Great Wall is infinitely long and straight.
$\DeclareMathOperator{\arcsec}{arcsec}$
For each possible orientation of the wall (relative to some arbitrary initial orientation), the point on the wall closest to our starting point is a distance $1$ away. The collection of the closest points for all possible orientations of the wall form a circle of radius $1$ around our starting point.
If we move a distance $r>1$ away from the initial point, we intersect two orientations of the wall that are an angle $\theta$ apart. In order to reach that point we must have crossed all of the orientations in that angle. In the figure below on the left, those "explored" points are marked by a magenta line.
By trigonometry we can show that $\theta = 2\arcsec r$. If we traverse the path shown on the right side of the above figure, we travel a worst-case distance of:
$$ r + r(2\pi - \theta) \\ r + 2r(\pi - \arcsec r) $$
This distance is minimized when $r\approx 1.04356$ for a worst-case distance of $6.99528$, an improvement of about $3.95\%$
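These values can be checked numerically; here is a small sketch (plain-Python grid search, no optimization library assumed) of the worst-case distance $r + 2r(\pi - \arcsec r)$, using $\arcsec r = \arccos(1/r)$:

```python
import math

def worst_case(r):
    # r + r*(2*pi - theta) with theta = 2*arcsec(r) = 2*acos(1/r)
    return r + 2 * r * (math.pi - math.acos(1.0 / r))

# crude grid search over r in (1, 1.2]
rs = [1 + i * 1e-5 for i in range(1, 20001)]
best_r = min(rs, key=worst_case)
# minimizer ≈ 1.04356, worst-case distance ≈ 6.99528, matching the values above
print(round(best_r, 5), round(worst_case(best_r), 5))
```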
However, looking at the figure we can immediately see that the majority of the large circular arc is "wasted" distance. Only the ends contribute to additional "explored" points. If we shrink-wrap the rest of the path around the unit circle, we get the following path:
The worst-case distance of this path is:
$$ r + 2\sqrt{r^2-1} + (2\pi - 2\theta) \\ r + 2\left(\sqrt{r^2-1} + \pi - 2\arcsec r\right) $$
This happens to be minimized for $r = \sqrt{\frac{15-\sqrt{33}}{6}} \approx 1.24200$ (not the distance shown in the figure), for a worst-case distance of:
$$ \sqrt{\frac{9+\sqrt{33}}{2}}+4\arctan \sqrt{\frac{9+\sqrt{33}}{8}} \approx 6.45891 $$
an improvement of $11.32\%$.
Thanks to Michael Seifert for pointing out that we can do better by letting the radii of the start and end be different, in which case we have the distance:
$$ r_1 + \sqrt{r_1^2-1} + \sqrt{r_2^2-1} + 2\pi - \theta_1 - \theta_2 \\ r_1 + \sqrt{r_1^2-1} + \sqrt{r_2^2-1} + 2\pi - \arcsec r_1 - \arcsec r_2 $$
Which is minimized by $r_1=2/\sqrt{3},\ r_2=\sqrt{2}$ (with $\theta_1=\pi/3,\ \theta_2=\pi/2$):
(Because of the nice angles, this picture is exactly to scale.) The worst-case distance here is simply
$$ \frac{2}{\sqrt{3}} + \frac{1}{\sqrt{3}} + \frac{2\pi}{3} + \frac{\pi}{2} + 1 \\ = 1 + \sqrt{3} + \frac{7\pi}{6} $$
(a $12.16\%$ improvement.)
If the angle between the possible wall and the initial line is $x$ (the angle between the diagonal line and the bottom line in the diagram below), then the distance travelled is $1+(\pi/2+2x)+1/\tan(x)+1/\sin(x)$.
Gratifyingly this gives a slightly improved answer of $2+3\pi/2\approx6.7124$ for my first attempt (because you can drop down straight rather than complete the circle), where $x=\pi/2$.
It also gives my second attempt for $x=\pi/4$ (answer $2+\sqrt{2}+\pi\approx6.5558$).
Throwing the expression into wolfram alpha, shows that a minimum occurs at $\pi/3$. This gives a value of $1+\sqrt{3}+7\pi/6\approx6.397$
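The claimed minimum can also be verified with a short grid search over $x \in (0, \pi/2)$ (a sketch in plain Python, no Wolfram Alpha needed):

```python
import math

def dist(x):
    # 1 + (pi/2 + 2x) + cot(x) + csc(x)
    return 1 + (math.pi / 2 + 2 * x) + 1 / math.tan(x) + 1 / math.sin(x)

xs = [i * math.pi / 10000 for i in range(1, 5000)]  # grid over (0, pi/2)
x_best = min(xs, key=dist)
# minimizer ≈ pi/3 ≈ 1.0472, minimum ≈ 1 + sqrt(3) + 7*pi/6 ≈ 6.397
print(round(x_best, 4), round(dist(x_best), 4))
```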
Old new upper bound: $2+\sqrt{2}+\pi$ as per diagram:
(Old upper bound: $2\pi+1$ miles. Walk 1 mile in any direction and then walk in a circle of radius 1, centred at your starting point.)
I would like to present this non-rigorous but hopefully more intuitive explanation for the optimal path. (The technique used here was very helpful for working on Oray's variant with two people.)
The first part of 2012rcampion's answer explains that we should go as far out as some tangent $l$, before going around the circle to get back to $l$ on the other side. Call the starting point $A$ and the circle $O$. Then the problem is this:
Find the shortest path that comes from $A$, touches $l$, then goes around the circle and touches $l$ again.
It won't change which path is shortest if we turn around at the end and go all the way back:
Find the shortest path that comes from $A$, touches $l$, then goes around the circle and touches $l$ again, and then goes back around the circle to $l$ and then $A$.
Now, if we reflect the entire diagram over $l$, we get this:
Instead of having our path touch $l$ and go back, we can have it switch sides every time instead, which won't change the length because it's just a reflection. So now the problem is this:
Find the shortest path from point $A$ that goes around circle $O^\prime$, then around circle $O$, then goes to point $A^\prime$.
Anyone should be able to do that (imagine putting a string from $A$ around the circles to $A^\prime$ and pulling it tight):
And now if we only look at the part of the diagram above $l$, there's the answer without any calculations.
"...standing exactly 1 mile from the Great Wall of China. How far must you walk to find the wall?"
You must walk at least 1 mile. If you go the wrong way then you will end up walking further; if you don't walk that far, you can't reach it.
@DrXorile is close to the answer. Mine isn't an answer either, but here's some food for thought.
I wanted to picture it. Here is what it looks like.
If we take 360 individuals, all starting at the center of the circle and each at a different angle, only one will find the wall.
That's roughly a 0.28% chance of finding the wall if you walk exactly one mile. If you need to reach the wall for your survival, you're dead.
Also imagine the guy who started just one degree off: he extends his hands and the wall is just 2 inches further, then he starts over in the wrong direction.
Walking more than one mile means we could increase our chances of reaching the wall at a slightly off angle.
But then again, this could happen: |
4.8. Numerical Stability and Initialization¶
In the past few sections, each model that we implemented required initializing our parameters according to some specified distribution. However, until now, we glossed over the details, taking the initialization hyperparameters for granted. You might even have gotten the impression that these choices are not especially important. However, the choice of initialization scheme plays a significant role in neural network learning, and can prove essential to maintaining numerical stability. Moreover, these choices can be tied up in interesting ways with the choice of the activation function. Which nonlinear activation function we choose, and how we decide to initialize our parameters, can play a crucial role in making the optimization algorithm converge rapidly. Failure to be mindful of these issues can lead to either exploding or vanishing gradients. In this section, we delve into these topics in greater detail and discuss some useful heuristics that you may use frequently throughout your career in deep learning.
4.8.1. Vanishing and Exploding Gradients¶
Consider a deep network with \(d\) layers, input \(\mathbf{x}\) and output \(\mathbf{o}\). Each layer satisfies \(\mathbf{h}^{t} = f_t(\mathbf{h}^{t-1})\) (with \(\mathbf{h}^0 = \mathbf{x}\) and \(\mathbf{h}^d = \mathbf{o}\)), so that \(\mathbf{o} = f_d \circ \ldots \circ f_1(\mathbf{x})\).
If all activations and inputs are vectors, we can write the gradient of \(\mathbf{o}\) with respect to any set of parameters \(\mathbf{W}_t\) associated with the function \(f_t\) at layer \(t\) simply as

\(\partial_{\mathbf{W}_t} \mathbf{o} = \mathbf{M}_d \cdot \ldots \cdot \mathbf{M}_{t+1} \cdot \mathbf{v}_t, \quad \text{where} \quad \mathbf{M}_i = \partial_{\mathbf{h}^{i-1}} f_i \quad \text{and} \quad \mathbf{v}_t = \partial_{\mathbf{W}_t} f_t.\)

In other words, it is the product of \(d-t\) matrices \(\mathbf{M}_d \cdot \ldots \cdot \mathbf{M}_{t+1}\) and the gradient vector \(\mathbf{v}_t\). What happens is similar to the situation when we experienced numerical underflow when multiplying too many probabilities. At the time, we were able to mitigate the problem by switching into log-space, i.e., by shifting the problem from the mantissa to the exponent of the numerical representation. Unfortunately the problem outlined in the equation above is much more serious: initially the matrices \(\mathbf{M}_i\) may well have a wide variety of eigenvalues. They might be small, they might be large, and in particular, their product might well be very large or very small. This is not (only) a problem of numerical representation; it means that the optimization algorithm is bound to fail. It receives gradients that are either excessively large or excessively small. As a result, the steps taken are either (i) excessively large (the exploding gradient problem), in which case the parameters blow up in magnitude, rendering the model useless, or (ii) excessively small (the vanishing gradient problem), in which case the parameters hardly move at all, and the learning process makes no progress.

4.8.1.1. Vanishing Gradients¶
One major culprit in the vanishing gradient problem is the choice of the activation functions \(\sigma\) that are interleaved with the linear operations in each layer. Historically, the sigmoid function \(1/(1 + \exp(-x))\) (introduced in Section 4.1) was a popular choice owing to its similarity to a thresholding function. Since early artificial neural networks were inspired by biological neural networks, the idea of neurons that either fire or do not fire (biological neurons do not partially fire) seemed appealing. Let’s take a closer look at the function to see why picking it might be problematic vis-a-vis vanishing gradients.
%matplotlib inline
import d2l
from mxnet import np, npx, autograd
npx.set_np()

x = np.arange(-8.0, 8.0, 0.1)
x.attach_grad()
with autograd.record():
    y = npx.sigmoid(x)
y.backward()

d2l.plot(x, [y, x.grad], legend=['sigmoid', 'gradient'], figsize=(4.5, 2.5))
As we can see, the gradient of the sigmoid vanishes both when its inputs are large and when they are small. Moreover, when we execute backward propagation, due to the chain rule, this means that unless we are in the Goldilocks zone, where the inputs to most of the sigmoids are in the range of, say, \([-4, 4]\), the gradients of the overall product may vanish. When we have many layers, unless we are especially careful, we are likely to find that our gradient is cut off at some layer. Before ReLUs (\(\max(0,x)\)) were proposed as an alternative to squashing functions, this problem used to plague deep network training. As a consequence, ReLUs have become the default choice when designing activation functions in deep networks.

4.8.1.2. Exploding Gradients¶
The opposite problem, when gradients explode, can be similarly vexing. To illustrate this a bit better, we draw \(100\) Gaussian random matrices and multiply them with some initial matrix. For the scale that we picked (the choice of the variance \(\sigma^2=1\)), the matrix product explodes. If this were to happen to us with a deep network, we would have no realistic chance of getting a gradient descent optimizer to converge.
M = np.random.normal(size=(4, 4))
print('A single matrix', M)
for i in range(100):
    M = np.dot(M, np.random.normal(size=(4, 4)))
print('After multiplying 100 matrices', M)
A single matrix
[[ 2.2122064   0.7740038   1.0434405   1.1839255 ]
 [ 1.8917114  -1.2347414  -1.771029   -0.45138445]
 [ 0.57938355 -1.856082   -1.9768796  -0.20801921]
 [ 0.2444218  -0.03716067 -0.48774993 -0.02261727]]
After multiplying 100 matrices
[[ 3.1575275e+20 -5.0052276e+19  2.0565092e+21 -2.3741922e+20]
 [-4.6332600e+20  7.3445046e+19 -3.0176513e+21  3.4838066e+20]
 [-5.8487235e+20  9.2711797e+19 -3.8092853e+21  4.3977330e+20]
 [-6.2947415e+19  9.9783660e+18 -4.0997977e+20  4.7331174e+19]]
4.8.1.3. Symmetry¶
Another problem in deep network design is the symmetry inherent in their parametrization. Assume that we have a deep network with one hidden layer with two units, say \(h_1\) and \(h_2\). In this case, we could permute the weights \(\mathbf{W}_1\) of the first layer and likewise permute the weights of the output layer to obtain the same function. There is nothing special differentiating the first hidden unit vs the second hidden unit. In other words, we have permutation symmetry among the hidden units of each layer.
This is more than just a theoretical nuisance. Imagine what would happen if we initialized all of the parameters of some layer as \(\mathbf{W}_l = c\) for some constant \(c\). In this case, the gradients for all dimensions are identical: thus not only would each unit take the same value, but it would receive the same update. Stochastic gradient descent would never break the symmetry on its own and we might never be able to realize the network's expressive power. The hidden layer would behave as if it had only a single unit. As an aside, note that while SGD would not break this symmetry, dropout regularization would!
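To make the symmetry argument concrete, here is a minimal NumPy sketch (a hypothetical 3-input, 2-hidden-unit, 1-output network; the constant \(c\) and the input are arbitrary) with the backward pass written out by hand:

```python
import numpy as np

c = 0.5                      # arbitrary constant used for every weight
W1 = np.full((2, 3), c)      # hidden-layer weights, all equal
W2 = np.full((1, 2), c)      # output-layer weights, all equal
x = np.array([0.2, -0.1, 0.4])

# Forward pass with sigmoid hidden units; take the scalar output as the "loss"
z = W1 @ x
h = 1.0 / (1.0 + np.exp(-z))
o = (W2 @ h)[0]

# Manual backward pass for dL/dW1 (with L = o)
dh = W2.ravel()              # dL/dh
dz = dh * h * (1.0 - h)      # chain rule through the sigmoid
dW1 = np.outer(dz, x)        # dL/dW1

# Both hidden units receive identical gradients, so SGD never breaks the tie
print(np.allclose(dW1[0], dW1[1]))  # True
```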
4.8.2. Parameter Initialization¶
One way of addressing, or at least mitigating the issues raised above is through careful initialization of the weight vectors. This way we can ensure that (at least initially) the gradients do not vanish and that they maintain a reasonable scale where the network weights do not diverge. Additional care during optimization and suitable regularization ensures that things never get too bad.
4.8.2.1. Default Initialization¶
In the previous sections, e.g., in Section 3.3, we used
net.initialize(init.Normal(sigma=0.01)) to initialize the values of our weights. If the initialization method is not specified, such as net.initialize(), MXNet will use the default random initialization method: each element of the weight parameter is randomly sampled from a uniform distribution \(U[-0.07, 0.07]\) and the bias parameters are all set to \(0\). Both choices tend to work well in practice for moderate problem sizes.
4.8.2.2. Xavier Initialization¶
Let’s look at the scale distribution of the activations of the hidden units \(h_{i}\) for some layer. They are given by

\(h_i = \sum_{j=1}^{n_\mathrm{in}} W_{ij} x_j.\)

The weights \(W_{ij}\) are all drawn independently from the same distribution. Furthermore, let’s assume that this distribution has zero mean and variance \(\sigma^2\) (this doesn’t mean that the distribution has to be Gaussian, just that mean and variance need to exist). We don’t really have much control over the inputs into the layer \(x_j\), but let’s proceed with the somewhat unrealistic assumption that they also have zero mean and variance \(\gamma^2\) and that they’re independent of \(\mathbf{W}\). In this case, we can compute mean and variance of \(h_i\) as follows:

\(\mathbf{E}[h_i] = \sum_{j} \mathbf{E}[W_{ij}]\,\mathbf{E}[x_j] = 0, \qquad \mathbf{E}[h_i^2] = \sum_{j} \mathbf{E}[W_{ij}^2]\,\mathbf{E}[x_j^2] = n_\mathrm{in}\,\sigma^2\,\gamma^2.\)
One way to keep the variance fixed is to set \(n_\mathrm{in} \sigma^2 = 1\). Now consider backpropagation. There we face a similar problem, albeit with gradients being propagated from the top layers. That is, instead of \(\mathbf{W} \mathbf{x}\), we need to deal with \(\mathbf{W}^\top \mathbf{g}\), where \(\mathbf{g}\) is the incoming gradient from the layer above. Using the same reasoning as for forward propagation, we see that the gradients’ variance can blow up unless \(n_\mathrm{out} \sigma^2 = 1\). This leaves us in a dilemma: we cannot possibly satisfy both conditions simultaneously. Instead, we simply try to satisfy \(\frac{1}{2}(n_\mathrm{in} + n_\mathrm{out})\sigma^2 = 1\), i.e., \(\sigma^2 = 2/(n_\mathrm{in} + n_\mathrm{out})\).
This is the reasoning underlying the eponymous Xavier initialization [Glorot.Bengio.2010]. It works well enough in practice. For Gaussian random variables, the Xavier initialization picks a normal distribution with zero mean and variance \(\sigma^2 = 2/(n_\mathrm{in} + n_\mathrm{out})\). For uniformly distributed random variables \(U[-a, a]\), note that their variance is given by \(a^2/3\). Plugging \(a^2/3\) into the condition on \(\sigma^2\) yields that we should initialize uniformly with \(U\left[-\sqrt{6/(n_\mathrm{in} + n_\mathrm{out})}, \sqrt{6/(n_\mathrm{in} + n_\mathrm{out})}\right]\).
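A quick NumPy sketch (layer sizes chosen arbitrarily) confirms that the Xavier-uniform bound \(a=\sqrt{6/(n_\mathrm{in}+n_\mathrm{out})}\) gives weights of variance \(2/(n_\mathrm{in}+n_\mathrm{out})\), and that a forward pass through such a layer roughly preserves the scale of unit-variance inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 256, 128  # arbitrary layer sizes

# Xavier-uniform: U[-a, a] with a = sqrt(6/(n_in + n_out)), so Var(W) = a^2/3
a = np.sqrt(6.0 / (n_in + n_out))
W = rng.uniform(-a, a, size=(n_out, n_in))

target_var = 2.0 / (n_in + n_out)
print(np.isclose(W.var(), target_var, rtol=0.1))  # True: empirical variance matches

# For unit-variance inputs, Var(h_i) is approximately n_in * sigma^2
x = rng.normal(size=(n_in, 10_000))
h = W @ x
print(np.isclose(h.var(), n_in * target_var, rtol=0.1))  # True
```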
4.8.2.3. Beyond¶
The reasoning above barely scratches the surface of modern approaches to parameter initialization. In fact, MXNet has an entire mxnet.initializer module implementing over a dozen different heuristics. Moreover, initialization continues to be a hot area of inquiry within research into the fundamental theory of neural network optimization. Some of these heuristics are especially suited for when parameters are tied (i.e., when parameters in different parts of the network are shared), for superresolution, sequence models, and related problems. We recommend that the interested reader take a closer look at what is offered as part of this module, and investigate the recent research on parameter initialization. Perhaps you may come across a recent clever idea and contribute its implementation to MXNet, or you may even invent your own scheme!
4.8.3. Summary¶

Vanishing and exploding gradients are common issues in very deep networks, unless great care is taken to ensure that gradients and parameters remain well controlled.

Initialization heuristics are needed to ensure that at least the initial gradients are neither too large nor too small.

The ReLU addresses one of the vanishing gradient problems, namely that gradients vanish for very large inputs. This can accelerate convergence significantly.

Random initialization is key to ensuring that symmetry is broken before optimization.

4.8.4. Exercises¶

Can you design other cases of symmetry breaking besides the permutation symmetry?

Can we initialize all weight parameters in linear regression or in softmax regression to the same value?

Look up analytic bounds on the eigenvalues of the product of two matrices. What does this tell you about ensuring that gradients are well conditioned?

If we know that some terms diverge, can we fix this after the fact? Look at the paper on LARS by You, Gitman and Ginsburg, 2017 for inspiration.
Introduction¶
While Deep Learning will probably keep its position as the hottest topic in Machine Learning for the near future, we also see a rising interest in white-box models whose calculations and outputs can be interpreted by a human being. Although it might look like this interpretability restriction severely shrinks the toolbox of usable algorithms, there are still plenty of models that are both human-readable and flexible enough for serious data analysis and prediction.
Some of the better known models that fulfill the requirements are, for example, Polynomial Regression or Decision Trees. There are also less well known options like Genetic Programming, Bayesian Additive Regression Trees (BART) or Bayesian Treed Models, all of which, due to their probabilistic construction schemes, require some more dedication to be trained properly.
In this post I want to look at the RuleFit algorithm by Jerome Friedman, which is both interpretable and flexible and which I personally think is an ingenious solution to the interpretability-accuracy trade-off that data analysts and scientists encounter most of the time.
In short, the core idea of RuleFit is to train a series of diverse Decision Trees, extract all single decision rules from the trees into a matrix of binary dummy variables and run a (penalized) Linear Regression between the explained variable and the original features combined with the dummy matrix.
This is easiest explained through example so for the remainder I will reconstruct the RuleFit algorithm using sklearn's Decision Tree and Linear Model frameworks.
The dataset¶
I have decided to use the House Sales in King County dataset from Kaggle since it does not contain missing data and housing price datasets are commonly used as example data for Decision Tree based ML algorithms. As this example is solely to demonstrate the idea and power of RuleFit, I will not perform any feature engineering, except for removing some unnecessary features:
import pandas as pd
df = pd.read_csv("kc_house_data.csv")
#'id' is a unique value for each row, so it's removed to avoid overfit
#'date' is probably too noisy if it's not being split up further into e.g. month, weekday etc.
#'zipcode' might be noisy, too, so I dropped it as well
df.drop(["id", "date", "zipcode"], axis=1, inplace=True)
#the target variable is house price
X = df.drop("price", axis=1)
y = df["price"]
df.head(5)
   price     bedrooms  bathrooms  sqft_living  sqft_lot  floors  waterfront  view  condition  grade  sqft_above  sqft_basement  yr_built  yr_renovated  lat      long      sqft_living15  sqft_lot15
0  221900.0  3         1.00       1180         5650      1.0     0           0     3          7      1180        0              1955      0             47.5112  -122.257  1340           5650
1  538000.0  3         2.25       2570         7242      2.0     0           0     3          7      2170        400            1951      1991          47.7210  -122.319  1690           7639
2  180000.0  2         1.00       770          10000     1.0     0           0     3          6      770         0              1933      0             47.7379  -122.233  2720           8062
3  604000.0  4         3.00       1960         5000      1.0     0           0     5          7      1050        910            1965      0             47.5208  -122.393  1360           5000
4  510000.0  3         2.00       1680         8080      1.0     0           0     3          8      1680        0              1987      0             47.6168  -122.045  1800           7503

Side Note: There are some time-series aspects to this dataset as well, e.g. there could be a latent trend in 'price' due to overall housing market conditions that is not being captured in the data. If this model were meant to be used in the real world, care should be taken to account for this - especially any form of train-validate-test split (or k-fold cv) must be appropriately adjusted. (see for example this blog-post)

How RuleFit works¶
At first, RuleFit constructs a large number of if-then rules via Decision Trees and then uses 1/0 dummy variables to mark whether a given observation fulfills each rule. Considering the first two rows in the table above, an example rule could state
if sqft_living<=1180 and floors<=1.0
Since row one fulfills that rule, the corresponding dummy for that observation would be $1$, whereas the second row would be marked as $0$.
All dummy variables for all observations are then combined with the original features into a new dataframe, and the target variable ('price' in this example) is regressed on it. Normally, a small subset of all rules is sufficient to get good results, so a penalized version of Linear Regression like Lasso is used to filter out unnecessary rules.
Let's construct a very shallow Decision Tree to visualize what exactly is happening under the hood:
#the target ('price') is a continuous variable so we need a DecisionTreeRegressor
from sklearn.tree import DecisionTreeRegressor
dTree = DecisionTreeRegressor(max_depth=1)
dTree.fit(X,y)
from sklearn.tree import export_graphviz
import graphviz
graphviz.Source(export_graphviz(dTree, out_file = None, feature_names = X.columns.tolist()))
Getting a little technical (not required to understand RuleFit)¶
The predictive function of this simple tree can be expressed mathematically through a linear combination of indicator-functions:
$$f(X_i)=437284.0\cdot I_{grade\leq8.5}(X_i)+959962.411 \cdot I_{grade>8.5}(X_i)$$
which can be used to predict the sale price for any house in the dataset. Notice that the indicator functions work as set-functions on the two (hypercubic) subsets $d_1 = D|grade\leq8.5$ and $d_2 = D|grade>8.5$ where $d_1\cup d_2=D$ , where $D$ is the Instance Space of all possible realizations of the regressors in $X$ and $d_1\cap d_2=\emptyset$. $D$ could also be identified with $\mathbb{R}^k$ with $k$ being the number of regressors, however this would be a bit sloppy for binary variables or variables that can only take on values on a subset of $\mathbb{R}$.
Now we have exactly two rules usable for the RuleFit algorithm:
1) $I_{grade\leq8.5}(X_i)$
2) $I_{grade>8.5}(X_i)$
Adding a second level of Decision Nodes, the tree extends to:
dTree2 = DecisionTreeRegressor(max_depth=2)
dTree2.fit(X,y)
graphviz.Source(export_graphviz(dTree2, out_file = None, feature_names = X.columns.tolist()))
This results in the predictive function
$$f(X_i)=315438.387\cdot I_{grade\leq8.5}(X_i)\cdot I_{lat\leq47.534}(X_i)+525766\cdot I_{grade\leq8.5}(X_i)\cdot I_{lat>47.534}(X_i) + 848042.961 \cdot I_{grade>8.5}(X_i)\cdot I_{\text{sqft\_living}\leq4185.0}(X_i)+1662716.899 \cdot I_{grade>8.5}(X_i)\cdot I_{\text{sqft\_living}>4185.0}(X_i)$$
which clearly shows that this notation quickly becomes unwieldy for deeper Decision Trees.
Nevertheless, the tree gives us the single rules
1) $I_{grade\leq8.5}(X_i)\cdot I_{lat\leq47.534}(X_i)$
2) $I_{grade\leq8.5}(X_i)\cdot I_{lat>47.534}(X_i)$
3) $I_{grade>8.5}(X_i)\cdot I_{\text{sqft\_living}\leq4185.0}(X_i)$
4) $I_{grade>8.5}(X_i)\cdot I_{\text{sqft\_living}>4185.0}(X_i)$
plus all the rules that can be created by splitting the composite rules into their parts:
5) $I_{grade\leq8.5}(X_i)$
6) $I_{lat\leq47.534}(X_i)$
7) $I_{lat>47.534}(X_i)$
...and so on

RuleFit in Python with sklearn¶
In practice, programming RuleFit in Python is pretty simple thanks to the functionality of sklearn's Decision Trees (although it needs a minor workaround, the approach is still very straightforward). Let's use the Decision Tree from above and extract the rules that were created:
pd.DataFrame(dTree2.decision_path(X).toarray()).head(5)
#.decision_path() returns a Sparse Matrix which has to be converted to a numpy-array through
#.toarray() first
   0  1  2  3  4  5  6
0  1  1  1  0  0  0  0
1  1  1  0  1  0  0  0
2  1  1  0  1  0  0  0
3  1  1  1  0  0  0  0
4  1  1  0  1  0  0  0
The .decision_path(X) method creates a binary matrix marking the nodes that each observation passed while the if-then rules in the tree were applied. Notice how every observation passed the root node (column "0" in the matrix); the observation in row "0" then passed node 1 (the grade<=8.5 branch) and ended up in node 2 (the lat<=47.534 branch), which is a leaf node.
The observation in row "1" has the first two nodes in common with the former observation but ends up in node 3 (lat>47.534). (Side note: the tree is traversed in a depth-first manner and the node id-numbers are assigned accordingly.)
That way, every observation in the dataset receives 0/1-classifications based on the nodes it passed on its way through the tree. This indicates whether the sample matches the rules created by the tree or not. Notice that the .decision_path() method does not extract all possible rules from the tree but only the combined rules along the decision paths up to a given node. While there are more rules to extract, the current solution is much easier to implement and faster to run. Additionally, more rules can always be created by using an ensemble of trees and combining the resulting dummy-matrices.
Next comes the question of how to best create a large number of different rules. The main reason for applying RuleFit in the first place is interpretable output, so for a human reader it is easiest if the rules are rather short. That means we prefer shallow Decision Trees, which makes Gradient Boosting a good choice for RuleFit as it creates an ensemble of small trees. Nevertheless, any algorithm that generates lots of diverse Decision Trees works for RuleFit, even a simple for-loop of randomized (Extra) Decision Trees.
(I will spare the use of Tree Ensembles for rule creation until next time and continue with a single tree for simplicity)
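As a preview of rule creation with a tree ensemble (which the post defers to a follow-up), here is a minimal self-contained sketch on synthetic data: fit a small Gradient Boosting model and concatenate the decision-path dummy matrices of its trees. The variable names are illustrative assumptions, and de-duplication of identical rule columns is omitted.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X_demo = pd.DataFrame(rng.normal(size=(100, 3)), columns=["a", "b", "c"])
y_demo = 2.0 * X_demo["a"] + rng.normal(scale=0.1, size=100)

# many shallow trees -> many short (interpretable) rules
gbr = GradientBoostingRegressor(n_estimators=5, max_depth=2, random_state=0)
gbr.fit(X_demo, y_demo)

# gbr.estimators_ is an (n_estimators, 1) array of DecisionTreeRegressors;
# concatenate each tree's decision-path dummy matrix column-wise
dummies = [pd.DataFrame(tree.decision_path(X_demo.values).toarray())
           for tree in gbr.estimators_.ravel()]
rule_matrix = pd.concat(dummies, axis=1, ignore_index=True)
```

The resulting rule_matrix plays the same role as the single-tree dummy matrix above, just with many more candidate rules for the penalized regression to choose from.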
The final ingredient before running the Linear Model is to merge the dummy dataframe with the original features. That's because Decision Trees have trouble dealing with highly linear data, so adding the original features enables the (linear) regression algorithm to fix that issue for RuleFit.
pd.concat([X.reset_index(drop=True), pd.DataFrame(dTree2.decision_path(X).toarray())], axis=1).head(5)
   bedrooms  bathrooms  sqft_living  sqft_lot  floors  waterfront  view  condition  grade  sqft_above  ...  long      sqft_living15  sqft_lot15  0  1  2  3  4  5  6
0  3         1.00       1180         5650      1.0     0           0     3          7      1180        ...  -122.257  1340           5650        1  1  1  0  0  0  0
1  3         2.25       2570         7242      2.0     0           0     3          7      2170        ...  -122.319  1690           7639        1  1  0  1  0  0  0
2  2         1.00       770          10000     1.0     0           0     3          6      770         ...  -122.233  2720           8062        1  1  0  1  0  0  0
3  4         3.00       1960         5000      1.0     0           0     5          7      1050        ...  -122.393  1360           5000        1  1  1  0  0  0  0
4  3         2.00       1680         8080      1.0     0           0     3          8      1680        ...  -122.045  1800           7503        1  1  0  1  0  0  0
5 rows × 24 columns
To finish the example, let's combine all the steps above using one slightly deeper Decision Tree:
#1) fit the regression tree
dTree3 = DecisionTreeRegressor(max_depth = 4)
dTree3.fit(X,y)
#2) extract rules and combine with original features (note that I dropped the first rule column which
# marks the root node, so it contains no differentiating information)
Xrules = pd.concat([X.reset_index(drop=True), pd.DataFrame(dTree3.decision_path(X).toarray()).iloc[:,1:]], axis=1)
#3) fit a linear model on the combined data frame. For this example, plain Linear Regression is fine -
# later on, we have to switch to penalized Regression
from sklearn.linear_model import LinearRegression
linear_model = LinearRegression()
linear_model.fit(Xrules, y)
#3.5) predict unseen data. This should be rather self-explanatory, just replace the 'X'-DataFrame in
# step 2) with the unseen data and use linear_model.predict() to perform predictions
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
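Step 3) notes that a penalized regression will be needed later. As a self-contained sketch of that idea on synthetic stand-in data (the alpha value and the data are illustrative assumptions, not the post's actual tuning), Lasso shrinks the coefficients of unhelpful rule columns to zero, effectively selecting a small rule subset:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
# stand-in for the combined feature/rule frame: 2 informative columns, 18 noise
X_demo = rng.normal(size=(500, 20))
y_demo = 3.0 * X_demo[:, 0] - 2.0 * X_demo[:, 1] + rng.normal(scale=0.1, size=500)

lasso = Lasso(alpha=0.1)        # alpha is an arbitrary illustrative choice
lasso.fit(X_demo, y_demo)

# columns whose coefficients survive the L1 penalty are the "selected rules"
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
```

In the RuleFit setting, the surviving columns would be the rules (and original features) worth interpreting.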
This concludes this short introduction to the RuleFit algorithm. In my next post, I will show you how to create a much larger number of different rules via Gradient Boosting. I'll also show how to extract actual rules from the rather abstract binary matrices to actually interpret the results.
Addendum 06/25/2018 - Extracting rules as column names:¶
To get the rules directly as column names, I took the recursive class method find_node() from the RuleFit-class in the post on applied RuleFit, slightly edited the output string and removed the class references - you can also just manipulate the method's output string in the RuleFit class to get the same results for RuleFit with more than one tree:
def find_node(tree_, current_node, search_node, features):
    child_left = tree_.children_left[current_node]
    child_right = tree_.children_right[current_node]
    split_feature = str(features[tree_.feature[current_node]])
    split_value = str(tree_.threshold[current_node])
    if child_left != -1:
        if child_left != search_node:
            left_one = find_node(tree_, child_left, search_node, features)
        else:
            return split_feature + " <= " + split_value
    else:
        return ""
    if child_right != -1:
        if child_right != search_node:
            right_one = find_node(tree_, child_right, search_node, features)
        else:
            return split_feature + " > " + split_value
    else:
        return ""
    if len(left_one) > 0:
        return split_feature + " <= " + split_value + ", " + left_one
    elif len(right_one) > 0:
        return split_feature + " > " + split_value + "," + right_one
    else:
        return ""
Now you can input an arbitrary DecisionTree.tree_ object ('tree_'), enter the node id of the rule you want to extract ('search_node') together with the feature names ('features'), and start traversing the tree from the root node on ('current_node' = 0):
find_node(tree_ = dTree2.tree_, current_node = 0, search_node = 5, features = X.columns.tolist())
'grade > 8.5,sqft_living <= 4185.0'
dfDecisionPath = pd.DataFrame(dTree2.decision_path(X).toarray())
dfDecisionPath.columns = [find_node(dTree2.tree_, 0, i, X.columns.tolist()) for i in range(7)]
dfDecisionPath.head()
   grade <= 8.5  grade <= 8.5, lat <= 47.53435134887695  grade <= 8.5, lat > 47.53435134887695  grade > 8.5  grade > 8.5,sqft_living <= 4185.0  grade > 8.5,sqft_living > 4185.0
0  1  1  1  0  0  0  0
1  1  1  0  1  0  0  0
2  1  1  0  1  0  0  0
3  1  1  1  0  0  0  0
4  1  1  0  1  0  0  0
Notice that the first column corresponds to the root node which will always be True (it is removed in the actual RuleFit class). |
IWOTA 2019
International Workshop on
Operator Theory and its Applications
A remarkable pair of theorems of Grothendieck says that if $p : \mathbb{C}^g \to \mathbb{C}^g$ is an injective polynomial, then $p$ is bijective and its inverse is a polynomial. We prove a free analog of this.
Recall that a free polynomial mapping in $g$ freely non-commuting variables sends $g$-tuples of matrices (of the same size) to $g$-tuples of matrices (of the same size).
Our result is as follows: if $p$ is a free polynomial mapping that is injective, then it has a free polynomial inverse. We will make use of a free version of the Jacobian Conjecture as well as results from free analysis, formal power series and skew fields.
If there is enough time we will discuss the generalization of the theorem to free rational mappings.
The Free Grothendieck Theorem is related to free analysis, automorphisms of the free algebra and tame vs. wild automorphism of the free algebra.
For every associative algebra $A$ and every class $C$ of representations of $A$ the following question (related to nullstellensatz) makes sense:
Characterize all tuples of elements $a_1,\ldots,a_n \in A$ such that vectors $\pi(a_1)v,\ldots,\pi(a_n)v$ are linearly dependent for every $\pi \in C$ and every $v$ from the representation space of $\pi$.
We answer this question for Weyl algebras and enveloping algebras of some Lie algebras. For free algebras the answer was given by Brešar and Klep in 2013.
A classical theorem of Loewner gives that a function is matrix monotone if and only if the function is real analytic and continues to an analytic self-map of the complex upper half plane. We will discuss work appearing over the last five years that extends Loewner's result and related techniques in the noncommutative setting, culminating in a recent paper of J. E. Pascoe on operator systems.
The techniques used in the proof of Loewner’s theorem point the way to a more general approach to investigating the relationship between functional behavior on real objects (e.g. monotonicity, convexity) and analytic continuation. We will discuss work in progress on automatic analyticity and a heuristic approach to results qualitatively akin to Loewner’s theorem.
The Fejér-Riesz theorem states that a nonnegative trigonometric polynomial in one variable is the (hermitian) square of an analytic polynomial of the same degree. This result was extended to operator valued polynomials by Rosenblum, and the speaker showed that Schur complement techniques can be used to factor (strictly) positive trigonometric operator polynomials in any finite number of variables as a sum of squares of polynomials of possibly large degrees, though this method fails for polynomials which are simply nonnegative. Results of Scheiderer from real algebra imply that nonnegative scalar valued trigonometric polynomials can also be factored in this way. Here we discuss an analytic approach to factoring nonnegative operator valued trigonometric polynomials in two variables.
We pose and treat a noncommutative version of the classical Waring problem for polynomials. That is, for a homogeneous noncommutative polynomial $p$, we give a condition equivalent to $p$ being expressible as sums of powers of homogeneous noncommutative polynomials.
We show that if a noncommutative polynomial $p$ has a Waring decomposition, then its coefficients must satisfy a compatibility condition. If this condition is satisfied, then we prove that $p$ has a noncommutative Waring decomposition if and only if the restriction of $p$ to commuting variables has a classical Waring decomposition.
An application of noncommutative Waring decompositions and more generally tensor decompositions is they can be used to efficiently evaluate noncommutative polynomials on tuples of matrices.
This talk is based on joint work with J. William Helton, Shiyuan Huang, and Jiawang Nie.
The talk will describe some progress over the last year on noncommutative sets and functions. There are several lines of work to choose from. While several will be mentioned, the focus will likely be on the characterization of the composition $r$ of a convex function with an analytic noncommutative rational function, $r(z)= f(q(z))$. These are free analogues of plurisubharmonic functions.
The work is joint with Meric Augat, Eric Evert, Igor Klep, Scott McCullough, Jurij Volcic.
Every nc function $f(X)$ of nonnegative real part in the row ball can be represented as an nc Herglotz integral of a positive functional $\mu$ (which we will call an
nc measure) on the Cuntz-Toeplitz operator system. The vacuum state $m$ is the natural analog, in this context, of Lebesgue measure on the circle. We introduce a notion of Lebesgue decomposition of $\mu$ with respect to $m$, thus defining absolutely continuous and singular parts of $\mu$, and describe the relationship of this decomposition to nc Cauchy transforms. We then show how the absolutely continuous part can be recovered from the nc function $f(X)$, giving a partial nc analog of Fatou's theorem on radial boundary values — here the notion of almost everywhere convergence is replaced by the notion of strong resolvent convergence from the theory of unbounded operators. Our main tool is the theory of (unbounded) quadratic forms, and the construction is ultimately inspired by von Neumann's $L^2$ proof of the Radon-Nikodym theorem.
This is joint work with Robert T. W. Martin.
Let $\mathcal{S}$ be a set of operators. We say $\mathcal{S}$ satisfies the column-row property if there exists a constant $C>0$ such that for any sequence from $\mathcal{S}$ (finite or infinite), $\|\sum S_iS_i^*\| \leq C\| \sum S_i^*S_i\|.$ If such a $C$ can be chosen to equal $1$, we say $\mathcal{S}$ satisfies the true column-row property. The column-row property when $\mathcal{S}$ is taken to be a space of multipliers is important in the theory of interpolating sequences. As far as the speaker knows, there is no known commutative complete Nevanlinna-Pick space for which the multipliers do not satisfy the (true) column-row property. We showed, in joint work with Augat and Jury, that the column-row property fails for the Fock space in two or more variables. Creating further discord, under a suitable model, a randomly chosen infinite sequence of multipliers satisfies $\|\sum S_iS_i^*\| \leq \| \sum S_i^*S_i\|$ for any space such that the monomials satisfy $\|z^\alpha\|\|z^\beta\| \geq \|z^{\alpha+\beta}\|$ in the Hilbert space norm. A deep question is whether or not the Drury-Arveson space satisfies the true column-row property. Our results suggest that naive random search may be unlikely to produce a counter-example, even if one exists.
If $T$ is a $d$-tuple of operators acting on a Hilbert space $H$, then the matrix range $W(T)$ is a closed and bounded matrix convex set containing all images of $T$ under UCP maps into matrix algebras. We consider the problem of determining when $W(T)$ uniquely determines $T$, under the assumption that $T$ is minimal in an appropriate sense. In particular, since we seek to determine $T$ up to unitary equivalence even if $H$ is infinite-dimensional, what is the appropriate (strict) sense of minimality to use? For certain restricted classes of operator tuples, we prove that the following condition is sufficient: “the compression of $T$ to any proper closed subspace has a strictly smaller matrix range”. We require this condition even for subspaces which are not reducing, and this distinction is crucial even in the case of compact operators. Joint work with Orr Shalit.
Let $V$ be a noncommutative algebraic subvariety of the nc unit ball. A couple of years ago, Salomon, Shamovich and I showed that the algebra $H^\infty(V)$ of all bounded nc analytic functions on $V$ is determined up to completely isometric isomorphism by "the geometry of $V$". In other words, $H^\infty(V)$ is completely isometrically isomorphic to $H^\infty(W)$ if and only if there is an automorphism of the nc unit ball that maps $V$ onto $W$. In our latest paper, we tackled the problem of when two such algebras are boundedly isomorphic. Deep theorems of Ball-Marx-Vinnikov and Agler-McCarthy imply that the nc variety $V$ cannot be the complete invariant for classification up to bounded isomorphism. It turns out that the correct geometric invariant to consider is the similarity envelope of the variety $V$, endowed with a certain metric. Working with the similarity envelopes of varieties we ran into new problems in nc function theory and multivariable operator theory. In certain cases we could overcome these problems, and our central result is that when $V$ and $W$ are homogeneous, $H^\infty(V)$ is boundedly isomorphic to $H^\infty(W)$ if and only if the similarity envelopes of $V$ and $W$ are related by an invertible bi-Lipschitz linear map. In my talk I will explain our results with an emphasis on the challenges that the similarity envelope poses.
Joint work with Eli Shamovich and Guy Salomon.
Free nc function theory is an extension of the theory of holomorphic functions of several complex variables to the theory of functions on matrix tuples $Z=(Z_1,\cdots,Z_d)$ where $Z_i\in M_n(\mathbb{C})$ and $n$ is allowed to vary.
One can view such a tuple as a representation of $\mathbb{C}^d$ (viewed as a bimodule over $\mathbb{C}$). Here, a representation of a bimodule $X$ over an algebra $A$ is a pair $(\sigma,T)$ where $\sigma$ is a representation of $A$ and $T$ is a bimodule map $T(axb)=\sigma(a)T(x)\sigma(b)$.
An nc function is a function defined on such tuples $Z$ and takes values in $\cup_n M_n(\mathbb{C})$ which is graded and respects direct sums and similarity (equivalently, respects intertwiners).
In previous works we studied functions that are defined on the space of representations of a bimodule $E$ (more precisely, a correspondence)
over a $W^*$-algebra $M$ (instead of the algebra $\mathbb{C}$), are graded and respect direct sums and similarities. We referred to these functions as matricial functions.
Note that, while the representations of $\mathbb{C}$ are parameterized by $\mathbb{N}\cup \{\infty\}$, those of a general $W^*$-algebra form a more complicated category.
The classical correspondence between positive kernels and Hilbert spaces of functions has been recently extended by Ball, Marx and Vinnikov to nc completely positive kernels and Hilbert spaces of nc functions.
In this talk I will discuss a similar correspondence in our matricial context. In place of a Hilbert space of functions we will get a $W^*$-correspondence whose elements are matricial functions.
I will also discuss what we get for an important class of kernels.
This is a joint work with Paul Muhly.
This talk addresses the local theory of noncommutative functions, which branches in two directions. First we will see that the ring of noncommutative functions analytic about a scalar point admit a universal skew field of fractions, whose elements are called meromorphic germs. On the other hand, if $Y$ is a semisimple (non-scalar) point, then there exist nilpotent analytic noncommutative functions about $Y$. Nevertheless, the ring of germs about $Y$ is described as the completion of the free algebra with respect to the vanishing ideal at $Y$. This is a consequence of our second main result, a free Hermite interpolation theorem: if $f$ is a noncommutative function, then for any finite set of semisimple points and a natural number $L$ there exists a noncommutative polynomial that agrees with $f$ at the chosen points up to differentials of order $L$.
This is joint work with Igor Klep and Victor Vinnikov.
Kippenhahn's Theorem asserts that the numerical range of a matrix is the convex hull of an algebraic curve. A generalization of the numerical range is the joint numerical range (JNR) of finitely many hermitian matrices. Chien and Nakazato (Linear Algebra Appl 432, 173–179, 2010) have shown that the analogous assertion, the JNR is the convex hull of an affine variety, fails for three hermitian matrices.
Here, we show that the JNR is the closed convex hull of a semi-algebraic set. First, we discuss a known statement regarding the dual convex cone to a hyperbolicity cone (Sinn, Mathematical Sciences 2:3, 2015, doi:10.1186/s40687-015-0022-0) We discuss applications in quantum mechanics, namely singularities of Wigner distributions (Schwonnek and Werner, arXiv:1802.08343 [quant-ph], 2018) and local Hamiltonians. A research opportunity is finding the non-commutative counterpart to the toric variety, which describes implicitly the log-linear model of commutative local Hamiltonians in statistics (Geiger et al., Ann Stat 34, 1463–1492, 2006).
This is joint work with Daniel Plaumann (TU Dortmund) and Rainer Sinn (FU Berlin).
The tracial truncated moment problem asks to characterize when a finite sequence of real numbers indexed by words in non-commuting variables can be represented with tracial moments of matrices. In the talk we will study the bivariate quartic case, i.e., indices run over words in two variables of degree at most four. By the result of Burgdorf and Klep, every such sequence with a corresponding moment matrix of size 7 which is positive definite has a representing measure. We will present what can be said if the moment matrix is singular. The first observation is that it must be of rank at least 4, and then we will look at each of ranks 4, 5 and 6 separately. Ranks 4 and 5 can be completely solved, i.e., the existence and the uniqueness questions of the measures can be answered. In rank 6 the existence is equivalent to the feasibility problem of certain linear matrix inequalities. Flat extensions of the moment matrix, which are the most powerful tool for solving the classical problem, are mostly not a necessary condition for the existence of a measure in the tracial case.
@user193319 I believe the natural extension to multigraphs is just ensuring that $\#(u,v) = \#(\sigma(u),\sigma(v))$ where $\# : V \times V \rightarrow \mathbb{N}$ counts the number of edges between $u$ and $v$ (which would be zero).
I have this exercise: Consider the ring $R$ of polynomials in $n$ variables with integer coefficients. Prove that the polynomial $f(x_1, x_2, \ldots, x_n) = x_1 x_2 \cdots x_n$ has $2^{n+1}-2$ non-constant polynomials in $R$ dividing it.
But, for $n=2$, I can't find any non-constant divisors of $f(x,y)=xy$ other than $x$, $y$, $xy$.
I am presently working through example 1.21 in Hatcher's book on wedge sums of topological spaces. He makes a few claims which I am having trouble verifying. First, let me set-up some notation.Let $\{X_i\}_{i \in I}$ be a collection of topological spaces. Then $\amalg_{i \in I} X_i := \cup_{i ...
Each of the six faces of a die is marked with an integer, not necessarily positive. The die is rolled 1000 times. Show that there is a time interval such that the product of all rolls in this interval is a cube of an integer. (For example, it could happen that the product of all outcomes between 5th and 20th throws is a cube; obviously, the interval has to include at least one throw!)
On Monday, I ask for an update and get told they’re working on it. On Tuesday I get an updated itinerary!... which is exactly the same as the old one. I tell them as much and am told they’ll review my case
@Adam no. Quite frankly, I never read the title. The title should not contain information beyond what is in the question body.
Moreover, the title is vague and doesn't clearly ask a question.
And even more so, your insistence that your question is blameless with regards to the reports indicates more than ever that your question probably should be closed.
If all it takes is adding a simple "My question is that I want to find a counterexample to _______" to your question body and you refuse to do this, even after someone takes the time to give you that advice, then ya, I'd vote to close myself.
but if a title inherently states what the OP is looking for, I hardly see the fact that it has been explicitly restated as a reason for it to be closed. No, it was because I originally had a lot of errors in the expressions when I typed them out in LaTeX, but I fixed them almost straight away
lol I registered for a forum on Australian politics and it just hasn't sent me a confirmation email at all how bizarre
I have another problem: If Train A leaves at noon from San Francisco and heads for Chicago going 40 mph, and two hours later Train B leaves the same station, also for Chicago, traveling 60 mph, how long until Train B overtakes Train A?
@swagbutton8 as a check, suppose the answer were 6pm. Then train A will have travelled at 40 mph for 6 hours, giving 240 miles. Similarly, train B will have travelled at 60 mph for only four hours, giving 240 miles. So that checks out
By contrast, the answer key result of 4pm would mean that train A has gone for four hours (so 160 miles) and train B for 2 hours (so 120 miles). Hence A is still ahead of B at that point
So yeah, at first glance I’d say the answer key is wrong. The only way I could see it being correct is if they’re including the change of time zones, which I’d find pretty annoying
But 240 miles seems waaay too short to cross two time zones
So my inclination is to say the answer key is nonsense
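For what it's worth, the arithmetic above can be checked in a few lines: solving $60(t-2)=40t$ gives $t=6$ hours after noon, i.e. 6pm, matching the 240-mile check.

```python
def overtake_hours(v_a=40.0, v_b=60.0, head_start=2.0):
    # Solve v_b * (t - head_start) = v_a * t for t, hours after train A departs
    return v_b * head_start / (v_b - v_a)

t = overtake_hours()  # hours after noon
```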
You can actually show this using only that the derivative of a function is zero if and only if it is constant, the exponential function differentiates to almost itself, and some ingenuity. Suppose that the equation starts in the equivalent form$$ y'' - (r_1+r_2)y' + r_1r_2 y =0. \tag{1} $$(Obvi...
Hi there,
I'm currently going through a proof of why all general solutions to second ODE look the way they look. I have a question mark regarding the linked answer.
Where does the term $e^{(r_1-r_2)x}$ come from?
It seems like it is taken out of the blue, but it yields the desired result. |
1. Introduction
Here is the definition of ensemble learning (集成学习).
Given base (weak) learners $\lbrace f_b \rbrace_{b=1}^B$ and their weights $w_b$, $$f(x)=\sum\limits_{b=1}^Bw_bf_b(x).$$ If $w_b$ is uniform, for binary classification, $f(x)=\mathrm{sign}\left(\sum\limits_{b=1}^Bf_b(x)\right);$ for multi-class classification, majority vote of $\lbrace f_b(x) \rbrace_{b=1}^B.$
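The weighted vote above can be sketched in a couple of lines for the binary case (labels assumed in $\{-1,+1\}$); uniform weights recover the plain sign-vote:

```python
import numpy as np

def ensemble_predict(base_preds, weights=None):
    """Weighted vote f(x) = sign(sum_b w_b f_b(x)) for labels in {-1, +1}.

    base_preds: array of shape (B, n_samples); weights default to uniform."""
    base_preds = np.asarray(base_preds)
    if weights is None:
        weights = np.ones(base_preds.shape[0])  # uniform w_b
    return np.sign(weights @ base_preds)
```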
In estimation theory, $$\mathbb{E}\left[(y-\widehat{y})^2\right]=\left(y-\mathbb{E}\left[\widehat{y}\right]\right)^2+\mathbb{E}\left[\left(\mathbb{E}\left[\widehat{y}\right]-\widehat{y}\right)^2\right]+\mathrm{constant}.$$ High bias can cause an algorithm to miss the relevant relations between features and target outputs (under-fitting); high variance can cause an algorithm to model the random noise in the training data, rather than the intended outputs (over-fitting).
Hence, ensemble learning can decrease bias (e.g., by boosting), decrease variance (e.g., by bagging) and improve prediction (e.g., by stacking).
To create different learners, we can have:
Different learning algorithms.
Different hyper-parameters (e.g., order of polynomial regressor).
Different representations (feature selection/extraction).
Different training sets.
Artificial noise added to the data.
Random samples from the posterior of the model parameters (instead of finding the maximum).

2. Boosting
Boosting involves incrementally building an ensemble by training each new model instance to emphasize the training instances that previous models misclassified.
Boosting is used for sequential base learners (个体学习器间存在强依赖关系、必须串行生成的序列化方法).
2.1 AdaBoost
AdaBoost is a representative algorithm in Boosting.
Algorithm: AdaBoost
Input: Training set $D=\lbrace x_i, y_i \rbrace;$
Base learning algorithm;
Number of training rounds $T.$
Output: $H(x)=\sum\limits_{t=1}^T\alpha_th_t(x).$
initialize sample weights $w_i=\frac{1}{N};$
for $t=1, ..., T$ do
train a weak learner $h_t$ with weights $w;$
compute weighted error $\varepsilon_t=\sum\limits_{i=1, h_t(x_i) \neq y_i}^Nw_i;$
compute learner coefficient $\alpha_t=\frac{1}{2}\ln\left(\frac{1-\varepsilon_t}{\varepsilon_t}\right);$
update $w_i \leftarrow w_i\exp(-\alpha_ty_ih_t(x_i));$ normalize $w_i \leftarrow \frac{w_i}{\sum_jw_j};$
end
AdaBoost can accommodate any type of base learner and has only one hyper-parameter ($T$), but it admits only a specific loss function, is sensitive to noise and outliers, and is slower than XGBoost.
Here is an example.
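The loop above can be sketched directly with sklearn decision stumps as the weak learners. This is an illustrative from-scratch sketch (labels assumed in $\{-1,+1\}$), not a replacement for sklearn's own AdaBoostClassifier:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, T=10):
    """AdaBoost with depth-1 trees (stumps); y must take values in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                       # initialize w_i = 1/N
    learners, alphas = [], []
    for _ in range(T):
        h = DecisionTreeClassifier(max_depth=1)   # weak learner
        h.fit(X, y, sample_weight=w)
        pred = h.predict(X)
        eps = w[pred != y].sum()                  # weighted error
        if eps == 0 or eps >= 0.5:                # no longer a useful weak learner
            break
        alpha = 0.5 * np.log((1 - eps) / eps)     # learner coefficient
        w = w * np.exp(-alpha * y * pred)         # re-weight samples
        w = w / w.sum()                           # normalize
        learners.append(h)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(learners, alphas, X):
    """H(x) = sign(sum_t alpha_t h_t(x))."""
    agg = sum(a * h.predict(X) for h, a in zip(learners, alphas))
    return np.sign(agg)
```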
2.1.1 $K$-fold Cross Validation
In general, we cannot use all the data for model training in machine learning, or we would have no way to validate our model. A common solution is the validation set approach, but it has some disadvantages.
LOOCV (Leave-One-Out Cross-Validation) approach is better but involves massive calculation.
Here is a compromise method called $K$-fold cross validation. Take $K=5;$ then we divide the data set into 5 subsets. Take one subset as the test set and the rest as the training set, and calculate the $MSE;$ repeat this so that each subset serves as the test set exactly once. Finally, take the average $MSE.$
Actually LOOCV is a special case of $K$-fold cross validation.
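The $K$-fold procedure above can be sketched with sklearn's KFold splitter; the model and data passed in are placeholders:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

def kfold_mse(model, X, y, K=5, seed=0):
    """Average test MSE over K folds, each subset serving as test set once."""
    kf = KFold(n_splits=K, shuffle=True, random_state=seed)
    fold_mse = []
    for train_idx, test_idx in kf.split(X):
        model.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[test_idx])
        fold_mse.append(mean_squared_error(y[test_idx], pred))
    return float(np.mean(fold_mse))
```

Setting K equal to the number of samples recovers LOOCV as a special case.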
2.2 XGBoost
XGBoost is another representative boosting algorithm.
2.2.1 Review of Supervised Learning
A general objective function of supervised learning is $$\mathrm{Obj}(\Theta)=L(\Theta)+\Omega(\Theta),$$ where $L(\Theta)$ is training loss and $\Omega(\Theta)$ is regularization.
2.2.2 Learning Trees
Tree learning has several advantages: it can be used in GBM, random forest, etc.; it is invariant to input scale; and it achieves good performance with little tuning.
Suppose we have $K$ trees; then $$\widehat{y}_i=\sum_{k=1}^K f_k(x_i), \quad f_k\in\mathcal{F},$$ where $\mathcal{F}$ is the space of regression trees and $f_k(x_i)$ is the leaf weight that the $k$th tree assigns to the $i$th sample.
Similarly, the objective function is $$\begin{aligned}\mathrm{Obj}&=L(\Theta)+\Omega(\Theta)\\&=\sum\limits_{i=1}^nl(y_i, \widehat{y}_i)+\sum\limits_{k=1}^K\Omega(f_k).\end{aligned}$$
We can use greedy algorithm (贪心算法) to figure out $\Theta.$
It is easy to have $$\widehat{y}_i^{(t)}=\sum\limits_{k=1}^tf_k(x_i)=\widehat{y}_i^{(t-1)}+f_t(x_i).$$ Note that $i$ indexes the sample and $t$ indexes the tree. Therefore, $$\begin{aligned}\mathrm{Obj}^{(t)}&=\sum\limits_{i=1}^nl(y_i, \widehat{y}_i^{(t)})+\sum\limits_{k=1}^t\Omega(f_k)\\&=\sum\limits_{i=1}^nl(y_i, \widehat{y}_i^{(t-1)}+f_t(x_i))+\Omega(f_t)+\mathrm{constant}.\end{aligned}$$ Recall Taylor's formula, $$f(x+\Delta x) \approx f(x)+f'(x)\Delta x+\frac{1}{2}f''(x)\Delta x^2,$$ then $$\begin{aligned}\mathrm{Obj}^{(t)} &\approx \sum_{i=1}^n\left[l(y_i, \widehat{y}_i^{(t-1)})+g_if_t(x_i)+\frac{1}{2}h_if_t^2(x_i)\right]+\Omega(f_t)+\mathrm{constant} \\&=\sum\limits_{i=1}^n\left[g_if_t(x_i)+\frac{1}{2}h_if_t^2(x_i)\right]+\Omega(f_t)+\left[\sum\limits_{i=1}^nl(y_i, \widehat{y}_i^{(t-1)})+\mathrm{constant}\right] \\ &\approx \sum\limits_{i=1}^n\left[g_if_t(x_i)+\frac{1}{2}h_if_t^2(x_i)\right]+\Omega(f_t),\end{aligned}$$ where $g_i=\partial_{\widehat{y}^{(t-1)}}l(y_i, \widehat{y}^{(t-1)}), h_i=\partial^2_{\widehat{y}^{(t-1)}}l(y_i, \widehat{y}^{(t-1)}).$
Now we specify the regularization term via the complexity of a tree. Let $$f_t(x)=w_{q(x)},$$ where $w \in \mathbb{R}^T$ and $q: \mathbb{R}^d\to\lbrace1, ..., T\rbrace.$ Specifically, $w$ is the vector of leaf weights, $q$ encodes the structure of the tree (it maps a sample to its leaf), and $T$ is the number of leaves. Let $$\Omega(f_t)=\gamma T+\frac{1}{2}\lambda\sum\limits_{t=1}^Tw_t^2,$$ where $\gamma$ is the complexity cost of introducing an additional leaf, so $\gamma T$ penalizes the number of leaves, and $\frac{1}{2}\lambda\sum\limits_{t=1}^Tw_t^2$ is the squared $\ell^2$-norm of the leaf scores.
Before we optimize the objective function, we define $$I_t=\lbrace i|q(x_i)=t \rbrace,$$ the set of samples falling into the $t$th leaf.
Rewriting the objective function, we have $$\begin{aligned}\mathrm{Obj}^{(t)} &\approx \sum_{i=1}^n\left[g_if_t(x_i)+\frac{1}{2}h_if_t^2(x_i)\right]+\Omega(f_t) \\ &=\sum_{i=1}^n\left[g_iw_{q(x_i)}+\frac{1}{2}h_iw^2_{q(x_i)}\right]+\gamma T+\frac{1}{2}\lambda\sum_{t=1}^Tw_t^2 \\ &=\sum_{t=1}^T\left[\left(\sum_{i \in I_t}g_i\right)w_t+\frac{1}{2}\left(\sum_{i \in I_t}h_i+\lambda\right)w^2_t\right]+\gamma T.\end{aligned}$$ Let $G_t:=\sum\limits_{i \in I_t}g_i, H_t:=\sum\limits_{i \in I_t}h_i;$ then $$\mathrm{Obj}^{(t)} \approx \sum_{t=1}^T\left[G_tw_t+\frac{1}{2}(H_t+\lambda)w_t^2\right]+\gamma T.$$ The objective is now a sum of independent quadratic functions of the leaf weights, so given a fixed tree structure $q,$ the optimal weight in each leaf and the resulting objective value are $$w_t^*=-\frac{G_t}{H_t+\lambda}, \quad \mathrm{Obj}^*=-\frac{1}{2}\sum_{t=1}^T\frac{G_t^2}{H_t+\lambda}+\gamma T.$$ However, there are too many possible tree structures, so we grow the tree greedily: start from a tree of depth $0$ and, for each leaf node, try to add a split. The gain of a split is $$\mathrm{Gain}=\mathrm{Obj}_\mathrm{noSplit}-\mathrm{Obj}_\mathrm{Split},$$ where $$\begin{aligned}&\mathrm{Obj}_\mathrm{noSplit}=-\frac{1}{2}\frac{(G_L+G_R)^2}{H_L+H_R+\lambda}+\gamma T_\mathrm{noSplit}, \\ &\mathrm{Obj}_\mathrm{Split}=-\frac{1}{2}\left(\frac{G^2_L}{H_L+\lambda}+\frac{G^2_R}{H_R+\lambda}\right)+\gamma T_\mathrm{Split}.\end{aligned}$$
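As a small sanity check of the closed-form solution, here is a numeric sketch (made-up data) using the squared-error loss $l=\frac{1}{2}(\widehat{y}-y)^2$, for which $g_i=\widehat{y}_i^{(t-1)}-y_i$ and $h_i=1$:

```python
import numpy as np

# Squared-error loss l = 1/2 (y_hat - y)^2  =>  g_i = y_hat_i - y_i, h_i = 1.
y = np.array([1.0, 1.5, 2.0])
y_hat = np.zeros(3)        # predictions before adding the new tree
lam = 1.0

g = y_hat - y              # first-order gradients
h = np.ones_like(y)        # second-order gradients (Hessians)
G, H = g.sum(), h.sum()

w_star = -G / (H + lam)            # optimal weight if all samples share one leaf
obj_star = -0.5 * G**2 / (H + lam)  # resulting objective (gamma*T term omitted)

# With lam = 0 the optimal weight is exactly the mean residual:
w_unreg = -G / H
```

With $\lambda=0$ the leaf weight reduces to the mean residual, which is the familiar unregularized gradient-boosting update.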
To find the best split, we could enumerate all possible split points on the features and calculate the gain.
Algorithm: Exact Greedy Algorithm for Split Finding
Input: instance set of the current node $I$; feature dimension $d.$
Output: the split with maximum gain.
$\mathrm{gain} \leftarrow 0;$
$G \leftarrow \sum_{i \in I}g_i,\ H \leftarrow \sum_{i \in I}h_i;$
for $k=1, ..., d$ do
    $G_L \leftarrow 0,\ H_L \leftarrow 0;$
    for $j$ in sorted($I,$ by $\mathbf{x}_{jk}$) do
        $G_L \leftarrow G_L+g_j,\ H_L \leftarrow H_L+h_j;$
        $G_R \leftarrow G-G_L,\ H_R \leftarrow H-H_L;$
        $\mathrm{gain} \leftarrow \max(\mathrm{gain}, \frac{G^2_L}{H_L+\lambda}+\frac{G^2_R}{H_R+\lambda}-\frac{G^2}{H+\lambda});$
    end
end
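A sketch of this split search in Python for a single feature (so the outer loop over features is dropped); unlike the pseudocode above, it also includes the $\frac{1}{2}$ factor and the $\gamma$ penalty from the gain derivation:

```python
import numpy as np

def best_split(x, g, h, lam=1.0, gamma=0.0):
    """Exact greedy split search on one feature: scan split points in
    sorted order, accumulating left-side gradient statistics."""
    order = np.argsort(x)
    G, H = g.sum(), h.sum()
    G_L = H_L = 0.0
    best_gain, best_thr = 0.0, None
    for j in order[:-1]:               # last position leaves the right side empty
        G_L += g[j]; H_L += h[j]
        G_R, H_R = G - G_L, H - H_L
        gain = 0.5 * (G_L**2 / (H_L + lam)
                      + G_R**2 / (H_R + lam)
                      - G**2 / (H + lam)) - gamma
        if gain > best_gain:
            best_gain, best_thr = gain, x[j]
    return best_gain, best_thr
```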
The gain of a split can be negative when the reduction in training loss is smaller than the regularization penalty. One option is to stop splitting as soon as the gain is negative; XGBoost instead uses post-pruning.
2.2.3 XGBoost in Practice
2.2.4 Automatic Missing Value Handling
At each split, XGBoost learns a default direction (left or right child) and sends samples with missing values that way.
2.2.5 XGBoost Hyperparameters
2.3 LightGBM
LightGBM (Light Gradient Boosting Machine) is faster than XGBoost. Based on GBDT, two more techniques are used:
Gradient-Based One-Side Sampling (GOSS);
Exclusive Feature Bundling (EFB).
2.4 CatBoost
We could use one-hot encoding before running XGBoost, but this becomes problematic if the number of categories is large. CatBoost mainly overcomes target leakage in categorical features.
2.4.1 Target Statistics
In a decision tree, we substitute a category value with a target statistic. We could use the greedy target statistic $$\widehat{x}_k^i=\frac{\sum\limits_{j=1}^n\mathbb{I}_{\lbrace x^i_j=x^i_k \rbrace} \cdot y_j+ap}{\sum\limits_{j=1}^n\mathbb{I}_{\lbrace x^i_j=x^i_k \rbrace}+a},$$ where $a>0$ is a smoothing parameter and $p=\frac{1}{n}\sum_iy_i$ is the prior.
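A sketch of the greedy target statistic in plain NumPy. It is computed over the full data set, which is precisely the source of the target leakage that CatBoost's ordered statistics are designed to avoid:

```python
import numpy as np

def greedy_ts(cat, y, a=1.0):
    """Greedy target statistic: smoothed per-category mean of the target y."""
    p = y.mean()                       # prior
    out = np.empty(len(y), dtype=float)
    for k, v in enumerate(cat):
        mask = (cat == v)              # all rows sharing this category value
        out[k] = (y[mask].sum() + a * p) / (mask.sum() + a)
    return out

ts = greedy_ts(np.array(["x", "x", "y"]), np.array([1.0, 0.0, 1.0]))
```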
2.4.2 CatBoost in Practice
3. Bagging
Bagging (Bootstrap AGGregatING) constructs $\lbrace f_b(x) \rbrace_{b=1}^B$ simultaneously: the individual learners have no strong mutual dependence, so they can be generated in parallel.
3.1 Random Forest
A single decision tree is often not strong enough, so we need an ensemble of trees.
Some examples of impurity functions:
GINI: $\mathrm{Impurity}(i)=\sum\limits_cP(c|i)[1-P(c|i)]=1-\sum\limits_c[P(c|i)]^2.$
Entropy: $\mathrm{Impurity}(i)=-\sum\limits_cP(c|i)\log_2P(c|i),$
where $P(c|i)$ is the fraction of samples at node $i$ belonging to class $c.$
3.1.1 Feature Importance
Let $\mathrm{NI}$ denote node importance and $\mathrm{FI}$ feature importance. For a tree, let $w_i$ be the probability of reaching node $i.$ We have $$\begin{aligned}&\mathrm{NI}(i)=w_i\cdot\mathrm{Impurity}(i)-\sum\limits_{j \in \mathrm{Children}(i)}w_j\cdot\mathrm{Impurity}(j), \\ &\mathrm{FI}(f)=\frac{\sum_k\mathrm{NI}(k)\cdot\mathbb{I}(\mathrm{Node\ }k\ \mathrm{splits\ on\ the\ feature\ }f)}{\sum_k\mathrm{NI}(k)}, \\ &\overline{\mathrm{FI}}(f)=\frac{\mathrm{FI}(f)}{\sum_h\mathrm{FI}(h)}.\end{aligned}$$
For a forest, average $\overline{\mathrm{FI}}(f)$ over all trees.
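A hand-worked sketch on a made-up stump (a root node that splits on one feature into two pure leaves; all numbers are illustrative) shows how the formulas compose:

```python
# 10 samples at the root: 6 of class A, 4 of class B, perfectly separated
# by the split, so each child leaf is pure (Gini impurity 0).
w   = [1.0, 0.6, 0.4]     # probability of reaching nodes 0 (root), 1, 2
imp = [0.48, 0.0, 0.0]    # Gini at each node; root: 1 - 0.6**2 - 0.4**2

# NI(i) = w_i * Impurity(i) - sum over children j of w_j * Impurity(j)
NI0 = w[0] * imp[0] - (w[1] * imp[1] + w[2] * imp[2])   # = 0.48

# Only node 0 splits, and it splits on feature f, so FI(f) = NI0 / NI0 = 1.
FI_f = NI0 / NI0
```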
3.1.2 Regression Tree
You can use random forest for regression via RandomForestRegressor.
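A minimal scikit-learn sketch (synthetic data; the hyper-parameter values are illustrative):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic regression data: 300 samples, 8 features, slight noise.
X, y = make_regression(n_samples=300, n_features=8, noise=0.1, random_state=0)

rf = RandomForestRegressor(n_estimators=50, random_state=0)
rf.fit(X[:200], y[:200])
r2 = rf.score(X[200:], y[200:])   # R^2 on held-out samples
```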
3.1.3 Pros and Cons
Pros:
No need for feature normalization. Performs reasonably well with default hyper-parameters.
Cons:
Not easily interpretable. Time-consuming. Hard to control model complexity.
4. Stacking
Stacking uses $[f_1(x), ..., f_B(x)]$ as features to fit a meta learner. When we choose the stacking bases, we may consider:
State-of-the-art: XGBoost, LightGBM, CatBoost.
Good cost-performance ratio: Random Forest, AdaBoost, K-Nearest-Neighbors.
Sometimes helpful: Deep Neural Networks, Logistic Regression, Support Vector Machines, Gaussian Process.
If you have extra time: Fisher's Linear Discriminant, Naive Bayes, Learning Vector Quantization, etc.
4.1 Cross-Validation
First, we see a simple example.
Then, we see an example with grid search. |
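The examples themselves were not reproduced here; a minimal scikit-learn sketch of both (synthetic data, illustrative hyper-parameters) might look like:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, random_state=0)

# Simple example: base learners whose out-of-fold predictions (cv=5)
# become the meta features for a logistic-regression meta learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression(),
    cv=5,
)

# Grid-search example: tune a base learner's hyper-parameter through
# the "<name>__<param>" convention.
grid = GridSearchCV(stack, {"rf__max_depth": [2, None]}, cv=3)
grid.fit(X, y)
best_score = grid.best_score_
```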
IWOTA 2019
International Workshop on
Operator Theory and its Applications
Let $\mathfrak{S}$ be a subset of the algebra $\mathcal{L}(\mathcal{H})$ of all bounded linear operators on an infinite-dimensional complex Hilbert space $\mathcal{H}$ containing all rank one operators. In this talk, we determine the structures of nonlinear transformations on $\mathfrak{S}$ that respect the radial unitary similarity functional values of certain algebraic operations on operators. Related problems on the space of matrices are also discussed.
A projection $P$ on a Banach space is said to be bicircular if $e^{itP}$ is a surjective isometry, for every $t \in \mathbb{R}$. In this talk, I shall present a characterization of these projections on $B(\mathcal{H}, \mathcal{K})$ for $\mathcal{H}$ and $\mathcal{K}$ two Hilbert spaces. Moreover, our characterization also extends to bicircular projections on $B(X,Y)$, for a large class of pairs of Banach spaces.
This talk is based on some joint work with Fernanda Botelho & Dijana Ilišević.
Let $X,Y$ be compact Hausdorff spaces, and let $E$ be a locally convex space. We characterize maps $T:C(X,E)\to C(Y,E)$ satisfying $\mathrm{Ran}(TF-TG)\subset\mathrm{Ran}(F-G)$ for every $F,G\in C(X,E)$. These maps are automatically linear and are represented as composition operators.
In this talk I will show a characterisation of isometric embeddings of the $p$-Wasserstein space on the real line. We will start with the case when $p=1$, and provide the characterisation under the assumption that the map is bijective. We will illustrate with some examples what can go wrong in the non-bijective case. Then we continue with the case when $1\lt p \lt 2$ or $2\lt p$, in which case we show a characterisation also for non-bijective distance preserving maps. Finally, we discuss the $p=2$ case, which is special in some sense. A description of the isometry group was provided in 2010 by Kloeckner; however, an exact formula for the action of the isometries was not known. Here we show an explicit formula for the action of a general isometry in the $p=2$ case.
An attractive and fairly large class of completely bounded (cb) linear maps on $C^*$-algebras that preserve their ideals is the class of elementary operators, that is, those that can be expressed as a finite sum of two-sided multiplications $x \mapsto axb$. Motivated by the fact that derivations and automorphisms of $C^*$-algebras are also completely bounded, we consider which derivations and automorphisms of $C^*$-algebras admit the cb-norm approximation by elementary operators.
For an arbitrary subset $X$ of the real line with at least two points, let $BV(X)$ be the Banach space of all functions of bounded variation on $X$ endowed with the norm $\|\cdot\|_\infty+ \mathcal{V}(\cdot)$, where $\|\cdot\|_\infty$ and $\mathcal{V}(\cdot)$ denote the supremum norm and the total variation of a function, respectively. Our aim is to show that the group of all surjective linear isometries of $BV(X)$ is topologically reflexive.
A JB*-triple is one of those remarkable structures in which surjective linear isometries are related to corresponding algebraic isomorphisms. Some important JB*-triples are: (matrix and) operator spaces $B(\mathcal{H}, \mathcal{K})$ of bounded linear operators from a complex Hilbert space $\mathcal{H}$ to a complex Hilbert space $\mathcal{K}$, $A(\mathcal{H})$ of skew-symmetric operators on $\mathcal{H}$ and $S(\mathcal{H})$ of symmetric operators on $\mathcal{H}$, $C^\ast$-algebras, etc. The aim of this talk is to recall and connect some recent and not so recent results arising from the fact that a bijective linear operator between JB*-triples is an isometry if and only if it preserves the Jordan triple product, and also to address some open questions.
Some parts of this talk are based on joint work with several collaborators. The work of Dijana Ilišević has been fully supported by the Croatian Science Foundation under the project IP-2016-06-1046.
Let $C^{(n)}[0, 1]$ be the linear space of $n$-times continuously differentiable functions on the closed unit interval $[0, 1]$. We consider several norms on $C^{(n)}[0, 1]$, and we give the characterization of surjective isometries on $C^{(n)}[0, 1]$.
Let $\mathcal{B}(X)$ be the algebra of all bounded linear operators on a complex Banach space $X$. We characterize additive maps from $\mathcal{B}(X)$ onto $\mathcal{B}(Y)$ compressing the pseudospectrum subsets $\Delta_{\epsilon}(.)$, where $\Delta_{\epsilon}(.)$ stands for any one of the spectral functions $\sigma_{\epsilon}$, $\sigma_{\epsilon}^l$ and $\sigma_{\epsilon}^r$ for some $\epsilon \gt 0$.
I will briefly introduce the group $C^\ast$-algebra and give some examples on abelian groups and a class of exponential Lie groups. Afterward I will talk about the isomorphism problem for $C^\ast$-algebras on Lie groups, mainly, to answer when two group $C^\ast$-algebras are isomorphic and to what extent a Lie group is determined by its $C^\ast$-algebra.
Let $H(\mathbb D)$ be the linear space of all analytic functions on the open unit disc $\mathbb D$. We define $\mathcal S^\infty$ by the linear subspace of all $f \in H(\mathbb D)$ with bounded derivative $f'$ on $\mathbb D$. We give the characterization of surjective, not necessarily linear, isometries on $\mathcal S^\infty$ with respect to the following two norms: $\| f \|_\infty + \| f' \|_\infty$ and $|f(a)| + \| f' \|_\infty$ for $a \in \mathbb D$, where $\| \cdot \|_\infty$ is the supremum norm on $\mathbb D$.
Means of positive definite and positive semidefinite matrices or operators can be considered as operations. Therefore, transformations respecting them are kinds of morphisms. In this talk we survey former results describing the structures of such maps and also present recent ones in different settings, stretching from the case of matrix algebras to abstract $C^*$-algebras.
Let $X$, $Y$, $Z$ be Banach spaces. An operator $Q : Z\rightarrow Y$ is a quotient operator if $Q$ is surjective and $\|y\|= \inf\{\|z\|\ : z\in Z, Q(z)=y \}$ for every $y \in Y$. Also let $I$ be the identity operator on $X$. In this talk I will mention some conditions under which the maps $I \otimes Q$ and $Q \otimes I$ are again quotient operators on the respective tensor product spaces. Some related results and recent developments will also be presented. This is a joint work with TSSRK Rao.
In this talk, I will give a complete description of order isomorphisms between intervals of von Neumann algebras. For this description, Jordan *-isomorphisms and locally measurable operators play a crucial role. In particular, I will explain that every order isomorphism between self-adjoint parts of two von Neumann algebras without commutative direct summands is affine. This description generalizes several previous works on type I factors by L. Molnar and P. Semrl.
Let $\mathcal{H}$ be a complex Hilbert space, and let $\mathcal{L}(\mathcal{H})$ be the space of all bounded linear operators on $\mathcal{H}$.
In this talk we present new results concerning nonlinear maps on $\mathcal{L}(\mathcal{H})$ leaving invariant the pseudospectrum of operators. The corresponding results for the pseudospectral radius of operators are also presented.
Let $H$ be a complex Hilbert space and denote by $B(H)_+$ the set of all positive operators on $H$. We say that $A \in B(H)_+$ is absolutely continuous with respect to $B\in B(H)_+$ if, for every sequence $(x_n)$ in $H$, $(A(x_n−x_m), x_n−x_m)\to 0$ and $(Bx_n, x_n) \to 0$ imply $(Ax_n, x_n) \to 0$. On the other hand we say that $A,B$ are mutually singular if $C=0$ is the only positive operator such that $C\leq A$ and $C\leq B$. The aim of this talk is to describe the general form of those bijective maps $\phi : B(H)_+ \to B(H)_+$ which preserve absolute continuity, respectively, singularity in both directions.
In this talk I will survey some recent results on surjective isometries of various different metric spaces of probability measures. I call them Banach-Stone type theorems, because all these theorems say that the isometry group can be described by morphisms of the underlying structure. The range of examples includes the Lévy-, Kolmogorov-Smirnov-, Kuiper-, and Lévy-Prokhorov metrics. Closing the talk I will demonstrate that things can become very complicated when one drops the surjectivity condition and tries to characterize isometric embeddings of Wasserstein spaces with discrete underlying space.
The talk is based on a joint work with György Pál Gehér (University of Reading) and Dániel Virosztek (IST Austria).
I will report on some aspects of our study of Wasserstein isometries --- a joint work with György Pál Gehér (University of Reading) and Tamás Titkos (Rényi Institute, Budapest).
More precisely, I will present the description of the isometry group of the Wasserstein space over the interval for all parameters $p \geq 1.$ We will see that the exceptional parameter value is $p=1$; the isometry group of $\mathcal{W}_1([0,1])$ is the Klein group $C_2 \times C_2,$ while we have isometric rigidity for $p>1.$ Some isometries of $\mathcal{W}_1([0,1])$ even split mass. We never use the bijectivity assumption in our arguments, so --- as a byproduct --- we obtain that the (a priori non-surjective) isometric self-embeddings of Wasserstein spaces over the interval are necessarily bijective isometries.
I was trying to implement the algorithm from the paper "Adapting a Fourier pseudospectral method to Dirichlet boundary conditions for Rayleigh–Benard convection".
I am having a hard time understanding how the boundary conditions are imposed.
The author rewrites the no-slip (on upper and lower boundaries) boundary conditions as
$$ \sum_{q} \tilde f_{\bot,pq} = 0 \quad \forall\, p,$$
where $p$ and $q$ are horizontal and vertical wavenumbers.
How do we impose that? |
Before continuing with my studies, and obtaining a Ph.D in the area of computer vision, I worked for 5 years as an automation engineer, designing, installing and modernizing automation systems. From this ...
... by alerting the driver of possible hazardous situations. Following is the official description of the project: "Most technical systems, for example cars, must work reliably at key-turn. Therefore, su ...
Back to the roots! Before obtaining my PhD degree in the area of machine vision I designed automation systems. This year I "returned back to the roots" (for a short while only) and designed a complete automation ...
Where: Fuel-3D is a spin-off company from the University of Oxford that has created one of the most accurate 3D-scanners in its price range. My Affiliation: I currently work at Fuel-3D in the algorithm ...
... bridged mode. There exists a tool for configuring the virtual network connections, called Virtual Network Editor, that is not directly available from the VMPlayer GUI in Linux. In my Ubuntu installation ...
I just noticed that there is a bug in the code that manifests itself in "later" versions of Matlab. The bug is related to the Matlab function called IMFILTER. What happens is that when calculating the diffusio ...
Here it is finally, the segmentation code that I had promised quite a long time ago! The Matlab code segments stereo disparity maps using a model based on implicit dynamic surfaces (also known as level sets). ...
Here is a reworked version of a technical paper related to non-linear image diffusion and a solver that can be used for efficiently solving the equation. The tutorial shows, step by step, how the equation ...
... directly for vector valued images, such as RGB. It is clear that the grey value constancy assumption does not hold for surfaces with a non-Lambertian behaviour and, therefore, the underlying image representatio ...
... Härtel. Typically optical flow is used in areas like video motion compensation and movement detection of objects (such as people, cars and so on), but the researchers at the SCIAN Lab have shown that ...
... and can also be included as part of a system-on-chip (SOC). Therefore it can be of great interest to the sector of the machine vision community that deals with embedded and/or real-time applications. Published ...
... testing loop between the high- and low-level vision systems should converge to a more coherent solution. Published in Journal of Machine Vision and Applications, 2011. Link to html version ...
... image features which can be used to interpret the scene more accurately and therefore disambiguate low-level interpretations by biasing the correct disparity. The fusion process is capable of producing ...
... \]\[ \Phi_t = |\nabla \Phi| \,\mathrm{div} \Big( \dfrac{\nabla \Phi}{|\nabla \Phi|} \Big) + \nabla g(I) \cdot \nabla \Phi \] where \( \Phi \) is a level-set function defining the segment, \( g(I) \) is a `stopping' function, ...
Here it is, finally! Interested in segmenting disparity maps, or perhaps about robust image representation spaces for disparity calculation, or about variational disparity or optical-flow calculation? ...
... are made to medical experts. However, not all the couples have the possibility of consulting an expert. Chile stretches over 4630km, is only 430km at the widest and has a population of 17 million. Therefore, ... |
There is a general way to do these sorts of problems. The idea is to consider equilibria of both acid/base and of the water.
When there is a solution of a weak acid and its salt, or just the weak acid, or just the salt (i.e. pure HA, NaA + HA, or pure NaA), there is no distinction between these types of solution because of the equilibria involved. (NaA just represents any salt.) The following calculations apply to any of these solutions.
Start by defining the equilibrium constants for the reactions. A molecule dissociates as $\mathrm{HA \leftrightharpoons H^+ + A^-}$, but to be general we let the base be [B] instead of $\mathrm{[A^-]}$ and so write $\mathrm{HA \leftrightharpoons H^+ + B}$
$$K_A=\mathrm{[H^+]_e[B]_e/[HA]_e} \tag{1}$$
where the concentrations are those at equilibrium.
There is also the equilibrium $\mathrm{H_2O \leftrightharpoons H^+ + OH^-}$ to consider and
$$K_w=\mathrm{[H^+]_e[OH^-]_e}$$
We know the amounts of acid $c_a$ and base $c_b$ added at the start of the reaction. To obtain the pH at equilibrium the amount $\mathrm{[H^+]}$ has to be calculated. To do this work out the concentration of acid and base in terms of the initial amount and the ionised species.
The total concentration of HA is
$$c_a=\mathrm{[HA]_e+[H^+]_e - [OH^-]_e}\tag{1a}$$
and for the base
$$c_b=\mathrm{[B]_e -[H^+]_e + [OH^-]_e\tag{1b}}$$
The subscript e is now dropped for clarity. [Note that some authors use mass and charge balance instead to work out the concentrations.]
These values, together with $K_w=\mathrm{[H^+][OH^-]}$, can be substituted into the equilibrium constant equation (1), which leads to the general equation
$$K_A=\mathrm{[H^+]}\frac{c_b + \mathrm{[H^+]} - K_w/\mathrm{[H^+]} }{c_a - \mathrm{[H^+]} + K_w/\mathrm{[H^+]} }\tag{2}$$
which is a cubic equation in $\mathrm{[H^+]}$ that is best solved numerically in the most general case. This equation can be simplified under different conditions as the examples below demonstrate.
Note that when $c_b=0$ equation (2) describes the case of pure HA, when $c_a=0$ it describes the hydrolysis of pure NaA solution.
The figure shows how the pH changes for a weak acid/conjugate base with the p$K_A$ shown and $c_a$ = 0.01 M over a range of $c_b$ concentrations. The curve was calculated by numerically solving eqn 2. The straight line (coloured grey) is the Henderson-Hasselbalch equation described below in (A). It is clear where this approximate equation fails.
Figure 1. pH vs concentration of base $c_b$. Red line full calculation, straight line (grey) the approximate Henderson-Hasselbalch eqn. The dashed lines show the p$K_A$ and the acid concentration $c_a$ used. The region where the Henderson-Hasselbalch eq. is a good approximation is clear; $c_b$ should be no less than $\approx 0.1c_a$. The plot uses the values in example (B).
Example (A). The Henderson-Hasselbalch eqn is an approximation, but its limits should be tested.
When the concentrations $c_a$ and $c_b$ are far larger than $(\mathrm{[H^+]} + K_w/\mathrm{[H^+]})$ or $(\mathrm{[H^+]} - K_w/\mathrm{[H^+]})$ these terms can be ignored without error. This produces
$$\mathrm{[H^+]} = K_Ac_a/c_b, \qquad \mathrm{pH}=\mathrm{p}K_A - \log_{10}(c_a/c_b) $$
which is the formula often used. The log form is called the Henderson-Hasselbalch eqn. It is often the case that $c_a/c_b=1$ then $\mathrm{pH}=\mathrm{p}K_A$.
Example (B). What is the pH of a buffer solution consisting of 0.01 M of a weak acid with $K_A = 1.5\cdot 10^{-4}$ and 0.01 M of its conjugate base?
We could use eqn. 2 but this means solving a cubic which is not necessary. Realising that in acid solution $\mathrm{[H^+] \gg [OH^-]}$ then
$$K_A=\mathrm{[H^+]}\frac{c_b + \mathrm{[H^+]} - K_w/\mathrm{[H^+]} }{c_a - \mathrm{[H^+]} + K_w/\mathrm{[H^+]} } \to \mathrm{[H^+]}\frac{ (c_b + \mathrm{[H^+]}) }{ (c_a - \mathrm{[H^+]} )} $$
This equation has to be solved and not approximated to the Henderson-Hasselbalch eqn. since $\mathrm{[H^+]}$ may not be very different to $c_b$.
Letting $x =\mathrm{[H^+]}$ then $x^2+(c_b+K_A)x-K_Ac_a=0$ or
$$x=\frac{-(c_b+K_A)+\sqrt{(c_b+K_A)^2+4c_aK_A}}{2} $$
which gives $\mathrm{[H^+]}=1.46\cdot 10^{-4}$ or a pH = 3.84. This answer is close to that of the HH equation, which gives $\mathrm{pH = p}K_A = 3.82$, so as it turns out the HH equation was good enough.
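The full eqn (2) is also easy to solve numerically: multiplying it through by $\mathrm{[H^+]}$ gives the cubic $h^3+(c_b+K_A)h^2-(K_Ac_a+K_w)h-K_AK_w=0$ with $h=\mathrm{[H^+]}$. A short Python sketch:

```python
import numpy as np

def ph(Ka, ca, cb, Kw=1e-14):
    """pH from eqn (2): solve h^3 + (cb+Ka)h^2 - (Ka*ca+Kw)h - Ka*Kw = 0."""
    roots = np.roots([1.0, cb + Ka, -(Ka * ca + Kw), -Ka * Kw])
    # The cubic has exactly one positive real root (one sign change).
    h = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    return -np.log10(h)

pH_B = ph(1.5e-4, 0.01, 0.01)    # example (B): ~3.84
pH_water = ph(1.5e-4, 0.0, 0.0)  # no acid or base added
```

As a consistency check, with $c_a=c_b=0$ the cubic factors as $(h+K_A)(h^2-K_w)=0$, so the positive root is $\sqrt{K_w}$ and the pH is 7, i.e. pure water.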
The thing is that you need to know the coordination environment in the first place as ionic radii are C.N.-dependent. From Shannon's canonical paper "Revised effective ionic radii and systematic studies of interatomic distances in halides and chalcogenides"[1]:\begin{array}{cc}\hline\text{Ion} & \text{C.N.} & \text{C.R., Å} & \text{I.R., Å} \...
You are mixing apples and oranges. Or, to be more precise, an ionic radius for $\ce{Be^2+}$ with coordination number (C.N.) 6 and the van der Waals radius of $\ce{He}$. To make it clear, I compiled data for van der Waals and covalent radii [1, p. 9-58] as well as ionic radii [1, p. 12-13]:$$\begin{array}{lccccc}\hline\text{Element} & R_\mathrm{vdW}/\pu{Å}...
As you may know, atomic orbitals are wave functions, solutions of the Schrödinger equation for an atomic system.In a perfectly spherical system you may express an orbital as a function depending on the distance from the nucleus ($r$) and two angles ($\phi$ and $\theta$).[If you pick a particular $r$, the angles $\phi$ and $\theta$ work in a way similar to ...
Let me start by providing a reference to a nice collection of van der Waals' data (actually the site provides a lot of other interesting data about the elements, you might want to poke around). As to your questions: Yes, that is a general trend, but as you can see there are many exceptions. The trend exists for the same reason that covalent radii generally ...
The whole concept of ionic radius is not a sharply defined one. Note that the same is true for the notion of atomic radius, as well as for any other kind of radius or any other notion of size. In general, the notion of the size of an object loses its usual (i.e. classical) meaning in the microscopic realm, since objects there do not have well-defined physical ...
As you said, it is not possible to measure the internuclear distance of a single atom. But here internuclear distance does not mean the diameter of a single atom; it means the distance between the nuclei of two atoms of the same element. This internuclear distance can be determined by two methods: the X-ray method and the spectroscopy method. Note: atomic radius is not a set ...
There is no such thing as unified atomic radius. An atomic radius is a class consisting of van der Waals radii $R_\mathrm{vdW}$ (steric interactions), covalent radii $R_\mathrm{cov}$, and ionic radii $R_\mathrm{i}$ (and some other as well).From the recent edition of CRC Handbook [1, p. 9-57]:$$\begin{array}{llrr}\hline\text{Element} & \text{...
Iron and sulfur ions bond by electrostatic attraction. This is the right answer. I see a hard time maneuvering around it. Breaking an ionic bond between $\ce{Fe^2+}$ and $\ce{S^2-}$ releases energy. Around the basic definition of forces and bonds, there are usually two big beliefs. One is a big misconception, and the other one is right. The ...
The short answer: no, there is no such general distance. The long answer: atoms and molecules are not just hanging in the vacuum waiting for someone to grab them. They actually have rather high energies in translational, rotational and vibrational modes, and particles actually collide; during a reaction some of these energies can be involved. The science that studies ...
I completely agree with Wildcat's answer. Excellent stuff. This part, though, got me thinking:Anyway, values of the ionic radii are based on crystallographic data, but what is actually determined in X-ray crystallography is the distance between two ions, not their radii. Then it is basically up to you how to divide this distance into the radii of ions ...
Atomic radii cannot really be uniquely nor accurately defined. However, there is generally a very big difference between covalent, van der Waals, and ionic radii.So, by most any definition of the respective terms the covalent radius of fluorine will be smaller than the van der Waals radius of neon. Similarly, the ionic radius of fluoride ion will be ...
We do not know. Physicists THINK that there ought to be a fundamental limit in scale for space-time that occurs near $10^{-33}$ centimeters and $10^{-43}$ seconds; often called the Planck Scale. There is also a unit of mass associated with this scale which is about $10^{-5}$ GRAMS or $10^{19}$ Billion Electron Volts (BeV or GeV). It is a simple matter to ...
To answer the second part:We know $M=208m_e$, $Z=3$, $\hbar=\frac{h}{2\pi}$.Part one has a mistake, as it is$$\begin{align}&&\frac{Mv^2}{r}&=\mathcal{k_e}\cdot\frac{(\mathcal{e})(Z\mathcal{e})}{r^2}\\&&Mvr&=n\hbar\\\implies&& r &=\frac{n^2\hbar^2}{M\cdot\mathcal{k_e}\cdot Z \mathcal{e}^2}\end{align}$$We also ...
You are right: the outer boundary is quite arbitrary. There is no any intrinsic threshold; the probability just gradually decreases lower and lower, but never reaches 0. You may draw a sphere so as to have the electron inside with 90% probability, or 95%, or 99%, or any other value as you see fit.This, in particular, is the reason why atomic radii are ...
Definitely $\ce{O-}$. In $\ce{O-}$, 7 outer electrons are tied to a nucleus with 6 effective positive charges (atomic number 8 minus 2 electrons of the inner shell), while in $\ce{F-}$, 8 outer electrons are tied to a nucleus with 7 charges. The difference, however, should be quite small.
The series you cited belongs to the so-called 'metallic' radius, which depends on the crystal structure of the element, which in turn changes through the row. In short, you cited a series that is not suited for the consideration of isolated tendencies. There are, indeed, several types of atomic radii (covalent, with different values for bonds of different order, van der Waals ...
There are different notions of atomic radius; the one you're using seems to be the metallic radius, which is half the distance between nearest neighbors in the metal. This notion is very sensitive to the number of electrons per atom involved in bonding. Scandium has only 3 valence electrons, while $\ce{Ti}$ has 4. These all participate, in some extent, in ...
I think the point of this question is for you to realise that options 1 and 2 can't be the correct answer.As you go down a group, new electron shells are occupied which extend further from the nucleus, increasing the atomic radius. Therefore option 1 must be wrong.Effective nuclear charge increases across a period because the nuclear charge increases but ...
There are several definitions of atomic radius. Guessing from context of your answer, I assume you are talking about van der Waals atomic radius. By definition, it is a radius of atom in respect of equilibrium position with other atoms when only van der Waals forces are acting between the atoms. Since atoms of inert gases do not tend to make covalent bonds, ...
The size of the 1s orbital in $\ce{Li+}$ and $\ce{Al^3+}$ is not necessarily the same. In fact, they are quite different because of the much larger effective nuclear charge in $\ce{Al^3+}$. One can easily look up the wavefunction for the 1s orbital and see the radial dependence on $Z_\mathrm{eff}$.Therefore, merely looking at the electronic configuration ...
This is something students often forget when comparing radii: shells are not simply additive.If you move along eight elements to go from lithium to sodium, you are not only filling the second shell and putting one electron into the third, you are also adding a proton to the nucleus with every additional electron. The larger the nuclear charge is, the ...
The first thing you should realize is that there are various definitions of the size of the atom. One distinguishes for example the covalent radius, the Van der Waals radius and the ionic radius. The covalent radius is based on the binding of atoms into molecules and on the resulting bond length. The van der Waals radius is half the minimum distance between ...
As Paul has pointed out, there are many different ways of defining atomic radii. ron has also pointed out that the trend goes in the opposite direction from what you said. For instance, according to Ptable, the calculated radii are:

| | r (pm) | | r (pm) |
|---|---|---|---|
| H | 53 | He | 31 |
| F | 42 | Ne | 38 |
| Cl | 79 | Ar | 71 |
| Br | 94 | Kr | 88 |

I have added hydrogen here as ...
The Bohr model is a comparatively simple model, whereas the quantum description is part of the huge theory of quantum mechanics, so to list all the differences including implications would need to explain quantum mechanics itself.But I think quite a good starting point would be to say, the Bohr model has electrons travel on certain, specific, classical ...
Covalent radius is measured as half the distance ($r_\text{cov} = d/2$) between the nuclei of two covalently bonded atoms. But if you try to do the same for noble/inert gases (good luck!), as they have fully filled $np$ orbitals, they will repel each other; hence the closest distance between the two atoms is taken (high pressure, low temperature) as the ...
That is impressive. What you may need to bear in mind is that as the nuclear charge increases, all the electron orbitals move in closer to the nucleus. You may have been assuming, consciously or unconsciously, that the electron shells appear at a relatively fixed distance from the nucleus -- that is, that the n=1 shell is always at about the same distance ...
The commonly used method of obtaining ionic radii for higher coordination numbers (C.N.) is to extrapolate values from the Shannon's scale [1] using the relationship between ionic radius and coordination number proposed by Zachariasen [2]:... the bond lengths $D(N_1)$ and $D(N_2)$ for cation coordination numbers $N_1$ and $N_2$ were related as follows...
Atomic radius is inversely proportional to the effective nuclear charge. As we move from left to right in a period the effective nuclear charge increases. This will decrease the radius of an atom. At the same time, in transition elements the number of electrons in the 3d sub-shell will increase. This will repel the already present 4s electrons. This ... |
(This question has been asked on math.se, with no response.)
I am studying Paz's "Introduction to Probabilistic Automata" and there is an exercise I cannot solve:
Ex. 11, p. 170: Let $\Sigma = \{a\}$. Show that the number of nonregular events of the form $\{x \mid p^A(x) > \lambda\} \subseteq \Sigma^*$, where $A$ is a given $n$-state probabilistic automaton over $\Sigma$, is $\leq n$.
(Recall that a probabilistic automaton is simply an automaton where each transition has a given probability of crossing; then $p^A(x)$ is the probability of acceptance of $x$, that is, the probability that starting from the initial state the automaton reaches a final state after $|x|$ transitions.)
I can rephrase it, for instance, as:
Let $M$ be a stochastic matrix of dimension $n \times n$ and $a$ be a letter. There are at most $n$ different values of $\lambda \in [0,1]$ for which $\{a^k \mid (1, 0, \ldots, 0)M^k(0,\ldots,0,1)^{\mbox{T}} > \lambda\}$ is nonregular.
I am strongly interested in the form those nonregular languages may have, so this could be a good start. Any help? |
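For concreteness, here is a small Python sketch (names mine, not from Paz) of the quantity in question: $p^A(a^k)$ computed as $\pi M^k \eta$, where $\pi$ is the initial distribution, $M$ the stochastic transition matrix for the single letter $a$, and $\eta$ the indicator vector of final states.

```python
import numpy as np

def acceptance_prob(pi, M, eta, k):
    """p^A(a^k): probability of being in a final state after reading a^k."""
    return float(pi @ np.linalg.matrix_power(M, k) @ eta)

# Toy 2-state automaton: from state 0 stay with prob. 0.5 or move to the
# absorbing final state 1 with prob. 0.5, so p^A(a^k) = 1 - 0.5**k.
M = np.array([[0.5, 0.5],
              [0.0, 1.0]])
pi = np.array([1.0, 0.0])   # start in state 0
eta = np.array([0.0, 1.0])  # state 1 is the only final state

print([acceptance_prob(pi, M, eta, k) for k in range(4)])  # [0.0, 0.5, 0.75, 0.875]
```

In this toy example every cut-point language $\{a^k \mid p^A(a^k) > \lambda\}$ is regular (cofinite for $\lambda < 1$), consistent with the exercise's claim that at most $n$ values of $\lambda$ can yield nonregular languages.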
IWOTA 2019
International Workshop on
Operator Theory and its Applications
The notion of combinatorial Perron value was introduced in [1]. We continue the study of this parameter and also introduce a new parameter $\pi_e(M)$ which gives a new lower bound on the spectral radius of the bottleneck matrix $M$ of a rooted tree. We prove a bound on the approximation error for $\pi_e(M)$. Several properties of these two parameters are shown. These ideas are motivated by the concept of algebraic connectivity. A certain extension property for the combinatorial Perron value is shown and it allows us to define a new center concept for caterpillars. We also compare computationally this new center to the so-called characteristic set, i.e., the center obtained from algebraic connectivity.
Joint work with Lorenzo Ciardo and Geir Dahl.
The consideration of non-Hermitian Hamiltonians in quantum physics in the last decades gave rise to an intense research activity in physics and mathematics. In this talk, we focus on the extension to this setup of classical results on thermodynamical inequalities, and on related topics.
The task of balancing a nonnegative matrix by positive diagonal matrices reduces to finding the fixed point of a nonlinear operator. The Sinkhorn-Knopp algorithm provides a simple iterative method for solving this problem, but it can exhibit very slow convergence even in deceivingly simple cases. In this talk it is shown that the fixed point problem can also be recast as a constrained nonlinear multiparameter eigenvalue problem. Based on this equivalent formulation, some adaptations of the power method and the Arnoldi process are proposed for computing the dominant eigenvector, which defines the structure of the diagonal transformations. Numerical results illustrate that our novel methods significantly accelerate the convergence of the customary Sinkhorn-Knopp iteration for matrix balancing, especially in the case of clustered dominant eigenvalues. This is joint work with Alessio Aristodemo from the University of Pisa.
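As a baseline for comparison, a minimal sketch of the customary Sinkhorn-Knopp iteration (function name, tolerance, and example matrix are my own choices, not from the talk): alternately rescale rows and columns of a positive matrix $A$ so that $D_1 A D_2$ becomes doubly stochastic.

```python
import numpy as np

def sinkhorn_knopp(A, tol=1e-12, max_iter=100_000):
    """Scale a positive matrix A to doubly stochastic form D1 @ A @ D2."""
    r = np.ones(A.shape[0])
    c = 1.0 / (A.T @ r)              # column scalings
    for _ in range(max_iter):
        r_new = 1.0 / (A @ c)        # row scalings
        if np.max(np.abs(r_new - r)) < tol:
            r = r_new
            break
        r = r_new
        c = 1.0 / (A.T @ r)
    return np.diag(r) @ A @ np.diag(c)

B = sinkhorn_knopp(np.array([[1.0, 2.0],
                             [3.0, 4.0]]))
print(B.sum(axis=0), B.sum(axis=1))  # both approach [1. 1.]
```

This well-conditioned example converges quickly; the slow cases motivating the eigenvalue reformulation in the talk are precisely those where the dominant eigenvalues of the associated fixed-point map are clustered.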
We give recent results on the spectral theory of (nonsymmetric) tridiagonal matrices over a field.
A crystal framework $C$ in $\mathbb R^d$ is a bar-joint framework $(G, p)$, where $G = (V, E)$ is a countable simple graph and $p : V \to \mathbb R^d$ is an injective, translationally periodic placement of the vertices as joints $p(v)$. Given a crystal framework, the space of all complex infinitesimal flexes is the vector space of $\mathbb{C}^d$-valued velocity fields on the joints of the framework that satisfy the first-order flex condition for every bar. This countable set of equations gives rise to an expanded version of the adjacency matrix of a graph, called the rigidity matrix.
Due to the periodic structure of the framework the flex space is invariant under the natural translation operators and is closed with respect to the topology of coordinatewise convergence. In this talk, we shall indicate some new methods from analysis and commutative algebra. In particular, we generalise Lefranc’s spectral synthesis theorem for the case $r = 1$ to a vector-valued setting, to obtain that the flex space of a crystal framework is the closed linear span of flexes which are vector-valued polynomially weighted geometric multi-sequences.
This is joint work with Professor Stephen Power.
The concept of $f$-lacunary statistical convergence which is, in fact, a generalization of lacunary statistical convergence, has been introduced recently by Bhardwaj and Dhawan (Abstr. Appl. Anal., 2016). The main object of this paper is to prove Korovkin type approximation theorems using the notion of $f$-lacunary statistical convergence. A relationship between the newly established Korovkin type approximation theorems via $f$-lacunary statistical convergence, the classical Korovkin theorems and their lacunary statistical analogs has been studied. A new concept of $f$-lacunary statistical convergence of degree $\beta$ ($0 \lt \beta \lt 1$) has also been introduced and as an application a corresponding Korovkin type theorem is established.
The concept of controlled invariance is an important tool to formulate a number of control problems for linear systems, such as model matching, trajectory tracking or disturbance decoupling. It leads to constructive methods and solutions for finite-dimensional time-invariant systems that are based on the computation of an invariant region. We are interested in the extension of these methods to systems over a ring or a semiring. Our motivation is to address the control of systems defined over a network, which are met in particular in communication processes and in production management. Such systems are in general characterized by time delays and constraints on the state and the control, which are handled using specific classes of systems, namely time-delay systems or max-plus linear systems. These models can be seen as linear systems with coefficients in a ring or semiring of operators, and some control problems can be formulated in terms of cone or polyhedral invariance. The solution of these problems consists in the identification of invariant cones or polyhedra. Further, the controlled invariance of a region is equivalent to the existence of a feedback that makes it invariant for the closed-loop system, and the computation of such a feedback is the key step toward the implementation of the solution. In general, this feedback is not linear but is the solution of a linear system of equations. We identify some operator rings and semirings for which the existence of such a feedback is checkable, and illustrate the results on examples from production management.
In the talk I will address connections between structured storage or Lyapunov functions of a class of interconnected systems (dynamical networks) and dissipativity properties of the individual systems. I will present a result which states that if a dynamical network, composed of a set of linear time-invariant (LTI) systems interconnected over an acyclic graph, admits an additive quadratic Lyapunov function, then the individual systems in the network are dissipative with respect to a (nonempty) set of interconnection-neutral supply functions. Each supply function from this set is defined on a single interconnection link in the network. Specific characterizations of neutral supply functions will be presented which imply robustness of network stability/dissipativity to removal of interconnection links.
We give an answer to an open question by proving sufficient optimality conditions for optimal control problems with time delays in the state and control variables. In the proof of our main results, we transform delayed optimal control problems into equivalent non-delayed problems. This allows us to use well-known theorems that ensure sufficient optimality conditions for non-delayed optimal control problems. Finally, examples are given to illustrate the obtained results.
We consider a non-linear non-stationary control system and study the problem of mapping this system to a given linear system with analytic matrices by a change of variables. We give necessary and sufficient conditions for such mappability, which in particular consist in the coincidence of certain invariants of the systems. These invariants are certain meromorphic functions that characterize the system. We also study the question of describing such invariants in the case of two-dimensional systems.
We provide necessary and sufficient conditions for the generalized $\star$-Sylvester matrix equation, $AXB + CX^\star D = E$, to have exactly one solution for any right-hand side $E$. These conditions are given for arbitrary coefficient matrices $A, B, C, D$ (either square or rectangular) and generalize existing results for the same equation with square coefficients [1]. We also review the known results regarding the existence and uniqueness of solution for generalized Sylvester and $\star$-Sylvester equations. The contents of this talk have been recently published in [2].
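For the transpose case ($\star = T$) with square $n\times n$ coefficients, the equation can be solved naively by vectorization: $\mathrm{vec}(AXB) = (B^T \otimes A)\,\mathrm{vec}(X)$ and $\mathrm{vec}(CX^TD) = (D^T \otimes C)K\,\mathrm{vec}(X)$, where $K$ is the commutation matrix with $K\,\mathrm{vec}(X) = \mathrm{vec}(X^T)$. The sketch below (my own $O(n^6)$ illustration, not the structured analysis of the talk) stacks these into one linear system:

```python
import numpy as np

def commutation_matrix(n):
    """K with K @ vec(X) = vec(X.T), for column-major vec of n x n matrices."""
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            K[j + i * n, i + j * n] = 1.0
    return K

def solve_t_sylvester(A, B, C, D, E):
    """Solve A X B + C X.T D = E for X (all matrices n x n, star = T)."""
    n = A.shape[0]
    L = np.kron(B.T, A) + np.kron(D.T, C) @ commutation_matrix(n)
    x = np.linalg.solve(L, E.flatten(order="F"))  # column-major vec(E)
    return x.reshape((n, n), order="F")

rng = np.random.default_rng(0)
A, B, C, D, X = (rng.normal(size=(3, 3)) for _ in range(5))
E = A @ X @ B + C @ X.T @ D
X_hat = solve_t_sylvester(A, B, C, D, E)
print(np.allclose(X_hat, X))  # True whenever the system is uniquely solvable
```

Unique solvability for every $E$ corresponds exactly to the stacked matrix $L$ being nonsingular, which is the kind of condition the talk characterizes in terms of the coefficient matrices.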
References:
In 1970, Martinet (Inform. Rech. Oper. 4 (1970), 154-158) introduced the well-known Proximal Point Algorithm (PPA), which serves as an important tool for solving minimization problems. Later, in 1976, Rockafellar (SIAM J. Control Optim. 14 (1976), 877-898) studied the PPA to prove the convergence of a solution of the convex minimization problem in the framework of Hilbert spaces. In 2013, Bacak (Israel. J. Math. 194 (2013), 689-701) introduced the proximal point algorithm in $\operatorname{CAT}(0)$ spaces. Note that the method has been modified so that it converges strongly by Cholamjiak (Optim. Lett. 9 (2015), 1401-1410) using the Halpern procedure. In this paper, we study proximal point algorithms for solving optimization problems in $\operatorname{CAT}(0)$ spaces via the Thakur iteration scheme.
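As a toy illustration of the basic PPA iteration $x_{k+1} = \operatorname{prox}_{\lambda f}(x_k)$ in the Euclidean special case of the $\operatorname{CAT}(0)$ setting (example and names are mine, not from the paper): minimising $f(x) = |x - 3|$ on $\mathbb{R}$, whose proximal map is soft-thresholding centred at the minimiser.

```python
def prox(x, lam, a=3.0):
    """Proximal map of f(x) = |x - a|: soft-thresholding centred at a."""
    d = x - a
    return a + (d - lam if d > lam else d + lam if d < -lam else 0.0)

# PPA: iterate the proximal map; each step moves lam toward the minimiser
# until it lands exactly on x* = 3.
x = 10.0
for _ in range(20):
    x = prox(x, lam=0.5)
print(x)  # → 3.0, the minimiser of f
```

The Hilbert-space and $\operatorname{CAT}(0)$ theories cited above establish when this simple fixed-point iteration converges for general convex $f$.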
As is known, the singular values of a linear operator give quite a handy description of it: the unit sphere of vectors is mapped to an elliptical set with half-axes of these lengths. However, this connection becomes rather useless when the arguments are again matrices. An operator like $A:\mathbb{C}^n\to\mathbb{C}^n,x\mapsto Ax$ is represented well by its singular values, as the components of the image vector $Ax$ are bounded. When considering $A:\mathbb{C}^{n\times n}\to\mathbb{C}^{n\times n},X\mapsto AX$, one may, of course, regard $X$ as a vector in $\mathbb{C}^{n^2}$, but this would neglect its true nature. Luckily, there are general estimates for $AX$ involving the singular values of $A$ and $X$.
The situation gets trickier when looking at bilinear maps like the commutator $(X,Y)\mapsto XY-YX$. The analog of the initially given picture could then be: what is the possible range of singular values if those of $X$ and $Y$ are given? Of course, it is possible to re-interpret such a map as a linear one that operates on tensor products. But just looking at the Hilbert-Schmidt norms already reveals that these numbers are exaggerated by far, due to the inclusion of many new arguments that do not come from the original inputs $X$ and $Y$.
So, we want to explore this question by investigating exemplary maps via snapshots. In the vector case the discreteness comes from the components of an orthogonal basis. This role shall be shifted to the rank. Consequently, we ask for the shape of the singular spectrum range when the input matrices are restricted to have singular value vectors $(a,0,\ldots)$, $(b,b,0,\ldots)$, and so on.
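One way to take such "snapshots" numerically (a rough exploratory sketch of mine, not the method of the talk): generate matrices with prescribed singular value vectors by sandwiching a diagonal between random unitaries, and record the singular values of the resulting commutators.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(n):
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # phase fix -> Haar measure

def commutator_singvals(sx, sy, samples=200):
    """Singular values of X@Y - Y@X for X, Y with singular values sx, sy."""
    n = len(sx)
    out = []
    for _ in range(samples):
        X = random_unitary(n) @ np.diag(sx) @ random_unitary(n)
        Y = random_unitary(n) @ np.diag(sy) @ random_unitary(n)
        out.append(np.linalg.svd(X @ Y - Y @ X, compute_uv=False))
    return np.array(out)

# Snapshot: both inputs rank one with singular value vectors (1, 0):
s = commutator_singvals([1.0, 0.0], [1.0, 0.0])
print(s.min(axis=0), s.max(axis=0))  # sampled range of each singular value
```

Submultiplicativity already caps the largest commutator singular value at $2\sigma_1(X)\sigma_1(Y) = 2$ here; the sampled ranges indicate how much of that crude bound is actually attained.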
We will study the linear algebra properties of matrix pencils that are associated with linear time-invariant dissipative Hamiltonian descriptor systems of the form \[ E\dot x=\left(J-R\right)Qx.\] where $J,R\in\mathbb{C}^{n,n}$, $E,Q\in\mathbb{C}^{n,m}$, $m\leq n$, $J^*=-J$ and $R^*=R\geq 0$. The following properties are shown: all eigenvalues are in the closed left half plane, the nonzero finite eigenvalues on the imaginary axis are semisimple, the index is at most two, and there are restrictions for the possible left and right minimal indices.
The talk is based on a joint paper with C. Mehl and V. Mehrmann with the same title (SIMAX 2018).
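The first claim is easy to check numerically in the special ODE case $E = Q = I$ (a sketch of mine, not the pencil analysis of the paper): for $J^* = -J$ and $R = R^* \ge 0$, any eigenpair $(\lambda, x)$ of $J - R$ gives $\operatorname{Re}\lambda = -x^*Rx/(x^*x) \le 0$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
J = (Z - Z.conj().T) / 2            # skew-Hermitian part: J* = -J
W = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
R = W @ W.conj().T                  # Hermitian positive semidefinite

eigs = np.linalg.eigvals(J - R)
print(eigs.real.max())  # nonpositive up to rounding
```

The paper's pencil results extend this observation to the full descriptor case, where $E$ and $Q$ are rectangular and index and minimal-index restrictions also appear.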
Modern datasets are increasingly collected by teams of agents that are spatially distributed: sensor networks, networks of cameras, and teams of robots. To extract information in a scalable manner from those distributed datasets, we need distributed learning. In the vision of distributed learning, no central node exists; the spatially distributed agents are linked by a sparse communication network and exchange short messages between themselves to directly solve the learning problem. To work in the real-world, a distributed learning algorithm must cope with several challenges, e.g., correlated data, failures in the communication network, and minimal knowledge of the network topology. In this talk, we present some recent distributed learning algorithms that can cope with such challenges. Although our algorithms are simple extensions of known ones, these extensions require new mathematical proofs that elicit interesting applications of probability theory tools, namely, ergodic theory.
In this talk, we will discuss a few electrical circuits under a fractional-derivative state-space formulation. After deriving their descriptor fractional-order mathematical models, we will analyse their controllability and observability conditions with the help of rank conditions obtained via Gramian matrices and the Drazin inverse.
Ok, let's go, one topic at a time:
We model blackbody radiation as the radiation escaping through a small hole in the wall of a metal object maintained at temperature T (the hole connects the interior cavity to the outside). Because the walls of the cavity are made of a conducting material (metal), the electric field vanishes at the boundaries. This, together with the fact that the incident and emitted waves interfere with each other inside the cavity, generates the standing-wave pattern of the cavity radiation.
We can see this in two ways. First, look at the two formulas you've written. The second one is clearly a generalization of the first, if you understand $n$ as a vector (after all, it is closely related to the wave vector, isn't it?). Besides that, this can be shown by decomposing radiation of a certain wavelength $\lambda$ into three components along the three mutually perpendicular directions of a cubic cavity. Because in each direction we must have a standing wave, in 3D we must have a standing wave as well. Writing the distance between the nodes of the 3D wave in terms of the distance between the nodes of the components of the wave in each direction will get you to the second formula.
Be careful. It looks to me that both the equation you wrote and the concept you're getting at might be mistaken. First of all, with this calculation you are simply counting the number of allowed frequencies in the frequency interval $f$ to $f+df$, which is equal to the number of lattice points in the $(n_x,n_y,n_z)$-space between $n$ and $n+dn$, where the $n$ interval corresponds to the $f$ interval by the second formula you wrote in item 2. We haven't yet accounted for the energy associated with those waves. To count the number of points in this volume of the $(n_x,n_y,n_z)$-space, because the density of lattice points is 1, we just compute the corresponding volume: $(\frac{1}{8})4\pi n^2 dn$ (your factor of $2$ comes from the two possible independent polarizations of light waves, and the factor of $\frac{1}{8}$ comes from the fact that we are only interested in the positive octant of the $(n_x,n_y,n_z)$-space). We are interested in an interval of frequencies; the computation that you suggested, then, seems to me not to have any physical meaning.
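The octant counting can be sanity-checked numerically (a sketch of mine, not from the original answer): the number of positive-integer triples $(n_x,n_y,n_z)$ with $n \le N$ approaches the octant volume $\frac{1}{8}\cdot\frac{4}{3}\pi N^3$, whose derivative in $N$ recovers the $\frac{1}{8}\,4\pi n^2\,dn$ density used above.

```python
import numpy as np

def mode_count(N):
    """Positive-integer lattice points with nx^2 + ny^2 + nz^2 <= N^2."""
    n = np.arange(1, N + 1)
    nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
    return int(np.count_nonzero(nx**2 + ny**2 + nz**2 <= N**2))

N = 60
exact = mode_count(N)
approx = np.pi * N**3 / 6  # (1/8) * (4/3) * pi * N^3
print(exact, round(approx), exact / approx)  # ratio tends to 1 as N grows
```

The residual discrepancy is a surface term of order $N^2$, which is why the continuum density is only used for $n \gg 1$.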
Hope it helped.
Edit: I talked to my professor and we concluded that, in fact, the material does not need to be made of metal. Of course, metal guarantees that the electric field vanishes at the walls, but even if the cavity is not made of metal, the tangential component of the electric field must vanish (which gives rise to the standing-wave pattern). If that did not happen, two things could occur: it could make the electric charges of the walls move, violating the thermal equilibrium of the material and the radiation; or, in a more restricted case, it would polarize the material, which would violate the isotropy of the radiation inside the cavity.
Electronic Journal of Probability, Volume 23 (2018), paper no. 66, 24 pp.
Chordal SLE$_6$ explorations of a quantum disk
Abstract
We consider a particular type of $\sqrt{8/3} $-Liouville quantum gravity surface called a doubly marked quantum disk (equivalently, a Brownian disk) decorated by an independent chordal SLE$_6$ curve $\eta $ between its marked boundary points. We obtain descriptions of the law of the quantum surfaces parameterized by the complementary connected components of $\eta ([0,t])$ for each time $t \geq 0$ as well as the law of the left/right $\sqrt{8/3} $-quantum boundary length process for $\eta $.
Article information
Source: Electron. J. Probab., Volume 23 (2018), paper no. 66, 24 pp.
Dates: Received: 31 January 2017; Accepted: 22 March 2018; First available in Project Euclid: 26 July 2018
Permanent link: https://projecteuclid.org/euclid.ejp/1532570594
Digital Object Identifier: doi:10.1214/18-EJP161
Mathematical Reviews number (MathSciNet): MR3835472
Zentralblatt MATH identifier: 06924678
Citation
Gwynne, Ewain; Miller, Jason. Chordal SLE$_6$ explorations of a quantum disk. Electron. J. Probab. 23 (2018), paper no. 66, 24 pp. doi:10.1214/18-EJP161. https://projecteuclid.org/euclid.ejp/1532570594 |
Forgive me if this question is too elementary; however, I haven't found an answer. If $\mathfrak{g}$ is a Lie algebra one can define its Lie algebra cohomology: the definition is quite similar to the way in which de Rham cohomology is defined. For such a theory we can consider coefficients in an arbitrary $\mathfrak{g}$-module $M$. If $\mathfrak{h} \subset \mathfrak{g}$ is a Lie subalgebra one can define the relative theory using the subcomplex consisting of cochains $\varphi$ satisfying $i_X \varphi=0$ and $i_X d\varphi =0$ for every $X \in \mathfrak{h}$. This indeed defines a subcomplex, and the cohomology of this complex is the relative cohomology of $\mathfrak{g}$ (relative to $\mathfrak{h}$).
How can one define the cohomology of a Lie algebra $\mathfrak{g}$ with respect to some group $H$?
Obviously I'm aware that $H$ must have something to do with $\mathfrak{g}$. I found one definition but I'm not sure whether I should cite it here since I'm not sure whether it makes sense (if I should, let me know in the comments and I will edit my post).
I would be very grateful if someone who is familiar with this topic could shed some light on it.
EDIT: The only definition which I saw is the following: we assume that $H$ is a Lie group such that $\mathfrak{h}:=Lie(H)$ acts on $\mathfrak{g}$ and on $M$ (the coefficient module) such that the differential of the action on $\mathfrak{g}$ is $ad_{\mathfrak{g}} \mathfrak{h}$. We consider the complex: $$C^{\bullet}(\mathfrak{g},H;M):=\{ \varphi \in Hom_H(\Lambda^{\bullet}\mathfrak{g},M): i_X \varphi =0 \ \ \forall_{X \in \mathfrak{h}}\}$$ and its cohomology. Maybe I should add that the particular case in which I'm interested is the case when $\mathfrak{g}$ is Lie algebra of all formal vector fields on $\mathbb{R}^n$ and $H=SO(n)$. More context in which I'm interested can be found in this post.
SECOND EDIT: As pointed out in the comments, the relevant definition can be found in Fuchs' book "Cohomology of Infinite-Dimensional Lie Algebras". However, some issues are still not quite clear to me:
1. Consider once again the version of cohomology relative to a Lie subalgebra. Relative cochains are antisymmetric and vanish if the first argument is taken from $\mathfrak{h}$; therefore every such cochain may be viewed as an element of $Hom(\Lambda^{\bullet}(\mathfrak{g}/\mathfrak{h}),M)$. However, the authors claim that such a cochain may be viewed as an element of $Hom_{\mathfrak{h}}(\Lambda^{\bullet}(\mathfrak{g}/\mathfrak{h}),M)$.
Why is every such cochain an $\mathfrak{h}$-module map?
The action of $\mathfrak{h}$ should be understood as follows: $\mathfrak{g}/\mathfrak{h}$ is $\mathfrak{h}$-module by $Y \cdot (X +\mathfrak{h}):=[Y,X]+\mathfrak{h}$ and we require our cochain to be $\mathfrak{h}$-linear in each variable.
2. Now I will quote the assumptions given in the book mentioned above: assume that $\mathfrak{h}$ is the Lie algebra of some finite-dimensional Lie group $H$ and that the actions of $\mathfrak{h}$ on $\mathfrak{g}$ and $M$ are the differentials of certain representations of $H$, the representation of $H$ in $\mathfrak{g}$ being an extension of the adjoint representation of $H$ in $\mathfrak{h}$. Then one sets $C^q(\mathfrak{g},H;M):=Hom_H(\Lambda^{q}(\mathfrak{g}/\mathfrak{h}),M)$.
Here I'm running into several difficulties. First of all, I suspect that $M$ must be equipped with some manifold structure, since we would like to view the action of $\mathfrak{h}$ on $M$ as the differential of some representation (I understand this as follows: we have some representation $\pi:H \to Aut(M)$ and we take its differential $d_e\pi:T_eH \to T_e(Aut(M))$, which can be identified with a map $\mathfrak{h} \to End(M)$). When it comes to the action on $\mathfrak{g}$, I'm not sure, but as far as I understand it, this is just a fancy way of saying that $\mathfrak{h}$ acts on $\mathfrak{g}$ in the standard way, by the Lie bracket, provided we somehow identify $\mathfrak{h}$ with a Lie subalgebra of $\mathfrak{g}$. Without such an identification the sentence "the representation of $H$ in $\mathfrak{g}$ being the extension of the adjoint representation of $H$ in $\mathfrak{h}$" is not clear to me, since we would like to extend from $\mathfrak{h}$ to $\mathfrak{g}$. I would also like to make sure how to understand $\mathfrak{g} / \mathfrak{h}$ in this context (if $\mathfrak{h}$ only acts on $\mathfrak{g}$): my guess would be that we identify $X_1,X_2 \in \mathfrak{g}$ if there are some $Y \in \mathfrak{h}$, $X \in \mathfrak{g}$ such that $X_1-X_2=Y \cdot X$. Is this correct? (I'm afraid that this relation is not transitive.) Finally, we have defined our cochains as elements of $Hom_H(...)$, so they should respect the action of $H$. Now we have assumed that we have representations of $H$ in $M$ and in $\mathfrak{g}$, so the only thing which needs some explanation is whether the action of $H$ on $\mathfrak{g} /\mathfrak{h}$ is well defined. I don't see how to check this, since I'm not quite sure how to understand the action of $H$ on $\mathfrak{g}$, as explained above. So, to summarize, here are all the technicalities which I would like to address:
a) Do we have to assume that $M$ has some manifold structure?
b) How do we understand the action of $\mathfrak{h}$ on $\mathfrak{g}$?
c) How should we understand $\mathfrak{g} / \mathfrak{h}$ in our context?
d) How does $H$ act on $\mathfrak{g} / \mathfrak{h}$?
Forgive me that this question expanded so much: it is quite frustrating when you try to learn one definition and encounter so many difficulties. I'm sure that someone familiar with this stuff would be able to put this in the more understandable form. |
Quasiperiodic oscillations
Anatoly M. Samoilenko (2007), Scholarpedia, 2(5):1783. doi:10.4249/scholarpedia.1783 revision #91689 [link to/cite this article] Quasiperiodic oscillation is an oscillation that can be described by a quasiperiodic function, i.e., a function \(F\) of real variable \(t\) such that \[F(t)= f(\omega _{1} t, \ldots, \omega_{m} t)\]for some continuous function \(f(\varphi _{1}, \ldots, \varphi _{m})\) of \(m\) variables \((m\geq 2),\) periodic on \(\varphi _{1}, \ldots, \varphi _{m}\) with the period \(2\pi,\) and some set of positive frequencies \(\omega _{1}, \ldots, \omega_{m}\ ,\) rationally linearly independent, which is equivalent to the condition\[(k, \omega)=k_{1}\omega _{1}+ \ldots + k_{m}\omega _{m}\neq 0\]for any non-zero integer-valued vector \(k=(k_{1}, \ldots, k_{m})\ .\) The frequency vector \(\omega =(\omega _{1}, \ldots, \omega _{m})\) is often called the frequency basis of a quasiperiodic function.
Properties
Properties of quasiperiodic oscillations depend on the properties of quasiperiodic functions:
The sum or product of quasiperiodic functions, as well as the uniform limit on \(\mathbb{R}\) of a sequence of quasiperiodic functions with the same frequency basis, is also a quasiperiodic function. A quasiperiodic function has mean value
\[ \bar F= \lim \limits_{T\rightarrow \infty} \frac{1}{T}\int\limits_{0}^{T} F(t)dt\ ,\]
and it can be decomposed into a Fourier series
\[F(t)\simeq \sum \limits_{k\in\mathbb{Z}^m} a_{k_{1}\ldots \,k_{m}} e^{i(k_{1}\omega _{1}+ \ldots \,+k_{m}\omega _{m})t}\ ,\] \[a_{k_{1}\ldots \,k_{m}}= \lim \limits_{T\rightarrow \infty} \frac{1}{T}\int\limits_{0}^{T} F(t) e^{-i(k_{1}\omega _{1}+ \ldots \,+k_{m}\omega _{m})t}dt\ ,\]
converging in mean-square:
\[\sum \limits_{k\in\mathbb{Z}^m} |\,a_{k_{1}\ldots \,k_{m}}|^{2}= \lim \limits_{T\rightarrow \infty} \frac{1}{T}\int\limits_{0}^{T} |F(t)|^{2}dt.\]
Values of a quasiperiodic function \(F\) are everywhere dense in the set of values of its defining function \(f\ ,\) that is, for any \(\varphi _{0}\in \mathbb{R}^m\)
\[\lim \limits_{n \rightarrow \infty} F(t_{n})=f(\varphi _{0})\] for some sequence \(t_{n}, \, n=1, 2, \ldots\, . \)
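The mean-value property above is easy to check numerically. The sketch below (my own sanity check, not part of the article) uses the quasiperiodic function \(F(t)=\cos t+\cos(\sqrt{2}\,t)+3\) with frequency basis \((1,\sqrt{2})\); both cosine terms average to zero, so the mean value should approach the constant term 3 as the averaging window grows.

```python
import math

def F(t):
    # quasiperiodic function with frequency basis (1, sqrt(2))
    return math.cos(t) + math.cos(math.sqrt(2.0) * t) + 3.0

def mean_value(T, steps=200000):
    # Riemann-sum approximation of (1/T) * integral_0^T F(t) dt
    dt = T / steps
    total = sum(F(k * dt) for k in range(steps)) * dt
    return total / T

print(mean_value(10000.0))  # close to 3
```

The convergence is slow (the error decays like \(1/T\)), which is typical for time averages of quasiperiodic signals.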
Theory of Oscillations
Quasiperiodic oscillations could appear when an oscillator is forced by a time-dependent input; then they are called
forced quasiperiodic oscillations. They can also be generated intrinsically by a non-linear system without any external forcing; they are often called quasiperiodic auto-oscillations in this case. The frequency basis is imposed by the external forces in the former case, and by the intrinsic properties of the system in the latter case.
Theory of oscillations is a subfield of the applied theory of differential equations (dynamical systems) devoted to studies of oscillations in nature and engineering. The main goal is to prove the existence and determine the properties of oscillatory motions. For quasiperiodic oscillations, this typically reduces to studies of either the system of non-autonomous ordinary differential equations \[\tag{1} \frac{dx}{dt}=X(t, x) \]
with quasiperiodic (with respect to \(t\)) right-hand side, or to the autonomous system of differential equations \[\tag{2} \frac{dx}{dt}=X(x) \ ,\]
where variables \(x\) and \(X\) are \(n\)-dimensional vectors from \(\mathbb{R}^{n}\ ,\) and \(t\) is time.
Phase Coordinates
Using the definition of quasiperiodic function, the non-autonomous system of differential equations (1) can be written in phase variables \(\varphi = (\varphi_{1}, \ldots ,\varphi_{m})\) as \[\tag{3} \frac{d \varphi}{dt}=\omega, \qquad \frac{dx}{dt}=F(\varphi, x) \ ,\]
where \(\omega = (\omega_{1}, \ldots , \omega_{m} )\) is the frequency basis of the right-hand side of (1) as a function of \(t\ ,\) and \(F\) is a function of variables \(\varphi = (\varphi_{1}, \ldots , \varphi_{m} )\ ,\) \(2\pi\)-periodic in each variable \(\varphi_{\nu}, \, \nu=\overline{1,\,m}\ ,\) such that \[\tag{4} X(t, x)=F(\omega t, x)\ .\]
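The passage from (1) to (3) is purely a change of bookkeeping, which a tiny toy computation makes concrete. Below is my own example (not from the article): the forced scalar equation \(dx/dt = -x + \cos t + \cos(\sqrt{2}\,t)\) has a quasiperiodic right-hand side with frequency basis \(\omega=(1,\sqrt{2})\); writing \(F(\varphi_1,\varphi_2,x) = -x + \cos\varphi_1 + \cos\varphi_2\) and letting \(d\varphi/dt=\omega\) reproduces \(X(t,x)=F(\omega t, x)\) exactly, so Euler integration of the autonomous phase form matches the non-autonomous form.

```python
import math

W1, W2 = 1.0, math.sqrt(2.0)   # frequency basis omega = (1, sqrt(2))

def X(t, x):                    # non-autonomous right-hand side of (1)
    return -x + math.cos(W1 * t) + math.cos(W2 * t)

def F(phi1, phi2, x):           # defining function on the 2-torus, as in (4)
    return -x + math.cos(phi1) + math.cos(phi2)

def euler_nonautonomous(x0, dt, n):
    x = x0
    for k in range(n):
        x += dt * X(k * dt, x)
    return x

def euler_phase(x0, dt, n):
    x, phi1, phi2 = x0, 0.0, 0.0
    for _ in range(n):
        x += dt * F(phi1, phi2, x)
        phi1 += dt * W1          # dphi/dt = omega, as in (3)
        phi2 += dt * W2
    return x

a = euler_nonautonomous(0.0, 0.001, 10000)
b = euler_phase(0.0, 0.001, 10000)
print(a, b)  # identical up to floating-point rounding
```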
Except for some general linear results, the theory of quasiperiodic oscillations, founded by Poincaré, was most successful within the framework of perturbation theory, which concerns the existence, construction, and properties of solutions of perturbed quasiperiodic systems of the form (1), (2).
Let \(C^{r}(\mathbb{T}^{m})\) be a space of \(r\)-times continuously differentiable functions \(f\) defined on the \(m\)-torus \(\mathbb{T}^{m}\) with variables \(\varphi_{1}, \ldots , \varphi_{m}\) (\(f\) is \(2\pi\)-periodic in each variable \(\varphi_{\nu}, \, \nu=\overline{1,\,m}\)). Let \[\tag{5} x(t)=f(\omega t), \qquad f\in C^{r}(\mathbb{T}^{m}), \qquad r\geq 2, \]
be a quasiperiodic solution of (2). Then, under the general assumptions \[\tag{6} {\rm rank} \, \frac{\partial f(\varphi)}{\partial \varphi}=m<n, \qquad \varphi \in\mathbb{T}^{m},\]
the set \[\tag{7} x=f(\varphi), \qquad \varphi \in \mathbb{T}^{m} \]
is a manifold in \(\mathbb{R}^{n}\ ,\) which is diffeomorphic to the \(m\)-dimensional torus \(\mathbb{T}^{m}\ .\) Actually, the system (2) has an \(m\)-parameter family of quasiperiodic solutions that is defined by the equation \[\tag{8} x(t, \varphi)=f(\omega t+\varphi)\]
for any value of \(\varphi \in\mathbb{T}^{m}\ .\) The set (7) is called the \(C^{r}\)-smooth invariant toroidal manifold of system (2). It determines the property of quasiperiodic auto-oscillation of this system. The quasiperiodic trajectory (5) of system (2) is said to form an everywhere dense winding of this manifold.
Amplitude–Phase Coordinates
In the case \(n>2m\) or \(n=m+1\ ,\) the system can be reduced to the canonical form \[ \frac{d\varphi}{dt}= \omega +A(\varphi, h), \qquad \frac{dh}{dt}=P(\varphi, h)h \] using local coordinates \(\varphi, h\) in a neighborhood of manifold (7) in the space \(\mathbb{R}^{n}\ .\) Here, the function \(P\) is periodic in \(\varphi=(\varphi _{1}, \ldots ,\varphi _{m})\) with period \(2\pi\) with respect to each variable \(\varphi_{\nu}, \, \nu=\overline{1, m}\ .\) Systems with other relations among dimensions \(n\) and \(m\) can also be reduced to the canonical form, but, first, (2) must be embedded into a larger system of appropriate dimension consisting of (2) and, for example, \[ \frac{dy_\nu}{dt}=y_\nu, \qquad \nu=\overline{1, k} \ .\] In this case, the perturbed system (2) of the type \[\tag{9} \frac{dx}{dt}=X(x)+\mu X_1(x, \mu) \]
reduces to the system of equations \[ \frac{d\varphi}{dt}= \omega +\mu a(\varphi, h, \mu)\ ,\] \[\tag{10} \frac{dh}{dt}=P(\varphi, h, \mu)h+\mu c(\varphi).\]
Studies of Quasiperiodic Solutions
Studies of quasiperiodic solutions of (10) consist of two problems:
Prove the existence of an invariant manifold for (10) of the form
\[\tag{11} M(\mu)\colon h=u(\varphi, \mu), \]
where \(u\) is a function periodic in \(\varphi=(\varphi _{1}, \ldots ,\varphi _{m})\) with period \(2\pi\) in each variable, that contracts as \(\mu \rightarrow 0\) to the unperturbed manifold of the system (9) \[M(0)\colon h=0 \ .\]
Prove the existence of such solutions of the system (10) on the invariant manifold (11), i.e., the solutions of the system
\[ \frac{d\varphi}{dt}= \omega +\mu a(\varphi, u(\varphi, \mu), \mu)\] that correspond to quasiperiodic solutions of the system (10) with respect to coordinate \(h\) \[ h(t)=u(\varphi(t), \mu)=g(\omega t, \mu)\ ,\]
where \(g\in C^{s}(\mathbb{T}^{m})\) for \(|\mu| \ll 1,\) and \(\omega=(\omega_1,\, \ldots, \,\omega_m)\) is the frequency basis.
The first problem could be solved using perturbation methods (Krylov and Bogolyubov 1934, Bogolyubov and Mitropolski 1961, Samoilenko 1991). The second problem could be solved using Poincaré–Denjoy theory and methods of accelerated convergence of Newton iterations (see Kolmogorov 1954, Arnold 1963, Moser 1968, Bogolyubov et al. 1969).
Among perturbation methods used in the theory of quasiperiodic oscillations the most notable is the method of Krylov and Bogolyubov (1934; see also Bogolyubov and Mitropolski 1961 and Bogolyubov et al. 1969). In particular, this method allows one to study quasiperiodic auto-oscillations of systems of the form \[\tag{12} \frac{d^{2} x_{\nu}}{dt^{2}}+\omega^{2}_{\nu}x_{\nu}= \varepsilon f_{\nu}(x_{1}, \ldots , x_{n}; \frac{dx_1}{dt}, \ldots , \frac{dx_n}{dt}), \quad \nu=\overline{1, n} \ ,\]
where \(\varepsilon\) is a small positive parameter. The amplitude–phase coordinates \(a\) and \(\varphi\) and the formulas \[ x_{\nu}=a_\nu \cos \varphi _\nu, \qquad \frac{dx_\nu}{dt}=-a_\nu \omega_\nu \sin \varphi _\nu \] reduce (12) to a system of the form (10) \[ \frac{da}{dt}=\varepsilon A(a, \varphi, \varepsilon)\ ,\] \[\tag{13} \frac{d\varphi}{dt}=\omega +\varepsilon B(a, \varphi, \varepsilon) \]
whose right-hand side is \(2\pi\)-periodic with respect to \(\varphi=(\varphi_1, \ldots, \varphi_n)\) in each \(\varphi _\nu, \, \nu=\overline{1, n}\ .\)
Under general assumptions, (13) has an invariant manifold \[\tag{14} a=a(\varphi, \varepsilon)\ ,\]
where \(a\in C^{s}(\mathbb{T}^n)\) for \(\varepsilon \ll 1\) whenever the averaged system of amplitude equations \[\frac{db}{dt}=\varepsilon A(b)\ ,\] \[A(b)=\frac{1}{(2\pi)^n} \int\limits^{2\pi}_0 \cdots \int\limits^{2\pi}_0 A(b, \varphi, 0)\,d\varphi_1 \cdots d\varphi_n \] has a “rough” equilibrium at \(b=b^0\ :\) \[A(b^0)=0\] and all eigenvalues of \(\frac{\partial A(b^0)}{\partial b}\) have nonzero real parts. The system (13) on the manifold (14) can be reduced to the system of equations \[\tag{15} \frac{d\varphi}{dt}=\omega + \varepsilon B(a(\varphi, \varepsilon),\, \varphi, \, \varepsilon) \ .\]
If \(n=2\ ,\) then we can apply the Poincaré–Denjoy theory and its generalizations. If \(n=3\ ,\) then we can apply the Arnold–Moser theorem (Bogolyubov et al. 1969, Arnold et al. 1993). These results provide existence conditions for solutions of (15) that guarantee the quasiperiodicity of auto-oscillations of system (12), which are close to the oscillations \[ x_\nu=b_\nu^0\cos(\omega_\nu t+\varphi_\nu), \qquad \nu=\overline{1, n}\ ,\] \[ \frac{dx_\nu}{dt}=-b_\nu^0 \omega_\nu \sin(\omega_\nu t+\varphi_\nu), \qquad \varphi_\nu=const \ .\]
The theory of quasiperiodic oscillations was extensively developed in the context of perturbation theory of Hamiltonian systems with the Hamiltonian \[H(p, q, \varepsilon)=H_0(p)+\varepsilon H_1(p, q)+ \ldots \ ,\] where \(H\) is a function \(2\pi\)-periodic in \(q=(q_1, \, \ldots,\, q_n)\) with respect to each variable \(q_\nu\ ,\) \(\nu=\overline{1, n}\ ,\) and \(\varepsilon\) is a small parameter. There is a separate theory for such systems, namely, KAM-theory (Kolmogorov–Arnold–Moser Theory; see also Arnold et al. 2002).
It should be noted that quasiperiodic oscillations are studied not only in models described by ordinary differential equations. Many researchers study such oscillations as solutions of systems of evolution equations with deviating argument, partial differential equations, and equations in infinite-dimensional spaces.
References
Arnold V.I. (1963) Small denominators and problems of the stability of motion in classical and celestial mechanics (in Russian). Usp. Mat. Nauk. 18, 91--192 (English transl. in Russ. Math. Surv. 18 (1963), 85–193).
Arnold V.I., Kozlov V.V., Neishtadt A.I. (1993) Mathematical aspects of classical and celestial mechanics. Translated from the 1985 Russian original by A. Iacob. Reprint of the original English edition from the series Encyclopaedia of Mathematical Sciences [Dynamical systems. III, Encyclopaedia Math. Sci., 3, Springer, Berlin, 1993]. Springer-Verlag, Berlin, 1997. xiv+291 pp.
Bogoliubov N.N., Mitropolsky Yu.A. (1961) Asymptotic methods in the theory of nonlinear oscillations. New York: Gordon and Breach Sci. Publ. (Delhi: Hindustan Publishing Corp., India)
Bogoliubov N.N., Mitropolsky Yu.A., Samoilenko A.M. (1976) Methods of accelerated convergence in nonlinear mechanics. Springer-Verlag, Berlin–New York.
Krylov N. M. and Bogolyubov N. N. (1934) New Methods of Nonlinear Mechanics. Moscow–Leningrad (in Russian).
Kolmogorov A. N. (1954) On conservation of conditionally periodic motions for a small change in Hamilton's function. (Russian) Dokl. Akad. Nauk SSSR (N.S.) 98, 527–530.
Moser J. (1968) Lectures on Hamiltonian Systems. Mem. AMS, no. 81. P. 1–60.
Poincaré H. (1880–1890) Mémoire sur les courbes définies par les équations différentielles I–VI, Oeuvre I. Gauthier-Villar: Paris
Samoilenko A.M. (1991) Elements of the Mathematical Theory of Multi-Frequency Oscillations. Mathematics and Its Applications, V. 71. Kluwer Academic Publishers Group, Dordrecht, Netherlands.
For integer $n \ge 1$, let $[n]$ be a short hand for the interval of integers $\{ 1, 2,\ldots, n \}$.
Let $\{ s_1, s_2, \ldots, s_p \}$ be the set of sides of a bunch of squares that cover a rectangle of dimension $w \times h$.
Since $\mathbb{R}$ is a vector space over $\mathbb{Q}$, there is a Hamel basis $E$ of $\mathbb{R}$ over $\mathbb{Q}$. Every real number can be uniquely expressed as a finite linear combination of elements from $E$ with rational coefficients. Only finitely many $e \in E$ appear in the expansions of $s_1, \ldots, s_p$. Let $e_1, \ldots, e_q \in E$ be those that appear in the expansion of some $s_i$. There are $p \times q$ coefficients $\alpha_{ij} \in \mathbb{Q}$, $(i,j) \in [p] \times [q]$, such that
$$s_i = \sum_{j=1}^q \alpha_{ij} e_j\quad\text{ for } i \in [p]$$
Furthermore, for each $j \in [q]$, there is some $i \in [p]$ with $\alpha_{ij}\ne 0$.
Rescaling the $e_j$ if necessary, we can assume all $\alpha_{ij} \in \mathbb{Z}$.
Under this setting, it is easy to see we can find integers $w_j, h_j \in \mathbb{Z}, j \in [q]$ such that
$$w = \sum_{j=1}^q w_j e_j\quad\text{ and }\quad h = \sum_{j=1}^q h_j e_j$$ For each $j \in [q]$, define the function $f_j : [0,w] \times [ 0, h ] \to \mathbb{R}$ by $f_j(x,y) = \frac{\alpha_{ij}}{s_i}$ whenever $(x,y)$ is covered by a square of side $s_i$. Aside from a set of measure zero, $f_j$ is well defined. It is a piecewise constant function and integrable over $[0,w]\times[0,h]$. We can evaluate its integral over $[0,w]\times [0,h]$ in two different orders.
Aside from finitely many choices of $y_0$, the line $y = y_0$ cuts through finitely many squares "normally" (i.e., not along an edge). Let $s_{i_1}, s_{i_2}, \ldots, s_{i_r}$ be the sides of the squares it cuts through, ordered from left to right. We have
$$\int_0^w f_j(x,y_0) dx= \sum_{k=1}^r \int_{\sum_{\ell=1}^{k-1} s_{i_\ell}}^{\sum_{\ell=1}^{k} s_{i_\ell}}\frac{\alpha_{i_k j}}{s_{i_k}} dx= \sum_{k=1}^r \alpha_{i_k j}\in \mathbb{Z}$$ Notice $$\sum_{j=1}^q e_j \int_0^w f_j(x,y_0) dx = \int_0^w \sum_{j=1}^q e_j f_j(x,y_0) dx = \int_0^w dx = w$$ We obtain
$$\sum_{j=1}^q \left(\sum_{k=1}^r \alpha_{i_k j}\right)e_j= w = \sum_{j=1}^q w_j e_j$$
Since the $e_j$ are linearly independent over $\mathbb{Q}$, we obtain
$$\int_0^w f_j(x,y_0) dx = \sum_{k=1}^r \alpha_{i_k j} = w_j$$
From this, we can deduce $$\int_0^h\int_0^w f_j(x,y) dx dy = w_j h$$
By a similar argument, we have
$$\int_0^w\int_0^h f_j(x,y) dy dx = h_j w$$
Since these functions are integrable, we have
$$w_j h = \int_0^h\int_0^w f_j(x,y) dx dy = \int_0^w\int_0^h f_j(x,y) dy dx = h_j w$$
Since $w \ne 0$, some $w_j \ne 0$. Say $w_1 \ne 0$; then $w_1 h = h_1 w \implies h_1 \ne 0$. As a result, $$\frac{w}{h} = \frac{w_1}{h_1} \in \mathbb{Q}$$
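The Hamel-basis bookkeeping above can be mimicked exactly on a computer, which may make the coefficient-comparison step more tangible. Below is my own toy illustration (the basis is hypothetical): a number of the form $q_1 e_1 + q_2 e_2$, with $e_1, e_2$ linearly independent over $\mathbb{Q}$ (think $e_1=1$, $e_2=\sqrt{2}$), is stored as a tuple of exact rationals. The basis elements are never evaluated numerically, so equality of numbers reduces to equality of coefficient vectors, exactly the step used to match the coefficients $w_j$ and $h_j$ in the proof.

```python
from fractions import Fraction

# A "number" is a tuple of Fractions: coefficients over the basis (e1, e2).
def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(q, a):
    return tuple(Fraction(q) * x for x in a)

# sides s1 = 1*e1 + 2*e2, s2 = 3*e1 - 1*e2 (integer alpha_ij, as after rescaling)
s1 = (Fraction(1), Fraction(2))
s2 = (Fraction(3), Fraction(-1))

w = add(s1, s2)   # a width built from the two sides
print(w)          # (4, 1): w = 4*e1 + 1*e2, i.e. w_1 = 4, w_2 = 1
```

Comparing two such tuples componentwise is legitimate precisely because of linear independence over $\mathbb{Q}$; no floating-point approximation of $e_2$ is ever needed.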
You are given $$$n$$$ integers $$$a_1, a_2, \ldots, a_n$$$. Each of $$$a_i$$$ has between $$$3$$$ and $$$5$$$ divisors. Consider $$$a = \prod a_i$$$ — the product of all input integers. Find the number of divisors of $$$a$$$. As this number may be very large, print it modulo prime number $$$998244353$$$.
The first line contains a single integer $$$n$$$ ($$$1 \leq n \leq 500$$$) — the number of numbers.
Each of the next $$$n$$$ lines contains an integer $$$a_i$$$ ($$$1 \leq a_i \leq 2\cdot 10^{18}$$$). It is guaranteed that the number of divisors of each $$$a_i$$$ is between $$$3$$$ and $$$5$$$.
Print a single integer $$$d$$$ — the number of divisors of the product $$$a_1 \cdot a_2 \cdot \dots \cdot a_n$$$ modulo $$$998244353$$$.
Hacks input
For hacks, the input needs to be provided in a special format.
The first line contains an integer $$$n$$$ ($$$1 \leq n \leq 500$$$) — the number of numbers.
Each of the next $$$n$$$ lines contains a prime factorization of $$$a_i$$$. The line contains an integer $$$k_i$$$ ($$$2 \leq k_i \leq 4$$$) — the number of prime factors of $$$a_i$$$ and $$$k_i$$$ integers $$$p_{i,j}$$$ ($$$2 \leq p_{i,j} \leq 2 \cdot 10^{18}$$$) where $$$p_{i,j}$$$ is the $$$j$$$-th prime factor of $$$a_i$$$.
Before supplying the input to the contestant, $$$a_i = \prod p_{i,j}$$$ are calculated. Note that each $$$p_{i,j}$$$ must be prime, each computed $$$a_i$$$ must satisfy $$$a_i \leq 2\cdot10^{18}$$$ and must have between $$$3$$$ and $$$5$$$ divisors. The contestant will be given only $$$a_i$$$, and not its prime factorization.
For example, you need to use this test to get the first sample:
3
2 3 3
2 3 5
2 11 13
From the technical side, this problem is interactive. Therefore, do not forget to output end of line and flush the output. Also, do not read more than you need. To flush the output, use:
Examples

Input
3
9
15
143
Output
32

Input
1
7400840699802997
Output
4

Input
8
4606061759128693
4606066102679989
4606069767552943
4606063116488033
4606063930903637
4606064745319241
4606063930904021
4606065559735517
Output
1920

Input
3
4
8
16
Output
10
In the first case, $$$a = 19305$$$. Its divisors are $$$1, 3, 5, 9, 11, 13, 15, 27, 33, 39, 45, 55, 65, 99, 117, 135, 143, 165, 195, 297, 351, 429, 495, 585, 715, 1287, 1485, 1755, 2145, 3861, 6435, 19305$$$ — a total of $$$32$$$.
In the second case, $$$a$$$ has four divisors: $$$1$$$, $$$86028121$$$, $$$86028157$$$, and $$$7400840699802997 $$$.
In the third case $$$a = 202600445671925364698739061629083877981962069703140268516570564888699375209477214045102253766023072401557491054453690213483547$$$.
In the fourth case, $$$a=512=2^9$$$, so answer equals to $$$10$$$.
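The counting step implicit in these notes follows from the standard divisor formula: if $a = \prod p^{e_p}$, then $d(a) = \prod (e_p + 1)$. The sketch below (mine, and deliberately not a full solution — the actual contest task is to recover each $a_i$'s factorization from its value alone) shows only this final counting, taken modulo $998244353$.

```python
MOD = 998244353

def divisor_count_mod(factorizations):
    # factorizations: list of lists of primes, one inner list per a_i
    exponents = {}
    for primes in factorizations:
        for p in primes:
            exponents[p] = exponents.get(p, 0) + 1
    ans = 1
    for e in exponents.values():
        ans = ans * (e + 1) % MOD   # d(a) = prod (e_p + 1)
    return ans

# first sample: 9 = 3*3, 15 = 3*5, 143 = 11*13  ->  32 divisors
print(divisor_count_mod([[3, 3], [3, 5], [11, 13]]))  # 32
# fourth sample: 4 * 8 * 16 = 2^9  ->  10 divisors
print(divisor_count_mod([[2, 2], [2, 2, 2], [2, 2, 2, 2]]))  # 10
```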
What Is Stereo Disparity?
Stereopsis is a term that refers to the perception of depth, and thus 3D structure, based on observing a scene from two different vantage points. The way this works in nature is that humans, and quite a few other animals, have two eyes positioned so that they observe the scene from two different positions, and thus two slightly different versions of the scene are projected onto the retinas. Based on this, the visual cortex deduces the 3D structure of the scene being observed. In computer vision this concept is typically called stereo, stereo disparity being the difference in position between the two images, and it is used for 3D reconstruction of a scene. The image above shows a 3D reconstruction based on stereo disparity. As can be understood, this kind of technology can be used for making 3D scanners, amongst other things.
Stereo Disparity Briefly
The following figure clarifies how stereopsis, or stereo disparity, is used in computer vision:
\( C_1 \) and \( C_2 \) are the 'left' and 'right' camera centres, \(X\) is the point of interest in the 3D world, while \(x\) and \(x^\prime\) are the images of \(X\) as seen by the respective cameras. As can be seen, this is just two pinhole cameras, one next to the other. Now, if \(x=[x\;y\;1]\) and \(x^\prime=[x^\prime\;y^\prime\;1]\), then stereo disparity is defined as the difference in horizontal position, \(d=x-x^\prime\). The way this is typically done in computer vision is that for each pixel in the left camera we try to find the corresponding pixel in the right camera. By reversing this process, if we know the positions \(x\) and \(x^\prime\), then by using simple trigonometry we can work out the coordinates \( X = [X\;Y\;Z\;1]^T\) with respect to the left camera. This well-known process is called triangulation. What this means is that we know where each pixel lies in the 3D world with respect to the left camera centre, which is how stereo-based 3D scanners work. Below there is a more in-depth description of the actual process of triangulation.
Stereo Disparity Map
The stereo disparity map, which is inversely proportional to distance, can be visualized easily, and it tells us something about the 3D structure of the scene being observed. The following is an example of using stereo disparity and 3D reconstruction in the field of robotics.
In order for a robot to manipulate objects of interest, it needs to know where these are with respect to a known coordinate system and what these objects look like. Based on this we can calculate something called a grasping vector, but that is a different story and we'll leave it there for the time being.
Triangulation
We can think of a pixel in the camera plane as being formed by a 'ray of light' described by a vector; for example, in the 'left' camera case, the vector would be \( \overline{XC_1}\). If we know the internal parameters of the cameras, we can 'backproject' these rays. If we also know the external parameters of the cameras (e.g. the rotation \( R \) and translation \( T \) between the left and the right cameras), then we can calculate where these backprojected rays intersect and, thus, obtain the 3D coordinates of the point of interest \(X\) in a metric space.
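In the rectified, fronto-parallel special case the intersection can be written in closed form (this is my simplification of the general ray-intersection described above): with focal length \(f\) in pixels, baseline \(B\) between the camera centres, principal point \((c_x, c_y)\), and disparity \(d = x - x^\prime\), the depth is \(Z = fB/d\), and \(X, Y\) follow from the pinhole model. The parameter values below are made up for illustration.

```python
def triangulate(x, y, d, f, B, cx, cy):
    # Rectified-stereo triangulation: (cx, cy) is the principal point,
    # all image quantities in pixels, B in metres.
    Z = f * B / d               # depth from disparity
    X = (x - cx) * Z / f        # backproject through the pinhole model
    Y = (y - cy) * Z / f
    return X, Y, Z

X, Y, Z = triangulate(x=400.0, y=300.0, d=16.0, f=800.0, B=0.1, cx=320.0, cy=240.0)
print(Z)  # 800 * 0.1 / 16 = 5.0 metres
```

Note the reciprocal relation between \(d\) and \(Z\): nearby points have large disparity, distant points have disparity near zero, which is why the disparity map is said to be inversely proportional to distance.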
Epipolar Rectification
In order to calculate the stereo disparity map we have to find corresponding pixels \(x^\prime\) in the right camera for pixels \(x\) observed in the left camera. The good news is that by using a concept of epipolar rectification this problem can be simplified so that the corresponding pixel can be found on the same horizontal line in the right image as where the pixel is located in the left image. Not only does this considerably simplify the problem, but it reduces the computational complexity as well.
The following two figures show the concept of epipolar rectification. In the figures below, the left-hand image is captured by the left camera and the right-hand image is captured by the right camera. The upper figure shows the non-rectified case, while the lower figure shows the same images after having been rectified.
The above figure shows non-rectified stereo images. As can be seen, corresponding pixels are not on the same horizontal lines (i.e. the vertical coordinate differs between the two images).

The above figure shows rectified stereo images. As can be seen, the corresponding pixel can now indeed be found on the same horizontal line (i.e. the vertical coordinate is the same in both).
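Once the images are rectified, the correspondence search becomes one-dimensional. Before the variational model discussed next, it is worth seeing the simplest possible baseline (my own toy example, in 1-D for brevity): for each pixel in the left row, slide a small window along the same row of the right image and keep the shift with the smallest sum of squared differences (SSD). Real block matchers work on 2-D images with 2-D windows, but the principle is identical.

```python
def disparity_row(left, right, max_d, half=1):
    # Brute-force SSD block matching along one rectified scanline.
    n = len(left)
    disp = [0] * n
    for x in range(half, n - half):
        best, best_d = None, 0
        for d in range(0, max_d + 1):
            if x - d - half < 0:        # window would leave the right image
                break
            ssd = sum((left[x + k] - right[x - d + k]) ** 2
                      for k in range(-half, half + 1))
            if best is None or ssd < best:
                best, best_d = ssd, d
        disp[x] = best_d
    return disp

# right row is the left row's pattern shifted left by 2 pixels (d = x - x' = 2)
left  = [0, 0, 9, 5, 7, 0, 0, 0]
right = [9, 5, 7, 0, 0, 0, 0, 0]
print(disparity_row(left, right, max_d=3))  # → [0, 0, 0, 2, 2, 2, 0, 0]
```

Textureless pixels (the zeros) match everywhere equally well and default to disparity 0 — one of the ambiguities that the smoothness term of the variational model below is designed to resolve.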
Computing Stereo Disparity
As can be understood, there are many different ways of finding these corresponding pixels. The one I explain below is a so-called variational model (based on the calculus of variations). For better insight into the model and how it is solved, have a look at my thesis. In the variational disparity calculation, the energy functional describing the system is as follows:
\[ E(d) = \min_{d} \int_{\Omega} \Psi \Big( Edata(d)^2 \Big) dx + \alpha \int_{\Omega} \Psi \Big( Esmooth(d)^2 \Big) dx\]
where Edata and Esmooth are the data and the smoothness (energy) terms, d is the disparity, and \( \Psi(s^2)\) is a robust error function. The data term measures how well the 'model' fits the data, while the smoothness term regularizes the solution to be smooth. Edata and Esmooth are defined as follows:
\[\begin{split}Edata(d) = &I_L(x,y) - I_R(x+d,y) \\ Esmooth(d)^2 = &| \nabla d|^2 \end{split}\]
where \(I_{\{L,R\}}\) refers to the left- or right stereo-image, respectively, while \( \nabla = [\frac{\partial}{\partial x}, \frac{\partial}{\partial y}] \) is the gradient operator. In a sense, we are looking for a transformation, defined by \( d \), that moves/morphs the right image into the left image (this is the 'physical' interpretation of the data term), while imposing smoothness on the solution (i.e \(d\)) simultaneously.
One possible error function is as follows:
\[ \Psi( s^2) = \sqrt{ s^2 + \epsilon^2 }\]
The purpose of the robust error function is to deal with 'outliers'. In the Edata case such outliers are, for example, occluded zones (i.e. image structures that are present/visible only in one of the images). In the Esmooth case purpose of the robust error function is to make the solution piece-wise smooth: we do not want to propagate values across object boundaries or, in other words, values residing on different disparity levels.
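A quick numerical look at this error function makes the robustness claim concrete: compared with the quadratic penalty \(s^2\), the function \(\Psi(s^2)=\sqrt{s^2+\epsilon^2}\) grows only linearly in \(|s|\), so a few large residuals (occlusions, object boundaries) cannot dominate the energy, while \(\epsilon>0\) keeps it differentiable at \(s=0\). The value of \(\epsilon\) below is an arbitrary illustrative choice.

```python
import math

def psi(s, eps=1e-3):
    # robust penalty Psi(s^2) = sqrt(s^2 + eps^2)
    return math.sqrt(s * s + eps * eps)

for s in (0.1, 1.0, 10.0):
    print(s, psi(s), s * s)
# at s = 10 the quadratic cost is 100, the robust cost only ~10
```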
Extended Variational Model for Disparity
The extended model contains an additional term, which is based on what might be known of the solution beforehand. Such knowledge can be, for example, that the sky is far away from the viewer (disparity 0 or near 0), roads are relatively flat surfaces, as are many other man built structures like walls, tables and so on. This term allows encoding context-related knowledge in the variational methods.
\[\begin{equation} \min_{d} \int_{\Omega} \Psi \Big( Edata(d)^2 \Big) dx + \int_{\Omega} \Psi \Big( (d_{sc}-d)^2 \Big) dx + \alpha \int_{\Omega} \Psi \Big( Esmooth(d)^2 \Big) dx \end{equation}\]
where \( d_{sc} \) is an a priori disparity map that in a sense 'guides' the disparity calculation. More information about constraining the solution, and results, can be found at PUBLICATIONS - CONSTRAINTS. An HTML version of a paper describing how disparity and optical flow fields can be constrained is available here.
Resolving the Equations(s)
A necessary (but not sufficient) condition for the minima is for the corresponding Euler-Lagrange equation(s) to be zero. Following is the corresponding Euler-Lagrange equation in elliptic form.
\[ \Psi^{\prime} \left( Edata^2 \right) Edata \frac{\partial I_R(x+d,y)}{\partial x} + \alpha \, DIV \left( \Psi^{\prime} \left(Esmooth^2 \right) \nabla d \right) = 0 \]
where \( DIV \) is the divergence operator.
Because of the late linearization of the data term, the model copes with large displacements. However, this comes at a price: the energy functional may not be convex. Therefore, searching for a suitable minimizer becomes more difficult.
A suitable minimizer is searched for using a coarse-to-fine strategy, while non-linearities are dealt with using a fixed point scheme. Eventually, after discretization and linearization, we are left with a linear system of equations:
\[ Ax = y \]
where \(y\) is a column vector of size \( mn \) and \(A\) is a diagonally dominant sparse matrix of size \( mn \times mn \) (\(m\) and \(n\) refer to the size of the input image).
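Diagonal dominance is what makes simple stationary iterative solvers converge on such systems. A minimal sketch of the kind of solver typically used in variational disparity codes (a Gauss–Seidel sweep; real implementations exploit the 5-point sparsity pattern instead of storing \(A\) densely, and often add SOR relaxation or a multigrid hierarchy — the toy matrix below is mine):

```python
def gauss_seidel(A, y, iters=100):
    # Solve A x = y by Gauss-Seidel; converges when A is diagonally dominant.
    n = len(y)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (y[i] - s) / A[i][i]   # update in place, using fresh values
    return x

# small diagonally dominant toy system with exact solution [1, 1, 1]
A = [[ 4.0, -1.0,  0.0],
     [-1.0,  4.0, -1.0],
     [ 0.0, -1.0,  4.0]]
y = [3.0, 2.0, 3.0]
x = gauss_seidel(A, y)
print(x)  # converges to [1, 1, 1]
```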
Given a language L defined by a Turing Machine that decides it, is it possible to determine algorithmically whether L lies in NP?
No. First, by Rice's Theorem, this is a nontrivial property of TMs that depends only on the language they compute, so it cannot be decidable.
But, more than that, it is known that the index set of $NP$ (that is, the set of TMs that compute languages in $NP$) is $\Sigma^0_3$-complete ($\Sigma^0_3$ in the
arithmetic hierarchy of computability, not the polynomial hierarchy).
A few more great nuggets from Hajek's paper:
The index set of $P$ is $\Sigma^0_3$-complete.

$\{i : P^{L(M_i)} \neq NP^{L(M_i)}\}$ is $\Pi^0_2$-complete.

There is a total Turing machine (one that halts on all inputs) $M_i$ such that $P^{L_i} = NP^{L_i}$ but the statement "$P^{L_i} = NP^{L_i}$" is independent (where $L_i = L(M_i)$). Similarly for relativizations where $P \neq NP$.
The answer to your literal question is no, as Joshua Grochow pointed out.
However, as Holger stated, it is possible to check in linear time whether the nondeterministic Turing machine (NTM) "clocks itself" and halts after n^k steps for some constant k, through some standard way of simulating a clock (such as the code below). Often when a paper or book suggests (incorrectly) that it is possible to determine whether a NTM is polynomial time, this is what they really mean. Perhaps this is why you asked the question? (I had the same question when I first learned complexity theory, and somewhere saw the statement that it is possible to check whether a TM is poly-time.) The real question is
why one might wish to do this, which I discuss below after explaining how.
There are lots of ways to add such a clock feature. For instance, imagine that on input x of length n, we alternately execute one statement of the "primary algorithm" being clocked, and then one statement of the following algorithm, which finishes in (something close to) n^k steps:
for i_1 = 1 to n
  for i_2 = 1 to n
    ...
      for i_k = 1 to n
        no-op;
return;
If the above code returns before the primary algorithm halts, then halt the entire computation (say, with rejection).
The algorithm that decides if a NTM is of this form, if interpreted as an attempt at an algorithm to decide whether its input is a poly-time NTM, will report some false negatives: some NTMs are guaranteed to halt in polynomial time, even though they do not alternate executing one statement of an algorithm with one statement of a clock like the code above (hence would be rejected despite being poly-time).
But there are no false positives. If a NTM passes the test, then it definitely halts in polynomial time, hence it defines some NP language. However, perhaps the behavior of its underlying primary algorithm is altered, if the clock sometimes runs out before the primary algorithm halts, causing the computation to reject even though the primary algorithm may have accepted if given enough time to finish. Therefore the language decided may be different than that of the primary algorithm.
But, and this is key, if the primary algorithm being executed is in fact a polynomial-time algorithm running in time p(n), and if the constant k in the clock is large enough that n^k > p(n), then the primary algorithm will always halt before the clock runs out. In this case, the answer of the primary algorithm is not altered, so the primary algorithm and clocked NTM simulating it therefore decide the same NP language.
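Here is a toy rendering of the clocked construction (mine, with a Python generator standing in for a single-step machine simulator): the primary algorithm yields control once per "step", we interleave it with an n^k-step counter, and we reject if the clock finishes first. With k large enough, the clock never fires and the primary algorithm's answer passes through unchanged — exactly the key point above.

```python
def run_clocked(primary, n, k):
    """primary: a generator that yields once per simulated step and then
    returns its answer.  Reject if the n^k clock runs out first."""
    budget = n ** k
    for _ in range(budget):
        try:
            next(primary)                # one step of the primary algorithm
        except StopIteration as stop:
            return stop.value            # primary finished within the budget
    return "reject"                      # clock ran out first

def slow_accept(n):
    # a hypothetical primary algorithm that takes 2*n steps, then accepts
    for _ in range(2 * n):
        yield
    return "accept"

print(run_clocked(slow_accept(10), n=10, k=1))  # budget 10 < 20 steps: reject
print(run_clocked(slow_accept(10), n=10, k=2))  # budget 100 > 20 steps: accept
```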
Why is this important? This means that it is possible to "enumerate all the NP languages" (which, as I said, is in the literature often inaccurately stated as "decide whether a given NTM is poly-time" or "enumerate all the poly-time NTM's"). More precisely, it is possible to enumerate an infinite list of NTM's M_1, M_2, ..., with the properties that
Each M_k runs in polynomial time (for instance, by attaching an n^k-time clock to M_k), hence decides some NP language.

Each NP language is the language decided by some M_i in the list.
What does not happen is that
every polynomial-time NTM is on the list. But each NP language has an infinite number of NTM's representing it. Thus, each NP language is guaranteed to have at least some of its representative NTM's on the list, specifically all those NTM's at a large enough index k that n^k exceeds the running time of M_k.
This is useful for doing tricks like diagonalization, which require algorithmically enumerating such infinite (or unbounded) lists of all NP languages. And of course, this whole discussion applies to many other kinds of machines besides poly-time NTMs, such as poly-time deterministic TMs.
I want to remark that the answer is yes if you consider the input to be a clocked Turing machine, i.e., there is a clock that lets the Turing machine perform $p(n)$ steps and then accepts/rejects. Now checking whether the language decided by the machine is in NP is a syntactic property that boils down to deciding whether the machine is a well-formed nondeterministic Turing machine with a polynomial clock. |
There is nothing wrong with using the squared distance.
You converted the problem into a one-parameter minimization problem. You are looking for the minimum of a smooth function on the interval $[-\sqrt{262090/6},\sqrt{262090/6}]$. The minimum value is attained at a zero of the derivative (a critical point) or at one of the endpoints. The function is a downward-opening parabola, so you know that any critical point is a local maximum. Therefore you have to look at the endpoints.
Alternatively, you could have converted the problem into a minimization over $y$. You can solve for $x=\pm\sqrt{(262090-y^2)/6}$. If you draw a picture, it becomes clear that the closest point must be in the right half of the ellipse. (If you don't believe in pictures, you can treat the two halves separately.) In this half $x>0$.
This leads to the squared distance being $$\begin{split}&(x-1045)^2+(y-0)^2\\=&\frac16(262090-y^2)-2090\sqrt{(262090-y^2)/6}+1045^2+y^2\\=&\frac56y^2-2090\sqrt{(262090-y^2)/6}+1045^2+262090/6.\end{split}$$ Now each term is at its smallest when $y=0$, so the minimum is at $y=0$. The corresponding value of $x$ is $\sqrt{262090/6}$ — the endpoint of the $x$-interval!
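A quick numerical corroboration (not part of the argument, just a sanity check): parametrize the right half of the ellipse $6x^2+y^2=262090$ by $y$, evaluate the squared distance to $(1045,0)$ on a grid, and confirm the minimum sits at $y=0$, i.e. at the endpoint $x=\sqrt{262090/6}$.

```python
import math

def dist2(y):
    # squared distance from (1045, 0) to the right-half ellipse point (x(y), y)
    x = math.sqrt((262090 - y * y) / 6.0)
    return (x - 1045.0) ** 2 + y * y

ys = [i / 10.0 for i in range(-5000, 5001)]   # y in [-500, 500], inside the domain
best = min(ys, key=dist2)
print(best, dist2(best))  # minimum attained at y = 0
```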
Consider a system of PDEs $$ \begin{cases} u_t = \nabla \cdot (D(u)\nabla u) + \frac{c}{K_U+c}u-ku\\ c_t = d_c\Delta c -\frac{\nu_U c}{K_U + c}u \end{cases} $$ with some boundary conditions. Here, $D(u)$ is a diffusion coefficient which depends on $u$; $K_U$, $\nu_U$ and $k$ are some constants. $D(u)$ can be defined as, for example,
$$D(u):=\delta \frac{u^\alpha}{(1-u)^\beta},$$ with $\alpha,\beta,\delta$ being some constants.
After this system of PDEs is discretized using the finite volume method one obtains the following system of ODEs
$$\begin{cases}\frac{d\vec{U}}{dt}=\underline{D(\vec{U})}\vec{U}+\underline{R_U(\vec{C})}\vec{U}\\ \frac{d\vec{C}}{dt}=\underline{L}\vec{C}-\underline{R_C(\vec{C})}\vec{U}+\vec{b} \end{cases}$$
where the underlined letters are matrices and $\vec{b}$ is a vector containing some terms from the (unspecified here) boundary conditions.
As we can see, the matrix $\underline{D(\vec{U})}$ depends on the vector for which a numerical method will solve this system of equations, that is the vector $\vec{U}$.
But then how can such a system be solved as a linear one if it contains non-linear terms? I.e., what am I missing? |
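For illustration only (a toy sketch, not the actual discretization from the question): in practice such systems are advanced by treating them as linear within each time step, re-assembling $\underline{D(\vec{U})}$ from the current iterate, either once per step (lagged coefficients / Picard linearization) or repeatedly inside a Newton iteration. All sizes and constants below are hypothetical.

```python
import numpy as np

# Toy 3-cell analogue of the U-equation.  The state-dependent matrix D(U)
# is rebuilt from the current iterate at every step, so each individual
# step is a linear operation even though the overall problem is nonlinear.
L = np.array([[-2.0,  1.0,  0.0],
              [ 1.0, -2.0,  1.0],
              [ 0.0,  1.0, -2.0]])   # fixed second-difference stencil

def D(U):
    # hypothetical state-dependent diffusion matrix
    return np.diag(0.1 + U**2) @ L

U = np.array([0.2, 0.5, 0.3])
dt = 1e-3
for _ in range(1000):
    U = U + dt * (D(U) @ U)          # explicit Euler, D frozen at current U

print(U)
```

An implicit variant would instead solve the linear system $(I - \Delta t\,\underline{D}(\vec U^n))\,\vec U^{n+1} = \vec U^n$ at each step, or iterate that linearization to convergence; either way the nonlinearity enters only through how the matrix is re-assembled.
|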
I'm using pgfkeys, and a fairly adventurous syntax in which the values for some keys contain additional key/value pairs. (For instance, the value of the nodes key is a list of pairs, and the second component of each pair is a key-value list.)

I get a compilation error whenever I put anything too fancy into the label key. The code below works fine as-is, but if I replace 2+2 with 2+\sqrt{2}, it breaks. I think it's a macro-expansion problem. How can I arrange that the contents of the label key do not get expanded until they need to be?

Code
\documentclass{article}
\usepackage{tikz}
\makeatletter
\pgfkeys{/wickerson/.cd,
  % The following two lines are from:
  % http://tex.stackexchange.com/q/85637/86
  execute style/.style = {#1},
  execute macro/.style = {execute style/.expand once=#1},
  left/.code=   {\xdef\wickerson@left{#1}},
  top/.code=    {\xdef\wickerson@top{#1}},
  label/.code=  {\gdef\wickerson@label{#1}},
  nodes/.code=  {\xdef\wickerson@nodes{#1}},
  colour/.code= {\xdef\wickerson@colour{#1}},
}
\newcommand\myDiagram[1][]{%
  \pgfkeys{/wickerson/.cd,colour=black,nodes={},#1}
  \node[text=\wickerson@colour] at (0,0) {My Diagram...};
  \foreach \i/\values in \wickerson@nodes {
    \pgfkeys{/wickerson/.cd,left=0,top=0,label={},%
      execute macro=\values}
    \node[shape=circle,draw=black] at (\wickerson@left, \wickerson@top) {\wickerson@label};
  }}
\makeatother
\begin{document}
\begin{tikzpicture}[x=1mm,y=-1mm]
\myDiagram[%
  colour=red, %
  nodes={%
    a/{left=0,top=10,label={$2+2$}}, %
    b/{left=15,top=10,label={$\log 4$}}%
  }%
]
\end{tikzpicture}
\end{document}
Output |
This is a microcanonical approach to the problem of the mixture of two ideal gases. It involves a somewhat tricky integral over the surface of an N-dimensional hypersphere, and as far as I can tell is an example beloved of professors for exam questions. It might be a good one to become familiar with if you’re taking a statistical physics course.
Question
Enclosed in a box of volume \(V\) are \(N_1\) molecules of species 1 with mass \(m_1\) and \(N_2\) molecules of species 2 with mass \(m_2\). The system can be considered as an ideal gas with the total energy \(E\).

1. Calculate the phase space volume \(\Omega(E,V,N_1,N_2)\) for the case \(N_1\gg1\) and \(N_2\gg1\).
2. Calculate the entropy of the system as a function of the ratio \(N_1/N_2\) for constant \(N=N_1+N_2\). Discuss the dependence of the entropy on \(N_1/N_2\) and determine the value of \(N_1/N_2\) at which the entropy is maximal.
3. Calculate the pressure $$P=T\left(\frac{\partial S}{\partial V}\right)_{E,N_1,N_2}$$ of the system. How do the different molecular species contribute to the total pressure?
4. Consider a situation in which the container is separated into two volumes \(V_1\) and \(V_2\) containing only molecules of species 1 and 2 respectively. The total energy of the system is \(E\) and the two compartments are in thermal and mechanical equilibrium. Calculate the entropy of mixing, that is, compare the entropy of the unmixed state with that of the mixture.

Solution

1. Calculating Phase Space Volume
To calculate the phase space volume in this case, it will be necessary to integrate over the interior of a 3N-dimensional hypersphere, obtained by transforming the momentum variables.
$$\Omega(E)=\frac{1}{N_1!N_2!}·\frac 1{h^{3N}} \int dq \int \prod_{i=1}^{3N_1} dp_i \int \prod_{j=1}^{3N_2} dp_j \Theta(E-H(p_i,q_i))$$
A few notes:

The factorials arise because we are dealing with two independent gas species. The integral over \(dq\) just gives the configuration-space volume \(V^N\). \(\Theta\) is the Heaviside step function.

Momentum transformations
At this point it’s sensible to introduce some momentum transformations for the integration:
\begin{eqnarray}
p_i \to \mathfrak{p}_i = \frac{p_i}{\sqrt{2m_1}} \Rightarrow dp_i=\sqrt{2m_1}d\mathfrak{p}_i \nonumber \\ p_j \to \mathfrak{p}_j = \frac{p_j}{\sqrt{2m_2}} \Rightarrow dp_j=\sqrt{2m_2}d\mathfrak{p}_j \nonumber \end{eqnarray}
Putting these back in leads to:
\begin{eqnarray}
\Omega & = & \frac{V^N (2m_1)^{\frac{3N_1}{2}} (2m_2)^{\frac{3N_2}{2}}}{h^{3N} N_1! N_2!} \int \prod_{i=1}^{3N} d\mathfrak{p}_i \Theta \left( E-\sum_{i=1}^{3N}\mathfrak{p}_i^2 \right) \nonumber \\ \Omega & = & \frac{V^N (2m_1)^{\frac{3N_1}{2}} (2m_2)^{\frac{3N_2}{2}}}{h^{3N} N_1! N_2!} · \frac{ \pi^{\frac{3N}{2}} E^{\frac{3N}{2}}}{\left( \frac{3N}{2} \right)!} \nonumber \end{eqnarray}
We can now split \(\Omega\) into parts by introducing the reduced mass \(\mu=\frac{m_1m_2}{m_1+m_2}\) of the two molecular species:
\begin{equation}
\Omega=\underbrace{ \frac{V^N \pi^{\frac{3N}{2}} (2\mu E)^{\frac{3N}{2}}}{h^{3N} N! \left( \frac{3N}{2} \right)!} }_{\Omega_{\text{id}}} · \underbrace{ \frac{N!}{N_1!N_2!} \left( \frac{m_1}{\mu} \right)^{\frac{3N_1}{2}} \left( \frac{m_2}{\mu} \right)^{\frac{3N_2}{2}} }_{\Omega_{\text{mixed}}} \end{equation}
The first part of the above equation corresponds to the phase space volume of a system of \(N\) identical particles and mass \(\mu\). We’ll call it \(\Omega_{\text{id}}\). The second part we’ll call \(\Omega_{\text{mixed}}\).
2. Finding the entropy
Splitting the phase space volume up into two parts proves helpful when taking the entropy since one can now use the standard result for an ideal gas for \(\Omega_{\text{id}}\):
\begin{eqnarray}
S&= & k_B \ln \Omega \nonumber \\ &= & k_B \left[ \ln( \Omega_{\text{id}} )+ \ln( \Omega_{\text{mixed}} ) \right] \nonumber \\ &= & S_{\text{id}} + S_{\text{mixed}} \nonumber \end{eqnarray}
The standard result for \(S_{\text{id}}\) is the Sackur–Tetrode entropy of an ideal gas with particle mass \(\mu\) and particle number \(N\):
\begin{equation}
\frac{S_{\text{id}}}{N k_B}= \ln \left[ \frac VN \left( \frac{4\pi \mu E}{3 h^2 N} \right)^{\frac{3}{2}} \right] + \frac 52 \end{equation}
For the second part, \(S_{\text{mixed}}\), including Stirling’s approximation:
\begin{equation}
\frac{S_{\text{mixed}}}{k_B}= N\ln N - N_1 \ln N_1 - N_2 \ln N_2 + N_1 \ln \left( \frac{m_1}{\mu} \right)^{\frac{3}{2}} +N_2 \ln \left( \frac{m_2}{\mu} \right)^{\frac{3}{2}} \label{eq:rewrite} \end{equation}
If we define the ratio between the two species numbers:
$$\eta:=\frac{N_1}{N_2}; \frac{N_1}{N}=\frac{\eta}{1+\eta}; \frac{N_2}{N}=\frac{1}{1+\eta}$$
we can use \(\eta\) to rewrite \eqref{eq:rewrite}:
\begin{equation}
\frac{S_{\text{mixed}}}{k_B N}= \frac{\eta}{1+\eta} \left( \ln \left( \frac{m_1}{\mu} \right)^{\frac 32} - \ln \frac{\eta}{1+\eta} \right) + \frac{1}{1+\eta} \left( \ln \left( \frac{m_2}{\mu} \right)^{\frac 32} - \ln \frac{1}{1+\eta} \right) \end{equation}
To find the maximum entropy, we need only differentiate the mixed contribution since the identical contribution is not a function of \(\eta\):
$$\frac{\partial S}{\partial \eta} \stackrel{!}{=} 0 \quad\Rightarrow\quad \eta_0 = \left( \frac{m_1}{m_2} \right)^{\frac 32}$$
We ought to check whether this is a maximum:
$$\frac{\partial^2 S}{\partial \eta^2}\bigg|_{\eta_0} = -\frac{1}{\eta_0(1+\eta_0)^2} < 0$$
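A quick numerical cross-check of this stationary point (illustrative masses; \(S_{\text{mixed}}/(k_B N)\) written directly as a function of \(\eta\)):

```python
import numpy as np

# Illustrative masses; any positive values work the same way.
m1, m2 = 3.0, 1.0
mu = m1 * m2 / (m1 + m2)            # reduced mass

def s_mixed(eta):
    # S_mixed / (k_B N) as a function of eta = N1/N2
    a = eta / (1 + eta)             # N1/N
    b = 1 / (1 + eta)               # N2/N
    return (a * (1.5 * np.log(m1 / mu) - np.log(a))
            + b * (1.5 * np.log(m2 / mu) - np.log(b)))

eta = np.linspace(0.01, 20.0, 200001)
eta_star = eta[np.argmax(s_mixed(eta))]
print(eta_star, (m1 / m2) ** 1.5)   # grid maximizer agrees with (m1/m2)^(3/2)
```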
3. Calculating the pressure
To calculate the pressure, using the equation given, we need only differentiate \(S_{\text{id}}\) since \(S_{\text{mixed}}\) is not a function of volume:
$$P=T\left(\frac{\partial S_{\text{id}}}{\partial V}\right)_{E,N_1,N_2}=\frac{Nk_BT}{V}=\frac{N_1k_BT}{V}+\frac{N_2k_BT}{V} = P_1+P_2$$
So we can see that both gases contribute to the total pressure in proportion to their particle numbers.
4. Entropy of mixing
In this case we are to consider the gases in their respective volumes before the partition is removed and they are mixed together.
So for gas 1, with energy share \(E_1\):

$$\frac{S_{N_1}}{N_1k_B}=\ln \left[ \frac{V_1}{N_1} \left(\frac{4\pi m_1 E_1}{3 h^2 N_1}\right)^{\frac 32}\right] + \frac 52$$

And the same for gas 2. Since the two compartments are in thermal and mechanical equilibrium, \(V_1/N_1 = V_2/N_2 = V/N\) and \(E_1/N_1 = E_2/N_2 = E/N\).
Using again the reduced mass,
$$\frac{S_{N_1}}{N_1k_B}=\left(\ln \left[ \frac{V}{N} \left(\frac{4\pi \mu E}{3 h^2 N}\right)^{\frac 32}\right] + \frac 52\right) + \ln\left(\frac{m_1}{\mu}\right)^{\frac 32}$$
$$\frac{S_{N_2}}{N_2k_B}=\left(\ln \left[ \frac{V}{N} \left(\frac{4\pi \mu E}{3 h^2 N}\right)^{\frac 32}\right] + \frac 52\right) + \ln\left(\frac{m_2}{\mu}\right)^{\frac 32}$$
The entropy after mixing will be higher than before, so the change in entropy will be
\begin{eqnarray}
\Delta S&=&S_\text{id}+S_\text{mixed}-S_{N_1}-S_{N_2} \nonumber \\ \frac{\Delta S}{k_B}&=&-N_1 \ln\frac{N_1}{N}-N_2 \ln\frac{N_2}{N} \nonumber \end{eqnarray} |
Let $R$ be a ring (associative with unit, but not necessarily commutative, and definitely not necessarily Noetherian.) Then the category $\operatorname{GP}(R)$ consists of those $R$-modules $M$ having a complete projective resolution, i.e. that are expressible as the image of the map $P_{-1}\to P_0$ in some sequence
$$\cdots\to P_{-2}\to P_{-1}\to P_0\to P_1\to P_2\to\cdots$$
of projective $R$-modules, such that this sequence remains exact under $\operatorname{Hom}_R(-,P)$ for any projective $P$. (For extra emphasis: I do not make any finite-generation assumptions on $M$, or on the $P_i$.)
My questions are then the following:
Is $\operatorname{GP}(R)$ a Frobenius category?
Depending on the answer, there are natural follow-up questions:
If yes, is there a good reference for this, ideally with a complete argument?
If not, can we get this property back by assuming a little more about $R$? As a target, I would like to deal with the case that $R$ is a complete preprojective algebra of an arbitrary finite quiver (which is not typically Noetherian).
Henrik Holm has a very nice paper called 'Gorenstein homological dimensions', which proves lots of things about $\operatorname{GP}(R)$ for a general ring $R$, but does not directly address this question. He does show that this category is resolving, which helps a bit (in particular, it is closed under extensions and so inherits an exact structure from $\operatorname{Mod}{R}$). Assuming I haven't made any mistakes, I think the facts that $\operatorname{GP}(R)$ contains all projective $R$-modules, which are also injective in $\operatorname{GP}(R)$, and that every Gorenstein projective admits both an epimorphism from and a monomorphism to such a module are all pretty much clear from the definition. So the only thing that might go wrong (I think!) is that there could be more projectives/injectives that don't agree with each other.
There is some extra context which might give a flavour of the kind of sources I would most appreciate (although of course any answer is appreciated!). I am aware that many authors consider the case that $R$ is Iwanaga–Gorenstein, meaning that $R$ is Noetherian and of finite injective dimension as a module on each side, and then consider the Frobenius category
$$\operatorname{GP}'(R):=\{X\in\operatorname{mod}{R}:\operatorname{Ext}_R^i(X,R)=0\ \forall\ i>0\}.$$
I am interested in certain (probably) non-Noetherian rings $R$, but that still have finite injective dimension as a module on each side, and in certain (possibly not finitely-generated) $R$-modules $M$ such that $\operatorname{Ext}^i_R(M,P)=0$ for any projective $P$ and any $i>0$. My feeling is that there should be some reasonable Frobenius category of 'Gorenstein projective-like' modules associated to $R$ and containing $M$, by focussing on the homological conditions and forgetting about any finiteness (even something like $\operatorname{GP}'(R)$ as defined above, but with $\operatorname{mod}{R}$ replaced by $\operatorname{Mod}{R}$, and $R$ replaced by an arbitrary projective in the condition – one of the things that Holm proves is that this category is then very close to $\operatorname{GP}(R)$ as defined at the top of the question, but might also include some modules with no finite resolution by Gorenstein projectives). However, I am not very familiar with what can go wrong when you drop finiteness conditions, and am concerned that I may lose the Frobenius property somewhere.
If it helps, I may in the end want my category to be Krull–Schmidt (in the strong sense that indecomposables are characterised by having local endomorphism rings) which means I will have to require that $R$ is semi-perfect. This gives a bit more control over $\operatorname{Proj}{R}$, as it means that there are finitely many indecomposable projectives such that every projective is a (possibly infinite) direct sum of these. |
Set forcing works over models of ${\rm NBG}$. Suppose ${\mathbb P}$ is a set partial order. Set $\mathbb P$-names are defined as usual. A class $\mathbb P$-name is defined to be a collection of pairs $(\tau,p)$ where $\tau$ is a set $\mathbb P$-name and $p\in\mathbb P$. All the usual properties of the set forcing construction continue to hold in this case. The forcing extension is again a model of ${\rm NBG}$. The forcing theorem holds, namely for a fixed first-order formula $\varphi(x,\Gamma)$ with a fixed class $\mathbb P$-name parameter $\Gamma$, the collection of all pairs $(p, \tau)$ such that $p\Vdash \varphi(\tau,\Gamma)$ is a class. The class is given by the usual first-order recursion to define the forcing relation. It follows from this that for a fixed second-order formula $\varphi(x,X)$ the collection of all triples $(p,\tau,\Gamma)$ such that $p\Vdash\varphi(\tau,\Gamma)$ is definable (complexity of the definition of course depends on the complexity of $\varphi$). Indeed, all the results mentioned above extend to
tame class partial orders, a technical property of class forcing isolated by Sy Friedman.
These properties however do not hold of all class partial orders in a model of ${\rm NBG}$. There are class partial orders whose forcing extensions fail to satisfy ${\rm NBG}$ and for which the forcing theorem fails to hold.
A very general account of class forcing (the theorems there of course apply to set forcing) over models of ${\rm NBG}$ can be found in Characterizations of pretameness and the ORD-cc. The weakest theory required for the forcing theorem to hold for all class partial orders is ${\rm NBG}$ together with the principle ${\rm ETR_{\rm ORD}}$. The principle ${\rm ETR_{\rm ORD}}$ states that every first-order recursion of length ${\rm ORD}$ (where stages of the recursion are allowed to be classes) has a solution. This is shown in The exact strength of the class forcing theorem.
Note: Joel David Hamkins just pointed out to me in a discussion that we may not have mixed class names even for set partial orders in ${\rm NBG}$. So it might be the case that a condition $p\Vdash\exists X\varphi(X)$, but there is no class $\mathbb P$-name $\Gamma$ such that $p\Vdash\varphi(\Gamma)$. |
An eigenvalue problem for a quasilinear elliptic field equation on $\mathbb R^n$
DOI: http://dx.doi.org/10.12775/TMNA.2001.013
Abstract
We study the field equation
$$-\Delta u+V(x)u+\varepsilon^r(-\Delta_pu+W'(u))=\mu u$$ on $\mathbb R^n$, with $\varepsilon$ a positive parameter. The function $W$ is singular at a point, and so the configurations are characterized by a topological invariant: the topological charge. By a min-max method, for $\varepsilon$ sufficiently small, there exists a finite number of solutions $(\mu(\varepsilon),u(\varepsilon))$ of the eigenvalue problem for any given charge $q\in{\mathbb Z}\setminus\{0\}$.
Keywords
Nonlinear systems; nonlinear Schrödinger equations; nonlinear eigenvalue problems
|
Finite lattice posets

class sage.categories.finite_lattice_posets.FiniteLatticePosets(base_category)

The category of finite lattices, i.e. finite partially ordered sets which are also lattices.
EXAMPLES:
sage: FiniteLatticePosets()
Category of finite lattice posets
sage: FiniteLatticePosets().super_categories()
[Category of lattice posets, Category of finite posets]
sage: FiniteLatticePosets().example()
NotImplemented

class ParentMethods
irreducibles_poset()
Return the poset of meet- or join-irreducibles of the lattice.
A join-irreducible element of a lattice is an element with exactly one lower cover. Dually, a meet-irreducible element has exactly one upper cover.

This is the smallest poset whose completion by cuts is isomorphic to the lattice. As a special case, this returns the one-element poset from a one-element lattice.
See also
EXAMPLES:
sage: L = LatticePoset({1: [2, 3, 4], 2: [5, 6], 3: [5],
....:                   4: [6], 5: [9, 7], 6: [9, 8], 7: [10],
....:                   8: [10], 9: [10], 10: [11]})
sage: L_ = L.irreducibles_poset()
sage: sorted(L_)
[2, 3, 4, 7, 8, 9, 10, 11]
sage: L_.completion_by_cuts().is_isomorphic(L)
True
is_lattice_morphism(f, codomain)

Return whether f is a morphism of posets from self to codomain.
A map \(f : P \to Q\) is a poset morphism if\[x \leq y \Rightarrow f(x) \leq f(y)\]
for all \(x,y \in P\).
INPUT:

f -- a function from self to codomain

codomain -- a lattice
EXAMPLES:
We build the boolean lattice of \(\{2,2,3\}\) and the lattice of divisors of \(60\), and check that the map \(b \mapsto 5 \prod_{x\in b} x\) is a morphism of lattices:
sage: D = LatticePoset((divisors(60), attrcall("divides")))
sage: B = LatticePoset((Subsets([2,2,3]), attrcall("issubset")))
sage: def f(b): return D(5*prod(b))
sage: B.is_lattice_morphism(f, D)
True
We construct the boolean lattice \(B_2\):
sage: B = posets.BooleanLattice(2)
sage: B.cover_relations()
[[0, 1], [0, 2], [1, 3], [2, 3]]
And the same lattice with new bottom and top elements numbered respectively \(-1\) and \(4\):
sage: L = LatticePoset(DiGraph({-1:[0], 0:[1,2], 1:[3], 2:[3], 3:[4]}))
sage: L.cover_relations()
[[-1, 0], [0, 1], [0, 2], [1, 3], [2, 3], [3, 4]]
sage: f = { B(0): L(0), B(1): L(1), B(2): L(2), B(3): L(3) }.__getitem__
sage: B.is_lattice_morphism(f, L)
True
sage: f = { B(0): L(-1), B(1): L(1), B(2): L(2), B(3): L(3) }.__getitem__
sage: B.is_lattice_morphism(f, L)
False
sage: f = { B(0): L(0), B(1): L(1), B(2): L(2), B(3): L(4) }.__getitem__
sage: B.is_lattice_morphism(f, L)
False
See also
join_irreducibles()
Return the join-irreducible elements of this finite lattice.
A join-irreducible element of self is an element \(x\) that is not minimal and cannot be written as the join of two elements different from \(x\).
EXAMPLES:
sage: L = LatticePoset({0:[1,2],1:[3],2:[3,4],3:[5],4:[5]})
sage: L.join_irreducibles()
[1, 2, 4]
join_irreducibles_poset()
Return the poset of join-irreducible elements of this finite lattice.
A join-irreducible element of self is an element \(x\) that is not minimal and cannot be written as the join of two elements different from \(x\).
EXAMPLES:
sage: L = LatticePoset({0:[1,2,3],1:[4],2:[4],3:[4]})
sage: L.join_irreducibles_poset()
Finite poset containing 3 elements
meet_irreducibles()
Return the meet-irreducible elements of this finite lattice.
A meet-irreducible element of self is an element \(x\) that is not maximal and cannot be written as the meet of two elements different from \(x\).
EXAMPLES:
sage: L = LatticePoset({0:[1,2],1:[3],2:[3,4],3:[5],4:[5]})
sage: L.meet_irreducibles()
[1, 3, 4]
meet_irreducibles_poset()
Return the poset of meet-irreducible elements of this finite lattice.

A meet-irreducible element of self is an element \(x\) that is not maximal and cannot be written as the meet of two elements different from \(x\).
EXAMPLES:
sage: L = LatticePoset({0:[1,2,3],1:[4],2:[4],3:[4]})
sage: L.meet_irreducibles_poset()
Finite poset containing 3 elements |
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ... |
The Hadamard gate might be your first encounter with superposition creation. When you say you can relate the usefulness of the Pauli $X$ gate (a.k.a.
NOT) to its classical counterpart – well, Hadamard is exactly where you leave the realm of classical analogue, then. It is useful for
exactly the same reason, however, namely that it is often used to form a universal set of gates (like classical
AND with
NOT and fan-out, or
NOR with fan-out alone).
While a single $H$ gate is somewhat directly useful in random number generation (as Yuval Filmus said), its true power shows when appearing in more instances or in combination with other gates. When you have $n$ qubits initialized in $|0\rangle$, for example, and apply one $H$ to each of them in any order, what you get is$$(|0\rangle + |1\rangle) \otimes (|0\rangle + |1\rangle) \otimes \ldots \otimes (|0\rangle + |1\rangle) / 2^{n/2}$$which can be expanded to$$1/2^{n/2} \cdot (|00\ldots00\rangle + |00\ldots01\rangle + |00\ldots10\rangle + \ldots + |11\ldots11\rangle)$$Voilà, we can now evaluate functions on $2^n$ different inputs in parallel! This is, for example, the first step in Grover's algorithm.
Another popular use is a Hadamard on one qubit followed by a
CNOT controlled with the qubit you just put into a superposition. See:$$CNOT \big(2^{-1/2}(|0\rangle+|1\rangle)\otimes|0\rangle \big) = 2^{-1/2} CNOT(|00\rangle + |10\rangle) = 2^{-1/2} (|00\rangle + |11\rangle)$$That's a Bell state which is a cornerstone of various quantum key distribution protocols, measurement-based computation, quantum teleportation and many more applications. You can also use a
CNOT repeatedly on more zero-initialized target qubits (with the same control) to create$$2^{-1/2} (|00\ldots00\rangle + |11\ldots11\rangle)$$which is known as the GHZ state, also immensely useful.
Last but not least, it's a quite useful basis transform that is self-reversible. So another Hadamard gate undoes, in a sense, what a previous application did ($H^2 = I$). You can experiment around what happens if you use it to "sandwich" other operations, for example put one on the target qubit of a
CNOT gate and another after it. Or on both of the qubits (for a total of 4 Hadamards). Try it yourself and you'll certainly learn a lot about Quantum computation!
Re "what is the Hadamard gate doing geometrically to a vector": read up on the Bloch sphere, you'll going to hear about it everywhere. In this representation, a Hadamard gate does a 180° rotation about a certain slanted axis. The Pauli gates (
NOT being one out of three) also do 180° rotations but only about $x$ or $y$ or $z$. Because such geometrical operations are quite restricted, these gates alone can't really do much. (Indeed, if you restrict yourself to those and a
CNOT in your quantum computer, you just build a very expensive and ineffective classical device.) Rotating about a tilted axis is important, and one more ingredient you usually need is rotation by smaller fractions of the angle, like 45° (as in the phase shift gate).
This question already has an answer here:
How can I place several equations side by side inside the gather environment, with line breaking working, so that the equations do not exceed the page width? Here is an example:
\documentclass[12pt,oneside]{report}
\usepackage{geometry}
\geometry{a4paper,total={170mm,257mm},left=20mm,top=20mm}
\usepackage[utf8]{inputenc}
\usepackage{amssymb, amsmath, amsthm}
\begin{document}
\begin{gather*}
u'(x) = p(x)u(x) \rightarrow \frac{u'(x)}{u(x)} = p(x) \rightarrow
\int\frac{u'(x)}{u(x)}dx = \int p(x) dx \rightarrow
\ln |u(x)| = \int p(x) dx + k \rightarrow
|u(x)| = e^{\int p(x) dx + k} \rightarrow
|u(x)| = e^{k}e^{\int p(x) dx} \rightarrow
|u(x)| = ke^{\int p(x) dx}
\end{gather*}
\end{document}
Which generates the equation
that overruns the page margins. |
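For reference, gather* only breaks lines where \\ appears, so one way to keep the chain within the text width (same preamble assumed) is to split it manually:

```latex
\begin{gather*}
u'(x) = p(x)u(x) \rightarrow \frac{u'(x)}{u(x)} = p(x) \rightarrow
  \int\frac{u'(x)}{u(x)}\,dx = \int p(x)\,dx \\
\rightarrow \ln |u(x)| = \int p(x)\,dx + k \rightarrow
  |u(x)| = e^{\int p(x)\,dx + k} \\
\rightarrow |u(x)| = e^{k}e^{\int p(x)\,dx} \rightarrow
  |u(x)| = ke^{\int p(x)\,dx}
\end{gather*}
```

(The breqn package's dmath environment can automate such breaking, with its own trade-offs.)
|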
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology.For continuous functions f and g, the cross-correlation is defined as:: (f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau,whe...
That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the Y_{t+\tau}? And how on earth do I load all that data at once without it taking forever?
And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time
Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered?
@tpg2114 For reducing data point for calculating time correlation, you can run two exactly the simulation in parallel separated by the time lag dt. Then there is no need to store all snapshot and spatial points.
@DavidZ I wasn't trying to justify it's existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series
Each one is equally spaced, 1e-9 seconds apart
The black line is \frac{d T}{d t} and doesn't have an axis -- I don't care what the values are
The solid blue line is the abs(shear strain) and is valued on the right axis
The dashed blue line is the result from scipy.signal.correlate
And is valued on the left axis
So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how
Because I don't know how the result is indexed in time
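For reference on the indexing: a "full" cross-correlation of signals of lengths N and M has length N+M-1, and index i corresponds to a lag of i-(M-1), so the lead/lag of the peak is argmax(c)-(M-1); in recent SciPy versions, scipy.signal.correlation_lags can generate this lag axis. A minimal numpy sketch with synthetic data (shift chosen arbitrarily):

```python
import numpy as np

# y is x circularly shifted by `true_lag` samples.
rng = np.random.default_rng(0)
x = rng.standard_normal(200)
true_lag = 7
y = np.roll(x, true_lag)

# Full cross-correlation of two length-N signals has length 2N - 1;
# index i corresponds to lag i - (N - 1), running from -(N-1) to N-1.
c = np.correlate(y, x, mode="full")
lag = int(np.argmax(c)) - (len(x) - 1)
print(lag)   # recovers the 7-sample shift
```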
Related:Why don't we just ban homework altogether?Banning homework: vote and documentationWe're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th...
So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag
I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating. Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question
It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy
For the SHO, our teacher told us to scale$$p\rightarrow \sqrt{m\omega\hbar} ~p$$$$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$And then define the following$$K_1=\frac 14 (p^2-q^2)$$$$K_2=\frac 14 (pq+qp)$$$$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$The first part is to show that$$Q \...
Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics
Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay
@jinawee oh, that I don't think will happen.
In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have.
So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." Ie. is it conceptual to be stuck when deriving the distribution of microstates cause somebody doesn't know what Stirling's Approximation is
Some have argued that is on topic even though there's nothing really physical about it just because it's 'graduate level'
Others would argue it's not on topic because it's not conceptual
How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss...
I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed.
And what about selfies in the mirror? (I didn't try yet.)
@KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean.
Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods.
Or maybe that can be a second step.
If we can reduce visibility of HW, then the tag becomes less of a bone of contention
@jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework
@Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter.
@Dilaton also, have a look at the topvoted answers on both.
Afternoon folks. I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because its really a math question, but I figured I'd ask anyway)
@DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on.
hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least.
Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes.
MO is for research-level mathematics, not "how do I compute X"
user54412
@KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube
@ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (c.f., Hawking's recent paper) |
For the reversible isothermal expansion of an ideal gas: $${∆H}={∆U}=0 \tag1$$ This is obvious for the case of internal energy because $${∆U} = \frac {3}{2} n R {∆T} = 0 \tag2$$ and $${∆U} = C_V n {∆T} = 0 \tag3$$ For the case of enthalpy it is easy to see that $${∆H} = C_P n {∆T} = 0 \tag4$$ I've also seen $${∆H} = ∆U + ∆(PV) = ∆U + nR{∆T} = 0 \tag5$$ Now for the part I don't understand. $$dH = dU + PdV \tag6$$ $$dH = dU + nRT \frac {dV}{V} \tag7$$ $${∆H} = {∆U} + nRT \ln\frac {V_2}{V_1} \tag8$$ $${∆H} = 0 + nRT \ln\frac {V_2}{V_1} = nRT \ln\frac {V_2}{V_1} ≠ 0\tag9$$ Clearly, it is incorrect to make the substitution $ P = nRT/V$ in going from $(6)$ to $(7)$. Why is that? I thought equation $(6)$ was always valid, and integrating such a substitution should account for any change in the variables throughout the process. Why does this not yield the same answer as $(4)$ and $(5)$?
$$H = U + PV \Rightarrow$$
$$dH = dU +PdV + VdP\tag{6}$$
In other words, equation 6 is missing the $VdP$ term.
$$dH = dU + nRT \frac{dV}{V} + nRT \frac{dP}{P}\tag{7}$$
$$ \Delta H = \Delta U + nRT \ln\frac{V_2}{V_1} + nRT \ln\frac{P_2}{P_1}\tag{8}$$
$$P_1 V_1 = P_2 V_2 \text{ (isothermal)}$$
$$\Delta H = \Delta U + nRT \left( \ln\frac{V_2}{V_1} + \ln\frac{V_1}{V_2} \right) = \Delta U = 0\tag{9}$$
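The cancellation of the two logarithms is easy to verify numerically. A quick sketch with made-up state values for one mole of ideal gas (all numbers here are arbitrary):

```python
import math

# Hypothetical isothermal expansion of 1 mol of ideal gas at T = 300 K.
n, R, T = 1.0, 8.314, 300.0
V1, V2 = 0.010, 0.025                      # m^3, arbitrary
P1, P2 = n * R * T / V1, n * R * T / V2    # so P1*V1 == P2*V2 on the isotherm

# With the V dP term included, the two logarithms cancel exactly:
dH = n * R * T * (math.log(V2 / V1) + math.log(P2 / P1))
```

Since $P_2/P_1 = V_1/V_2$ along the isotherm, the two log terms are negatives of each other and `dH` comes out zero to floating-point precision.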
$$PV = nRT$$ For an ideal gas, $U$ depends only on $T$, so for constant temperature $dU = 0$.
Since $H = U + PV$, and $PV = nRT$ is constant along the isotherm, we have $\Delta(PV) = 0$, and therefore $\Delta H = \Delta U + \Delta(PV) = 0$.
|
Loan balance vs time
Loan balance after the ith payment
Principal or loan balance at
Fraction of payment to interest during the ith payment
Fraction of payment to interest
Interest rate
Loan term
Number of loan payments
Time between loan payments
Loan product, important parameter which fully specifies a loan
Helpful collection of variables
Repayment rate (dollars per time)
Fraction of loan term
Total payment to interest
Sum of all payments
Overpay ratio
When I present the continuous solutions to loans, I meet push back because those equations do not exactly match the computations done by the bank. Nonetheless, the continuous solutions are important because, unlike the discrete "real" solutions, the continuous solutions show us the functional form of the parameters for which we are solving. In this section, we will answer the question, "if the continuous solution is not exactly correct, how much error is there and is that error significant?" The answer will depend mostly on the number of repayment periods, \(n\), and slightly on the loan product, \(rt_\text{term}\), but when computing the interest on a loan, it will be less than 5% for \(n>\) 6 and less than 1% for \(n>\) 36 regardless of the loan product.
Continuous solutions are useful for forming an understanding of the shape of the functional form of the equations, for estimating values (repayment rates, interest or overpay, etc.) with simple closed-form equations that do not require extensive sums, and for comparing loans of different terms and interest rates. They are not useful for comparing against bank statements after computing payments or balances to the nearest penny and should not be used as such.
We have already seen in the discrete solution article that a plot of \(\phi(\tau)\) approaches the continuous solution as \(n \rightarrow \infty\). For completeness here is a plot of \(\frac{B(\tau)}{B_0}\) for various \(n\) and \(rt_\text{term}\).
Continuous solution: \( \displaystyle \frac{B(\tau)}{B_0} = \frac{1-\phi_0^{-1} }{\left(1-\phi_0\right)^{\tau}}+\phi_0^{-1} \), with \(\phi_0 = 1-e^{-rt_\text{term}} \).
Discrete solution: \( \displaystyle \frac{B_{i}}{B_0} = R^{i} - \frac{ R^{n}}{ \sum^{n-1}_{k=0} R^k } \sum_{j=1}^{i} R^{j-1} \), with \( R = 1+ \frac{rt_\text{term}}{n} \).
A better plot for comparison shows \(\phi\) vs \(\tau\). Again, as \(n \rightarrow \infty\), the plot of \(\phi_i\) approaches the continuous \(\phi(\tau)\). This is the same figure shown in the article on finite difference solutions.
Continuous solution: \( \displaystyle \phi(\tau) = 1 - \frac{1-\phi_0}{(1-\phi_0)^{\tau}} \).
Discrete solution: \( \displaystyle \phi_i = \frac{rt_\text{term}}{n}\left( \frac{\sum_{j=1}^{n} R^{j-1}}{ R^n} R^{i} - \sum_{j=1}^{i} R^{j-1} \right) \), with \( R = 1+ \frac{rt_\text{term}}{n} \).
For a given loan we can compute the error ratio between the continuous and discrete ODE solutions when solving for the repayment rate.
This expression is exactly equivalent to the error in the overpay ratio. Just multiply the top and bottom of the fraction by \(t_\text{term}\) and you have the expression for the error in the overpay ratio. For most loans, this error ratio will be close to one, typically within 0.5%.
This is better stated as a function of the number of loan discretization steps (\(n = t/\Delta t\)) and the loan product (\(rt_\text{term}\)).
This is a complicated function of \(n\) and \(rt_\text{term}\) but when plotted it is easy to understand. It's much more a function of \(n\) than \(rt_\text{term}\) and for most reasonable \(n\), say \(n>\) 36 which is a 3-year loan with monthly payments, the continuous and discrete solutions never differ by more than 1%.
Alternatively, we can reproduce the graphs from the article on overpay ratios and repayment rates using various \(n\). The lines quickly become indistinguishable for \(n>\) 20. For the overpay ratio, the continuous and discrete equations are given below.
Continuous solution: \( \displaystyle \frac{V}{B_0} = \frac{rt_\text{term}}{1- e^{-rt_\text{term}}} \).
Discrete solution: \( \displaystyle \frac{V}{B_0} = \frac{ n (1+\frac{rt_\text{term}}{n})^{n} }{\sum^{n-1}_{i=0} (1+\frac{rt_\text{term}}{n})^i }\).
The repayment rate is the overpay ratio divided by the loan term (for both the continuous and discrete equations).
Continuous solution: \( \displaystyle \frac{P}{B_0} = \frac{r}{1- e^{-rt_\text{term}}} \).
Discrete solution: \( \displaystyle \frac{P}{B_0} = \frac{ nR^{n}}{t_\text{term} \sum^{n-1}_{i=0} R^i } \), with \( R = 1+ \frac{rt_\text{term}}{n} \). |
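The size of the continuous-vs-discrete error is easy to check numerically. A sketch (function names are mine) comparing the two overpay ratios:

```python
import math

def overpay_continuous(rt):
    """V/B0 from the continuous solution, rt = r * t_term."""
    return rt / (1.0 - math.exp(-rt))

def overpay_discrete(rt, n):
    """V/B0 from the discrete solution with n payments."""
    R = 1.0 + rt / n
    geom = sum(R ** i for i in range(n))   # sum_{i=0}^{n-1} R^i
    return n * R ** n / geom

# Relative error between the two, e.g. a 3-year monthly loan (n = 36)
# with loan product r*t_term = 0.5:
err = overpay_discrete(0.5, 36) / overpay_continuous(0.5) - 1.0
```

For this example the error is roughly half a percent, and it shrinks further as `n` grows, consistent with the sub-1% claim above.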
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
According to Wikipedia caesium’s density is $1.90\ \mathrm{g/cm^3}$ at $\Theta =20\ \mathrm{^\circ C}$. How does this change when $T$ changes? E.g. will it expand when melting?
Looking at the Wikipedia page you can see that the density is $\rho=1.93\ \mathrm{kg/l}$ at room temperature and $\rho=1.843\ \mathrm{kg/l}$ at its melting point.
Additional information can be found in this paper, where they experimentally explore the density of liquid Cesium in the vicinity of its melting point. In the range of 302 to 375 K to be exact.
From their experimental data they deduce the following approximate equation for the density as a function of the temperature: $$\rho= 1829.12- 0.61483 \left(T-T_{melt}\right) $$ where $T_{melt}=301.6$ K.
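Evaluating that fit at the ends of the measured range is straightforward; a trivial sketch using the constants quoted above (the function name is mine):

```python
def cesium_density(T):
    """Density of liquid cesium in kg/m^3, from the linear fit
    rho = 1829.12 - 0.61483 * (T - T_melt), valid for ~302-375 K."""
    T_melt = 301.6  # K
    return 1829.12 - 0.61483 * (T - T_melt)

# Just above the melting point vs. the top of the measured range:
rho_low = cesium_density(302.0)   # ~1828.9 kg/m^3
rho_high = cesium_density(375.0)  # ~1784.0 kg/m^3
```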
This shows that the density reduces with temperature. Plotting the information that we have from the paper and from wikipedia yields:
The jump at the melting point is from the mismatch between the paper and Wikipedia. In the paper they specify that the density is for the liquid case; on Wikipedia it might (I'm not sure) be the density for the solid case at the melting point. If that is the case, then the jump in density from solid to liquid is $1843 - 1829 = 14\ \mathrm{kg/m^3}$.
Since I cannot comment yet, the only other way I know to say something here is to post an answer.
To address your question about the units, the thermal expansion coefficient is a fractional change in length per unit temperature. From Wikipedia:
$$\alpha_L = \dfrac{1}{L}\dfrac{dL}{dT}$$
So, for a change $dT$ in temperature, the metal will expand by the fraction $\dfrac{dL}{L} = \alpha_L\, dT$.
I think I found what I was looking for on Wikipedia. They give the thermal expansion coefficient for Caesium as $97\ \mathrm{\mu m \cdot m^{-1} \cdot K^{-1}}$. |
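Plugging that coefficient into the definition above gives a feel for the scale of the effect. A rough sketch (the temperature change is made up, and the factor of 3 for volume assumes an isotropic solid):

```python
alpha_L = 97e-6   # linear thermal expansion coefficient of Cs, 1/K

def relative_expansion(dT, alpha=alpha_L):
    """Fractional length change dL/L for a temperature change dT,
    assuming alpha is constant over the interval."""
    return alpha * dT

# Heating by 10 K: length grows by about 0.1%; for an isotropic solid
# the volume grows by roughly three times the linear fraction.
dL_over_L = relative_expansion(10.0)
dV_over_V = 3.0 * dL_over_L
```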
I'm trying to understand the derivation of the zero order hold discretization method, and I have a couple of questions about some of the steps.I think I understand the first part, this is just the ...
I have found a problem in applying Laplace Transform to $-e^{-at}u(-t)$I am doing these steps:$$ = - \int_{-\infty}^{+\infty} e^{-at}u(-t) e^{-st}dt$$$$ = - \int_{-\infty}^{0} e^{-at} e^{-st}dt$$...
I am using a miniature car and I want to estimate the position. We can not use GPS modules and most of the tracking systems that I saw, are using IMU sensor with the GPS module. In our car we are able ...
I have the measured signal $y(t)$ that can be modeled in the frequency domain as $Y(f)$:$$Y(f) = X(f)\cdot A(f) - [X(f)\cdot B(f)] \ast C(f)$$where $\ast$ is the convolution.I know $A(f)$, $B(f)$,...
I've looked at previous answers and none of the answers seem to agree (perhaps I'm missing something obvious):I would like to know what the correct scaling to use is foramplitude,energy andpower ...
I have received the following 4 time-series signals and wanted to do some data analytics using these. Although the sampling is the same for all of these signals, 2 samples per second, problem is that ...
For a given scenario in the context of control system, I'm trying to investigate how the $H_\infty$ norm can be calculated for a transfer function as follows:$$G(s)= \frac{w_n^2}{s^2 +2\zeta w_ns +...
Let's assume that there is a signal $f(t)$, $0\le t \le T$. Then, is there any special meaning of the signal $g(t) =f(t) \times \sin(\omega t)$? Like AM transmission. What is shown/heard when we play ...
I am calculating the average power of a vector. I would like to compare the final expression with the simulation. However, they are not equal. Please help me to point out which steps are wrong. Thank ...
I'm trying to perform an FFT using the CMSIS libraries.My FFT_Output[] provides 1024 bins of data as expected. The issue is that it's linearly spaced and I want to change it to a logarithmic scale ... |
Codeforces Round #484 (Div. 2) Finished
Petr is a detective in Braginsk. Somebody stole a huge amount of money from a bank and Petr is to catch him. Somebody told Petr that some luxurious car moves along the roads without stopping.
Petr knows that it is the robbers who drive the car. The roads in Braginsk are one-directional and each of them connects two intersections. Petr wants to select one intersection such that if the robbers continue to drive the roads indefinitely, they will sooner or later come to that intersection. The initial position of the robbers is unknown. Find such an intersection that fits the requirements.
The first line of the input contains two integers $$$n$$$ and $$$m$$$ ($$$2 \leq n \le 10^5$$$, $$$2 \leq m \leq 5 \cdot 10^5$$$) — the number of intersections and the number of directed roads in Braginsk, respectively.
Each of the next $$$m$$$ lines contains two integers $$$u_i$$$ and $$$v_i$$$ ($$$1 \le u_i, v_i \le n$$$, $$$u_i \ne v_i$$$) — the start and finish of the $$$i$$$-th directed road. It is guaranteed that the robbers can move along the roads indefinitely.
Print a single integer $$$k$$$ — the intersection Petr needs to choose. If there are multiple answers, print any. If there are no such intersections, print $$$-1$$$.
Input:
5 6
1 2
2 3
3 1
3 4
4 5
5 3
Output:
3

Input:
3 3
1 2
2 3
3 1
Output:
1
In the first example the robbers can move, for example, along the following routes: $$$(1-2-3-1)$$$, $$$(3-4-5-3)$$$, $$$(1-2-3-4-5-3-1)$$$. We can show that if Petr chooses the $$$3$$$-rd intersection, he will eventually meet the robbers independently of their route.
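One way to see why intersection 3 works: a vertex is a valid answer exactly when every cycle passes through it, which is the same as saying that deleting that vertex leaves the graph acyclic. A brute-force sketch of that test (my own illustration; fine for tiny graphs like the samples, far too slow for the stated limits):

```python
def find_intersection(n, edges):
    """Return a vertex lying on every cycle, or -1 if none exists.
    Tests each vertex v by checking whether the graph minus v is acyclic."""
    def acyclic_without(v):
        adj = [[] for _ in range(n + 1)]
        indeg = [0] * (n + 1)
        for a, b in edges:
            if a != v and b != v:
                adj[a].append(b)
                indeg[b] += 1
        # Kahn's algorithm: acyclic iff every kept vertex can be removed.
        queue = [u for u in range(1, n + 1) if u != v and indeg[u] == 0]
        seen = 0
        while queue:
            u = queue.pop()
            seen += 1
            for w in adj[u]:
                indeg[w] -= 1
                if indeg[w] == 0:
                    queue.append(w)
        return seen == n - 1

    for v in range(1, n + 1):
        if acyclic_without(v):
            return v
    return -1

# First sample from the problem statement:
edges = [(1, 2), (2, 3), (3, 1), (3, 4), (4, 5), (5, 3)]
print(find_intersection(5, edges))  # → 3
```

Removing vertex 3 leaves only the edges 1→2 and 4→5, which form no cycle, so every cycle must pass through 3.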
|
I have a set of numbers, and want to calculate the maximum subset such that the sum of any two of its elements is not divisible by an integer $K$. I tried to solve this problem, but I have only found a quadratic solution, which is not efficient.
$K < 100, N < 10000$, where $N$ is the number of elements and $K$ is a given constant. Is there a better-than-quadratic solution?
Indeed there is a linear time algorithm for this. You only need to use some basic number theory concepts. Given two numbers $n_1$ and $n_2$, their sum is divisible by $K$ only if the sum of their remainders is divisible by $K$. In other words,
$$K \mid ( n_1 + n_2 ) ~~~~ \Longleftrightarrow ~~~~ K \mid \left((n_1 ~mod ~K) + (n_2 ~mod ~K)\right).$$
The second concept that you need to consider is that, the sum of two numbers $r_1 \neq r_2$ is $K$, only if one of them is strictly smaller than $K/2$ and the other is no less than $K/2$. In other words,
$$r_1 + r_2 = K ~~~\Rightarrow~~~ r_1 <K/2, ~r_2 \geq K/2~~~~~~ (r_1 \neq r_2,~\text{w.l.g.}~r_1 < r_2).$$
The third concept that you need to consider is that, if the sum of two numbers $r_1 \neq r_2$ is $K$, they both deviate from $\lceil K/2 \rceil -1$ by a certain $k \leq \lceil K/2 \rceil - 1$, i.e.,
$$r_1 + r_2 = K ~~~\Rightarrow~~~ \exists_{k \leq \lceil K/2 \rceil -1}~~~\text{such that}~~~ r_1 = \lceil K/2 \rceil -1 -k, ~r_2 = \lceil K/2 \rceil +k.$$
So, for every $k$ in the third concept, you need to put either $r_1$ or $r_2$ in the solution set, but not both of them. You are allowed to put one of the numbers that are actually divisible by $K$, and if $K$ is even, you can add only one number whose remainder is $K/2$.
Therefore, here is the algorithm.
Given a set ${\cal N}=\{ n_1, n_2, \cdots, n_N \}$, let's find the solution set ${\cal S},$
Consider ${\cal R} = \{ r_1=(n_1 ~mod ~K), r_2=(n_2 ~mod ~K), \cdots, r_N=(n_N ~mod ~K) \}$
${\cal S} \gets \emptyset$
for $k \gets 1$ to $\lceil K/2 \rceil -1$:
    if $count({\cal R},k) \geq count({\cal R},K-k)$:
        add all $n_i$ to ${\cal S}$ such that $r_i=k$
    else:
        add all $n_i$ to ${\cal S}$ such that $r_i=K-k$
add only one $n_i$ to ${\cal S}$ such that $r_i=0$        // if it exists
if $K$ is even:
    add only one $n_i$ to ${\cal S}$ such that $r_i=K/2$  // if it exists
Output ${\cal S}$
The algorithm is quite long, but the idea is very simple.
Consider a set S of n distinct natural numbers. We have to form the maximal such subset from this set. We use a basic modulus property and add a few deductions to it to solve the problem. I hope it is helpful for you all.
For any two natural numbers $N_1$ and $N_2$: $(N_1+N_2) \bmod K = (R_1+R_2) \bmod K$, where $R_1 = N_1 \bmod K$ and $R_2 = N_2 \bmod K$.

1. If $(N_1+N_2) \bmod K = 0$, it means $(R_1+R_2) \bmod K = 0$.
2. That means $R_1+R_2$ must equal either $K, 2K, 3K, \ldots$
3. But $R_1$ lies between $0$ and $K-1$, and so does $R_2$, which means their sum can't exceed $(K-1)+(K-1) = 2(K-1)$.
4. From 2 and 3 we can conclude that $R_1 + R_2$ must be equal to $K$.
5. Further, if $R_1+R_2 = K$, then either both of them equal $K/2$ (only possible if $K$ is even), or one of them is less than $\lfloor K/2 \rfloor$ and the other greater.
6. Suppose $R_1 = T$ and $R_2 = K-T$. If we take any number $N_1$ from $S$ whose remainder is $R_1$ and any number $N_2$ from $S$ whose remainder is $R_2$, then their sum will be divisible by $K$. Therefore the solution subset can have either those numbers with remainder $R_1$ or those with remainder $R_2$, but not both.
Now suppose we construct an array R of size K with indices 0 through K-1, where the element at each index counts the numbers in the set S with remainder (on division by K) equal to that index. We can't have two or more numbers with remainder 0, as their sum would be divisible by K, therefore we must initialise our counter with min(R[0], 1). For T=1 to T
The code for the same algorithm in C++ is as shown below:
#include <cmath>
#include <cstdio>
#include <vector>
#include <iostream>
#include <algorithm>
using namespace std;

int main() {
    int n, k;
    cin >> n >> k;
    vector<int> a(n);
    vector<int> r(k, 0);
    for (int i = 0; i < n; i++) {
        cin >> a[i];
        r[a[i] % k]++;
    }
    int ctr = min(1, r[0]);
    for (int a = 1; a < (k / 2 + 1); a++) {
        if (a != k - a)
            ctr += max(r[a], r[k - a]);
    }
    if (k % 2 == 0 && r[k / 2] != 0)
        ctr++;
    cout << ctr;
    return 0;
}
I tried translating to C# code, the first to only count the size of the subset array and another including the entire (hash)subset.
Count:
static int nonDivisibleSubset(int k, int[] S)
{
    var r = new int[k];
    for (int i = 0; i < S.Length; i++)
        r[S[i] % k]++;

    int count = Math.Min(1, r[0]);
    if (k % 2 == 0 && r[k / 2] != 0)
        count++;
    for (int j = 1; j <= k / 2; j++)
    {
        if (j != k - j)
            count += Math.Max(r[j], r[k - j]);
    }
    return count;
}
With subset:
static int nonDivisibleSubset(int K, int[] S)
{
    var r = new HashSet<int>();
    var d = S.GroupBy(gb => gb % K)
             .ToDictionary(Key => Key.Key, Value => Value.ToArray());
    for (int j = 1; j <= K / 2; j++)
    {
        if (j == K - j) continue; // the K/2 group is handled below
        var c1 = d.GetValueOrDefault(j, new int[0]);
        var c2 = d.GetValueOrDefault(K - j, new int[0]);
        // Keep the larger of the two complementary remainder groups
        // (on a tie, keeping either one is fine).
        r.UnionWith(c1.Length >= c2.Length ? c1 : c2);
    }
    if (d.ContainsKey(0)) r.Add(d[0].Max());
    if (K % 2 == 0 && d.ContainsKey(K / 2)) r.Add(d[K / 2].Max());
    return r.Count;
} |
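For cross-checking the two translations above, here is a compact Python version of the same counting approach (my own sketch, not from the original posts):

```python
def non_divisible_subset(k, numbers):
    """Size of the largest subset in which no pair sums to a multiple of k."""
    r = [0] * k
    for x in numbers:
        r[x % k] += 1

    count = min(1, r[0])                # at most one number with remainder 0
    if k % 2 == 0:
        count += min(1, r[k // 2])      # at most one with remainder k/2
    for j in range(1, (k + 1) // 2):
        count += max(r[j], r[k - j])    # keep the larger complementary group
    return count

print(non_divisible_subset(3, [1, 7, 2, 4]))  # → 3, e.g. the subset {1, 7, 4}
```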
Finite monoids¶

class sage.categories.finite_monoids.FiniteMonoids(base_category)¶

The category of finite (multiplicative) monoids.

A finite monoid is a finite set endowed with an associative unital binary operation \(*\).
EXAMPLES:
sage: FiniteMonoids()
Category of finite monoids
sage: FiniteMonoids().super_categories()
[Category of monoids, Category of finite semigroups]

class ElementMethods¶
pseudo_order()¶
Returns the pair \([k, j]\) with \(k\) minimal and \(0\leq j <k\) such that
self^k == self^j.
Note that \(j\) is uniquely determined.
EXAMPLES:
sage: M = FiniteMonoids().example(); M
An example of a finite multiplicative monoid: the integers modulo 12
sage: x = M(2)
sage: [ x^i for i in range(7) ]
[1, 2, 4, 8, 4, 8, 4]
sage: x.pseudo_order()
[4, 2]
sage: x = M(3)
sage: [ x^i for i in range(7) ]
[1, 3, 9, 3, 9, 3, 9]
sage: x.pseudo_order()
[3, 1]
sage: x = M(4)
sage: [ x^i for i in range(7) ]
[1, 4, 4, 4, 4, 4, 4]
sage: x.pseudo_order()
[2, 1]
sage: x = M(5)
sage: [ x^i for i in range(7) ]
[1, 5, 1, 5, 1, 5, 1]
sage: x.pseudo_order()
[2, 0]
TODO: more appropriate name? see, for example, Jean-Eric Pin’s lecture notes on semigroups.
class ParentMethods¶
nerve()¶
The nerve (classifying space) of this monoid.
OUTPUT: the nerve \(BG\) (if \(G\) denotes this monoid), as a simplicial set. The \(k\)-dimensional simplices of this object are indexed by products of \(k\) elements in the monoid:\[a_1 * a_2 * \cdots * a_k\]
The 0th face of this is obtained by deleting \(a_1\), and the \(k\)-th face is obtained by deleting \(a_k\). The other faces are obtained by multiplying elements: the 1st face is\[(a_1 * a_2) * \cdots * a_k\]
and so on. See Wikipedia article Nerve_(category_theory), which describes the construction of the nerve as a simplicial set.
A simplex in this simplicial set will be degenerate if in the corresponding product of \(k\) elements, one of those elements is the identity. So we only need to keep track of the products of non-identity elements. Similarly, if a product \(a_{i-1} a_i\) is the identity element, then the corresponding face of the simplex will be a degenerate simplex.
EXAMPLES:
The nerve (classifying space) of the cyclic group of order 2 is infinite-dimensional real projective space.
sage: Sigma2 = groups.permutation.Cyclic(2)
sage: BSigma2 = Sigma2.nerve()
sage: BSigma2.cohomology(4, base_ring=GF(2))
Vector space of dimension 1 over Finite Field of size 2
The \(k\)-simplices of the nerve are named after the chains of \(k\) non-unit elements to be multiplied. The group \(\Sigma_2\) has two elements, written () (the identity element) and (1,2) in Sage. So the 1-cells and 2-cells in \(B\Sigma_2\) are:
sage: BSigma2.n_cells(1)
[(1,2)]
sage: BSigma2.n_cells(2)
[(1,2) * (1,2)]
Another construction of the group, with different names for its elements:
sage: C2 = groups.misc.MultiplicativeAbelian([2])
sage: BC2 = C2.nerve()
sage: BC2.n_cells(0)
[1]
sage: BC2.n_cells(1)
[f]
sage: BC2.n_cells(2)
[f * f]
With mod \(p\) coefficients, \(B \Sigma_p\) should have its first nonvanishing homology group in dimension \(p\):
sage: Sigma3 = groups.permutation.Symmetric(3)
sage: BSigma3 = Sigma3.nerve()
sage: BSigma3.homology(range(4), base_ring=GF(3))
{0: Vector space of dimension 0 over Finite Field of size 3,
 1: Vector space of dimension 0 over Finite Field of size 3,
 2: Vector space of dimension 0 over Finite Field of size 3,
 3: Vector space of dimension 1 over Finite Field of size 3}
Note that we can construct the \(n\)-skeleton for \(B\Sigma_2\) for relatively large values of \(n\), while for \(B\Sigma_3\), the complexes get large pretty quickly:
sage: Sigma2.nerve().n_skeleton(14)
Simplicial set with 15 non-degenerate simplices
sage: BSigma3 = Sigma3.nerve()
sage: BSigma3.n_skeleton(3)
Simplicial set with 156 non-degenerate simplices
sage: BSigma3.n_skeleton(4)
Simplicial set with 781 non-degenerate simplices
Finally, note that the classifying space of the order \(p\) cyclic group is smaller than that of the symmetric group on \(p\) letters, and its first homology group appears earlier:
sage: C3 = groups.misc.MultiplicativeAbelian([3])
sage: list(C3)
[1, f, f^2]
sage: BC3 = C3.nerve()
sage: BC3.n_cells(1)
[f, f^2]
sage: BC3.n_cells(2)
[f * f, f * f^2, f^2 * f, f^2 * f^2]
sage: len(BSigma3.n_cells(2))
25
sage: len(BC3.n_cells(3))
8
sage: len(BSigma3.n_cells(3))
125
sage: BC3.homology(range(5), base_ring=GF(3))
{0: Vector space of dimension 0 over Finite Field of size 3,
 1: Vector space of dimension 1 over Finite Field of size 3,
 2: Vector space of dimension 1 over Finite Field of size 3,
 3: Vector space of dimension 1 over Finite Field of size 3,
 4: Vector space of dimension 1 over Finite Field of size 3}
sage: BC5 = groups.permutation.Cyclic(5).nerve()
sage: BC5.homology(range(5), base_ring=GF(5))
{0: Vector space of dimension 0 over Finite Field of size 5,
 1: Vector space of dimension 1 over Finite Field of size 5,
 2: Vector space of dimension 1 over Finite Field of size 5,
 3: Vector space of dimension 1 over Finite Field of size 5,
 4: Vector space of dimension 1 over Finite Field of size 5}
rhodes_radical_congruence(base_ring=None)¶
Return the Rhodes radical congruence of the semigroup.
The Rhodes radical congruence is the congruence induced on S by the map \(S \rightarrow kS \rightarrow kS / rad kS\) with k a field.
INPUT:
base_ring -- (default: \(\QQ\)) a field
OUTPUT:
A list of couples \((m, n)\) with \(m \neq n\) in the lexicographic order for the enumeration of the monoid self.
EXAMPLES:
sage: M = Monoids().Finite().example()
sage: M.rhodes_radical_congruence()
[(0, 6), (2, 8), (4, 10)]
sage: from sage.monoids.hecke_monoid import HeckeMonoid
sage: H3 = HeckeMonoid(SymmetricGroup(3))
sage: H3.repr_element_method(style="reduced")
sage: H3.rhodes_radical_congruence()
[([1, 2], [2, 1]), ([1, 2], [1, 2, 1]), ([2, 1], [1, 2, 1])]
By Maschke’s theorem, every group algebra over \(\QQ\) is semisimple hence the Rhodes radical of a group must be trivial:
sage: SymmetricGroup(3).rhodes_radical_congruence()
[]
sage: DihedralGroup(10).rhodes_radical_congruence()
[]
REFERENCES: |
I cannot see how Willie Wong's example of the Bernstein-Robinson result supports his conclusion. It seems to me to do the opposite, and I am not alone here. Halmos admits himself in his autobiography: "The Bernstein-Robinson proof uses non-standard models of higher order predicate languages, and when Abby [Robinson] sent me his preprint I really had to sweat to pinpoint and translate its mathematical insight." Halmos did sweat because, as all of his comments and actions regarding NSA indicate, he was against it for philosophical or personal reasons, and so he was eager to downplay this result precisely because it seemed like support for using NSA, which, at least in Robinson's approach, is nonconstructive due to the reliance on the existence of nonprincipal ultrafilters (also, the compactness theorem relies on some equivalent of the axiom of choice).
Also, the fact that a formal proof of some formula exists (which is precisely what it means to be a theorem) is only trivially relevant to the question of whether a theory might help you find a proof. Besides, who other than automated theorem-provers actually thinks in terms of formal proofs? In my experience, the concepts and tools of a theory, the objects that it lets you talk about, and the ideas that it lets you express are what make a theory useful for proving things.
One thing that the OP might find attractive about NSA is that saying "x is infinitely close to y" is perfectly fine and meaningful -- and it probably means what you already think it means: two numbers are infinitely close iff their difference is infinitely small, i.e., an infinitesimal. You also get things like halos (all numbers infinitely close to some number) and shadows (the standard number infinitely close to some number), which can be fun and intuitive concepts to think with.
For example, here is how the limit of a (hyperreal) sequence is defined. First, sequences are no longer indexed by the natural numbers $\mathbb{N}$. Rather, sequences are indexed by the hypernaturals $^*\mathbb{N}$, which include numbers larger than any standard natural. Such numbers are called infinite (or unlimited). (Warning: this is not the same concept as "infinity" in "as x goes to infinity"; infinite naturals are smaller than (positive) infinity, when it makes sense to compare them.) Now, a hyperreal $L$ is the limit of a sequence $\langle s_n \rangle$ (indexed by $^*\mathbb{N}$!) iff $L$ is infinitely close to $s_n$ for all infinite $n$.
For another example, consider proofs using "sloppy" reasoning where you end up with some infinitesimal term and so just ignore it or drop it from an equation (provoking derisive comments about "ghosts of departed quantities"). In NSA, rather than ignoring the term, you can actually say that it's infinitesimal and end up with a result that is infinitely close to the result of your sloppy alternative. E.g., let (the hyperfunction) $f(x) = x^2$ and consider the (I presume familiar) formula for the derivative, where we will let $h$ be a nonzero infinitesimal:
$$\begin{align}
\frac{f(x+h) - f(x)}{h} &= \frac{(x+h)^2 - x^2}{h} \\
&= \frac{x^2 + 2xh + h^2 - x^2}{h} \\
&= 2x + h \\
&\simeq 2x
\end{align}$$
The symbol $\simeq$ denotes the relation "infinitely close". This derivation works because, when $h$ is an infinitesimal, $a + h$ is infinitely close to $a$ for any hyperreal $a$. Under sensible restrictions on $f$ and $x$, this derivation shows that $2x$ is the standard derivative of $x^2$, as every schoolgirl knows.
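The same "drop the infinitesimal" move can be mimicked mechanically with dual numbers ($a + b\varepsilon$ with $\varepsilon^2 = 0$), which is how forward-mode automatic differentiation works. This sketch is my own illustration, not part of NSA proper:

```python
class Dual:
    """Dual number a + b*eps with eps**2 == 0; the eps coefficient
    tracks the derivative, mirroring how the h**2 term vanishes."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a1 + b1 eps)(a2 + b2 eps) = a1 a2 + (a1 b2 + a2 b1) eps
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__

def f(x):
    return x * x  # f(x) = x^2

# Evaluate f at x + eps; the eps coefficient is exactly f'(x) = 2x.
x = Dual(3.0, 1.0)
print(f(x).b)  # → 6.0
```

The $h^2$ term of the hand derivation corresponds to the $\varepsilon^2$ term here, which the multiplication rule discards by construction rather than by an "infinitely close" argument.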
A cost-benefit analysis for learning NSA should probably include (i) for a benefit, how interesting or valuable you find the nonstandard concepts and (ii) for a cost, how much work you'll have to do to learn it. The latter will depend on what text or approach you choose. If you are willing to take some things for granted and just use the resulting tools, you can get away with bypassing a good chunk of the model-theoretic machinery (compactness, ultrafilters, elementary extensions, transfer, formal languages). If you understand the ultrapower construction, which constructs the hyperreals as equivalence classes of infinite sequences of real numbers (similar to the construction of the reals from the rationals using Cauchy sequences), then the resulting system behaves like you would expect -- relations and operations are defined componentwise. This part is relatively easy. Alternatively, you can get away with not understanding the construction very well if you are willing to internalize the definitions of the relations and operations on the hyperreals just as axiomatic.
If you want to look into NSA, I would recommend either (a) Goldblatt's
Lectures on the Hyperreals if you don't have a strong background or interest in mathematical logic or (b) Hurd and Loeb's Introduction to Nonstandard Real Analysis otherwise. The latter is out of print and sadly about $100 if you want to buy it, but check libraries. It's very thoughtful and well-written. Also, if you are excited about the model-theoretic aspects, look them up in Chang and Keisler's Model Theory book as you go along. Hodges' model theory book is also very good but doesn't cover this material as extensively.
Cheers, Rachel |
I have that $R$ is the $k$-algebra ($k$ is a field) finitely generated by $S=\{f_1, \ldots, f_m\}\subset k[x_1, \ldots, x_n]$, and this set of polynomials is minimal with respect to inclusion (i.e., we don't have redundant elements). However, I know that the $f_i$'s are algebraically dependent. Is the number $m=|S|$ unique? What are the conditions? |
This is not true. There are silly examples if the $f_i$ are reducible, and slightly less silly examples if the $f_i$ are required to be irreducible.
I am assuming that you fix the presentation $\phi \colon k[x_1,\ldots,x_n] \twoheadrightarrow R$ (and hence the ideal $I = \ker \phi$ cutting out $R$), but not the generators of $I$. Otherwise I cannot make sense of "minimal with respect to inclusion".
Example 1. Let $R = k[x]/(x)$. Then the minimal number of generators of $I = (x)$ is $1$. However, $(x) = (x(x+1), x(x+2))$, since $(x+1,x+2) = (1)$. Then the set of generators $\{x(x+1),x(x+2)\}$ is inclusionwise minimal, because each element separately cuts out two points (instead of $1$) in $\mathbb A^1$.
Here is a less silly example, nonetheless of a similar flavour.
Example 2. Consider $R = k[x,y]/(x,y)$. The minimal number of generators of $I = (x,y)$ is $2$. But we also have $(x,y) = (y,y-x^2,(x-1)^2+y^2-1)$ (if $\operatorname{char}(k) \neq 2$). Geometrically, we are using the $x$-axis, the parabola $y = x^2$, and the circle around $(1,0)$ with radius $1$ to cut out the origin.
The axis and the parabola have a tangent direction in common, and the circle meets both other curves in more points than just the origin. Hence, no two of the three generate the same ideal, so the set of generators is inclusionwise minimal. |
10.7. Adagrad¶
In the optimization algorithms we introduced previously, each element of the objective function’s independent variables uses the same learning rate at the same time step for self-iteration. For example, if we assume that the objective function is \(f\) and the independent variable is a two-dimensional vector \([x_1, x_2]^\top\), each element in the vector uses the same learning rate when iterating. For example, in gradient descent with the learning rate \(\eta\), elements \(x_1\) and \(x_2\) both use the same learning rate \(\eta\) for iteration:
\[x_1 \leftarrow x_1 - \eta \frac{\partial f}{\partial x_1}, \qquad x_2 \leftarrow x_2 - \eta \frac{\partial f}{\partial x_2}.\]
In Section 10.6, we saw that, when there is a big difference between the gradients with respect to \(x_1\) and \(x_2\), a sufficiently small learning rate needs to be selected so that the independent variable will not diverge in the dimension with larger gradient values. However, this will cause the independent variable to iterate too slowly in the dimension with smaller gradient values. The momentum method relies on the exponentially weighted moving average (EWMA) to make the direction of the independent variable more consistent, thus reducing the possibility of divergence. In this section, we are going to introduce Adagrad [Duchi.Hazan.Singer.2011], an algorithm that adjusts the learning rate according to the gradient value of the independent variable in each dimension, eliminating the problems caused when a unified learning rate has to adapt to all dimensions.
10.7.1. The Algorithm¶
The Adagrad algorithm uses the cumulative variable \(\boldsymbol{s}_t\) obtained from a square by element operation on the mini-batch stochastic gradient \(\boldsymbol{g}_t\). At time step 0, Adagrad initializes each element in \(\boldsymbol{s}_0\) to 0. At time step \(t\), we first sum the results of the square by element operation for the mini-batch gradient \(\boldsymbol{g}_t\) to get the variable \(\boldsymbol{s}_t\):
\[\boldsymbol{s}_t \leftarrow \boldsymbol{s}_{t-1} + \boldsymbol{g}_t \odot \boldsymbol{g}_t.\]
Here, \(\odot\) is the symbol for multiplication by element. Next, we re-adjust the learning rate of each element in the independent variable of the objective function using element operations:
\[\boldsymbol{x}_t \leftarrow \boldsymbol{x}_{t-1} - \frac{\eta}{\sqrt{\boldsymbol{s}_t + \epsilon}} \odot \boldsymbol{g}_t.\]
Here, \(\eta\) is the learning rate while \(\epsilon\) is a constant added to maintain numerical stability, such as \(10^{-6}\). Here, the square root, division, and multiplication operations are all element operations. Each element in the independent variable of the objective function will have its own learning rate after the operations by elements.
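To make the per-element update concrete, here is a minimal, framework-free NumPy sketch of a single Adagrad step on the quadratic objective used below (the function and variable names are mine):

```python
import numpy as np

def adagrad_step(x, s, grad, eta=0.4, eps=1e-6):
    """One Adagrad update: s accumulates squared gradients elementwise,
    so each coordinate of x gets its own effective learning rate."""
    s = s + grad * grad                    # elementwise square and accumulate
    x = x - eta * grad / np.sqrt(s + eps)  # per-coordinate rescaled step
    return x, s

x = np.array([-5.0, -2.0])
s = np.zeros(2)
grad = np.array([0.2 * x[0], 4 * x[1]])   # gradient of f = 0.1*x1^2 + 2*x2^2
x, s = adagrad_step(x, s, grad)
# The x2 direction had the larger gradient, so its accumulated s2 is
# larger and its effective learning rate shrinks faster on later steps.
```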
10.7.2. Features¶
We should emphasize that the cumulative variable \(\boldsymbol{s}_t\) produced by a square by element operation on the mini-batch stochastic gradient is part of the learning rate denominator. Therefore, if an element in the independent variable of the objective function has a constant and large partial derivative, the learning rate of this element will drop faster. On the contrary, if the partial derivative of such an element remains small, then its learning rate will decline more slowly. However, since \(\boldsymbol{s}_t\) accumulates the square by element gradient, the learning rate of each element in the independent variable declines (or remains unchanged) during iteration. Therefore, when the learning rate declines very fast during early iteration, yet the current solution is still not desirable, Adagrad might have difficulty finding a useful solution because the learning rate will be too small at later stages of iteration.
Below we will continue to use the objective function \(f(\boldsymbol{x})=0.1x_1^2+2x_2^2\) as an example to observe the iterative trajectory of the independent variable in Adagrad. We are going to implement Adagrad using the same learning rate as the experiment in the last section, \(0.4\). As we can see, the iterative trajectory of the independent variable is smoother. However, due to the cumulative effect of \(\boldsymbol{s}_t\), the learning rate continuously decays, so the independent variable does not move as much during later stages of iteration.
%matplotlib inline
import d2l
import math
from mxnet import np, npx
npx.set_np()

def adagrad_2d(x1, x2, s1, s2):
    # The first two terms are the independent variable gradients
    g1, g2, eps = 0.2 * x1, 4 * x2, 1e-6
    s1 += g1 ** 2
    s2 += g2 ** 2
    x1 -= eta / math.sqrt(s1 + eps) * g1
    x2 -= eta / math.sqrt(s2 + eps) * g2
    return x1, x2, s1, s2

def f_2d(x1, x2):
    return 0.1 * x1 ** 2 + 2 * x2 ** 2

eta = 0.4
d2l.show_trace_2d(f_2d, d2l.train_2d(adagrad_2d))
epoch 20, x1 -2.382563, x2 -0.158591
Now, we are going to increase the learning rate to \(2\). As we can see, the independent variable approaches the optimal solution more quickly.
eta = 2
d2l.show_trace_2d(f_2d, d2l.train_2d(adagrad_2d))
epoch 20, x1 -0.002295, x2 -0.000000
10.7.3. Implementation from Scratch¶
Like the momentum method, Adagrad needs to maintain a state variable of the same shape for each independent variable. We use the formula from the algorithm to implement Adagrad.
def init_adagrad_states(feature_dim):
    s_w = np.zeros((feature_dim, 1))
    s_b = np.zeros(1)
    return (s_w, s_b)

def adagrad(params, states, hyperparams):
    eps = 1e-6
    for p, s in zip(params, states):
        s[:] += np.square(p.grad)
        p[:] -= hyperparams['lr'] * p.grad / np.sqrt(s + eps)
Compared with the experiment in Section 10.5, here, we use a larger learning rate to train the model.
data_iter, feature_dim = d2l.get_data_ch10(batch_size=10)
d2l.train_ch10(adagrad, init_adagrad_states(feature_dim),
               {'lr': 0.1}, data_iter, feature_dim);
loss: 0.243, 0.057 sec/epoch
10.7.4. Concise Implementation¶
Using the Trainer instance of the algorithm named “adagrad”, we can implement the Adagrad algorithm with Gluon to train models.
d2l.train_gluon_ch10('adagrad', {'learning_rate': 0.1}, data_iter)
loss: 0.243, 0.071 sec/epoch
10.7.5. Summary¶
Adagrad constantly adjusts the learning rate during iteration to give each element in the independent variable of the objective function its own learning rate.
When using Adagrad, the learning rate of each element in the independent variable decreases (or remains unchanged) during iteration.
10.7.6. Exercises¶
When introducing the features of Adagrad, we mentioned a potential problem. What solutions can you think of to fix this problem?
Try to use other initial learning rates in the experiment. How does this change the results? |
Given a list of intervals $[s_1, e_1], [s_2, e_2], \ldots$, what's the most efficient way to determine if an interval $[a, b]$ can be covered by the intervals in the list?
closed as unclear what you're asking by Raphael♦ Feb 25 '14 at 20:19
I don't know if it's the most efficient algorithm, but here's my suggestion:
Your problem is somewhat similar to balancing parentheses: you can put all $s_i$ and $e_i$ values in a list and sort it (let's call this list $V$), but keep the type of each element ($s$ or $e$ element) attached to the value. Then use this list to integrate all the intervals with the following algorithm (uniting overlapping intervals into one interval):
create a list of intervals L, create a stack St
for every element n in the sorted list V (from the smallest to the largest) {
    if n is an s element and St is empty {
        create a new interval I (a pair of numbers)
        assign I.s = n
        push some value to St
    } else if n is an s element and St isn't empty {
        push some value to St
    } else if n is an e element (stack can't be empty here) {
        pop a value from St
        if St is empty {
            assign I.e = n
            add interval I to the interval list L
        }
    }
}
return L
Now you have a list of integrated intervals, and checking if $[a,b]$ is covered by one of them is easy: for every interval $I$ in $L$, check if $a \geq I.s$ and $b \leq I.e$ (there are no overlapping intervals in $L$ now, they were integrated in the previous step).
Integrating the intervals to create $L$ takes $O(n)$ computation time, and so does checking if $[a,b]$ is covered by one of the intervals in $L$, which makes sorting $V$ the most expensive task here, at $O(n\log{n})$.
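For concreteness, here is one way the merge-and-check idea might look in Python (a sketch under the assumption that intervals are closed and given as pairs; the names are mine). Sorting via `sorted` does the equivalent of building $V$, and the merge pass plays the role of the stack:

```python
def merge_intervals(intervals):
    """Merge overlapping (or touching) intervals into a sorted, disjoint list."""
    merged = []
    for s, e in sorted(intervals):
        if merged and s <= merged[-1][1]:
            # Overlaps the previous merged interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return merged

def is_covered(intervals, a, b):
    """True iff [a, b] lies inside a single merged interval."""
    return any(s <= a and b <= e for s, e in merge_intervals(intervals))

# [0,2] and [1,5] merge to [0,5], which covers [0,4]:
print(is_covered([(0, 2), (1, 5)], 0, 4))   # True
# [0,2] and [3,5] stay disjoint, so [1,4] is not covered:
print(is_covered([(0, 2), (3, 5)], 1, 4))   # False
```

The sort dominates at $O(n\log n)$; the merge pass and the membership check are both linear, matching the cost analysis above.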
Hope this helps, though there might be a linear time algorithm for performing this task... |
The impatient reader can skip my attempt at motivation and go straight to my "Question formulations for the impatient."
In a failed(?) attempt at discovering something new, some years ago I toyed with the idea of dualizing the notion of a compact topological space. So starting from the "open covers have finite subcovers" formulation, I viewed covers of a space $X$ as surjective maps, to $X$, from a coproduct of subobjects.
Dualizing thus led me to look at injective maps $X\rightarrow \prod_{i\in I} Q_i$ from $X$ to a product of various quotients $Q_i$ of $X$. So call such a thing a co-cover. Given a co-cover and a subset $J\subset I$, projection yields $X\rightarrow \prod_{i\in J} Q_i$, so call that a sub-co-cover; and with $J$ finite, call it a finite sub-co-cover.
Call a co-cover $X\rightarrow \prod_{i\in I} Q_i$ of $X$
open, if for each $x\in X$ there exists some finite $K\subset I$ so that some neighborhood of $x$ maps injectively to $\prod_{i\in K} Q_i$.
Call a space
opcompact (since cocompact already has a meaning) if every open co-cover has a finite sub-co-cover.
My disappointment: opcompact turns out equivalent to compact, so nothing new (yet). Proof sketch:
Compact implies opcompact:
For each $x\in X$, pick an open neighborhood $N_x$ of $x$ and a set $K$, so that $N_x$ maps injectively to $\prod_{i\in K} Q_i$. The $N_x$ form a cover with a finite subcover, and the union of the associated $K$'s determines the desired sub-co-cover.
Opcompact implies compact:
Given $X$ with an open cover $\{U_i\}$ that has no finite subcover, get an open co-cover with no finite sub-co-cover from $X\rightarrow \prod X/U_i^c$ where $X/U_i^c$ means the quotient of $X$ where the complement of $U_i$ collapses to a point.
Even with Hausdorff $X$, the "opcompact implies compact" argument may require non-Hausdorff spaces $X/U_i^c$ - we would need $X$ regular to have all these spaces Hausdorff a priori.
So call a co-cover $X\rightarrow \prod_{i\in I} Q_i$ Hausdorff if it uses only Hausdorff $Q_i$. Define Hausdorff-opcompact to mean that every Hausdorff co-cover has a finite sub-co-cover.
Now my question: for Hausdorff spaces, does Hausdorff-opcompact imply compact?
Question formulation for the impatient: Must a noncompact Hausdorff space $X$ admit an infinite family of Hausdorff quotients $Q_i$ so that $X$ maps injectively into $\prod_i Q_i$ but not into any finite projection?
A parallel question arises on replacing Hausdorff by regular (and regular by normal):
Parallel question: for regular spaces, does regular-opcompact (with the obvious meaning) imply compact?
Parallel question formulation for the impatient: Must a noncompact regular space $X$ admit an infinite family of regular quotients $Q_i$ so that $X$ maps injectively into $\prod_i Q_i$ but not into any finite projection?
As always I welcome all pertinent remarks/answers on the general circle of ideas, so not only focused answers to the questions I've actually posed. |
I'm following the methodology outlined in Developing High-Frequency Equities Trading Models. On page 27, the author outlines an OLS regression model to obtain beta coefficients. The model is defined as:
$$r_{t+1} + ... + r_{t+H} = \beta_1\sum_{i=0}^HD_{t-i,1}+...+\beta_{k}\sum_{i=0}^HD_{t-i,k}+\eta_{t+H,H}$$
Where $r_{t+1} + ... + r_{t+H}$ is the accumulated $H$-period log return, $D=R^{T,k}$ is a $T \times k$ matrix of dimensionally reduced log returns (principal components) obtained after projecting de-meaned log returns on the highest $k$ eigenvectors.
The author defines the process estimating $r_{t+1} + ... + r_{t+H}$ as such:
We calculated the accumulated future $H$-period log returns. Then we ran a regression, estimated by OLS, on the future accumulated log returns with the last sum of $H$-period dimensionally reduced returns in the principal component space.
I'm struggling a bit to understand how to implement this model. I suppose where I am most confused is in the statement "We calculated the accumulated future $H$-period log returns." I can see clearly that the independent variables are the accumulated $H$-period dimensionally reduced log returns, but I do not understand what the dependent variable would be in this case, as we do not have future accumulated returns.
This is likely simply a question of syntax, but one that has confused me in the past as well. So stated simply, what is the dependent variable in this model? |
When doing multiple linear regression,
$\boldsymbol{Y} = X\boldsymbol{\beta}+\boldsymbol{\epsilon}$
where
$\boldsymbol{\epsilon} \sim N(0, \Sigma)$
The best estimates of the coefficients can be found using the generalised least squares formula:
$\boldsymbol{\hat{\beta}} = (X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}\boldsymbol{Y}$.
From this we can find:
$Var(\boldsymbol{\hat{\beta}}) = (X'\Sigma^{-1}X)^{-1}$
Alternatively, we know that the asymptotic distribution of the estimators is given by
$\boldsymbol{\hat{\beta}} \sim N(\boldsymbol{\beta}, I^{-1})$
where I is the information matrix, and hence
$Var(\boldsymbol{\hat{\beta}}) = I^{-1}$
I have been working through a problem and have been calculating the best estimates of the coefficients using two methods, either by maximising the log-likelihood numerically, or by using generalised least squares (GLS). In both cases I get the same estimates for the parameters; however, the covariance matrices are very different, i.e., I would expect
$(X'\Sigma^{-1}X)^{-1} \approx I^{-1}$
however I am finding that the variances are much smaller when using the Information matrix. This is obviously causing issues when doing hypothesis tests etc. I am simulating my own data and so know what to expect when doing these tests, and it works nicely when using GLS, but the lower variance from the Information matrix is giving me inconsistent results.
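For reference, here is the sanity check I would expect to pass when $\Sigma$ is known (simulated data; all names are mine): the negative log-likelihood is then exactly quadratic in $\boldsymbol{\beta}$, so its Hessian — the observed information — equals $X'\Sigma^{-1}X$, and the two covariance formulas must coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 2
X = rng.normal(size=(n, p))
beta = np.array([1.0, -0.5])
Sigma_inv = np.diag(1.0 / rng.uniform(0.5, 2.0, size=n))  # known, diagonal
y = X @ beta  # noiseless, so beta is the exact maximiser

def nll(b):
    r = y - X @ b
    return 0.5 * r @ Sigma_inv @ r  # negative log-likelihood, constants dropped

# Numerical Hessian of the negative log-likelihood at the optimum
h, I2 = 1e-4, np.eye(p)
H = np.zeros((p, p))
for i in range(p):
    for j in range(p):
        H[i, j] = (nll(beta + h*I2[i] + h*I2[j]) - nll(beta + h*I2[i])
                   - nll(beta + h*I2[j]) + nll(beta)) / h**2

info = X.T @ Sigma_inv @ X  # Fisher information for beta with Sigma known
assert np.allclose(H, info, rtol=1e-4, atol=1e-4)
```

In my experience a mismatch like yours often traces to a scaling issue in the numerical Hessian (step size, a dropped factor of 2, or a reparameterisation of $\Sigma$), so it may help to verify this quadratic case first.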
Any ideas why this is happening? |
I think the work of Dr. Paul Garabedian (and Dr. Schiffer)[1], and of Dr. Mel'nikov (who built on Dr. Garabedian's result), are important theorems that were almost forgotten. I'll share the main theorem from Dr. Mel'nikov's work[2], as it incorporates the main result from Dr. Garabedian's:
Given complex numbers $z_1,\ldots,z_n$ and a parameter $r > 0$ such that $|z_i-z_j| > 2r$, $i \neq j$, we denote by $A = A(z_1,\ldots,z_n,r)$ the $(n \times n)$-matrix with entries $$ \alpha_{i,j} = r \sum_{k \neq i, k \neq j} \frac{1}{(z_i-z_k)\overline{(z_j-z_k)} - r^2}, 1 \leq i \leq n, 1 \leq j \leq n$$
Definition 1: We set
$$\lambda_1 = \lambda_1(z_1,\ldots,z_n,r) = ((r^{-1}I + A)^{-1}(\mathbf{1}),\mathbf{1}),$$
where $I$ is the standard identity $(n\times n$)-matrix, the vector $\mathbf{1} = (1,\ldots,1)\in\mathbb{C}^n$, and $(\cdot,\cdot)$ is the standard Hermitian scalar product in $\mathbb{C}^n$.
Theorem 1: For each bounded open set $G \subset \mathbb{C}$:$$\gamma (G) = \sup\{\lambda_1(z_1,\ldots,z_n,r)\},$$where the supremum is taken over all finite collections of points $\{z_1,\ldots,z_n\} \subset G$ and values of $r > 0$ such that $|z_i - z_j| > 2r$, $i \neq j$, and the distance $d(z_i,\delta G) > r$ for all $i$ (in other words, the union of disjoint discs $\Delta(z_i,r) = \{|z-z_i| < r\}$ is in $G$).
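To see the definition in action, here is a small illustrative evaluation of $\lambda_1$ (my own sketch; the disc configurations below are hypothetical, and the matrix and Hermitian product follow Definition 1):

```python
import numpy as np

def lambda_1(z, r):
    """Evaluate lambda_1(z_1,...,z_n, r) from Definition 1."""
    z = np.asarray(z, dtype=complex)
    n = len(z)
    A = np.zeros((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if k != i and k != j:
                    A[i, j] += r / ((z[i] - z[k]) * np.conj(z[j] - z[k]) - r**2)
    one = np.ones(n)
    v = np.linalg.solve(np.eye(n) / r + A, one)  # (r^{-1} I + A)^{-1} (1)
    return np.vdot(one, v).real                  # Hermitian product with 1

# For a single disc the sum defining A is empty, so lambda_1 = r,
# which is the analytic capacity of a disc of radius r.
single = lambda_1([0.0], 0.3)
# Two well-separated discs (|z_1 - z_2| = 1 > 2r) give a larger
# lower bound for gamma(G).
pair = lambda_1([0.0, 1.0], 0.3)
```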
I used/demonstrated/incorporated this result "by accident", and only discovered these works about 18 months ago when I sought to explain my result rigorously. I have not seen any work aside from my own that uses this result, maybe because calculation/representation of a measure is not only conceptually challenging (both in terms of implementation and visualisation), but finding empirical data that is collected in both a uniform and robust manner (such that it can be subject to these types of analyses) is also difficult.
However in terms of mathematical analysis, I have a hard time thinking of other work that is of this calibre. There is a lot of meat on the bone in terms of empirical/theoretical proofs.
I found the work of Dr. Garabedian especially impressive because he came to be known as a Computational Fluid Dynamicist, but it seems clear from his astounding result in 1950 that he was originally a mathematician. I believe that such a successful transition is demonstrative of the empirical/theoretical boundaries encroached by this work.
Very few are blessed to be able to contribute to this ("narrow") area, but I feel those who are interested in the numeric aspect of Calculus, and have good proof skills, may stand to benefit.
This result is Kelvin-esque, if you ask me.
[1]
Garabedian, P.R.; Schiffer, M., On existence theorems of potential theory and conformal mapping, Ann. Math. (2) 52, 164-187 (1950). ZBL0040.32903.,
[2]
Mel’nikov, M.S., Analytic capacity: Discrete approach and curvature of measure, Sb. Math. 186, No.6, 827-846 (1995); translation from Mat. Sb. 186, No.6, 57-76 (1995). ZBL0840.30008. |
Astrophysics > Cosmology and Nongalactic Astrophysics
Title: Effect of Template Uncertainties on the WMAP and Planck Measures of the Optical Depth Due To Reionization
(Submitted on 4 Jan 2018 (v1), last revised 8 Jan 2019 (this version, v3))
Abstract: The reionization optical depth is the most poorly determined of the six $\Lambda$CDM parameters fit to CMB anisotropy data. Instrumental noise and systematics have prevented uncertainties from reaching their cosmic variance limit. At present, the datasets providing the most statistical constraining power are the WMAP, Planck LFI, and Planck HFI full-sky polarization maps. As the reprocessed HFI data with reduced systematics are not yet publicly available, we examine determinations of $\tau$ using 9-year WMAP and 2015 Planck LFI data, with an emphasis on characterizing potential systematic bias resulting from foreground template and masking choices. We find evidence for a low-level systematic in the LFI polarization data with a roughly common-mode morphology across the LFI frequencies and a spectrum consistent with leakage of intensity signal into the polarization channels. We demonstrate significant bias in the optical depth derived when using the LFI 30 GHz map as a template to clean synchrotron from WMAP data, and recommend against use of the 2015 LFI 30 GHz polarization data as a foreground template for non-LFI datasets. We find an inconsistency between versions of the 2015 polarized 353 GHz dust templates reconstructed from the Planck likelihood and those from delivered maps, which can affect $\tau$ at the 1$\sigma$ level. The spread in $\tau$ values over the ensemble of data combinations we study suggests that systematic uncertainties still contribute significantly to the current uncertainty in $\tau$, but all values are consistent with the range of $\tau = 0.07 \pm 0.02$.
Submission history
From: Janet L. Weiland
[v1] Thu, 4 Jan 2018 02:25:55 GMT (1378kb)
[v2] Wed, 10 Jan 2018 21:14:46 GMT (1378kb)
[v3] Tue, 8 Jan 2019 18:17:44 GMT (1379kb) |
Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function.When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1...
Consider a random binary string where each bit can be set to 1 with probability $p$.Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ and the $x$-th bit is set to 1. Moreover, $y$ bits are set 1 including the $x$-th bit and there are no runs of $k$ consecutive zer...
The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$.
Why in def of algebraic closure, do we need $\overline F$ is algebraic over $F$? That is, if we remove '$\overline F$ is algebraic over $F$' condition from def of algebraic closure, do we get a different result?
Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$).Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa...
@AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works.
Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months.
Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length. Is that often part of the definition or attained from the definition (I don't see how it could be the latter)?
Well, that then becomes a chicken and the egg question. Did we have the reals first and simplify from them to more abstract concepts or did we have the abstract concepts first and build them up to the idea of the reals.
I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ...
I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side.
On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them, or where their definition appears in Hatcher's book?
suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$; then for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left(\left|\dfrac{z}{z_0}\right|\right)^n < \dfrac12 \left(\left|\dfrac{z}{z_0}\right|\right)^n$, so $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable
Let $g : [0,\frac{1}{2}] \to \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] \to \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\, ds$, for all $n \geq 1$. Show that $\lim_{n\to\infty} n!\,g_n(t) = 0$, for all $t \in [0,\frac{1}{2}]$.
Can you give some hint?
My attempt:- $t\in [0,1/2]$ Consider the sequence $a_n(t)=n!g_n(t)$
If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.
I have a bilinear functional that is bounded from below
I try to approximate the minimum by an ansatz function that is a linear combination
of any independent functions of the proper function space
I now obtain an expression that is bilinear in the coefficients
using the stationarity condition (all derivatives of the functional w.r.t. the coefficients = 0)
I get a set of $n$ equations, with $n$ the number of coefficients:
a set of $n$ linear homogeneous equations in the $n$ coefficients
Now instead of "directly attempting to solve" the equations for the coefficients I rather look at the secular determinant that should be zero, otherwise no non trivial solution exists
This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz.
This avoids the necessity to solve for the coefficients.
I have trouble formulating the question precisely. But it strikes me that a direct solution of the equations can be circumvented, and instead the values of the functional are directly obtained by using the condition that the determinant is zero.
I wonder if there is something deeper in the background, or, so to say, a more general principle.
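A numeric sketch of the pattern described above (my own construction, with a hypothetical symmetric positive definite operator): stationarity of the coefficients gives the homogeneous system $(H - \lambda S)c = 0$, and the secular-determinant condition $\det(H - \lambda S) = 0$ is exactly a generalized eigenvalue problem, so the permissible values of the functional come out as eigenvalues without ever solving for the coefficients:

```python
import numpy as np

# Rayleigh-Ritz-style sketch: approximate the minimum of the quotient
# <u, A u> / <u, u> over the span of a few trial vectors.
rng = np.random.default_rng(1)
n, m = 8, 3                   # full dimension, number of trial functions
M = rng.normal(size=(n, n))
A = M @ M.T                   # a symmetric positive definite "operator"
B = rng.normal(size=(n, m))   # m independent trial (ansatz) vectors

H = B.T @ A @ B               # the bilinear functional in coefficient space
S = B.T @ B                   # overlap matrix of the basis
# det(H - lam*S) = 0 is the generalized eigenvalue problem for (H, S):
lam = np.linalg.eigvals(np.linalg.solve(S, H))
ritz_min = np.min(lam.real)

exact_min = np.min(np.linalg.eigvalsh(A))
# Ritz values bound the true minimum from above (min-max principle):
assert ritz_min >= exact_min - 1e-9
```

The deeper principle lurking here, I believe, is the variational characterization of eigenvalues: stationary values of a quotient of quadratic forms are precisely the eigenvalues of the associated pencil.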
If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome and satisfies digitsum($z$) = digitsum($x$).
> Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel.
(Translation: It is well-known that Paul du Bois-Reymond was the first to demonstrate the existence of a continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave an easier example.)
It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis. See pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!! |
I don’t think we’re clear on what simulation is NOT. RANDOMNESS IS NOT NECESSARY, for the simple reason randomness is merely a state of knowledge. Hence this classic post from 12 June 2017.
“Let me get this straight. You said
what makes your car go?”
“You heard me. Gremlins.”
“Gremlins make your car go.”
“Look, it’s obvious. The car runs, doesn’t it? It has to run for some reason, right? Everybody says that reason is gremlins. So it’s gremlins. No, wait. I know what you’re going to say. You’re going to say I don’t know why gremlins make it go, and you’re right, I don’t. Nobody does. But it’s gremlins.”
“And if I told you instead your car runs by a purely mechanical process, the result of internal combustion causing movement through a complex but straightforward process, would that interest you at all?”
“No. Look, I don’t care. It runs and that it’s gremlins is enough explanation for me. I get where I want to go, don’t I? What’s the difference if it’s gremlins or whatever it is you said?”
MCMC
That form of reasoning is used by defenders of simulations, a.k.a. Monte Carlo or MCMC methods (the other MC is for Markov Chain), in which gremlins are replaced by “randomness” and “draws from distributions.” Like the car run by gremlins, MCMC methods get you where you want to go, so why bother looking under the hood for more complicated explanations? Besides, doesn’t everybody agree simulations work by gremlins—I mean, “randomness” and “draws”?
Here is an abbreviated example from Uncertainty which proves it's a mechanical process, and not gremlins or randomness, that accounts for the success of MCMC methods.
First let's use gremlin language to describe a simple MCMC example. Z, I say, is "distributed" as a standard normal, and I want to know the probability Z is less than -1. Now the normal distribution is not an analytic equation, meaning I cannot just plug in numbers and calculate an answer. There are, however, many excellent approximations to do the job near enough, meaning I can with ease calculate this probability to reasonable accuracy. The R software does so by typing pnorm(-1), which gives 0.1586553. This gives us something to compare our simulations to.
I could also get at the answer using MCMC. To do so I randomly—recall we’re using gremlin language—simulate a large number of draws from a standard normal, and count how many of these simulations are less than -1. Divide that number by the total number of simulations, and there is my approximation to the probability. Look into the literature and you will discover all kinds of niceties to this procedure (such as computing how accurate the approximation is, etc.), but this is close enough for us here. Use the following self-explanatory R code:
n = 10000
z = rnorm(n)
sum(z < -1)/n
I get 0.158, which is for applications not requiring accuracy beyond the third digit peachy keen. Play around with the size of n: e.g., with n = 10, I get for one simulation 0.2, which is not so hot. In gremlin language, the larger the number of draws the closer will the approximation "converge" to the right answer.
All MCMC methods are the same as this one in spirit. Some can grow to enormous complexity, of course, but the base idea, the philosophy, is all right here. The approximation is seen as legitimate not just because we can match it against a near-analytic answer, because we can't do that for any situation of real interest (if we could, we wouldn't need simulations!). It is seen as legitimate because of the way the answer was produced. Random draws imbued the structure of the MCMC "process" with a kind of mystical life. If the draws weren't random---and never mind defining what random really means---the approximation would be off, somehow, like in a pagan ceremony where somebody forgot to light the black randomness candle.
Of course, nobody speaks in this way. Few speak of the process at all, except to say it was gremlins; or rather, "randomness" and "draws". It's stranger still because the "randomness" is all computer-generated, and it is known computer-generated numbers aren't "truly" random. But, somehow, the whole thing still works, like the randomness candle has been swapped for a (safer!) electric version, and whatever entities were watching over the ceremony were satisfied the form has been met.
Mechanics
Now let's do the whole thing over in mechanical language and see what the differences are. By assumption, we want to quantify our uncertainty in Z using a standard normal distribution. We seek Pr(Z < -1 | assumption). We do
not say Z "is normally distributed", which is gremlin talk. We say our uncertainty in Z is represented using this equation by assumption.
One popular way of "generating normals" (in gremlin language) is to use what's called a Box-Muller transformation. Any algorithm which needs "normals" can use this procedure. It starts by "generating" two "random independent uniform" numbers U_1 and U_2 and then calculating this creature:
Z = \sqrt{-2 \ln U_1} \cos(2 \pi U_2),
where Z is now said to be "standard normally distributed." We don't need to worry about the math, except to notice that it is written as a causal, or rather determinative, proposition: "If U_1 is this and U_2 is that, Z is this
with certainty." No uncertainty enters here; U_1 and U_2 determine Z. There is no life to this equation; it is (in effect) just an equation which translates a two-dimensional straight line on the interval 0 to 1 (in 2-D) to a line with a certain shape which runs from negative infinity to positive infinity.
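(Editorial sketch: the determinism is plain when the transformation is written as a bare function, here in Python rather than the post's R; the function name is mine.)

```python
import math

def box_muller(u1, u2):
    # purely determinative: (u1, u2) fixes z with certainty
    return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

# same inputs, same output, every time; this is the first Z value
# quoted below for the pair (0.01, 0.01)
print(box_muller(0.01, 0.01))
```

Call it a million times with (0.01, 0.01) and you get the same number a million times. No uncertainty, no life, no gremlins.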
To get the transformation, we simply write down all the numbers in the paired sequence (0.01, 0.01), (0.01, 0.02), ..., (0.99, 0.99). The decision to use two-digit accuracy was mine, just as I had to decide n above. This results in a sequence of pairs of numbers (U_1, U_2) of length 9801. For each pair, we apply the determinative
mapping of (U_1, U_2) to produce Z as above, which gives (3.028866, 3.010924, ..., 1.414971e-01). Here is the R code (not written for efficiency, but transparency):

ep = 0.01 # the (st)ep
u1 = seq(ep, 1-ep, by = ep) # gives 0.01, 0.02, ..., 0.99
u2 = u1
z = NA # start with an empty vector
k = 0 # just a counter
for (i in u1){
  for (j in u2){
    k = k + 1
    z[k] = sqrt(-2*log(i))*cos(2*pi*j) # the transformation
  }
}
z[1:10] # shows the first 10 numbers of z
The first 10 numbers of Z map to the pairs (0.01, 0.01), (0.01, 0.02), (0.01, 0.03), ..., (0.01, 0.10). There is nothing at all special about the order in which the (U_1, U_2) pairs are input. In the end, as long as the "grid" of numbers implied by the loop is fed into the formula, we'll have our Z. We do
not say U_1 and U_2 are "independent". That's gremlin talk. We speak of Z in purely causal terms. If you like, try this:
plot(z)
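(Editorial sketch: the same mechanical grid ports to Python in a few lines. Names are mine, and this mirrors, rather than reproduces, the R code above; shrinking ep refines the grid.)

```python
import math

def grid_estimate(ep):
    # regular grid of (u1, u2) pairs, each mapped through Box-Muller
    us = [ep * k for k in range(1, round(1 / ep))]  # ep, 2*ep, ..., 1-ep
    z = [math.sqrt(-2 * math.log(u1)) * math.cos(2 * math.pi * u2)
         for u1 in us for u2 in us]
    return sum(1 for v in z if v < -1) / len(z)

print(grid_estimate(0.01))   # the 99 x 99 grid, roughly 0.16
print(grid_estimate(0.001))  # finer grid, closer to the analytic 0.1586...
```

Run it twice, or two hundred times: identical answers, because nothing but arithmetic happens.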
We have not "drawn" from any distribution here, neither uniform nor normal. All that has happened is some perfectly simple math. And there is nothing "random". Everything is determined, as shown. The mechanical approximation is got the same way:
sum(z < -1)/length(z) # the denominator counts the size of z
which gives 0.1608677, which is a tad high. Try lowering ep, which is to say, try increasing the step resolution, and see what that does. It is important to recognize that the mechanical method will always give the same answer (with the same inputs) regardless of how many times we compute it, whereas the MCMC method above gives different numbers each run. Why?

Gremlins slain
Here is the gremlin R code, which first "draws" from "uniforms", and then applies the transformation. The ".s" are to indicate simulation.
n = 10000
u1.s = runif(n)
u2.s = runif(n)
z.s = sqrt(-2*log(u1.s))*cos(2*pi*u2.s)
sum(z.s < -1)/n
The first time I ran this, I got 0.1623, which is worse than the mechanical answer, but the second time I got 0.1589, which is good. Even in the gremlin approach, though, there is no "draw" from a normal. Our Z is still absolutely
determined from the values of (u1.s, u2.s). That is, even in the gremlin approach, there is at least one mechanical process: calculating Z. So what can we say about (u1.s, u2.s)?
Here is where it gets interesting. Here is a plot of the empirical cumulative distribution of U_1 values from the mechanical procedure, overlaid with the ECDF of u1.s in red. It should be obvious the plots for U_2 and u2.s will be similar (but try!). Generate this yourself with the following code:
plot(ecdf(u1), xlab="U_1 values", ylab="Probability of U_1 < value", xlim=c(0,1), pch='.')
lines(ecdf(u1.s), col=2)
abline(0,1,lty=2)
The values of U_1 are a rough step function; after all, there are only 99 values, while u1.s is of length n = 10000.
Do you see it yet? The gremlins have almost disappeared! If you don't see it---and do try and figure it out before reading further---try this code:
sort(u1.s)[1:20]
This gives the first 20 values of the "random" u1.s sorted from low to high. The values of U_1 were 0.01, 0.02, ..., already sorted from low to high.
Do you see it yet? All u1.s is is a series of numbers on the interval from 1e-6 to 1 - 1e-6. And the same for u2.s. (The 1e-6 is R's native display resolution for this problem; this can be adjusted.)
And the same for U_1 and U_2, except the interval is a mite shorter! What we have are nothing but ordinary sequences of numbers from (roughly) 0 to 1! Do you have it?
The answer is: The gremlin procedure is identical to the mechanical!
Everything in the MCMC method was just as fixed and determined as the other mechanical method. There was nothing random, there were no draws. Everything was simple calculation, relying on an
analytic formula somebody found that mapped two straight lines to one crooked one. But the MCMC method hides what's under the hood. Look at this plot (with the plot screen maximized; again, this is for transparency not efficiency):
plot(u1.s, u2.s, col=2, xlab='U_1 values', ylab='U_2 values')

u1.v = NA; u2.v = NA
k = 0
for (i in u1){
  for (j in u2){
    k = k + 1
    u1.v[k] = i
    u2.v[k] = j
  }
}
points(u1.v, u2.v, pch=20) # these are (U_1, U_2) as one long vector of each
The black dots are the (U_1, U_2) pairs and the red the (u1.s, u2.s) pairs fed into the Z calculation. The mechanical is a regular grid and the MCMC-mechanical is also a (rougher) grid. So it's no wonder they give the same (or similar) answers: they are doing the same things.
The key is that u1.s and u2.s were themselves produced by a purely mechanical process as well. R uses a formula no different in spirit from the one for Z above, which, if fed the same numbers, always produces the same output (stick in a known seed W which determines u1.s, etc.). The formula is called a "pseudorandom number generator", where by "pseudorandom" they mean not random; purely mechanical. Everybody knows this, and everybody knows this, too: there is no point at which "randomness" or "draws" ever comes into the picture. There are no gremlins anywhere.
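(Editorial illustration: a classic linear congruential generator, with the well-known Numerical Recipes constants, is nothing but modular arithmetic. This is a toy in Python, not R's actual generator, which is the more sophisticated, but equally mechanical, Mersenne Twister.)

```python
def lcg_uniforms(seed, n, a=1664525, c=1013904223, m=2**32):
    # pure mechanism: x_{k+1} = (a*x_k + c) mod m, scaled to [0, 1)
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x / m)
    return out

# rerun with seed 42: identical "random" numbers, every time
print(lcg_uniforms(42, 3))
```

Feed it the same seed W and the whole sequence of "uniforms" is determined with certainty, exactly as Z was determined by (U_1, U_2).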
Now I do not and in no way claim that this grunt-mechanical, rigorous-grid approach is the way to handle all problems or that it is the most efficient. And I do not say the MCMC car doesn't get us where we are going. I am saying, and it is true, there are no gremlins. Everything is a determinate, mechanical process.
So what does that mean? I'm glad you asked. Let's let the late-great ET Jaynes give the answer. "It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought."
We can believe in gremlins if we like, but we can do better if we understand how the engine really works.
There's lots more details, like the error of approximation and so forth, which I'll leave to
Uncertainty (which does not have any code).

Bonus code
The value of -1 was nothing special. We can see the mechanical and MCMC procedures produce normal distributions which match almost everywhere. To see that, try this code:
plot(ecdf(z), xlab="Possible values of Z", ylab="Probability of Z < value", main="A standard normal")
s = seq(-4,4,by=ep)
lines(s, pnorm(s), lty=2, col=2)
lines(ecdf(z.s), lty=3, col=3)
This is the (e)cdf of the distributions: mechanical Z (black solid), gremlin (green dotted), analytic (red dashed). The step in the middle is from the crude step in the mechanical. Play with the limits of the axis to "blow up" certain sections of the picture, like this:
plot(ecdf(z), xlab="Possible values of Z", ylab="Probability of Z < value", main="A standard normal", xlim=c(-1,1))
s = seq(-4,4,by=ep)
lines(s, pnorm(s), lty=2, col=2)
lines(ecdf(z.s), lty=3, col=3)
Try xlim=c(-4,-3) too.
Homework
Find the values of U_1 and U_2 that correspond to Z = -1. Using the modern language, what can you say about these values in relation to the (conditional!) probability Z < -1? Think about the probabilities of the Us.
What other simple transforms can you find that correspond to other common distributions? Try out your own code for these transforms. |
@user193319 I believe the natural extension to multigraphs is just ensuring that $\#(u,v) = \#(\sigma(u),\sigma(v))$ where $\# : V \times V \rightarrow \mathbb{N}$ counts the number of edges between $u$ and $v$ (which would be zero).
I have this exercise: Consider the ring $R$ of polynomials in $n$ variables with integer coefficients. Prove that the polynomial $f(x_1, x_2, \ldots, x_n) = x_1 x_2 \cdots x_n$ has $2^{n+1} - 2$ non-constant polynomials in $R$ dividing it.
But, for $n=2$, I can't find any non-constant divisors of $f(x,y)=xy$ other than $x$, $y$, and $xy$.
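(Editorial aside: the count $2^{n+1}-2$ suggests including unit multiples, since each monomial divisor $\prod_{i \in S} x_i$, for nonempty $S$, has an associate of opposite sign. A brute-force sketch in Python; the helper name is mine, not from the exercise.)

```python
from itertools import combinations

def nonconstant_divisors(n):
    # divisors of x_1*...*x_n in Z[x_1,...,x_n]: a unit (+1 or -1)
    # times the product of a nonempty subset of the variables
    vars_ = [f"x{i}" for i in range(1, n + 1)]
    divs = []
    for r in range(1, n + 1):
        for subset in combinations(vars_, r):
            for sign in ("+", "-"):
                divs.append(sign + "*".join(subset))
    return divs

print(len(nonconstant_divisors(2)))  # 6 = 2^3 - 2: x, y, xy and their negatives
print(len(nonconstant_divisors(3)))  # 14 = 2^4 - 2
```

For $n=2$ this lists $x, y, xy, -x, -y, -xy$: the three the question found plus their negatives.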
I am presently working through Example 1.21 in Hatcher's book on wedge sums of topological spaces. He makes a few claims which I am having trouble verifying. First, let me set up some notation. Let $\{X_i\}_{i \in I}$ be a collection of topological spaces. Then $\amalg_{i \in I} X_i := \cup_{i ...
Each of the six faces of a die is marked with an integer, not necessarily positive. The die is rolled 1000 times. Show that there is a time interval such that the product of all rolls in this interval is a cube of an integer. (For example, it could happen that the product of all outcomes between 5th and 20th throws is a cube; obviously, the interval has to include at least one throw!)
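(Editorial aside: a sketch of the standard pigeonhole argument, runnable on random rolls in Python; the function name is mine. Track, for each prefix of the roll sequence, the count of each face mod 3: there are only $3^6 = 729$ possible count vectors but 1001 prefixes, so two prefixes collide, and between them every face occurs a multiple-of-3 number of times, making the product of the face values a perfect cube.)

```python
import random

def cube_interval(rolls):
    # prefix vectors: count of each of the 6 faces so far, each mod 3
    seen = {(0,) * 6: 0}              # empty prefix at position 0
    counts = [0] * 6
    for i, face in enumerate(rolls, start=1):
        counts[face] = (counts[face] + 1) % 3
        key = tuple(counts)
        if key in seen:               # pigeonhole: 1001 prefixes, 729 vectors
            return seen[key] + 1, i   # throws lo..hi, 1-based inclusive
        seen[key] = i
    return None                       # unreachable for 1000 throws

random.seed(0)
rolls = [random.randrange(6) for _ in range(1000)]
lo, hi = cube_interval(rolls)
# every face occurs a multiple-of-3 times in throws lo..hi, so the
# product of the marked integers over that interval is a cube
print(lo, hi)
```

Note the argument never uses the actual integers on the faces, only which face came up; that is why the markings may be arbitrary, even negative.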
On Monday, I ask for an update and get told they’re working on it. On Tuesday I get an updated itinerary!... which is exactly the same as the old one. I tell them as much and am told they’ll review my case
@Adam no. Quite frankly, I never read the title. The title should not contain additional information to the question.
Moreover, the title is vague and doesn't clearly ask a question.
And even more so, your insistence that your question is blameless with regards to the reports indicates more than ever that your question probably should be closed.
If all it takes is adding a simple "My question is that I want to find a counterexample to _______" to your question body and you refuse to do this, even after someone takes the time to give you that advice, then ya, I'd vote to close myself.
but if a title inherently states what the OP is looking for, I hardly see the fact that it has been explicitly restated as a reason for it to be closed. No, it was because I originally had a lot of errors in the expressions when I typed them out in LaTeX, but I fixed them almost straight away
lol I registered for a forum on Australian politics and it just hasn't sent me a confirmation email at all how bizarre
I have another problem: Train A leaves at noon from San Francisco and heads for Chicago going 40 mph. Two hours later Train B leaves the same station, also for Chicago, traveling 60 mph. How long until Train B overtakes Train A?
@swagbutton8 as a check, suppose the answer were 6pm. Then train A will have travelled at 40 mph for 6 hours, giving 240 miles. Similarly, train B will have travelled at 60 mph for only four hours, giving 240 miles. So that checks out
By contrast, the answer key result of 4pm would mean that train A has gone for four hours (so 160 miles) and train B for 2 hours (so 120 miles). Hence A is still ahead of B at that point
So yeah, at first glance I’d say the answer key is wrong. The only way I could see it being correct is if they’re including the change of time zones, which I’d find pretty annoying
But 240 miles seems waaay too short to cross two time zones
So my inclination is to say the answer key is nonsense
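(Editorial aside: the chat's arithmetic checks out in two lines of throwaway Python. Set the positions equal and solve $40t = 60(t-2)$.)

```python
# Train A: leaves noon at 40 mph -> position 40*t (t hours after noon)
# Train B: leaves 2pm at 60 mph  -> position 60*(t - 2)
# Equal positions: 40*t = 60*(t - 2)  =>  20*t = 120  =>  t = 6, i.e. 6pm
t = 120 / 20
assert 40 * t == 60 * (t - 2) == 240  # both trains 240 miles out
print(f"B overtakes A {t:.0f} hours after noon, at mile {40 * t:.0f}")
```

So 6pm, 240 miles, exactly as the check above concluded; the answer key's 4pm has A still 40 miles ahead.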
You can actually show this using only that the derivative of a function is zero if and only if it is constant, the exponential function differentiates to almost itself, and some ingenuity. Suppose that the equation starts in the equivalent form$$ y'' - (r_1+r_2)y' + r_1r_2 y =0. \tag{1} $$(Obvi...
Hi there,
I'm currently going through a proof of why all general solutions to second ODE look the way they look. I have a question mark regarding the linked answer.
Where does the term e^{(r_1-r_2)x} come from?
It seems like it is taken out of the blue, but it yields the desired result. |
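(Editorial aside: a sketch of where that factor can come from; this may not be the linked answer's exact route, but it is the standard operator-factoring argument.)

```latex
% Write (1) as (D - r_1)(D - r_2)\,y = 0 and set u := y' - r_2 y. Then:
\begin{aligned}
u' - r_1 u &= 0 \;\Longrightarrow\; u = C_1 e^{r_1 x},\\
y' - r_2 y &= C_1 e^{r_1 x}
  \;\Longrightarrow\;
  \bigl(y\,e^{-r_2 x}\bigr)' = C_1\, e^{(r_1 - r_2) x}.
\end{aligned}
```

So $e^{(r_1-r_2)x}$ is not out of the blue: it is the right-hand side $e^{r_1 x}$ multiplied by the integrating factor $e^{-r_2 x}$. Integrating once more (for $r_1 \ne r_2$) gives $y = \frac{C_1}{r_1 - r_2} e^{r_1 x} + C_2 e^{r_2 x}$, the familiar general solution.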
Let $\mathcal A = (X, Q, \delta, q_0, F)$ be a deterministic finite automaton with the following acceptance condition on infinite words:
The automaton accepts $\xi \in X^{\omega}$ with respect to $F$ iff $$ \forall i : \delta(q_0, \xi[0...i]) \in F, $$ meaning that every prefix $\xi[0...i]$ of $\xi$ ends in an accepting state. By $L'(\mathcal A)$ I denote the set of all accepted infinite words.
Now such automata have a special form: a word is accepted iff it only ever passes through the allowed states in $F$, and it is rejected as soon as it enters a state from $Q\setminus F$. This means, by the way, that if $\mathcal A$ is reduced and complete, it has exactly one non-accepting trapping state $s$, because every word which enters $s$ must stay there: the word never gets accepted, regardless of what comes after $s$ was entered.
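(Editorial aside: the trap-state behaviour is easy to machine-check on finite prefixes. Below is a toy two-state example in Python; the automaton and function name are invented for illustration. A word's prefixes are all accepting iff the run never leaves $F$.)

```python
# toy safety automaton over X = {a, b}: F = {0}, state 1 is the trap
delta = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 1, (1, 'b'): 1}
F = {0}

def all_prefixes_accepting(word, q0=0):
    # True iff delta(q0, word[0..i]) is in F for every prefix word[0..i]
    q = q0
    for ch in word:
        q = delta[(q, ch)]
        if q not in F:
            return False  # trap entered: no extension is ever accepted
    return True

print(all_prefixes_accepting("aaaa"))  # stays in F
print(all_prefixes_accepting("aab"))   # falls into the trap at 'b'
```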
Now I define $F_n(\xi) := \{ w \in X^* : w \in \mbox{infix}(\xi) \cap X^n \}$, the set of all factors (or infixes) of $\xi$ of length $n$. Then $$K_n(\xi) := \xi[0...n] \cdot X^{\omega} \cap \{ \eta \in X^{\omega} : F_n(\eta) = F_n(\xi) \}$$ is the set of all infinite words which share with $\xi$ a common prefix of length $n$ and which have the same set of factors of length $n$.
Now I conjecture: if $\xi$ is accepted by $\mathcal A$ according to the above acceptance condition, then there exists an $n > 0$ such that $$ K_n(\xi) \subseteq L'(\mathcal A), $$ meaning that if $\xi$ is accepted, there exists an $n > 0$ such that every word which shares with $\xi$ a common prefix of length $n$ and all factors of length $n$ (and up to $n$) is also accepted.
Intuitively I guess this is right, because if $\xi$ is accepted, then, since $\mathcal A$ is a finite automaton, $\xi$ eventually ends in some cycle as it runs through the states, which I guess bounds the possible factors, and the prefix condition could be used to ensure that another word ends up in the same cycle.
If $\xi = uv^{\omega}$ for finite $u, v$, i.e. if $\xi$ is ultimately periodic, I guess a construction would be to determine the smallest $k$ such that $F_k(\xi) = F_{k+1}(\xi)$, so that the number of factors of any length $k' > k$ equals $|F_k(\xi)|$, and set $n := k + 1$. Then I conjecture $$ K_{n}(\xi) \subseteq L'(\mathcal A). $$ (I have no proof.) Maybe this works also for non-ultimately-periodic words, but I could not prove it. So, any ideas? Or maybe an idea how to prove my conjecture?