Solve the equation:
You can solve the above assuming that (I'll provide a proof below):
$\arcsin(x)+\arccos(x)=\dfrac{\pi}{2}$
Starting from this, we can equate it with the right-hand side of the original equation, $2\arctan(x)$.
Obtaining:
$\arcsin(x)+\arccos(x)=\dfrac{\pi}{2}=2 \arctan(x)$
We deduce:
$arctan(x) =\dfrac{\pi}{4}$
Solving,
$\tan(\arctan(x))=\tan(\dfrac{\pi}{4})$, then $x=1.$
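As a quick check, substituting $x=1$ back in: $\arcsin(1)+\arccos(1)=\dfrac{\pi}{2}+0=\dfrac{\pi}{2}$ and $2\arctan(1)=2\cdot\dfrac{\pi}{4}=\dfrac{\pi}{2}$, so both sides agree.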
Proof of:
$\arcsin(x)+\arccos(x)=\dfrac{\pi}{2}$
Considering $u= \arcsin(x)$ and $v=\arccos(x)$, we have that $u \in [-\dfrac{\pi}{2}, \dfrac{\pi}{2}]$ and $v \in [0, \pi]$; moreover:
$x = \sin(u)$ and $x= \cos(v)$
so that:
$\sin(u)=\cos(v)$ (1)
Given that $u \in [-\dfrac{\pi}{2}, \dfrac{\pi}{2}]$, $v \in [0, \pi]$ and $\cos(v)=\sin(\dfrac{\pi}{2}-v)$ with $\dfrac{\pi}{2}-v \in [-\dfrac{\pi}{2}, \dfrac{\pi}{2}]$, and since $\sin$ is injective on $[-\dfrac{\pi}{2}, \dfrac{\pi}{2}]$, we conclude from (1) that $u=\dfrac{\pi}{2}-v$, i.e., $u+v=\dfrac{\pi}{2}.$
|
A function $f$ is Riemann integrable on the interval $[a, b]$ if the following condition holds: for every $\epsilon > 0$ there is a partition $P = \{ x_{0}, ..., x_{n} \}$ of $[a, b]$ such that $U(P, f) - L(P, f) < \epsilon$.
Here, $U(P, f) = \sum_{i=1}^{n} M_{i} \Delta x_{i}$, for $M_{i} = \sup \{ f(x) : x \in [x_{i-1}, x_{i}] \}$. The $\sup$ is the supremum, or least upper bound.
Similarly, $L(P, f) = \sum_{i=1}^{n} m_{i} \Delta x_{i}$, for $m_{i} = \inf \{ f(x) : x \in [x_{i-1}, x_{i}] \}$. The $\inf$ is the infimum, or greatest lower bound.
Conceptually, we are just taking Riemann sums. $U(P, f)$ is a Riemann sum where, on each subinterval, we take the largest (supremum) value. Similarly, $L(P, f)$ is a Riemann sum where we take the smallest (infimum) value on each subinterval. So essentially, if we can control how much these two Riemann sums differ, we can integrate $f$.
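To make the definitions concrete, here is a minimal Python sketch (my own illustration; the function $f(x)=x^2$ on $[0,1]$ and the uniform partitions are arbitrary choices) that computes $U(P,f)$ and $L(P,f)$, estimating the sup and inf on each subinterval by sampling:

import numpy as np

def upper_lower_sums(f, a, b, n):
    """Approximate U(P, f) and L(P, f) for a uniform partition with n subintervals.
    The sup/inf on each subinterval are estimated by sampling, which is exact
    for monotone f and good enough for an illustration."""
    x = np.linspace(a, b, n + 1)
    dx = np.diff(x)
    samples = [f(np.linspace(x[i], x[i + 1], 50)) for i in range(n)]
    U = sum(s.max() * dx[i] for i, s in enumerate(samples))
    L = sum(s.min() * dx[i] for i, s in enumerate(samples))
    return U, L

f = lambda x: x**2
for n in (10, 100, 1000):
    U, L = upper_lower_sums(f, 0.0, 1.0, n)
    print(n, U, L, U - L)   # U - L shrinks as the partition is refined

The point of the printout is that $U - L$ can be made as small as we like by refining the partition, which is exactly the integrability criterion above.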
As for conceptualizing this, I'd think about Riemann integrals in this way: you add up the rate and you get a change. That's really what Calculus is about.
|
First I need some notation (it's all standard I think). For a manifold $M$, let $F_nM = F_{0,n}M$ be the space of $n$-tuples of distinct points on $M$ ; let $B_nM = B_{0,n}M = F_nM / \Sigma_n$. When $M= \mathbb{R}^2$ the fundamental group of $B_nM$ is the braid group $B_n$, and that of $F_nM$ is the pure braid group $P_n$.
Further, $F_{m,n} M$ is defined to be $F_n N$ where $N$ is $M$ with $m$ points removed. Likewise for $B_{m,n} M$.
Now this question is about $\pi_1 F_n S^2$. The fundamental group of $B_n S^2$ (the "braid group of the sphere") is usually presented as a quotient of $B_n$, adding just one relation to Artin's usual ones; so the fundamental group of $F_n S^2$ is a quotient of $P_n$.
However, a classical result asserts that there is a fibration
$$ F_{m+r, n-r}M \to F_{m,n}M \to F_{m,r}M$$
for all $m,n$ and $r\le n$. Taking $M=S^2$, $m=0$ and $r=1$ this becomes
$$ F_{n-1} \mathbb{R}^2 \to F_n S^2 \to S^2 $$
And the long exact sequence of homotopy groups gives in particular
$$ \mathbb{Z} \to P_{n-1} \to \pi_1 F_n S^2 \to 0$$
using that $\pi_1(S^2) = 0$ and $\pi_2(S^2) = \mathbb{Z}$.
This gives a presentation of $\pi_1 F_n S^2$ as a quotient of $P_{n-1}$ rather than $P_n$, adding just one relator (with a very simple proof indeed). The group $P_n$ can be generated by $n(n-1)/2$ generators and no fewer, and so $P_{n-1}$ can be generated by $(n-1)(n-2)/2$ generators, giving a much smaller set of generators for $\pi_1 F_n S^2$.
So my questions are: (EDITED)
(1) Did I get something wrong in the above argument?
(2) Does someone know what the image of $\mathbb{Z}$ in $P_{n-1}$ is?
(3) Has this presentation been studied algebraically? Is it easier to work with the group $\pi_1 F_n S^2$ presented as a quotient of $P_{n-1}$ than with the presentation as a quotient of $P_n$ ? Any reference to a work in this direction?
The answer by Ryan Budney below covers (1) and (2), I think. Any help with (3) appreciated.
Thanks !
Pierre
|
I am doing the following exercise:
Consider inviscid, incompressible, steady flow. The Kutta-Joukowski theorem $$L' = \rho_\infty V_\infty\Gamma$$ where
$L'$ is the lift per unit span, $\rho_\infty$ is the freestream density, $V_\infty$ is the freestream velocity, and $\Gamma$ is the circulation taken around the body,
was derived exactly for the case of the lifting cylinder. Equation $L' = \rho_\infty V_\infty\Gamma$ also applies in general to a $2$-dimensional body of arbitrary shape. Although this general result can be proven mathematically, it also can be accepted by making a physical argument as well. Make this physical argument by drawing a closed curve around the body where the closed curve is very far away from the body, so far away that in perspective the body becomes a very small speck in the middle of the domain enclosed by the closed curve.
And here is the solution from the author:
The flow over the airfoil can be synthesized by a proper distribution of singularities, i.e., point sources or vortices. The strengths of the vortices, added together, give the total circulation, $\Gamma$, around the airfoil. This value of $\Gamma$ is the same along all closed curves around the airfoil. In this case, the airfoil becomes a speck on the page, and the distributed point vortices appear as one stronger point vortex with strength $\Gamma$. This is exactly equivalent to the single point vortex for the circular-cylinder case, and
the lift on the airfoil where the circulation is taken as the total $\Gamma$ is the same as for a circular cylinder, namely, equation $L' = \rho_\infty V_\infty\Gamma$.
Author's figure:
I understand the author's solution until the part that said:
the lift on the airfoil where the circulation is taken as the total $\Gamma$ is the same as for a circular cylinder.
The equation $L' = \rho_\infty V_\infty\Gamma$ is derived from the circular-cylinder case, for which we have to integrate the pressure distribution over the cylinder surface. But I have not understood why the lift on the airfoil is the same as for a circular cylinder, as the author said.
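One way I can make the far-field picture concrete for myself is a small numerical check (my own sketch, not part of the book's argument, with an arbitrary value of $\Gamma$): for an ideal point vortex, the line integral of velocity around a circle of any radius returns the same $\Gamma$, so a distant contour only "sees" the total circulation, no matter what body produced it.

import numpy as np

Gamma = 2.0  # assumed circulation strength (illustrative)

def velocity(x, y):
    """2D point-vortex velocity field at (x, y), vortex at the origin."""
    r2 = x**2 + y**2
    return -Gamma * y / (2 * np.pi * r2), Gamma * x / (2 * np.pi * r2)

def circulation(radius, n=2000):
    """Numerically evaluate the closed line integral of v . dl on a circle."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    x, y = radius * np.cos(theta), radius * np.sin(theta)
    u, v = velocity(x, y)
    dx = -radius * np.sin(theta) * (2 * np.pi / n)
    dy = radius * np.cos(theta) * (2 * np.pi / n)
    return np.sum(u * dx + v * dy)

for R in (1.0, 10.0, 1000.0):
    print(R, circulation(R))   # ~Gamma for every radius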
|
I was discussing L'Hôpital's Rule with a Calculus I student earlier today. I mentioned that if the limit obtained by differentiating the numerator and denominator doesn't exist, then L'Hôpital's Rule tells us nothing about the original limit.
A clear example of this is, $$\lim\limits_{ x \to \infty }{ \frac { x+\sin { x } }{ x } } =1.$$ However, L'Hôpital's Rule gives $$\lim\limits_{ x\rightarrow \infty }{ \frac { x+\sin { x } }{ x } } =\lim \limits_{ x\rightarrow \infty }{ \frac { 1+\cos { x } }{ 1 } } =\lim\limits_{ x\rightarrow \infty }{ \left( 1+\cos { x } \right) }, $$ which diverges by oscillation.
I couldn't come up with an example that shows that if the limit from L'Hôpital is infinite, then the original limit may be finite. This raises two questions:
Is it true that an infinite result from L'Hôpital's Rule does not imply an infinite limit? Is there a simple example where the LH is infinite, but the limit is actually finite?
|
I'm trying to calculate alpha using the CAPM, and I have data on everything necessary.
$$R_t-R_f={\alpha}+{\beta}\times(R_m-R_f)$$
i.e.
$${\alpha}=R_t-R_f-{\beta}\times(R_m-R_f)$$
In more detail, I have monthly data on returns, market returns and the risk-free rate. Now let's say I'm interested in how a fund has performed over 12 months; which one of the following two methods is correct?
Alpha = monthly returns - monthly risk free rate - beta(monthly market returns - monthly risk free rate)
This will yield a monthly alpha; the yearly alpha per fund is then calculated by the formula $(1+{\alpha}_1)(1+{\alpha}_2)\cdots-1$, and beta here is calculated by Covariance(Monthly fund return, Monthly market return)/Variance(Monthly market return).
Or do I
first convert the returns to yearly and then calculate Alpha? Alpha = Yearly returns - Yearly risk free - beta ( yearly market return - yearly risk free rate)
and beta here is calculated by Covariance(Yearly fund return, Yearly market return)/Variance(Yearly market return)
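To make the first (monthly) method concrete, here is a minimal numpy sketch; the return series below are made-up placeholders, and whether to compound the monthly alphas or annualize first is exactly the judgment call I am asking about:

import numpy as np

# Hypothetical monthly data for 12 months (placeholders, not real fund data)
r_fund = np.array([0.02, -0.01, 0.015, 0.03, 0.0, 0.01, -0.02, 0.025, 0.01, 0.005, -0.005, 0.02])
r_mkt  = np.array([0.015, -0.005, 0.01, 0.025, 0.002, 0.008, -0.015, 0.02, 0.012, 0.004, -0.002, 0.018])
r_f    = np.full(12, 0.001)   # monthly risk-free rate

# Beta from monthly returns: Cov(fund, market) / Var(market)
beta = np.cov(r_fund, r_mkt)[0, 1] / np.var(r_mkt, ddof=1)

# Monthly alphas, then compounded to a yearly figure: (1+a1)(1+a2)...-1
alpha_monthly = (r_fund - r_f) - beta * (r_mkt - r_f)
alpha_yearly = np.prod(1 + alpha_monthly) - 1

print(beta, alpha_yearly)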
|
Higgins' downloadable book Categories and groupoids has quite a lot on computing colimits of groupoids. The point is that the groupoid van Kampen theorem has the probably optimal theorem of this type in
R. Brown and A. Razak, A van Kampen theorem for unions of non-connected spaces, Archiv. Math. 42 (1984) 85-88. pdf
This involves the fundamental groupoid $\pi_1(X,A)$ on a
set of base points, and for this and a general open cover, one needs that $A$ meets each path-component of each $1$-, $2$-, $3$-fold intersection of sets of the cover. This answers a particular point in the question. The case $A=X$ is the theorem as stated in the question, and the special case when $A$ is a singleton is in most texts. The reduction to $3$-fold intersections essentially relies on the idea of Lebesgue covering dimension.
This result translates the problem from topology into algebra; a particular fundamental group, if one wants it, is kind of hidden in the middle of this colimit of groupoids. One then has to do various combinatorial things such as choosing trees in components of graphs, to find the fundamental group. These methods are directly related to methods of use in combinatorial group theory, so one should think of them, including the notion of
covering morphism of groupoids used in Higgins' book, as a form of combinatorial groupoid theory. HNN extensions of groups can also be seen as pushouts of groupoids.
One of the useful tools in Higgins' book is the following: given a groupoid $G$ with object set $X$ and a function $f:X \to Y$ construct a groupoid $U_f(G)$ with object set $Y$. This construction yields free groups, and free products of groups, as special cases. In Chapter 9 of Topology and Groupoids this construction is related to making identifications on a discrete subset of a topological space. For example, one might want to form the circle $S^1$ by identifying $0,1$ in the unit interval $[0,1]$.
These ideas usefully generalise to higher dimensions, via Higher Homotopy Seifert-van Kampen Theorems: see for example Part I of Nonabelian Algebraic Topology for results on second relative homotopy groups. There is more to be said ...
Later: I realise I did not answer the question as to the purpose of the generalisation. As suggested, the immediate purpose was to have a theorem which yielded the fundamental group of the circle, which is, after all, THE basic example in algebraic topology. It also gave easily additional examples: for example, let $X$ be the space formed by identifying all corresponding points of two copies of the interval $[-1,1]$ except for the point $0$. Thus $X$ is a non Hausdorff space. From the groupoid version with $A$ consisting of the two points $\pm 1/2$, we obtain that the fundamental group of $X$ is the integers.
Grothendieck in Section 2 of his 1984 "Esquisse d'un Programme" emphasises that choosing a single base point will often destroy any symmetry in the situation. Consider the following union of five open sets:
One is in a "Goldilocks" situation. Choosing all points as base points is too large for comfort. Choosing one base point is too small. But choosing eight base points is just about right!
Situations like this arise in combinatorial group theory. In general, one chooses the set of base points according to the geometry of the given situation.
The proof given in the paper referred to is by verification of the universal property, and does not require knowledge that the category of groupoids admits colimits, nor any specific method of constructing them.
For me, this work led to the impression that ALL of $1$-dimensional homotopy theory was better expressed in terms of groupoids rather than groups.
The proof also has the advantage of generalising to higher dimensions, once one has the appropriate higher dimensional homotopical gadgets. (It did take me 9 years to find, in conversation with Philip Higgins, the right $2$-dimensional gadget.)
July 28, 2015: For further discussion on this area, see this mathoverflow discussion.
|
I would like to solve a system of second-order differential equations to describe the dynamics of a system of particles. Two Newton-like forces are responsible for the motion of each particle $i$: A force acting on each particle due to other particles $f_{i,j}$ and a stochastic term of noise $f_{\mathrm{noise}}$.
The force acting on each particle due to other particles $f_{i,j}$ depends on the current position $s_i$ and velocity $v_i$ of particle $i$ and the position $s_j$ and velocity $v_j$ of the other particles $j$ of the system.
$$F_i= f_{i,j}(v_i,v_j,s_i,s_j) + f_{\mathrm{noise}}$$
Each of the terms above has two components in 2D. Under an Euler scheme, the velocity and position of each particle would be updated as follows: $$ \begin{alignat}{1} v_i &← v_{i} + \frac{F_i}{m} \Delta t \\ s_i &← s_i + v_i \Delta t \end{alignat} $$
where $m$ is the mass of particle $i$ and $\Delta t$ is the integration step. However, I would like to use the Milstein’s algorithm for the velocity update (since we have a term of noise) and the fourth-order Runge-Kutta method to update the position $s_i$. I am confused due to having $f_{i,j}$ dependent on $s_i$, $s_j$, $v_i$ and $v_j$. How should I operate?
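For what it's worth, here is a minimal sketch of how I currently organize one coupled step (the pairwise force law, the noise strength and all parameters are placeholders, not my real model); note that if the noise amplitude does not depend on the state, the Milstein correction term vanishes and the velocity update reduces to Euler–Maruyama:

import numpy as np

N, dim = 10, 2
dt = 1e-3
m = 1.0
noise_sigma = 0.1           # amplitude of the stochastic force (placeholder)
rng = np.random.default_rng(0)

s = rng.uniform(0, 1, (N, dim))   # positions
v = np.zeros((N, dim))            # velocities

def deterministic_force(s, v):
    """Sum of pairwise forces f_ij(v_i, v_j, s_i, s_j); here a placeholder
    soft repulsion plus a weak velocity-coupling term."""
    F = np.zeros_like(s)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            dr = s[i] - s[j]
            r2 = np.dot(dr, dr) + 1e-6
            F[i] += dr / r2**1.5          # repulsive part (illustrative)
            F[i] += 0.01 * (v[j] - v[i])  # velocity coupling (illustrative)
    return F

def step(s, v):
    """One step: deterministic force plus additive noise on the velocity,
    then a simple position update with the new velocity."""
    F = deterministic_force(s, v)
    dW = rng.normal(0.0, np.sqrt(dt), size=v.shape)   # Wiener increments
    v_new = v + (F / m) * dt + (noise_sigma / m) * dW
    s_new = s + v_new * dt
    return s_new, v_new

for _ in range(100):
    s, v = step(s, v)
print(s.mean(axis=0), v.mean(axis=0))

My question is essentially whether, and how, the deterministic part of such a step can be upgraded (e.g. to RK4 for the position) when the forces depend on all positions and velocities at once.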
|
Capillary waves
Thermal capillary waves
Thermal capillary waves are oscillations of an interface which are thermal in origin. These take place at the molecular level, where only the surface tension contribution is relevant.
Capillary wave theory (CWT) is a classic account of how thermal fluctuations distort an interface (Ref. 1). It starts from some intrinsic surface that is distorted. By performing a Fourier analysis treatment, normal modes are easily found. Each contributes an energy proportional to the square of its amplitude; therefore, according to classical statistical mechanics, equipartition holds, and the mean energy of each mode will be $k_B T/2$. Surprisingly, this result leads to a divergent surface (the width of the interface is bound to diverge with its area) (Ref. 2). This divergence is nevertheless very mild: even for displacements on the order of meters the deviation of the surface is comparable to the size of the molecules. Moreover, the introduction of an external field removes the divergence: the action of gravity is sufficient to keep the width fluctuation on the order of one molecular diameter for areas larger than about 1 mm² (Ref. 2).
Recently, a procedure has been proposed to obtain a molecular intrinsic surface from simulation data (Ref. 3). The density profiles obtained from this surface are, in general, quite different from the usual mean density profiles.
Gravity-capillary waves
These are ordinary waves excited in an interface, such as ripples on a water surface. Their dispersion relation reads, for waves on the interface between two fluids of infinite depth:
$$\omega^2 = \frac{(\rho - \rho')\, g\, |q| + \sigma |q|^3}{\rho + \rho'},$$
where $\omega$ is the angular frequency, $q$ the wave number, $\rho$ and $\rho'$ the densities of the lower and upper fluids, $g$ the acceleration of gravity and $\sigma$ the surface tension.
Derivation
This is a sketch of the derivation of the general dispersion relation, see Ref. 4 for a more detailed description.
Defining the problem
Three contributions to the energy are involved: the surface tension, gravity, and hydrodynamics. The part due to gravity is the simplest: integrating the potential energy density due to gravity, $\rho g z$, from a reference height to the position of the surface, $z=\eta(x,y)$, gives a contribution $\frac{\rho g}{2}\int dx\, dy\, \eta^2$.
(For simplicity, we are neglecting the density of the fluid above, which is often acceptable.)
An increase in area of the surface causes a proportional increase of energy:
$$E_{\mathrm{st}} = \sigma \int dx\, dy\, \sqrt{1 + (\nabla \eta)^2} \approx \sigma \int dx\, dy\, \left[ 1 + \tfrac{1}{2} (\nabla \eta)^2 \right],$$
where the first equality is the area in this (Monge) representation, and the second applies for small values of the derivatives (surfaces not too rough).
The last contribution involves the kinetic energy of the fluid:
$$E_{\mathrm{kin}} = \frac{\rho}{2} \int dV\, v^2,$$
where $v$ is the modulus of the velocity field $\mathbf{v}$.
Wave solutions
Let us try separation of variables:
$$\eta(\mathbf{r}, t) = \eta(t)\, e^{i \mathbf{q} \cdot \mathbf{r}},$$
where $\mathbf{q}$ is a two-dimensional wave number vector and $\mathbf{r}$ the position. In this case, the gravity and surface-tension energies above become quadratic in $\eta(t)$, where a constant factor that would appear in every integration is dropped for convenience.
To tackle the kinetic energy, suppose the fluid is incompressible and its flow is irrotational (often, sensible approximations); the flow will then be potential: $\mathbf{v} = \nabla\phi$, where $\phi$ is a potential (scalar field) which must satisfy the Laplace equation $\nabla^2\phi = 0$. If we try separation of variables with the potential,
$$\phi(\mathbf{r}, z, t) = \phi(t)\, f(z)\, e^{i \mathbf{q} \cdot \mathbf{r}},$$
with $\phi(t)$ some function of time and $f(z)$ some function of the vertical component (height), the Laplace equation then requires $f'' = q^2 f$ of the latter. This equation can be solved with the proper boundary conditions: first, $f$ must vanish well below the surface (in the "deep water" case, which is the one we consider; otherwise a more general relation holds, which is also well known in oceanography). Therefore $f(z) = f_0\, e^{|q| z}$, with $f_0$ some constant. The less trivial condition is the matching between $\phi$ and $\eta$: the potential field must correspond to a velocity field that is adjusted to the movement of the surface, $\partial\phi/\partial z|_{z=0} = \partial\eta/\partial t$. This fixes $\phi(t) f_0$ in terms of $\eta'(t)/|q|$, so the kinetic energy can be written in terms of $\eta'(t)$ alone. Performing the $z$ integration first, we are left with a kinetic energy proportional to $\frac{\rho}{|q|}\, \eta'(t)^2$,
where we have dropped a factor of $A/4$ in the last step.
The problem is thus specified by just a potential energy involving the square of $\eta(t)$ and a kinetic energy involving the square of its time derivative: a regular harmonic oscillator. Its equation of motion will be
$$\frac{\rho}{|q|} \eta'' + (\rho g+ \sigma q^2) \eta=0,$$
whose oscillatory solution is $\eta(t) \propto e^{\pm i \omega t}$ with
$$\omega^2 = g |q| + \frac{\sigma |q|^3}{\rho},$$
the same dispersion as above if $\rho'$, the density of the upper fluid, is neglected.
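As an illustration (not part of the original article), the single-fluid relation can be evaluated numerically for clean water; the crossover between gravity-dominated and capillarity-dominated waves sits near $q = \sqrt{\rho g/\sigma}$, i.e. a wavelength of roughly 1.7 cm. A short Python sketch with approximate values for $\rho$, $g$ and $\sigma$:

import numpy as np

rho, g, sigma = 1000.0, 9.81, 0.072   # water at room temperature (approximate values)

def omega(q):
    """Deep-water gravity-capillary dispersion, upper fluid neglected."""
    return np.sqrt(g * q + sigma * q**3 / rho)

q_cross = np.sqrt(rho * g / sigma)          # gravity and capillary terms equal
print("crossover wavelength [m]:", 2 * np.pi / q_cross)   # ~0.017 m

for wavelength in (1.0, 0.1, 0.017, 0.001):
    q = 2 * np.pi / wavelength
    print(wavelength, omega(q) / (2 * np.pi), "Hz")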
References
1. F. P. Buff, R. A. Lovett, and F. H. Stillinger, Jr., "Interfacial density profile for fluids in the critical region", Physical Review Letters 15 pp. 621-623 (1965)
2. J. S. Rowlinson and B. Widom, "Molecular Theory of Capillarity", Dover 2002 (originally: Oxford University Press 1982) ISBN 0486425444
3. E. Chacón and P. Tarazona, "Intrinsic profiles beyond the capillary wave theory: A Monte Carlo study", Physical Review Letters 91 166103 (2003)
4. Samuel Safran, "Statistical thermodynamics of surfaces, interfaces, and membranes", Addison-Wesley 1994
5. P. Tarazona, R. Checa, and E. Chacón, "Critical Analysis of the Density Functional Theory Prediction of Enhanced Capillary Waves", Physical Review Letters 99 196101 (2007)
|
This has been bugging me for a while now...
Obviously, to calculate the volume/space occupied by a mole of (an ideal) gas, you'll have to specify temperature ($T$) and pressure ($P$), find the gas constant ($R$) value with the right units and plug them all in the ideal gas equation $$PV = nRT.$$
The problem? It seems to be some sort of common "wisdom" all over the Internet, that one mole of gas occupies $22.4$ liters of space. But the standard conditions (STP, NTP, or SATP) mentioned lack consistency over multiple sites/books. Common claims: A mole of gas occupies,
- $\pu{22.4 L}$ at STP
- $\pu{22.4 L}$ at NTP
- $\pu{22.4 L}$ at SATP
- $\pu{22.4 L}$ at both STP and NTP
Even Chem.SE is rife with the "fact" that a mole of ideal gas occupies $\pu{22.4 L}$, or some extension thereof.
Being so utterly frustrated with this situation, I decided to calculate the volumes occupied by a mole of ideal gas (based on the ideal gas equation) for each of the three standard conditions; namely: Standard Temperature and Pressure (STP), Normal Temperature and Pressure (NTP) and Standard Ambient Temperature and Pressure (SATP).
Knowing that,
- STP: $\pu{0 ^\circ C}$ and $\pu{1 bar}$
- NTP: $\pu{20 ^\circ C}$ and $\pu{1 atm}$
- SATP: $\pu{25 ^\circ C}$ and $\pu{1 bar}$
And using the equation, $$V = \frac {nRT}{P},$$ where $n = \pu{1 mol}$, by default (since we're talking about one mole of gas).
I'll draw appropriate values of the gas constant $R$ from this Wikipedia table:
The volume occupied by a mole of gas should be:
At STP:
\begin{align} T &= \pu{273.0 K},& P &= \pu{1 bar},& R &= \pu{8.3144598 \times 10^-2 L bar K^-1 mol^-1}. \end{align}
Plugging in all the values, I got $$V = \pu{22.698475 L},$$ which to a reasonable approximation, gives $$V = \pu{22.7 L}.$$
At NTP:
\begin{align} T &= \pu{293.0 K},& P &= \pu{1 atm},& R &= \pu{8.2057338 \times 10^-2 L atm K^-1 mol^-1}. \end{align}
Plugging in all the values, I got $$V = \pu{24.04280003 L},$$ which to a reasonable approximation, gives $$V = \pu{24 L}.$$
At SATP:
\begin{align} T &= \pu{298.0 K},& P &= \pu{1 bar},& R &= \pu{8.3144598 \times 10^-2 L bar K^-1 mol^-1}. \end{align}
Plugging in all the values, I got $$V = \pu{24.7770902 L},$$ which to a reasonable approximation, gives $$V = \pu{24.8 L}.$$
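For reference, a short Python sketch that just evaluates $V = nRT/P$ with the same numbers reproduces the three results:

R_bar = 8.3144598e-2   # L bar K^-1 mol^-1
R_atm = 8.2057338e-2   # L atm K^-1 mol^-1

conditions = {
    "STP  (273.0 K, 1 bar)":  (273.0, 1.0, R_bar),
    "NTP  (293.0 K, 1 atm)":  (293.0, 1.0, R_atm),
    "SATP (298.0 K, 1 bar)":  (298.0, 1.0, R_bar),
}
for name, (T, P, R) in conditions.items():
    print(name, R * T / P, "L/mol")   # ~22.7, ~24.0, ~24.8 respectively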
Nowhere does the magical "$\pu{22.4 L}$" figure in the three cases I've analyzed appear. Since I've seen the "one mole occupies $\pu{22.4 L}$ at STP/NTP" dictum so many times, I'm wondering if I've missed something.
My question(s):
Did I screw up with my calculations? (If I didn't screw up) Why is it that the "one mole occupies $\pu{22.4 L}$" idea is so widespread, in spite of not being close (enough) to the values that I obtained?
|
I've written some simple code that, for various $N \geq 1$ and $k = 1,\ldots,N+1$ computes the $n$-th derivative of $\log S_N$ where $S_N(z) = \sum_{k=0}^N \frac{1}{k!}z^k$ as follows:
f[z_, N_] := Sum[z^k/k!, {k, 0, N}]
g[z_, N_] := Log[f[z, N]]
t = Table[Evaluate[D[g[z, 7], {z, n}]], {n, 8}]
Here I can vary $7$ and $8$ to be any $N$ and $N+1$ (resp.). For a shorter example, the output if $7$ and $8$ are replaced by $3$ and $4$ is
$$ \left\{\frac{\frac{z^2}{2}+z+1}{\frac{z^3}{6}+\frac{z^2}{2}+z+1},\frac{z+1}{\frac{z^3}{6}+\frac{z^2}{2}+z+1}-\frac{\left(\frac{z^2}{2}+z+1\right)^2}{\left(\frac{z^3}{6}+\frac{z^2}{2}+z+1\right)^2},\frac{2 \left(\frac{z^2}{2}+z+1\right)^3}{\left(\frac{z^3}{6}+\frac{z^2}{2}+z+1\right)^3}-\frac{3 (z+1) \left(\frac{z^2}{2}+z+1\right)}{\left(\frac{z^3}{6}+\frac{z^2}{2}+z+1\right)^2}+\frac{1}{\frac{z^3}{6}+\frac{z^2}{2}+z+1},-\frac{6 \left(\frac{z^2}{2}+z+1\right)^4}{\left(\frac{z^3}{6}+\frac{z^2}{2}+z+1\right)^4}+\frac{12 (z+1) \left(\frac{z^2}{2}+z+1\right)^2}{\left(\frac{z^3}{6}+\frac{z^2}{2}+z+1\right)^3}-\frac{4 \left(\frac{z^2}{2}+z+1\right)}{\left(\frac{z^3}{6}+\frac{z^2}{2}+z+1\right)^2}-\frac{3 (z+1)^2}{\left(\frac{z^3}{6}+\frac{z^2}{2}+z+1\right)^2}\right\} $$
Since $S_m'(z) = S_{m-1}(z)$ for all $m \geq 1$, these expressions will always be $\mathbb{Z}$-linear combinations of quotients of powers of $S_m$ for varying $m$, and I would like to find a way to tell Mathematica to replace each occurrence of $S_m(z)$ with the symbol f[z, m]. Is this possible? I've seen other questions on simple substitutions using /. but none where the replacement scheme seemed easily adapted to this problem. In particular, the replacement scheme would need to be greedy to avoid, e.g., replacing instances of $S_m(z)$ with f[z, m-2] + z^(m-1)/(m-1)! + z^m/m!.
Is there a way to make this happen?
Edit: This post seems quite relevant. (I had missed it when searching.) I will see if it can be applied here.
|
I have been studying a book which describes how to write the waveform of a sequence that has a certain chip rate $T_c$.
The sequence with chip rate $T_c$ is for example
$$X=[-1 -1 +1 +1 +1 +1 +1 +1 +1 -1 +1 -1 -1 +1 +1 -1 -1 -1 +1 +1 -1 -1 -1 -1 +1 -1 +1 -1 +1 -1 -1 +1 +1 +1 -1 -1 -1 -1 -1 -1 -1 +1 -1 +1 +1 -1 -1 +1 -1 -1 +1 +1 -1 -1 -1 -1 +1 -1 +1 -1 +1 -1 -1 +1 +1 +1 -1 -1 -1 -1 -1 -1 -1 +1 -1 +1 +1 -1 -1 +1 +1 +1 -1 -1 +1 +1 +1 +1 -1 +1 -1 +1 -1 +1 +1 -1 +1 +1 -1 -1 -1 -1 -1 -1 -1 +1 -1 +1 +1 -1 -1 +1 -1 -1 +1 +1 -1 -1 -1 -1 +1 -1 +1 -1 +1 -1 -1 +1]$$
Let's assume that $X$ is of length 128.
So the book explains that the waveform can be written as
$$r(nT_c)= X(n) \underbrace{\exp\left( j\pi \frac{n}{2}\right)}_{???},\qquad n=0,1,\cdots,127$$
The chip rate is $T_c=0.57\ \mathrm{ns}$...
What I don't understand is the part I have marked ??? above...
Any ideas?
Best Regards.
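For what it's worth, here is a quick numeric look I took at the factor under the brace (my own illustration, not the book's explanation): for integer $n$, $\exp(j\pi n/2)$ simply cycles through $1, j, -1, -j$, i.e. it rotates successive chips by a quarter turn each.

import numpy as np

n = np.arange(8)
rot = np.exp(1j * np.pi * n / 2)
print(np.round(rot, 6))   # [ 1, j, -1, -j, 1, j, -1, -j ]

# Applying it to the first few chips of the +/-1 sequence:
X = np.array([-1, -1, +1, +1, +1, +1, +1, +1])
print(np.round(X * rot, 6))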
|
My research group focuses on molecular dynamics, which obviously can generate gigabytes of data as part of a single trajectory which must then be analyzed.
Several of the problems we're concerned with involve correlations in the data set, which means that we need to keep track of large amounts of data in memory and analyze them, rather than using a more sequential approach.
What I'd like to know is what are the most efficient strategies for handling I/O of large data sets into scripts. We normally use Python-based scripts because it makes coding the file I/O much less painful than C or Fortran, but when we have tens or hundreds of millions of lines that need to be processed, it's not so clear what the best approach is. Should we consider doing the file input part of the code in C, or is another strategy more useful? (Will simply preloading the entire array into memory be better than a series of sequential reads of "chunks" (order of megabytes)?)
Some additional notes:
We are primarily looking for scripting tools for post-processing, rather than "on-line" tools—hence the use of Python.
As stated above, we're doing MD simulations. One topic of interest is diffusion calculations, for which we need to obtain the Einstein diffusion coefficient: $$D = \frac{1}{6} \lim_{\Delta t \rightarrow \infty} \left< \left( {\bf x}(t + \Delta t) - {\bf x}(t) \right)^2 \right>$$ This means we really need to load all of the data into memory before beginning the calculation—all of the chunks of data (records of individual times) will interact with one another.
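In case it helps anchor the discussion, here is a minimal sketch of one strategy we have considered (the file name, layout and sizes are placeholders, not our actual trajectory format): memory-mapping the coordinates with numpy, so the script can address the whole trajectory without an explicit read loop while the OS pages data in as needed.

import numpy as np

n_frames, n_atoms = 10_000, 100     # small stand-in for a real trajectory

# Stand-in for the file the MD code would write: float64 xyz per atom per frame
demo = np.memmap("trajectory.bin", dtype=np.float64, mode="w+",
                 shape=(n_frames, n_atoms, 3))
demo[:] = np.cumsum(np.random.normal(size=demo.shape), axis=0)  # fake random walk
demo.flush()

# Analysis side: map the file instead of reading it all into memory at once
traj = np.memmap("trajectory.bin", dtype=np.float64, mode="r",
                 shape=(n_frames, n_atoms, 3))

def msd(traj, lag, stride=10):
    """Mean-squared displacement at one lag, touching only strided frames."""
    disp = traj[lag::stride] - traj[:-lag:stride]
    return np.mean(np.sum(disp**2, axis=-1))

print(msd(traj, lag=100))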
|
To solve the problem $x^2 \log (\frac{2}{\sqrt {2-\sqrt3} }) - 4x - 2 = x^2 \log (\frac{1}{2-\sqrt3}) - 4x - 3$, we'll first add $4x+2$ to both sides, giving $x^2 \log (\frac{2}{\sqrt {2-\sqrt3} }) = x^2 \log (\frac{1}{2-\sqrt3}) - 1$.
We'll then use the property of logarithms which states:$\log(\frac{a}{b}) = \log(a)-\log(b)$.
We'll apply that to $x^2 \log (\frac{2}{\sqrt {2-\sqrt3} })$ first, giving $\log(2)*x^2-\log(\sqrt {2-\sqrt3} )*x^2$. Applying the rule of powers inside logarithms, $\log(a^b) = b*\log(a)$, we then obtain $\log(2)*x^2-\frac{1}{2}\log(2-\sqrt3)*x^2$.
Our expression right now should look like this: $\log(2)*x^2-\frac{1}{2}\log(2-\sqrt3)*x^2 = x^2\log(\frac{1}{2-\sqrt3}) - 1$
We'll then solve the right side just like the left. We now have $\log(2)*x^2-\frac{1}{2}\log(2-\sqrt3)*x^2 = \log(1)*x^2 - \log(2-\sqrt3)*x^2-1$.
Adding $\log(2-\sqrt3)*x^2$ to both sides gives $\log(2)*x^2+\frac{1}{2}\log(2-\sqrt3)*x^2=\log(1)*x^2-1$.
Let's now factor out $x^2$, giving us $x^2(\log(2)-\log(1)+\frac{1}{2}\log(2-\sqrt3)) = -1$.
$\log(1) = 0$, which we know because $b^0 = 1$. We now have $x^2(\log(2)+\frac{1}{2}\log(2-\sqrt3)) = -1$.
We'll now divide by $(\log(2)+\frac{1}{2}\log(2-\sqrt3))$, allowing us to see that $x^2 = -\frac{1}{\log(2)+\frac{1}{2}\log(2-\sqrt3)}$.
Unfortunately, I can't help you any more until I know what base the $\log$ is. $\log_{10}(2)$ is way different from $\log_{e}(2)$, making it very hard to get any other results from this problem.
If you have any questions, which by my lousy and messy proof you probably do, go ahead and ask them!
|
This is a continuation of this question.
http://ocw.mit.edu/courses/physics/8-01-physics-i-classical-mechanics-fall-1999/video-lectures/lecture-1/ skip this lecture to around 25:50.
After doing dimensional analysis on $t\propto h^\alpha m^\beta g^\gamma$ Lewin concludes that:
$$\alpha = \frac{1}{2},\quad \gamma = -\frac{1}{2},\quad \beta = 0$$
This is all fully understood, but he then goes to conclude from this that:
$$t = C \sqrt{\frac{h}{g}}$$
How did he get to this? And why is he allowed to just assume that there is a constant, C, there when he doesn't even know its value or what it is?
Keep things as simple as possible please, I'm 16.
|
A Group and Its Center, Intuitively
Last week we took an intuitive peek into the First Isomorphism Theorem as one example in our ongoing discussion on quotient groups. Today we'll explore another quotient that you've likely come across, namely that of a group by its center.
Example #2: A group and its center
If $G$ is a group, its
center $Z(G)=\{g\in G:gx=xg \text{ for all $x\in G$}\}$ is the subgroup consisting of those elements of $G$ that commute with everyone else in $G$. In line with the intuition laid out in this mini-series, we'd like to be able to think of (the substantial part of) $G/Z(G)$ as consisting of those elements of $G$ that don't commute with everyone else in $G$.
But how can we see this? Wouldn't it be nice if we had a "commutativity detector"? I think so! But how would we go about finding,
or constructing, such a device?
Here's a BIG hint....
Not too long ago we chatted about the most obvious secret in mathematics: if you want to study (or detect!) properties (like the failure to be abelian!*) of an object (like a group!), it's really helpful to look at maps (like homomorphisms!) to/from that object to another (like the group itself!).
So since the secret is out, let's put it to use!
Let's pick an arbitrary element $g\in G$. Our goal is to test $g$ for commutativity: does it commute with every element in $G$? That is, is $g$ in $Z(G)$? Or is it not? To find out, let's define a map $\phi_g:G\to G$ that takes an element $x$ and sends it to $gxg^{-1}$. It's not too hard to check that this map is actually a homomorphism. Moreover, we gain two key observations:
- If $g$ is in $Z(G)$, then $gxg^{-1}=x$ for all $x\in G$, and so $\phi_g$ is really the identity map, i.e. $\phi_g(x)= x$ for all $x\in G.$
- If $g$ is not in $Z(G)$, then there is at least one $x\in G$ so that $gxg^{-1}\neq x$, and so $\phi_g$ has no chance of being the identity map.
Well this is great! The commutativity (or lack thereof) of $g$ is entirely reflected in whether or not the corresponding map $\phi_g$ is the identity! That is, $\phi_g$
is the identity if and only if $g \in Z(G)$. Or equivalently, it's not the identity if and only if $g\not\in Z(G)$. So there we have it! Our commutativity detector! For each element $g\in G$, we simply need to look at the corresponding homomorphism $\phi_g$ and ask, "Is it the identity map?"
The assignment $g\mapsto\phi_g$ turns out to be a group homomorphism in its own right. It maps from $G$ to a group of special homomorphisms $G\to G$ called the
inner automorphisms of $G$, denoted by $\text{Inn}(G).$ These are precisely the isomorphisms from $G$ to itself that are of the form $x\mapsto gxg^{-1}$ for $g\in G$.
Notice that several elements of $G$ may give rise to the same inner automorphism. In fact, $\phi_g=\phi_{gh}$ whenever $h\in Z(G)$. (Check this: if $x\in G$, then $\phi_{gh}(x)=(gh)x(gh)^{-1}=g(hxh^{-1})g^{-1}=gxg^{-1}=\phi_g(x)$.) So we might as well lump those elements together in a single pile, or coset, and call it $gZ(G)$. We might expect, then, a one-to-one correspondence between the cosets $gZ(G)$ in $G/Z(G)$ and the inner automorphisms $G\to G$. And that's exactly what we get! The map $G\to\text{Inn}(G)$ given by $g\mapsto\phi_g$ is surjective, and its kernel---all the elements of $G$ whose corresponding $\phi_g$ is the identity map---is exactly $Z(G)$. So by the First Isomorphism Theorem, $$G/Z(G)\cong\text{Inn(G)}.$$
Of course, there's only one way for an element $g\in G$ to induce the identity map, and that's if $g\in Z(G)$. (This is why there's only one "trivial" coset, namely $Z(G)$, in $G/Z(G)$.) But there may be lots of non-trivial cosets, i.e. lots of elements $g\not\in Z(G)$ that induce different, non-identity inner automorphisms of $G$. But the latter comprise the substantial or interesting part of the quotient. And that is why I think it's helpful to view $G/Z(G)$ as being made up of those elements of $G$ that don't commute!
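If you'd like to see the detector run on a small example, here's a quick Python sketch (my own illustration; the choice of $S_3$, built as permutation tuples, is just for convenience). It computes the center and the inner automorphisms and checks that $|G|/|Z(G)| = |\text{Inn}(G)|$:

from itertools import permutations

# Elements of S3 as tuples p, where p[i] is the image of i
G = list(permutations(range(3)))

def compose(p, q):           # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

center = [g for g in G if all(compose(g, x) == compose(x, g) for x in G)]

# The inner automorphism attached to g, recorded as the tuple of images of all x in G
def phi(g):
    return tuple(compose(compose(g, x), inverse(g)) for x in G)

inner = {phi(g) for g in G}

print(len(G), len(center), len(inner))          # 6 1 6
print(len(G) // len(center) == len(inner))      # True: |G/Z(G)| = |Inn(G)|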
*"Abelian" is a property that a group may or may not possess: either the elements commute with each other or they don't. Or maybe
some of them do (namely those in the center!) while some of them don't (those not in the center). What's neat is that the size of the quotient $G/Z(G)$ measures just how abelian $G$ is!
For starters, we know that $1\leq |G/Z(G)| \leq |G|$. In fact, $|G/Z(G)|=1$ if and only if $G=Z(G)$ if and only if $G$ is abelian. On the other hand, $|G/Z(G)|=|G|$ if and only if $Z(G)=\{e\}$ if and only if no non-identity element of $G$ commutes with everything in $G$, i.e. $G$ is as non-abelian as possible. So the abelian-ness of $G$ is inversely proportional to $|G/Z(G)|$ as it ranges from 1 to $|G|$.
|
0-1 Integer Programming Task
AKA: Binary Integer Programming.
See: Assignment Problem, Linear Equality, Linear Inequality, Feasible Region, Convex Polytope.
References
2015 (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/Linear_programming#Integer_unknowns Retrieved: 2015-12-24.
Linear programming (LP; also called linear optimization) is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming (mathematical optimization).
More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope, which is a set defined as the intersection of finitely many half spaces, each of which is defined by a linear inequality . Its objective function is a real-valued affine (linear) function defined on this polyhedron. A linear programming algorithm finds a point in the polyhedron where this function has the smallest (or largest) value if such a point exists.
Linear programs are problems that can be expressed in canonical form as : [math] \begin{align} & \text{maximize} && \mathbf{c}^\mathrm{T} \mathbf{x}\\ & \text{subject to} && A \mathbf{x} \leq \mathbf{b} \\ & \text{and} && \mathbf{x} \ge \mathbf{0} \end{align} [/math] where
x represents the vector of variables (to be determined), c and b are vectors of (known) coefficients, A is a (known) matrix of coefficients, and [math] (\cdot)^\mathrm{T} [/math] is the matrix transpose. The expression to be maximized or minimized is called the objective function (c^T x in this case). The inequalities A x ≤ b and x ≥ 0 are the constraints which specify a convex polytope over which the objective function is to be optimized. In this context, two vectors are comparable when they have the same dimensions. If every entry in the first is less-than or equal-to the corresponding entry in the second then we can say the first vector is less-than or equal-to the second vector.
Linear programming can be applied to various fields of study. It is widely used in business and economics, and is also utilized for some engineering problems. Industries that use linear programming models include transportation, energy, telecommunications, and manufacturing. It has proved useful in modeling diverse types of problems in planning, routing, scheduling, assignment, and design.
(Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/linear_programming#Integer_unknowns Retrieved: 2015-12-24.
If all of the unknown variables are required to be integers, then the problem is called an integer programming (IP) or integer linear programming (ILP) problem. In contrast to linear programming, which can be solved efficiently in the worst case, integer programming problems are in many practical situations (those with bounded variables) NP-hard. 0-1 integer programming or binary integer programming (BIP) is the special case of integer programming where variables are required to be 0 or 1 (rather than arbitrary integers). This problem is also classified as NP-hard, and in fact the decision version was one of Karp's 21 NP-complete problems.
If only some of the unknown variables are required to be integers, then the problem is called a mixed integer programming (MIP) problem. These are generally also NP-hard because they are even more general than ILP programs.
There are however some important subclasses of IP and MIP problems that are efficiently solvable, most notably problems where the constraint matrix is totally unimodular and the right-hand sides of the constraints are integers or – more general – where the system has the total dual integrality (TDI) property.
Advanced algorithms for solving integer linear programs include:
- cutting-plane methods
- branch and bound
- branch and cut
- branch and price
- delayed column generation, if the problem has some extra structure.
Such integer-programming algorithms are discussed by Padberg and in Beasley.
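To make the 0-1 special case concrete, the following is a small illustrative sketch (not drawn from the cited sources): a toy binary knapsack, i.e. maximize c·x subject to w·x ≤ W with x ∈ {0,1}^n, solved by brute-force enumeration, which is only practical for tiny n but shows the canonical structure of a BIP.

from itertools import product

# Toy 0-1 integer program (all numbers are made up)
c = [10, 13, 4, 8, 7]      # objective coefficients
w = [5, 6, 2, 4, 3]        # single constraint row
W = 10

best_value, best_x = None, None
for x in product((0, 1), repeat=len(c)):
    if sum(wi * xi for wi, xi in zip(w, x)) <= W:        # feasibility check
        value = sum(ci * xi for ci, xi in zip(c, x))     # objective value
        if best_value is None or value > best_value:
            best_value, best_x = value, x

print(best_x, best_value)   # an optimal 0-1 assignment and its objective value

Real solvers replace this enumeration with the branch-and-bound / cutting-plane machinery listed above.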
|
Following from my previous question I am trying to apply boundary conditions to this non-uniform finite volume mesh,
I would like to apply a Robin type boundary condition to the l.h.s. of the domain ($x=x_L)$, such that,
$$ \sigma_L = \left( d u_x + a u \right) \bigg|_{x=x_L} $$
where $\sigma_L$ is the boundary value; $a, d$ are coefficients defined on the boundary, advection and diffusion respectively; $u_x = \frac{\partial u}{\partial x}$, is the derivative of $u$ evaluated at the boundary and $u$ is the variable for which we are solving.
Possible approaches
I can think of two ways to implement this boundary condition on the above finite volume mesh:
A ghost cell approach.
Write $u_x$ as a finite difference including a ghost cell: $$ \sigma_L = d \frac{u_1 - u_0}{h_{-}} + a u(x_L)$$
A. Then use linear interpolation with points $x_0$ and $x_1$ to find the intermediate value, $u(x_L)$.
B. Alternatively find $u(x_L)$ by averaging over the cells, $u(x_L) = \frac{1}{2}(u_0 + u_1)$
In either case, the dependence on ghost cell can be eliminated in the usual way (via substitution into the finite volume equation).
An extrapolation approach.
Fit a linear (or quadratic) function to $u(x)$ by using the values at points $x_1, x_2$ ($x_3$). This will provide the value at $u(x_L)$. The linear (or quadratic) function can then be differentiated to find an expression for the value of the derivative, $u_x(x_L)$, at the boundary. This approach does not use a ghost cell.
Questions
- Which approach of the three (1A, 1B or 2) is "standard", or which would you recommend?
- Which approach introduces the smallest error or is the most stable?
- I think I can implement the ghost cell approach myself; however, how can the extrapolation approach be implemented, and does this approach have a name?
- Are there any stability differences between fitting a linear function or a quadratic equation?
Specific equation
I wish to apply this boundary to the advection-diffusion equation (in conservation form) with non-linear source term,
$$ u_t = -au_x + du_{xx} + s(x,u,t) $$
Discretising this equation on the above mesh using the $\theta$-method gives,
$$ w_{j}^{n+1} - \theta r_a w_{j-1}^{n+1} - \theta r_b w_{j}^{n+1} - \theta r_c w_{j+1}^{n+1} = w_j^n + (1-\theta) r_a w_{j-1}^n + (1-\theta) r_b w_j^n + (1-\theta) r_c w_{j+1}^n + s(x_j,t_n) $$
However for the boundary point ($j=1$) I prefer to use a fully implicit scheme ($\theta=1$) to reduce the complexity,
$$ w_{1}^{n+1} - r_a w_{0}^{n+1} - r_b w_{1}^{n+1} - r_c w_{2}^{n+1} = w_1^n + s_1^n $$
Notice the ghost point $w_0^{n+1}$, this will be removed by applying the boundary condition.
The coefficients have the definitions,
$$ r_a = \frac{\Delta t}{h_j}\left( \frac{ah_j}{2h_{-}} + \frac{d}{h_{-}} \right) $$
$$ r_b = - \frac{\Delta t}{h_j}\left( \frac{a}{2}\left[ \frac{h_{j-1}}{h_{-}} - \frac{h_{j+1}}{h_{+}} \right] + d\left[-\frac{1}{h_{-}} - \frac{1}{h_{+}} \right]\right) $$
$$ r_c = \frac{\Delta t}{h_j}\left(- \frac{ah_j}{2h_{+}} + \frac{d}{h_{+}} \right) $$
All the "$h$" variables are defined as in the above diagram. Finally, $\Delta t$ is the time step. (N.B. this is a simplified case with constant $a$ and $d$ coefficients; in practice the "$r$" coefficients are slightly more complicated for this reason.)
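For reference, here is a rough sketch of the algebra I have in mind for approach 1B (my own, with made-up numbers for the coefficients): the discrete Robin condition $\sigma_L = d(u_1-u_0)/h_- + a(u_0+u_1)/2$ is solved for the ghost value as $u_0 = \alpha + \beta u_1$, and that expression is substituted into the fully implicit boundary row so the ghost unknown disappears from the linear system.

# Placeholder values for the boundary data and discretisation coefficients;
# in a real code these come from the mesh, the physics and the time step.
a, d, sigma_L = 1.0, 0.5, 2.0        # advection, diffusion, boundary value
h_minus = 0.1                        # spacing between ghost node and node 1
r_a, r_b, r_c = 0.3, -0.6, 0.25      # the r coefficients of the boundary row

# Express the ghost value as u0 = alpha + beta * u1 from the Robin condition
denom = a / 2 - d / h_minus
alpha = sigma_L / denom
beta = -(d / h_minus + a / 2) / denom

# Original boundary row:  (1 - r_b) w1 - r_a w0 - r_c w2 = rhs
# After eliminating w0:   (1 - r_b - r_a*beta) w1 - r_c w2 = rhs + r_a*alpha
diag_1 = 1 - r_b - r_a * beta        # modified diagonal entry for w1
rhs_extra = r_a * alpha              # extra term added to the right-hand side

print(diag_1, rhs_extra)

Whether this is preferable to the extrapolation approach is exactly what I am asking.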
|
I have a ranking of items, $X = (x_1, \ldots, x_n)$. I want to obtain a random sequence $\hat{X}$, which is a permutation of $x_i$, such that the expected rank-correlation (say, Kendall) between $X$ and $\hat{X}$ is $\tau$, where $\tau$ is a parameter in range $[0, 1]$.
Assuming you have a very large number of items in your rank, it should be possible; the precision to which you will be able to match $\tau$ will be determined by the size of your set.
Knowing that, you will want to work backwards from $\tau$, assuming that you have $n$ observations, we can get the concordant ($c$) discordant ($d$) pair ratio as follows: $$ \tau = \dfrac{c-d}{c+d} $$ We also know that the total number of pairs for $\tau$ can be obtained using the following:
$$ {c+d} = \dfrac{1}{2}n(n+1) $$
Hence: $$ {c-d} = \dfrac{1}{2}n(n+1)-2d $$
And if we plug that into the $\tau$ function: $$ \tau = \dfrac{\dfrac{1}{2}n(n+1)-2d} {\dfrac{1}{2}n(n+1)} = 1-\dfrac{4d}{n(n+1)} $$
We can now solve for $d$ which is the variable that will determine the correct permutation necessary:
$$ d = \dfrac{1}{4}n(n+1)(1 - \tau) $$
This is where my mathematical knowledge reaches its limits, and the permutation will be calculated in python (I'm afraid it's a little convoluted, happy to refactor and add more comments):
# Function to calculate Tau without needing concordant pair number
def tau_from_discordant_and_n(discordant, n):
    return 1-(4*discordant)/(n*(n+1))

# Entrypoint to get the right array for a given tau and input array
def permutation_for_tau(items: list, tau: float):
    desired_discordant_pairs = 1/4*len(items)*(1+len(items))*(1-tau)
    rounded_discordant_pairs = round(desired_discordant_pairs)
    permutation = permutator(items, rounded_discordant_pairs)
    return permutation, tau_from_discordant_and_n(desired_discordant_pairs, len(items))

# This function creates a permutation of the array increasing the number of discordant pairs by
# one at a time, for the array [1,2,3,4] the progression would be as follows:
# [1,2,3,4] => [2,1,3,4] => [3,1,2,4] => [4,1,2,3] => [4,2,1,3] => [4,3,1,2] => [4,3,2,1]
# The "thresholds" mentioned throughout the function indicate the point at which n number of
# digits are fully reversed within the start of the sequence. The adjustment value is the first
# digit after the sequence of fully reversed digits.
def permutator(items, discordant_pairs):
    thresholds = []
    previous_threshold = 0
    for i in range(len(items) - 1):
        current_threshold = len(items) - i - 1 + previous_threshold
        thresholds.append(current_threshold)
        number_of_fully_reversed_digits = i
        if discordant_pairs <= current_threshold:
            adjustment_digit = discordant_pairs - previous_threshold + 1
            break
        previous_threshold = current_threshold
    permutation = []
    for i in range(number_of_fully_reversed_digits):
        permutation.append(len(items) - i)
    if adjustment_digit != 0:
        permutation.append(adjustment_digit)
    missing_digits = list(set(items).difference(set(permutation)))
    permutation += missing_digits
    return permutation

array_to_permutate = [1,2,3,4,5,6]
results = [(permutation_for_tau(array_to_permutate, x/10)) for x in range(10, -1, -1)]
[print('Obtained tau of: {} with array: {}'.format(x[1], x[0])) for x in results]
Output:
Obtained tau of: 1.0 with array: [1, 2, 3, 4, 5, 6]
Obtained tau of: 0.9 with array: [2, 1, 3, 4, 5, 6]
Obtained tau of: 0.8 with array: [3, 1, 2, 4, 5, 6]
Obtained tau of: 0.7 with array: [4, 1, 2, 3, 5, 6]
Obtained tau of: 0.6 with array: [5, 1, 2, 3, 4, 6]
Obtained tau of: 0.5 with array: [6, 1, 2, 3, 4, 5]
Obtained tau of: 0.4 with array: [6, 2, 1, 3, 4, 5]
Obtained tau of: 0.30000000000000004 with array: [6, 3, 1, 2, 4, 5]
Obtained tau of: 0.19999999999999996 with array: [6, 4, 1, 2, 3, 5]
Obtained tau of: 0.09999999999999987 with array: [6, 5, 1, 2, 3, 4]
Obtained tau of: 0.0 with array: [6, 5, 2, 1, 3, 4]
Let me know if you need any further clarification, hope this helps, I'm sure there are many more efficient ways to do the permutation, but this should be good enough.
|
Hello,
do you need any constraint qualifications for the KKT conditions to be optimal for the following problem (here)?
I have a linear objective, linear inequalities, and quadratically constrained equalities: $x^T H x + D x + C = 0$.
The matrix $H$ is a diagonal matrix and positive definite, $C = 0$, and $D$ is a vector full of negative ones.
Do I need any constraint qualifications, or are the KKT points always optimal?
asked
zBirdy
KKT conditions are necessary but not sufficient for optimality, regardless of any constraint qualifications. Consider the following problem:\[\begin{array}{lrcl}\textrm{maximize} & y\\\textrm{s.t.} & x^{2}+y^{2}-x-y & = & 0\\ & x & \le & 1\\ & y & \le & 2\\ & x & \ge & 0\\ & y & \ge & -2\end{array}\]which fits your format. It is easy to check that \((x,y)=(\frac{1}{2},\frac{1-\sqrt{2}}{2})\) and \((x,y)=(\frac{1}{2},\frac{1+\sqrt{2}}{2})\) are both KKT points, with multipliers \(\lambda=-\frac{1}{\sqrt{2}}\) and \(\lambda=\frac{1}{\sqrt{2}}\) respectively for the equality constraint (and zero multipliers for the inequalities, since the inequalities are slack at both points). The first point is a
|
Boost : Subject: Re: [boost] [documentation] Are SVG's in documentation viable now? From: Marco Guazzone ( marco.guazzone_at_[hidden]) Date: 2014-12-10 07:17:36
On Tue, Dec 9, 2014 at 6:27 PM, John Maddock <boost.regex_at_[hidden]> wrote:
> It looks to me like the PNG's are anti-aliased, but the SVG's aren't, which
> would probably be a display setting or something?
I've just looked in my window manager settings and anti-alias is enabled.
> In any case, I've tried them again slightly larger than before
> (http://jzmaddock.github.io/doctest/html/svg_test/equations.html) is this
> any better?
Better than before, but not wrt PNG (IMO). This is the result in Firefox:
http://imagebin.ca/v/1k9P4HIJQMdV
For instance:
- Greek symbols: some of the special Greek symbols (pi, epsilon, ...) look clearer in PNG. For example, compare $\pi$ in "bernoulli_numbers2" and $\varepsilon$ in "sinh3".
- Spacing: it seems SVG formulas are more narrow and math symbols are closer than their PNG counterparts. For example, compare the $\cos\phi$ and $\sin\phi$ expressions in the "bessel_derivatives3". On the SVG side, the $\cos$ ($\sin$) is very close to $\phi$, while on the PNG side there is a little space that makes the expression more readable (IMO).
Anyway, aside these small issues, I vote for the transition to SVG ;)
I think that my issues are motivated by your note in the original email "... the equations reference Window's specific fonts,...". So this can be easily solved in the future
Cheers,
-- Marco
|
To avoid repeating it endlessly, assume all rings and rngs are commutative. I do not know if this is necessary.
The question then is exactly the title, but I think a stronger statement is true:
For any rng $S$ there is a ring $R$ and an injective rng-homomorphism $f:S\rightarrow R$ such that for
any ring $T$ and any rng homomorphism $g:S\rightarrow T$, there is a ring homomorphism $h:R\rightarrow T$ such that $h$ extends $g$.
In fact I think the construction is pretty clear; let $X=( x_s : s\in S )$ be a set of indeterminates indexed by $S$, and let $R=\mathbb Z[X]/I$, where $I=( x_a+x_b-x_{a+b} : a,b\in S) \cup (x_a x_b-x_{ab} : a,b\in S)$.
It seems clear that if a universal object can exist, this has to be it. But I'm having trouble proving the natural map $f:S\rightarrow R$ (given by $f(a)=x_a$) is actually injective like it ought to be. Is there some classical universal property I'm missing here, or is there a slick way to ignore the details?
Also, I don't think the commutativity is at all necessary for the problem, it's just the situation I'm most used to. I think a similar construction (the free algebra on $S$ and $1$, modulo the same $I$) would do fine for the noncommutative case, and is isomorphic to this in the commutative case.
|
Answer
$T = \frac{1}{\omega}$
Work Step by Step
$V = a~\sin~(2\pi~\omega~t)$
We can find the period $T$:
$2\pi~\omega~T = 2\pi$
$T = \frac{2\pi}{2\pi~\omega}$
$T = \frac{1}{\omega}$
|
Zagorodnyuk S. M.
Ukr. Mat. Zh. - 2016. - 68, № 9. - pp. 1180-1190
We study a generalization of the class of orthonormal polynomials on the real axis. These polynomials satisfy the following relation: $(J_5 - \lambda J_3)\vec{p}(\lambda) = 0$, where $J_3$ is a Jacobi matrix and $J_5$ is a semi-infinite real symmetric five-diagonal matrix with positive numbers on the second subdiagonal, $\vec{p}(\lambda) = (p_0(\lambda ), p_1(\lambda ), p_2(\lambda ),...)^T$, where the superscript $T$ denotes the operation of transposition, with the initial conditions $p_0(\lambda ) = 1,\; p_1(\lambda) = \alpha \lambda + \beta,\; \alpha > 0, \beta \in R$. Certain orthonormality conditions for the polynomials $\{ p_n(\lambda )\}^{\infty}_{n=0}$ are obtained. An explicit example of these polynomials is constructed.
Ukr. Mat. Zh. - 2012. - 64, № 8. - pp. 1053-1066
This paper is a continuation of our investigation on the truncated matrix trigonometric moment problem begun in Ukr. Mat. Zh. - 2011. - 63, № 6. - P. 786-797. In the present paper, we obtain the Nevanlinna formula for this moment problem in the general case. We assume here that there is more than one moment and the moment problem is solvable and has more than one solution. The coefficients of the corresponding matrix linear fractional transformation are expressed in explicit form via prescribed moments. Simple determinacy conditions for the moment problem are presented.
Ukr. Mat. Zh. - 2011. - 63, № 6. - pp. 786-797
We study the truncated matrix trigonometric moment problem. We obtain parametrization of all solutions of this moment problem (in both nondegenerate and degenerate cases) via an operator approach. This parametri-zation establishes a one-to-one correspondence between a certain class of analytic functions and all solutions of the problem. We use important results on generalized resolvents of isometric operators, obtained by M. E. Chumakin.
Ukr. Mat. Zh. - 2010. - 62, № 4. - pp. 471–482
We obtain necessary and sufficient conditions for the solvability of the strong matrix Hamburger moment problem. We describe all solutions of the moment problem by using the fundamental results of A. V. Shtraus on generalized resolvents of symmetric operators.
|
What is the Principle of Superposition?
An electric field is created by every charged particle in the universe in the space surrounding it. The originated field can be calculated with the help of Coulomb’s law. The principle of superposition allows for the combination of two or more electric fields.
In the next section, let us discuss how the superposition principle is applied in electrostatics.
Principle of Superposition in Electrostatics
The superposition principle is helpful when there is a large number of charges in a system. Let's consider the following case.
For convenience, let us consider one positive charge and two negative charges exerting a force on it. From the superposition theorem we know that the resultant force is the vector sum of all the forces acting on the body; therefore the resultant force $F_r$ can be given as follows:
\( \overrightarrow{F_r} ~=~ \frac {1}{4 \pi \epsilon_0} \left[ \frac {Q q_1}{r^2_{12}} \hat{r}_{12} ~+~\frac {Qq_2}{r^2_{13}} \hat{r}_{13} \right] \)
Where,
\( \hat{r}_{12}\) and \( \hat{r}_{13}\) are unit vectors along the lines joining Q to q 1 and Q to q 2 respectively,
\( \epsilon_0 \) is the permittivity of free space,
Q, q 1 and q 2 are the magnitudes of the charges respectively,
r 12 and r 13 are the distances between the charges Q and q 1 & Q and q 2 respectively.
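As a quick illustration (the charge values and positions below are arbitrary), the resultant force can be computed numerically by summing the individual Coulomb forces as vectors:

import numpy as np

EPS0 = 8.854e-12           # permittivity of free space, F/m
K = 1 / (4 * np.pi * EPS0)

# Arbitrary example charges (coulombs) and positions (metres)
Q,  rQ = 1e-6,  np.array([0.0, 0.0])
q1, r1 = -2e-6, np.array([0.1, 0.0])
q2, r2 = -1e-6, np.array([0.0, 0.2])

def coulomb_force_on_Q(q, rq):
    """Force exerted on Q by a charge q located at rq."""
    d = rQ - rq
    r = np.linalg.norm(d)
    return K * Q * q * d / r**3

# Principle of superposition: the resultant is the vector sum of the individual forces
F = coulomb_force_on_Q(q1, r1) + coulomb_force_on_Q(q2, r2)
print(F)   # newtons, x and y components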
Continuous charge distribution:
We know that the smallest form of charge we can obtain would be +e or –e, i.e. the charge of an electron or a proton; hence charge is quantized. Continuous charge distribution means that all charges are closely bound together, having very little space between each other.
There are different ways in which charges can be distributed:
- Linear charge distribution
- Surface charge distribution
- Volume charge distribution
Linear charge distribution:
Linear charge distribution is when the charges get distributed uniformly along a length, like around the circumference of a circle or along a straight wire; linear charge distribution is denoted by the symbol λ.
λ = dq/dl and it is measured in Coulombs per meter.
Surface charge distribution:
When a charge is distributed over a specific area, like the surface of a disk, it is called surface charge distribution; it is denoted by the Greek letter σ.
Surface charge distribution is measured in coulombs per square meter or C m⁻².
Volume charge distribution:
When a charge is distributed uniformly over a volume it is said to be a volume charge distribution, like the distribution of charge inside a sphere or a cylinder. It is denoted by ρ.
Volume charge distribution is measured in coulombs per cubic meter or C m⁻³.
Stay tuned with Byju’s to know more about continuous charge distribution, electrostatics and much more.
|
This is a possible salvage for the failed attempt in this posting.
Informally this theory asserts the existence of a set $\{x|\phi\}$ for any formula $\phi$ such that when all membership symbols in it are replaced by equality symbols then every subformula of the resulting formula would hold of at least one object, and also the negation of that subformula must hold of at least one object.
The idea of this tricky kind of comprehension, is to take any formula $\phi$ having all and only $x_1,..,x_n$ occurring free, then replace all $\in$ symbols in it by the equality symbol $=$, call it $\phi^=$, then take each subformula $\psi^=$ of $\phi^=$, and let $\vec{p_\psi}$ be the string of all variables free in $\psi$ other than $x_1,..,x_{n-1}$, ordered after their quantificational order in $\phi$; now add to $\psi^=$ the existential prefix $\exists \vec{p_\psi}$, as to have the formula $\exists \vec{p_\psi} (\psi^=)$.
Now we come to state our Modified naive Comprehension axiom:
Comprehension: If $\phi$ is a formula in the first order language of set theory in which all and only symbols $x_1,..,x_n$ occur free, and if $\psi_1,..,\psi_m$ are all subformulas of $\phi$; then: $$\forall x_1,..,x_{n-1}\big{(}\bigwedge_{i=1}^m[\exists \vec{p_{\psi_i}} (\psi_i^=) \land \exists \vec{p_{\psi_i}}(\neg \psi_i^=)] \\ \to \exists x \forall x_n (x_n \in x \leftrightarrow \phi)\big{)}$$; is an axiom. Axiom of Multiplicity: $\forall x,y \ \exists z (z \neq x \land z \neq y)$
I personally think this is a little bit complex, and I highly doubt its consistency. Yet if there is a chance that this is consistent, then it would actually prove all of the axioms of a short axiomatization of $\sf NF$ [although the axioms of Frege $1^*$ and of the unordered intersection relation set must be written in another equivalent way], since full Extensionality is assumed here.
The idea here is to block the loophole argument presented in the answer to the prior attempt. We add the requirement of making the preconditional check for every subformula of $\phi^=$, and not just for $\phi^=$ as was the case in the prior attempt. I hope this manages to block that argument.
Although the procedure looks a little bit complex here, it's not really that complicated, since it's easy to check the truth of equality sentences.
I need to comment that I think this method most likely fails, i.e. it is inconsistent, and further restrictions must be added; for instance prenex normal forms might escape the above restrictions, so we might need the restriction that every subformula of $\phi$ must not be a prenex normal form of a formula that is not in prenex normal form. However, a proof of inconsistency of this theory still eludes me.
|
I'm wondering if there are any recursion principles more general than the following, first given by Montague, Tarski and Scott (1956):
Let $\mathbb{V}$ be the universe, and $\mathcal{R}$ be a well-founded relation such that for all $x\in Fld\mathcal{R}$, $\{y:y\mathcal{R}x\}$ is a set. Further, let $\mathbb{F}$ be a function with $dmn\mathbb{F}=Fld\mathcal{R}\times\mathbb{V}$. Then there exists a unique function $\mathbb{G}$ such that $dmn\mathbb{G}=Fld\mathcal{R}$ and for all $x\in Fld\mathcal{R}$, $$\mathbb{G}(x)=\mathbb{F}(x,\mathbb G\restriction\{y:y\mathcal{R}x\}).$$
Specifically, I would like to be able to drop the requirement that $\{y:y\mathcal{R}x\}$ be a set. I am attempting to define by recursion a function on $O_n\times O_n$ ordered lexicographically, which is a well ordering and consequently a well-founded relation, however $\{y:y<(0,\alpha)\}$ is a proper class for all $\alpha>0$. The proof of the above theorem in the context of MK class theory (and even its statement) rely pretty explicitly on this not happening, however I am aware that category theorists often work in situations where they need to aggregate together many proper classes and manipulate them/construct morphisms between them.
Is there a (perhaps large-cardinal based) strengthening of this theorem that would allow one to legitimately make such a definition by recursion in a class-theoretical context?
|
Bid, Aveek and Bora, Achyut and Raychaudhuri, Arup K (2005)
Experimental study of Rayleigh instability in metallic nanowires using resistance fluctuations measurements from 77K to 375K. In: SPIE: Fluctuations and Noise in Materials II, 24 May, Austin, TX, USA, Vol. 5843, pp. 147-154.

Abstract
Nanowires with high aspect ratio can become unstable due to Rayleigh-Plateau instability. The instability sets in below a certain minimum diameter when the force due to surface tension exceeds the limit that can lead to plastic flow as determined by the yield stress of the material of the wire. This minimum diameter is given by $d_m \approx 2\sigma_S/\sigma_Y$, where $\sigma_S$ is the surface tension and $\sigma_Y$ is the yield stress. For Ag and Cu we estimate that $d_m \approx$ 15nm. The Rayleigh instability (a classical mechanism) is severely modified by electronic shell effect contributions. It has been predicted recently that quantum-size effects arising from the electron confinement within the cross section of the wire can become an important factor as the wire is scaled down to atomic dimensions; in fact the Rayleigh instability could be completely suppressed for certain values of $k_F r_O$. Even for the stable wires, there are pockets of temperature where the wires are unstable. Low-frequency resistance fluctuation (noise) measurement is a very sensitive probe of such instabilities, which often may not be seen through other measurements. We have studied the low-frequency resistance fluctuations in the temperature range 77K to 400K in Ag and Cu nanowires of average diameter $\approx$ 15nm to 200nm. We identify a threshold temperature T* for the nanowires, below which the power spectral density $S_V(f) \sim 1/f$. As the temperature is raised beyond T* there is onset of a new contribution to the power spectra. We link this observation to onset of Rayleigh instability expected in such long nanowires. $T^* \sim$ 220K for the 15nm Ag wire and $\sim$ 260K for the 15nm Cu wire. We compare the results with a simple estimation of the fluctuation based on Rayleigh instability and find good agreement.
Item Type: Conference Paper. Additional Information: Copyright belongs to the International Society for Optical Engineering. Department/Centre: Division of Physical & Mathematical Sciences > Physics. URI: http://eprints.iisc.ac.in/id/eprint/9819
|
Euler’s formulas for sine and cosine
$\sin \omega t = \dfrac{1}{2j} (e^{+j\omega t} - e^{-j\omega t})$
$\cos \omega t = \dfrac{1}{2} (e^{+j\omega t} + e^{-j\omega t})$
Sine and cosine emerge from the vector sum of three spinning numbers in Euler's formulas:
The green vector is $\dfrac{1}{2} e^{+j\omega t}$
The pink vector is $-\dfrac{1}{2} e^{-j\omega t}$ (used to make sine)
The red vector is $+\dfrac{1}{2} e^{-j\omega t}$ (used to make cosine)
Sine is the yellow dot, the vector sum of green and pink.
Cosine is the orange dot, the vector sum of green and red.
Notice the $90^\circ$ phase shift between sine and cosine created by the simulation. Watch the dots as they pass through the origin.
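If you want to check these identities numerically rather than visually, here is a small Python sketch of mine (taking $\omega = 1$; it is not part of the original d3.js animation):

import numpy as np

t = np.linspace(0, 2*np.pi, 7)        # a few sample times, omega = 1
green = 0.5*np.exp(1j*t)              # (1/2) e^{+j omega t}
pink  = -0.5*np.exp(-1j*t)            # -(1/2) e^{-j omega t}
red   = 0.5*np.exp(-1j*t)             # +(1/2) e^{-j omega t}

# green + pink lands on the imaginary axis at height sin(t);
# green + red lands on the real axis at cos(t)
print(np.allclose((green + pink)/1j, np.sin(t)))   # True
print(np.allclose(green + red, np.cos(t)))         # True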
Just music, no narration. The background music is
Sunday Stroll by Huma Huma.
Animated with d3.js, source code. The image is not a video. It’s being computed on the fly by your device.
Created by Willy McAllister.
|
A Effect size appendix
This appendix contains supplements to the effect size guideline.
A.1 Alternative approaches for simple effect size exemplar
The simple effect size exemplar demonstrates one common technique for estimating mean differences in response time and the uncertainty around them (Student's t confidence intervals). This supplement demonstrates several possible approaches one might take to calculate differences in response time, and compares them. It is not intended to be exhaustive.
A.1.1 Libraries needed for this analysis
# See here for rstan installation instructions:
# https://github.com/stan-dev/rstan/wiki/RStan-Getting-Started
library(rstan)
library(tidyverse)
library(modelr)    # for data_grid()
library(broom)     # for tidy()
library(ggstance)  # for geom_pointrangeh(), stat_summaryh()
library(brms)      # for brm() (requires rstan)
library(tidybayes) # for mean_qi()
# requires `import` and `MASS` packages to be installed
import::from(MASS, mvrnorm)
A.1.2 Data
We will use the same data as the simple effect size exemplar:
set.seed(12)
n <- 20
data <- tibble(
  group = rep(c("A", "B"), each = n),
  completion_time_ms = c(
    rlnorm(n, meanlog = log(170), sdlog = 0.3),
    rlnorm(n, meanlog = log(50), sdlog = 0.4)
  )
)
See that exemplar for more information.
A.1.3 Calculating simple effect size A.1.3.1 Approach 1: Difference in means with Student’s t confidence interval
This is the approach used in the exemplar. While the response distributions are non-normal, the sampling distribution of the difference in means will still be defined on \((-\infty, +\infty)\) and approximately symmetrical (per the central limit theorem), so we can compute a
Student’s t distribution confidence interval for the difference in means.
t_interval <- t.test(completion_time_ms ~ group, data = data) %>%
  tidy()   # put result in tidy tabular format
t_interval
 estimate estimate1 estimate2 statistic p.value parameter conf.low conf.high                  method alternative
 103.6021  159.0898  55.48774  9.388748       0  28.66574 81.02211   126.182 Welch Two Sample t-test   two.sided
The tidy()ed output of the t.test() function includes an estimate of the mean difference in milliseconds (estimate) as well as the lower (conf.low) and upper (conf.high) bounds of the 95% confidence interval.
A.1.3.2 Approach 2a: Ratio of geometric means with Student’s t confidence interval on log-scale
For responses that are assumed to be log-normal, one alternative is to calculate the mean difference on the log scale. Because the mean on the log scale corresponds to the geometric mean of the untransformed responses, this is equivalent to calculating a ratio of geometric means on the untransformed scale (in this case, a ratio of geometric mean response times). See the data transformation guideline for more information.
log_t_interval <- t.test(log(completion_time_ms) ~ group, data = data) %>%
  tidy()   # put result in tidy tabular format
log_t_interval
 estimate estimate1 estimate2 statistic p.value parameter  conf.low conf.high                  method alternative
 1.085344  5.036436  3.951092  11.05821       0  34.90244 0.8860725  1.284615 Welch Two Sample t-test   two.sided
We can also transform this difference (in the log scale) into a ratio of geometric mean response times:
log_t_ratios <- log_t_interval %>%
  mutate_at(vars(estimate, estimate1, estimate2, conf.low, conf.high), exp)
log_t_ratios
 estimate estimate1 estimate2 statistic p.value parameter conf.low conf.high                  method alternative
 2.960458  153.9204   51.9921  11.05821       0  34.90244 2.425584  3.613278 Welch Two Sample t-test   two.sided
This output shows the estimated geometric mean response times (estimate1 and estimate2), and an estimate of the ratio between them (estimate = estimate1/estimate2) as well as the lower (conf.low) and upper (conf.high) bounds of the 95% confidence interval of that ratio. This allows us to estimate how many times slower or faster one condition is compared to another.
However, since we have some sense in this context of how large or small we might want response times to be on the original scale (e.g., people tend to perceive differences on the order of 100ms), it may be easier to interpret effect sizes if we calculate them on that scale.
In this case, the geometric mean of condition A is roughly 154 and of B is roughly 52.0. A is about 2.96 \(\times\) B, with a 95% confidence interval of [2.43\(\times\), 3.61\(\times\)]. So we have: 52.0 \(\times\) 2.96 \(\approx\) 154.
A.1.3.3 Approach 2b: log-normal regression with marginal estimate of difference in means using simulation
We can run a linear regression that is equivalent to approach 2a:
m_log <- lm(log(completion_time_ms) ~ group, data = data)
summary(m_log)
## 
## Call:
## lm(formula = log(completion_time_ms) ~ group, data = data)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.49993 -0.18776  0.00806  0.12970  0.78975 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  5.03644    0.06940   72.57  < 2e-16 ***
## groupB      -1.08534    0.09815  -11.06 1.94e-13 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.3104 on 38 degrees of freedom
## Multiple R-squared:  0.7629, Adjusted R-squared:  0.7567 
## F-statistic: 122.3 on 1 and 38 DF,  p-value: 1.939e-13
This model estimates the geometric means in each group. However, we want to know the difference in means on the original (time) scale, not on the log scale.
We can translate the log-scale means into means on the original scale using the fact that if a random variable \(X\) is log-normally distributed with mean \(\mu\) and standard deviation \(\sigma\):
\[ \log(X) \sim \mathrm{Normal}(\mu, \sigma^2) \]
Then the mean of \(X\) is (see here):
\[ \mathbf{E}[X] = e^{\mu+\frac{\sigma^2}{2}} \]
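As a quick sanity check of this identity, independent of the R analysis, here is a tiny Python sketch (the parameter values below are arbitrary, chosen only for illustration):

import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 5.0, 0.3                        # arbitrary log-scale parameters
x = rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)
print(x.mean())                             # Monte Carlo estimate of E[X]
print(np.exp(mu + sigma**2 / 2))            # closed-form lognormal mean; the two agree closely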
We will use the sampling distribution of the coefficients of m_log to generate samples of \(\mu\) in each group, then translate these samples (along with an estimate of \(\sigma\)) onto the outcome scale. Given an estimate of the coefficients (\(\boldsymbol{\hat\beta}\)) and an estimate of the covariance matrix (\(\boldsymbol{\hat\Sigma}\)), the sampling distribution of the coefficients on a log scale is a multivariate normal distribution:
\[ \mathrm{Normal}(\boldsymbol{\hat\beta}, \boldsymbol{\hat\Sigma}) \] We can sample from that distribution and use the estimated log-scale standard deviation (\(\hat\sigma\)) to generate sample means on the untransformed scale, which we can use to derive a difference of means on the original scale and a confidence interval around that difference (this is sort of treating the sampling distribution as a Bayesian posterior):
log_interval <- mvrnorm(10000, mu = coef(m_log), Sigma = vcov(m_log)) %>%
  as_data_frame() %>%
  mutate(
    sigma = sigma(m_log),
    # Using MLE estimate of residual SD. Could also sample from
    # sqrt(rgamma(nrow(.), (n - 1)/2, ((n - 1)/sigma(m_log)^2)/2))
    # but results are similar
    # get samples of means for each group on the original scale
    mu_A = `(Intercept)`,
    mean_A = exp(mu_A + sigma^2 / 2),
    mu_B = `(Intercept)` + groupB,
    mean_B = exp(mu_B + sigma^2 / 2),
    # get samples of the difference in means on the original scale
    estimate = mean_A - mean_B
  ) %>%
  mean_qi(estimate) %>%
  to_broom_names() %>%  # makes the column names the same as those returned by broom::tidy
  mutate(method = "lognormal regression")
## Warning: `as_data_frame()` is deprecated, use `as_tibble()` (but mind the new semantics).
## This warning is displayed once per session.
log_interval
 estimate conf.low conf.high .width .point .interval               method
 107.2028 85.23455  131.7186   0.95   mean        qi lognormal regression
This approach does not account for non-constant variance on the log scale, however; i.e., the fact that the variances of the two groups are different. The next approach does.
A.1.3.4 Approach 3a: log-normal regression with marginal estimate of difference in means using Bayesian regression (uninformed priors)
For this approach, we will use a Bayesian log-normal regression model to estimate the mean and variance of the response distribution in each group on a log scale. We will then transform these parameters into a difference in means on the original (millisecond) scale, as in approach 2b.
For this approach, we will use a Bayesian log-normal regression with uninformed priors. This model is the same as the lm model in approach 2, except that it also allows the variance to be different in each group (in other words, it does not assume constant variance between groups, also known as homoskedasticity).
m_log_bayes <- brm(brmsformula(
    completion_time_ms ~ group,
    sigma ~ group   # allow variance to be different in each group
  ), data = data, family = lognormal)
Similar to approach 2b, we will derive samples of the mean difference, this time from the posterior distribution. We will use these to derive a credible interval (Bayesian analog to a confidence interval) around the mean difference:
log_bayes_samples <- m_log_bayes %>%
  tidy_draws() %>%
  mutate(
    mu_A = b_Intercept,
    sigma_A = exp(b_sigma_Intercept),
    mean_A = exp(mu_A + sigma_A^2 / 2),
    mu_B = b_Intercept + b_groupB,
    sigma_B = exp(b_sigma_Intercept + b_sigma_groupB),
    mean_B = exp(mu_B + sigma_B^2 / 2),
    estimate = mean_A - mean_B
  )

log_bayes_interval <- log_bayes_samples %>%
  mean_qi(estimate) %>%
  to_broom_names() %>%
  mutate(method = "lognormal regression (Bayesian, uninformed)")
log_bayes_interval
 estimate conf.low conf.high .width .point .interval                                      method
  103.923 82.73573   128.301   0.95   mean        qi lognormal regression (Bayesian, uninformed)
Alternatively, we could use tidybayes::add_fitted_draws, which internally calls brmsfit::posterior_linpred, which does the same math as above to calculate the posterior distribution for the mean on the response scale. This saves us some math (and makes sure we did not do that math incorrectly):
log_bayes_samples <- data %>%
  # reverse the order of group so that the output is A - B instead of B - A
  data_grid(group = fct_rev(group)) %>%
  add_fitted_draws(m_log_bayes, value = "estimate") %>%
  ungroup() %>%
  compare_levels(estimate, by = group) %>%
  mutate(method = "lognormal regression (Bayesian, uninformed)") %>%
  group_by(method)

log_bayes_interval <- log_bayes_samples %>%
  mean_qi(estimate) %>%
  to_broom_names()
log_bayes_interval
                                      method estimate conf.low conf.high .width .point .interval
 lognormal regression (Bayesian, uninformed)  103.923 82.73573   128.301   0.95   mean        qi
This gives the estimated mean difference between conditions in milliseconds (estimate), as well as the lower (conf.low) and upper (conf.high) bounds of the 95% quantile credible interval of that difference.
A.1.3.5 Approach 3b: log-normal regression with marginal estimate of difference in means using Bayesian regression (weakly informed priors)
Finally, let's run the same analysis with weakly informed priors based on what we might believe reasonable ranges of the effect are. To see what priors we can set in brm models, we can use the get_prior() function:
get_prior(brmsformula(
    # for using informed priors, the `0 + intercept` formula can be helpful: otherwise,
    # brm re-centers the data on 0, which can make it harder to set an informed prior.
    # see help("set_prior") for more information.
    completion_time_ms ~ group + 0 + intercept,
    sigma ~ group + 0 + intercept
  ), data = data, family = lognormal)
 prior class      coef group resp  dpar nlpar bound
           b
           b    groupB
           b intercept
           b                      sigma
           b    groupB            sigma
           b intercept            sigma
This shows priors on the log-scale mean and priors on the log-scale standard deviation (sigma).
First, we’ll assume that completion time is something like a pointing task: not reasonably faster than 10ms or slower than 2s (2000ms). On log scale, that is between approximately \(\log(10) \approx 2\) and \(\log(2000) \approx 8\), so we’ll center our prior intercept between these (\((8+2)/2\)) and give it a 95% interval that covers them (sd of \((8-2)/4\)): \(\mathrm{Normal}(5, 1.5)\).
For differences in log mean, we’ll assume that times will not be more than about 100\(\times\) difference in either direction: a zero-centered normal prior with standard deviation \(\approx log(100)/2 \approx 2.3\): \(\mathrm{Normal}(0, 2.3)\).
Since the standard deviation is estimated using a submodel that itself uses a log link, we have to set a prior on the log scale of the log standard deviation. For standard deviation on the log scale, let’s assume a baseline of around 100ms response time. Then, our prior on standard deviation on the log scale could reasonably be as small as one reflecting individual differences on the order of 10ms: \(\log(110) - \log(100) \approx 0.1 \approx e^{-2.4}\), or as large as one reflecting a difference of 1 second: \(\log(1100) - \log(100) \approx 2.4 \approx e^{0.9}\). So we’ll center our log log sd prior at \((0.9 + -2.4)/2\) and give it a 95% interval that covers them (sd of \((0.9 - -2.4)/4\)): \(\mathrm{Normal}(-0.75, 0.825)\).
Finally, for differences in log log standard deviation, we’ll assume zero-centered with similar magnitude to the intercept: \(\mathrm{Normal}(0, 0.825)\).
These priors can be specified as follows:
log_bayes_priors <- c(
    prior(normal(5.5, 1.75), class = b, coef = intercept),
    prior(normal(0, 2.3), class = b, coef = groupB),
    prior(normal(-0.75, 0.825), class = b, coef = intercept, dpar = sigma),
    prior(normal(0, 0.825), class = b, coef = groupB, dpar = sigma)
  )
log_bayes_priors
                prior class      coef group resp  dpar nlpar bound
    normal(5.5, 1.75)     b intercept
       normal(0, 2.3)     b    groupB
 normal(-0.75, 0.825)     b intercept            sigma
     normal(0, 0.825)     b    groupB            sigma
Then we can re-run the model from approach 3a with those priors:
m_log_bayes_informed <- brm(brmsformula(
    completion_time_ms ~ group + 0 + intercept,
    sigma ~ group + 0 + intercept
  ), data = data, family = lognormal, prior = log_bayes_priors)
Similar to approach 2b, we will derive samples of the mean difference, this time from the posterior distribution. We will use these to derive a credible interval (Bayesian analog to a confidence interval) around the mean difference:
log_bayes_informed_samples <- data %>%
  # reverse the order of group so that the output is A - B instead of B - A
  data_grid(group = fct_rev(group)) %>%
  add_fitted_draws(m_log_bayes_informed, value = "estimate") %>%
  ungroup() %>%
  compare_levels(estimate, by = group) %>%
  mutate(method = "lognormal regression (Bayesian, weakly informed)") %>%
  group_by(method)

log_bayes_informed_interval <- log_bayes_informed_samples %>%
  mean_qi(estimate) %>%
  to_broom_names()
log_bayes_informed_interval
                                           method estimate conf.low conf.high .width .point .interval
 lognormal regression (Bayesian, weakly informed) 104.4916 82.77971  129.5219   0.95   mean        qi
This gives the estimated mean difference between conditions in milliseconds (estimate), as well as the lower (conf.low) and upper (conf.high) bounds of the 95% quantile credible interval of that difference.
A.1.4 Comparing approaches
All approaches that give estimates for the difference in means give very similar results:
bayes_samples = bind_rows(log_bayes_samples, log_bayes_informed_samples)

bind_rows(t_interval, log_interval, log_bayes_interval, log_bayes_informed_interval) %>%
  ggplot(aes(x = estimate, y = method)) +
  geom_violinh(data = bayes_samples, color = NA, fill = "gray65") +
  geom_pointrangeh(aes(xmin = conf.low, xmax = conf.high)) +
  geom_vline(xintercept = 0)
The Bayesian estimates include posterior distributions shown as violin plots in gray.
|
Adaptive Algorithms for Optimization of Hip Implant Positioning
This research is carried out in the framework of MATHEON supported by Einstein Foundation Berlin.
This project aims at a software environment supporting computer-assisted planning for total hip joint replacement by suggesting implant positions optimized for longevity of bone implants. The aim is to pre-operatively assess stress distribution in bone and to determine an optimal implant position with respect to natural function and stress distribution to prevent loosening, early migration, stress shielding, undesired bone remodeling, and fracture. Increasing the longevity of implants will help to enhance quality of life and reduce the cost of health care in aging societies.
Focus of the research is the development of efficient optimization algorithms by adaptive quadrature of the high-dimensional space of daily motions and appropriate choice of tolerances for the underlying dynamic contact solver.
Introduction
Though we have an increased need for durable implants due to an aging society, the current implantation process still allows for improvement. Up-to-date software tools in computer-assisted hip implant surgery, e.g., still use 2D images of the patient's hip geometry or have only begun to use 3D models. Loads which the implant has to endure on a daily basis are not considered at all, and the joint socket is placed using the surgeon's experience.
The aim of this project is to optimize the hip implant position using a 3D, patient-specific hip joint model. The natural range of motion is to be retained, and the stress induced by daily motions in a healthy hip joint is to be matched, such that bone remodeling, dislocation, inflammation and pain are reduced or even prevented.
The numerical solution of this optimization problem is expensive. This is why methods have to be developed that allow for a fast (or at least practical) computation of an optimal hip implant position. These methods need to take, for example, an adaptive approach on the load domain that describes the daily motions. The error tolerance also needs to change adaptively, the optimization can be done with a multilevel approach, and necessary refinement is to follow a goal-oriented error estimation scheme rather than a general error estimation scheme or, even worse, uniform refinement.
Underlying model
The bone is modeled as a linear hyperelastic, isotropic material. The included strain tensor (Cauchy-Green deformation tensor) can contain geometric non-linearity, but for the testing model the tensor is linearized. Ligaments and the force densities they exert on the joint through the ligament attachment points are incorporated by measuring the geometric (Euclidean) distance.
The FEM discretization and assembly of the corresponding matrices is done using Kaskade 7.3. The code is coupled with the FuFEM code from the Freie Universität Berlin in cooperation with the associated project ECMath CH1 – Reduced basis methods in orthopedic hip surgery planning. The FuFEM code contains the contact solver that is described in this publication.
In order to have a computational inexpensive, but structurally realistic test setup for method development, simple 2D situations have been considered.
The first computation has been conducted on a geometry of two squares. The more complicated geometry of a hip joint was acquired from ZIBAmira. For both geometries the Triangle code from Shewchuk was used to obtain a first discretization.
The time varying force acting on the bottom boundary of the lower object is incorporated using a quasi-static approach.
Features like cartilage, muscles, soft tissues, a non-linearized deformation tensor, 3D geometry, Cosserat rods for the ligaments and others are possible (and eventually necessary), and will be included later.
Adaptive Optimization Challenges
The first challenge is the definition of an objective function. We consider objectives of the form:
\[J(p) = \int\limits_{m\in\mathcal{L}} \text{w}(m) \int\limits_{x\in\Omega} j(\sigma_{h}(p,m))\ \text{d}x\ \text{d}m\]
The basic idea is to optimize the implant's position p by computing the stresses sigma_h (the subscript h symbolizing the discretization of the geometry) induced through loads, integrated over space. The function j penalizes high local stresses, marking as infeasible those loads that threaten to damage the bone or the implant's fixation, or are harmful in some other way. The letter m stands for one motion from the parameter domain L, where the loads are parametrized by angle and by longitudinal and tangential forces, while the function w maps the load m to its probability (walking and sitting loads being more frequent than stumbling or skiing loads).
This setup doesn't include retention of the natural range of motion yet, which needs to be taken care of as well. Also, the probability distribution of different loads and the penalization function j have to be designed.
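A conceptual Python sketch of evaluating such an objective by summing a stress penalty over sampled motions follows; all names, the 1-D load parametrization, and the quadratic penalty are placeholders of mine, not the project's actual code:

import numpy as np

def stress_field(p, m):
    # placeholder for the FEM/contact solve: element-wise stresses sigma_h(p, m)
    x = np.linspace(0.0, 1.0, 100)                 # stand-in for spatial "elements"
    return (1.0 + 0.5*m) * np.exp(-((x - p)**2) / 0.05)

def penalty(sigma, sigma_max=1.2):
    # j(.): penalize only stresses exceeding a threshold
    return np.sum(np.maximum(sigma - sigma_max, 0.0)**2)

def objective(p, motions, weights):
    # discrete quadrature over the load domain L, weighted by w(m)
    return sum(w * penalty(stress_field(p, m)) for m, w in zip(motions, weights))

motions = np.linspace(0.0, 1.0, 50)                # 1-D stand-in for the motion parameters
weights = np.full_like(motions, 1.0/len(motions))  # stand-in probability w(m)
print(min(np.linspace(0, 1, 21), key=lambda p: objective(p, motions, weights)))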
Given the objective function, it is not clear whether it is continuous. This has to be studied and the (generalized) gradient will be derived, accordingly.
Afterwards, methods can be developed that ensure a fast and effective computation of an optimal position. In order to deal with the complexity of the parameter domain L, adaptivity in the choice of motion parameters is to be implemented. The selection of the motions happens according to the given probability distribution.
To even out numerical errors, this method will be combined with multilevel optimization. This combination will further be coupled with trust-region or line-search as well as goal-oriented error estimation methods.
When the implant position changes, the resulting discretization errors will be analyzed and damped, such that they don't have a detrimental influence on finding an optimal implant position.
Dealing with errors will also take place in the development of adaptivity for the choice of error tolerance. Here we will start with a relatively coarse tolerance, which evolves during the optimization process: it becomes increasingly fine as the iteration converges and coarsens again on implant position changes, when the given differential equation has to be solved again for the chosen motions.
This leads to the last part of the development. One has to look at the solution and decide which quantities will be stored for later use and comparison. The storage of the full spatio-temporal solution is infeasible, since it would exceed the available standard memory after a short time.
Publications
2019: Sebastian Götschel, Anton Schiela, Martin Weiser. Kaskade 7 -- a Flexible Finite Element Toolbox. ZIB-Report 19-48.
2017: Marian Moldenhauer, Martin Weiser, Stefan Zachow. Adaptive Algorithms for Optimal Hip Implant Positioning. PAMM, 17(1), pp. 203-204, 2017.
2017: Lars Lubkoll, Anton Schiela, Martin Weiser. An affine covariant composite step method for optimization with PDEs as equality constraints. Optimization Methods and Software, 32(5), pp. 1132-1161, 2017 (preprint available as ZIB-Report 15-09).
2016: Martin Weiser. Inside Finite Elements. De Gruyter, 2016.
2015: Sebastian Götschel, Martin Weiser. Lossy Compression for PDE-constrained Optimization: Adaptive Error Control. Comput. Optim. Appl., 62(1), pp. 131-155, 2015.
|
A transistor goes into saturation when both the base-emitter and base-collector junctions are forward biased, basically. So if the collector voltage drops below the base voltage, and the emitter voltage is below the base voltage, then the transistor is in saturation.
Consider this Common Emitter Amplifier circuit. If the collector current is high enough, then the voltage drop across the resistor will be big enough to lower the collector voltage below the base voltage. But note that the collector voltage can't go too low, because the base-collector junction will then be like a forward-biased diode! So, you will have a voltage drop across the base-collector junction but it will not be the usual 0.7V, it will be more like 0.4V.
How do you bring it out of saturation? You could reduce the amount of base drive to the transistor (either reduce the voltage \$V_{be}\$ or reduce the current \$I_b\$), which will then reduce the collector current, which means the voltage drop across the collector resistor will be decreased also. This should increase the voltage at the collector and act to bring the transistor out of saturation. In the "extreme" case, this is what is done when you switch off the transistor. The base drive is removed completely. \$V_{be}\$ is zero and so is \$I_b\$. Therefore, \$I_c\$ is zero too, and the collector resistor is like a pull-up, bringing the collector voltage up to \$V_{CC}\$.
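To put rough numbers on that (component values assumed purely for illustration, not taken from the original question): with \$V_{CC} = 5\ \mathrm{V}\$, \$R_C = 1\ \mathrm{k\Omega}\$, a grounded emitter and \$V_{be} \approx 0.7\ \mathrm{V}\$, the collector sits at \$V_C = V_{CC} - I_C R_C\$. The collector falls below the base once \$I_C > (5 - 0.7)\ \mathrm{V} / 1\ \mathrm{k\Omega} \approx 4.3\ \mathrm{mA}\$, so with \$\beta = 100\$ a base current of only about \$43\ \mu\mathrm{A}\$ is enough to push the transistor toward saturation.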
A follow-up comment on your statement
Does a BJT become saturated by raising Vbe above a certain threshold? I doubt this, because BJTs, as I understand them, are current-controlled, not voltage-controlled.
There are a number of different ways to describe transistor operation. One is to describe the relationship between currents in the different terminals:
$$I_c = \beta I_b$$
$$I_c = \alpha I_e$$
$$I_e = I_b + I_c$$
etc. Looking at it this way, you could say that the collector current is controlled by the base current.
Another way of looking at it would be to describe the relationship between base-emitter voltage and collector current, which is
$$I_c = I_s e^{\frac{V_{be}} {V_T}}$$
Looking at it this way, the collector current is controlled by the base voltage.
This is definitely confusing. It confused me for a long time. The truth is that you cannot really separate the base-emitter voltage from the base current, because they are interrelated. So both views are correct. When trying to understand a particular circuit or transistor configuration, I find it is usually best just to pick whichever model makes it easiest to analyze.
Edit:
Does a BJT become saturated by allowing Ib to go over a certain threshold? If so, does this threshold depend on the "load" that is connected to the collector? Is a transistor saturated simply because Ib is high enough that the beta of the transistor is no longer the limiting factor in Ic?
The bold part is basically exactly right. But the \$I_b\$ threshold is not intrinsic to a particular transistor. It will depend not only on the transistor itself but on the configuration: \$V_{CC}\$, \$R_C\$, \$R_E\$, etc.
|
In other notebooks we've studied Euler's method for solving ordinary differential equations (ODEs). This method is perhaps the simplest to understand and implement and thus is very popular as teaching material. However, for scientific purposes, Euler's method is inaccurate compared to more complex methods using the same step size, and it is also conditionally unstable, as discussed in the previous modules.
Here, we'll consider the Runge-Kutta method, which is one of the most applied numerical methods for solving ODEs. It is deemed reasonably efficient considering computation time, and its fourth order approximation offers decent accuracy, that is, good enough for most problems not demanding solutions of very high precision. Do note that we present the method in its simplest form: you may encounter somewhat different implementations of it elsewhere. In general, newer (and smarter) implementations of the method can provide significant boosts to its efficiency.
Consider the same (non-linear) ODE as discussed in the previous modules,$$ \dot{x}(t) = \cos(x(t)) + \sin(t), \qquad \dot{x}\equiv\frac{\textrm{d}x(t)}{\textrm{d}t} $$
with initial condition $x(t_0) = 0, t_0 = 0$. The (Explicit) Euler method solves this problem by discretizing the variables such that\begin{align*} t & \rightarrow t_n \qquad\equiv t_0 + n\cdot\Delta t,\\[1.0em] x(t) & \rightarrow x(t_n) \quad\equiv x_n ,\\[1.0em] n & = 0,1,\ldots,N; \end{align*}
Then, the function value $x_{n+1}$ may be approximated by the preceding value plus the change (derivative) in that point multiplied with the distance in time $\Delta t$ between $x_n$ and $x_{n+1}$. That is,$$ x_{n+1} = x_n + (\Delta t) \cdot \dot{x}_n, \qquad \Delta t = \frac{t_N - t_0}{N} \equiv h $$
For this particular problem, we are given $\dot{x}_n(t_n)$, such that this can be inserted directly into the Euler formula giving,$$ x_{n+1} = x_n + h [\cdot \cos(x_n) + \sin(t_n)] $$
Which is about everything needed to solve the current problem. For a more thorough and detailed explanation of the Euler method, take a look at modules 4.1 (explicit) and 4.3 (implicit).
%matplotlib inline
import numpy as np                 # Numerical Python
import matplotlib.pyplot as plt    # Graph and Plot handling
import time                        # Time Measure
# A quick implementation of Euler's Method for solving
# the above Initial Value Problem
# Try adjusting 'N' to see how the numerical solution converges
t0 = 0.0
tN = 10.0
N = 15
t = np.linspace(t0, tN, N)
x_Eu = np.zeros(N)   ## Euler
h = (tN - t0)/N
# Initial condition:
x_Eu[0] = 0
for n in range(0, N-1):
    x_Eu[n+1] = x_Eu[n] + h * (np.cos(x_Eu[n]) + np.sin(t[n]))

plt.figure(figsize=(10,6))
plt.plot(t, x_Eu, '-ro', linewidth=3.0, label=r'Euler')
plt.ylabel(r'Dimensionless position, $x(t)$')
plt.xlabel(r'Dimensionless time, $t$')
plt.legend(loc=4)   # 'loc' sets the location of the legend-text in the plot
plt.grid()
plt.show()
A quick error analysis, by Taylor expanding the function about $t+h$, shows that the local truncation error emitted in each and every step of Euler's method is proportional to the step size squared, $h^2 = (\Delta t)^2$.
Now, if we iterate the system a total of $N=\frac{t_N - t_0}{h}\propto h^{-1}$ times, then the total, global error is $E = N\cdot e \propto h$. That is, Euler's method is a first order method, as concluded in the previous modules, which also give a more in-depth error analysis of the Euler method.
The question now is: Can we obtain a better approximated solution of same step size?
The underlying concept of the Runge-Kutta method we wish to show is built on the same basic concepts as Euler's method. Consider now that we apply Euler's method again, however this time we will not use the derivative at $x_{n}$, but rather at the middle point $x_{m}=\frac{x_{n+1}+x_n}{2}$, referred to as a test-point. One can then compute a more accurate $x_{n+1}$ using the information of the derivative at this midpoint ($\dot{x}_m$) of the interval $h$ between $x_n$ and $x_{n+1}$. That is,$$ x_{n+1} = x_n + h\left[\cos\!\Big(x_n + \tfrac{h}{2}\big[\cos(x_n)+\sin(t_n)\big]\Big) + \sin\!\Big(t_n + \tfrac{h}{2}\Big)\right] $$
Now, let us plot it against the original Euler-approach and study the difference.
x_imp = np.zeros(N)
for n in range(0, N-1):
    x_imp[n+1] = x_imp[n] + h * ( np.cos(x_imp[n] + (h/2.0)*(np.cos(x_imp[n]) + np.sin(t[n])))
                                  + np.sin(t[n] + h/2.0) )

plt.figure(figsize=(10,6))
plt.plot(t, x_Eu , '-ro' , linewidth=3.0, label=r'Euler')
plt.plot(t, x_imp, '--go', linewidth=3.0, label=r'Improved Euler')
plt.ylabel(r'Dimensionless position, $x(t)$')
plt.xlabel(r'Dimensionless time, $t$')
plt.title(r'Stepsize, $N$ = %i' % N)
plt.legend(loc=4)   # 'loc' sets the location of the legend-text in the plot
plt.grid()
plt.show()
The new approximation is significantly different from the solution provided by the original Euler approach (using $N=15$). What we have implemented here is often referred to as The Improved Euler Method or The Midpoint Method for ODEs. However, a dear child has many names, and we will refer to it as The Second Order Runge-Kutta Method. If we assume our improved method to be more accurate, one sees that the original approach over-shoots, which is a known issue for the Euler method.
Let us in a short manner compare the errors of the two approaches, by first estimating the error of our improved scheme in a similar fashion as above, by Taylor expanding about $(t+h/2)$. The improved scheme applies the approximation$$ x(t+h) \approx x(t) + h\dot{x}(t+h/2) $$
where we expand $\dot{x}(t+h/2)$ resulting in$$ \dot{x}(t+h/2) = \dot{x}(t) + (h/2)\ddot{x} + \frac{(h/2)^2}{2}\dddot{x} + \mathcal{O}(h^3) $$
In a similar fashion as for the Euler method, we compare our improved scheme with the exact Taylor expansion,\begin{align*} \text{Exact:} & \qquad x(t + h) = x(t) + h\dot{x} + \frac{h^2}{2}\ddot{x} + \frac{h^3}{6}\dddot{x} + \ldots \\[1.2em] \text{Impr. E.:} & \qquad x(t + h) \approx x(t) + h[\dot{x}(t) + (h/2)\ddot{x}]\\[1.2em] \text{Error:} & \qquad e = \frac{h^3}{6}\dddot{x} + \mathcal{O}(h^4) \end{align*}
This implies a global error $E=N\cdot e \propto h^2$, which effectively gives a method of second order accuracy in $h$, not so surprising considering the name of the method. Let us ask ourselves again: can we do it even better?
What Euler did was estimating $x_{n+1}$ by using $x_n$ and $\dot{x}_n$. What we just did was estimating $x_{n+1}$ by using $x_n$ and the derivative at the point in-between, namely the mid-test-point, $x_m$. Now, here's an interesting idea: would it be possible to use a similar test-point to calculate the derivative at the actual test-point? Take some time to think about what this question implies before you read on.
The perhaps not so groundbreaking answer to the above question is yes. In fact, we can do even better: we can make a test-point for the test-point's test-point, and so on! As it happens, The Fourth Order Runge-Kutta Method uses three such test-points and is the most widely used Runge-Kutta method. You might ask why we don't use five, ten or even more test-points, and the answer is quite simple: it is not computationally free to calculate all these test-points, and the gain in accuracy rapidly decreases beyond the fourth order of the method. That is, if high precision is of such importance that you would require a tenth-order Runge-Kutta, then you're better off reducing the step size $h$ than increasing the order of the method.
Also, there exist other, more sophisticated methods which can be both faster and more accurate for equivalent choices of $h$, but which, obviously, may be a lot more complicated to implement. See for instance Richardson Extrapolation, the Bulirsch-Stoer method, Multistep methods, Multivalue methods and Predictor-Corrector methods.
Nevertheless, we now show a general expression for the arbitrarily-ordered Runge-Kutta Method, before we apply the fourth order method on the problem given above. Again, consider an ODE written on the form$$ \dot{x}(t) = g(x(t),t) $$
Then, for the general $q$-ordered Runge-Kutta method, one has\begin{equation*} k_i = h \cdot g\left(x_n + \sum_{j=1}^{i-1} a_{i,j}\, k_j,\; t_n + c_i h\right), \qquad i = 1,\ldots,q \end{equation*}
Such that,\begin{equation*} x_{n+1} = x_n + \sum_{i=1}^{q} b_i k_i \end{equation*}
The scheme as introduced now in its general form, has some undefined coefficients: $a_{i,j}$, $b_{i}$ and $c_{i}$. The elements $a_{i,j}$ are denoted as the Runge-Kutta Matrix, while $b_i$ are weights and $c_i$ are nodes. Deriving these (and coefficients for other similar methods) can be a tedious task of complicated algebra. Hence, such coefficients are usually obtained from tables in litterature. The Runge-Kutta methods' coefficients can be found using the (John C.) Butcher tableau,\begin{array}{ c|c c c } 0 & && &&& \\ c_2 & a_{2,1} && &&& \\ c_3 & a_{3,1} && a_{3,2} &&& \\ \vdots & &\ddots& &&& \\ c_q & a_{q,1} && a_{q,2} & \ldots & a_{q,q-1}&& \\ \hline & b_1 && b_2 & \ldots & b_{q-1} && b_q \end{array}
The coefficients are then determined by demanding the method to be consistent. Consistency of a numerical (finite difference) approximation refers to the fact that the approximated problem (equation) approaches the exact problem in the limit of the step size going towards zero. For the Runge-Kutta method, this happens to be the case when\begin{equation*} \sum_{j=1}^{i-1} a_{i,j} = c_i, \quad i = 2,\ldots,q, \qquad \text{and} \qquad \sum_{i=1}^{q} b_i = 1. \end{equation*}
There may exist multiple choices for the coefficients for an arbitrary order $q$, but we will not go any further into the details of their derivation. Instead, we give the perhaps most widely applied choice for the fourth order method $(q=4)$, for which the Butcher tableau is\begin{array}{ c|c c c } 0 & & & & \\ 1/2 & 1/2 & & & \\ 1/2 & 0 & 1/2 & & \\ 1 & 0 & 0 & 1 & \\ \hline & 1/6 & 1/3 & 1/3 & 1/6 \\ \end{array}
The above tableau enables the calculations of $k_1$, $k_2$, $k_3$ and $k_4$ such that we may now apply the fourth order Runge-Kutta method to problem at hand.
x_4RK = np.zeros(N)   ## Runge-Kutta

def g(x_, t_):
    return np.cos(x_) + np.sin(t_)

for n in range(0, N-1):
    k1 = h*g( x_4RK[n]        , t[n]         )
    k2 = h*g( x_4RK[n] + k1/2 , t[n] + (h/2) )
    k3 = h*g( x_4RK[n] + k2/2 , t[n] + (h/2) )
    k4 = h*g( x_4RK[n] + k3   , t[n] + h     )
    x_4RK[n+1] = x_4RK[n] + k1/6 + k2/3 + k3/3 + k4/6

plt.figure(figsize=(10,6))
plt.plot(t, x_Eu , '-ro' , linewidth=3.0, label=r'Euler')
plt.plot(t, x_imp, '--go', linewidth=3.0, label=r'2nd order Runge-Kutta')
plt.plot(t, x_4RK, ':bo' , linewidth=3.0, label=r'4th order Runge-Kutta')
plt.ylabel(r'Dimensionless position, $x(t)$')
plt.xlabel(r'Dimensionless time, $t$')
plt.title(r'Stepsize, $N$ = %i' % N)
plt.legend(loc=4)
plt.show()
One sees that there is a marginal difference between the fourth and second order implementations of Runge-Kutta, while Euler's method has a significant error. We recommend that you adjust the total number of points $N$ to see how the different methods approach each other (and the exact solution) as $h \overset{N \to \infty}{\longrightarrow} 0$.
So far, we've restricted ourselves to considering ODEs containing first (and zeroth) order derivatives only; however, this is in fact all one has to master. Be careful not to confuse the (derivative) order of the equation and the (accuracy) order of the numerical method. A $q$-th order ODE can always be reduced to a system of lower order ODEs, and ultimately to a set of $q$ first order ODEs. Consider the general, linear, second order ODE with constant coefficients and some arbitrary initial conditions$$ a\ddot{x} + b\dot{x} + cx = g(t), \qquad x = x(t) $$
Introduce then a new variable, $\nu(t) \equiv \dot{x}(t)$ such that the above problem can be expressed by\begin{align*} \dot{x} &= \nu &\equiv F(x,\nu,t)\\[1.0em] \dot{\nu} &= \frac{1}{a} ( g(t) - b\nu - cx ) &\equiv G(x,\nu,t) \end{align*}
Where we have introduced $F,G$ only for generality. The method would in fact also be applicable for the more general case where $a,b,c$ are functions of $x,\nu,t$.
Nevertheless, this set of two dependent, first order ODEs can be solved using a Runge-Kutta method! As a matter of fact, we can solve any set of $M$ first order ODEs; however, keep in mind that in this particular example the test-points for $x$ will depend on those for $\nu$ and vice versa, such that the two equations will have to be solved simultaneously. That also goes for an arbitrarily sized set of ODEs.
For the fourth order Runge-Kutta method, one gets for time-step $n$\begin{align*} k_{x1} &= h \cdot F\left(x_n ,\nu_n ,t_n \right), & k_{\nu1} &= h \cdot G\left(x_n ,\nu_n ,t_n \right) \\[1.0em] k_{x2} &= h \cdot F\left(x_n + \frac{k_{x1}}{2},\nu_n + \frac{k_{\nu1}}{2},t_n+\frac{h}{2} \right), & k_{\nu2} &= h \cdot G\left(x_n + \frac{k_{x1}}{2},\nu_n + \frac{k_{\nu1}}{2},t_n+\frac{h}{2} \right) \\[1.0em] k_{x3} &= h \cdot F\left(x_n + \frac{k_{x2}}{2},\nu_n + \frac{k_{\nu2}}{2},t_n+\frac{h}{2} \right), & k_{\nu3} &= h \cdot G\left(x_n + \frac{k_{x2}}{2},\nu_n + \frac{k_{\nu2}}{2},t_n+\frac{h}{2} \right) \\[1.0em] k_{x4} &= h \cdot F\left(x_n + k_{x3} ,\nu_n + k_{\nu3} ,t_n+h \right), & k_{\nu4} &= h \cdot G\left(x_n + k_{x3} ,\nu_n + k_{\nu3} ,t_n+h \right) \end{align*}
Such that,\begin{align*} x_{n+1} &= x_n + k_{x1}/6 + k_{x2}/3 + k_{x3}/3 + k_{x4}/6 \\[1.0em] \nu_{n+1} &= \nu_n + k_{\nu 1}/6 + k_{\nu 2}/3 + k_{\nu 3}/3 + k_{\nu 4}/6 \end{align*}
For $n=1,\ldots,N-1;$ while $x_0$, $\nu_0$ should be known as initial conditions. Do note that the step size $h$ can be withdrawn from inside of $k$, and left to the calculation of $x_{n+1}$ instead, which will somewhat reduce the total number of calculations/operations in each loop. You should also notice that $k_{xi}$ depends on $k_{\nu (i-1)}$ and $k_{\nu i}$ depends on $k_{x(i-1)}$. Thus, the $k$-values need to be calculated in an orderly fashion!
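A minimal sketch of this coupled scheme in the notebook's Python style follows; the oscillator coefficients and the forcing term below are arbitrary choices of mine, not taken from the text:

import numpy as np

a_c, b_c, c_c = 1.0, 0.3, 4.0                    # arbitrary constant coefficients
g_t = lambda t_: np.sin(t_)                      # arbitrary forcing term

F = lambda x_, v_, t_: v_                                   # dx/dt  = nu
G = lambda x_, v_, t_: (g_t(t_) - b_c*v_ - c_c*x_) / a_c    # dnu/dt from the ODE

t0, tN, N = 0.0, 10.0, 200
h = (tN - t0) / N
t = np.linspace(t0, tN, N)
x = np.zeros(N); v = np.zeros(N)                 # initial conditions x_0 = nu_0 = 0

for n in range(N - 1):
    kx1 = h*F(x[n],          v[n],          t[n]);        kv1 = h*G(x[n],          v[n],          t[n])
    kx2 = h*F(x[n] + kx1/2,  v[n] + kv1/2,  t[n] + h/2);  kv2 = h*G(x[n] + kx1/2,  v[n] + kv1/2,  t[n] + h/2)
    kx3 = h*F(x[n] + kx2/2,  v[n] + kv2/2,  t[n] + h/2);  kv3 = h*G(x[n] + kx2/2,  v[n] + kv2/2,  t[n] + h/2)
    kx4 = h*F(x[n] + kx3,    v[n] + kv3,    t[n] + h);    kv4 = h*G(x[n] + kx3,    v[n] + kv3,    t[n] + h)
    x[n+1] = x[n] + kx1/6 + kx2/3 + kx3/3 + kx4/6
    v[n+1] = v[n] + kv1/6 + kv2/3 + kv3/3 + kv4/6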
We conclude that for a system of $M$ first order equations, the total number of $k$-values for a $q$-ordered Runge-Kutta method will be $M \cdot q$. That is, large sets of differential equations with derivatives of higher orders will result in an even larger set of first order ODEs, yielding a huge number of $k$-variables. If this is the case, you might want to reconsider whether the Runge-Kutta method really is the best approach for your problem.
However, the Runge-Kutta method is rather easy and straightforward to implement, and provides what we will call quite decent precision while still being reasonably fast and efficient. It is, by all means, a powerful tool for any numerical scientist!
|
If you need higher-order derivatives, you won't get good results using equidistant data points. If you can sample your function at arbitrary nodes, I would recommend using Chebyshev points, i.e. $$x_k = \cos\left( \pi \frac{k}{n} \right), \quad k=0\dots n$$ for a polynomial of degree $n$.
You can evaluate the polynomial stably using Barycentric Interpolation. Note that since you're using a polynomial of high degree over the entire interval, you will probably need less data points. Note also that this assumes that your data can be represented by a polynomial, i.e. it is continuous and smooth.
Getting higher-order derivatives from the interpolant is a bit tricky and is ill-conditioned for high derivatives. It can be done, however, using Chebyshev polynomials. In Matlab/Octave (sorry, my Python is not good at all), though, you could do the following:
% We will use the sine function as a test case
f = @(x) sin( 4*pi*x );
% Set the number of points and the interval
N = 40;
a = 0; b = 1;
% Create a Vandermonde-like matrix for the interpolation using the
% three-term recurrence relation for the Chebyshev polynomials.
x = cos( pi*[0:N-1]/(N-1) )';
V = ones( N ); V(:,2) = x;
for k=3:N, V(:,k) = 2*x.*V(:,k-1) - V(:,k-2); end;
% Compute the Chebyshev coefficients of the interpolation. Note that we
% map the points x to the interval [a,b]. Note also that the matrix inverse
% can be either computed explicitly or evaluated using a discrete cosine transform.
c = V \ f( (a+b)/2 + (b-a)/2*x );
% Compute the derivative: this is a bit trickier and relies on the relationship
% between Chebyshev polynomials of the first and second kind.
temp = [ 0 ; 0 ; 2*(N-1:-1:1)'.*c(end:-1:2) ];
cdiff = zeros( N+1 , 1 );
cdiff(1:2:end) = cumsum( temp(1:2:end) );
cdiff(2:2:end) = cumsum( temp(2:2:end) );
cdiff(end) = 0.5*cdiff(end);
cdiff = cdiff(end:-1:3);
% Evaluate the derivative fp at the nodes x. This is useful if you want
% to use Barycentric Interpolation to evaluate it anywhere in the interval.
fp = V(:,1:N-1) * cdiff;
% Evaluate the polynomial and its derivative at a set of points and plot them.
xx = linspace(-1,1,200)';
Vxx = ones( length(xx) , N ); Vxx(:,2) = xx;
for k=3:N, Vxx(:,k) = 2*xx.*Vxx(:,k-1) - Vxx(:,k-2); end;
plot( (a+b)/2 + (b-a)/2*xx , [ Vxx*c , Vxx(:,1:N-1)*cdiff ] );
The code to compute the derivative can be re-applied several times to compute higher derivatives.
If you use Matlab, you may be interested in the Chebfun project, which does most of this automatically and from which parts of the code example above were taken. Chebfun can create an interpolant from literally any function, e.g. continuous, discontinuous, with singularities, etc... and then compute its integral, derivatives, use it to solve ODEs, etc...
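Since the question mentions Python, here is a rough equivalent sketch using numpy.polynomial.chebyshev (my translation, under the same assumptions as above; note that I also fold in the chain-rule factor 2/(b - a) for the interval map, so the derivative is taken with respect to the original variable):

import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: np.sin(4*np.pi*x)       # same test function as above
N = 40
a, b = 0.0, 1.0

# Chebyshev (extreme) points on [-1, 1], mapped to [a, b] for sampling f
x = np.cos(np.pi*np.arange(N)/(N - 1))
c = C.chebfit(x, f((a + b)/2 + (b - a)/2*x), N - 1)   # Chebyshev coefficients of the interpolant

# Differentiate the Chebyshev series; the map [a, b] -> [-1, 1] contributes 2/(b - a)
cd = C.chebder(c) * 2/(b - a)

# Evaluate the interpolant and its derivative on a fine grid of the [-1, 1] variable
xx = np.linspace(-1, 1, 200)
fi  = C.chebval(xx, c)    # interpolant values
fpi = C.chebval(xx, cd)   # derivative values on the original [a, b] scale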
|
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling
@heather well, there's a spectrum
so, there's things like New Journal of Physics and Physical Review X
which are the open-access branch of existing academic-society publishers
As far as the intensity of a single-photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density$$u=\frac{\hbar\omega}{V}$$is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di...
Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." — tparker3 mins ago
> A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service”
for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty
> for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing)
@0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals.
@BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work...
@BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions.
Alternative Plan: 1. Read Vol 1 of Hormander. 2. Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley.
I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea.
@EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results...
Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry.Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town...
@EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit.
|
I'm trying to figure out some properties of Hermitian matrices over finite fields. Namely, let $K_0=\mathbb{F}_q$ be a field with $q$ elements, and let $K=\mathbb{F}_{q^2}$. The matrix algebra $\newcommand{\M}{\mathrm{M}} \M_n(K)$ is endowed with the conjugate-transpose map, given by $(x_{i,j})^\circ:=(x_{j,i}^\sigma)$, where $x\mapsto x^\sigma$ is the non-trivial automorphism of $K/K_0$.
We have a map from the algebra $\M_n(K)$ to the $K$-vector space of hermitian matrices (i.e. matrices $X$ such that $X^\circ=X$) over $K$, given by $Y\mapsto Y^\circ Y$.
My question is the following- under what circumstances is this map surjective? That is- when is it true that any hermitian matrix is of the form $Y^\circ Y$ for some $Y\in\M_n(K)$?
I know that in the analogous case of $K=\mathbb{C}$ and $K_0=\mathbb{R}$, for $X$ to be of the form $Y^\circ Y$ one must also require that $X$ is a positive-definite matrix. However, I have seen in an article that this fact should hold for finite fields. The reference in the article was to page 16 of Dieudonné's
La Géométrie des groupes classiques, but I could not find it there (possibly due to poor French-reading skills). If someone could help me find the proof in this text, that would also be great.
Thank you.
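For what it's worth, the smallest non-trivial case ($q=2$, $n=2$) can be checked by brute force; here is a quick Python sketch (the encoding of $\mathbb{F}_4$ below is my own):

from itertools import product

# GF(4) = {0, 1, w, w+1} encoded as 0, 1, 2, 3, with addition = XOR and w^2 = w + 1
EXP, LOG = [1, 2, 3], {1: 0, 2: 1, 3: 2}
add = lambda a, b: a ^ b
mul = lambda a, b: 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 3]
conj = lambda a: mul(a, a)                     # Frobenius x -> x^q with q = 2

def mat_mul(X, Y):                             # 2x2 matrices as 4-tuples (row-major)
    a, b, c, d = X; e, f, g, h = Y
    return (add(mul(a, e), mul(b, g)), add(mul(a, f), mul(b, h)),
            add(mul(c, e), mul(d, g)), add(mul(c, f), mul(d, h)))

def ct(Y):                                     # conjugate transpose Y°
    a, b, c, d = Y
    return (conj(a), conj(c), conj(b), conj(d))

norms = {mat_mul(ct(Y), Y) for Y in product(range(4), repeat=4)}
hermitian = {(a, b, conj(b), d) for a in (0, 1) for d in (0, 1) for b in range(4)}
print(hermitian <= norms)                      # True iff every Hermitian X is some Y°Y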
|
Before knowing about Differential Equation and its types, let us first understand what a differential equation is.
Differential Equation
An equation with one or more terms that involves derivatives of the dependent variable with respect to an independent variable is known as
differential equation.
In simple words, a differential equation consists of derivatives, which could either be ordinary derivatives or partial derivatives.
Example:
\( \frac {d^2y}{dx^2} + \left( \frac {dy}{dx} \right)^2 \)
\( \frac {d^2y}{dt^2} + \frac {dy}{dt} = 5 sin t \)
(A) Differential Equation and its types- Based on Type:- (a) Ordinary Differential Equation:-
It's a differential equation which depends on a single independent variable.
Example: \( \frac {dy}{dx} + 5x = 5y \)
In the above-mentioned equation, y is a function of x only, so it's an ordinary differential equation.
(b) Partial Differential Equation
It involves partial derivatives.
\( \frac {\partial y}{\partial x} + \frac {\partial y}{\partial t} = x^3 – t^3 \)
\( \frac {\partial^2 y}{\partial x^2} - c^2 \frac {\partial^2 y}{\partial t^2} = 0 \)
(B) Differential Equations and its types- Based on order:-
The order of the highest differential coefficient (derivative) involved in the differential equation is known as the order of the differential equation.
For Example:- \( \frac {d^3y}{dx^3} + 5 \frac {dy}{dx} + y = \sqrt{x} \)
Here, the order = 3 as the order of the highest derivative involved is 3.
For derivatives, the use of single quote (prime) notation is preferred, which is
\( y’ = \frac {dy}{dx} \)
\( y” = \frac {d^2y}{dx^2} \)
\( y”’ = \frac {d^3y}{dx^3} \)
For the higher order derivatives it would become cumbersome to use multiple quotes, so for these derivatives we prefer using the notation \( y^n \) for the n-th order derivative \( \frac {d^ny}{dx^n} \).
Consider the following examples:-
(i) \( y'' + 5y' - 6y = x^2 \)
(ii) \( x' = -x + 16 \)
(iii) \( x''' + 2x' = 0 \)
The equation (i) is a second order differential equation as the order of highest differential co-efficient is 2.
Similarly, example (ii) is a first order differential equation as the highest derivative is of order 1.
Example (iii) is a third order differential equation.
(C) Differential Equation and its types- Based on Linearity:-
By linearity, it means that the variable appearing in the equation is raised to the power of one. The graph of linear functions is generally a straight line. For example: (3x + 5) is linear but \( (x^3 + 4x^2) \) is non-linear.
(a) Linear Differential Equation:
If all the dependent variables and its entire derivatives occur linearly in a given equation, then it represents a linear differential equation.
(b)
Non-Linear Differential Equation:-
Any differential equation with non-linear terms is known as non-linear differential equation.
Consider the following examples for illustration:
eg: – \( \frac {dy}{dx} + xy = 5x \)
\( \frac {d^2y}{dx^2} – ln y = 10 \)
Example 1: \( \frac {dy}{dx} + xy = 5x \)
It is a linear differential equation as \( \frac {dy}{dx} \) and y both occur linearly (raised to the power one).
Example 2: \( \frac {d^2 y}{dx^2} – ln \space y =10 \)
The term ln y is not linear in y. Hence, this equation is non-linear.
(D) Differential Equation and its types- Based on Homogeneity
Consider the following functions
\( f_1(x,y) = y^3 + \frac 23 xy^2 \)
\( f_2(x,y) = \frac{x^3}{y^2 x} \)
\( f_3(x,y) = tan x + sec y \)
If we replace x and y by \( \alpha x \) and \( \alpha y \) respectively, where \( \alpha \) is any non-zero constant, we get
\( f_1(\alpha x,\alpha y) = (\alpha y )^3 + \frac 23 (\alpha x) (\alpha y)^2 = \alpha^3 \left( y^3 + \frac 23 xy^2 \right) = \alpha^3 f_1 (x,y) \)
\( f_2 (\alpha x,\alpha y) = \frac {(\alpha x)^3}{(\alpha y)^2 (\alpha x)} = \frac {x^3}{y^2 x} = \alpha^0 f_2 (x,y) \)
\( f_3 (\alpha x,\alpha y) = \tan (\alpha x) + \sec (\alpha y) \)
We observe that
\( f_1 \) and \( f_2 \) are homogeneous functions (of degree 3 and 0 respectively), while \( f_3 \) cannot be written as a power of \( \alpha \) times \( f_3(x,y) \) and is therefore not homogeneous.
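A quick symbolic check of these substitutions (a sympy sketch, not part of the original text):

import sympy as sp

x, y, a = sp.symbols('x y alpha', positive=True)
f1 = y**3 + sp.Rational(2, 3)*x*y**2
f2 = x**3 / (y**2 * x)
f3 = sp.tan(x) + sp.sec(y)

for f, deg in [(f1, 3), (f2, 0)]:
    # prints 0, confirming f(a*x, a*y) = a**deg * f(x, y), i.e. homogeneity
    print(sp.simplify(f.subs({x: a*x, y: a*y}, simultaneous=True) - a**deg*f))
# for f3 the difference does not simplify to 0, so f3 is not homogeneous
print(sp.simplify(f3.subs({x: a*x, y: a*y}, simultaneous=True) - f3))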
A linear differential equation is of the form \( f_n (x) y^n + \ldots + f_1 (x)y' + f_0(x) y = g(x) \).
|
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet and neither a reason why the expression cannot be prime for odd n, although there are far more even cases without a known factor than odd cases.
@TheSimpliFire That´s what I´m thinking about, I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it, it is really "too elementary", but I like surprises, if they´re good.
It is in fact difficult, I did not understand all the details either. But the ECM-method is analogue to the p-1-method which works well, then there is a factor p such that p-1 is smooth (has only small prime factors)
Brocard's problem is a problem in mathematics that asks to find integer values of n and m for which $n!+1=m^2$, where n! is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. Brown numbers: Pairs of the numbers (n, m) that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: (4,5), (5,11...
$\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ that satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474.
Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$. If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function.
The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation}
Thus $n!$ has $k$ zeroes for some $n\in(4k,\infty)$. Since $m=2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ have at most $k$ digits, $m^2-1$ has at most $2k$ digits by the conditions in the Corollary. The Corollary follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$. Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation}
Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation}
Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation}
Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$ as $\min\{x-\frac18\ln(8\pi x)\}>0$ in the domain.
Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$
We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better)
@TheSimpliFire Hey! with $4\pmod {10}$ and $0\pmod 4$ then this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-m_1)$ which means $m_1$ is even. We get $4\pmod {20}$ now :P
Yet again a conjecture!Motivated by Catalan's conjecture and a recent question of mine, I conjecture thatFor distinct, positive integers $a,b$, the only solution to this equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5).$It is of anticipation that there will be much fewer solutions for incr...
|
I was reading the following notes on maximum flow and it said the term "maximum-bottleneck (s-t)-path" but I couldn't find were it precisely defined it, so I am left guessing what it means. I am assuming it means a path from s to t with maximum total capacities once you sum the capacities across the path. However, I am not sure what the term bottleneck adds to the definition nor was I sure if that was correct. Is that the correct definition?
While I suspect that the notion of "bottleneck" has been introduced in an earlier lesson, it's all in the document you link.
Page 8:
Edmonds and Karp’s first rule is essentially a greedy algorithm:
Choose the augmenting path with largest bottleneck value.
Page 11:
For any flow network $G$ and any vertices $u$ and $v$, let $\operatorname{bottleneck}_G(u,v)$ denote the maximum, over all paths $\pi$ in $G$ from $u$ to $v$, of the minimum-capacity edge along $\pi$.
This sentence is a bit hard to parse. Unfolded:
Let $P_G(u,v)$ be the set of all $u$-$v$-paths in $G$, and $E(\pi)$ the set of edges that comprise a path $\pi$ in $G$. Then,
$\qquad\displaystyle \operatorname{bottleneck}_G(u,v) = \max_{\pi \in P_G(u,v)} \quad \min_{e \in E(\pi)} c(e)$.
Note that $\min_{e \in E(\pi)} c(e)$ is the bottleneck (capacity) of $\pi$.
So Edmonds-Karp picks the $s$-$t$-path with the largest bottleneck (in the residual network), which means it picks the path it can send the most flow over (while maintaining conservation of flows).
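For concreteness, here is a small sketch (not from the linked notes) of the widest-path computation itself: a Dijkstra-style search that keeps, for each vertex, the best bottleneck value found so far. The graph representation and the example capacities are made up for illustration.

import heapq

def max_bottleneck(graph, s, t):
    # graph: dict u -> list of (v, capacity); returns the largest bottleneck
    # value over all s-t paths (0 if t is unreachable)
    best = {s: float('inf')}
    heap = [(-best[s], s)]
    while heap:
        neg_b, u = heapq.heappop(heap)
        b = -neg_b
        if b < best.get(u, 0):
            continue            # stale heap entry
        if u == t:
            return b            # max-heap: first time t pops, its value is optimal
        for v, cap in graph.get(u, []):
            cand = min(b, cap)  # bottleneck of the extended path
            if cand > best.get(v, 0):
                best[v] = cand
                heapq.heappush(heap, (-cand, v))
    return 0.0

G = {'s': [('a', 3), ('b', 7)], 'a': [('t', 9)], 'b': [('t', 2)]}
print(max_bottleneck(G, 's', 't'))   # 3: path s-a-t has bottleneck min(3, 9)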
|
Question:Let us consider the Poisson problem on the square with constant source $1$$$\begin{cases} - \Delta u &= 1, \qquad \text{ in } (0,1)^n \\\\ u &= 0, \qquad \text{ on } \partial (0,1)^n \end{cases}$$By symmetry the maximum is attained in $x_0:=(1/2,\dots,1/2)$. Does there exist a formula or precise estimate for$$ \max u((0,1)^n) = u(x_0) \quad ? $$How does this value scale in $n$? Can one recover some bound $o(1)$, for $n\to \infty$? Background:It is easy by comparison with $1/8-(x-x_0)^2/(2n)$ and the maximum principle to deduce the bound$$ u(1/2, \dots, 1/2 ) \leq 1/8 ,$$which is also sharp for $n=1$, where $u(x) = 1/2 \; x (1-x)$ is the solution. By the same method one finds for the solution of the Poisson problem on the unit ball the bound $1/(2n)$, which is the second part of the question. Strategy so far: By the product-ansatz one can deduce an explicit solution for $u$, given by$$ u(x) = \frac{4^n}{\pi^{n+2}}\sum_{\alpha\in (2 \mathbb{N}+1)^n} \frac{1}{\sum_{i=1}^n \alpha_i^2} \prod_{j=1}^n \frac{\sin(\pi \alpha_j x_j)}{\alpha_j} .$$Hereby, I denote with $2\mathbb{N}+1$ the odd integers starting with $1$, hence $\alpha\in (2\mathbb{N}+1)^n$ is a multiindex with odd entries. In principle, setting $x=(1/2,\dots,1/2)$ should give the solution.
How does one calculate the resulting alternating series or find estimates recovering at least the scaling in $n$?
Numerical results: For small $n$ from series representation:
n=1: 0.125, n=2: 0.0737, n=3: 0.0562, n=4: 0.0473
Probabilistic interpretation of the solution: $u(x)$ is the mean hitting time of a Brownian particle starting in $x$, which may open estimates from the stochastic side. One more thought wrt. volume: It is not clear, if the comparison of unit cube and unit ball is the right comparison, since the volume of the unit ball goes to $0$ for $n\to \infty$. Maybe, one should consider the unit-volume-ball, for which I find the scaling of the maximum like $O(n^{-1/2})$. I can add details, if there is some indication in this direction. Bonus questions: Is there a more general method not directly based on explicit solutions, which would lead to estimates for general domains, for instance convex, star shaped or just bounded?
What is a reasonable quantity describing the scaling behaviour in $n$? For instance: Do all reasonable unit-volume domains scale like $O(n^{-1/2})$?
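For what it's worth, the series at the centre can be evaluated numerically by brute force: at $x_j = 1/2$ each factor $\sin(\pi\alpha_j/2)$ is just $(-1)^{(\alpha_j-1)/2}$ for odd $\alpha_j$. The sketch below truncates each index at $N$, so it is only approximate and only practical for small $n$; with the default cutoff it should reproduce the values quoted above reasonably well.

import itertools, math

def u_center(n, N=61):
    # truncate each odd index alpha_j at N; brute force, so keep n small
    odd = range(1, N + 1, 2)
    total = 0.0
    for alpha in itertools.product(odd, repeat=n):
        sign = 1
        for a in alpha:
            sign *= (-1) ** ((a - 1) // 2)     # sin(pi*a/2) for odd a
        total += sign / (sum(a*a for a in alpha) * math.prod(alpha))
    return 4**n / math.pi**(n + 2) * total

for n in (1, 2, 3):
    print(n, u_center(n))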
|
All my experience with textbook problems of quantum mechanics shows that the energy levels associated with the bound states of a confined quantum system are discrete and sharp. For example, the energy levels of the hydrogen atom. Why is it then said that excited states of an atom have an energy width? Where does the width come from? This fact doesn't match with the examples I know in quantum mechanics. If this question is asked before can some one give the links?
Leaving aside Doppler broadening and the other main practical reasons for line broadening, the fundamental reason is that the excited atom is coupled to all modes of the EM field equally, or at least there is an extremely wide frequency band of modes that are coupled.
So the atom "tries" to couple its excess energy into all of the modes. As it does so, destructive interference hinders the process for modes that have large frequency separation from the center frequency defined by the energy gap. If you model this broadband coupling mathematically and assume truly equal coupling to all modes at once, you get a Lorentzian lineshape whose breadth is proportional to the coupling strength. This Lorentzian linewidth tends to be narrow compared to the other, more "practical" reasons I mentioned above. I show how to do this calculation in my answer here. This is essentially Wigner-Weisskopf theory. The transition rate, which is proportional to the frequency linewidth, when reckoned by a Fermi Golden Rule calculation, is given by:
$$\Gamma_{rad}(\omega) = \frac{4\, \alpha\, \omega^3\,| \langle 1|\mathbf{r}|2\rangle |^2}{3 \,c^2}$$
with $\alpha$ being the fine structure constant, $\omega$ the center frequency and $\langle 1|\mathbf{r}|2\rangle$ being the overlap between the two electronic states on either side of the transition.
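As a rough numerical check of this formula (not part of the original answer), plugging in numbers for the hydrogen Lyman-$\alpha$ ($2p \to 1s$) transition, with the textbook dipole matrix element $|\langle 1s|z|2p\rangle| \approx 0.745\,a_0$, recovers the familiar $\sim 6\times10^8\ \mathrm{s^{-1}}$ decay rate, i.e. a natural linewidth of roughly $100\ \mathrm{MHz}$. Treat the specific constants below as assumptions.

# rough numerical check of the rate formula above (SI-style units)
alpha = 7.297e-3          # fine structure constant
hbar  = 1.0546e-34        # J s
a0    = 5.292e-11         # Bohr radius, m
c     = 2.998e8           # m/s

E     = 10.2 * 1.602e-19  # 2p -> 1s energy gap, J
omega = E / hbar          # center frequency, rad/s
r12   = 0.745 * a0        # assumed |<1s| r |2p>| matrix element

Gamma = 4 * alpha * omega**3 * r12**2 / (3 * c**2)
print(Gamma)              # ~6e8 s^-1, i.e. Gamma/(2*pi) ~ 100 MHz linewidth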
|
This question's context is time series forecasting using regression, with multivariate training data. With a regularization method like LARS w/ LASSO, elastic net, or ridge, we need to decide on the model complexity or regularization parameters. For example, the ridge $\lambda$ penalty or the number of steps to go along the LARS w/ LASSO algorithm before hitting the OLS solution.
My first instinct is to use cross-validation to infer a decent value of the regularization parameter. For LARS w/ LASSO, I would infer the (effective) degrees of freedom that optimizes some fitness function like $\frac{1}{n}\sum_{i{\le}n}|\hat{y}_i-y_i|$. However with time series data, we should cross-validate out-of-sample. (No peeking into the future!) Say there are two feature time series $x_1$ and $x_2$ and I am forecasting a time series $y$. For each step of time $t$, train with $x_{1,1}$ through $x_{1,t}$ and $x_{2,1}$ through $x_{2,t}$ — and then forecast $\hat{y}_{t+1}$ and compare with the actual $y_{t+1}$.
This framework makes sense from an out-of-sample perspective, but I worry that earlier cross-validation steps (low $t$) will be overemphasized when averaging over all the equally-weighted steps.
Should the first few time series cross-validation steps, the ones that use much less training data, be suppressed when inferring (regularization) model parameters? I might prefer a model complexity (regularization) level that "did better" on those later cross-validation steps using more training data.
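One simple way to act on that preference is to weight each expanding-window fold by the amount of training data behind it when scoring a candidate regularization level. The sketch below does this for closed-form ridge regression on synthetic data; the linear weighting scheme, the min_train cutoff, and the candidate lambda grid are all arbitrary choices for illustration, not a recommendation.

import numpy as np

def ridge_fit(X, y, lam):
    # closed-form ridge coefficients (no intercept, for brevity)
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def weighted_forward_cv(X, y, lambdas, min_train=20):
    # Expanding-window, one-step-ahead CV. Each fold's absolute error is
    # weighted by the size of the training window that produced it, so the
    # early, data-poor folds count less.
    n = len(y)
    scores = []
    for lam in lambdas:
        num = den = 0.0
        for t in range(min_train, n):
            beta = ridge_fit(X[:t], y[:t], lam)   # train on data before time t
            err = abs(X[t] @ beta - y[t])         # forecast y_t, compare to actual
            w = t                                 # weight grows with training size
            num += w * err
            den += w
        scores.append(num / den)
    return lambdas[int(np.argmin(scores))], scores

# toy usage on synthetic data (purely illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.0, -0.5]) + 0.1 * rng.normal(size=200)
print(weighted_forward_cv(X, y, lambdas=[0.01, 0.1, 1.0, 10.0])[0])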
|
Filtering time-series data can be a tricky business. Many times what seems obvious to our eyes is difficult to mathematically put into practice. A recent technique I've been reading about is called "total variation (TV) minimization" filtering or another name would be "$ \ell_1 $ trend filtering." Another member of this family is the Hodrick-Prescott (HP) trend filter. In all three of these types of filters, we're presented with a set of noisy data, $ y = x + z $ (where $ x $ is the real signal and $ z $ is some noise) and we're trying to fit a simpler model or set of lines to this data. For the TV filter, the type of lines we're fitting to the data are lines that don't have many large "jumps", or mathematically, $ \vert (x_{n-1} - x_{n}) \vert $, the difference between two of the fitted values, is small. In HP and $ \ell_1 $ trend filtering, the goal is to make sure that the 2nd derivative is small. Mathematically this looks like minimizing the term $$ (x_{n-2} - x_{n-1}) - (x_{n-1} - x_{n}) . $$
This makes sense, because the only time that the term above goes to zero is if the data is in a straight line - no "kinks" in the trendline. In HP filtering this term is squared and in $ \ell_1 $ filtering this term is just the absolute value (the $ l_1 $ norm). HP filtering fits a kind of spline to the data, while the $ \ell_1 $ filtering fits a piecewise linear function (straight lines that join at the knots).
The actual cost function looks a bit like this:$$ c(x) = \frac{1}{2}\sum_{i=1}^{n-1} (y_i - x_i)^2 + \lambda P(x) $$
where $ P(x) $ is one of the penalty terms we talked about above and $ \lambda $ controls the strength of the denoising. Again, remember that $ y $ is our observed signal and $ x $ is our projected or estimated signal - the trend. Setting $ \lambda $ to infinity will cause the fitted line to be a straight line through the data, and setting $ \lambda $ to zero will fit a line that looks identical to the original data. In HP filtering the whole function we're minimizing looks like this$$ c(x) = \frac{1}{2}\sum_{i=1}^{n-1} (y_i - x_i)^2 + \lambda\sum_{i = 1}^{n-1} ((x_{n-2} - x_{n-1}) - (x_{n-1} - x_{n}))^2 $$
and so $ \hat{x} = \arg \min_x c(x) $.
In order to find the minimum value of $ \hat{x} $ we can take the derivative of $ c(x) $ with respect to $ x $ and solve for zero. This isn't too difficult, and having both the terms in the HP filter squared makes this a bit easier - $ \ell_1 $ filtering is trickier.
Before we compute the derivative, let's rewrite the ending penalty term as a matrix multiplication. In order to compute $ ((x_{n-2} - x_{n-1}) - (x_{n-1} - x_{n}))^2 $ for each value of $ n $, we can multiply our array of $ x $ values, $ x = (x_1, x_2, ..., x_n)^T \in \mathbb R^{n \times 1} $, by a matrix that takes the difference. It will look like this$$ D = \begin{bmatrix} 1 & -2 & 1 & & & & \\ & 1 & -2 & 1 & & & \\ & & \ddots & \ddots & \ddots & & \\ & & & 1 & -2 & 1 & \\ & & & & 1 & -2 & 1 \\ \end{bmatrix} $$
So now, $ D \in \mathbb R^{N-2 \times N} $ and $ Dx $ will now be a $ \mathbb R^{N-2 \times 1} $ vector describing the difference penalty. The cost function can now be rewritten as$$ c(x) = \frac{1}{2}\Vert y - x \Vert^2 + \lambda\Vert Dx\Vert^2 $$
Notice that both $ \Vert y - x \Vert^2 $ and $ \Vert Dx \Vert^2 $ are single values, since both of the inner quantities are vectors ($ N \times 1 $ and $ N-2 \times 1 $) and so taking the $ \ell_2 $ norm sums the square of the elements, yielding one value.
If we take the derivative of $ c(x) $ w.r.t $ x $, we have$$ \frac{\partial c(x)}{\partial x} = -y + x + 2\lambda D^TDx = 0 $$
The first part is taking the derivative of $ (y - x)(y - x)^T $ (which is a different way of writing the $ \ell_2 $ norm), and the latter part comes out of the tedious computation of the derivative of the $ \ell_2 $ norm of $ Dx $.
Rearranging this equation yields$$ (x + 2\lambda D^TDx) = y $$
Solving for $ x $ gives$$ (I + 2\lambda D^TD)^{-1}y = \hat x $$
This is super handy because it gives us an analytical way of solving for $ \hat x $ by just multiplying our $ y $ vector by a precomputed transformation matrix - $ (I + 2\lambda D^TD)^{-1} $ is a fixed matrix of size $ N \times N $.
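A minimal numpy sketch of that direct solve, building $D$ explicitly and solving $(I + 2\lambda D^TD)\hat x = y$. Note that because of the $\tfrac12$ on the data term used here, this $\lambda$ plays the role of $2\lambda$ in the more common HP-filter convention, so it should match statsmodels when you pass lamb = 2*lam there.

import numpy as np

def hp_trend(y, lam):
    # build the (N-2) x N second-difference matrix D used above and solve
    # the normal equations (I + 2*lam*D'D) x = y directly
    y = np.asarray(y, dtype=float)
    N = len(y)
    D = np.zeros((N - 2, N))
    for i in range(N - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(N) + 2.0 * lam * (D.T @ D), y)

# e.g. hp_trend(prices, 50000) should track sm.tsa.filters.hpfilter(prices, lamb=100000)[1]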
An Example
Let's take the stock price of Apple over time. We can import this via the matplotlib (!) module:
from matplotlib import finance
%pylab inline
pylab.rcParams['figure.figsize'] = 12, 8  # that's default image size for this interactive session
import statsmodels.api as sm
Populating the interactive namespace from numpy and matplotlib
d1 = datetime.datetime(2011, 01, 01)
d2 = datetime.datetime(2012, 12, 01)
sp = finance.quotes_historical_yahoo('AAPL', d1, d2, asobject=None)
plot(sp[:,2])
title('Stock price of AAPL from Jan. 2011 to Dec. 2012')
<matplotlib.text.Text at 0x1106d438>
Now that we have the historic data we want to fit a trend line to it. The Statsmodels package has this filter already built in. It comes installed with an Anaconda installation. You can give the function a series of data and the $ \lambda $ parameter and it will return the fitted line.
xhat = sm.tsa.filters.hpfilter(sp[:,2], lamb=100000)[1]
plot(sp[:,2])
hold(True)
plot(xhat, linewidth=4.)
[<matplotlib.lines.Line2D at 0x1539ba90>]
The green line nicely flows through our data. What happens when we adjust the regularization? Since we don't really know what value to pick for $ \lambda $ we'll try a range of values and plot them.
lambdas = numpy.logspace(3, 6, 5)  # generate a logarithmically spaced set of lambdas from 1,000 to 1 million
xhat = []
for i in range(lambdas.size):
    xhat.append(sm.tsa.filters.hpfilter(sp[:,2], lambdas[i])[1])  # keep the 2nd return value (the trend) and append
plot(sp[:,2])
hold(True)
plot(transpose(xhat), linewidth=2.)
[<matplotlib.lines.Line2D at 0x1580f908>, <matplotlib.lines.Line2D at 0x1580fef0>, <matplotlib.lines.Line2D at 0x158110b8>, <matplotlib.lines.Line2D at 0x15811240>, <matplotlib.lines.Line2D at 0x158113c8>]
You can see we move through a continuum of highly fitted to loosely fitted data. Pretty nifty, huh? You'll also notice that the trend line doesn't have any sharp transitions. This is due to the squared penalty term. In $ \ell_1 $ trend filtering you'll end up with sharp transitions. Maybe this is good, maybe not. For filtering data where there's no guarantee that the data will be semi-continuous (maybe some sensor reading), perhaps the TV or $ \ell_1 $ filter is better. The difficulty is that by adding the $ \ell_1 $ penalty the function becomes non-differentiable and makes the solution a little more difficult - generally an iterative process.
|
We proved Thévenin’s and Norton’s theorems in the previous article. This article covers the practical steps to create a Thévenin or Norton equivalent for a linear circuit.
Thévenin’s theorem: Any network of resistors and sources, when viewed from a port, can be simplified down to one voltage source in series with one resistor.
Norton’s theorem: Any network of resistors and sources, when viewed from a port, can be simplified down to one current source in parallel with one resistor.
The proof and the practical design method are separate ideas. You often find them mixed together in many texts.
Written by Willy McAllister.
Contents Using Thévenin’s theorem When is Thévenin’s theorem useful? Strategy Example 1 Example 2 Where we’re headed
To create a Thévenin or Norton equivalent,
Pick two nodes to be the port of the circuit you want to transform.
Remove the external component(s) connected to the port.
Find any two of these three: $\bold R, v_{oc}, i_{sc}$ — whichever two are the easiest,
  $\bold R$ — Suppress the internal sources and simplify the resulting resistor network down to a single resistor.
  $v_{oc}$ — Restore internal sources, leave the port open, find the open-circuit voltage.
  $i_{sc}$ — Restore internal sources, short across the port, find the short-circuit current.
The third variable comes from the other two, $\bold R = v_{oc}/i_{sc}$, $v_{oc} = i_{sc}\,\bold R$, $i_{sc} = v_{oc}/\bold R$.
The Thévenin equivalent is $\text R_\text T$ in series with $\text V_\text T$, with $\text V_\text T = v_{oc}$ and $\text R_\text T = \bold R$.
The Norton equivalent is $\text R_\text N$ in parallel with $\text I_\text N$, with $\text I_\text N = i_{sc}$ and $\text R_\text N = \bold R$.
One of the coolest ideas from linear circuit theory is the concept of an
equivalent circuit. Say you have a circuit and you “look into” a port using a voltmeter and ammeter. You observe some sort of $i$-$v$ behavior. If you have another circuit that displays the same $i$-$v$ behavior then those two circuits are equivalent from the viewpoint of the ports.
In an earlier article on how to simplify a resistor network, we learned how to turn any resistor network into an equivalent single resistor. Thévenin’s and Norton’s theorems are the next step. They teach us how to simplify networks of resistors
and sources. If you have a complicated linear circuit the theorems provide the instructions for how to construct a very simple equivalent circuit. Using Thévenin’s theorem
Thévenin’s theorem looks like this in schematic form,
When is Thévenin’s theorem useful?
Think about using Thévenin’s theorem when you want to focus on a specific part of a circuit and push the details of the rest into the background. For example, suppose you care about what an amplifier does at its output port. Thévenin’s theorem creates a simple equivalent version of the complicated amplifier with the exact same $i$-$v$ behavior at the output.
Strategy
We use aspects of the proof to come up with practical steps to find an equivalent circuit.
A Thévenin equivalent has a voltage source $\text V_\text T$ in series with a resistor $\bold R$.
The Norton equivalent has a current source $\text I_\text N$ in parallel with the same resistor $\bold R$.
To create a Thévenin equivalent,
Pick two nodes to be the port of the circuit you want to transform.
Remove any external component(s) connected to the port.
Transform what remains to its Thévenin equivalent.
The resistor—you can call it $\bold R$ or $\text R_\text T$ or $\text R_\text N$—is the simplified equivalent resistance of the resistor network from the original circuit when all internal sources are suppressed. To suppress a voltage source, replace it with a short. To suppress a current source, replace it with an open.
The Thévenin voltage $\text V_\text T$ is $v_{oc}$, the voltage at the port when all internal sources are turned on and the port is left open.
Alternatively, you can find the Norton current $\text I_\text N$. It is $i_{sc}$, the current flowing from the port when all internal sources are turned on and a short is placed across the port.
If you know $i_{sc}$ and $\bold R$, compute $v_{oc} = \text V_\text T = i_{sc}\,\bold R$ for the Thévenin equivalent.
If you know $v_{oc}$ and $\bold R$, compute $i_{sc} = \text I_\text N = v_{oc}/\bold R$ for the Norton equivalent.
When you find $v_{oc}$ or $i_{sc}$ you have to do a circuit analysis. Choose either one based on how hard you think the analysis task will be. If you can’t tell, flip a coin.
Example 1
We are asked to find the output voltage for several different values of the load resistor (the initial value of the load resistor is $2\,\text k\Omega$ shown on the right),
We could solve the whole circuit for each value of the load. Or, we can find the Thévenin equivalent of everything to the left of the port (the two little circles) and solve a much simpler circuit for each value of the load.
Define the port
First, decide what part of the system you want to reduce to a Thévenin equivalent. Select two nodes you care about and define them as the
port. Then remove any components external to the circuit.
We are interested in what’s happening at the $2\,\text k\Omega$ resistor on the far right, so we identify the port by drawing two little circles. Our goal is to simplify everything to the left of the port by finding its Thévenin equivalent. The first step is to take away the $2\,\text k\Omega$ resistor,
Find the Thévenin components
The next task is to identify the Thévenin voltage and the Thévenin resistor.
Thévenin resistance
To find the Thévenin resistance, suppress internal sources and compute a single equivalent $\bold R$. We use this same resistance in both the Thévenin and Norton equivalent circuits, so we can call it $\text R_\text T$ or $\text R_\text N$ or just $\bold R$.
Here is the circuit with the two internal sources suppressed—the voltage source is replaced with a short, the current source is replaced with an open,
With all the sources gone, what’s left? A resistor network. So let’s reduce the network to a single equivalent resistance,
$\bold R = 500 + (1000 \parallel 1000) = 500 + \dfrac{1000 \cdot 1000}{1000 + 1000}$
$\bold R = 1000\,\Omega$
$\parallel$ is a shorthand notation for “in parallel with”
Thévenin voltage
To find $\text V_\text T$ we have to do a circuit analysis. If it looks like it’s going to be difficult, think about solving for $\text I_\text N$ instead. Pick the easiest route.
To find the Thévenin voltage restore the internal sources and leave the port
open. Then find the open circuit voltage, $v_{oc}$. $v_{oc}$ is the voltage the circuit presents to the world when nothing is connected to its port. The Thévenin voltage is equal to $v_{oc}$.
Have a go at finding $v_{oc}$ yourself. (It is not that easy.) Hint: When a circuit has multiple sources one of your choices is to solve by superposition.
The answer is tucked away here—try yourself before peeking,
Find $v_{oc}$
The circuit has multiple sources. When you see multiple sources, think
superposition!.
Break the circuit into two sub-circuits, one for each source. Then work out $v_{oc}$ for each one. The final $v_{oc}$ is the superposition of the two sub-circuits, $v_{oc} = v_{oc1} + v_{oc2}$.
Sub-circuit #1 has the voltage source, with the current source suppressed,
Use the voltage divider formula to find $v_{oc1}$,
$v_{oc1} = 5\,\text V \,\dfrac{1000}{1000+1000} = 2.5\,\text V$
Sub-circuit #2 has the current source restored and the voltage source suppressed,
The $2\,\text{mA}$ current flows through the three resistors,
$v_{oc2} = 2\,\text{mA} (500 + 1000 \parallel 1000) = 2\,\text{mA} (500 + 500)$
$v_{oc2} = 2\,\text V$
Now superimpose the two sub-circuits,
$v_{oc} = v_{oc1} + v_{oc2} = 2.5 + 2$
$v_{oc} = 4.5\,\text V$
If you are not already in love with superposition, try to solve this circuit with the Node Voltage Method. The difference in effort is striking.
Norton current
The third variable you might want to find is the Norton current, $\text I_\text N$. You have to do a circuit analysis for this, too, but it may turn out to be easier than finding $v_{oc}$. Let’s see if that’s the case.
To find the Norton current restore all the internal sources and place a short circuit across the port. Then find the current in the shorting wire, $i_{sc}$. The Norton current is equal to $i_{sc}$. It is the current the circuit would generate if you forced the output voltage to zero. (Please be very careful if you do this to a real circuit. It might not be designed to drive a short.)
The solution for $i_{sc}$ is tucked away here—see if you can do it yourself before peeking,
Find $i_{sc}$
This problem has multiple sources. Whenever I see multiple sources my first thought is
superposition! Superposition turns one nasty circuit into multiple simple circuits.
Solve for $i_{sc}$ twice, and add the results, $i_{sc} = i_{sc1} + i_{sc2}$
Sub-circuit #1 includes the voltage source, with the current source suppressed,
Find $i_{sc1}$,
$5\,\text V = i_1 ( 1000 + 1000 \parallel 500)$
$(1000 \parallel 500) = \dfrac{1000 \cdot 500}{1000 + 500} = 333\,\Omega$
$i_1 = \dfrac{5\,\text V}{1000 + 333} = 3.75\,\text{mA}$
Find $\goldC{v_x}$ with the voltage divider formula,
$v_x = 5\, \dfrac{333}{1000 + 333} = 5 \cdot 0.250 = 1.25\,\text V$
$i_{sc1}$ is the current flowing in the $500\,\Omega$ resistor,
$i_{sc1} = \dfrac{v_x}{500} = \dfrac{1.25}{500}$
$i_{sc1} = 2.5\,\text{mA}$
Sub-circuit #2 has the current source restored and the voltage source suppressed. This one is particularly easy to solve.
The wire across the port directly shorts across the current source. That means the voltage across the current source
and the resistor network is zero. Therefore, the entire $2\,\text{mA}$ current flows in the shorting wire,
$i_{sc2} = 2\,\text{mA}$
Superimpose the two components of $i_{sc}$,
$i_{sc} = i_{sc1} + i_{sc2} = 2.5\,\text{mA} + 2\,\text{mA}$
$i_{sc} = 4.5\,\text{mA}$
For this example the effort to find $i_{sc}$ compared to $v_{oc}$ was similar, with $v_{oc}$ probably a little easier overall. No harm done, we got to see both. Hooray for superposition!
Find $\text V_\text T$ from $i_{sc}$ and $\bold R$
We can use $i_{sc}$ and $\bold R$ to find the Thévenin voltage.
$\text V_\text T = i_{sc}\,\bold R$
$\text V_\text T = 4.5\,\text{mA}\,1000\,\Omega$
$\text V_\text T = 4.5\,\text V$
This is the same answer we got when we analyzed the circuit for $v_{oc}$.
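As a numeric cross-check (a sketch only; my reading of the schematic, with 5 V through the first 1 kΩ to an internal node A, the second 1 kΩ from A to ground, 500 Ω from A to the port node B, and the 2 mA source injecting into B, is inferred from the superposition steps above and is an assumption), a two-node nodal analysis reproduces these numbers:

import numpy as np

R1, R2, R3 = 1000.0, 1000.0, 500.0   # ohms
Vs, Is = 5.0, 2e-3                    # volts, amps

# KCL at the internal node A and the port node B, with the port left open
A = np.array([[1/R1 + 1/R2 + 1/R3, -1/R3],
              [-1/R3,               1/R3]])
b = np.array([Vs/R1, Is])
vA, vB = np.linalg.solve(A, b)
print(vB)                             # open-circuit voltage, ~4.5 V

R_th = R3 + (R1*R2)/(R1 + R2)         # sources suppressed: 500 + 1000||1000
print(R_th, vB/R_th)                  # 1000 ohms and i_sc ~ 4.5 mA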
Assemble the equivalent circuits
Put it together. The Thévenin equivalent is $\bold R$ in series with the open-circuit voltage source,
And pretty much for free we get the Norton equivalent. The Norton equivalent is the same $\bold R$ in parallel with the short-circuit current source,
Example 1 simulation models
Open this simulation model of Example 1 in a new tab. There are five circuits shown. The top one is the original Example 1 circuit. The two below that are the circuits we used to find $v_{oc}$ and $i_{sc}$. Use these to check your own circuit analysis work. The two on the right are the Thévenin and Norton equivalents.
Click on DC to run a DC analysis. Look at the current and voltage for the $2\,\text k\Omega$ load resistor of the first circuit. Compare that to the current and voltage of the Thévenin and Norton equivalents. Are they the same?
Explore: Change the load resistor to some other value in all three by double-clicking on the resistor symbol. Run DC analysis again. The current and voltage will be different, but all three should still match because all three circuits have the same $i$-$v$ equation. Pretty cool, eh? If you want to simulate the superposition solutions, suppress the sources one at a time in the two versions in the lower left and run DC again.
Example 2
Here’s a practical application. This circuit shows a common way to set up a bipolar junction transistor (BJT) as an amplifier. The BJT is the symbol in the center right, with reference designator $\text Q1$. The power supply for the amplifier is provided by the $15\text V$ source.
The $100\,\text k\Omega$ and $50\,\text k\Omega$ resistors set the voltage of $\text Q1$’s base terminal to an intermediate value between the power supply and ground. Together these resistors are called the
biasing network. We are going to convert the biasing network into its Thévenin equivalent.
Identify the port and isolate the biasing network by removing the external components. (Figure: isolated biasing network)
Find the two Thévenin components, a voltage source and resistance. (Figure: Thévenin voltage and resistance)
The Thévenin voltage is the voltage on the port when we leave it open, $v_{oc}$. The circuit is a voltage divider so we’ll use that formula to find $v_{oc}$,
$\text V_\text T = v_{oc} = 15\,\text V\,\dfrac{50\text k}{100\text k + 50\text k} = 15\,\text V \cdot \dfrac{1}{3}$
$\text V_\text T = 5\,\text V$
To get the Thévenin resistance we suppress the voltage source by replacing it with a short circuit. The Thévenin resistance is what’s left,
$\text R_\text T = 50\text k \parallel 100\,\text k = \dfrac{50\text k \cdot 100\text k}{50\text k + 100\text k} = \dfrac{5000\text M}{150\text k}$
$\text R_\text T = 33.3 \,\text k\Omega$
Assemble the two components to get the Thévenin equivalent,
Thévenin equivalent
Embed the Thévenin equivalent back into the amplifier circuit. Thévenin equivalent embedded into amplifier
Summary
Thévenin’s theorem: A circuit made of any combination of resistors and sources can be simplified down to a single voltage source in series with a single resistor.
Norton’s theorem: A circuit made of any combination of resistors and sources can be simplified down to a single current source in parallel with a single resistor.
Norton and Thévenin forms are interchangeable because of what we learned in the article on source transformation.
To create a Thévenin or Norton equivalent,
Identify the port and remove the external component(s).
Find any two of these three: $\bold R, v_{oc}, i_{sc}$, whichever two are the easiest,
  $\bold R$ — Suppress the internal sources and simplify the resulting resistor network.
  $v_{oc}$ — Restore internal sources, leave the port open, find the open-circuit voltage.
  $i_{sc}$ — Restore internal sources, short across the port, find the short-circuit current.
The third variable can be derived from the other two, $\bold R = v_{oc}/i_{sc}$, $v_{oc} = i_{sc}\,\bold R$, $i_{sc} = v_{oc}/\bold R$.
The Thévenin equivalent is $\bold R$ in series with $v_{oc}$. The Norton equivalent is $\bold R$ in parallel with $i_{sc}$.
|
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with a large ion collider experiment at the large hadron collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
|
Choose n points randomly from a circle, how to calculate the probability that all the points are in one semicircle? Any hint is appreciated.
A variation on @joriki's answer (and edited with help from @joriki):
Suppose that point $i$ has angle $0$ (angle is arbitrary in this problem) -- essentially this is the event that point $i$ is the "first" or "leading" point in the semicircle. Then we want the event that all of the points are in the same semicircle -- i.e., that the remaining points end up all in the upper halfplane.
That's a coin-flip for each remaining point, so you end up with $1/2^{n-1}$. There's $n$ points, and the event that any point $i$ is the "leading" point is disjoint from the event that any other point $j$ is, so the final probability is $n/2^{n-1}$ (i.e. we can just add them up).
A sanity check for this answer is to notice that if you have either one or two points, then the probability must be 1, which is true in both cases.
See
for the general problem (when the points have any distribution that is invariant w.r.t. rotation about the origin) and
for a nice application.
As a curiosity, this answer can be expressed as a product of sines:
Here's another way to do this:
Divide the circle into $2k$ equal sectors. There are $2k$ contiguous stretches of $k$ sectors each that form a semicircle, and $2k$ slightly shorter contiguous stretches of $k-1$ sectors that almost form a semicircle. The number of the semicircles containing all the points minus the number of slightly shorter stretches containing all the points is $1$ if the points are contained in at least one of the semicircles and $0$ otherwise; that is, it's the indicator variable for the points all being contained in at least one of the semicircles. The probability of an event is the expected value of its indicator variable, which in this case is
$$2k\left(\frac k{2k}\right)^n-2k\left(\frac{k-1}{2k}\right)^n=\frac k{2^{n-1}}\left(1-\left(1-\frac1k\right)^n\right)\;.$$
The limit $k\to\infty$ yields the desired probability:
$$ \lim_{k\to\infty}\frac k{2^{n-1}}\left(1-\left(1-\frac1k\right)^n\right)=\lim_{k\to\infty}\frac k{2^{n-1}}\cdot\frac nk=\frac n{2^{n-1}}\;. $$
Bull,
1948, Mathematical Gazette, Vol 32 No 299 (Dec), pp87-88 solves this problem in the context of the broken stick problem (he uses polytopes and relative volumes in his argument). Rushton, 1949, Mathematical Gazette, Vol 33 No 306 (May), pp286-288 points out that the problem can be re-stated in terms of placing points at random on the circumference of a circle. Rushton's answer is the clearest I have seen. Place $n$ points randomly on the circumference. Label them $X_1, X_2, ..., X_n$. Open up the circle at $X_n$ and produce a straight line. Label the line $OX_n$ (where $O$ is the part of the circle previously immediately adjacent to $X_n$). There are $n$ line segments: $OX_1, X_1X_2, ..., X_{n-1}X_n$. Each segment is equally likely to be longer than half the length of $OX_n$ (and thus correspond to greater than a semicircle of the original circle). The probability that the first segment fulfils this condition is the probability that the remaining $n-1$ points lie upon the second half of the line $OX_n$. That is $(\frac{1}{2})^{(n-1)}$. The probability that there is one segment (note there can be at most one) greater than half the length of the circumference is the sum of the probabilities that each particular segment could be so (because these are mutually exclusive): $n(\frac{1}{2})^{(n-1)}$. This is exactly the probability that all the points lie in one semicircle; in the broken-stick problem the favourable event is the complement, with probability $1 - n(\frac{1}{2})^{(n-1)}$.
Another simpler approach,
1) Randomly pick $1$ out of $n$ points and call it $A$ : $\binom n1$ ways
2) Starting from $A$, mark another point $B$ on the circumference, such that $length(AB) = \frac12(\text{Circumference})$
[so that $AB$ and $BA$ are two semi-circles]
3) Now out of remaining $(n-1)$ points, each point can lie on either $AB$ or $BA$ with probability $\frac12$
4) For ALL the remaining $(n-1)$ points to lie on EITHER $AB$ OR $BA$ (i.e., all $(n-1)$ lie on the same semi-circle), the joint probability is $\frac12 \cdot \frac12 \cdots$ ($(n-1)$ times) $= (\frac12)^{(n-1)}$
Since the events in which a particular point plays the role of $A$ in #1 are mutually exclusive, the probability from #4 gets added $\binom n1$ times
$\implies$ Required probability is $\binom n1(\frac12)^{(n-1)}$ $=$ $n(\frac12)^{(n-1)}$
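A Monte Carlo check of the answer $n/2^{n-1}$ (not part of the answers above): all $n$ points lie in a common semicircle exactly when the largest gap between consecutive angles, including the wrap-around gap, is at least $\pi$.

import random, math

def all_in_semicircle(angles):
    # all points fit in some semicircle iff the largest gap between
    # consecutive angles (including the wrap-around gap) is at least pi
    a = sorted(angles)
    gaps = [v - u for u, v in zip(a, a[1:])] + [2*math.pi - a[-1] + a[0]]
    return max(gaps) >= math.pi

trials = 200_000
for n in range(2, 7):
    hits = sum(all_in_semicircle([random.uniform(0, 2*math.pi) for _ in range(n)])
               for _ in range(trials))
    print(n, round(hits/trials, 4), n / 2**(n-1))   # empirical vs exact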
|
In Mathematics several well-known sequences exist, like Fibonacci, the square numbers sequence, etc. One such sequence is Jacobsthal, which is defined by
$$\text{Jacobsthal}(n) = \begin{cases} 0 & n = 0 \\ 1 & n = 1 \\ 2\,\text{Jacobsthal}(n-2) + \text{Jacobsthal}(n-1) & n \geq 2 \end{cases}$$
or, equivalently, by the closed form
$$\text{Jacobsthal}(m) = \frac{2^m - (-1)^m}{3}.$$
Using either of the formulas above to generate the first 50 numbers in the sequence, the output is:
0, 1, 1, 3, 5, 11, 21, 43, 85, 171, 341, 683, 1365, 2731, 5461, 10923, 21845, 43691, 87381, 174763, 349525, 699051, 1398101, 2796203, 5592405, 11184811, 22369621, 44739243, 89478485, 178956971, 357913941, 715827883, 1431655765, 2863311531, 5726623061, 11453246123, 22906492245, 45812984491, 91625968981, 183251937963, 366503875925, 733007751851, 1466015503701, 2932031007403, 5864062014805, 11728124029611, 23456248059221, 46912496118443, 93824992236885, 187649984473771
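A minimal sketch that generates the sequence from the recurrence and checks it against the closed form (the language and function name are my own choices, not part of the challenge):

def jacobsthal(count):
    # first `count` Jacobsthal numbers from the recurrence J(n) = J(n-1) + 2*J(n-2)
    seq, a, b = [], 0, 1
    for _ in range(count):
        seq.append(a)
        a, b = b, b + 2*a
    return seq

print(jacobsthal(50))
# cross-check against the closed form (2**m - (-1)**m) / 3
assert all(j == (2**m - (-1)**m) // 3 for m, j in enumerate(jacobsthal(50)))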
References: Wikipedia contributors, “Jacobsthal number,” Wikipedia, The Free Encyclopedia,https://en.wikipedia.org/w/index.php?title=Jacobsthal_number&oldid=849904767 (accessed March 19, 2019). Weisstein, Eric W. “Jacobsthal Number.” From MathWorld–A Wolfram Web Resource. http://mathworld.wolfram.com/JacobsthalNumber.html (accessed March 19, 2019). Frey, Darrin D., and James A. Sellers. “Jacobsthal Numbers and Alternating Sign Matrices.” Journal of Integer Sequences3 (2000). https://cs.uwaterloo.ca/journals/JIS/VOL3/SELLERS/sellers.pdf. (accessed March 19, 2019) Rabago, Julius Fergy. “A Note on Modified Jacobsthal and Jacobstal-Lucas Numbers.” Notes on Number Theory and Discrete Mathematics19, no. 3 (2013): 15–20. http://nntdm.net/papers/nntdm-19/NNTDM-19-3-15-20.pdf. (accessed March 19, 2019) Szynal-Liana, Anetta and Włoch, Iwona. “A Note on Jacobsthal Quaternions.” Advances in Applied Clifford Algebra26 (2016): 441–447. https://core.ac.uk/download/pdf/81813252.pdf. (accessed March 26, 2019)
|
Stone Weierstrass Theorem
Let $X$ be a topological space and let $C(X,\mathbb{R})$ denote the set of all continous functions $f:X\to\mathbb{R}$.
A family $\mathscr{F}$ of functions is an
algebra if for every $f,g\in\mathscr{F}$ and any $c\in \mathbb{R}$, we have $f+g,fg, cf\in\mathscr{F}$. ex: $C([a,b],\mathbb{R})$ is an algebra for any closed interval $[a,b]\subset\mathbb{R}$.
An algebra $\mathscr{A}$ of functions $f:X\to \mathbb{R}$
separates points if for every pair of distinct points $x,y\in X$ there is a function $f\in \mathscr{A}$ such that $f(x)\neq f(y)$. ex: Let $\mathscr{P}\subset C([a,b],\mathbb{R})$ denote the set of polynomials on $[a,b]$. Then $\mathscr{P}$ is a subalgebra which separates points since it contains the polynomial $f(x)=x$.
An algebra $\mathscr{A}$ of functions $f:X\to \mathbb{R}$
vanishes nowhere if for every $x\in X$ there is a function $f\in\mathscr{A}$ such that $f(x)\neq 0$. ex: The set of polynomials $\mathscr{P}\subset C([a,b], \mathbb{R})$ also vanishes nowhere, since it contains the nonzero constant polynomial $f(x)=1$.
These definitions come together in today's theorem:
In English, this just says that any continuous, real-valued function on a compact set can be approximated (with respect to the supremum norm) by some function in your algebra $\mathscr{A}$. In other words, choose any continuous function $f:X\to \mathbb{R}$ where $X$ is compact. Then for every $\epsilon >0$, we can find a function $g\in \mathscr{A}$ such that $$\|f-g\|=\sup_{x\in X}\{|f(x)-g(x)|\}<\epsilon.$$Here's an example: take $X=[a,b]$ to be a closed interval in $\mathbb{R}$ and let $\mathscr{A}$ be the set of all polynomials on $[a,b]$. Then for any continuous $f:[a,b]\to\mathbb{R}$ and any $\epsilon>0$, we can find a polynomial $p:[a,b]\to\mathbb{R}$ such that $\|f-p\|< \epsilon$. This is the familar Weierstrass Approximation Theorem!
Any continous function on a closed interval can be approximated - as close as you want - by a polynomial. The Stone Weierstrass Theorem says that this result is still true if we replace $[a,b]$ by any compact set $X$, and if we replace the set of polynomials by any subalgebra of $C(X,\mathbb{R})$ which separates points and vanishes nowhere.
Now you might ask, "Why do we need $X$ to be compact?" and "Why must $\mathscr{A}$ separate points and vanish nowhere?" Today we'll see why these hypotheses are necessary. Next time we'll work through an exercise from Rudin's
Principles of Mathematical Analysis (a.k.a. "Baby Rudin") to see the theorem in action. Why do we need compactness?
The best way to answer this question is to look at a counterexample. So let's consider $X=\mathbb{R}$ and let $\mathscr{A}$ be the subalgebra of all polynomials on $\mathbb{R}$. Then $\mathscr{A}$ separates points and vanishes nowhere, but $X$ is
not compact. In this case, the theorem fails since the function $f(x)=e^x$ cannot be approximated by any polynomial! This is because any polynomial $p$ is dominated by its largest term, say $x^n$, and $e^x$ tends to $\infty$ much faster than does $x^n$ (even if $n$ is very large). As a result, the distance between $e^x$ and $x^n$ cannot be made arbitrarily small as $x$ ranges over all of $\mathbb{R}$.
But if we restrict ourselves to a closed and bounded interval, the smaller terms in $p$ have more weight, and this allows us to approximate $e^x$ by $p(x)$ with as much accuracy as we want. And this is exactly what we've all done in undergraduate calculus! You remember those problems. The $n$th degree Taylor polynomial of $e^x$ centered at 0 is $$e^x\approx \sum_{k=0}^n\frac{x^k}{k!},$$ and a typical homework question might've been something like, "For what values of $x$ is this approximation accurate to within 0.00001?". The answer would be $|x|<\delta$ for some constant $\delta$ which you could find using Taylor's Inequality. This $[-\delta,\delta]$ is precisely the compact set we need to restrict to in order to obtain a good approximation.
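A quick numerical illustration of this point (the degree, the intervals, and the use of Taylor polynomials here are my own choices): the sup-norm error of the degree-10 Taylor polynomial of $e^x$ is tiny on the compact interval $[-1,1]$ but enormous on $[0,20]$, and no single polynomial can work on all of $\mathbb{R}$.

import numpy as np
from math import factorial

def taylor_exp(x, n):
    # degree-n Taylor polynomial of e^x about 0
    return sum(x**k / factorial(k) for k in range(n + 1))

for lo, hi in [(-1.0, 1.0), (0.0, 20.0)]:
    xs = np.linspace(lo, hi, 1000)
    err = np.max(np.abs(np.exp(xs) - taylor_exp(xs, 10)))
    print((lo, hi), err)
# the sup-error is on the order of 1e-7 on [-1, 1] but huge on [0, 20]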
Why must the algebra separate points?
Again we'll consider a counterexample. This time let $X=[a,b]\subset\mathbb{R}$ and take $\mathscr{A}$ to be the collection of all polynomials $p:[a,b]\to\mathbb{R}$ such that $p(a)=p(b)$. It's easy to check that this forms an algebra, and it clearly does not separate points. To see where the Stone Weierstrass Theorem fails, simply choose any continuous function $f:[a,b]\to\mathbb{R}$ such that $f(a),f(b)\neq p(a),p(b)$. Then we
cannot approximate $f$ by any polynomial $p\in\mathscr{A}$ because we can always find an $\epsilon$ such that $\|f-p\|\geq \epsilon.$ In fact, $\epsilon=|f(b)-f(a)|/2$ does the job.
This isn't too hard to show. Let $M=\max\{|f(a)-p(a)|,|f(b)-p(b)|\}$ and observe from the picture above that $\|f-p\|\geq M$. We want to show* $$\|f-p\|\geq M\geq \frac{|f(b)-f(a)|}{2}.$$ To see this, let $m$ denote the common value $p(a)=p(b)$, assume WLOG $f(a)\leq f(b)$, and suppose $f(a)< m < f(b)$. If, for instance, $m=(f(a)+f(b))/2$, then $M=|f(b)-f(a)|/2$ and the claim is true.
Otherwise, if, say, $m$ lies in-between $(f(a)+f(b))/2$ and $f(b)$ (see insert on the left), then $M=|f(a)-m|$ which is greater than $|f(a)-f(b)|/2$ as claimed. And if $m< f(a)$ or $m>f(b)$, then $M$ is even larger and again the claim holds.
Alternatively we could also choose $\mathscr{A}$ to be the set of constant functions on $[a,b]$ (this definitely does not separate points). Then, for example, the function $f(x)=e^x$ can't be approximated by any constant $c$ since $\|e^x-c\|$ is bounded below by $\frac{|e^b-e^a|}{2}$ (using the same argument as above).
Why must the algebra vanish nowhere?
Suppose $X=[0,1]$ and let's take $\mathscr{A}$ to be the set of all continuous functions $p:[0,1]\to\mathbb{R}$ such that $p(0)=0$ (one easily checks that this is an algebra). Then any continuous function $f$ which is not zero at zero can't be approximated by any $p\in\mathscr{A}$! The supremum of $|f(x)-p(x)|$ for $x$ in $[0,1]$ is bounded below by $|f(0)-p(0)|=|f(0)|.$ For instance take $f(x)=x+3$. Then $\|f-p\|$ is
at least 3.
So there you go! Each of the conditions in the Stone Weierstrass Theorem is indeed necessary. Next week we'll use the theorem to solve this exercise from
Baby Rudin: (Rudin, PMA#7.20) If $f$ is continuous on $[0,1]$ and if $\int_0^1f(x)x^n\;dx=0$ for all $n=0,1,2,\ldots,$ prove that $f(x)=0$ on $[0,1]$.
Footnote:
* We don't want to let $M=\max\{|f(a)-p(a)|,|f(b)-p(b)|\}$ be our $\epsilon$ since we need $\epsilon$ to be independent of the polynomial $p$. (The negation of the Stone-Weierstrass Theorem says that if $X$ is not compact or if $\mathscr{A}$ is an algebra which does not separate points or does not vanish nowhere, then there exists a function $f\in C(X,\mathbb{R})$ and there exists $\epsilon>0$ such that $\|f-p\|\geq \epsilon$ for all $p\in\mathscr{A}$. The wording implies that $\epsilon$ depends on $f$ only.)
|
I'm studying $\lambda$-calculus, and had a question regarding an exercise I came across. I understand that $\lambda$-calculus uses three main strategies of evaluation, but I'm having trouble applying it. Specifically $\beta$-reduction.
For example, for $1 + 2$:
\begin{align} 1 + 2 & = \lambda n.\lambda m.\lambda s. \lambda z. m\ s\ (n\ s\ z)\ 1\ 2 \\ & = \lambda s. \lambda z.\ 2\ s\ (1\ s\ z) \\ & = \lambda s. \lambda z.\ 2\ s\ ((\lambda s. \lambda z.s\ z)\ s\ z) \\ & = \lambda s. \lambda z.\ 2\ s \ (s\ z) \\ & = \lambda s. \lambda z.(\lambda s. \lambda z. (s\ (s\ z)))\ s\ (s\ z) \\ & = \lambda s. \lambda z.s\ (s\ (s\ z)) \\ & = 3 \end{align}
The particular part that I'm having trouble understanding is the $\beta$-reduction at the last step before deriving the final $\lambda$ expression. More specifically, how $\lambda s.\lambda z. (s\ (s\ z))\ s\ (s\ z)$ reduces to $s\ (s\ (s\ z))$.
My understanding is that in order to perform $\beta$-reduction, we need to identify redexes of the form $(\lambda x.e_1)\ e_2$. Using this understanding, my initial approach would be:
\begin{align} \lambda s. \lambda z. (s\ (s\ z))\ s\ (s\ z) & = [\lambda s. \lambda z. (s\ (s\ z))\ s]_{redex}\ (s\ z) \\ & = \lambda z. (s\ (s\ z))\ (s\ z) \\ & = [\lambda z. (s\ (s\ z))\ s]_{redex}\ z \\ & = s\ (s\ (s\ z)) \end{align} Is my approach correct? If so, is it valid for me to drop the parentheses arbitrarily as I did in the third line?
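If it helps, the same computation can be mimicked directly in any language with first-class functions, since Church numerals and the addition term above are just nested lambdas. A small Python sketch (the to_int decoder is of course not part of the $\lambda$-calculus):

# Church numerals; "plus" is the term \n.\m.\s.\z. m s (n s z) from the question
zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n(s)(z))
plus = lambda n: lambda m: lambda s: lambda z: m(s)(n(s)(z))

to_int = lambda n: n(lambda k: k + 1)(0)   # decode a Church numeral to a Python int

one, two = succ(zero), succ(succ(zero))
print(to_int(plus(one)(two)))              # 3, matching the reduction above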
|
These are a natural and easiest (most tractable mathematically) choice.
A utility function is defined up to a positive affine transformation: economically there is no difference between the utility functions $U(x)$ and $\tilde{U}(x)=AU(x)+B$ (with $A>0$). Hence, a measure of risk aversion that remains constant w.r.t. affine transformations would be useful. How does one construct such a measure? Well, the easiest way is to consider the expression$$A(x)= -\frac{U''(x)}{U'(x)}$$a.k.a. ARA (Arrow–Pratt measure of
absolute risk aversion). ARA stays the same under affine transformations and measures the degree of risk aversion - the curvature of the utility function. The reciprocal of ARA measures the level of risk tolerance, and a simple special case is when it is a linear function of wealth:$$T(x)=\frac{1}{A(x)}=\frac{x}{1-\gamma}+\frac{b}{a}.$$Now, what are the utility functions such that the corresponding level of risk tolerance is linear? These are solutions to the ODE$$-\frac{U'(x)}{U''(x)}=\frac{x}{1-\gamma}+\frac{b}{a}$$which is known to be solvable in closed form. The unique solution (up to affine transformation!) to the equation has the form $$\qquad U(x)=\frac{1-\gamma}{\gamma}\left(\frac{ax}{1-\gamma}+b \right)^\gamma.\qquad(1)$$There are other solutions which differ from (1) by additive and/or multiplicative constants but these do not affect the behavior implied by the utility function. (1) is known as the hyperbolic absolute risk aversion.
The other utility functions that you've mentioned are just specifications of (1). In particular, assuming $b=0$ one gets the isoelastic utility:$$\quad\qquad U(x) = \begin{cases}\frac{x^\gamma-1}{\gamma},\quad \gamma\neq 0 \\ \ln(x), \quad \gamma =0 \end{cases}\qquad\quad (2)$$(2) is also the only example of utility functions with the constant
relative risk aversion$$R(x)=xA(x)=1-\gamma.$$
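A quick symbolic check of this (a sketch only): define $U$ as in (1), compute the risk tolerance $-U'/U''$, and compare it with $\frac{x}{1-\gamma}+\frac{b}{a}$; sympy should reduce the difference to zero.

import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)
g = sp.symbols('gamma', positive=True)          # the exponent gamma in (1)

U = (1 - g)/g * (a*x/(1 - g) + b)**g            # HARA utility, equation (1)
T = -sp.diff(U, x) / sp.diff(U, x, 2)           # risk tolerance -U'/U''

print(sp.simplify(T - (x/(1 - g) + b/a)))       # should print 0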
|
Answer
The amplitude is $3$, the period is $4$, there is no vertical translation and the phase shift is $\frac{1}{2}$ units to the right since $d$ is more than zero.
Work Step by Step
We first write the equation in the form $y=c+a \cos [b(x-d)]$. Therefore, $y=3\cos [\frac{\pi}{2}(x-\frac{1}{2})]$ becomes $y=0+3\cos [\frac{\pi}{2}(x-\frac{1}{2})]$ Comparing the two equations, $a=3,b=\frac{\pi}{2},c=0$ and $d=\frac{1}{2}$. The amplitude is $|a|=|3|=3.$ The period is $\frac{2\pi}{b}=\frac{2\pi}{0.5\pi}=4$. The vertical translation is $c=0$. The phase shift is $|d|=|\frac{1}{2}|=\frac{1}{2}$ Therefore, the amplitude is $3$, the period is $4$, there is no vertical translation and the phase shift is $\frac{1}{2}$ units to the right since $d$ is more than zero.
|
Seeing that in the Chomsky Hierarchy Type 3 languages can be recognised by a DFA (which has no stacks), Type 2 by a DFA with one stack (i.e. a push-down automaton) and Type 0 by a DFA with two stacks (i.e. with one queue, i.e. with a tape, i.e. by a Turing Machine), how do Type 1 languages fit in...
Considering this pseudo-code of a bubblesort:
FOR i := 0 TO arraylength(list) STEP 1
  switched := false
  FOR j := 0 TO arraylength(list)-(i+1) STEP 1
    IF list[j] > list[j + 1] THEN
      switch(list,j,j+1)
      switched := true
    ENDIF
  NEXT
IF switch...
Let's consider a memory segment (whose size can grow or shrink, like a file, when needed) on which you can perform two basic memory allocation operations involving fixed size blocks:allocation of one blockfreeing a previously allocated block which is not used anymore.Also, as a requiremen...
Rice's theorem tell us that the only semantic properties of Turing Machines (i.e. the properties of the function computed by the machine) that we can decide are the two trivial properties (i.e. always true and always false).But there are other properties of Turing Machines that are not decidabl...
People often say that LR(k) parsers are more powerful than LL(k) parsers. These statements are vague most of the time; in particular, should we compare the classes for a fixed $k$ or the union over all $k$? So how is the situation really? In particular, I am interested in how LL(*) fits in.As f...
Since the current FAQs say this site is for students as well as professionals, what will the policy on homework be?What are the guidelines that a homework question should follow if it is to be asked? I know on math.se they loosely require that the student make an attempt to solve the question a...
I really like the new beta theme, I guess it is much more attractive to newcomers than the sketchy one (which I also liked). Thanks a lot!However I'm slightly embarrassed because I can't read what I type, both in the title and in the body of a post. I never encountered the problem on other Stac...
This discussion started in my other question "Will Homework Questions Be Allowed?".Should we allow the tag? It seems that some of our sister sites (Programmers, stackoverflow) have not allowed the tag as it isn't constructive to their sites. But other sites (Physics, Mathematics) do allow the s...
There have been many questions on CST that were either closed, or just not answered because they weren't considered research level. May those questions (as long as they are of good quality) be reposted or moved here?I have a particular example question in mind: http://cstheory.stackexchange.com...
Ok, so in most introductory Algorithm classes, either BigO or BigTheta notation are introduced, and a student would typically learn to use one of these to find the time complexity.However, there are other notations, such as BigOmega and SmallOmega. Are there any specific scenarios where one not...
Many textbooks cover intersection types in the lambda-calculus. The typing rules for intersection can be defined as follows (on top of the simply typed lambda-calculus with subtyping):$$\dfrac{\Gamma \vdash M : T_1 \quad \Gamma \vdash M : T_2}{\Gamma \vdash M : T_1 \wedge T_2}...
I expect to see pseudo code and maybe even HPL code on regular basis. I think syntax highlighting would be a great thing to have.On Stackoverflow, code is highlighted nicely; the schema used is inferred from the respective question's tags. This won't work for us, I think, because we probably wo...
Sudoku generation is hard enough. It is much harder when you have to make an application that makes a completely random Sudoku.The goal is to make a completely random Sudoku in Objective-C (C is welcome). This Sudoku generator must be easily modified, and must support the standard 9x9 Sudoku, a...
I have observed that there are two different types of states in branch prediction.In superscalar execution, where the branch prediction is very important, and it is mainly in execution delay rather than fetch delay.In the instruction pipeline, where the fetching is more problem since the inst...
Is there any evidence suggesting that time spent on writing up, or thinking about the requirements will have any effect on the development time? Study done by Standish (1995) suggests that incomplete requirements partially (13.1%) contributed to the failure of the projects. Are there any studies ...
NPI is the class of NP problems without any polynomial time algorithms and not known to be NP-hard. I'm interested in problems such that a candidate problem in NPI is reducible to it but it is not known to be NP-hard and there is no known reduction from it to the NPI problem. Are there any known ...
I am reading Mining Significant Graph Patterns by Leap Search (Yan et al., 2008), and I am unclear on how their technique translates to the unlabeled setting, since $p$ and $q$ (the frequency functions for positive and negative examples, respectively) are omnipresent.On page 436 however, the au...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.On StackOverflow (and possibly Math.SE), questions on introductory formal language and automata theory pop up... questions along the lines of "How do I show language L is/isn't...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.Certainly, there are many kinds of questions which we could expect to (eventually) be asked on CS.SE; lots of candidates were proposed during the lead-up to the Beta, and a few...
This is somewhat related to this discussion, but different enough to deserve its own thread, I think.What would be the site policy regarding questions that are generally considered "easy", but may be asked during the first semester of studying computer science. Example:"How do I get the symme...
EPAL, the language of even palindromes, is defined as the language generated by the following unambiguous context-free grammar:$S \rightarrow a a$$S \rightarrow b b$$S \rightarrow a S a$$S \rightarrow b S b$EPAL is the 'bane' of many parsing algorithms: I have yet to enc...
Assume a computer has a precise clock which is not initialized. That is, the time on the computer's clock is the real time plus some constant offset. The computer has a network connection and we want to use that connection to determine the constant offset $B$.The simple method is that the compu...
Consider an inductive type which has some recursive occurrences in a nested, but strictly positive location. For example, trees with finite branching with nodes using a generic list data structure to store the children.Inductive LTree : Set := Node : list LTree -> LTree.The naive way of d...
I was editing a question and I was about to tag it bubblesort, but it occurred to me that tag might be too specific. I almost tagged it sorting but its only connection to sorting is that the algorithm happens to be a type of sort, it's not about sorting per se.So should we tag questions on a pa...
To what extent are questions about proof assistants on-topic?I see four main classes of questions:Modeling a problem in a formal setting; going from the object of study to the definitions and theorems.Proving theorems in a way that can be automated in the chosen formal setting.Writing a co...
Should topics in applied CS be on topic? These are not really considered part of TCS, examples include:Computer architecture (Operating system, Compiler design, Programming language design)Software engineeringArtificial intelligenceComputer graphicsComputer securitySource: http://en.wik...
I asked one of my current homework questions as a test to see what the site as a whole is looking for in a homework question. It's not a difficult question, but I imagine this is what some of our homework questions will look like.
I have an assignment for my data structures class. I need to create an algorithm to see if a binary tree is a binary search tree as well as count how many complete branches are there (a parent node with both left and right children nodes) with an assumed global counting variable.So far I have...
It's a known fact that every LTL formula can be expressed by a Buchi $\omega$-automata. But, apparently, Buchi automata is a more powerful, expressive model. I've heard somewhere that Buchi automata are equivalent to linear-time $\mu$-calculus (that is, $\mu$-calculus with usual fixpoints and onl...
Let us call a context-free language deterministic if and only if it can be accepted by a deterministic push-down automaton, and nondeterministic otherwise.Let us call a context-free language inherently ambiguous if and only if all context-free grammars which generate the language are ambiguous,...
One can imagine using a variety of data structures for storing information for use by state machines. For instance, push-down automata store information in a stack, and Turing machines use a tape. State machines using queues, and ones using two multiple stacks or tapes, have been shown to be equi...
Though in the future it would probably a good idea to more thoroughly explain your thinking behind your algorithm and where exactly you're stuck. Because as you can probably tell from the answers, people seem to be unsure on where exactly you need directions in this case.
What will the policy on providing code be?In my question it was commented that it might not be on topic as it seemes like I was asking for working code. I wrote my algorithm in pseudo-code because my problem didnt ask for working C++ or whatever language.Should we only allow pseudo-code here?...
|
A sequence is just a function of the type $f:\mathbb{N} \to \mathbb{R}$. It is common to list the elements of such a sequence as $$(a_1,a_2,a_3,\ldots,a_n)\,.$$ One example is the sequence of all even numbers: $(0,2,4,6,8,10,\ldots)$. However, some sequences may be defined in a different form when there is no easy formula for the terms, like the sequence of prime numbers $(2,3,5,7,11,13,\ldots)$, which is defined verbally.
A series is a summation of terms; this is clear if we use sigma notation, in which the terms are defined by a rule, just as in a sequence. For example, $$\sum_{i=1}^{n} i^2$$ is the sum of the sequence of squares $(1,4,9,16,25,\ldots,n^2)$. We can expand it for the sake of clarity: $$\sum_{i=1}^{n} i^2=1^2+2^2+3^2+4^2+\cdots+n^2.$$
Often a series has a closed form that depends only on the upper limit, so we can find the result without adding the terms one by one; for the example above we know that $$\sum_{i=1}^{n} i^2=1^2+2^2+3^2+4^2+\cdots+n^2=\frac{n(n+1)(2n+1)}{6}\,.$$
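(A quick numerical check of this closed form, written as a Python sketch; it is not part of the original explanation:)

# Compare the term-by-term sum of squares with the closed form n(n+1)(2n+1)/6.
def sum_of_squares(n):
    return sum(i**2 for i in range(1, n + 1))

def closed_form(n):
    return n * (n + 1) * (2 * n + 1) // 6

for n in (1, 5, 10, 100):
    assert sum_of_squares(n) == closed_form(n)
    print(n, closed_form(n))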
There is also the notion of an infinite series, which is simply the limit $$\lim_{n\to\infty}\sum_{i=1}^{n} a_i\,;$$ many concepts arise from this, but that is another discussion.
As a didactic picture, you can visualize a sequence as a stone path, where each stone has a number; when talking about sequences, what matters is which stone you are on, say stone number 3 or, more generally, $a_n$. In the same picture, a series is the path you travel to reach a given stone: if you are supposed to walk to the stone numbered 9 (starting from the origin), the series is the sum of the stones you stepped on, in this case $(1+2+3+\ldots+8+9)$, or $(a_1+a_2+a_3+\ldots+a_8+a_9)$.
Hope this helps; I have edited the answer to expand on it.
|
1. Measurement of $D^{*\pm}$, $D^\pm$ and $D_s^\pm$ meson production cross sections in $pp$ collisions at $\sqrt{s}=7$ TeV with the ATLAS detector
Nuclear Physics B, ISSN 0550-3213, 12/2015, Volume 907, pp. 717-763
Nucl. Phys. B 907 (2016) 717 The production of $D^{*\pm}$, $D^\pm$ and $D_s^\pm$ charmed mesons has been measured with the ATLAS detector in $pp$ collisions at...
Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | scattering [p p] | phase space [kinematics] | Subatomic Physics | Jet Fragmentation | transverse momentum [D] | Hadron Collisions | Science & Technology | Annihilation | Deep inelastic scattering | Charm | Subatomär fysik | Ciências Físicas [Ciências Naturais] | 7000 GeV-cms | ATLAS detector; LHC; proton-proton | Quark Fragmentation Function | Physik | experimental results | Nuclear and High Energy Physics | perturbation theory [quantum chromodynamics] | CERN LHC Coll | Hadron-Production | Fractions | Hera | Hadron production | Deep-Inelastic Scattering | E(+)E(-) Collisions | measured [differential cross section] | Mesons | [PHYS.HEXP]Physics [physics]/High Energy Physics - Experiment [hep-ex] | fragmentation [charm] | rapidity dependence | measured [total cross section] | Model | QUARK FRAGMENTATION FUNCTION; DEEP-INELASTIC SCATTERING; HADRON-PRODUCTION; E(+)E(-) COLLISIONS; JET FRAGMENTATION; HERA; ANNIHILATION; CHARM; MODEL; FRACTIONS | colliding beams [p p] | hadroproduction [charm] | transverse momentum dependence | [ PHYS.HEXP ] Physics [physics]/High Energy Physics - Experiment [hep-ex] | hadroproduction [charmed meson] | HEPEX | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
2. Measurement of the muon reconstruction performance of the ATLAS detector using 2011 and 2012 LHC proton-proton collision data
European Physical Journal C. Particles and Fields, ISSN 1434-6044, 2014, Volume 74, Issue 11
Journal Article
3. Measurement of the Inelastic Proton-Proton Cross Section at √s = 13 TeV with the ATLAS Detector at the LHC
Physical Review Letters, ISSN 0031-9007, 2016, Volume 117, Issue 18, pp. 1 - 19
This Letter presents a measurement of the inelastic proton-proton cross section using 60 μb-1 of pp collisions at a center-of-mass energy s of 13 TeV with the...
PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | ATLAS detectors | Center-of-mass energies | Subatomic Physics | Plastic scintillator | Inelastic cross sections | Fysik | Physical Sciences | Naturvetenskap | Scintillation counters | Ionization | A-center | Invariant mass | Tellurium compounds | Subatomär fysik | Hadronic systems | Natural Sciences | Inelastic interaction | Phase space methods
Journal Article
4. Measurement of W+W− production in association with one jet in proton–proton collisions at $\sqrt{s}=8$ TeV with the ATLAS detector
Physics Letters B, ISSN 0370-2693, 12/2016, Volume 763, Issue C, pp. 114 - 133
The production of boson pairs in association with one jet in collisions at is studied using data corresponding to an integrated luminosity of 20.3 fb collected...
proton–proton collisions; ATLAS detector; LHC | Subatomic Physics | Nuclear and High Energy Physics | Subatomär fysik | Ciências Físicas [Ciências Naturais] | High Energy Physics - Experiment | [ PHYS.HEXP ] Physics [physics]/High Energy Physics - Experiment [hep-ex] | Science & Technology | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article
5. Measurement of charged-particle distributions sensitive to the underlying event in $\sqrt{s} = 13$ TeV proton-proton collisions with the ATLAS detector at the LHC
Journal of High Energy Physics, ISSN 1029-8479, 01/2017, Volume 1703, p. 157
Journal Article
6. Search for pair production of first or second generation leptoquarks in proton-proton collisions at sqrt(s)=7 TeV using the ATLAS detector at the LHC
Physical Review D, ISSN 1550-7998, 04/2011, Volume 83, p. 112006
Journal Article
7. Measurement of charged-particle distributions sensitive to the underlying event in $ \sqrt{s}=13 $ TeV proton-proton collisions with the ATLAS detector at the LHC
2017
We present charged-particle distributions sensitive to the underlying event, measured by the ATLAS detector in proton-proton collisions at a centre-of-mass...
charged particle: rapidity | experimental results | charged particle: multiplicity | CERN LHC Coll | numerical calculations: Monte Carlo | ATLAS | angular distribution | multiplicity: dependence | p p: scattering | p p: colliding beams | transverse momentum dependence | charged particle: transverse momentum | particle flow | underlying event | 13000 GeV-cms
Report
8. Charged-particle multiplicities in pp interactions at $\sqrt{s}=900$ GeV measured with the ATLAS detector at the LHC
Physics Letters B, ISSN 0370-2693, 04/2010, Volume 688, Issue 1, pp. 21 - 42
Journal Article
9. Charged-particle multiplicities in pp interactions at root s=900 GeV measured with the ATLAS detector at the LHC ATLAS Collaboration
PHYSICS LETTERS B, ISSN 0370-2693, 04/2010, Volume 688, Issue 1, pp. 21 - 42
The first measurements from proton-proton collisions recorded with the ATLAS detector at the LHC are presented. Data were collected in December 2009 using a...
LHC | CM ENERGIES | Multiplicities | PHYSICS, NUCLEAR | TRANSVERSE-MOMENTUM SPECTRA | Minimum bias | 900 GeV | COLLISIONS | ATLAS | DISTRIBUTIONS | ASTRONOMY & ASTROPHYSICS | MASS | Charged-particle | SQUARE-ROOT-S | PHYSICS, PARTICLES & FIELDS | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences | ATLAS; LHC | Physics
Journal Article
New Journal of Physics, ISSN 1367-2630, 05/2011, Volume 13, p. 053033
Measurements are presented from proton-proton collisions at centre-of-mass energies of root s = 0.9, 2.36 and 7 TeV recorded with the ATLAS detector at the...
DISTRIBUTIONS | TRANSVERSE-MOMENTUM SPECTRA | PHYSICS, MULTIDISCIPLINARY | CM ENERGIES | SQUARE-ROOT-S | COLLISIONS | Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | Fysik | Physical Sciences | Naturvetenskap | TEV | COLLIDER | Natural Sciences
Journal Article
11. Study of (W/Z)H production and Higgs boson couplings using H -> WW decays with the ATLAS detector
Journal of High Energy Physics, ISSN 1126-6708, 2015, Volume August, Issue 8, pp. 1 - 65
A search for Higgs boson production in association with a W or Z boson, in the H -> WW* decay channel, is performed with a data sample collected with the ATLAS...
Hadron-Hadron Scattering | Higgs physics | PHYSICS, PARTICLES & FIELDS | scattering [p p] | Ciencias Físicas | Hadron-Hadron Scattering; High energy physics; ATLAS; LHC; Higgs particle | Settore FIS/04 - Fisica Nucleare e Subnucleare | Fysik | associated production [vector boson] | Hadron-Hadron Scattering; Higgs physics | scaling | experimental results | Settore FIS/01 - Fisica Sperimentale | Nuclear and High Energy Physics | leptonic decay [vector boson] | Hadron-Hadron Scattering; Higgs physics; LHC; MASS | CERN LHC Coll | multiplicity [lepton] | fusion [vector boson] | fusion [gluon] | mass [Higgs particle] | Physical Sciences | 7000 GeV-cms8000 GeV-cms | [PHYS.HEXP]Physics [physics]/High Energy Physics - Experiment [hep-ex] | colliding beams [p p] | hadroproduction [Higgs particle] | coupling [Higgs particle] | [ PHYS.HEXP ] Physics [physics]/High Energy Physics - Experiment [hep-ex] | Astronomía | Naturvetenskap | Natural Sciences
Journal Article
European Physical Journal C, ISSN 1434-6044, 2013, Volume 73, Issue 3, pp. 1 - 118
(ProQuest: ... denotes formulae and/or non-USASCII text omitted; see image) The jet energy scale and its systematic uncertainty are determined for jets...
Journal Article
New Journal of Physics, ISSN 1367-2630, 2011, Volume 13
Journal Article
14. Search for new phenomena in dijet angular distributions in proton-proton collisions at √s = 8 TeV measured with the ATLAS detector
Physical Review Letters, ISSN 0031-9007, 2015, Volume 114, Issue 22
Journal Article
15. Measurement of the production cross section for W-bosons in association with jets in pp collisions at $\sqrt{s}=7$ TeV with the ATLAS detector
Physics Letters B, ISSN 0370-2693, 04/2011, Volume 698, Issue 5, pp. 325 - 345
Journal Article
16. Search for new phenomena in the dijet mass distribution using pp collision data at √s = 8 TeV with the ATLAS detector
Physical Review D. Particles, Fields, Gravitation, and Cosmology, ISSN 1550-7998, 2015, Volume 91, Issue 5
Dijet events produced in LHC proton-proton collisions at a center-of-mass energy √s = 8 TeV are studied with the ATLAS detector using the full 2012 data set,...
Journal Article
17. Measurement of event-shape observables in $Z \to \ell^{+} \ell^{-}$ events in $pp$ collisions at $\sqrt{s}=7$ TeV with the ATLAS detector at the LHC
European Physical Journal C: Particles and Fields, ISSN 1434-6044, 02/2016, Volume 76, p. 375
Journal Article
18. Measurement of the branching ratio $\Gamma(\Lambda_b^0 \rightarrow \psi(2S)\Lambda^0)/\Gamma(\Lambda_b^0 \rightarrow J/\psi\Lambda^0)$ with the ATLAS detector
Physics Letters B, ISSN 0370-2693, 07/2015, Volume 751, pp. 63 - 80
Physics Letters B 751 (2015) 63-80 An observation of the $\Lambda_b^0 \rightarrow \psi(2S) \Lambda^0$ decay and a comparison of its branching fraction with...
Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | scattering [p p] | Lambda/b0 --> J/psi Lambda | branching ratio [Lambda/b0] | psi | branching ratio; ATLAS detector | Ciencias Físicas | psi --> muon+ muon | covariance | Settore FIS/04 - Fisica Nucleare e Subnucleare | Science & Technology | PP COLLISIONS; ROOT-S=7 TEV; LAMBDA(B) | Fysik | Lambda/b0 --> psi Lambda | Ciências Físicas [Ciências Naturais] | Nuclear and High Energy Physics; LHC; ATLAS; Lambda | transverse momentum [Lambda/b0] | Transverse Momentum | Physik | measured [branching ratio] | J/psi --> muon+ muon | experimental results | Settore FIS/01 - Fisica Sperimentale | Muon Pair | Large Hadron Collider | rapidity | Nuclear and High Energy Physics | CERN LHC Coll | ATLAS detector | Proton–proton collisions | 8000 GeV-cms | Physical Sciences | [PHYS.HEXP]Physics [physics]/High Energy Physics - Experiment [hep-ex] | pair production [muon] | colliding beams [p p] | Branching processes | Lambda --> p pi | [ PHYS.HEXP ] Physics [physics]/High Energy Physics - Experiment [hep-ex] | quark model | Astronomía
Journal Article
|
Automorphisms of the Riemann Sphere
Welcome to the final post in our summer series on automorphisms of four (though, for all practical purposes, it's really three) different Riemann surfaces: the unit disc, the upper half plane, the complex plane, and the Riemann sphere. Last time, we proved that the automorphisms of the complex plane take on a certain form. Today, we'll close the series by proving a similar result about automorphisms of the Riemann sphere.
If you missed the introductory/motivational post for this series, be sure to check it out here!
Also in this series:
Automorphisms of the Riemann Sphere

Theorem: Every automorphism $f$ of the Riemann sphere $\hat{\mathbb{C}}$ is of the form $f(z)=\frac{az+b}{cz+d}$ where $a,b,c,d\in\mathbb{C}$ such that $ad-bc\neq 0$.

Proof. If $f(z)=\frac{az+b}{cz+d}$ for $a,b,c,d\in\mathbb{C}$ with $ad-bc\neq 0$, then $f$ is a linear fractional transformation and is thus an automorphism of $\hat{\mathbb{C}}$. The converse direction is a corollary of the following proposition.

Proposition: A function $f$ is meromorphic* in $\hat{\mathbb{C}}$ if and only if it is a rational map.

Proof. First suppose $f$ is a meromorphic function. Then $f$ has only a finite number of poles in $\hat{\mathbb{C}}$. This follows since the poles of $f$ form a discrete, closed subset of $\hat{\mathbb{C}}$. (The set is closed because its complement, the collection of points where $f$ is holomorphic, is open. To see this, note that for any point $z$ at which $f$ is holomorphic, we can find a small enough neighborhood $N$ about $z$ such that $f$ is also holomorphic on $N$.) Since $\hat{\mathbb{C}}$ is compact, a discrete closed subset of it must be finite.
So suppose the poles of $f$ are $a_1, a_2,\ldots, a_n$ and consider the Laurent expansion of $f$ at $a_1$: $$f(z)=\frac{A_{-m}}{(z-a_1)^m}+\cdots+\frac{A_{-1}}{z-a_1}+A_0 +A_1(z-a_1) + \cdots. $$ Let $p_1(z)=\frac{A_{-m}}{(z-a_1)^m}+\cdots+\frac{A_{-1}}{z-a_1}$ be the principal part of $f$ and let $f_1(z)=A_0 +A_1(z-a_1) + \cdots$ so that $f(z)=p_1(z)+f_1(z)$. Observe that $p_1$ is holomorphic for all $z\neq a_1$ and has a pole at $z=a_1$. Moreover, $f_1$ has the same number of poles as $f$ except at $z=a_1$.
Repeating this argument for each $i=1,2,\ldots, n$, we obtain the collection of principal parts $p_i(z)$ of $f$ at the poles $a_i$. So consider the function $$f_n(z)= f(z)-p_1(z)-\cdots - p_n(z)$$ and observe that $f_n$ is holomorphic on $\mathbb{C}$ and has at worst a pole at $\infty$. This implies that $f_n$ must be a polynomial, and so $f(z)=f_n(z)+p_1(z)+\cdots + p_n(z)$ is a rational function, i.e. there are polynomials $P$ and $Q$ with coefficients in $\mathbb{C}$ for which $f(z)=\frac{P(z)}{Q(z)}$.
Conversely let's assume $f=\frac{P(z)}{Q(z)}=\frac{a_nz^n+\cdots+a_0}{b_mz^m+\cdots+b_0}$ is a rational function expressed in simplest terms so that $P$ and $Q$ share no common roots. Suppose $z_0$ is a zero of $Q(z)$ of order $k$ so that $Q(z)=(z-z_0)^kQ_1(z)$ where $Q_1$ is a polynomial of degree $m-k$ satisfying $Q_1(z_0)\neq 0$. Then $$f(z)=\frac{P(z)}{(z-z_0)^kQ_1(z)}=\frac{\varphi(z)}{(z-z_0)^k}$$ where $\varphi(z)=P(z)/Q_1(z)$ is holomorphic and $\varphi(z_0)\neq 0$. It follows that $z_0$ is a pole of $f$. Hence the zeros of $Q$ are precisely the poles of $f$ (and are of the same order). Thus $f$ is meromorphic.
$\square$
Corollary: If $f$ is an automorphism of $\hat{\mathbb{C}}$ then $f$ is of the form $f(z)=\frac{az+b}{cz+d}$ where $a,b,c,d\in\mathbb{C}$ satisfy $ad-bc\neq 0$.

Proof. Suppose $f$ is an automorphism of $\hat{\mathbb{C}}$. Then $f$ must be meromorphic as $f$ must map one point to $\infty$ and thus have a pole. By the previous proposition, $f$ must be of the form $f(z)=\frac{P(z)}{Q(z)}$ for polynomials $P$ and $Q$. But by assumption $f$ is injective and so $P$ and $Q$ must be linear in $z$. In other words $f(z)=\frac{az+b}{cz+d}$ for $a,b,c,d\in\mathbb{C}$. (And a quick check shows that the condition $ad-bc\neq 0$ is needed to guarantee that $f^{-1}$ exists.) This completes the proof of the Theorem above.
$\square$
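Here is that quick check spelled out (my own aside, not from the original post). Solving $w=\frac{az+b}{cz+d}$ for $z$ gives $$w(cz+d)=az+b \quad\Longleftrightarrow\quad z(cw-a)=b-dw \quad\Longleftrightarrow\quad z=\frac{dw-b}{-cw+a}\,,$$ so the candidate inverse $g(w)=\frac{dw-b}{-cw+a}$ is itself a linear fractional transformation whose "determinant" is $da-(-b)(-c)=ad-bc$. Hence $g$ is a genuine automorphism of $\hat{\mathbb{C}}$, and an inverse for $f$, exactly when $ad-bc\neq 0$.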
*that is, $f$ is holomorphic on all of $\hat{\mathbb{C}}$ except for a set of isolated points, namely the poles of $f$.
|
We will say that two sets $$A$$ and $$B$$ are equal, written as $$A = B$$ if they have the same elements. That is, if, and only if, every element of $$A$$ is contained also in $$B$$ and every element of $$B$$ is contained in $$A$$. In symbols: $$$x\in A \Leftrightarrow x\in B$$$
We say that a set $$A$$ is a subset of another set $$B$$ if every element of $$A$$ is also an element of $$B$$, that is, when the following is verified: $$$x\in A \Rightarrow x\in B$$$ whatever the element $$x$$ is. In this case, it is written $$$A\subseteq B$$$
Note that, by definition, the possibility that $$A = B$$ when $$A\subseteq B$$ is not excluded. If every element of $$A$$ is an element of $$B$$, but $$B$$ has at least one element not belonging to $$A$$, then we say that $$A$$ is a proper subset of $$B$$, which is represented as $$A\subset B$$.
Thus, the empty set is a proper subset of every set (except itself), and any set $$A$$ is an improper subset of itself.
If $$A$$ is a subset of $$B$$, we can also say that $$B$$ is a superset of $$A$$, written $$B\supseteq A$$, and that $$B$$ is a proper superset of $$A$$ if $$B \supset A$$.
By the principle of identity, it is always true that $$$x\in A \Rightarrow x\in A $$$ for every element $$x$$; so, every set is a subset and a superset of itself.
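(As an aside not in the original text, these relations map directly onto Python's built-in set comparison operators, which makes for a quick illustration:)

# Illustration of subset / proper subset / superset using Python sets.
A = {1, 2}
B = {1, 2, 3}

print(A <= B)     # True:  A is a subset of B         (A ⊆ B)
print(A < B)      # True:  A is a proper subset of B  (A ⊂ B)
print(B >= A)     # True:  B is a superset of A       (B ⊇ A)
print(A <= A)     # True:  every set is a subset of itself
print(A < A)      # False: no set is a proper subset of itself
print(set() < B)  # True:  the empty set is a proper subset of any non-empty set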
|
Ineffable
Ineffable cardinals were introduced by Jensen and Kunen in [1] and arose out of their study of $\diamondsuit$ principles. An uncountable regular cardinal $\kappa$ is ineffable if for every sequence $\langle A_\alpha\mid \alpha<\kappa\rangle$ with $A_\alpha\subseteq \alpha$ there is $A\subseteq\kappa$ such that the set $S=\{\alpha<\kappa\mid A\cap \alpha=A_\alpha\}$ is stationary. Equivalently an uncountable regular $\kappa$ is ineffable if and only if for every function $F:[\kappa]^2\rightarrow 2$ there is a stationary $H\subseteq\kappa$ such that $F\upharpoonright [H]^2$ is constant [1]. This second characterization strengthens a characterization of weakly compact cardinals which requires that there exist such an $H$ of size $\kappa$.
If $\kappa$ is ineffable, then $\diamondsuit_\kappa$ holds and there cannot be a slim $\kappa$-Kurepa tree [1] . A $\kappa$-Kurepa tree is a tree of height $\kappa$ having levels of size less than $\kappa$ and at least $\kappa^+$-many branches. A $\kappa$-Kurepa tree is slim if every infinite level $\alpha$ has size at most $|\alpha|$.
Ineffable cardinals and the constructible universe
Ineffable cardinals are downward absolute to $L$. In $L$, an inaccessible cardinal $\kappa$ is ineffable if and only if there are no slim $\kappa$-Kurepa trees. Thus, for inaccessible cardinals, in $L$, ineffability is completely characterized using slim Kurepa trees. [1]
Ramsey cardinals are stationary limits of completely ineffable cardinals and are themselves weakly ineffable, but the least Ramsey cardinal is not ineffable. Ineffable Ramsey cardinals are limits of Ramsey cardinals, because ineffable cardinals are $Π^1_2$-indescribable and being Ramsey is a $Π^1_2$-statement. The least strongly Ramsey cardinal also is not ineffable, but super weakly Ramsey cardinals are ineffable. $1$-iterable (=weakly Ramsey) cardinals are weakly ineffable and stationary limits of completely ineffable cardinals. The least $1$-iterable cardinal is not ineffable. [2, 4]
Relations with other large cardinals

* Measurable cardinals are ineffable and stationary limits of ineffable cardinals.
* $\omega$-Erdős cardinals are stationary limits of ineffable cardinals, but not ineffable since they are $\Pi_1^1$-describable. [3]
* Ineffable cardinals are $\Pi^1_2$-indescribable. [1]
* Ineffable cardinals are limits of totally indescribable cardinals. [1] ([5] for proof)
* For a cardinal $κ=κ^{<κ}$, $κ$ is ineffable iff it is normal 0-Ramsey. [6]

Weakly ineffable cardinal
Weakly ineffable cardinals (also called almost ineffable) were introduced by Jensen and Kunen in [1] as a weakening of ineffable cardinals. An uncountable regular cardinal $\kappa$ is weakly ineffable if for every sequence $\langle A_\alpha\mid \alpha<\kappa\rangle$ with $A_\alpha\subseteq \alpha$ there is $A\subseteq\kappa$ such that the set $S=\{\alpha<\kappa\mid A\cap \alpha=A_\alpha\}$ has size $\kappa$. If $\kappa$ is weakly ineffable, then $\diamondsuit_\kappa$ holds.
* Weakly ineffable cardinals are downward absolute to $L$. [1]
* Weakly ineffable cardinals are $\Pi_1^1$-indescribable. [1]
* Ineffable cardinals are limits of weakly ineffable cardinals.
* Weakly ineffable cardinals are limits of totally indescribable cardinals. [1] ([5] for proof)
* For a cardinal $κ=κ^{<κ}$, $κ$ is weakly ineffable iff it is genuine 0-Ramsey. [6]

Subtle cardinal
Subtle cardinals were introduced by Jensen and Kunen in [1] as a weakening of weakly ineffable cardinals. An uncountable regular cardinal $\kappa$ is subtle if for every $\langle A_\alpha\mid \alpha<\kappa\rangle$ with $A_\alpha\subseteq \alpha$ and every closed unbounded $C\subseteq\kappa$ there are $\alpha<\beta$ in $C$ such that $A_\beta\cap\alpha=A_\alpha$. If $\kappa$ is subtle, then $\diamondsuit_\kappa$ holds.
* Subtle cardinals are downward absolute to $L$. [1]
* Weakly ineffable cardinals are limits of subtle cardinals. [1]
* Subtle cardinals are stationary limits of totally indescribable cardinals. [1, 7]
* The least subtle cardinal is not weakly compact as it is $\Pi_1^1$-describable.
* $\alpha$-Erdős cardinals are subtle. [1]
* If $δ$ is a subtle cardinal, then:
  * the set of cardinals $κ$ below $δ$ that are strongly uplifting in $V_δ$ is stationary; [8]
  * for every class $\mathcal{A}$, in every club $B ⊆ δ$ there is $κ$ such that $\langle V_δ, \mathcal{A} ∩ V_δ \rangle \models \text{“$κ$ is $\mathcal{A}$-shrewd”}$ (the set of cardinals $κ$ below $δ$ that are $\mathcal{A}$-shrewd in $V_δ$ is stationary); [9]
  * there is an $\eta$-shrewd cardinal below $δ$ for all $\eta < δ$. [9]

Ethereal cardinal
Ethereal cardinals were introduced by Ketonen in [10] (information in this section from there) as a weakening of subtle cardinals.
Definition:
A regular cardinal $κ$ is called ethereal if for every club $C$ in $κ$ and sequence $(S_α|α < κ)$ of sets such that for $α < κ$, $|S_α| = |α|$ and $S_α ⊆ α$, there are elements $α, β ∈ C$ such that $α < β$ and $|S_α ∩ S_β| = |α|$. I.e., symbolically(?):
$$κ \text{ – ethereal} \overset{\text{def}}{⟺} \left( κ \text{ – regular} ∧ \left( \forall_{C \text{ – club in $κ$}} \forall_{S : κ → \mathcal{P}(κ)} \left( \forall_{α < κ} |S_α| = |α| ∧ S_α ⊆ α \right) ⟹ \left( \exists_{α, β ∈ C} α < β ∧ |S_α ∩ S_β| = |α| \right) \right) \right)$$
Properties:
* Every subtle cardinal is obviously ethereal.
* Every ethereal cardinal is weakly inaccessible.
* A strongly inaccessible cardinal is ethereal if and only if it is subtle.
* If $κ$ is ethereal and $2^\underset{\smile}{κ} = κ$, then $♢_κ$ holds (where $2^\underset{\smile}{κ} = \bigcup \{ 2^α | α < κ \}$ is the weak power of $κ$).

To be expanded.

$n$-ineffable cardinal
The $n$-ineffable cardinals for $2\leq n<\omega$ were introduced by Baumgartner in [11] as a strengthening of ineffable cardinals. A cardinal is $n$-ineffable if for every function $F:[\kappa]^n\rightarrow 2$ there is a stationary $H\subseteq\kappa$ such that $F\upharpoonright [H]^n$ is constant.
* $2$-ineffable cardinals are exactly the ineffable cardinals.
* An $n+1$-ineffable cardinal is a stationary limit of $n$-ineffable cardinals. [11]
A cardinal $\kappa$ is totally ineffable if it is $n$-ineffable for every $n$.
* A $1$-iterable cardinal is a stationary limit of totally ineffable cardinals. (This follows from material in [4].)

Helix
(Information in this subsection come from [7] unless noted otherwise.)
For $k \geq 1$ we define:
* $\mathcal{P}(x)$ is the powerset (set of all subsets) of $x$.
* $\mathcal{P}_k(x)$ is the set of all subsets of $x$ with exactly $k$ elements.
* $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$ is regressive iff for all $A \in \mathcal{P}_k(\lambda)$, we have $f(A) \subseteq \min(A)$.
* $E$ is $f$-homogenous iff $E \subseteq \lambda$ and for all $B,C \in \mathcal{P}_k(E)$, we have $f(B) \cap \min(B \cup C) = f(C) \cap \min(B \cup C)$.
* $\lambda$ is $k$-subtle iff $\lambda$ is a limit ordinal and for all clubs $C \subseteq \lambda$ and regressive $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$, there exists an $f$-homogenous $A \in \mathcal{P}_{k+1}(C)$.
* $\lambda$ is $k$-almost ineffable iff $\lambda$ is a limit ordinal and for all regressive $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$, there exists an $f$-homogenous $A \subseteq \lambda$ of cardinality $\lambda$.
* $\lambda$ is $k$-ineffable iff $\lambda$ is a limit ordinal and for all regressive $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$, there exists an $f$-homogenous stationary $A \subseteq \lambda$.
$0$-subtle, $0$-almost ineffable and $0$-ineffable cardinals can be defined as “uncountable regular cardinals” because for $k \geq 1$ all three properties imply being uncountable regular cardinals.
* For $k \geq 1$, if $\kappa$ is a $k$-ineffable cardinal, then $\kappa$ is $k$-almost ineffable and the set of $k$-almost ineffable cardinals is stationary in $\kappa$.
* For $k \geq 1$, if $\kappa$ is a $k$-almost ineffable cardinal, then $\kappa$ is $k$-subtle and the set of $k$-subtle cardinals is stationary in $\kappa$.
* For $k \geq 1$, if $\kappa$ is a $k$-subtle cardinal, then the set of $(k-1)$-ineffable cardinals is stationary in $\kappa$.
* For $k \geq n \geq 0$, all $k$-ineffable cardinals are $n$-ineffable, all $k$-almost ineffable cardinals are $n$-almost ineffable and all $k$-subtle cardinals are $n$-subtle.

Completely ineffable cardinal
Completely ineffable cardinals were introduced in [5] as a strengthening of ineffable cardinals. Define that a collection $R\subseteq P(\kappa)$ is a stationary class if
* $R\neq\emptyset$,
* for all $A\in R$, $A$ is stationary in $\kappa$,
* if $A\in R$ and $B\supseteq A$, then $B\in R$.
A cardinal $\kappa$ is completely ineffable if there is a stationary class $R$ such that for every $A\in R$ and $F:[A]^2\to2$, there is $H\in R$ such that $F\upharpoonright [H]^2$ is constant.
Relations:
* Completely ineffable cardinals are downward absolute to $L$. [5]
* Completely ineffable cardinals are limits of ineffable cardinals. [5]
* There are stationarily many completely ineffable, greatly Erdős cardinals below any Ramsey cardinal. [13]
* The following are equivalent: [6]
  * $κ$ is completely ineffable.
  * $κ$ is coherent $<ω$-Ramsey.
  * $κ$ has the $ω$-filter property.
* Every completely ineffable is a stationary limit of $<ω$-Ramseys. [6]
* Completely Ramsey cardinals and $ω$-Ramsey cardinals are completely ineffable. [6]
* $ω$-Ramsey cardinals are limits of completely ineffable cardinals. [2]
References

1. Jensen, Ronald and Kunen, Kenneth. Some combinatorial properties of $L$ and $V$. Unpublished, 1969.
2. Holy, Peter and Schlicht, Philipp. A hierarchy of Ramsey-like cardinals. Fundamenta Mathematicae 242:49-74, 2018.
3. Jech, Thomas J. Set Theory. Third edition (the third millennium edition, revised and expanded), Springer-Verlag, Berlin, 2003.
4. Gitman, Victoria. Ramsey-like cardinals. The Journal of Symbolic Logic 76(2):519-540, 2011.
5. Abramson, Fred and Harrington, Leo and Kleinberg, Eugene and Zwicker, William. Flipping properties: a unifying thread in the theory of large cardinals. Ann Math Logic 12(1):25-58, 1977.
6. Nielsen, Dan Saattrup and Welch, Philip. Games and Ramsey-like cardinals. 2018.
7. Friedman, Harvey M. Subtle cardinals and linear orderings. 1998.
8. Hamkins, Joel David and Johnstone, Thomas A. Strongly uplifting cardinals and the boldface resurrection axioms. 2014.
9. Rathjen, Michael. The art of ordinal analysis. 2006.
10. Ketonen, Jussi. Some combinatorial principles. Trans Amer Math Soc 188:387-394, 1974.
11. Baumgartner, James. Ineffability properties of cardinals. I. Infinite and finite sets (Colloq., Keszthely, 1973; dedicated to P. Erdős on his 60th birthday), Vol. I, pp. 109-130. Colloq. Math. Soc. János Bolyai, Vol. 10, Amsterdam, 1975.
12. Kentaro, Sato. Double helix in large large cardinals and iteration of elementary embeddings. 2007.
13. Sharpe, Ian and Welch, Philip. Greatly Erdős cardinals with some generalizations to the Chang and Ramsey properties. Ann Pure Appl Logic 162(11):863-902, 2011.
|
The Fourier series of the function $f(x) = \left\{ \begin{array}{rcl} -4\,x & \text{if} & -\pi<x<0 \\ 4\,x & \text{if} & 0<x<\pi \end{array}\right.$
is given by $f(x) \sim c_0 - \displaystyle \sum\limits_{n=0}^\infty c_n\;\cos\big((2\,n+1)\,x\big) - \sum\limits_{n=1}^\infty b_n\;\sin\big(n\,x\big)$
I'm asked to find $c_0$, $c_n$, and $b_n$.
Given that this is an even function, we get $b_n =0$.
I have already found $c_0=4\pi$, but I am having trouble finding $c_n$.
Here is where I get confused. The general formula that I am given to find $c_n$ is given as
$$c_n=\frac{2}{L} \int_{0}^{L} f(x)\cos(\frac{n\pi x}{L})dx$$
where $[-L, L]$ is the interval on which $f$ is defined (here $L=\pi$).
However, given that I have $\cos((2n+1)x)$, I'm assuming I would have to modify the general formula to accommodate this term.
Solving for $c_n$, I get
$$c_n=\frac{2}{\pi} \int_{0}^{\pi} 4x\cos((2n+1)x)dx$$
$$c_n=\frac{2}{\pi} [\frac{-4\pi(2n+1)\sin(2\pi n)-4\cos(2\pi n)}{(2n+1)^2} -\frac{4}{(2n+1)^2}]$$
$$c_n=\frac{-8}{\pi} \frac{\pi(2n+1)\sin(2\pi n)+\cos(2\pi n)+1}{(2n+1)^2}$$
Apparently this is not correct, so I'm not sure what I'm doing wrong. Any help would be appreciated!
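(Not part of the original question, but a quick numerical evaluation of the coefficient integral, assuming SciPy is available, gives concrete values to compare any closed-form candidate for $c_n$ against:)

import numpy as np
from scipy.integrate import quad  # assumption: SciPy is available

# Numerically evaluate c_n = (2/pi) * integral_0^pi 4x cos((2n+1)x) dx
def coefficient(n):
    value, _ = quad(lambda x: 4 * x * np.cos((2 * n + 1) * x), 0, np.pi)
    return (2 / np.pi) * value

for n in range(4):
    print(n, coefficient(n))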
|
What is a simple algorithm for computing the SVD of $2 \times 2$ matrices?
Ideally, I'd like a numerically robust algorithm, but I'll like to see both simple and not-so-simple implementations. C code accepted.
Any references to papers or code?
See https://math.stackexchange.com/questions/861674/decompose-a-2d-arbitrary-transform-into-only-scaling-and-rotation (sorry, I would have put that in a comment but I've registered just to post this so I can't post comments yet).
But since I'm writing it as an answer, I'll also write the method:
$$E=\frac{m_{00}+m_{11}}{2}; F=\frac{m_{00}-m_{11}}{2}; G=\frac{m_{10}+m_{01}}{2}; H=\frac{m_{10}-m_{01}}{2}\\ Q=\sqrt{E^2+H^2}; R=\sqrt{F^2+G^2}\\ s_x=Q+R; s_y=Q-R\\ a_1=\mathrm{atan2}(G,F); a_2=\mathrm{atan2}(H,E)\\ \theta=\frac{a_2-a_1}{2}; \phi=\frac{a_2+a_1}{2}$$
That decomposes the matrix as follows:
$$M=\pmatrix{m_{00}&m_{01}\\m_{10}&m_{11}}=\pmatrix{\cos\phi&-\sin\phi\\\sin\phi&\cos\phi}\pmatrix{s_x&0\\0&s_y}\pmatrix{\cos\theta&-\sin\theta\\\sin\theta&\cos\theta}$$
The only thing to guard against with this method is that $G=F=0$ or $H=E=0$ for atan2. I doubt it can be any more robust than that (update: but see Alex Eftimiades' answer!).
The reference is: http://dx.doi.org/10.1109/38.486688 (given by Rahul there) which comes from the bottom of this blog post: http://metamerist.blogspot.com/2006/10/linear-algebra-for-graphics-geeks-svd.html
Update: As noted by @VictorLiu in a comment, $s_y$ may be negative. That happens if the determinant of the input matrix is negative as well. If that's the case and you want the positive singular values, you can change the sign of the last column of the first rotation matrix, and then change the sign of $s_y$. Put formally:
$$M=\pmatrix{m_{00}&m_{01}\\m_{10}&m_{11}}=\pmatrix{\cos\phi&-S\sin\phi\\\sin\phi&S\cos\phi}\pmatrix{s_x&0\\0&|s_y|}\pmatrix{\cos\theta&-\sin\theta\\\sin\theta&\cos\theta}$$ where $S$ is $-1$ if $s_y$ is negative, otherwise $1$ (the sign of $s_y$).
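(For reference, here is a compact NumPy transcription of the formulas above; it is my own sketch rather than code from the answer, and svd2x2 is just a hypothetical helper name:)

import numpy as np

def svd2x2(m):
    # Rotation-scale-rotation decomposition M = R(phi) @ diag(sx, sy) @ R(theta)
    # following the E, F, G, H formulas above. Note sy can be negative when
    # det(M) < 0, as discussed in the update.
    E = (m[0, 0] + m[1, 1]) / 2
    F = (m[0, 0] - m[1, 1]) / 2
    G = (m[1, 0] + m[0, 1]) / 2
    H = (m[1, 0] - m[0, 1]) / 2
    Q = np.hypot(E, H)
    R = np.hypot(F, G)
    sx, sy = Q + R, Q - R
    a1, a2 = np.arctan2(G, F), np.arctan2(H, E)
    theta, phi = (a2 - a1) / 2, (a2 + a1) / 2
    rot = lambda t: np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return rot(phi), np.diag([sx, sy]), rot(theta)

# Quick check: the three factors multiply back to the input matrix.
M = np.array([[1.0, 2.0], [3.0, 6.0]])
U, S, V = svd2x2(M)
print(np.allclose(U @ S @ V, M))  # True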
@Pedro Gimeno
"I doubt it can be any more robust than that."
Challenge accepted.
I noticed the usual approach is to use trig functions like atan2. Intuitively, there shouldn't be a need for trig functions: all the results end up as sines and cosines of arctangents, which can be simplified to algebraic functions. It took quite a while, but I managed to simplify Pedro's algorithm to use only algebraic functions.
The following Python code does the trick.
from numpy import asarray, diag, hypot, sqrt

def svd2(m):
    y1, x1 = (m[1, 0] + m[0, 1]), (m[0, 0] - m[1, 1])
    y2, x2 = (m[1, 0] - m[0, 1]), (m[0, 0] + m[1, 1])

    h1 = hypot(y1, x1)
    h2 = hypot(y2, x2)

    t1 = x1 / h1
    t2 = x2 / h2

    cc = sqrt((1 + t1) * (1 + t2))
    ss = sqrt((1 - t1) * (1 - t2))
    cs = sqrt((1 + t1) * (1 - t2))
    sc = sqrt((1 - t1) * (1 + t2))

    c1, s1 = (cc - ss) / 2, (sc + cs) / 2
    u1 = asarray([[c1, -s1], [s1, c1]])

    d = asarray([(h1 + h2) / 2, (h1 - h2) / 2])
    sigma = diag(d)

    if h1 != h2:
        u2 = diag(1 / d).dot(u1.T).dot(m)
    else:
        u2 = diag([1 / d[0], 0]).dot(u1.T).dot(m)

    return u1, sigma, u2
The GSL has a 2-by-2 SVD solver underlying the QR decomposition part of the main SVD algorithm for gsl_linalg_SV_decomp. See the svdstep.c file and look for the svd2 function. The function has a few special cases, isn't exactly trivial, and looks to be doing several things to be numerically careful (e.g., using hypot to avoid overflows).
When we say "numerically robust" we usually mean an algorithm in which we do things like pivoting to avoid error propagation. However, for a 2x2 matrix, you can write the result down in terms of explicit formulas -- i.e., write down formulas for the SVD elements that state the result only in terms of the inputs, rather than in terms of intermediate values previously computed. That means that you may have cancellation but no error propagation.
The point simply is that for 2x2 systems, worrying about robustness is not necessary.
This code is based on Blinn's paper, Ellis's paper, an SVD lecture, and additional calculations. The algorithm is suitable for both regular and singular real matrices. All previous versions of this answer work just as well as this one.
#include <stdio.h>
#include <math.h>

void svd22(const double a[4], double u[4], double s[2], double v[4]) {
    s[0] = (sqrt(pow(a[0] - a[3], 2) + pow(a[1] + a[2], 2)) +
            sqrt(pow(a[0] + a[3], 2) + pow(a[1] - a[2], 2))) / 2;
    s[1] = fabs(s[0] - sqrt(pow(a[0] - a[3], 2) + pow(a[1] + a[2], 2)));
    v[2] = (s[0] > s[1]) ? sin((atan2(2 * (a[0] * a[1] + a[2] * a[3]),
                                      a[0] * a[0] - a[1] * a[1] + a[2] * a[2] - a[3] * a[3])) / 2) : 0;
    v[0] = sqrt(1 - v[2] * v[2]);
    v[1] = -v[2];
    v[3] = v[0];
    u[0] = (s[0] != 0) ? (a[0] * v[0] + a[1] * v[2]) / s[0] : 1;
    u[2] = (s[0] != 0) ? (a[2] * v[0] + a[3] * v[2]) / s[0] : 0;
    u[1] = (s[1] != 0) ? (a[0] * v[1] + a[1] * v[3]) / s[1] : -u[2];
    u[3] = (s[1] != 0) ? (a[2] * v[1] + a[3] * v[3]) / s[1] : u[0];
}

int main() {
    double a[4] = {1, 2, 3, 6}, u[4], s[2], v[4];
    svd22(a, u, s, v);
    printf("Matrix A:\n%f %f\n%f %f\n\n", a[0], a[1], a[2], a[3]);
    printf("Matrix U:\n%f %f\n%f %f\n\n", u[0], u[1], u[2], u[3]);
    printf("Matrix S:\n%f %f\n%f %f\n\n", s[0], 0.0, 0.0, s[1]);
    printf("Matrix V:\n%f %f\n%f %f\n\n", v[0], v[1], v[2], v[3]);
}
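(As a cross-check, not part of the original answer: NumPy's built-in SVD gives reference singular values for the same test matrix, so the C program's printout can be compared against them:)

import numpy as np

# Reference singular values for A = [[1, 2], [3, 6]] (a singular matrix).
A = np.array([[1.0, 2.0], [3.0, 6.0]])
print(np.linalg.svd(A, compute_uv=False))  # approximately [7.0710678, 0.0]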
I needed an algorithm that has
We want to calculate $c_1, s_1, c_2, s_2, \sigma_1$ and $\sigma_2$ as follows:
$A = USV$, which can be expanded like:
$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} c_1 & s_1 \\ -s_1 & c_1 \end{bmatrix} \begin{bmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{bmatrix} \begin{bmatrix} c_2 & -s_2 \\ s_2 & c_2 \end{bmatrix} $
The main idea is to find a rotation matrix $V$ that diagonalizes $A^TA$, that is $VA^TAV^T=D$ is diagonal.
Recall that
$USV = A$
$US = AV^{-1} = AV^T$ (since $V$ is orthogonal)
$VA^TAV^T = (AV^T)^TAV^T = (US)^TUS = S^TU^TUS = D$
Multiplying both sides by $S^{-1}$ we get
$(S^{-T}S^T)U^TU(SS^{-1}) = U^TU = S^{-T}DS^{-1}$
Since $D$ is diagonal, setting $S$ to $\sqrt{D}$ will give us $U^TU=Identity$, meaning $U$ is a rotation matrix, $S$ is a diagonal matrix, $V$ is a rotation matrix and $USV = A$, just what we are looking for.
Calculating the diagonalizing rotation can be done by solving the following equation:
$t_2^2 - \frac{\beta-\alpha}{\gamma}t_2-1 = 0$
where
$ A^TA = \begin{bmatrix} a & c \\ b & d \end{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} a^2+c^2 & ab+cd \\ ab+cd & b^2+d^2 \end{bmatrix} = \begin{bmatrix} \alpha & \gamma \\ \gamma & \beta \end{bmatrix} $
and $t_2$ is the tangent of angle of $V$. This can be derived by expanding $VA^TAV^T$ and making its off-diagonal elements equal to zero (they are equal to each other).
The problem with this method is that it loses significant floating point precision when calculating $\beta-\alpha$ and $\gamma$ for certain matrices, because of the subtractions in the calculations. The solution for this is to do an RQ decomposition ($A=RQ$, $R$ upper triangular and $Q$ orthogonal) first, then use the algorithm to factorize $USV' = R$. This gives $USV=USV'Q=RQ=A$. Notice how setting $d$ to 0 (as in $R$) eliminates some of the additions/subtractions. (The RQ decomposition is fairly trivial from the expansion of the matrix product).
The algorithm naively implemented this way has some numerical and logical anomalies (e.g. is $S$ $+\sqrt{D}$ or $-\sqrt{D}$), which I fixed in the code below.
I threw about 2000 million randomized matrices at the code, and the largest numerical error produced was around $6\cdot10^{-7}$ (with 32 bit floats, $error = ||USV-M||/||M||$). The algorithm runs in about 340 clock cycles (MSVC 19, Ivy Bridge).
template <class T>
void Rq2x2Helper(const Matrix<T, 2, 2>& A, T& x, T& y, T& z, T& c2, T& s2) {
    T a = A(0, 0);
    T b = A(0, 1);
    T c = A(1, 0);
    T d = A(1, 1);

    if (c == 0) {
        x = a;
        y = b;
        z = d;
        c2 = 1;
        s2 = 0;
        return;
    }
    T maxden = std::max(abs(c), abs(d));

    T rcmaxden = 1/maxden;
    c *= rcmaxden;
    d *= rcmaxden;

    T den = 1/sqrt(c*c + d*d);

    T numx = (-b*c + a*d);
    T numy = (a*c + b*d);
    x = numx * den;
    y = numy * den;
    z = maxden/den;

    s2 = -c * den;
    c2 = d * den;
}

template <class T>
void Svd2x2Helper(const Matrix<T, 2, 2>& A, T& c1, T& s1, T& c2, T& s2, T& d1, T& d2) {
    // Calculate RQ decomposition of A
    T x, y, z;
    Rq2x2Helper(A, x, y, z, c2, s2);

    // Calculate tangent of rotation on R[x,y;0,z] to diagonalize R^T*R
    T scaler = T(1)/std::max(abs(x), abs(y));
    T x_ = x*scaler, y_ = y*scaler, z_ = z*scaler;
    T numer = ((z_-x_)*(z_+x_)) + y_*y_;
    T gamma = x_*y_;
    gamma = numer == 0 ? 1 : gamma;
    T zeta = numer/gamma;

    T t = 2*impl::sign_nonzero(zeta)/(abs(zeta) + sqrt(zeta*zeta+4));

    // Calculate sines and cosines
    c1 = T(1) / sqrt(T(1) + t*t);
    s1 = c1*t;

    // Calculate U*S = R*R(c1,s1)
    T usa = c1*x - s1*y;
    T usb = s1*x + c1*y;
    T usc = -s1*z;
    T usd = c1*z;

    // Update V = R(c1,s1)^T*Q
    t = c1*c2 + s1*s2;
    s2 = c2*s1 - c1*s2;
    c2 = t;

    // Separate U and S
    d1 = std::hypot(usa, usc);
    d2 = std::hypot(usb, usd);
    T dmax = std::max(d1, d2);
    T usmax1 = d2 > d1 ? usd : usa;
    T usmax2 = d2 > d1 ? usb : -usc;

    T signd1 = impl::sign_nonzero(x*z);
    dmax *= d2 > d1 ? signd1 : 1;
    d2 *= signd1;
    T rcpdmax = 1/dmax;

    c1 = dmax != T(0) ? usmax1 * rcpdmax : T(1);
    s1 = dmax != T(0) ? usmax2 * rcpdmax : T(0);
}
Ideas from:
http://www.cs.utexas.edu/users/inderjit/public_papers/HLA_SVD.pdf
http://www.math.pitt.edu/~sussmanm/2071Spring08/lab09/index.html
http://www.lucidarme.me/singular-value-decomposition-of-a-2x2-matrix/
I have used the description at http://www.lucidarme.me/?p=4624 to create this C++ code. The Matrices are those of the Eigen library, but you can easily create your own data structure from this example:
$A=U\Sigma V^T$
#include <cmath>
#include <Eigen/Core>
using namespace Eigen;

Matrix2d A;
// ... fill A
double a = A(0,0);
double b = A(0,1);
double c = A(1,0);
double d = A(1,1);

double Theta = 0.5 * atan2(2*a*c + 2*b*d, a*a + b*b - c*c - d*d);
// calculate U
Matrix2d U;
U << cos(Theta), -sin(Theta), sin(Theta), cos(Theta);

double Phi = 0.5 * atan2(2*a*b + 2*c*d, a*a - b*b + c*c - d*d);
double s11 = ( a*cos(Theta) + c*sin(Theta))*cos(Phi) + ( b*cos(Theta) + d*sin(Theta))*sin(Phi);
double s22 = ( a*sin(Theta) - c*cos(Theta))*sin(Phi) + (-b*sin(Theta) + d*cos(Theta))*cos(Phi);

// calculate S
double S1 = a*a + b*b + c*c + d*d;
double S2 = sqrt(pow(a*a + b*b - c*c - d*d, 2) + 4*pow(a*c + b*d, 2));
Matrix2d Sigma;
Sigma << sqrt((S1+S2) / 2), 0, 0, sqrt((S1-S2) / 2);

// calculate V
Matrix2d V;
V << signum(s11)*cos(Phi), -signum(s22)*sin(Phi), signum(s11)*sin(Phi), signum(s22)*cos(Phi);
With the standard sign function
double signum(double value)
{
    if (value > 0)
        return 1;
    else if (value < 0)
        return -1;
    else
        return 0;
}
This results in exactly the same values as Eigen::JacobiSVD (see https://eigen.tuxfamily.org/dox-devel/classEigen_1_1JacobiSVD.html).
I have pure C code for the 2x2 real SVD here. See line 559. It essentially computes the eigenvalues of $A^TA$ by solving a quadratic, so it's not necessarily the most robust, but it seems to work well in practice for not-too-pathological cases. It's relatively simple.
For my personal needs, I tried to isolate the minimum computation for a 2x2 SVD. I guess it is probably one of the simplest and fastest solutions. You can find details on my personal blog: http://lucidarme.me/?p=4624.
Advantages: it is simple, fast, and you can calculate only one or two of the three matrices (S, U or D) if you don't need all three. The drawback is that it uses atan2, which may be inaccurate and may require an external library (typically math.h).
Here is an implementation of a 2x2 SVD solve. I based it off of Victor Liu's code. His code was not working for some matrices. I used these two documents as mathematical reference for the solve: pdf1 and pdf2.
The matrix setData method is in row-major order. Internally, I represent the matrix data as a 2D array given by data[col][row].
void Matrix2f::svd(Matrix2f* w, Vector2f* e, Matrix2f* v) const
{
    // If it is diagonal, SVD is trivial
    if (fabs(data[0][1] - data[1][0]) < EPSILON && fabs(data[0][1]) < EPSILON) {
        w->setData(data[0][0] < 0 ? -1 : 1, 0, 0, data[1][1] < 0 ? -1 : 1);
        e->setData(fabs(data[0][0]), fabs(data[1][1]));
        v->loadIdentity();
    }
    // Otherwise, we need to compute A^T*A
    else {
        float j = data[0][0]*data[0][0] + data[0][1]*data[0][1],
              k = data[1][0]*data[1][0] + data[1][1]*data[1][1],
              v_c = data[0][0]*data[1][0] + data[0][1]*data[1][1];
        // Check to see if A^T*A is diagonal
        if (fabs(v_c) < EPSILON) {
            float s1 = sqrt(j),
                  s2 = fabs(j-k) < EPSILON ? s1 : sqrt(k);
            e->setData(s1, s2);
            v->loadIdentity();
            w->setData(
                data[0][0]/s1, data[1][0]/s2,
                data[0][1]/s1, data[1][1]/s2
            );
        }
        // Otherwise, solve quadratic for eigenvalues
        else {
            float jmk = j-k,
                  jpk = j+k,
                  root = sqrt(jmk*jmk + 4*v_c*v_c),
                  eig = (jpk+root)/2,
                  s1 = sqrt(eig),
                  s2 = fabs(root) < EPSILON ? s1 : sqrt((jpk-root)/2);
            e->setData(s1, s2);
            // Use eigenvectors of A^T*A as V
            float v_s = eig-j,
                  len = sqrt(v_s*v_s + v_c*v_c);
            v_c /= len;
            v_s /= len;
            v->setData(v_c, -v_s, v_s, v_c);
            // Compute w matrix as Av/s
            w->setData(
                (data[0][0]*v_c + data[1][0]*v_s)/s1, (data[1][0]*v_c - data[0][0]*v_s)/s2,
                (data[0][1]*v_c + data[1][1]*v_s)/s1, (data[1][1]*v_c - data[0][1]*v_s)/s2
            );
        }
    }
}
|
Sean Nixon, Jianke Yang
Light propagation in optical waveguides with a periodically modulated index of refraction and alternating gain and loss is investigated for linear and nonlinear systems. Based on a multiscale perturbation analysis, it is shown that for many non-parity-time (PT) symmetric waveguides, the linear spectrum is partially complex, thus light exponentially grows or decays upon propagation, and this growth or decay is not altered by nonlinearity. However, several classes of non-PT-symmetric waveguides are also identified to possess an all-real linear spectrum. In the nonlinear regime, longitudinally periodic and transversely quasi-localized modes are found for PT-symmetric waveguides both above and below the phase transition. These nonlinear modes are stable under evolution and can develop from weak initial conditions.
http://arxiv.org/abs/1412.6113
Optics (physics.optics); Pattern Formation and Solitons (nlin.PS)
Jin Wang, Hui Yuan Dong, Chi Wai Ling, C. T. Chan, Kin Hung Fung
We find that a new type of non-reciprocal mode exists at an interface between two parity-time (PT) symmetric magnetic domains (MDs) near the frequency of zero effective permeability. The new mode is non-propagating and purely magnetic when the two MDs are semi-infinite, while it becomes propagating in the finite case. In particular, two pronounced nonreciprocal responses could be observed via the excitation of this mode: one-way optical tunneling for oblique incidence and a unidirectional beam shift at normal incidence. When the two-MD system becomes finite in size, it is found that a perfect-transmission mode could be achieved if PT-symmetry is maintained. The unique properties of such an unusual mode are investigated by analytical modal calculation as well as numerical simulations. The results suggest a new approach to the design of compact optical isolators.
http://arxiv.org/abs/1412.5725
Optics (physics.optics)
Jiangbin Gong, Qing-hai Wang
The time evolution of a system with a time-dependent non-Hermitian Hamiltonian is in general unstable with exponential growth or decay. A periodic driving field may stabilize the dynamics because the eigenphases of the associated Floquet operator may become all real. This possibility can emerge for a continuous range of system parameters with subtle domain boundaries. It is further shown that the issue of stability of a driven non-Hermitian Rabi model can be mapped onto the band structure problem of a class of lattice Hamiltonians. As an application, we show how to use the stability of driven non-Hermitian two-level systems (0-dimension in space) to simulate a spectrum analogous to Hofstadter’s butterfly that has played a paradigmatic role in quantum Hall physics. The simulation of the band structure of non-Hermitian superlattice potentials with parity-time reversal symmetry is also briefly discussed.
http://arxiv.org/abs/1412.3549
Quantum Physics (quant-ph)
Andreas Fring
We construct a previously unknown \(E_2\)-quasi-exactly solvable non-Hermitian model whose eigenfunctions involve weakly orthogonal polynomials obeying three-term recurrence relations that factorize beyond the quantization level. The model becomes Hermitian when one of its two parameters is fixed to a specific value. We analyze the double scaling limit of this model leading to the complex Mathieu equation. The norms, Stieltjes measures and moment functionals are evaluated for some concrete values of one of the two parameters.
http://arxiv.org/abs/1412.2800
Quantum Physics (quant-ph); Mathematical Physics (math-ph)
Ali Mostafazadeh
Spectral singularities are certain points of the continuous spectrum of generic complex scattering potentials. We review the recent developments leading to the discovery of their physical meaning, consequences, and generalizations. In particular, we give a simple definition of spectral singularities, provide a general introduction to spectral consequences of PT-symmetry (clarifying some of the controversies surrounding this subject), outline the main ideas and constructions used in the pseudo-Hermitian representation of quantum mechanics, and discuss how spectral singularities entered in the physics literature as obstructions to these constructions. We then review the transfer matrix formulation of scattering theory and the application of complex scattering potentials in optics. These allow us to elucidate the physical content of spectral singularities and describe their optical realizations. Finally, we survey some of the most important results obtained in the subject, drawing special attention to the remarkable fact that the condition of the existence of linear and nonlinear optical spectral singularities yield simple mathematical derivations of some of the basic results of laser physics, namely the laser threshold condition and the linear dependence of the laser output intensity on the gain coefficient.
http://arxiv.org/abs/1412.0454
Quantum Physics (quant-ph); Mathematical Physics (math-ph); Optics (physics.optics)
Mykola Kulishov, H. F. Jones, Bernard Kress
We study the diffraction produced by a PT-symmetric volume Bragg grating that combines modulation of refractive index and gain/loss of the same periodicity with a quarter-period shift between them. Such a complex grating has a directional coupling between the different diffraction orders, which allows us to find an analytic solution for the first three orders of the full Maxwell equations without resorting to the paraxial approximation. This is important, because only with the full equations can the boundary conditions, allowing for reflections, be properly implemented. Using our solution we analyze the properties of such a grating in a wide variety of configurations.
http://arxiv.org/abs/1412.0506
Optics (physics.optics)
F. Battelli, J. Diblik, M. Feckan, J. Pickton, M. Pospisil, H. Susanto
A Parity-Time (PT)-symmetric system with periodically varying-in-time gain and loss modeled by two coupled Schrodinger equations (dimer) is studied. It is shown that the problem can be reduced to a perturbed pendulum-like equation. This is done by finding two constants of motion. Firstly, a generalized problem using Melnikov type analysis and topological degree arguments is studied for showing the existence of periodic (libration), shift periodic (rotation), and chaotic solutions. Then these general results are applied to the PT-symmetric dimer. It is interestingly shown that if a sufficient condition is satisfied, then rotation modes, which do not exist in the dimer with constant gain-loss, will persist. An approximate threshold for PT-broken phase corresponding to the disappearance of bounded solutions is also presented. Numerical study is presented accompanying the analytical results.
http://arxiv.org/abs/1412.0164
Pattern Formation and Solitons (nlin.PS); Quantum Gases (cond-mat.quant-gas); Classical Analysis and ODEs (math.CA); Optics (physics.optics)
H. Jing, Z. Geng, S. K. Özdemir, J. Zhang, X.-Y. Lü, B. Peng, L. Yang, F. Nori
Optomechanically-induced transparency (OMIT) and the associated slow-light propagation provide the basis for storing photons in nanofabricated phononic devices. Here we study OMIT in parity-time (PT)-symmetric microresonators with a tunable gain-to-loss ratio. This system features a reversed, non-amplifying transparency: inverted-OMIT. When the gain-to-loss ratio is steered, the system exhibits a transition from the PT-symmetric phase to the broken-PT-symmetric phase. We show that by tuning the pump power at fixed gain-to-loss ratio or the gain-to-loss ratio at fixed pump power, one can switch from slow to fast light and vice versa. Moreover, the presence of PT-phase transition results in the reversal of the pump and gain dependence of transmission rates. These features provide new tools for controlling light propagation using optomechanical devices.
http://arxiv.org/abs/1411.7115
Quantum Physics (quant-ph); Optics (physics.optics)
H. F. Jones
A popular PT-symmetric optical potential (variation of the refractive index) that supports a variety of interesting and unusual phenomena is the imaginary exponential, the limiting case of the potential \(V_0[\cos(2\pi x/a)+i\lambda\sin(2\pi x/a)]\) as \(\lambda\to1\), the symmetry-breaking point. For \(\lambda<1\), when the spectrum is entirely real, there is a well-known mapping by a similarity transformation to an equivalent Hermitian potential. However, as \(\lambda\to1\), the spectrum, while remaining real, contains Jordan blocks in which eigenvalues and the corresponding eigenfunctions coincide. In this limit the similarity transformation becomes singular. Nonetheless, we show that the mapping from the original potential to its Hermitian counterpart can still be implemented; however, the inverse mapping breaks down. We also illuminate the role of Jordan associated functions in the original problem, showing that they map onto eigenfunctions in the associated Hermitian problem.
http://arxiv.org/abs/1411.6451
Optics (physics.optics); Mathematical Physics (math-ph); Quantum Physics (quant-ph)
|
Suppose that a stock price $S$ follows Geometric Brownian Motion with expected return $\mu$ and volatility $\sigma:$
$$dS = \mu S dt +\sigma S dz$$
How to find out the process followed by variable $S^n$?
How to prove that $S^n$ also follows geometric brownian motion?
The expected value of $S_T$, the stock price at time $T$, is $Se^{\mu(T-t)}$.
What is the expected value of $S^n_T$?
Answer:- The expected value of $S^n_T$ is $S(t)^n e^{[n(r-\delta)+\frac12 n^2\sigma^2]T}$
But I found the answer in some study material on the internet as $S(t)^n e^{[n(r-\delta)+\frac12 n(n-1)\sigma^2]T}$
Would anyone explain to me why there is a difference between my answer and the answer provided by the study material on the internet? Here $r$ is the risk-free interest rate, $\delta$ is the dividend yield on the stock, and $S(t)=e^{Y(t)}$.
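(A short aside, not part of the original post: Itô's lemma makes the process for $S^n$ explicit and shows where the two candidate exponents come from.) Applying Itô's lemma to $S^n$ with $dS = \mu S\,dt + \sigma S\,dz$ gives
$$d(S^n) = nS^{n-1}\,dS + \tfrac{1}{2}n(n-1)S^{n-2}(dS)^2 = \left[n\mu + \tfrac{1}{2}n(n-1)\sigma^2\right]S^n\,dt + n\sigma S^n\,dz,$$
so $S^n$ is again a geometric Brownian motion with drift $n\mu + \frac{1}{2}n(n-1)\sigma^2$ and volatility $n\sigma$. If $\mu = r-\delta$ is the drift of $S$ itself, then $E[S^n_T] = S(t)^n e^{[n(r-\delta)+\frac12 n(n-1)\sigma^2](T-t)}$, which is the study-material answer; the $\frac12 n^2\sigma^2$ version arises instead when $(r-\delta)$ is taken to be the drift of $Y(t)=\ln S(t)$ rather than of $S$.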
For a geometric Brownian motion $S_t$, the expected value of the process at time $t$ given the history of the process up to time $s$, for $s < t$, is
$E[S(t)\mid S(u),\ 0\leq u \leq s]=S(s)\,E[e^{Y(t)-Y(s)}]$
Now the mgf of a normal random variable $W$ is given by
$E[e^{nW}]=\exp[nE(W)+n^2 Var(W)/2]$
Hence, since $Y(t)-Y(s)$ is normal with mean $(r-\delta)(t-s)$ and variance $\sigma^2 (t-s)$, it follows that
$E[e^{n(Y(t)-Y(s))}]=e^{[n(r-\delta)+\frac12 n^2\sigma^2](t-s)}$
Thus we will get final answer to expected value of $S^n_t$
$E[S(t)|S(u),0\leq u \leq s]= E[e^{Y(t)}|Y(u),0\leq u \leq s]$
$L.H.S=E[e^{Y(s)+Y(t)-Y(s)}|Y(u),0\leq u\leq s]$
$L.H.S.=e^{Y(s)}E[e^{Y(t)-Y(s)}|Y(u),0\leq u\leq s]$
$L.H.S.=S(s)E[e^{Y(t)-Y(s)}]$
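As a numeric sanity check (an illustration only, assuming $(r-\delta)$ is the drift of $S$ itself and using parameter values I picked arbitrarily), a small Monte Carlo simulation of $E[S^n_T]$ can be compared against the two candidate formulas:

import numpy as np

# Monte Carlo check of E[S_T^n] for dS = (r - delta) S dt + sigma S dz, with S_0 = 1.
# Exact solution: S_T = exp((r - delta - 0.5*sigma^2) T + sigma*sqrt(T)*Z), Z ~ N(0, 1).
rng = np.random.default_rng(0)
r, delta, sigma, T, n = 0.05, 0.02, 0.3, 1.0, 3
Z = rng.standard_normal(2_000_000)
S_T = np.exp((r - delta - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

mc = np.mean(S_T**n)
formula_nn1 = np.exp((n * (r - delta) + 0.5 * n * (n - 1) * sigma**2) * T)  # study-material answer
formula_n2 = np.exp((n * (r - delta) + 0.5 * n**2 * sigma**2) * T)          # answer in the question
print(mc, formula_nn1, formula_n2)   # the Monte Carlo estimate lands on formula_nn1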
|
Asian Journal of Mathematics Asian J. Math. Volume 15, Number 4 (2011), 611-630. A New Pinching Theorem for Closed Hypersurfaces with Constant Mean Curvature in $S^{n+1}$ Abstract
We investigate the generalized Chern conjecture, and prove that if $M$ is a closed hypersurface in $S^{n+1}$ with constant scalar curvature and constant mean curvature, then there exists an explicit positive constant $C(n)$ depending only on $n$ such that if $|H| < C(n)$ and $S > \beta (n,H)$, then $S > \beta (n,H) + \frac{3n}{7}$, where $\beta(n,H) = n + \frac{n^3 H^2}{2(n-1)} + \frac{n(n-2)}{2(n-1)} \sqrt{n^2 H^4 + 4(n - 1)H^2}$.
Article information Source Asian J. Math., Volume 15, Number 4 (2011), 611-630. Dates First available in Project Euclid: 12 March 2012 Permanent link to this document https://projecteuclid.org/euclid.ajm/1331583350 Mathematical Reviews number (MathSciNet) MR2853651 Zentralblatt MATH identifier 1243.53104 Citation
Xu, Hong-Wei; Tian, Ling. A New Pinching Theorem for Closed Hypersurfaces with Constant Mean Curvature in $S^{n+1}$. Asian J. Math. 15 (2011), no. 4, 611--630. https://projecteuclid.org/euclid.ajm/1331583350
|
Astrophysics > High Energy Astrophysical Phenomena
Title: Filaments of Galaxies as a Clue to the Origin of Ultra-High-Energy Cosmic Rays
(Submitted on 3 Jan 2019)
Abstract: Ultra-high-energy cosmic rays (UHECRs) are known to come from outside of our Galaxy, but their origin still remains unknown. The Telescope Array (TA) experiment recently identified a high concentration in the arrival directions of UHECRs with energies above $5.7 \times 10^{19}$ eV, called the hotspot. We here report the presence of filaments of galaxies, connected to the Virgo Cluster, on the sky around the hotspot, and a statistically significant correlation between hotspot events and the filaments. With 5-year TA data, the maximum significance of binomial statistics for the correlation is estimated to be 6.1 $\sigma$ at a correlation angle of 3.4 degrees. The probability that the above significance appears by chance is $\sim 2.0 \times 10^{-8}$ (5.6 $\sigma$). Based on this finding, we suggest a model for the origin of TA hotspot UHECRs; they are produced at sources in the Virgo Cluster, and escape to and propagate along filaments, before they are scattered toward us. This picture requires filament magnetic fields of strength $\gtrsim 20$ nG, which need to be confirmed in future observations.
|
Now showing items 1-10 of 19
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV
(American Physical Society, 2017-06)
The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...
Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...
Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC
(Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC
(Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
Measurement of azimuthal correlations of D mesons and charged particles in pp collisions at $\sqrt{s}=7$ TeV and p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-04)
The azimuthal correlations of D mesons and charged particles were measured with the ALICE detector in pp collisions at $\sqrt{s}=7$ TeV and p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV at the Large Hadron Collider. ...
Production of $\pi^0$ and $\eta$ mesons up to high transverse momentum in pp collisions at 2.76 TeV
(Springer, 2017-05)
The invariant differential cross sections for inclusive $\pi^{0}$ and $\eta$ mesons at midrapidity were measured in pp collisions at $\sqrt{s}=2.76$ TeV for transverse momenta $0.4<p_{\rm T}<40$ GeV/$c$ and $0.6<p_{\rm ...
Measurement of the production of high-$p_{\rm T}$ electrons from heavy-flavour hadron decays in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(Elsevier, 2017-08)
Electrons from heavy-flavour hadron decays (charm and beauty) were measured with the ALICE detector in Pb–Pb collisions at a centre-of-mass of energy $\sqrt{s_{\rm NN}}=2.76$ TeV. The transverse momentum ($p_{\rm T}$) ...
Flow dominance and factorization of transverse momentum correlations in Pb-Pb collisions at the LHC
(American Physical Society, 2017-04)
We present the first measurement of the two-particle transverse momentum differential correlation function, $P_2\equiv\langle \Delta p_{\rm T} \Delta p_{\rm T} \rangle /\langle p_{\rm T} \rangle^2$, in Pb-Pb collisions at ...
|
Filevych P. V. The coefficients of power expansion and $a$-points of an entire function with Borel exceptional value
Ukr. Mat. Zh. - 2016. - 68, № 2. - pp. 147-155
For entire functions with Borel exceptional values, we establish the relationship between the rate of approaching $\infty$ for the sequence of their $a$-points and the rate of approaching 0 for the sequence of their Taylor coefficients.
Ukr. Mat. Zh. - 2015. - 67, № 6. - pp. 739–751
For the entire Dirichlet series $f(z) = \sum_{n = 0}^{\infty} a_n e^{z\lambda_n}$, we establish necessary and sufficient conditions on the coefficients $a_n$ and exponents $\lambda_n$ under which the function $f$ has the Paley effect, i.e., the condition $$\underset{r\to +\infty }{ \lim \sup}\frac{ \ln {M}_f(r)}{T_f(r)}=+\infty$$ is satisfied, where $M_f (r)$ and $T_f (r)$ are the maximum modulus and the Nevanlinna characteristic of the function $f$, respectively.
Ukr. Mat. Zh. - 2003. - 55, № 6. - pp. 840-849
We establish a necessary and sufficient condition for the coefficients $a_n$ of an entire function $f(z) = \sum_{n = 0}^{\infty} a_n z^n$ under which its central index and the logarithms of the maximum of the modulus and the maximum term are regularly varying functions. We construct an entire function the logarithm of the maximum of whose modulus is a regularly varying function, whereas the Nevanlinna characteristic function is not a regularly varying function.
Ukr. Mat. Zh. - 2002. - 54, № 8. - pp. 1149-1153
Let $M_f(r)$ and $\mu_f(r)$ be, respectively, the maximum of the modulus and the maximum term of an entire function $f$, and let $\Phi$ be a continuously differentiable function convex on $(-\infty, +\infty)$ and such that $x = o(\Phi(x))$ as $x \to +\infty$. We establish that, in order that the equality $\lim \inf \limits_{r \to + \infty} \frac{\ln M_f (r)}{\Phi (\ln r)} = \lim \inf \limits_{r \to + \infty} \frac{\ln \mu_f (r)}{\Phi (\ln r)}$ be true for any entire function $f$, it is necessary and sufficient that $\ln \Phi'(x) = o(\Phi(x))$ as $x \to +\infty$.
Ukr. Mat. Zh. - 2001. - 53, № 4. - pp. 522-530
Let $M_f(r)$ and $\mu_f(r)$ be, respectively, the maximum of the modulus and the maximum term of an entire function $f$, and let $l(r)$ be a continuously differentiable function convex with respect to $\ln r$. We establish that, in order that $\ln M_f(r) \sim \ln \mu_f(r)$, $r \to +\infty$, for every entire function $f$ such that $\mu_f(r) \sim l(r)$, $r \to +\infty$, it is necessary and sufficient that $\ln(r l'(r)) = o(l(r))$, $r \to +\infty$.
Ukr. Mat. Zh. - 2001. - 53, № 2. - pp. 286-288
We obtain an exact estimate for the measure of the exceptional set in the Borel relation for entire functions.
Ukr. Mat. Zh. - 2000. - 52, № 10. - pp. 1431-1434
We establish that, for the “majority” of entire functions of finite order, their generalized Phragmén–Lindelöf indicators are identically equal to constants.
Ukr. Mat. Zh. - 1998. - 50, № 11. - pp. 1578–1580
An estimate exact in a certain sense is obtained for the value of the exceptional set in the Borel relation for entire functions
|
First: I am not a mathematician but a philosopher. I understand why the universal instantiation rule works.
$\frac{\vdash\forall xA}{\vdash A^x_t}$
But is there actually a serious proof of it in a logical proof system (Hilbert style, etc.)? I personally can't find the critical step for getting rid of the universal quantifier from one line to the next.
An idea of a syntactical proof:
1: $\vdash\forall x A$
2: $\vdash\forall x A\to A$
3: $\vdash A$
The first is the assumption, the second is a fact which I found in a book and which seems serious and the third line is modus ponens on the first two. But now, I have got the same result, but without the substitution in A?
|
I have the equation $ t\sin (t^2) = 0.22984$. I solved this with a graphing calculator, but is there any way to solve for $ t$ without graphing?
Thanks!
Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. It only takes a minute to sign up.Sign up to join this community
Using a Taylor series, $\sin(x)$ can be written as
$\sin(x)\approx x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \ldots$
Replacing $x$ for $t^2$ gives:
$\sin(t^2)\approx t^2 - \frac{t^6}{3!} + \frac{t^{10}}{5!} - \frac{t^{14}}{7!} + \ldots$
Plugging this into your original equation gives:
$t\sin(t^2)=0.22984$
$=t^3 - \frac{t^7}{3!} + \frac{t^{11}}{5!} - \frac{t^{15}}{7!} + \ldots=0.22984$
So you can see why solving this in a closed-form sense might be difficult.
That said, it's reasonable to think that there might be a value of $t$ less than one, in which case you can try neglecting the higher level terms (this is the small angle approximation).
This gives
$t^3=0.22984$
$t=0.61255046092664577035$
Graphically, you find a root at $\sim0.617544$. The difference is 0.8%.
The best you can hope for in this situation is to be able to calculate the solution out to as many digits as you are asked for. There are numerical methods to do just that, for instance Newton's Method.
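For example, a minimal Newton's-method sketch for this particular equation (illustrative only; the starting guess comes from the small-angle estimate $t^3 \approx 0.22984$ used above):

import math

def f(t):
    return t * math.sin(t**2) - 0.22984

def fprime(t):
    # d/dt [t*sin(t^2)] = sin(t^2) + 2*t^2*cos(t^2)
    return math.sin(t**2) + 2 * t**2 * math.cos(t**2)

t = 0.22984 ** (1 / 3)      # initial guess from t^3 ≈ 0.22984
for _ in range(20):
    step = f(t) / fprime(t)
    t -= step
    if abs(step) < 1e-12:
        break

print(t)   # ≈ 0.617544, matching the graphical root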
As is said, there is no closed form solution to this equation. No formula if you prefer.
In such cases, numerical methods are used, which means that different values for $t$ are tried, using specific strategies to get closer and closer to the solution.
It is useful to carry out the study of the function to get a rough idea where the solutions can be.
In this case, if you rewrite it as $\sin(p)=\frac{0.22984}{\sqrt p}$ with $p=t^2$, to make more familiar functions appear, you see that you intersect a sinusoid with a kind of hyperbola having a horizontal asymptote.
The first root can also be estimated from $sin(p)\approx p$ (for small $p$), so that $t\sin(t^2)\approx t^3$, and $t\approx\sqrt[3]{0.22984}=0.61255$. (Check: $0.61255\sin(0.61255^2)=0.22448$).
For large $p$, the equation becomes $sin(p)\approx 0$, meaning that you will find infinitely many solutions close to $p=k\pi$, i.e. $t=\sqrt{k\pi}$.
A yet better approximation is obtained by setting $p=q+k\pi$ (for small $q$), so that $sin(q+k\pi)\approx \pm q\approx\pm\frac{0.22984}{\sqrt{k\pi}}$ ($+$ for even $k$, $-$ for odd), i.e. $p\approx(-1)^k\frac{0.22984}{\sqrt{k\pi}}+k\pi$, and $t\approx\sqrt{(-1)^k\frac{0.22984}{\sqrt{k\pi}}+k\pi}$.
There could be additional solutions in between.
|
I don't know how it is achieved, but here is the reason:
If you have two sources of the same noise, with nearly the same frequency, the sum of both noises will be a noise of slowly increasing and decreasing volume. This is called a beat and can become very annoying.
The math says
$$\sin(2\pi f_1t)+\sin(2\pi f_2t)=2\cdot\sin\left(2\pi \frac{f_1+f_2}{2}t\right)\cdot\cos\left(2\pi \frac{f_1-f_2}{2}t\right)$$
To demonstrate this, there you see two sinus-tones of almost same frequency, and what happens if you mix them:
The blue curve is of frequency $(f_1-f_2)/2$.
As an example, one machine running at 3000 rpm and one at 3030 rpm results in a noise which increases for one second before it decreases within one second again.
As said, I don't know how it is done, but synchronization must be done very precisely to avoid this beat.
Edit:
Here is what happens if the two noises do not have the same volume. One of the curves has three times the amplitude of the other. The envelope is not a pure sin function, but the blue function fits it quite well.
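As a rough numeric illustration of the rpm example above (a sketch; the sampling rate and duration are arbitrary choices of mine):

import numpy as np

# Two equal-amplitude tones at 50 Hz and 50.5 Hz (3000 rpm and 3030 rpm).
# Their sum is bounded by a slow envelope at |f1 - f2|/2 = 0.25 Hz, so the
# perceived loudness rises and falls once every 2 seconds.
fs = 8000                                   # samples per second
t = np.arange(0, 4, 1 / fs)                 # four seconds of signal
f1, f2 = 50.0, 50.5
s = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

envelope = 2 * np.abs(np.cos(2 * np.pi * (f1 - f2) / 2 * t))
print(round(np.max(np.abs(s)), 3))                  # close to 2 at the envelope peaks
print(bool(np.all(np.abs(s) <= envelope + 1e-9)))   # True: the sum never leaves the envelope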
|
Preprints (rote Reihe) des Fachbereich Mathematik
238
Despite their very good empirical performance, most of the simplex algorithm's variants require exponentially many pivot steps in terms of the problem dimensions of the given linear programming problem (LPP) in the worst-case situation. The first to explain the large gap between practical experience and the disappointing worst case was Borgwardt (1982a,b), who could prove polynomiality on the average for a certain variant of the algorithm - the "Schatteneckenalgorithmus (shadow vertex algorithm)" - using a stochastic problem simulation.
223
A Simple Integral Representation for the Second Moments of Additive Random Variables on Stochastic Polyhedra (1992)
Let \(a_i\), \(i=1,\dots,m\), be an i.i.d. sequence taking values in \(\mathbb{R}^n\), whose convex hull is interpreted as a stochastic polyhedron \(P\). For a special class of random variables, which decompose additively relative to their boundary simplices, e.g. the volume of \(P\), simple integral representations of its first two moments are given in the case of rotationally symmetric distributions in order to facilitate estimations of variances or to quantify large deviations from the mean.
276
Let \(a_1,\dots,a_n\) be independent random points in \(\mathbb{R}^d\), spherically symmetrically but not necessarily identically distributed. Let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_n\) and for any \(k\)-dimensional subspace \(L\subseteq \mathbb{R}^d\) let \(Vol_L(X) :=\lambda_k(L\cap X)\) be the volume of \(X\cap L\) with respect to the \(k\)-dimensional Lebesgue measure \(\lambda_k\), \(k=1,\dots,d\). Furthermore, let \(F^{(i)}(t):= \mathbf{Pr}(\Vert a_i \Vert_2\leq t)\), \(t \in \mathbb{R}^+_0\), be the radial distribution function of \(a_i\). We prove that the expectation functional \(\Phi_L(F^{(1)}, F^{(2)},\dots, F^{(n)}) := E(Vol_L(X))\) is strictly decreasing in each argument, i.e. if \(F^{(i)}(t) \le G^{(i)}(t)\), \(t \in \mathbb{R}^+_0\), but \(F^{(i)} \not\equiv G^{(i)}\), we show \(\Phi(\dots, F^{(i)}, \dots) > \Phi(\dots,G^{(i)},\dots)\). The proof is done in the more general framework of continuous and \(f\)-additive polytope functionals.
245
Let \(A:= \{a_i\mid i= 1,\dots,m\}\) be an i.i.d. random sample in \(\mathbb{R}^n\), which we consider a random polyhedron, either as the convex hull of the \(a_i\) or as the intersection of halfspaces \(\{x \mid a^T_i x\leq 1\}\). We introduce a class of polyhedral functionals we will call "additive-type functionals", which covers a number of polyhedral functionals discussed in different mathematical fields, where the emphasis in our contribution will be on those, which arise in linear optimization theory. The class of additive-type functionals is a suitable setting in order to unify and to simplify the asymptotic probabilistic analysis of first and second moments of polyhedral functionals. We provide examples of asymptotic results on expectations and on variances.
262
An improved asymptotic analysis of the expected number of pivot steps required by the simplex algorithm (1995)
Let \(a_1,\dots,a_m\) be i.i.d. vectors uniform on the unit sphere in \(\mathbb{R}^n\), \(m\ge n\ge3\), and let \(X:= \{x \in \mathbb{R}^n \mid a^T_i x\leq 1,\ i=1,\dots,m\}\) be the random polyhedron generated by \(a_1,\dots,a_m\). Furthermore, for linearly independent vectors \(u\), \(\bar u\) in \(\mathbb{R}^n\), let \(S_{u, \bar u}(X)\) be the number of shadow vertices of \(X\) in \(span(u, \bar u)\). The paper provides an asymptotic expansion of the expectation value \(E (S_{u, \bar u})\) for fixed \(n\) and \(m\to\infty\). The first terms of the expansion are given explicitly. Our investigation of \(E (S_{u, \bar u})\) is closely connected to Borgwardt's probabilistic analysis of the shadow vertex algorithm - a parametric variant of the simplex algorithm. We obtain an improved asymptotic upper bound for the number of pivot steps required by the shadow vertex algorithm for data distributed uniformly on the sphere.
250
Let \((a_i)_{i\in \mathbb{N}}\) be a sequence of identically and independently distributed random vectors drawn from the \(d\)-dimensional unit ball \(B^d\) and let \(X_n := \mathrm{convhull}(a_1,\dots,a_n)\) be the random polytope generated by \((a_1,\dots,a_n)\). Furthermore, let \(\Delta (X_n) := \mathrm{Vol}(B^d \setminus X_n)\) be the deviation of the polytope's volume from the volume of the ball. For uniformly distributed \(a_i\) and \(d\ge2\), we prove that the limiting distribution of \(\frac{\Delta (X_n)} {E(\Delta (X_n))}\) for \(n\to\infty\) satisfies a 0-1-law. Especially, we provide precise information about the asymptotic behaviour of the variance of \(\Delta (X_n)\). We deliver analogous results for spherically symmetric distributions in \(B^d\) with regularly varying tail.
282
Let \(a_1,\dots,a_m\) be independent random points in \(\mathbb{R}^n\) that are independent and identically distributed spherically symmetrical in \(\mathbb{R}^n\). Moreover, let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_m\) and let \(L_k\) be an arbitrary \(k\)-dimensional subspace of \(\mathbb{R}^n\) with \(2\le k\le n-1\). Let \(X_k\) be the orthogonal projection image of \(X\) in \(L_k\). We call those vertices of \(X\), whose projection images in \(L_k\) are vertices of \(X_k\)as well shadow vertices of \(X\) with respect to the subspace \(L_k\) . We derive a distribution independent sharp upper bound for the expected number of shadow vertices of \(X\) in \(L_k\).
233
Let \(a_i\), \(i= 1,\dots,m\), be an i.i.d. sequence taking values in \(\mathbb{R}^n\), whose convex hull is interpreted as a stochastic polyhedron \(P\). For a special class of random variables which decompose additively relative to their boundary simplices, e.g. the volume of \(P\), integral representations of their first two moments are given which lead to asymptotic estimations of variances for special "additive variables" known from stochastic approximation theory in the case of rotationally symmetric distributions.
248
The article provides an asymptotic probabilistic analysis of the variance of the number of pivot steps required by phase II of the "shadow vertex algorithm" - a parametric variant of the simplex algorithm, which has been proposed by Borgwardt [1] . The analysis is done for data which satisfy a rotationally invariant distribution law in the \(n\)-dimensional unit ball.
|
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2019"
Thus far, all of the information in our word vectors has come solely from co-occurrence patterns in text. This information is often very easy to obtain – though one does need a lot of text – and it is striking how rich the resulting representations can be.
Nonetheless, it seems clear that there is important information that we will miss this way – relationships that just aren't encoded at all in co-occurrences or that get distorted by such patterns.
For example, it is probably straightforward to learn representations that will support the inference that all puppies are dogs (puppy entails dog), but it might be difficult to learn that dog entails mammal because of the unusual way that very broad taxonomic terms like mammal are used in text.
The question then arises: how can we bring structured information – labels – into our representations? If we can do that, then we might get the best of both worlds: the ease of using co-occurrence data and the refinement that comes from using labeled data.
In this notebook, we look at one powerful method for doing this: the retrofitting model of Faruqui et al. 2016. In this model, one learns (or just downloads) distributed representations for nodes in a knowledge graph and then updates those representations to bring connected nodes closer to each other.
This is an incredibly fertile idea; the final section of the notebook reviews some recent extensions, and new ones are likely appearing all the time.
%matplotlib inline

from collections import defaultdict
from nltk.corpus import wordnet as wn
import numpy as np
import os
import pandas as pd

import retrofitting
from retrofitting import Retrofitter
import utils
data_home = 'data'
For an existing VSM $\widehat{Q}$ of dimension $m \times n$, and a set of edges $E$ (pairs of indices into rows in $\widehat{Q}$), the retrofitting objective is to obtain a new VSM $Q$ (also of dimension $m \times n$) according to the following objective:

$$\sum_{i=1}^{m} \left[ \alpha_{i}\|q_{i} - \widehat{q}_{i}\|_{2}^{2} + \sum_{j : (i,j) \in E}\beta_{ij}\|q_{i} - q_{j}\|_{2}^{2} \right]$$
The left term encodes a pressure to stay like the original vector. The right term encodes a pressure to be more like one's neighbors. In minimizing this objective, we should be able to strike a balance between old and new, VSM and graph.
Definitions:

$\|u - v\|_{2}^{2}$ gives the squared euclidean distance from $u$ to $v$.

$\alpha$ and $\beta$ are weights we set by hand, controlling the relative strength of the two pressures. In the paper, they use $\alpha_{i}=1$ and $\beta_{ij} = \frac{1}{|\{j : (i, j) \in E\}|}$.
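A minimal sketch of the iterative update that follows from this objective (illustrative only; the course's actual implementation is the Retrofitter class used below, and retrofit_step is a name I made up). Setting the gradient with respect to $q_i$ to zero gives the update $q_i = \big(\alpha_i \widehat{q}_i + \sum_j \beta_{ij} q_j\big) / \big(\alpha_i + \sum_j \beta_{ij}\big)$, which can be iterated to convergence:

import numpy as np

def retrofit_step(Q, Q_hat, edges):
    # One Jacobi-style pass: each vector moves toward the average of its neighbors
    # while being pulled back toward its original value. Here alpha_i = 1 and
    # beta_ij = 1 / (number of neighbors of i), as in the paper.
    Q_new = Q.copy()
    for i, neighbors in edges.items():
        if not neighbors:
            continue                          # nodes with no neighbors keep their vector
        beta = 1.0 / len(neighbors)
        neighbor_sum = beta * Q[list(neighbors)].sum(axis=0)
        Q_new[i] = (Q_hat[i] + neighbor_sum) / (1.0 + beta * len(neighbors))
    return Q_new

# Tiny example mirroring Q_hat below: node 0 is pulled toward nodes 1 and 2.
Q_hat_arr = np.array([[0.0, 0.0], [0.0, 0.5], [0.5, 0.0]])
edges = {0: {1, 2}, 1: set(), 2: set()}
Q = Q_hat_arr.copy()
for _ in range(50):
    Q = retrofit_step(Q, Q_hat_arr, edges)
print(Q.round(3))   # node 0 settles at [0.125, 0.125]; nodes 1 and 2 stay put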
To get a feel for what's happening, it's helpful to visualize the changes that occur in small, easily understood VSMs and graphs. The function retrofitting.plot_retro_path helps with this.
Q_hat = pd.DataFrame(
    [[0.0, 0.0],
     [0.0, 0.5],
     [0.5, 0.0]],
    columns=['x', 'y'])
Q_hat
     x    y
0  0.0  0.0
1  0.0  0.5
2  0.5  0.0
edges_0 = {0: {1, 2}, 1: set(), 2: set()}
_ = retrofitting.plot_retro_path(Q_hat, edges_0)
edges_all = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
_ = retrofitting.plot_retro_path(Q_hat, edges_all)
edges_isolated = {0: {1, 2}, 1: {0, 2}, 2: set()}
_ = retrofitting.plot_retro_path(Q_hat, edges_isolated)
_ = retrofitting.plot_retro_path(
    Q_hat, edges_all,
    retrofitter=Retrofitter(alpha=lambda x: 0))
Faruqui et al. conduct experiments on three knowledge graphs: WordNet, FrameNet, and the Penn Paraphrase Database (PPDB). The repository for their paper includes the graphs that they derived for their experiments.
Here, we'll reproduce just one of the two WordNet experiments they report, in which the graph is formed based on synonymy.
WordNet is an incredible, hand-built lexical resource capturing a wealth of information about English words and their inter-relationships. (Here is a collection of WordNets in other languages.) For a detailed overview using NLTK, see this tutorial.
The core concepts:
A lemma is something like our usual notion of word. Lemmas are highly sense-disambiguated. For instance, there are six lemmas that are consistent with the string crane: the bird, the machine, the poets, ...
A synset is a collection of lemmas that are synonymous in the WordNet sense (which is WordNet-specific; words with intuitively different meanings might still be grouped together into synsets).
WordNet is a graph of relations between lemmas and between synsets, capturing things like hypernymy, antonymy, and many others. For the most part, the relations are defined between nouns; the graph is sparser for other areas of the lexicon.
lems = wn.lemmas('crane', pos=None)
for lem in lems:
    ss = lem.synset()
    print("="*70)
    print("Lemma name: {}".format(lem.name()))
    print("Lemma Synset: {}".format(ss))
    print("Synset definition: {}".format(ss.definition()))
======================================================================
Lemma name: Crane
Lemma Synset: Synset('crane.n.01')
Synset definition: United States writer (1871-1900)
======================================================================
Lemma name: Crane
Lemma Synset: Synset('crane.n.02')
Synset definition: United States poet (1899-1932)
======================================================================
Lemma name: Crane
Lemma Synset: Synset('grus.n.01')
Synset definition: a small constellation in the southern hemisphere near Phoenix
======================================================================
Lemma name: crane
Lemma Synset: Synset('crane.n.04')
Synset definition: lifts and moves heavy objects; lifting tackle is suspended from a pivoted boom that rotates around a vertical axis
======================================================================
Lemma name: crane
Lemma Synset: Synset('crane.n.05')
Synset definition: large long-necked wading bird of marshes and plains in many parts of the world
======================================================================
Lemma name: crane
Lemma Synset: Synset('crane.v.01')
Synset definition: stretch (the neck) so as to see better
A central challenge of working with WordNet is that one doesn't usually encounter lemmas or synsets in the wild. One probably gets just strings, or maybe strings with part-of-speech tags. Mapping these objects to lemmas is incredibly difficult.
For our experiments with VSMs, we simply collapse together all the senses that a given string can have. This is expedient, of course. It might also be a good choice linguistically: senses are flexible and thus hard to individuate, and we might hope that our vectors can model multiple senses at the same time.
The following code uses the NLTK WordNet API to create the edge dictionary we need for using the Retrofitter class:
def get_wordnet_edges():
    edges = defaultdict(set)
    for ss in wn.all_synsets():
        lem_names = {lem.name() for lem in ss.lemmas()}
        for lem in lem_names:
            edges[lem] |= lem_names
    return edges
wn_edges = get_wordnet_edges()
For our VSM, let's use the 300d file included in this distribution from the GloVe team, as it is close to or identical to the one used in the paper:
If you download this archive, place it in vsmdata, and unpack it, then the following will load the file into a dictionary for you:
glove_dict = utils.glove2dict(
    os.path.join(data_home, 'glove.6B', 'glove.6B.300d.txt'))
This is the initial embedding space $\widehat{Q}$:
X_glove = pd.DataFrame(glove_dict).T
X_glove.T.shape
(300, 400000)
Now we just need to replace all of the strings in edges with indices into X_glove:
def convert_edges_to_indices(edges, Q):
    lookup = dict(zip(Q.index, range(Q.shape[0])))
    index_edges = defaultdict(set)
    for start, finish_nodes in edges.items():
        s = lookup.get(start)
        if s:
            f = {lookup[n] for n in finish_nodes if n in lookup}
            if f:
                index_edges[s] = f
    return index_edges
wn_index_edges = convert_edges_to_indices(wn_edges, X_glove)
And now we can retrofit:
wn_retro = Retrofitter(verbose=True)
X_retro = wn_retro.fit(X_glove, wn_index_edges)
Converged at iteration 10; change was 0.0043
# Optionally write `X_retro` to disk for use elsewhere:
#
# X_retro.to_csv(
#     os.path.join(data_home, 'glove6B300d-retrofit-wn.csv.gz'),
#     compression='gzip')
Here are the results I obtained, using the same metrics as in Table 2 of Faruqui et al. 2016:
VSM                  WS-353   MEN-3K   MTurk-287   MTurk-771
Original GloVe       0.601    0.749    0.633       0.650
Retro GloVe          0.613    0.760    0.640       0.636
Performance change   +0.012   +0.011   +0.007      -0.014
The retrofitting idea is very close to graph embedding, in which one learns distributed representations of nodes based on their position in the graph. See Hamilton et al. 2017 for an overview of these methods. There are numerous parallels with the material we've reviewed here.
If you think of the input VSM as a "warm start" for graph embedding algorithms, then you're essentially retrofitting. This connection opens up a number of new opportunities to go beyond the similarity-based semantics that underlies Faruqui et al.'s model. See Lengerich et al. 2017, section 3.2, for more on these connections.
Mrkšić et al. 2016 address the limitation of Faruqui et al's model that it assumes connected nodes in the graph are similar. In a graph with complex, varied edge semantics, this is likely to be false. They address the case of antonymy in particular.
Lengerich et al. 2017 present a functional retrofitting framework in which the edge meanings are explicitly modeled, and they evaluate instantiations of the framework with linear and neural edge penalty functions. (The Faruqui et al. model emerges as a specific instantiation of this framework.)
|
Advances in Differential Equations Adv. Differential Equations Volume 16, Number 5/6 (2011), 487-522. Local and global properties of solutions of heat equation with superlinear absorption Abstract
We study the limit when $k\to\infty$ of the solutions of $ \partial_tu-\Delta u+f(u)=0$ in $\mathbb R^N\times (0,\infty)$ with initial data $k\delta$, when $f$ is a positive superlinear increasing function. We prove that there exist essentially three types of possible behaviour according to whether $f^{-1}$ and $F^{-1/2}$ belong or not to $L^1(1,\infty)$, where $F(t)=\int_0^t f(s)ds$. We use these results for providing a new and more general construction of the initial trace and some uniqueness and nonuniqueness results for solutions with unbounded initial data.
Article information Source Adv. Differential Equations, Volume 16, Number 5/6 (2011), 487-522. Dates First available in Project Euclid: 17 December 2012 Permanent link to this document https://projecteuclid.org/euclid.ade/1355703298 Mathematical Reviews number (MathSciNet) MR2816114 Zentralblatt MATH identifier 1222.35109 Subjects Primary: 35K58: Semilinear parabolic equations 35K91: Semilinear parabolic equations with Laplacian, bi-Laplacian or poly- Laplacian 35K15: Initial value problems for second-order parabolic equations Citation
Phuoc, Tai Nguyen; Véron, Laurent. Local and global properties of solutions of heat equation with superlinear absorption. Adv. Differential Equations 16 (2011), no. 5/6, 487--522. https://projecteuclid.org/euclid.ade/1355703298
|
For the calculation of the $$n$$-th power of a complex number expressed in trigonometric form let's see what the expression will be. We consider the product of $$n$$ complex numbers in trigonometric form:
$$$\big( |z|\cdot[\cos(\alpha)+i \cdot\sin(\alpha)] \big)^n=$$$
$$$= \big( |z|\cdot[\cos(\alpha)+i \cdot\sin(\alpha)] \big) \stackrel{(n)}{\cdots} \big( |z|\cdot[\cos(\alpha)+i \cdot\sin(\alpha)] \big)=$$$
$$$=|z|^n \cdot[\cos(n\alpha)+i \cdot\sin(n\alpha)]$$$ since multiplying two complex numbers in trigonometric form multiplies their norms and adds their arguments. This formula is the one that gives the $$n$$-th power of a complex number in trigonometric form and is due to De Moivre.
Let's see an example: $$$ \displaystyle \begin{array}{rl} \big( 5\cdot[\cos(60^\circ)+i \cdot\sin(60^\circ)] \big)^3 =& 5^3 \cdot[\cos(3\cdot60^\circ)+i \cdot\sin(3\cdot60^\circ)] \\ =& 125 \cdot[\cos(180^\circ)+i \cdot\sin(180^\circ)] \end{array} $$$
Once the exponentiation is done, we can study the roots.
Given a complex number, any other complex number that is raised to the n-th exponent which gives an equal result to this first one, is said to be its $$n$$-th root.
Let's see whether, given any complex number whose norm and argument are given by $$R$$ and $$\phi$$ respectively, it always has $$n$$-th roots.
From the definition the condition so that $$|z|\cdot[\cos(\alpha)+i \cdot\sin(\alpha)]$$ is an $$n$$-th root, is: $$$ \big( |z|\cdot[\cos(\alpha)+i \cdot\sin(\alpha)] \big)^n = R\cdot [\cos(\phi)+i \cdot\sin(\phi)]$$$
The two numbers represented by the first and second member of this equality have to be equal, that is, they will have to have the same norm and its arguments will have to differ in a multiple of $$360^\circ$$:
$$$ |z|^n=R \quad \text{ and } \quad n\alpha=\phi+k\cdot 360^\circ$$$
The norm $$|z|$$ is perfectly determined by the first of these equations, since it has to be a positive number and with its $$n$$-th power equal to $$R$$, whereby we conclude that:
$$$|z|=\sqrt[n]{R}$$$ This way, $$|z|$$ is the arithmetic $$n$$-th root of $$R$$.
As for the argument $$\alpha$$, the second equation gives us: $$$\alpha=\dfrac{\phi}{n}+\dfrac{k\cdot 360^\circ}{n}$$$
At first sight it might seem that there are infinitely many values, but in fact only $$k = 0$$ up to $$k = n-1$$ give different arguments; for larger $$k$$ we obtain again the same complex numbers that we had already found.
In short, any non zero complex number $$R\cdot [\cos(\phi)+i \cdot\sin(\phi)]$$ has $$n$$ different $$n$$-th roots, which norm is the same for all, (it is equal to the arithmetic $$n$$-th root of the norm $$R$$) and which arguments (except for multiples of $$360^\circ$$) are:
$$$ \dfrac{\phi}{n}, \ \dfrac{\phi+360^\circ}{n}, \ \dfrac{\phi+2\cdot 360^\circ}{n}, \ \dfrac{\phi+3\cdot 360^\circ}{n}, \ \dots \ , \ \dfrac{\phi+(n-1)\cdot 360^\circ}{n}$$$
Let's calculate the fourth roots of: $$$4\cdot[\cos(60^\circ)+i \cdot\sin(60^\circ)]$$$ We look for $$$ |z|^4=4 \ \Rightarrow \ |z|=\sqrt[4]{4}=\sqrt{2}$$$ And the arguments will be: $$$ \dfrac{60^\circ}{4}, \ \dfrac{60^\circ+360^\circ}{4}, \ \dfrac{60^\circ+2\cdot 360^\circ}{4}, \ \dfrac{60^\circ+3\cdot 360^\circ}{4}$$$
As we can see this method is very similar to the one used with the polar form of the complex numbers.
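A quick numeric check of the fourth-roots example above (a sketch using Python's standard cmath module; the variable names are mine):

import cmath
import math

# Fourth roots of z = 4*(cos 60° + i sin 60°): each root has norm 4**(1/4) = sqrt(2),
# the arguments are 15°, 105°, 195° and 285°, and raising any root to the 4th power
# recovers z.
R, phi_deg, n = 4.0, 60.0, 4
z = R * cmath.exp(1j * math.radians(phi_deg))

roots = [R ** (1 / n) * cmath.exp(1j * math.radians((phi_deg + k * 360) / n))
         for k in range(n)]
for k, w in enumerate(roots):
    print(k, round(abs(w), 6),
          round(math.degrees(cmath.phase(w)) % 360, 1),
          cmath.isclose(w**n, z))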
|
Preprints (rote Reihe) des Fachbereich Mathematik, Year of publication: 1993 (13)
240
244
242
Efficient algorithms and structural results are presented for median problems with 2 new facilities including the classical 2-Median problem, the 2-Median problem with forbidden regions and bicriterial 2-Median problems. This is the first paper dealing with multi-facility multiobjective location problems. The time complexity of all presented algorithms is O(MlogM), where M is the number of existing facilities.
238
Despite their very good empirical performance, most of the simplex algorithm's variants require exponentially many pivot steps in terms of the problem dimensions of the given linear programming problem (LPP) in the worst-case situation. The first to explain the large gap between practical experience and the disappointing worst case was Borgwardt (1982a,b), who could prove polynomiality on the average for a certain variant of the algorithm - the "Schatteneckenalgorithmus (shadow vertex algorithm)" - using a stochastic problem simulation.
245
Let \(A:= \{a_i\mid i= 1,\dots,m\}\) be an i.i.d. random sample in \(\mathbb{R}^n\), which we consider a random polyhedron, either as the convex hull of the \(a_i\) or as the intersection of halfspaces \(\{x \mid a^T_i x\leq 1\}\). We introduce a class of polyhedral functionals we will call "additive-type functionals", which covers a number of polyhedral functionals discussed in different mathematical fields, where the emphasis in our contribution will be on those, which arise in linear optimization theory. The class of additive-type functionals is a suitable setting in order to unify and to simplify the asymptotic probabilistic analysis of first and second moments of polyhedral functionals. We provide examples of asymptotic results on expectations and on variances.
248
The article provides an asymptotic probabilistic analysis of the variance of the number of pivot steps required by phase II of the "shadow vertex algorithm" - a parametric variant of the simplex algorithm, which has been proposed by Borgwardt [1] . The analysis is done for data which satisfy a rotationally invariant distribution law in the \(n\)-dimensional unit ball.
246
Max ordering (MO) optimization is introduced as a tool for modelling production planning with unknown lot sizes and in scenario modelling. In MO optimization a feasible solution set \(X\) and, for each \(x\in X, Q\) individual objective functions \(f_1(x),\dots,f_Q(x)\) are given. The max ordering objective \(g(x):=max\) {\(f_1(x),\dots,f_Q(x)\)} is then minimized over all \(x\in X\). The paper discusses complexity results and describes exact and approximative algorithms for the case where \(X\) is the solution set of combinatorial optimization problems and network flow problems, respectively.
239
We investigate two versions of multiple objective minimum spanning tree problems defined on a network with vectorial weights. First, we want to minimize the maximum of Q linear objective functions taken over the set of all spanning trees (max linear spanning tree problem ML-ST). Secondly, we look for efficient spanning trees (multi criteria spanning tree problem MC-ST). Problem ML-ST is shown to be NP-complete. An exact algorithm which is based on ranking is presented. The procedure can also be used as an approximation scheme. For solving the bicriterion MC-ST, which in the worst case may have an exponential number of efficient trees, a two-phase procedure is presented. Based on the computation of extremal efficient spanning trees we use neighbourhood search to determine a sequence of solutions with the property that the distance between two consecutive solutions is less than a given accuracy.
236
Using examples on which the author has worked in the past, it is shown how models from the exact sciences can be applied to economic problems. In particular, the limits of this transferability are discussed. The paper is a summary of a talk given in the summer semester of 1992 as part of the Studium Generale at the University of Kaiserslautern.
243
Given Q different objective functions, three types of single-facility problems are considered: lexicographic, Pareto and max ordering problems. After discussing the interrelation between the problem types, a complete characterization of lexicographic locations and some instances of Pareto and max ordering locations is given. The characterizations result in efficient solution algorithms for finding these locations. The paper relies heavily on the theory of restricted locations developed by the same authors, and can be further extended, for instance, to multifacility problems with several objectives. The proposed approach is more general than previously published results on multicriteria planar location problems and is particularly suited for modelling real-world problems.
|
I'm confused about something related to the splitting principle. It seems to imply that the Chern classes of any endomorphism bundle are zero, which is not true.
Let $M$ be a manifold of dimension $n$, and $E \to M$ be a smooth complex vector bundle of rank $r$. There exists some space $M'$ and a map $f : M' \to M$ such that $f^*E$ splits as a direct sum of complex line bundles $L_j$, and such that $f^* : H^*(M, \mathbb R) \to H^*(M', \mathbb R)$ is injective.
If $E' = \bigoplus_{j} L_j$, then $\operatorname{End} E'$ is flat: Equip each $L_j$ with some hermitian metric $h_j$, and denote its curvature form by $\omega_j$. The curvature form $\Theta$ of the induced metric on the direct sum is then a diagonal matrix with entries $\omega_j$. But because $\Theta$ is diagonal, it is equal to its transpose, so $$ \Theta_{\operatorname{End} E'} = \operatorname{id}_{(E')^*} \otimes\, \Theta - \Theta^{t} \otimes \operatorname{id}_{E'} = 0. $$
Thus all the Chern roots of $\operatorname{End} f^* E \cong \operatorname{End} E'$ are zero, so all its Chern classes are zero. But as the Chern classes are functorial and $f^*$ injective, all the Chern classes of $\operatorname{End} E$ are then zero as well.
Where have I gone wrong here?
|
The 3 most popular average algorithms are the Arithmetic Mean, the Median and the Mode.
Median average is when the middle value within a range is found. For example, consider a class of 5 students whose heights are 1.54m, 1.55m, 1.56m, 1.54m and 1.53m respectively. The Arithmetic Mean of the class height is 1.544m and the Mode average is 1.54m. However, if the students are placed next to each other with their heights in ascending order, the student in the middle will be one of the 1.54m students. Sorting the class in ascending order and taking the height of the student in the middle as the class' average is called Median average.
Formulating the Median algorithm into Mathematical Syntax:
$$\text{Given a sample } S\\ \text{Let } O_S = (x \mid \forall x \in S)\\ Median(S) = \exists x \in O_S \mid x = O_{S\big\lceil\frac{|O_S|}{2}\big\rceil}$$
The expression $x = O_{S\big\lceil\frac{|O_S|}{2}\big\rceil}$ raises a concern: what happens when the length of the set is even? The equation states that the median will be one of the sample points, but in reality the middle point of an even-length set falls in between 2 values.
In the case of samples with an even number of sample points, the Median expression above is modified slightly so that the Median value is the Arithmetic Mean of the 2 sample points around the middle of the set. In mathematical notation:
$$Median(S) = \frac{x_{O_{S\frac{|O_S|}{2}}} + x_{O_{S\big\{\frac{|O_S|}{2} + 1\big\}}}}{2}$$
The last equation can also be used when the length of the sample is odd. This can be done because the ceiling and floor of a whole number is the whole number itself, and a number added to itself and divided by 2 returns the same number. Thus, keeping the two Median formulae above as an optimisation for odd and even length sets, the final Median notation is
$$\text{Given a sample } S\\ \text{Let } O_S = (x \mid \forall x \in S)\\ Median(S) = \begin{cases}\exists x \in O_S \mid x = O_{S\big\lceil\frac{|O_S|}{2}\big\rceil} & \text{if } |O_S| \text{ is odd}\\ \frac{x_{O_{S\frac{|O_S|}{2}}} + x_{O_{S\big\{\frac{|O_S|}{2} + 1\big\}}}}{2} & \text{if } |O_S| \text{ is even}\end{cases}$$
The mathematical formula above with the sample examples can be found implemented in 5 different programming languages in our Github repository.
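For reference, a minimal Python sketch of the same procedure (illustrative only; the repository linked above contains the official implementations):

def median(sample):
    # Sort, then take the middle element (odd length) or the arithmetic
    # mean of the two middle elements (even length).
    ordered = sorted(sample)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

print(median([1, 2, 3, 4, 5, 6, 7, 8, 9]))    # 5, as in Example 1 below
print(median([51, 97, 43, 20, 48, 48, 96, 63, 10, 35, 16,
              4, 42, 80, 18, 1, 67, 75, 46, 92, 38, 44,
              87, 69, 54, 91, 6, 8, 60, 64, 53, 23, 86]))   # 48, as in Example 3 below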
Examples

Example 1:
Consider the following simple set $S = \{1, 2, 3, 4, 5, 6, 7, 8, 9\}$. Using the formula above, we can find the Median for the sample $S$ as follows.
Sort the set in ascending order: $O_S = (1, 2, 3, 4, 5, 6, 7, 8, 9)$
Find the length of the set $S$: $|O_S| = 9$
Return the element at the middle of the set: $Median = x_{O_{S\big\lceil\frac{9}{2}\big\rceil}} = 5$

Example 2:
$$S = \left\{\begin{matrix}0.4963474212, 0.1976482578, 0.9688106204,\\ 0.6871861999, 0.0291216746, 0.9002658844,\\ 0.5500229849, 0.7883546496, 0.9563570268,\\ 0.5221225236\end{matrix}\right\}$$
Workings:
Sort the set in ascending order: $$\left(\begin{matrix}0.0291216746, 0.1976482578, 0.4963474212,\\ 0.5221225236, 0.5500229849, 0.6871861999,\\ 0.7883546496, 0.9002658844, 0.9563570268,\\ 0.9688106204\end{matrix}\right)$$
Find the length of the set $S$: $|O_S| = 10$
Return the average of the 2 elements at the middle of the set: $Median = \frac{x_{O_{S\frac{10}{2}}} + x_{O_{S\big\{\frac{10}{2} + 1\big\}}}}{2} = 0.6186045924$

Example 3:
$$S = \left\{\begin{matrix}51, 97, 43, 20, 48, 48, 96, 63, 10, 35, 16,\\ 4, 42, 80, 18, 1, 67, 75, 46, 92, 38, 44,\\ 87, 69, 54, 91, 6, 8, 60, 64, 53, 23, 86\end{matrix}\right\}$$
Workings:
Sort the set in ascending order: $$\left(\begin{matrix}1, 4, 6, 8, 10, 16, 18, 20, 23, 35, 38,\\ 42, 43, 44, 46, 48, 48, 51, 53, 54, 60,\\ 63, 64, 67, 69, 75, 80, 86, 87, 91, 92, 96, 97\end{matrix}\right)$$
Find the length of the set $S$: $|O_S| = 33$
Return the element at the middle of the set: $Median = x_{O_{S\big\lceil\frac{33}{2}\big\rceil}} = 48$
|
Preprints (rote Reihe) des Fachbereich Mathematik
258
On the Convergence at Infinity of Solutions with Finite Dirichlet Integral to the Exterior Dirichlet Problem for the Steady Plane Navier-Stokes System of Equations (1994)
254
253-2
Order-semi-primal lattices (1994)
250
Let \((a_i)_{i\in \mathbb{N}}\) be a sequence of identically and independently distributed random vectors drawn from the \(d\)-dimensional unit ball \(B^d\) and let \(X_n := \mathrm{convhull}(a_1,\dots,a_n)\) be the random polytope generated by \((a_1,\dots,a_n)\). Furthermore, let \(\Delta (X_n) := \mathrm{Vol}(B^d \setminus X_n)\) be the deviation of the polytope's volume from the volume of the ball. For uniformly distributed \(a_i\) and \(d\ge2\), we prove that the limiting distribution of \(\frac{\Delta (X_n)} {E(\Delta (X_n))}\) for \(n\to\infty\) satisfies a 0-1-law. Especially, we provide precise information about the asymptotic behaviour of the variance of \(\Delta (X_n)\). We deliver analogous results for spherically symmetric distributions in \(B^d\) with regularly varying tail.
256
The local isometric imbedding in \(\mathbb{R}^3\) of two-dimensional Riemannian manifolds with Gaussian curvature changing sign arbitrarily (1994)
|
Line-of-sight propagation is a characteristic of electromagnetic radiation or acoustic wave propagation. Electromagnetic transmission includes light emissions traveling in a straight line. The rays or waves may be diffracted, refracted, reflected, or absorbed by the atmosphere and by obstructions, and generally cannot travel over the horizon or behind obstacles.
Line of sight propagation to an antenna
At low frequencies (below approximately 3 MHz) radio signals travel as ground waves, which follow the Earth's curvature due to diffraction with the layers of atmosphere. This enables AM radio signals in low-noise environments to be received well after the transmitting antenna has dropped below the horizon. Additionally, frequencies between approximately 1 and 30 MHz can be reflected by the F1/F2 Layer, thus giving radio transmissions in this range a potentially global reach (see shortwave radio), again along multiple deflected straight lines. The effects of multiple diffraction or reflection lead to macroscopically "quasi-curved paths".
However, at higher frequencies and in lower levels of the atmosphere, neither of these effects is significant. Thus any obstruction between the transmitting antenna and the receiving antenna will block the signal, just like the light that the eye may sense. Therefore, since the ability to visually see a transmitting antenna (disregarding the limitations of the eye's resolution) roughly corresponds to the ability to receive a radio signal from it, the propagation characteristic of high-frequency radio is called "line-of-sight". The farthest possible point of propagation is referred to as the "radio horizon".
In practice, the propagation characteristics of these radio waves vary substantially depending on the exact frequency and the strength of the transmitted signal (a function of both the transmitter and the antenna characteristics). Broadcast FM radio, at comparatively low frequencies of around 100 MHz, is less affected by the presence of buildings and forests.
Radio horizon
The radio horizon is the locus of points at which direct rays from an antenna are tangential to the surface of the Earth. If the Earth were a perfect sphere and there were no atmosphere, the radio horizon would be a circle.
R is the radius of the Earth, h is the height of the transmitter (exaggerated), d is the line of sight distance
The radio horizon of the transmitting and receiving antennas can be added together to increase the effective communication range. Antenna heights above 1,000,000 feet (189 miles; 305 kilometres) will cover the entire hemisphere and not increase the radio horizon.
Radio wave propagation is affected by atmospheric conditions, ionospheric absorption, and the presence of obstructions, for example mountains or trees. Simple formulas that include the effect of the atmosphere give the range as:
$$\mathrm{horizon}_\mathrm{miles} \approx \sqrt{2 \times \mathrm{height}_\mathrm{feet}}$$
$$\mathrm{horizon}_\mathrm{km} \approx 3.57 \cdot \sqrt{\mathrm{height}_\mathrm{metres}}$$
The simple formulas give a best-case approximation of the maximum propagation distance but are not sufficient to estimate the quality of service at any location.
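A minimal Python sketch of these horizon formulas (an illustrative addition, not part of the original article); the optional factor k anticipates the effective-Earth-radius correction discussed below.

import math

def radio_horizon_km(height_m, k=1.0):
    # Approximate distance (km) to the radio horizon for an antenna height in metres.
    # k is the effective-Earth-radius factor: k = 1 gives the purely geometric
    # horizon (d ~ 3.57*sqrt(h)); k = 4/3 gives the average-atmosphere value
    # (d ~ 4.12*sqrt(h)) used later in the article.
    return 3.57 * math.sqrt(k * height_m)

# Geometric horizon vs. standard-atmosphere horizon for a station 1500 m high
print(radio_horizon_km(1500))           # ~138 km (purely geometric)
print(radio_horizon_km(1500, k=4/3))    # ~160 km, matching the worked example below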
Earth bulge and atmosphere effect
Earth bulge is a term used in telecommunications. It refers to the circular segment of the Earth profile which blocks off long-distance communications. Since the geometric line of sight passes at varying heights over the Earth, the propagating radio wave encounters slightly different propagation conditions over the path. The usual effect of the declining pressure of the atmosphere with height is to bend radio waves down toward the surface of the Earth, effectively increasing the Earth's radius by a factor of around 4/3 [1] (and hence the distance to the radio horizon by about 15%). This k-factor can change from its average value depending on weather.
Geometric distance to horizon
Assuming a perfect sphere with no terrain irregularity, the distance to horizon from a high altitude transmitter (i.e., line of sight) can readily be calculated.
Let $R$ be the radius of the Earth and $h$ be the altitude of a telecommunication station. The line-of-sight distance $d$ of this station is given by the Pythagorean theorem:
$$d^2=(R+h)^{2}-R^2= 2\cdot R \cdot h +h^2$$
Since the altitude of the station is much less than the radius of the Earth,
$$d \approx \sqrt{ 2\cdot R \cdot h}$$
If the height is given in metres and the distance in kilometres, [2]
$$d \approx 3.57 \cdot \sqrt{h}$$
If the height is given in feet, and the distance in miles,
$$d \approx 1.23 \cdot \sqrt{h}$$
The actual service range
The above analysis does not take the effect of the atmosphere on the propagation path of RF signals into consideration. In fact, RF signals do not propagate along straight lines: because of the refractive effects of atmospheric layers, the propagation paths are somewhat curved. Thus, the maximum service range of the station is not equal to the line-of-sight (geometric) distance. Usually a factor $k$ is used in the equation above:
$$d \approx \sqrt{2 \cdot k \cdot R \cdot h}$$
$k > 1$ means a geometrically reduced bulge and a longer service range. On the other hand, $k < 1$ means a shorter service range.
Under normal weather conditions, $k$ is usually chosen [3] to be 4/3. That means that the maximum service range increases by 15%:
$$d \approx 4.12 \cdot \sqrt{h}$$
for $h$ in metres and $d$ in km, and
$$d \approx 1.41 \cdot\sqrt{h}$$
for $h$ in feet and $d$ in miles.
But in stormy weather, $k$ may decrease, causing fading in transmission. (In extreme cases $k$ can be less than 1.) That is equivalent to a hypothetical decrease in the Earth's radius and an increase of the Earth bulge. [4]
Example
In normal weather conditions, the service range of a station at an altitude of 1500 m with respect to receivers at sea level can be found as
$$d \approx 4.12 \cdot \sqrt{1500} \approx 160 \mbox{ km.}$$
Line-of-sight propagation as a prerequisite for radio distance measurements
The travel time of radio waves between transmitters and receivers can be measured regardless of the type of propagation. Generally, however, the travel time represents the distance between transmitter and receiver only when line-of-sight propagation is the basis for the measurement. This applies to radar, to real-time locating, and to lidar.
As a rule, travel-time measurements for determining the distance between pairs of transmitters and receivers require line-of-sight propagation for proper results. Whereas for enabling communication just any type of propagation may suffice, properly measured distances require line-of-sight propagation, at least temporarily. In practice the travel-time measurement may always be biased by multipath propagation, comprising line-of-sight and non-line-of-sight paths in any random proportion. A qualified system for measuring the distance between transmitters and receivers must take this phenomenon into account: whether the signals travelling along the various paths can be filtered apart determines whether the approach is operationally sound or merely frustrating.
Impairments to line-of-sight propagation
Two stations not in line-of-sight may be able to communicate through an intermediate radio repeater station.
Low-powered microwave transmitters can be foiled by tree branches, or even heavy rain or snow.
If a direct visual fix cannot be taken, it is important to take into account the curvature of the Earth when calculating line-of-sight from maps.
The presence of objects not in the direct visual line of sight can interfere with radio transmission. This is caused by diffraction effects: for the best propagation, a volume known as the first Fresnel zone should be kept free of obstructions.
Objects within the Fresnel zone can disturb line of sight propagation even if they don't block the geometric line between antennas
Reflected radiation from the ground plane also acts to cancel out the direct signal. This effect, combined with the free-space $r^{-2}$ propagation loss, leads to an overall $r^{-4}$ propagation loss. The effect can be reduced by raising either or both antennas further from the ground: the reduction in loss achieved is known as height gain.
Mobile telephones
Although the frequencies used by mobile phones (cell phones) are in the line-of-sight range, they still function in cities. This is made possible by a combination of the following effects:
- $r^{-4}$ propagation over the rooftop landscape
- diffraction into the "street canyon" below
- multipath reflection along the street
- diffraction through windows, and attenuated passage through walls, into the building
- reflection, diffraction, and attenuated passage through internal walls, floors and ceilings within the building
The combination of all these effects makes the mobile phone propagation environment highly complex, with multipath effects and extensive Rayleigh fading. For mobile phone services these problems are tackled using:
- rooftop or hilltop positioning of base stations
- many base stations (usually called "cell sites"); a phone can typically see at least three, and usually as many as six, at any given time
- "sectorized" antennas at the base stations: instead of one antenna with omnidirectional coverage, the station may use as few as 3 (rural areas with few customers) or as many as 32 separate antennas, each covering a portion of the circular coverage. This allows the base station to use a directional antenna pointing at the user, which improves the signal-to-noise ratio. If the user moves (perhaps by walking or driving) from one antenna sector to another, the base station automatically selects the proper antenna.
- rapid handoff between base stations (roaming)
- a digital radio link with extensive error correction and detection in the digital protocol
- operation of mobile phones in tunnels, when supported by split cable antennas
- local repeaters inside complex vehicles or buildings
Other conditions may physically disrupt the connection without prior notice:
- temporary failure inside metal constructions such as elevator cabins, trains, cars, ships (see Faraday cage)
- local failure when using the mobile phone in buildings with extensive steel reinforcement (again, see Faraday cage)
References
^ Christopher Haslett, Essentials of Radio Wave Propagation, Cambridge University Press, 2008, ISBN 052187565X, pages 119-120
^ The mean radius of the Earth is approximately $6.37 \times 10^6$ metres = 6370 km. See Earth radius.
^ R. Busi: Technical Monograph 3108-1967, High Altitude VHF and UHF Broadcasting Stations, European Broadcasting Union, Brussels, 1967
^ This analysis is for high-altitude to sea-level reception. In microwave radio link chains, both stations are at high altitude.
External links
http://www.wireless-center.net/Cisco-Wireless-Networking/728.html
http://web.telia.com/~u85920178/data/pathlos.htm#bulges
Article on the importance of Line Of Sight for UHF reception
Attenuation Levels Through Roofs
Approximating 2-Ray Model by using Binomial series by Matthew Bazajian
This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C" (in support of MIL-STD-188).
|
According to the Wiki article, if $u$ and $v$ are locally integrable functions on some open subset $U$ of $\mathbb{R}^n$, then $v$ is the weak derivative of $u$ (of order $\alpha$) if, for any infinitely differentiable function $\varphi$ on $U$ with compact support, we have $$\int_U u D^{\alpha}\varphi = (-1)^{|\alpha|}\int_U v\varphi,$$ where $$D^{\alpha}\varphi = \frac{\partial^{|\alpha|}\varphi}{\partial x_1^{\alpha_1}\cdots\partial x_n^{\alpha_n}}.$$
They then go on to say that the weak derivative is often notated $D^{\alpha}u$. Replacing this in the above definition, we get
$$\int_U u D^{\alpha}\varphi = (-1)^{|\alpha|}\int D^{\alpha}u \varphi.$$
Just checking my understanding here: $D^{\alpha}$ is used to signify two different things here, right? The one on $\varphi$ is an ordinary partial derivative, while the one on $u$ denotes the weak derivative (i.e. a locally integrable function that satisfies that identity). If so, is there no better notation that we can use? This looks terribly confusing.
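As a concrete illustration of the identity itself (an added sketch, not part of the original question): for $u(x) = |x|$ on $U = (-1, 1)$ the weak derivative is $v(x) = \operatorname{sign}(x)$, and the defining relation can be checked numerically for any particular test function $\varphi$.

import numpy as np
from scipy.integrate import quad

u = np.abs     # u(x) = |x|, locally integrable on (-1, 1)
v = np.sign    # its weak derivative, v(x) = sign(x)

def phi(x):    # a smooth bump function supported inside (-1, 1)
    return np.exp(-1.0 / (1.0 - x**2)) if abs(x) < 1 else 0.0

def dphi(x):   # the ordinary derivative of phi, computed by hand
    return phi(x) * (-2.0 * x) / (1.0 - x**2) ** 2 if abs(x) < 1 else 0.0

lhs, _ = quad(lambda x: u(x) * dphi(x), -1, 1)   # integral of u * D(phi)
rhs, _ = quad(lambda x: v(x) * phi(x), -1, 1)    # integral of v * phi
print(lhs, -rhs)   # the two agree up to quadrature error: int u D(phi) = -int v phi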
|
So it starts by assuming a cube cavity with length $L$ that acts approximately as a black body. [note: the cube length is $a$ but I'm using $L$ instead]
Standing waves are formed inside the cavity, the equation for a standing wave in 3D is $$ E(x,t) = E_0 \sin(k_x x) \sin(k_y y) \sin(k_z z) \sin ( 2 \pi \nu t), $$ where $k_x = \frac{n_x \pi }{L}$ and so on.
The wave vector $ \vec k = k_x \hat i + k_y \hat j + k_z \hat k$
Where the magnitude of $k = \frac { 2 \pi}{ \lambda}$
Then $ k^2 = k^{2}_x + k^{2}_y + k^{2}_z = (\frac{n_x \pi }{L})^2 + (\frac{n_y \pi }{L})^2 + (\frac{n_z \pi }{L})^2 = \frac { 4 \pi^2}{ \lambda^2}$
Therefore,
$$ n^{2}_x + n^{2}_y + n^{2}_z = \frac { 4 L^2}{ \lambda^2} = \frac { 4 L^2 \nu^2}{c^2}$$
$$ \implies \nu^2 = \frac { c^2 n^2}{ 4 L^2}$$
Now here comes the first part that I don't understand:
'Volume of the $n$ space is a sphere of volume $ \frac {4}{3} \pi n^3$, which gives the number of nodes.'
What does that volume represent, how does it give the number of nodes?
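To make the counting step concrete, here is a small numerical check (an added illustration, not part of the original question): the number of triples of positive integers $(n_x, n_y, n_z)$ with $n_x^2 + n_y^2 + n_z^2 \le n^2$, i.e. the number of standing-wave modes with frequency below $\nu = cn/(2L)$, is well approximated for large $n$ by one octant of the sphere volume, $\frac{1}{8}\cdot\frac{4}{3}\pi n^3 = \frac{\pi}{6} n^3$ (the octant restriction is exactly the division by 8 used in the next step).

import numpy as np

for n in (10, 30, 100):
    grid = np.arange(1, n + 1)
    nx, ny, nz = np.meshgrid(grid, grid, grid, sparse=True)
    count = np.count_nonzero(nx**2 + ny**2 + nz**2 <= n**2)
    print(n, count, (np.pi / 6) * n**3)   # count approaches (pi/6) n^3 as n grows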
Continuing the proof:
Since $n_x$, $n_y$, $n_z$ are positive integers, we keep only one octant of the sphere, i.e. we divide the volume by 8; then the new volume is $V = \frac { \pi}{6} n^3 = \frac { \pi}{6} ( \frac {2L \nu}{c})^{3}$
Here comes the second part that doesn't make sense to me:
We got the volume $ V = \frac { \pi}{6} ( \frac {2L \nu}{c})^{3}$
He now differentiates both sides
$$ dV = 3 \nu^2 \frac {4}{3} \pi \frac {L^3}{c^3} d \nu = 4 \pi \frac {L^3}{c^3} \nu^2 d \nu$$
We know that $n$ is a positive integer that can't take on continuous values, then what does differentiating the volume with respect to the frequency mean? I hope my questions were clear, thank you.
|
We review binary logistic regression. In particular, we derive a) the equations needed to fit the algorithm via gradient descent, b) the maximum likelihood fit’s asymptotic coefficient covariance matrix, and c) expressions for model test point class membership probability confidence intervals. We also provide python code implementing a minimal “LogisticRegressionWithError” class whose “predict_proba” method returns prediction confidence intervals alongside its point estimates.
Our python code can be downloaded from our github page, here. Its use requires the jupyter, numpy, sklearn, and matplotlib packages.
Introduction
The logistic regression model is a linear classification model that can be used to fit binary data — data where the label one wishes to predict can take on one of two values — e.g., $0$ or $1$. Its linear form makes it a convenient choice of model for fits that are required to be interpretable. Another of its virtues is that it can — with relative ease — be set up to return both point estimates and also confidence intervals for test point class membership probabilities. The availability of confidence intervals allows one to flag test points where the model prediction is not precise, which can be useful for some applications — e.g., fraud detection.
In this note, we derive the expressions needed to fit the logistic model to a training data set. We assume the training data consists of a set of $n$ feature vector-label pairs, $\{(\vec{x}_i, y_i) : i = 1, 2, \ldots, n\}$, where the feature vectors $\vec{x}_i$ belong to some $m$-dimensional space and the labels are binary, $y_i \in \{0, 1\}.$ The logistic model states that the probability of belonging to class $1$ is given by
\begin{eqnarray}\tag{1} \label{model1} p(y=1 \vert \vec{x}) \equiv \frac{1}{1 + e^{- \vec{\beta} \cdot \vec{x} } }, \end{eqnarray} where $\vec{\beta}$ is a coefficient vector characterizing the model. Note that with this choice of sign in the exponent, predictor vectors $\vec{x}$ having a large, positive component along $\vec{\beta}$ will be predicted to have a large probability of being in class $1$. The probability of class $0$ is given by the complement, \begin{eqnarray}\tag{2} \label{model2} p(y=0 \vert \vec{x}) \equiv 1 - p(y=1 \vert \vec{x}) = \frac{1}{1 + e^{ \vec{\beta} \cdot \vec{x} } }. \end{eqnarray} The latter equality above follows from simplifying algebra, after plugging in (\ref{model1}) for $p(y=1 \vert \vec{x}).$
To fit the Logistic model to a training set — i.e., to find a good choice for the fit parameter vector $\vec{\beta}$ — we consider here only the maximum-likelihood solution. This is that $\vec{\beta}^*$ that maximizes the conditional probability of observing the training data. The essential results we review below are 1) a proof that the maximum likelihood solution can be found by gradient descent, and 2) a derivation for the asymptotic covariance matrix of $\vec{\beta}$. This latter result provides the basis for returning point estimate confidence intervals.
On our GitHub page, we provide a Jupyter notebook that contains some minimal code extending the SKLearn LogisticRegression class. This extension makes use of the results presented here and allows for class probability confidence intervals to be returned for individual test points. In the notebook, we apply the algorithm to the SKLearn Iris dataset. The figure at right illustrates the output of the algorithm along a particular cut through the Iris data set parameter space. The y-axis represents the probability of a given test point belonging to Iris class $1$. The error bars in the plot provide insight that is completely missed when considering the point estimates only. For example, notice that the error bars are quite large for each of the far right points, despite the fact that the point estimates there are each near $1$. Without the error bars, the high probability of these point estimates might easily be misinterpreted as implying high model confidence.
Our derivations below rely on some prerequisites: Properties of covariance matrices, the multivariate Cramer-Rao theorem, and properties of maximum likelihood estimators. These concepts are covered in two of our prior posts [$1$, $2$].
Optimization by gradient descent
In this section, we derive expressions for the gradient of the negative-log likelihood loss function and also demonstrate that this loss is everywhere convex. The latter result is important because it implies that gradient descent can be used to find the maximum likelihood solution.
Again, to fit the logistic model to a training set, our aim is to find — and also to set the parameter vector to — the maximum likelihood value. Assuming the training set samples are independent, the likelihood of observing the training set labels is given by
\begin{eqnarray} L &\equiv& \prod_i p(y_i \vert \vec{x}_i) \\ &=& \prod_{i: y_i = 1} \frac{1}{1 + e^{-\vec{\beta} \cdot \vec{x}_i}} \prod_{i: y_i = 0} \frac{1}{1 + e^{\vec{\beta} \cdot \vec{x}_i}}. \tag{3} \label{likelihood} \end{eqnarray} Maximizing this is equivalent to minimizing its negative logarithm — a cost function that is somewhat easier to work with, \begin{eqnarray} J &\equiv & -\log L \\ &=& \sum_{\{i: y_i = 1 \}} \log \left (1 + e^{- \vec{\beta} \cdot \vec{x}_i } \right ) + \sum_{\{i: y_i = 0 \}} \log \left (1 + e^{\vec{\beta} \cdot \vec{x}_i } \right ). \tag{4} \label{costfunction} \end{eqnarray} The maximum-likelihood solution, $\vec{\beta}^*$, is that coefficient vector that minimizes the above. Note that $\vec{\beta}^*$ will be a function of the random sample, and so will itself be a random variable — characterized by a distribution having some mean value, covariance, etc. Given enough samples, a theorem on maximum-likelihood asymptotics (Cramer-Rao) guarantees that this distribution will be unbiased — i.e., it will have mean value given by the correct parameter values — and will also be of minimal covariance [$1$]. This theorem is one of the main results motivating use of the maximum-likelihood solution.
Because $J$ is convex (demonstrated below), the logistic regression maximum-likelihood solution can always be found by gradient descent. That is, one need only iteratively update $\vec{\beta}$ in the direction of the negative $\vec{\beta}$-gradient of $J$, which is
\begin{eqnarray} - \nabla_{\vec{\beta}} J &=& \sum_{\{i: y_i = 1 \}}\vec{x}_i \frac{ e^{- \vec{\beta} \cdot \vec{x}_i } }{1 + e^{- \vec{\beta} \cdot \vec{x}_i }} - \sum_{\{i: y_i = 0 \}} \vec{x}_i \frac{ e^{\vec{\beta} \cdot \vec{x}_i }}{1 + e^{\vec{\beta} \cdot \vec{x}_i } } \\ &\equiv& \sum_{\{i: y_i = 1 \}}\vec{x}_i p(y=0 \vert \vec{x}_i) -\sum_{\{i: y_i = 0 \}} \vec{x}_i p(y= 1 \vert \vec{x}_i). \tag{5} \label{gradient} \end{eqnarray} Notice that the terms that contribute the most here are those that are most strongly misclassified — i.e., those where the model's predicted probability for the observed class is very low. For example, a point with true label $y=1$ but large model $p(y=0 \vert \vec{x})$ will contribute a significant push on $\vec{\beta}$ in the direction of $\vec{x}$ — so that the model will be more likely to predict $y=1$ at this point going forward. Notice that the contribution of a term above is also proportional to the length of its feature vector — training points further from the origin have a stronger impact on the optimization process than those near the origin (at fixed classification difficulty).
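For concreteness, here is a minimal NumPy sketch of the loss (4) and gradient (5), together with plain gradient descent (an illustrative sketch, not the code from our notebook; the learning rate and step count are arbitrary choices). The design matrix X holds the $\vec{x}_i$ as rows and y holds the binary labels.

import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def loss(beta, X, y):
    # Negative log-likelihood, Eq. (4)
    z = X @ beta
    return np.sum(np.log1p(np.exp(-z[y == 1]))) + np.sum(np.log1p(np.exp(z[y == 0])))

def gradient(beta, X, y):
    # Eq. (5) written compactly: grad J = X^T (p1 - y), where p1 = p(y=1|x)
    return X.T @ (sigmoid(X @ beta) - y)

def fit(X, y, lr=0.1, n_steps=5000):
    # Descend the gradient of the convex loss J
    beta = np.zeros(X.shape[1])
    for _ in range(n_steps):
        beta -= lr * gradient(beta, X, y) / len(y)
    return beta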
The Hessian (second partial derivative) matrix of the cost function follows from taking a second gradient of the above. With a little algebra, one can show that this has $i-j$ component given by,
\begin{eqnarray} H(J)_{ij} &\equiv& -\partial_{\beta_j} \partial_{\beta_i} \log L \\ &=& \sum_k x_{k; i} x_{k; j} p(y= 0 \vert \vec{x}_k) p(y= 1 \vert \vec{x}_k). \tag{6} \label{Hessian} \end{eqnarray} We can prove that this is positive semi-definite using the fact that a matrix $M$ is necessarily positive semi-definite if $\vec{s}^T \cdot M \cdot \vec{s} \geq 0$ for all real $\vec{s}$ [$2$]. Dotting our Hessian above on both sides by an arbitrary vector $\vec{s}$, we obtain \begin{eqnarray} \vec{s}^T \cdot H \cdot \vec{s} &\equiv & \sum_k \sum_{ij} s_i x_{k; i} x_{k; j} s_j p(y= 0 \vert \vec{x}_k) p(y= 1 \vert \vec{x}_k) \\ &=& \sum_k \vert \vec{s} \cdot \vec{x}_k \vert^2 p(y= 0 \vert \vec{x}_k) p(y= 1 \vert \vec{x}_k) \geq 0. \tag{7} \label{convex} \end{eqnarray} The last form follows from the fact that both $ p(y= 0 \vert \vec{x}_k) $ and $ p(y= 1 \vert \vec{x}_k) $ are non-negative. This holds for any $\vec{\beta}$ and any $\vec{s}$, which implies that our Hessian is everywhere positive semi-definite. Because of this, convex optimization strategies — e.g., gradient descent — can always be applied to find the global maximum-likelihood solution.
Coefficient uncertainty and significance tests
The solution $\vec{\beta}^*$ that minimizes $J$ — which can be found by gradient descent — is a maximum likelihood estimate. In the asymptotic limit of a large number of samples, maximum-likelihood parameter estimates satisfy the Cramer-Rao lower bound [$2$]. That is, the parameter covariance matrix satisfies [$3$],
\begin{eqnarray} \text{cov}(\vec{\beta}^*, \vec{\beta}^*) &\sim& H(J)^{-1} \\ &\approx & \frac{1}{\sum_k \vec{x}_{k} \vec{x}_{k}^T p(y= 0 \vert \vec{x}_k) p(y= 1 \vert \vec{x}_k)}. \tag{8} \label{covariance} \end{eqnarray} Notice that the covariance matrix will be small if the denominator above is large. Along a given direction, this requires that the training set contains samples over a wide range of values in that direction (we discuss this at some length in the analogous section of our post on Linear Regression [$4$]). For a term to contribute in the denominator, the model must also have some confusion about its values: If there are no difficult-to-classify training examples, this means that there are no examples near the decision boundary. When this occurs, there will necessarily be a lot of flexibility in where the decision boundary is placed, resulting in large parameter variances.
Although the form above only holds in the asymptotic limit, we can always use it to approximate the true covariance matrix — keeping in mind that the accuracy of the approximation will degrade when working with small training sets. For example, using (\ref{covariance}), the asymptotic variance for a single parameter can be approximated by
\begin{eqnarray} \tag{9} \label{single_cov} \sigma^2_{\beta^*_i} = \text{cov}(\vec{\beta}^*, \vec{\beta}^*)_{ii}. \end{eqnarray} In the asymptotic limit, the maximum-likelihood parameters will be Normally-distributed [$1$], so we can provide confidence intervals for the parameters as \begin{eqnarray} \tag{10} \label{parameter_interval} \beta_i \in \left ( \beta^*_i - z \sigma_{\beta^*_i}, \beta_i^* + z \sigma_{\beta^*_i} \right), \end{eqnarray} where the value of $z$ sets the size of the interval. For example, choosing $z = 2$ gives an interval construction procedure that will cover the true value approximately $95\%$ of the time — a result of Normal statistics [$5$]. Checking which intervals do not cross zero provides a method for identifying which features contribute significantly to a given fit.
Prediction confidence intervals
The probability of class $1$ for a test point $\vec{x}$ is given by (\ref{model1}). Notice that this depends on $\vec{x}$ and $\vec{\beta}$ only through the dot product $\vec{x} \cdot \vec{\beta}$. At fixed $\vec{x}$, the variance (uncertainty) in this dot product follows from the coefficient covariance matrix above: We have [$2$],
\begin{eqnarray} \tag{11} \label{logit_var} \sigma^2_{\vec{x} \cdot \vec{\beta}} \equiv \vec{x}^T \cdot \text{cov}(\vec{\beta}^*, \vec{\beta}^*) \cdot \vec{x}. \end{eqnarray} With this result, we can obtain an expression for the confidence interval for the dot product, or equivalently a confidence interval for the class probability. For example, the asymptotic interval for class $1$ probability is given by \begin{eqnarray} \tag{12} \label{prob_interval} p(y=1 \vert \vec{x}) \in \left ( \frac{1}{1 + e^{- \vec{x} \cdot \vec{\beta}^* + z \sigma_{\vec{x} \cdot \vec{\beta}^*}}}, \frac{1}{1 + e^{- \vec{x} \cdot \vec{\beta}^* – z \sigma_{\vec{x} \cdot \vec{\beta}^*}}} \right), \end{eqnarray} where $z$ again sets the size of the interval as above ($z=2$ gives a $95\%$ confidence interval, etc. [$5$]), and $\sigma_{\vec{x} \cdot \vec{\beta}^*}$ is obtained from (\ref{covariance}) and (\ref{logit_var}).
The results (\ref{covariance}), (\ref{logit_var}), and (\ref{prob_interval}) are used in our Jupyter notebook. There we provide code for a minimal Logistic Regression class implementation that returns both point estimates and prediction confidence intervals for each test point. We used this code to generate the plot shown in the post introduction. Again, the code can be downloaded here if you are interested in trying it out.
Summary
In this note, we have 1) reviewed how to fit a logistic regression model to a binary data set for classification purposes, and 2) have derived the expressions needed to return class membership probability confidence intervals for test points.
Confidence intervals are typically not available for many out-of-the-box machine learning models, despite the fact that intervals can often provide significant utility. The fact that logistic regression allows for meaningful error bars to be returned with relative ease is therefore a notable, advantageous property.
Footnotes
[$1$] Our notes on the maximum-likelihood estimators can be found here.
[$2$] Our notes on covariance matrices and the multivariate Cramer-Rao theorem can be found here.
[$3$] The Cramer-Rao identity [$2$] states that covariance matrix of the maximum-likelihood estimators approaches the Hessian matrix of the log-likelihood, evaluated at their true values. Here, we approximate this by evaluating the Hessian at the maximum-likelihood point estimate.
[$4$] Our notes on linear regression can be found here.
[$5$] Our notes on Normal distributions can be found here.
|
Express the partition function of a collection of N molecules \(Q\) in terms of the molecular partition function \(q\). Assuming the N molecules to be independent, the total energy \(E_{tot}\) of molecules is a sum of individual molecular energies
\[ E_{tot} = \sum_i E_i\]
Summing the Boltzmann factor over all possible total energies then factorizes into a product of independent sums, one for each molecule:
\[Q = \sum _{\text{all possible energies}} e^{-E_{tot}/k_BT} = \left(\sum _i e^{-\epsilon_i^{(1)}/k_BT}\right) \left(\sum _j e^{-\epsilon_j^{(2)}/k_BT}\right) \cdots \left(\sum _k e^{-\epsilon_k^{(N)}/k_BT}\right)\]
\[ Q = q \times q \times \cdots \times q = q^N\]
Here \(\epsilon_i^{(1)}\), \(\epsilon_j^{(2)}\), ..., \(\epsilon_k^{(N)}\) are the energies of the individual molecules, and every possible total energy arises exactly once when each molecular sum runs over all of its \(\epsilon_i\). This result holds for distinguishable molecules. Gibbs postulated that
\[Q = \dfrac{q^N}{N!}\]
where the \(N!\) in the denominator is due to the indistinguishability of the tiny molecules (or other quantum particles in a collection).
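A quick numerical illustration (an added sketch, not part of the original text): for a toy system of $N$ independent, distinguishable two-level molecules, brute-force enumeration of all system states reproduces $Q = q^N$ exactly. The energy scale and temperature below are arbitrary choices.

import itertools
import math

eps, kT, N = 1.0, 0.5, 4
levels = [0.0, eps]                        # two molecular energy levels

q = sum(math.exp(-e / kT) for e in levels)           # molecular partition function

# Brute force: sum exp(-E_tot/kT) over every assignment of levels to the N molecules
Q_brute = sum(math.exp(-sum(state) / kT)
              for state in itertools.product(levels, repeat=N))

print(Q_brute, q**N)   # identical: Q = q^N for distinguishable molecules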
|
In this problem, we will find the portfolio allocation that minimizes risk while achieving a given expected return $R_\mbox{target}$.
Suppose that we know the mean returns $R \in \mathbf{R}^n$ and the covariance $Q \in \mathbf{R}^{n \times n}$ of the $n$ assets. We would like to find a portfolio allocation $x \in \mathbf{R}^n$, $\sum_i x_i = 1$, minimizing the
risk of the portfolio, which we measure as the variance $x^T Q x$ of the portfolio. The requirement that the portfolio allocation achieve the target expected return can be expressed as $x^T R \geq R_\mbox{target}$. We suppose further that our portfolio allocation must comply with some lower and upper bounds on the allocation, $x_\mbox{lower} \leq x \leq x_\mbox{upper}$.
This problem can be written as
\[
\begin{array}{ll} \mbox{minimize} & x^T Q x \\ \mbox{subject to} & x^T R \geq R_\mbox{target} \\ & \sum_i x_i = 1 \\ & x_\mbox{lower} \leq x \leq x_\mbox{upper} \end{array}
\]
where $x \in \mathbf{R}^n$ is our optimization variable.
We can solve this problem as follows.
using Convex, ECOS

# generate problem data
srand(0)
n = 50
R = 5 * randn(n)
A = randn(n, 5)
Q = A * A' + diagm(rand(n))
R_target = 5
x_lower = 0
x_upper = 1

x = Variable(length(R))
p = minimize(quad_form(x, Q),
             x' * R >= R_target,
             sum(x) == 1,
             x_lower <= x,
             x <= x_upper)
solve!(p, ECOSSolver(verbose = false))

# the minimal risk
p.optval
0.025263946889697284
We see that we can achieve an extremely low risk portfolio (with variance .025) with the desired expected return.
The optimal portfolio invests in only about half of the assets.
sum(x.value.>1e-4)
27
Let's take a look at the optimal portfolio we chose:
using Gadfly  # Geom.bar and the Guide helpers come from the Gadfly plotting package
plot(x = 1:n, y = x.value, Geom.bar, Guide.xlabel("Asset Index"), Guide.ylabel("Fraction of Portfolio"))
|
Automorphisms of the Unit Disc
As I promised last time, my goal for today and for the next several posts is to prove that automorphisms of the unit disc, the upper half plane, the complex plane, and the Riemann sphere each take on a certain form.
Fair warning: these posts will be mostly computational! Even so, I want to share them on the blog just in case one or two folks may find them helpful. (As we all know, grimy calculations can be painful!)
And if you need some motivation for this series and missed last week's post, be sure to check it out first!
Also in this series:
Automorphisms of the Unit Disc
Theorem: Every automorphism $f$ of the unit disc $\Delta$ is of the form $f(z)=\frac{az+b}{\bar b z+\bar a}$ where $a,b\in\mathbb{C}$ and $|a|^2-|b|^2=1$.
Proof. First let's suppose $f\in\text{Aut}(\Delta)$. One can show by Schwarz's Lemma that for $z\in\Delta$ and $\alpha=f^{-1}(0)$, the function $f$ can be written as$$f(z)=e^{i\theta}\frac{z-\alpha}{1-\bar\alpha z}=\frac{\lambda z - \lambda \alpha}{1-\bar\alpha z}$$where $\lambda=e^{i\theta}$ with $\theta\in\mathbb{R}$. (If you haven't seen the proof, you can find it here.) Now let $k=1-|\alpha|^2$ and define $a=\sqrt{\lambda/k}$ so that $$a=\frac{1}{\sqrt{k}}(\cos{\theta/2}+i\sin{\theta/2})$$and thus$$\bar a= \frac{1}{\sqrt{k}}(\cos{\theta/2}-i\sin{\theta/2})=\frac{1}{\sqrt{k\lambda}}.$$This allows us to write\begin{align*}f(z)&=\frac{\lambda z - \lambda \alpha}{1-\bar\alpha z}\\[10pt]&=\frac{z\sqrt{\lambda} - \alpha\sqrt{\lambda} }{-(\bar\alpha/\sqrt{\lambda})z + 1/\sqrt{\lambda}}\\[10pt]&=\frac{z\sqrt{\lambda/k}-\alpha\sqrt{\lambda/k}}{-(\bar\alpha/\sqrt{\lambda k})z+1/\sqrt{\lambda k}}\end{align*}where in the second line we have multiplied top/bottom by $1/\sqrt{\lambda}$ and in the last line we have multiplied top/bottom by $1/\sqrt{k}$. But notice! This is precisely of the form$$f(z)=\frac{az+b}{\bar b z+\bar a}$$where as above $a=\sqrt{\lambda/k}$ and $b=-\alpha\sqrt{\lambda/k}$. Moreover, \begin{align*}|a|^2-|b|^2&=a\bar a - b\bar b\\&= 1/k-|\alpha|^2/k\\&=(1-|\alpha|^2)/k\\&=1\end{align*}as desired.
Conversely suppose that for some $a,b\in\mathbb{C}$ with $|a|^2-|b|^2=1$ we have $f(z)=\frac{az+b}{\bar b z+\bar a}$ for $z\in\Delta$. Then $f(z)\in\Delta$, since $|f(z)|< 1$ if and only if $|az+b|< |\bar b z+\bar a|$, which holds if and only if (by expanding both sides) $(|a|^2-|b|^2)|z|^2<|a|^2-|b|^2$, and - by our assumption - that's true if and only if $|z|^2< 1$ which is, of course, true.
Moreover, $f$ is bijective since it has inverse $f^{-1}(z)=\frac{-\bar a z+b}{\bar bz-a}$. One can also check that both $f$ and its inverse are holomorphic (in particular, $f$ and $f^{-1}$ are linear fractional transformations), showing that $f$ is indeed an automorphism of $\Delta.$
$\square$
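As a quick numerical sanity check of the theorem (an added sketch, not part of the original post), one can pick any admissible $a, b$ with $|a|^2 - |b|^2 = 1$ and verify that random points of the disc are mapped back into the disc and that the displayed inverse undoes the map:

import numpy as np

rng = np.random.default_rng(0)

# Pick b freely, then scale a so that |a|^2 - |b|^2 = 1
b = 0.3 + 0.4j
a = np.sqrt(1 + abs(b) ** 2) * np.exp(1j * 0.7)

f = lambda z: (a * z + b) / (np.conj(b) * z + np.conj(a))
f_inv = lambda z: (-np.conj(a) * z + b) / (np.conj(b) * z - a)

# Random points in the open unit disc
z = rng.uniform(-1, 1, 1000) + 1j * rng.uniform(-1, 1, 1000)
z = z[np.abs(z) < 1]

print(np.all(np.abs(f(z)) < 1))      # True: f maps the disc into itself
print(np.allclose(f_inv(f(z)), z))   # True: the stated inverse undoes f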
Next time: We'll prove a similar result about the automorphisms of the upper half plane.
|
This question is related to the development of
The Joint Giving Theorem (by S. Kolm).
There are two types of agents: benevolent and beneficiaries.
Benevolents' preferences are represented by utilities: $u^i=u^i(x_i,x,c_i,g_i,c_{-i},g_{-i})$
Where $x_i=X_i-g_i-t_i$ is final wealth, $X_i$ initial wealth, $g_i$ some private gift made to the poor, $t_i$ is the transfer to the public sector (who then gives it to the poor). $x$ is the final wealth of the beneficiaries. $c_i=g_i+t_i$ is total contribution of agent $i$. Subscripts $~_{-i}$ denote variables of other benevolents.
Beneficiary's preferences are represented by an increasing ordinal utility function $u=u(x)$.
The assumptions are (subscripts mean derivatives): $u^i_{x_i}>0,u^i_{x}\geq 0, u^i_{c_i}\geq 0, u^i_{g_i}\geq u^i_{c_j} \leq 0$
The theorem then says (I quote):
Pareto efficiency for this society of potential givers and receivers implies that there exist coefficients $\lambda_i >0$ such that $U=\sum \lambda_j u^j +u$ is maximal (without loss of generality). Public policy chooses taxes $t_i$. When it implements a Pareto efficient social state, this choice maximizes such a function $U$. This implies, for tax $t_i$ :
$\lambda_i \cdot (-u^i_{x_i}+u^i_{x}+u^i_{c_i})+\sum_{j\neq i} \lambda_j \cdot (u^j_{x}+u^j_{c_i})+u'\leq 0$
with $=0$ if $t_i>0$ and $\leq 0$ if $t_i=0$.
My question is: Why is the derivative of $U$ with respect to taxes $t$ non-positive? More precisely, why is it non-positive if taxes are zero?
|
This showcase presents some simulation results for a deep neural network consisting of 21 layers. Based on randomly generated data, the distribution of network activations and gradients is analysed for different activation functions. This reveals how the flow of activations from the first to the last and the flow of the gradients from the last to the first layer behaves for different activation functions.
The simulation works as follows: random numbers from a normal distribution \(\mathcal{N}(\mu, \sigma) = \mathcal{N}(0, 1)\) serve as input \(X\) to the network. Then, one forward pass through the network is calculated. This is basically a loop of matrix operations\begin{equation*} Y_i = f(Y_{i-1} \cdot W_i) \end{equation*}
with the input \(Y_{i-1}\) from the previous layer (with \(Y_0 = X\)) to the current layer (on \(n_{\text{in}}\) connections), the weight matrix \(W\) initialized with randomly generated numbers from a \(\mathcal{N}(0, 1/\sqrt{n_{\text{in}}})\) distribution and the transfer function \(f(x)\). Note that for simplicity no bias is included. After the forward pass, one backwards pass is calculated where the main interest lies in the values of the gradients (to see if the network learns). The target values (e.g. labels in a real-world application) are also faked with random numbers. This is, of course, not a realistic example but should reflect the main behaviour of the activation functions. The idea for the simulation is borrowed from the post Training very deep networks with Batchnorm (the effect of batch normalization is similar to what the SELU activation function is capable of).
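A minimal NumPy sketch of the forward pass described above (a rough, illustrative translation of the setup, not the attached Matlab code; the layer width, depth and the choice of tanh as the transfer function are assumptions made for the example):

import numpy as np

rng = np.random.default_rng(0)
n_samples, width, depth = 1000, 100, 21

X = rng.normal(0.0, 1.0, size=(n_samples, width))    # inputs drawn from N(0, 1)

activations = []
Y = X
for _ in range(depth):
    W = rng.normal(0.0, 1.0 / np.sqrt(width), size=(width, width))  # N(0, 1/sqrt(n_in)) weights
    Y = np.tanh(Y @ W)                                # Y_i = f(Y_{i-1} W_i), no bias
    activations.append(Y.ravel())                     # pool all activations of this layer

# Mean +/- one standard deviation per layer, as in the first plot below
for layer, act in enumerate(activations, start=1):
    print(layer, act.mean(), act.std())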
In order to calculate the statistical measures, all activations from the entire input set are converted to a list of values per layer. That is, for every input \(\fvec{x}\) (e.g. a row of \(X\)), the activations \(\smash{y_i^{(l)}}\) of all neurons \(i\) in the \(l\)-th layer are collected into one list. Then, statistics from this list are calculated. The same procedure is applied to every layer.
The first plot shows the range \([\mu - \sigma; \mu + \sigma]\) (one standard deviation range from the mean) of network activations per layer. It should reveal the main range of the activations throughout the layer hierarchy.
The next plot shows how the gradients flow back from the end to the beginning. It is a measure of how much the network learns in one epoch in each layer. The statistics are calculated in a similar way as the network activations.
Finally, a closer look at the distributions of network activations. In each layer, a histogram of the activations is calculated and all histograms from all layers are combined in one surface plot. With this, we can see how the distribution of activations changes as going deeper into the hierarchy.
List of attached files:
Simulation.zip (Matlab code for the simulation)
← Back to the overview page
|
Numerical calculation suggests the following two combinatorial identities: $$ \sum_{i=0}^n\sum_{j=0}^n{n\choose i}{n\choose j}{i+j\choose n}=2^n{2n\choose n},\\ \sum_{i=0}^n\sum_{j=0}^n{n\choose i}{n\choose j}{n\choose i+j}={3n\choose n}. $$ Proofs or references are all welcome.
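For reference, here is the kind of numerical check that suggests these identities (a small added sketch):

from math import comb

def lhs1(n):
    return sum(comb(n, i) * comb(n, j) * comb(i + j, n)
               for i in range(n + 1) for j in range(n + 1))

def lhs2(n):
    return sum(comb(n, i) * comb(n, j) * comb(n, i + j)
               for i in range(n + 1) for j in range(n + 1))

for n in range(8):
    assert lhs1(n) == 2**n * comb(2 * n, n)
    assert lhs2(n) == comb(3 * n, n)
print("both identities hold for n = 0..7")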
For the first one: pick $i$ numbers from 1 to $n$, then $j$ numbers from $n+1$ to $2n$, then select $n$ of the $i+j$ picked numbers. One ends with an $n$-element subset of $1, \ldots, 2n$. Each such resulting subset can be generated in $2^n$ ways, once for each subset of $[1, \ldots, 2n] \setminus X$. (If $X$ is the resulting set, let $A = X\cap[1,\ldots, n]$ and $B = X \cap[n+1, \ldots, 2n]$; denote $a=|A|$ and $b=|B|$. Then $X$ can be obtained by selecting $A\cup Y$ and $B\cup Z$. There are $2^{n-a}$ ways for $Y$ and $2^{n-b}$ ways for $Z$, hence $2^{n-a}2^{n-b} = 2^n$ ways for $X$.)
For the second one: $\binom{3n}{n}$ is the number of ways to pick $n$ numbers from $1, 2, \ldots, 3n$. You can pick $i$ from the first $n$, $j$ from the next $n$, and are left to pick $n-(i+j)$ from the last $n$ (or to ignore $i+j$ from the last $n$). The last factor is 0 if you have already picked more than $n$ from the first two buckets.
I do not have the complete proof but still here is something interesting for the first identity $\sum_{i=0}^n\sum_{j=0}^n{n\choose i}{n\choose j}{i+j\choose n} = \sum_{s=0}^{2n}\sum\limits_{i+j=s}{n\choose i}{n\choose j}{i+j\choose n}=\sum_{s=0}^{2n}{s\choose n}\sum\limits_{i+j=s}{n\choose i}{n\choose j}$ and the internal sum can be written $\sum\limits_{i+j=s}{n\choose i}{n\choose j}=\sum_{k=0}^s{n \choose k}{n \choose s-k} \stackrel{Vandermonde \ identity}{=} {2n \choose s}$ therefore $\sum_{i=0}^n\sum_{j=0}^n{n\choose i}{n\choose j}{i+j\choose n} = \sum_{s=0}^{2n}{s \choose n}{2n \choose s}$
|
Applications of differentiation Part VI
Displacement, Velocity and Acceleration
Consider the motion of a particle which moves in a way such that its displacement s from a fixed point O at time t seconds is given by
$ s = -t^2 + 5t + 6 $
The instantaneous velocity is given by the differentiation of displacement
$ v = \frac{ds}{dt} = -2t + 5 $
The instantaneous acceleration is given by the differentiation of velocity
$ a = \frac{dv}{dt} = -2 $
To obtain the velocity function from acceleration, we integrate the acceleration function
To obtain the displacement function from velocity, we integrate the velocity function
Example 1
The acceleration of a particle is given by the function $ a = 10 + 15t^2 $ where t is the time in seconds after leaving start point O. Find the velocity of the particle at t = 1 given that the initial velocity of the particle is 0m/s.
To obtain the velocity function from acceleration, we integrate the acceleration function
$\begin{aligned}
v &= \int a \:dt \\
&= \int 10 + 15t^2 \: dt \\
&= 10 t + \frac{15t^3}{3} + c\\
&= 10 t + 5t^3 + c \\
\text{When t=0, v=0, therefore c = 0} \\
v&= 10 t + 5t^3 \\
\text{Substitute } t=1: \\
v&=10+5 \\
&=15 \\
\end{aligned} $
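A quick check of Example 1 with a computer algebra system (an added sketch for illustration, not part of the original worked solution):

import sympy as sp

t, c = sp.symbols('t c')
a = 10 + 15 * t**2

v = sp.integrate(a, t) + c                # v = 10t + 5t^3 + c
c_val = sp.solve(v.subs(t, 0), c)[0]      # initial velocity is 0 m/s, so c = 0
v = v.subs(c, c_val)

print(v)              # 5*t**3 + 10*t
print(v.subs(t, 1))   # 15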
What's Next ?
Check out the Basic Techniques of Differentiation
Check out the Contents Page
|
In order to have an introduction to Word2Vec, look at this post. Using this method we try to predict the movie sentiment (positive vs. negative) using text data as predictor. We use the movie review data set, and we use the power of Doc2Vec to transform our data into predictors.
library(text2vec)
library(tm)
data(movie_review)
names(movie_review)
[1] "id" "sentiment" "review"
The data set contains an id, a sentiment (0 = negative, 1 = positive), and a review that contains the text saying whether people liked the movie or not. We start by creating the training and test sets, and we also have to collect the unique terms from the documents using the function create_vocabulary().
library(caret)
Train = createDataPartition(movie_review$sentiment, p = 0.75, list = FALSE, times = 1)  # function to divide training and test set
# Define training control
# train_control = trainControl(method = "cv", number = 10)
train = movie_review[Train, ]   # 75% training
test = movie_review[-Train, ]   # 25% test

# TRAINING-SET
# Vocabulary-based DTM
# We collect unique terms from all documents and mark each of them with a unique ID using the create_vocabulary() function
it = itoken(train$review,
            preprocessor = tolower,      # transform text to lowercase
            tokenizer = word_tokenizer,  # create tokens
            ids = train$id)              # assign ids
# Iterator over tokens with the itoken() function
v = create_vocabulary(it)
# Represent documents in vector space
vectorizer = vocab_vectorizer(v)

# TEST-SET
it2 = itoken(test$review, preprocessor = tolower, tokenizer = word_tokenizer, ids = test$id)
# v2 = create_vocabulary(it2)
# pruned_vocab2 = prune_vocabulary(v2, term_count_min = 10, doc_proportion_max = 0.5, doc_proportion_min = 0.001)
# vectorizer2 = vocab_vectorizer(v2)

# Document term matrix (vocab based)
dtm_train = create_dtm(it, vectorizer)   # training set
dtm_test = create_dtm(it2, vectorizer)   # test set
Now, we have to get the tf-idf matrix from the bag of words. The bag of words is used to extract features: we simply count the number of occurrences of each word, a process also called CountVectorizer. To make the counts more comparable, we scale them using the term frequency transformation (tf), and in order to boost the most important features we use the inverse document frequency (idf), which takes into account how often a word occurs in the corpus. The combination of both is called tf-idf = tf * idf.
The idf factor can be represented as follows:
\[idf(t, D)=\log \left(\frac{N}{|\{d \in D : t \in d\}|}\right)\]
The idf is the log of the total number of documents N divided by the number of documents that contain the term that we are taking into consideration.
# Get tf-idf
# It quantifies the importance of a term in a document:
# increase the weight of terms which are specific to a single document, decrease the weight of terms used in most documents
tfidf <- TfIdf$new()  # define tfidf model
# Apply the tfidf model to the training and test set
dtm_train_tfidf <- fit_transform(dtm_train, tfidf)
dtm_test_tfidf <- fit_transform(dtm_test, tfidf)
Now, we are ready to use a sparse linear model via the R package glmnet, which works quite well when we have to deal with a binary response variable. In document classification we might have a large number of variables per observation. We cannot fit the data using standard approaches since p >> N, where p = features and N = document samples. The sparse linear model optimizes each parameter separately, holding all the other parameters fixed, and it does this on a grid of lambda values. For an exhaustive explanation of sparse linear models there is a great lesson that you can find here.
# GLMNET model
library(glmnet)
glmnet_classifier = cv.glmnet(x = dtm_train_tfidf,
                              y = train[['sentiment']],   # predict sentiment
                              family = 'binomial',        # because we are dealing with a binary response variable
                              alpha = 1,                  # L1 penalty
                              type.measure = "auc",       # area under ROC curve
                              nfolds = 5,                 # 5-fold cross-validation
                              thresh = 1e-3,              # convergence threshold for coordinate descent; a higher value is less accurate but trains faster
                              maxit = 1e3)                # maximum number of iterations; we use a lower number for faster training
plot(glmnet_classifier)
print(paste("max AUC =", round(max(glmnet_classifier$cvm), 4)))
[1] "max AUC = 0.9108"
# Predict on unseen data
preds <- predict(glmnet_classifier, dtm_test_tfidf, type = 'response')[, 1]
glmnet:::auc(as.numeric(test$sentiment), preds)
[1] 0.8922888
# How accurately can the sentiments be identified from text
As we can see from the graph above, we can determine the value of lambda that gives the best AUC. We also obtained a very high cross-validated AUC of about 91% on the training set, which is comparable to the test-set AUC of about 89%. This tells us that the model we created can predict the sentiment of unseen movie reviews with a high level of accuracy.
|
Rank into rank
A nontrivial elementary embedding $j:V_\lambda\to V_\lambda$ for some infinite ordinal $\lambda$ is known as a
rank into rank embedding and the axiom asserting that such an embedding exists is usually denoted by $\text{I3}$, $\text{I2}$, $\text{I1}$, $\mathcal{E}(V_\lambda)\neq \emptyset$ or some variant thereof. The term applies to a hierarchy of such embeddings increasing in large cardinal strength reaching toward the Kunen inconsistency. The axioms in this section are in some sense a technical restriction falling out of Kunen's proof that there can be no nontrivial elementary embedding $j:V\to V$ in $\text{ZFC}$. An analysis of the proof shows that there can be no nontrivial $j:V_{\lambda+2}\to V_{\lambda+2}$ and that if there is some ordinal $\delta$ and nontrivial rank to rank embedding $j:V_\delta\to V_\delta$ then necessarily $\delta$ must be a strong limit cardinal of cofinality $\omega$ or the successor of one. By standing convention, it is assumed that rank into rank embeddings are not the identity on their domains.
There are really two cardinals relevant to such embeddings: The large cardinal is the critical point of $j$, often denoted $\mathrm{crit}(j)$ or sometimes $\kappa_0$, and the other (not quite so large) cardinal is $\lambda$. In order to emphasize the two cardinals, the axiom is sometimes written as $E(\kappa,\lambda)$ (or $\text{I3}(\kappa,\lambda)$, etc.) as in [1]. The cardinal $\lambda$ is determined by defining the
critical sequence of $j$. Set $\kappa_0 = \mathrm{crit}(j)$ and $\kappa_{n+1}=j(\kappa_n)$. Then $\lambda = \sup \langle \kappa_n : n <\omega\rangle$ and is the first fixed point of $j$ that occurs above $\kappa_0$. Note that, unlike many of the other large cardinals appearing in the literature, the ordinal $\lambda$ is not the target of the critical point; it is the $\omega^{th}$ $j$-iterate of the critical point.
As a result of the strong closure properties of rank into rank embeddings, their critical points are huge and in fact $n$-huge for every $n$. This aspect of the large cardinal property is often called $\omega$-hugeness and the term
$\omega$-huge cardinal is sometimes used to refer to the critical point of some rank into rank embedding.
The $\text{I3}$ Axiom and Natural Strengthenings
The $\text{I3}$ axiom asserts, generally, that there is some embedding $j:V_\lambda\to V_\lambda$. $\text{I3}$ is also denoted as $\mathcal{E}(V_\lambda)\neq\emptyset$ where $\mathcal{E}(V_\lambda)$ is the set of all elementary embeddings from $V_\lambda$ to $V_\lambda$, or sometimes even $\text{I3}(\kappa,\lambda)$ when mention of the relevant cardinals is necessary. In its general form, the axiom asserts that the embedding preserves all first-order structure but fails to specify how much second-order structure is preserved by the embedding. The case that
no second-order structure is preserved is also sometimes denoted by $\text{I3}$. In this specific case $\text{I3}$ denotes the weakest kind of rank into rank embedding and so the $\text{I3}$ notation for the axiom is somewhat ambiguous. To eliminate this ambiguity we say $j$ is $E_0(\lambda)$ when $j$ preserves only first-order structure.
The axiom can be strengthened and refined in a natural way by asserting that various degrees of second-order correctness are preserved by the embeddings. A rank into rank embedding $j$ is said to be $\Sigma^1_n$ or
$\Sigma^1_n$ correct if, for every $\Sigma^1_n$ formula $\Phi$ and $A\subseteq V_\lambda$ the elementary schema holds for $j,\Phi$, and $A$: $$V_\lambda\models\Phi(A) \Leftrightarrow V_\lambda\models\Phi(j(A)).$$The more specific axiom $E_n(\lambda)$ asserts that some $j\in\mathcal{E}(V_\lambda)$ is $\Sigma^1_{2n}$.
The ``$2n$" subscript in the axiom $E_n(\lambda)$ is incorporated so that the axioms $E_m(\lambda)$ and $E_n(\lambda)$ where $m<n$ are strictly increasing in strength. This is somewhat subtle. For $n$ odd, $j$ is $\Sigma^1_n$ if and only if $j$ is $\Sigma^1_{n+1}$. However, for $n$ even, $j$ being $\Sigma^1_{n+1}$ is
significantly stronger than a $j$ being $\Sigma^1_n$ [2].
The $\text{I2}$ Axiom
Any $j:V_\lambda\to V_\lambda$ can be extended to a $j^+:V_{\lambda+1}\to V_{\lambda+1}$ but in only one way: Define for each $A\subseteq V_\lambda$ $$j^+(A)=\bigcup_{\alpha < \lambda}(j(V_\alpha\cap A)).$$ $j^+$ is not necessarily elementary. The $\text{I2}$ axiom asserts the existence of some elementary embedding $j:V\to M$ with $V_\lambda\subseteq M$ where $\lambda$ is defined as the $\omega^{th}$ $j$-iterate of the critical point. Although this axiom asserts the existence of a
class embedding with a very strong closure property, it is in fact equivalent to an embedding $j:V_\lambda\to V_\lambda$ with $j^+$ preserving well-founded relations on $V_\lambda$. So this axioms preserves some second-order structure of $V_\lambda$ and is in fact equivalent to $E_1(\lambda)$ in the hierarchy defined above. A specific property of $\text{I2}$ embeddings is that they are iterable (i.e. the direct limit of directed system of embeddings is well-founded). In the literature, $IE(\lambda)$ asserts that $j:V_\lambda\to V_\lambda$ is iterable and $IE(\lambda)$ falls strictly between $E_0(\lambda)$ and $E_1(\lambda)$.
As a result of the strong closure property of $\text{I2}$, the equivalence mentioned above cannot be through an analysis of some ultrapower embedding. Instead, the equivalence is established by constructing a directed system of embeddings of various ultrapowers and using reflection properties of the critical points of the embeddings. The direct limit is well-founded since well-founded relations are preserved by $j^+$. The use of both direct and indirect limits, in conjunction with reflection arguments, is typical for establishing the properties of rank into rank embeddings.
The $\text{I1}$ Axiom
$\text{I1}$ asserts the existence of a nontrivial elementary embedding $j:V_{\lambda+1}\to V_{\lambda+1}$. This axiom is sometimes denoted $\mathcal{E}(V_{\lambda+1})\neq\emptyset$. Any such embedding preserves all second-order properties of $V_\lambda$ and so is $\Sigma^1_n$ for all $n$. To emphasize the preservation of second-order properties, the axiom is also sometimes written as $E_\omega(\lambda)$. In this case, restricting the embedding to $V_\lambda$ and forming $j^+$ as above yields the original embedding.
Strengthening this axiom in a natural way leads the $\text{I0}$ axiom, i.e. asserting that embeddings of the form $j:L(V_{\lambda+1})\to L(V_{\lambda+1})$ exist. There are also other natural strengthenings of $\text{I1}$, though it is not entirely clear how they relate to the $\text{I0}$ axiom. For example, one can assume the existence of elementary embeddings satisfying $\text{I1}$ which extend to embeddings $j:M\to M$ where $M$ is a transitive class inner model and add various requirements to $M$. These requirements must not entail that $M$ satisfies the axiom of choice by the Kunen inconsistency. Requirements that have been considered include assuming $M$ contains $V_{\lambda+1}$, $M$ satisfies $DC_\lambda$, $M$ satisfies replacement for formulas containing $j$ as a parameter, $j(\mathrm{crit}(j))$ is arbitrarily large in $M$, etc.
Virtually rank-into-rank
(Information in this subsection from [4])
A cardinal $κ$ is
virtually rank-into-rank iff in a set-forcing extension it is the critical point of an elementary embedding $j : V_λ → V_λ$ for some $λ > κ$.
This notion does not require stratification, because Kunen’s Inconsistency does not hold for virtual embeddings.
- Every virtually rank-into-rank cardinal is a virtually $n$-huge* limit of virtually $n$-huge* cardinals for every $n < ω$.
- The least ω-Erdős cardinal $η_ω$ is a limit of virtually rank-into-rank cardinals.
- Every virtually rank-into-rank cardinal is an $ω$-iterable limit of $ω$-iterable cardinals.
- Every element of a club $C$ witnessing that $κ$ is a Silver cardinal is virtually rank-into-rank.
Large Cardinal Properties of Critical Points
The critical points of rank into rank embeddings have many strong reflection properties. They are measurable, $n$-huge for all $n$ (hence the terminology $\omega$-huge mentioned in the introduction) and partially supercompact.
Using $\kappa_0$ as a seed, one can form the ultrafilter $$U=\{X\subseteq\mathcal{P}(\kappa_0): j``\kappa_0\in j(X)\}.$$ Thus, $\kappa_0$ is a measurable cardinal.
In fact, for any $n$, $\kappa_0$ is also $n$-huge as witnessed by the ultrafilter $$U=\{X\subseteq\mathcal{P}(\kappa_n): j``\kappa_n\in j(X)\}.$$ This motivates the term $\omega$-huge cardinal mentioned in the introduction.
Letting $j^n$ denote the $n^{th}$ iteration of $j$, then $$V_\lambda\models ``\kappa_0\text{ is supercompact"}.$$ To see this, suppose $\kappa_0\leq \theta <\kappa_n$, then $$U=\{X\subseteq\mathcal{P}_{\kappa_0}(\theta): j^n``\theta\in j^n(X)\}$$ witnesses the $\theta$-compactness of $\kappa_0$ (in $V_\lambda$). For this last claim, it is enough that $\kappa_0(j)$ is $<\lambda$-supercompact, i.e. not *fully* supercompact in $V$. In this case, however, $\kappa_0$ *could* be fully supercompact.
Critical points of rank-into-rank embeddings also exhibit some *upward* reflection properties. For example, if $\kappa$ is a critical point of some embedding witnessing $\text{I3}(\kappa,\lambda)$, then there must exist another embedding witnessing $\text{I3}(\kappa',\lambda)$ with critical point
above $\kappa$! This upward type of reflection is not exhibited by large cardinals below extendible cardinals in the large cardinal hierarchy.
Algebras of elementary embeddings
If $j,k\in\mathcal{E}_{\lambda}$, then $j^+(k)\in\mathcal{E}_{\lambda}$ as well. We therefore define a binary operation $*$ on $\mathcal{E}_{\lambda}$ called application defined by $j*k=j^{+}(k)$. The binary operation $*$ together with composition $\circ$ satisfies the following identities:
1. $(j\circ k)\circ l=j\circ(k\circ l),\,j\circ k=(j*k)\circ j,\,j*(k*l)=(j\circ k)*l,\,j*(k\circ l)=(j*k)\circ(j*l)$
2. $j*(k*l)=(j*k)*(j*l)$ (self-distributivity).
Identity 2 is an algebraic consequence of the identities in 1.
If $j\in\mathcal{E}_{\lambda}$ is a nontrivial elementary embedding, then $j$ freely generates a subalgebra of $(\mathcal{E}_{\lambda},*,\circ)$ with respect to the identities in 1, and $j$ freely generates a subalgebra of $(\mathcal{E}_{\lambda},*)$ with respect to the identity 2.
If $j_{n}\in\mathcal{E}_{\lambda}$ for all $n\in\omega$, then $\sup\{\textrm{crit}(j_{0}*\dots*j_{n})\mid n\in\omega\}=\lambda$ where the implied parentheses are grouped on the left (for example, $j*k*l=(j*k)*l$).
Suppose now that $\gamma$ is a limit ordinal with $\gamma<\lambda$. Then define an equivalence relation $\equiv^{\gamma}$ on $\mathcal{E}_{\lambda}$ where $j\equiv^{\gamma}k$ if and only if $j(x)\cap V_{\gamma}=k(x)\cap V_{\gamma}$ for each $x\in V_{\gamma}$. Then the equivalence relation $\equiv^{\gamma}$ is a congruence on the algebra $(\mathcal{E}_{\lambda},*,\circ)$. In other words, if $j_{1},j_{2},k\in \mathcal{E}_{\lambda}$ and $j_{1}\equiv^{\gamma}j_{2}$ then $j_{1}\circ k\equiv^{\gamma} j_{2}\circ k$ and $j_{1}*k\equiv^{\gamma}j_{2}*k$, and if $j,k_{1},k_{2}\in\mathcal{E}_{\lambda}$ and $k_{1}\equiv^{\gamma}k_{2}$ then $j\circ k_{1}\equiv^{\gamma}j\circ k_{2}$ and $j*k_{1}\equiv^{j(\gamma)}j*k_{2}$.
If $\gamma<\lambda$, then every finitely generated subalgebra of $(\mathcal{E}_{\lambda}/\equiv^{\gamma},*,\circ)$ is finite.
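A concrete way to get a feel for these finite quotients: in the one-generator case they are (by a theorem of Laver, if I am stating it correctly) exactly the finite Laver tables $A_n$ on $\{1,\dots,2^n\}$, obtained when $\gamma$ ranges over the critical sequence of $j$. A minimal Python sketch of the tables themselves; the function name `laver_table` is mine, and the only inputs are the defining recurrences:

    def laver_table(n):
        """Return the Laver table A_n, 1-based, as star[p][q] = p * q, using
            p * 1       = p + 1              (with 2**n * 1 = 1)
            p * (q + 1) = (p * q) * (p + 1)
        Rows are filled from p = 2**n downward; this works because every
        entry of row p is > p (with 2**n counted as the largest element).
        """
        N = 2 ** n
        star = [[0] * (N + 1) for _ in range(N + 1)]
        for p in range(1, N + 1):
            star[p][1] = p % N + 1
        for p in range(N, 0, -1):
            for q in range(1, N):
                star[p][q + 1] = star[star[p][q]][p % N + 1]
        return star

    A2 = laver_table(2)
    for p in range(1, 5):
        print(p, [A2[p][q] for q in range(1, 5)])
    # rows: 2 4 2 4 / 3 4 3 4 / 4 4 4 4 / 1 2 3 4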
$C^{(n)}$ variants
(section from [5])
$\mathrm{I3}$ and other $C^{(n)}$ variants:
Assuming $\mathrm{I3}(κ, δ)$, if $δ$ is a limit cardinal (instead of a successor of a limit cardinal – Kunen’s Theorem excludes other cases), it is equal to $sup\{j^m(κ) : m ∈ ω\}$ where $j$ is the elementary embedding. Then $κ$ and $j^m(κ)$ are $C^{(n)}$-superstrong, $C^{(n)}$-extendible and $C^{(n)}$-$m$-huge in $V_δ$, for all $n$ and $m$.
Definitions of $C^{(n)}$ variants of rank-into-rank cardinals:
- $κ$ is called a $C^{(n)}$-$\mathrm{I3}$ cardinal if it is an $\mathrm{I3}$ cardinal, witnessed by some embedding $j : V_δ → V_δ$ with $j(κ) ∈ C^{(n)}$.
- $κ$ is called a $C^{(n)+}$-$\mathrm{I3}$ cardinal if it is an $\mathrm{I3}$ cardinal, witnessed by some embedding $j : V_δ → V_δ$ with $δ ∈ C^{(n)}$.
- $κ$ is called a $C^{(n)}$-$\mathrm{I1}$ cardinal if it is an $\mathrm{I1}$ cardinal, witnessed by some embedding $j : V_{δ+1} → V_{δ+1}$ with $j(κ) ∈ C^{(n)}$.
Results:
- If $κ$ is $C^{(n)}$-$\mathrm{I3}$, then $κ ∈ C^{(n)}$.
- Every $\mathrm{I3}$ cardinal is $C^{(1)}$-$\mathrm{I3}$ and $C^{(1)+}$-$\mathrm{I3}$.
- By simple reflection arguments: the least $C^{(n)}$-$\mathrm{I3}$ cardinal is not $C^{(n+1)}$-$\mathrm{I1}$, for $n ≥ 1$; the least $C^{(n)}$-$\mathrm{I1}$ cardinal is not $C^{(n+1)}$-$\mathrm{I1}$, for $n ≥ 1$.
- Every $C^{(n)}$-$\mathrm{I1}$ cardinal is also $C^{(n)}$-$\mathrm{I3}$.
- For every $n ≥ 1$, if $δ$ is a limit ordinal and $j : V_δ → V_δ$ witnesses that $κ$ is $\mathrm{I3}$, then $(\forall m < ω)\, j^m(κ) ∈ C^{(n)} \iff δ ∈ C^{(n)}$.
- If $κ$ is $C^{(n)}$-$\mathrm{I3}$, then it is $C^{(n)}$-$m$-huge for all $m$, and there is a normal ultrafilter $\mathcal{U}$ over $κ$ such that $\{α < κ : α$ is $C^{(n)}$-$m$-huge for every $m\} ∈ \mathcal{U}$.
- If $κ$ is $C^{(n)}$-$\mathrm{I1}$, then the least $δ$ such that there is an elementary embedding $j : V_{δ+1} → V_{δ+1}$ with $\mathrm{crit}(j) = κ$ and $j(κ) ∈ C^{(n)}$ is smaller than the first ordinal in $C^{(n+1)}$ greater than $κ$. Moreover, the least $C^{(n)}$-$\mathrm{I1}$ cardinal, if it exists, is smaller than the first ordinal in $C^{(n+1)}$, for all $n ≥ 1$.
References

1. Kanamori, Akihiro. *The higher infinite. Large cardinals in set theory from their beginnings.* Second edition, Springer-Verlag, Berlin, 2009. (Paperback reprint of the 2003 edition.)
2. Laver, Richard. *Implications between strong large cardinal axioms.* Ann. Math. Logic 90(1–3):79–90, 1997.
3. Corazza, Paul. *The gap between ${\rm I}_3$ and the wholeness axiom.* Fund. Math. 179(1):43–60, 2003.
4. Gitman, Victoria and Schindler, Ralf. *Virtual large cardinals.*
5. Bagaria, Joan. *$C^{(n)}$-cardinals.* Archive for Mathematical Logic 51(3–4):213–240, 2012.
|
Let $a \in \Bbb{R}$. Let $\Bbb{Z}$ act on $S^1$ via $(n,z) \mapsto ze^{2 \pi i \cdot an}$. Claim: This action is not free if and only if $a \in \Bbb{Q}$. Here's an attempt at the forward direction: If the action is not free, there is some nonzero $n$ and $z \in S^1$ such that $ze^{2 \pi i \cdot an} = 1$. Note $z = e^{2 \pi i \theta}$ for some $\theta \in [0,1)$.
Then the equation becomes $e^{2 \pi i(\theta + an)} = 1$, which holds if and only if $2\pi (\theta + an) = 2 \pi k$ for some $k \in \Bbb{Z}$. Solving for $a$ gives $a = \frac{k-\theta}{n}$...
What if $\theta$ is irrational...what did I do wrong?
'cause I understand that second one but I'm having a hard time explaining it in words
(Re: the first one: a matrix transpose "looks" like the equation $Ax\cdot y=x\cdot A^\top y$. Which implies several things, like how $A^\top x$ is perpendicular to $A^{-1}x^\top$ where $x^\top$ is the vector space perpendicular to $x$.)
DogAteMy: I looked at the link. You're writing garbage with regard to the transpose stuff. Why should a linear map from $\Bbb R^n$ to $\Bbb R^m$ have an inverse in the first place? And for goodness sake don't use $x^\top$ to mean the orthogonal complement when it already means something.
he based much of his success on principles like this I cant believe ive forgotten it
it's basically saying that it's a waste of time to throw a parade for a scholar or win he or she over with compliments and awards etc but this is the biggest source of sense of purpose in the non scholar
yeah there is this thing called the internet and well yes there are better books than others you can study from provided they are not stolen from you by drug dealers you should buy a text book that they base university courses on if you can save for one
I was working from "Problems in Analytic Number Theory" Second Edition, by M.Ram Murty prior to the idiots robbing me and taking that with them which was a fantastic book to self learn from one of the best ive had actually
Yeah I wasn't happy about it either it was more than $200 usd actually well look if you want my honest opinion self study doesn't exist, you are still being taught something by Euclid if you read his works despite him having died a few thousand years ago but he is as much a teacher as you'll get, and if you don't plan on reading the works of others, to maintain some sort of purity in the word self study, well, no you have failed in life and should give up entirely. but that is a very good book
regardless of you attending Princeton university or not
yeah me neither you are the only one I remember talking to on it but I have been well and truly banned from this IP address for that forum now, which, which was as you might have guessed for being too polite and sensitive to delicate religious sensibilities
but no it's not my forum I just remembered it was one of the first I started talking math on, and it was a long road for someone like me being receptive to constructive criticism, especially from a kid a third my age which according to your profile at the time you were
i have a chronological disability that prevents me from accurately recalling exactly when this was, don't worry about it
well yeah it said you were 10, so it was a troubling thought to be getting advice from a ten year old at the time i think i was still holding on to some sort of hopes of a career in non stupidity related fields which was at some point abandoned
@TedShifrin thanks for that in bookmarking all of these under 3500, is there a 101 i should start with and find my way into four digits? what level of expertise is required for all of these is a more clear way of asking
Well, there are various math sources all over the web, including Khan Academy, etc. My particular course was intended for people seriously interested in mathematics (i.e., proofs as well as computations and applications). The students in there were about half first-year students who had taken BC AP calculus in high school and gotten the top score, about half second-year students who'd taken various first-year calculus paths in college.
long time ago tho even the credits have expired not the student debt though so i think they are trying to hint i should go back a start from first year and double said debt but im a terrible student it really wasn't worth while the first time round considering my rate of attendance then and how unlikely that would be different going back now
@BalarkaSen yeah from the number theory i got into in my most recent years it's bizarre how i almost became allergic to calculus i loved it back then and for some reason not quite so when i began focusing on prime numbers
What do you all think of this theorem: The number of ways to write $n$ as a sum of four squares is equal to $8$ times the sum of divisors of $n$ if $n$ is odd and $24$ times sum of odd divisors of $n$ if $n$ is even
A proof of this uses (basically) Fourier analysis
Even though it looks rather innocuous albeit surprising result in pure number theory
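(Aside, not part of the Fourier-analytic proof: the statement is easy to spot-check numerically for small $n$ by brute-force lattice-point counting. A throwaway Python sketch, with names of my own choosing:)

    from itertools import product
    from math import isqrt

    def r4(n):
        """Number of representations of n as an ordered sum of four squares
        (order and signs count, zeros allowed)."""
        b = isqrt(n)
        return sum(1 for x in product(range(-b, b + 1), repeat=4)
                   if x[0]**2 + x[1]**2 + x[2]**2 + x[3]**2 == n)

    def predicted(n):
        divisors = [d for d in range(1, n + 1) if n % d == 0]
        if n % 2:
            return 8 * sum(divisors)
        return 24 * sum(d for d in divisors if d % 2)

    for n in range(1, 21):
        assert r4(n) == predicted(n), n
    print("matches the four-square formula for n = 1..20")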
@BalarkaSen well because it was what Wikipedia deemed my interests to be categorized as i have simply told myself that is what i am studying, it really starting with me horsing around not even knowing what category of math you call it. actually, ill show you the exact subject you and i discussed on mmf that reminds me you were actually right, i don't know if i would have taken it well at the time tho
yeah looks like i deleted the stack exchange question on it anyway i had found a discrete Fourier transform for $\lfloor \frac{n}{m} \rfloor$ and you attempted to explain to me that is what it was that's all i remember lol @BalarkaSen
oh and when it comes to transcripts involving me on the internet, don't worry, the younger version of you most definitely will be seen in a positive light, and just contemplating all the possibilities of things said by someone as insane as me, agree that pulling up said past conversations isn't productive
absolutely me too but would we have it any other way? i mean i know im like a dog chasing a car as far as any real "purpose" in learning is concerned i think id be terrified if something didnt unfold into a myriad of new things I'm clueless about
@Daminark The key thing if I remember correctly was that if you look at the subgroup $\Gamma$ of $\text{PSL}_2(\Bbb Z)$ generated by (1, 2|0, 1) and (0, -1|1, 0), then any holomorphic function $f : \Bbb H^2 \to \Bbb C$ invariant under $\Gamma$ (in the sense that $f(z + 2) = f(z)$ and $f(-1/z) = z^{2k} f(z)$, $2k$ is called the weight) such that the Fourier expansions of $f$ at infinity and $-1$ have no constant coefficients is called a cusp form (on $\Bbb H^2/\Gamma$).
The $r_4(n)$ thing follows as an immediate corollary of the fact that the only weight $2$ cusp form is identically zero.
I can try to recall more if you're interested.
It's insightful to look at the picture of $\Bbb H^2/\Gamma$... it's like, take the line $\Re[z] = 1$, the semicircle $|z| = 1, z > 0$, and the line $\Re[z] = -1$. This gives a certain region in the upper half plane
Paste those two lines, and paste half of the semicircle (from -1 to i, and then from i to 1) to the other half by folding along i
Yup, that $E_4$ and $E_6$ generates the space of modular forms, that type of things
I think in general if you start thinking about modular forms as eigenfunctions of a Laplacian, the space generated by the Eisenstein series are orthogonal to the space of cusp forms - there's a general story I don't quite know
Cusp forms vanish at the cusp (those are the $-1$ and $\infty$ points in the quotient $\Bbb H^2/\Gamma$ picture I described above, where the hyperbolic metric gets coned off), whereas given any values on the cusps you can make a linear combination of Eisenstein series which takes those specific values on the cusps
So it sort of makes sense
Regarding that particular result, saying it's a weight 2 cusp form is like specifying a strong decay rate of the cusp form towards the cusp. Indeed, one basically argues like the maximum value theorem in complex analysis
@BalarkaSen no you didn't come across as pretentious at all, i can only imagine being so young and having the mind you have would have resulted in many accusing you of such, but really, my experience in life is diverse to say the least, and I've met know it all types that are in everyway detestable, you shouldn't be so hard on your character you are very humble considering your calibre
You probably don't realise how low the bar drops when it comes to integrity of character is concerned, trust me, you wouldn't have come as far as you clearly have if you were a know it all
it was actually the best thing for me to have met a 10 year old at the age of 30 that was well beyond what ill ever realistically become as far as math is concerned someone like you is going to be accused of arrogance simply because you intimidate many ignore the good majority of that mate
|
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
How does the Evolute of an Involute of a curve $\Gamma$ is $\Gamma$ itself?Definition from wiki:-The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior product, why would someone come up with something like that. I mean it looks really similar to the formula of the covariant derivative along a vector field for a tensor but otherwise I don't see why would it be something natural to come up with. The only places I have used it is deriving the poisson bracket of two one forms
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leads you at a place different from $p$. And upto second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$
Gently taking caring of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitisimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
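(A hedged sanity check of the $k=1$ case, done with plain sympy rather than any differential-geometry package: write $\omega = P\,dx + Q\,dy$ on $\Bbb R^2$, represent vector fields by their coefficient pairs, and compare $d\omega(X,Y)$ from the coordinate formula with $X\omega(Y)-Y\omega(X)-\omega([X,Y])$. All names below are mine.)

    import sympy as sp

    x, y = sp.symbols('x y')

    # a 1-form omega = P dx + Q dy and two vector fields X, Y on R^2
    P, Q = x**2 * y, sp.sin(x) + y**3
    X = (y, x * y)            # X = y d/dx + x*y d/dy
    Y = (x**2, sp.cos(y))     # Y = x^2 d/dx + cos(y) d/dy
    omega = (P, Q)

    def apply_vf(V, f):
        """Directional derivative V(f)."""
        return V[0] * sp.diff(f, x) + V[1] * sp.diff(f, y)

    def pair(w, V):
        """w(V) for w = (P, Q) meaning P dx + Q dy."""
        return w[0] * V[0] + w[1] * V[1]

    def bracket(V, W):
        """Lie bracket [V, W], componentwise."""
        return (apply_vf(V, W[0]) - apply_vf(W, V[0]),
                apply_vf(V, W[1]) - apply_vf(W, V[1]))

    # left side: d(omega)(X, Y) with d(omega) = (dQ/dx - dP/dy) dx ^ dy
    lhs = (sp.diff(Q, x) - sp.diff(P, y)) * (X[0] * Y[1] - X[1] * Y[0])
    # right side: X(omega(Y)) - Y(omega(X)) - omega([X, Y])
    rhs = apply_vf(X, pair(omega, Y)) - apply_vf(Y, pair(omega, X)) - pair(omega, bracket(X, Y))

    print(sp.simplify(lhs - rhs))   # prints 0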
Let's do bullshit generality. $E$ be a vector bundle on $M$ and $\nabla$ be a connection $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's a pointwise defined on the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$ not the full vector field. That makes sense, right? You can take directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$? My thought was to try to show that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok first but basically think of it as currying. Making a billinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphsm $TM \to E$ but contracting $s$ in $\nabla s$ only gave as a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $E$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. It's exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading this thing called the Poisson bracket. With the poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\dots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\!\left[\frac{\left(\frac{X_{1}+\dots+X_{n}}{n}\right)^{2}}{2}\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ?
Uh apparenty there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
@Ultradark I don't know what you mean, but you seem down in the dumps champ. Remember, girls are not as significant as you might think, design an attachment for a cordless drill and a flesh light that oscillates perpendicular to the drill's rotation and your done. Even better than the natural method
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seem very elegant. Is there a better way?
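(For what it's worth, the brute-force idea is easy to mechanize; here is a hedged Python sketch, with helper names of my own choosing, that closes a generating set under composition and confirms a subgroup of each order dividing 24.)

    def compose(p, q):
        """(p o q)(i) = p[q[i]]; permutations of {0, 1, 2, 3} as 4-tuples."""
        return tuple(p[q[i]] for i in range(4))

    def generated_subgroup(gens):
        """Close gens under composition; everything is finite, so this terminates."""
        group = {(0, 1, 2, 3)}          # start from the identity
        changed = True
        while changed:
            changed = False
            for a in list(group):
                for g in gens:
                    for prod in (compose(a, g), compose(g, a)):
                        if prod not in group:
                            group.add(prod)
                            changed = True
        return group

    # one-line notation: each tuple lists the images of 0, 1, 2, 3
    examples = {
        1:  [],
        2:  [(1, 0, 2, 3)],                 # a transposition
        3:  [(1, 2, 0, 3)],                 # a 3-cycle
        4:  [(1, 2, 3, 0)],                 # a 4-cycle
        6:  [(1, 0, 2, 3), (1, 2, 0, 3)],   # an S_3 fixing the last point
        8:  [(1, 2, 3, 0), (2, 1, 0, 3)],   # a dihedral 2-Sylow
        12: [(1, 2, 0, 3), (0, 2, 3, 1)],   # A_4, generated by two 3-cycles
        24: [(1, 0, 2, 3), (1, 2, 3, 0)],   # all of S_4
    }

    for order, gens in examples.items():
        assert len(generated_subgroup(gens)) == order, order
    print("S_4 has a subgroup of every order dividing 24")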
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say
|
$$y=(c_1+c_2t)e^{rt}$$
comes from, it is pretty clear in the case $r=0$, where $D^2y(t)=0$ is solved by
$$y=c_1+c_2t$$
... so it seems that the linear function comes from integrating twice, or more correctly, inverting the same differential operator twice.
Let's try to derive our desired equation $y=(c_1+c_2t)e^{rt}$ via a limit. It doesn't seem like this would arise in the limit of an equation like $y=c_1e^{r_1t}+c_2e^{r_2t}$, but once again -- this is an arbitrary-constant-problem. Much like how we switched to definite integrals (i.e. fixed the limits/boundary conditions of the integral) before taking the limit in Part 1, we must fix the initial conditions here too.
For those new to this series, here's the reason we switch to an initial conditions approach/co-ordinate system:
(Taken from my answer on Math Stack Exchange.) Most people have the right idea: you need to take the solution for non-repeated roots, and take the limit as the roots approach each other. This is correct, but it's a mistake to take the limit of the general solution $c_1e^{r_1t}+c_2e^{r_2t}$, which is what most people try to do when they see this problem, and are then puzzled since it gives you a solution space of the wrong dimension. This is wrong, because $c_1$ and $c_2$ are arbitrary mathematical labels, and have no reason to stay the same as the roots approach each other. You can, however, take the limit while representing the solution in terms of your initial conditions, because these can stay the same as you change the system. You can think of this as a physical system where you change the damping and other parameters to create a repeated-roots system as the initial conditions remain the same -- this is a simple process, but if you instead try to ensure $c_1$ and $c_2$ remain the same, you'll run into infinities and undefined stuff. This is exactly what happens here, there simply isn't a repeated-roots solution with the same $c_1$ and $c_2$ values, but you obviously do have a system/solution with the same initial conditions.
We consider the differential equation
$$(D-I)(D-rI)y(t)=0$$
And tend $r\to1$. The solution to the equation in general is
$$y(t) = {c_1}{e^t} + {c_2}{e^{rt}}$$
If we let $y(0) = a,\,\,y'(0) = b$, then it shouldn't be hard to show that the solution we're looking for is
$$y(t)=\frac{ra-b}{r-1}e^t-\frac{a-b}{r-1}e^{rt}$$
This is where we must tend $r\to1$. Doing so is simply algebraic manipulation and a bit of limits:
$$\begin{array}{c}y(t) = \frac{{\left( {ra - b} \right){e^t} - \left( {a - b} \right){e^{rt}}}}{{r - 1}} = \frac{{\left( {ra - b} \right) - \left( {a - b} \right){e^{(r - 1)t}}}}{{r - 1}}{e^t}\\ = \frac{{(r - 1)a + \left( {a - b} \right) - \left( {a - b} \right){e^{(r - 1)t}}}}{{r - 1}}{e^t}\\ = \left[ {a + \left( {a - b} \right)\frac{{1 - {e^{(r - 1)t}}}}{{r - 1}}} \right]{e^t}\\ = \left[ {a - \left( {a - b} \right)\frac{{{e^{(r - 1)t}} - {e^{0t}}}}{{r - 1}}} \right]{e^t}\\ = \left[ {a - \left( {a - b} \right){{\left. {\frac{d}{{dx}}\left[ {{e^{xt}}} \right]} \right|}_{x = 0}}} \right]{e^t}\\ = \left[ {a - \left( {a - b} \right)t} \right]{e^t}\end{array}$$
Which indeed takes the form
$$y(t) = \left( {{c_1} + {c_2}t} \right){e^t}$$
With $c_1,\,\,c_2$ such that $y(0)=a,\,\,y'(0)=b$.
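The limit step above is also easy to confirm symbolically. A small sketch (assuming sympy is available; variable names are mine):

    import sympy as sp

    t, r, a, b = sp.symbols('t r a b')

    # solution of (D - I)(D - rI) y = 0 with y(0) = a, y'(0) = b, for r != 1
    y_r = ((r*a - b)*sp.exp(t) - (a - b)*sp.exp(r*t)) / (r - 1)

    # the initial conditions hold for generic r (both print 0) ...
    print(sp.simplify(y_r.subs(t, 0) - a))
    print(sp.simplify(sp.diff(y_r, t).subs(t, 0) - b))

    # ... and the r -> 1 limit reproduces the repeated-root solution
    y_lim = sp.simplify(sp.limit(y_r, r, 1))
    print(y_lim)                                            # (a - (a - b)*t)*exp(t), up to rearrangement
    print(sp.simplify(y_lim - (a - (a - b)*t)*sp.exp(t)))   # 0

The final zero confirms that the limit agrees with $\left[a-(a-b)t\right]e^{t}$.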
Here's a visualisation of the limit, with varying values of $r$; there is also an interactive version with a slider for $r$.
|
Seems like the experts are not answering your question, so I will try to provide an idea. But before I do that, I strongly suggest that you look in the literature for some of the sophisticated methods that have already been developed. However, without guaranteeing that this is a good or fast or efficient suggestion, I propose the following methodology. Keep in mind that I may have made some mistakes, so I do not guarantee that everything is fully correct, but I hope the idea of the method gives you an approach that will help you solve your problem.
Let $V$ be the set of your points in the whole "big" cube. Fix your "small" cube $C$ somewhere in the big cube and let $ V_C$ be the set of points that are contained in $C$, i.e. $V_C = V \cap C.$ Initially set $V'_C=V_C$.
Step 1: Generate the Voronoi diagram $Vor(V'_C)$. For each point $v \in V'_C$ denote by $Vor(v)$ its Voronoi cell, which is a convex polyhedron in three-space. Furthermore, denote by $W(v)$ the vertices of the Voronoi cell centered at $v \in V'_C$ and by $W(V'_C) = \cup_{v \in V'_C} W(v)$ the vertices of all Voronoi cells from the Voronoi diagram $Vor(V'_C)$.
Step 2: Color all points from $V'_C$ and all Voronoi vertices $W(V'_C)$ white.
Step 3: For each Voronoi vertex $w \in W(V'_C)$ draw the Delaunay sphere centered at $w$, that is the sphere with center $w$ and radius the distance between $w$ and one of the points from $V'_C$ whose Voronoi cell has $w$ as a vertex (it doesn't matter which point, there are several but the result is always the same).
Case 3.1. If the Delaunay sphere of $w$ is contained in the cube $C$, color $w$ black.
Case 3.2. If the Delaunay sphere is not contained in the cube $C$ but it doesn't contain any point from $V$ in its (open) interior, color the point $w$ black.
Case 3.3. If the Delaunay sphere of $w$ contains points from $V$ in its (open) interior, (1) add the points from $V$ contained in the interior of the sphere to the set $V'_C$ and (2) keep the color of the point $w$ white.
Step 4: For each point $v \in V'_C$ check if all Voronoi vertices $W(v)$ of its Voronoi cell are black. If not all of them are black, keep the color of $v$ white. If they are black, color $v$ black.
Step 5: Check if all points of the original set $V_C$ are black.
Case 5.1. If they are all black, the Voronoi diagram $Vor(V'_C)$ restricted to the cube $C$ is the local portion of the global Voronoi diagram $Vor(V)$ restricted to $C$. End.
Case 5.2. If there are white vertices in $V_C$, then go back to Step 1. In Step 1, when generating the new Voronoi diagram $Vor(V'_C)$, one keeps the Voronoi cells around black points from $V'_C$ the same, keeps all black Voronoi vertices from $W(V'_C)$, and makes alterations only in relation to the white ones.
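To make Steps 1–3 concrete, here is a rough Python sketch of one classification round using scipy (I am using the fact that the Voronoi vertices of $V'_C$ are the circumcenters of the Delaunay tetrahedra, so the "Delaunay sphere" of a Voronoi vertex is the corresponding circumsphere). Function names such as `classify_round` are my own, and this is only a sketch of the tests, not of the full iteration:

    import numpy as np
    from scipy.spatial import Delaunay, cKDTree

    def circumsphere(pts):
        """Circumcenter and radius of a tetrahedron given as a (4, 3) array."""
        p0, rest = pts[0], pts[1:]
        A = 2.0 * (rest - p0)
        rhs = np.sum(rest**2 - p0**2, axis=1)
        center = np.linalg.solve(A, rhs)
        return center, np.linalg.norm(center - p0)

    def classify_round(V, Vc_idx, cube_min, cube_max, tol=1e-9):
        """One pass of Steps 1-3: classify the Voronoi vertices of V'_C and
        collect the outside points that must be added to V'_C (Case 3.3)."""
        tree = cKDTree(V)
        tri = Delaunay(V[Vc_idx])
        white_vertices, to_add = [], set()
        for simplex in tri.simplices:
            center, radius = circumsphere(V[Vc_idx][simplex])
            if np.all(center - radius >= cube_min) and np.all(center + radius <= cube_max):
                continue                                   # Case 3.1: black
            inside = set(tree.query_ball_point(center, radius - tol))
            intruders = inside - set(Vc_idx[simplex])      # points of V strictly inside the sphere
            if not intruders:
                continue                                   # Case 3.2: black
            white_vertices.append(center)                  # Case 3.3: stays white
            to_add |= intruders
        return white_vertices, sorted(to_add)

    # toy usage: 500 global points in [0,1]^3, local cube [0.3, 0.7]^3
    rng = np.random.default_rng(0)
    V = rng.random((500, 3))
    cube_min, cube_max = np.full(3, 0.3), np.full(3, 0.7)
    Vc_idx = np.where(np.all((V >= cube_min) & (V <= cube_max), axis=1))[0]
    white, extra = classify_round(V, Vc_idx, cube_min, cube_max)
    print(len(white), "white Voronoi vertices;", len(extra), "points to add to V'_C")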
I hope this helps.
|
CDS 212, Homework 4, Fall 2010
J. Doyle, CDS 212, Fall 2010
Issued: 19 Oct 2010. Due: 28 Oct 2010.
Reading: DFT, Chapter 6
Problems
[DFT 6.1] Show that any stable transfer function can be uniquely factored as the product of an all pass function and a minimum phase function, up to a choice of sign.
[DFT 6.4] Let <amsmath>
P(s) = 4\frac{s-2}{(s+1)^2}</amsmath>
and suppose that <amsmath>C</amsmath> is an internally stabilizing controller such that <amsmath>\| S \|_\infty = 1.5.</amsmath> Give a positive lower bound for
<amsmath> \max_{0 \leq \omega \leq 0.1} |S(j\omega)|.</amsmath>

[DFT 6.5] Define <amsmath>\epsilon = \|W_1 S\|_\infty</amsmath> and <amsmath>\delta = \| C S \|_\infty</amsmath>, so that <amsmath>\epsilon</amsmath> is a measure of tracking performance and <amsmath>\delta</amsmath> measures control effort. Show that for every point <amsmath>s_0</amsmath> with Re <amsmath>s_0 \geq 0</amsmath>, <amsmath> |W_1(s_0)| \leq \epsilon + |W_1 (s_0) P(s_0)|\, \delta.</amsmath>
Hence <amsmath>\epsilon</amsmath> and <amsmath>\delta</amsmath> cannot both be very small and so we cannot get good tracking without exerting some control effort.
[DFT 6.6] Let <amsmath>\omega</amsmath> be a frequency such that <amsmath>j \omega</amsmath> is not a pole of <amsmath>P</amsmath>. Suppose that <amsmath> \epsilon := |S(j\omega)| < 1.</amsmath>
Derive a lower bound for <amsmath>C(j\omega)</amsmath> that blows up as <amsmath>\epsilon \to 0</amsmath>. Hence good tracking at a particular frequency requires large controller gain at this frequency.
[DFT 6.7] Consider a plant with transfer function <amsmath> P(s) = \frac{1}{s^2 - s + 4}</amsmath>
and suppose we want to design an internally stabilizing controller such that
<amsmath>|S(j\omega)| \leq \epsilon</amsmath> for <amsmath>0 \leq \omega \leq 0.1</amsmath> <amsmath>|S(j\omega)| \leq 2</amsmath> for <amsmath>0.1 \leq \omega \leq 5</amsmath> <amsmath>|S(j\omega)| \leq 1</amsmath> for <amsmath>5 \leq \omega \leq \infty</amsmath>
Find a (positive) lower bound on the achievable <amsmath>\epsilon</amsmath>.
|
Answer
The direction angle of the vector of the complex number is its argument, $\theta$.
Work Step by Step
The argument of the complex number $\theta=\tan^{-1} (\frac{y}{x})$ is the angle between the vector of the complex number and the positive x-axis.
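(A small computational aside, not part of the textbook's answer: $\tan^{-1}(y/x)$ by itself only returns values in $(-90°, 90°)$, so in code one usually uses a two-argument arctangent to land in the correct quadrant. A sketch in Python:)

    import math

    def direction_angle(x, y):
        """Direction angle (argument) of x + iy in degrees, in [0, 360)."""
        return math.degrees(math.atan2(y, x)) % 360

    print(direction_angle(1, 1))    # 45.0
    print(direction_angle(-1, 1))   # 135.0  (atan(y/x) alone would give -45)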
|
Long story short: I need to get a function ($f(r,z,\alpha)$) via numerical integration, and then a number from another numerical integration of this function. The method I have at the moment, based on NIntegrate, does converge and gives an answer, but it takes a long time.
I am looking for suggestions for a faster method, even a less precise one, as I just need a rough number.
I found this very tempting method but could not generalise it to multiple variables and tables, as I will need to do for the three arguments of the function.
I am calculating the intensity of a laser beam going through a particular optical component, an $axicon$. I found the data from this paper.
The electric field is given by:
$$ E(r,z) = \frac{2\pi}{\lambda z}w_0 w(z) \int_0^{\infty} e^{-x^2}e^{i\pi(ax^2 - \frac{x}{\Delta})}J_0(\beta x)x\, \mathrm{d}x,$$ where $\beta= \beta(r,z)$, $a=a(z)$, and $\Delta = \Delta(\alpha,z)$.
I am writing this as:
Efield[r_?NumericQ, z_?NumericQ, α_?NumericQ] := ( 2 π)/(λlaser z)*w0*w[z]* NIntegrate[ Exp[-x^2]* Exp[I*π*(a [z] x^2 - x/Δ[α, z])]* BesselJ[0, β [r, z] x]*x, {x, 0, ∞}]
The intensity of the beam is given by: $$ I(r,z) = |E(r,z)|^2,$$
and I am writing as:
Intensity[r_?NumericQ, z_?NumericQ, α_?NumericQ] := Abs[Efield[r, z, α]]^2
Now I need the power $P$:
$$ P = \int^{\infty}_0 2 \pi I(r,\mathrm{distance})\,r\,\mathrm{d}r, $$ where the $z$ coordinate does not matter, as it should give the same answer $\forall z$.
I am writing this as:
PowerLens = NIntegrate[(Intensity[r, distance, conv*20]) * r*2 π, {r, 0, 1}]
I need to look at the intensity (vs. $r$) for different values of $\alpha$ and $z$. I can do this with Table, but it would take an NIntegrate each time, hence the long time. The power needs two nested NIntegrate calls, so it takes a long time too.
Full code:
nm = 10^-6 mm; mm = 10^-3; μm = 10^-3 mm;
λlaser = 532 nm; klaser = (2 π)/λlaser;
w0 = 5 mm; flens = 50 mm; ng = 1.5; conv = (2 π)/360;

(* Gaussian beam parameters *)
zR = π w0^2/λlaser // N
w[z_] = w0*Sqrt[1 + z^2/zR^2]
Rc[z_] = z*(1 + zR^2/z^2)
Guoy[z_] = ArcTan[z/zR]
GaussianBeamField[ρ_, z_] = (w0/w[z])*Exp[-ρ^2/w[z]^2]*Exp[I*(klaser z - Guoy[z] + (klaser ρ^2)/(2 Rc[z]))]
GaussianBeamIntensity[ρ_, z_] = Abs[GaussianBeamField[ρ, z]]^2 // ComplexExpand
Normalisation[z_] := Integrate[2 π*ρ*GaussianBeamIntensity[ρ, z], {ρ, 0, ∞}, Assumptions -> {Element[{ρ, z}, Reals]}]
GaussianBeamIntensityNormalised[ρ_, z_] = (1/Normalisation[0])*GaussianBeamIntensity[ρ, z]

b[α_] = (2 π (ng - 1))/λlaser Tan[α]
Δ[α_, z_] = π/(b[α]*w[z])
β[r_, z_] = (2 π w[z] r)/(λlaser z)
a[z_] = w[z]^2/λlaser*(1/z + (1/Rc[z] - 1/flens))

Efield[r_?NumericQ, z_?NumericQ, α_?NumericQ] := (2 π)/(λlaser z)*w0*w[z]*NIntegrate[Exp[-x^2]*Exp[I*π*(a[z] x^2 - x/Δ[α, z])]*BesselJ[0, β[r, z] x]*x, {x, 0, ∞}]
Intensity[r_?NumericQ, z_?NumericQ, α_?NumericQ] := Abs[Efield[r, z, α]]^2

p1 = Plot[GaussianBeamIntensityNormalised[r, flens], {r, 0, 15 mm}]
p2 = Plot[IntensityNormalised[r, flens, 20*conv], {r, 0, 15 mm}]
Show[p1, p2]
PowerLens = NIntegrate[Intensity[r, flens, conv*20]*r*2 π, {r, 0, ∞}]
|
If $a$, $b$ and $c$ are positive integers such that $a \mid b^{5}$, $b \mid c^{5}$ and $c \mid a^{5}$, prove that $abc \mid (a+b+c)^{31}$.
Note by Eddie The Head 5 years, 6 months ago
In general, if $a \mid b^n$, $b \mid c^n$ and $c \mid a^n$, then $abc \mid (a+b+c)^{n^2 + n + 1}$.
Here a , b, c are becoming equal no?
$a,b,c$ are not necessarily equal, if that's what you mean.
@Daniel Liu – You re correct ...at first glance it looked like that....:p I sometimes suffer from brain malfunctions like these.....
This was my favorite question of the set. It looks so elegant, and is simple to approach if you are careful with the details.
Outline:
Consider each term of the expansion of $(a+b+c)^{31}$, which is of the form $m a^{p} b^{q} c^{r}$, where $m, p, q, r$ are nonnegative integers and $p+q+r=31$. If $p,q,r>0$, then $abc \mid a^p b^q c^r$. You just need to handle the exceptional cases when at least one of $p,q,r$ is zero, which is not hard. I'll post my full solution if necessary.
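A quick brute-force sanity check of the statement (and of the general $n^2+n+1$ exponent mentioned above) for small triples; this is of course not a proof, and the function name is mine:

    def check(limit=30, n=5, exponent=31):
        """If a | b^n, b | c^n and c | a^n, verify abc | (a+b+c)^exponent."""
        for a in range(1, limit + 1):
            for b in range(1, limit + 1):
                if b**n % a:
                    continue                      # need a | b^n
                for c in range(1, limit + 1):
                    if c**n % b or a**n % c:
                        continue                  # need b | c^n and c | a^n
                    assert (a + b + c)**exponent % (a * b * c) == 0, (a, b, c)
        return True

    print(check())                    # the problem as stated: n = 5, exponent 31
    print(check(n=2, exponent=7))     # the general pattern: n^2 + n + 1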
I got this problem when I sat for RMO last time. Heaven knows why I couldn't solve it in the examination hall... it was pretty easy.
I solved this problem orally with you while coming down the staircase :P
Yep, I remember! :P I couldn't solve it in the hall, though.
This seems fairly easy for an Olympiad problem
|
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s?
@daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format).
@JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated it. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems....
well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty...
Consider the following MWE to be previewed in the built-in PDF previewer in Firefox:
\documentclass[handout]{beamer}
\usepackage{pgfpages}
\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]
\begin{document}
\begin{frame}
\[\bigcup_n \sum_n\]
\[\underbrace{aaaaaa}_{bbb}\]
\end{frame}
\end{d...
@Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure.
@JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now
@yo' that's not the issue. with the laptop I lose access to the company network and anythign I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first
@yo' I'm sorry I explain too bad in english :) I mean, if the rule was use \tl_use:N to retrieve the content's of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts.
@JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing.
@Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work
@Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable.
@Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time.
@Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things
@Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)]
@JosephWright I'm just exploring things myself “for fun”. I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :)
@Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not. Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!)
@JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand.
@JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series
@JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code.
@PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
|
The mathematical concept of linearity has a profound impact on electronic design. The idea itself is quite simple, but the implications have great meaning for our field. We talk about the mathematical meaning of linear.
Written by Willy McAllister.
Contents: Preparation, Linearity, Scaling (homogeneity), Additivity, Additivity example, Superposition

Preparation

Function notation
Understanding linearity is an exercise in using function notation $f(x)$.
Whatever the input is, it is stuffed into the argument of the function. The output is $f($input$)$.
If you feel rusty on function notation you can review here.
Variables
Three variables and a function are involved in studying linearity: $x, m, a,$ and $f(x)$.
$x$ represents the input signal. It can take on any value from moment to moment.
$m$ is the slope in the line equation, $f(x) = mx + b$.
$a$ is the scaling multiplier. We use it to test $f(x)$ for the scaling property.
$f(x)$ is a function based on $x$ and other internal built-in constants like $m$.
For linear functions, $m$ is constant (it has the same value for every value of $x$, for all time). In math lingo $m$ is a time-invariant constant coefficient. If $m$ depends on $x$ then $f(x)$ is a non-linear function. If $m$ changes with time it is a time-variant coefficient and the math becomes a royal pain. We don't cover that here.

Linearity
Linearity defined in the mathematical sense: A function $f$ is linear if it has these properties,
Homogeneity (scaling): $f(ax) = af(x)$
Additivity: $f(x_1+x_2) = f(x_1) + f(x_2)$
The homogeneity and additivity together are called the principle of superposition.
You probably already have a sense of what scaling means. Consider two related physical concepts, like speed and distance. Let's say you run for $10$ minutes. Your speed and the distance you travel are related. If you run twice as fast (double your speed), then you double the distance run in $10$ minutes. If you triple your speed, you triple your distance. Distance scales with speed.
Likewise with Ohm’s Law, $v = i\,\text R$. If $\text R$ is constant then voltage $v$ scales with current $i$.
Other meanings of linear
The word “linear” in everyday language describes something that is “line-ish,” resembling a line. A row of trees along the street has a linear arrangement. A linear argument is one that goes straight to the point. In high school algebra, you might use the word linear to describe a straight line, $y = a\,x+b$.
These terms are not exactly the same as what we are talking about here. All linear functions can be graphed as lines, but not all lines are linear functions. It turns out the y-intercept $b$ has to equal $0$ for a line to meet the definition of linear. (We talk about this more down below.)

Some functions are not linear
Two quantities might have a non-linear relationship. What does that look like? Let's say you change your run a little. This time you run a fixed distance instead of a fixed time. If you run twice as fast, the total time it takes to reach the finish line is cut in half. Speed doubled, time halved. Time does not scale with speed. In fact, they go in opposite directions. Therefore, time and speed do not have a linear relationship when the running problem is stated this way.
Sometimes you can dig a little deeper to find a linear relationship. In the case of running for a fixed distance, there's a linear relationship between speed and the reciprocal of time, $1/t$. Speed and $1/t$ scale up and down together in a linear relationship.
Scaling (homogeneity)
We can express the idea of scaling in mathematical notation,
“Doubling the input doubles the output”
Doubling the input is expressed as $f(2x)$. Doubling the output is expressed as $2f(x)$.
If $f(2x) = 2f(x)$ for any value of $x$ that means “doubling” has the scaling property.
In general the test for scaling is,
$f(ax) \stackrel{?}{=} af(x)$
where $a$ is some number called the scale factor. This is the test for the scaling property. The fancy mathematical name for the scaling property is homogeneity.
When you check a function for the scaling property, see if the two sides of the equation are equal. If the test passes for all values of $x$ and all values of $a$, then $f(x)$ is a linear function.
The scaling property can be drawn like this,
Tripler
As a specific example, let’s invent a “tripler” and ask,
Is a tripler linear?
A tripler is described in functional notation like this,
$f(x) = 3x$
We recognize this as the equation of a line, with slope $= 3$ and a y-intercept of $0$. To prove $f(x)$ meets the definition of linear we apply the scaling test. We do this first with plain old algebra, and again by graphing.
Pick an input value, say $x=4$, and perform the scaling test with a scale factor of $a = 2$.
$f(x) = 3x$
$f(ax) \stackrel{?}{=} af(x)\qquad$ scaling test
$f(2x) \stackrel{?}{=} 2f(x)\qquad$ scaling test with $a = 2$
Right side: $\quad af(x) = 2f(4) = 2\,(3 \cdot 4) = \underline{24}$
Left side: $\quad f(ax) = f(2 \cdot 4) = 3 \cdot 8 = \underline{24}$
Both sides produce the same result, $24$, so the scaling test passes.
Graphical interpretation
Here’s $f(x) = 3x$ plotted as the blue line,
$f(x)$ looks pretty “line-ish,” but does it pass the scaling test? We demonstrate the scaling test graphically by plotting some points. We start by plotting an arbitrary first point, $[x, f(x)]$. Then we scale $x$ by $a$ and generate two new points, $[ax, f(ax)]$ and $[ax, af(x)]$. If the new points land on the same spot, the scaling test passes. We use $x=4$ and $a=2$ again.
$[x, f(x)] = [4, 3\cdot4] = [4, 12]$, the $\blueD{\bullet}$ blue dot on the $f(x)$ line.
Apply a scale factor of $a=2$ to $x$ and generate the output two ways,
$[ax, af(x)] = [2\cdot4, 2\,(3\cdot4)] = [8, 24]$ is the $\goldC{\bullet}$ orange dot.
$[ax, f(ax)] = [2\cdot4, 3 \,(2\cdot4)] = [8, 24]$ is the same $\goldC{\bullet}$ orange dot.
Both orange dots fall on the line. This is not a coincidence. This is what the scaling property looks like in action.
We did the scaling test with particular values of $a$ and $x$, but it should be clear from the math that the test passes for any value of $a$ and $x$. Any function that graphs as a line through the origin has the scaling property and is a linear function.
Explore
If you want, pick out some points along the negative $x$-axis and do the scaling test again to see if it holds on that side of the origin.
What happens if f(x) is not a line?
If $f(x)$ is any other shape besides a line, like $f(x)=x^2$ or $1/x$ or $\sin x$, the function does not pass the scaling test.
Let $y = f(x) = x^2/16$
$f(ax) \stackrel{?}{=} af(x)\qquad$ scaling test
$f(2x) \stackrel{?}{=} 2f(x)\qquad$ scaling test with scale factor $a = 2$
Right side: $\quad af(x) = 2f(4) = 2\,(4^2/16) = \underline{2}$
Left side: $\quad f(ax) = f(2 \cdot 4) = 8^2/16 = \underline{4}$
The results, $2$ and $4$, are not the same, so the scaling test fails. We can illustrate this on the graph of the green parabola shown here,
$[x, f(x)] = [4, 4^2/16] = [4, 1]$, the $\greenD{\bullet}$ green dot on the $f(x)$ parabola.
Apply a scale factor of $a=2$ and generate the output two ways,
$[ax, af(x)] = [2\cdot4, 2\,(4^2/16)] = [8, 2]$ is the lower of the two $\goldC{\bullet}$ orange dots.
$[ax, f(ax)] = [2\cdot4, (2\cdot4)^2/16] = [8, 4]$ is the upper $\goldC{\bullet}$ orange dot.
The orange dots don’t land in the same place. This is what a failed scaling test looks like.
This non-linear function does not have the scaling property.
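A tiny numerical version of the scaling test used in both examples (the function name is mine, and checking a handful of sample points is only a spot check, not a proof):

    def passes_scaling_test(f, xs=(-4, -1, 0.5, 4), scales=(2, 3, 0.5)):
        """Spot-check f(a*x) == a*f(x) on a few sample points."""
        return all(abs(f(a * x) - a * f(x)) < 1e-12 for x in xs for a in scales)

    print(passes_scaling_test(lambda x: 3 * x))        # True  -- the tripler
    print(passes_scaling_test(lambda x: x**2 / 16))    # False -- the parabola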
Explore
Perform the scaling test on a line with a non-zero y-intercept. For example, do the scaling test on $f(x) = 3x + 1$. Is this function linear according to our definition?
Additivity
When a function is linear we can derive an adding property. We showed above that linear functions have the form of a line with slope $m$ and no y-intercept,
$f(x) = mx$
Let’s say we apply an input that happens to be the sum of two things, $(x_1 + x_2)$. The function equation becomes,
$f(x_1 + x_2) = m(x_1 + x_2)$
Multiply out the right side of the equation using the distributive property,
$f(x_1 + x_2) = mx_1 + mx_2$
Here’s a nice trick: Notice $mx_1$ is the output of $f()$ if the input is just $x_1$, that is $f(x_1) = mx_1$. That means we can replace $mx_1$ with $f(x_1)$. Likewise, we can replace $mx_2$ with $f(x_2)$. We can rewrite the equation with both $mx$ terms replaced with function notation,
$f(x_1 + x_2) = f(x_1) + f(x_2)$
This is the adding property, or additivity in math lingo. It says: the function of a sum equals the sum of the function applied to each term.
Additivity is the theoretical basis for circuit analysis.
Additivity example
Suppose we have a signal made up of two parts, $x_1 + x_2$. Just for fun, $x_1$ could be the voice of a singer and $x_2$ could be guitar music. A microphone picks up the singer's voice. The sound of the strings is sensed by the guitar "pick ups." Both signals connect to a mixer (the circle with the plus sign) where they are combined, $x_1 + x_2$. The output of the mixer goes to a linear amplifier. The music plays over a loudspeaker.
Does the guitar change what the singer’s voice sounds like?
Let's do an experiment to find out: $f(x)$ is the linear amplifier. We model the amplifier as $f(x) = mx$ where $m$ is the volume setting on the amp. Sound goes in, louder sound comes out. $m$ ranges between $1$ and $11$ on the cool amps. We want the singer and guitar to sound good no matter how loud or soft each one is, at all volume settings.
The input is (voice $+$ guitar), also known as $x_1 + x_2$.
The output is the loud song, $y = f(x_1 + x_2)$.
To tell if the guitar changes the singer’s voice we make three recordings.
The singer belts out the song $(x_1)$ while the guitar player goes off in the corner and does his math homework. We record the singer’s amplified voice and call it $f(x_1)$,
The singer goes on break and the guitar player wails on his guitar, $x_2$. We record the guitar by itself and call it $f(x_2)$,
The singer and guitar play together and record the amplified sound as $f(x_1 + x_2)$,
The question is, does $f(x_1 + x_2)$ sound the same or different than $f(x_1) + f(x_2)$?
To find out we play back the individual voice and guitar recordings and add them together, meaning we listen to $f(x_1) + f(x_2)$.
As we listen to this recording can we tell it apart from $f(x_1 + x_2)$? We were told the amplifier is linear, so it has the additivity property. The additivity property tells us,
$f(x_1 + x_2) = f(x_1) + f(x_2)$
The two individual recordings added together should sound exactly the same as if the signer and guitarist were playing at the same time. That means the guitar has no effect on the singer’s voice coming through the system. Both signals pass through without regard for whatever else is playing.
Do non-linear functions have the additivity property?
Non-linear functions do not have the additivity property.
If the amplifier were non-linear, the singing voice would be modified by the guitar. Depending on the nature of the non-linearity the effect could be, for example, that the singer's voice sounds louder when the guitar is loud. That's not something you want in a hit recording.
$f(x_1 + x_2)\stackrel{?}{=} f(x_1) + f(x_2)\qquad$ additivity test
If $f(x)$ is not linear and we put $x_1+x_2$ into the input, the output does not come out as the sum of $f(x_1)+f(x_2)$. Instead, the output can be a complicated mash-up of the two inputs. Imagine we build a non-linear function that squares its input, a “squarer.”
$f(x) = x^2$
If $x = 3$, then $f(3) = 9$. If $x = 0.5$, then $f(0.5) = 0.25$.
Now imagine we have two separate inputs, $a = 2$ and $b = 3$. We add them together and put the sum into our “squarer”. What comes out?
$f(2+3) = (2+3)^2 = 25$
If we put in the two inputs individually, what comes out?
$f(2) + f(3) = 2^2 + 3^2 = 4 + 9 = 13 \ne 25$
With these values the squarer does not pass the additivity test.
With symbols,
$f(a+b) = (a+b)^2 = a^2+2ab+b^2$
$f(a) + f(b) = a^2 + b^2$
The $f(a+b)$ output includes that middle term, $2ab$. The $2ab$ term is a mixture of the input signals. The extra $2ab$ term is why this squarer function does not have the additivity property.
Let’s do the numerical example again. See how the $2ab = 12$ term messes up the additivity property?
$f(2+3) = 2^2 + 2(2\cdot 3) + 3^2 = 4 + 12 + 9 = 25$
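The same kind of spot check works for additivity; the squarer's $2ab$ cross term shows up as exactly the mismatch. (Again a sketch with made-up names.)

    def additivity_gap(f, a, b):
        """f(a + b) - (f(a) + f(b)); zero when f is additive."""
        return f(a + b) - (f(a) + f(b))

    amp = lambda x: 3 * x          # a linear amplifier with m = 3
    squarer = lambda x: x**2

    print(additivity_gap(amp, 2, 3))      # 0  -- no interaction between the inputs
    print(additivity_gap(squarer, 2, 3))  # 12 -- exactly the 2ab cross term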
Superposition
The scaling and additivity properties of linear functions together are called the principle of superposition. It is the basis of a circuit analysis technique with the same name.
Superposition is also put to brilliant use in the Mesh current method. It is used in many other engineering areas. Superposition might not seem like a big deal, but it is a huge deal for electronics systems, especially signal processing.
At its heart, superposition tells us we can take a complex signal (the sum of many simple things) and break it up into its component pieces. We put the simple inputs through the function one by one and add up the outputs. It's a way to avoid the gnarly math of analyzing the original complex signal and still get the right answer.
In the next article, Linearity of electronic components, we test to see if our favorite electronic components, $\text{R, L, and C}$ are linear functions.
Summary
A function is linear if it has these properties,
Homogeneity (scaling): $f(ax) = af(x)$
Additivity: $f(x_1+x_2) = f(x_1) + f(x_2)$
These properties together are called the principle of superposition.
|
Maximal ≠ Maximum!
Suffixes are important!
Did you know that the words "maximal" and "maximum" generally do NOT mean the same thing in mathematics? It wasn't until I had to think about Zorn's Lemma in the context of maximal ideals that I actually thought about this, but more on that in a moment. Let's start by comparing the definitions:
Do you see the difference? An element is a maximum if it is larger than every single element in the set, whereas an element is maximal if it is not smaller than any other element in the set (where "smaller" is determined by the partial order $\leq$). Yes, it's true that the* maximum also satisfies this property, i.e. every maximum element is also maximal. But the converse is not true: if an element is maximal, it may not be the maximum! Why? The key is that these definitions are made on a partially ordered set. Basically, partially ordered just means it makes sense to use the words "bigger" or "smaller" - we have a way to compare elements. In a totally ordered set ALL elements are comparable with each other. But in a partially ordered set SOME, but not necessarily all, elements can be compared. This means it's possible to have an element that is maximal yet fails to be the maximum because it cannot be compared with some elements. It's not too hard to see that when a set is totally ordered, "maximal = maximum."**
How about an example? Here's one I like from this scholarly site, which also gives an example of a minimal/minimum element (whose definitions are dual to those above).
Example
Consider the set $X = \{\,\{d,o\},\ \{d,o,g\},\ \{g,o,a,d\},\ \{o,a,f\}\,\}$ where the partial order is set inclusion, $\subseteq$. Then
$\{d,o\}$ is minimal because $\{d,o\}\not\supsetneq x$ for every $x\in X$, i.e. there isn't a single element in $X$ that is "smaller" than $\{d,o\}$.
$\{g,o,a,d\}$ is maximal because $\{g,o,a,d\}\not\subsetneq x$ for every $x\in X$, i.e. there isn't a single element in $X$ that is "larger" than $\{g,o,a,d\}$.
$\{o,a,f\}$ is both minimal and maximal because $\{o,a,f\}\not\supsetneq x$ and $\{o,a,f\}\not\subsetneq x$ for every $x\in X$.
$\{d,o,g\}$ is neither minimal nor maximal because there is an $x\in X$ with $x\subsetneq \{d,o,g\}$, namely $x=\{d,o\}$, and there is an $x\in X$ with $\{d,o,g\}\subsetneq x$, namely $x=\{g,o,a,d\}$.
$X$ has NEITHER a maximum NOR a minimum because there is no $M\in X$ such that $x\subseteq M$ for every $x\in X$, and there is no $m\in X$ such that $m\subseteq x$ for every $x\in X$.
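Since the order here is just set inclusion, these claims are easy to confirm by brute force. Below is a small Python sketch (the helper names are my own) that recovers the same answers:

```python
# The poset from the example: four sets of letters, ordered by inclusion.
X = [frozenset("do"), frozenset("dog"), frozenset("goad"), frozenset("oaf")]

def is_maximal(s, X):
    # Maximal: no element of X strictly contains s.
    return not any(s < t for t in X)

def is_minimal(s, X):
    # Minimal: no element of X is strictly contained in s.
    return not any(t < s for t in X)

def maximum(X):
    # The maximum (if it exists) contains every element of X.
    return next((s for s in X if all(t <= s for t in X)), None)

print([set(s) for s in X if is_maximal(s, X)])  # {'g','o','a','d'} and {'o','a','f'}
print([set(s) for s in X if is_minimal(s, X)])  # {'d','o'} and {'o','a','f'}
print(maximum(X))                               # None -- no maximum exists
```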
Let's now relate our discussion above to ring theory. One defines an ideal $M$ in a ring $R$ to be a maximal ideal if $M\neq R$ and the only ideals containing $M$ are $M$ and $R$ itself, i.e. if $I\trianglelefteq R$ is an ideal such that $M\subseteq I \subseteq R$, then we must have either $I=M$ or $I=R$.
Not surprisingly, this coincides with the definition of maximality above: we simply let $X$ be the set of all proper ideals in the ring $R$, endowed with the partial order of inclusion $\subseteq$. The only difference is that here the containing ideal $I$ is also allowed to be the whole ring, which is why the definition carries the second option $I=R$.
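As a concrete illustration (this example is mine, not from the post), the proper ideals of $\mathbb{Z}/12\mathbb{Z}$ form such a poset, and a few lines of Python can pick out its maximal elements by brute force:

```python
# Every ideal of Z/nZ is dZ/nZ for some divisor d of n.
n = 12
ideals = {d: frozenset(range(0, n, d)) for d in range(1, n + 1) if n % d == 0}
proper = {d: I for d, I in ideals.items() if d != 1}  # drop the whole ring (d = 1)

def is_maximal(I, poset):
    # Maximal: no other proper ideal strictly contains I.
    return not any(I < J for J in poset.values())

print([d for d, I in proper.items() if is_maximal(I, proper)])  # [2, 3]
```

The maximal ideals turn out to be $(2)$ and $(3)$; neither one contains the other, so the poset of proper ideals of $\mathbb{Z}/12\mathbb{Z}$ has two maximal elements but no maximum.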
I think a good way to see maximal ideals in action is in the proof of this result:
As a final remark, the notions of "a maximal element" and "an upper bound" come together in Zorn's Lemma, which is needed to prove that every proper ideal in a ring is contained in a maximal ideal. I should mention that an upper bound $B$ on a partially ordered set (a.k.a. a "poset") has the same definition as the maximum EXCEPT that $B$ is not required to be inside the set. More precisely, we define an upper bound on a subset $Y$ of $X$ to be an element $B\in X$ such that $y\leq B$ for every $y\in Y$.
So here's the deal with Zorn's Lemma: It's not too hard to prove that every finite poset has a maximal element. But what if we don't know if the given poset is finite? Or what happens if it's infinite? How can we tell if it has a maximal element? Zorn's Lemma answers that question:
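One standard formulation of the lemma: if $(X,\leq)$ is a nonempty partially ordered set in which every chain (i.e. every totally ordered subset) has an upper bound in $X$, then $X$ contains at least one maximal element.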
As I mentioned above, it's this result which is needed to prove that every proper ideal is contained in a maximal ideal***. If you'd like to see the proof, I've typed it up in a separate PDF here. That statement in turn implies a weaker one, Krull's Theorem (1929), which says that every non-zero ring with unity contains a maximal ideal.
Footnotes
*One can easily show that if a set has a maximum it must be unique, hence THE maximum.
** Here's the proof: Let $(X,\leq)$ be a totally ordered set and let $m\in X$ be a maximal element. It suffices to show $m$ is the maximum. Let $x\in X$. Since $X$ is totally ordered, either $m\leq x$ or $x\leq m$. In the first case, $m=x$ by definition of maximal, so $x\leq m$ holds there too. In either case we have $x\leq m$. Since $x$ was arbitrary, $m$ is the maximum.
*** Note this is NOT the same as saying that every maximal ideal contains all the proper ideals in a ring! Remember, maximal $\neq$ maximum!!
|