Every Bounded Sequence in a Hilbert Space has a Weakly Convergent Subsequence
Theorem 1: Let $H$ be a Hilbert space. Then every bounded sequence in $H$ has a weakly convergent subsequence. Proof: Let $(h_n)$ be a bounded sequence in $H$. Let:
\begin{align} \quad H_0 = \mathrm{cl} (\mathrm{span} (h_1, h_2, ...)) \end{align}
Then $H_0$ is separable as the set of all finite linear combinations of points in $(h_n)$ with rational coefficients is a countable and dense subset of $H_0$. For each $n \in \mathbb{N}$ let $f_n : H_0 \to \mathbb{R}$ be defined for all $h \in H_0$ by:
\begin{align} \quad f_n(h) = \langle h, h_n \rangle \end{align}
Observe that each $f_n$ is a linear functional on $H_0$. Furthermore, $f_n$ is bounded with $\| f_n \| \leq \| h_n \|$ since for every $h \in H_0$, by The Cauchy-Schwarz Inequality for Inner Product Spaces:
\begin{align} \quad |f_n(h)| = |\langle h, h_n \rangle| \leq \| h \| \| h_n \| \end{align}
So $(f_n)$ is a bounded sequence of linear functionals on the separable space $H_0$. By Helly's Theorem we have that $(f_n)$ has a weak-* convergent subsequence, say $(f_{n_k})$ weak-* converges to some $f_0 \in H_0^*$. By The Riesz Representation Theorem for Hilbert Spaces there exists an $h_0 \in H_0$ such that $f_0(h) = \langle h, h_0 \rangle$ for all $h \in H_0$. So $(f_{n_k})$ weak-* converges to $f_0$, and so for every $h \in H_0$:
\begin{align} \quad \lim_{k \to \infty} f_{n_k}(h) &= \langle h, h_0 \rangle \\ \quad \lim_{k \to \infty} \langle h, h_{n_k} \rangle &= \langle h, h_0 \rangle \end{align}
Let $P$ be the orthogonal projection of $H$ onto $H_0$. For each $h \in H$ we have $(I - P)(h) \perp H_0$; since each $h_{n_k}$ and $h_0$ lie in $H_0$, for each $k \in \mathbb{N}$ we have that:
\begin{align} \quad \langle (I - P)(h), h_{n_k} \rangle = 0 = \langle (I - P)(h), h_0 \rangle \end{align}
Hence, writing $h = P(h) + (I - P)(h)$ and applying the previous limit to $P(h) \in H_0$, for each $h \in H$ we have that:
\begin{align} \quad \lim_{k \to \infty} \langle h_{n_k}, h \rangle = \langle h_0, h \rangle \end{align}
So from the characterization of Weak Convergence in Hilbert Spaces we have that $(h_{n_k})$ weakly converges to $h_0 \in H$. So every bounded sequence in $H$ has a weakly convergent subsequence. $\blacksquare$
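To see why weak (rather than norm) convergence is the best one can hope for, the classical example is an orthonormal basis $(e_n)$ of $\ell^2$: it is bounded, no subsequence converges in norm (since $\| e_n - e_m \| = \sqrt{2}$ for $n \neq m$), yet $\langle e_n, h \rangle \to 0$ for every fixed $h$. A minimal numerical sketch (the truncation dimension and the choice of $h$ are our own illustration):

```python
# Classical example (not part of the proof above): in ell^2 the orthonormal
# basis vectors e_n form a bounded sequence with no norm-convergent
# subsequence, yet <e_n, h> = h_n -> 0 for every fixed h, i.e. e_n -> 0 weakly.
# We truncate ell^2 to N coordinates and take h = (1, 1/2, 1/3, ...).

N = 2000
h = [1.0 / (k + 1) for k in range(N)]

def pair_with_e(n):
    """<e_n, h> for the n-th standard basis vector (0-indexed)."""
    return h[n]

# ||e_n - e_m|| = sqrt(2) for n != m, so no subsequence is Cauchy in norm,
# while the pairings with the fixed h shrink to 0:
print([round(pair_with_e(n), 4) for n in (0, 9, 99, 999)])
# [1.0, 0.1, 0.01, 0.001]
```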
For boolean-valued functions $f : \{-1,1\}^n \to \{-1,1\}$ the total influence has several additional interpretations. First, it is often referred to as the
average sensitivity of $f$ because of the following proposition: Proposition 27: For $f : \{-1,1\}^n \to \{-1,1\}$, \[ \mathbf{I}[f] = \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{sens}_f({\boldsymbol{x}})], \] where $\mathrm{sens}_f(x)$ is the sensitivity of $f$ at $x$, defined to be the number of pivotal coordinates for $f$ on input $x$. Proof: \begin{multline*} \mathbf{I}[f] = \sum_{i=1}^n \mathbf{Inf}_i[f] = \sum_{i=1}^n \mathop{\bf Pr}_{{\boldsymbol{x}}}[f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})] \\ = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[\boldsymbol{1}_{f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})}] = \mathop{\bf E}_{{\boldsymbol{x}}}\left[\sum_{i=1}^n \boldsymbol{1}_{f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})}\right] = \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{sens}_f({\boldsymbol{x}})]. \quad \Box \end{multline*}
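This proposition is easy to confirm by brute force on a small example such as $\mathrm{Maj}_3$ (the function and helper names below are our own illustration, not from the text):

```python
from itertools import product

# Numerical check of Proposition 27 for Maj_3: total influence
# (sum of per-coordinate influences) equals average sensitivity.

def maj3(x):
    return 1 if sum(x) > 0 else -1

def flip(x, i):
    return tuple(-v if j == i else v for j, v in enumerate(x))

points = list(product([-1, 1], repeat=3))

def sensitivity(x):
    """sens_f(x): number of pivotal coordinates of Maj_3 at x."""
    return sum(1 for i in range(3) if maj3(flip(x, i)) != maj3(x))

avg_sens = sum(sensitivity(x) for x in points) / len(points)

# I[f] = sum_i Pr[f(x) != f(x^{+i})], computed coordinate by coordinate
influence = sum(
    sum(1 for x in points if maj3(flip(x, i)) != maj3(x)) / len(points)
    for i in range(3)
)
print(avg_sens, influence)  # both equal 1.5 for Maj_3
```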
The total influence of $f : \{-1,1\}^n \to \{-1,1\}$ is also closely related to the size of its
edge boundary; from Fact 14 we deduce: Examples 29 (Recall Examples 15.): For boolean-valued functions $f : \{-1,1\}^n \to \{-1,1\}$ the total influence ranges between $0$ and $n$. It is minimized by the constant functions $\pm 1$ which have total influence $0$. It is maximized by the parity function $\chi_{[n]}$ and its negation which have total influence $n$; every coordinate is pivotal on every input for these functions. The dictator functions (and their negations) have total influence $1$. The total influence of $\mathrm{OR}_n$ and $\mathrm{AND}_n$ is very small: $n2^{1-n}$. On the other hand, the total influence of $\mathrm{Maj}_n$ is fairly large: roughly $\sqrt{2/\pi}\sqrt{n}$ for large $n$.
By virtue of Proposition 20 we have another interpretation for the total influence of
monotone functions:
This sum of the degree-$1$ Fourier coefficients has a natural interpretation in social choice:
Proposition 31: Let $f : \{-1,1\}^n \to \{-1,1\}$ be a voting rule for a $2$-candidate election. Given votes ${\boldsymbol{x}} = ({\boldsymbol{x}}_1, \dots, {\boldsymbol{x}}_n)$, let $\boldsymbol{w}$ be the number of votes which agree with the outcome of the election, $f({\boldsymbol{x}})$. Then \[ \mathop{\bf E}[\boldsymbol{w}] = \frac{n}{2} + \frac12 \sum_{i=1}^n \widehat{f}(i). \] Proof: By the formula for Fourier coefficients, \begin{equation} \label{eqn:deg-1-sum} \sum_{i=1}^n \widehat{f}(i) = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}}) {\boldsymbol{x}}_i] = \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}})({\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n)]. \end{equation} Now ${\boldsymbol{x}}_1 + \cdots + {\boldsymbol{x}}_n$ equals the difference between the number of votes for candidate $1$ and the number of votes for candidate $-1$. Hence $f({\boldsymbol{x}})({\boldsymbol{x}}_1 + \cdots + {\boldsymbol{x}}_n)$ equals the difference between the number of votes for the winner and the number of votes for the loser; i.e., $\boldsymbol{w} - (n-\boldsymbol{w}) = 2\boldsymbol{w} - n$. The result follows. $\Box$
Rousseau
[Rou62] suggested that the ideal voting rule is one which maximizes the number of votes which agree with the outcome. Here we show that the majority rule has this property (at least when $n$ is odd): Theorem 32: The unique maximizers of $\sum_{i=1}^n \widehat{f}(i)$ among all $f : \{-1,1\}^n \to \{-1,1\}$ are the majority functions. In particular, $\mathbf{I}[f] \leq \mathbf{I}[\mathrm{Maj}_n] = \sqrt{2/\pi}\sqrt{n} + O(n^{-1/2})$ for all monotone $f$. Proof: From \eqref{eqn:deg-1-sum}, \[ \sum_{i=1}^n \widehat{f}(i) = \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}})({\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n)] \leq \mathop{\bf E}_{{\boldsymbol{x}}}[|{\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n|], \] since $f({\boldsymbol{x}}) \in \{-1,1\}$ always. Equality holds if and only if $f(x) = \mathrm{sgn}(x_1 + \cdots + x_n)$ whenever $x_1 + \cdots + x_n \neq 0$. The second statement of the theorem follows from Proposition 30 and Exercise 18 in this chapter. $\Box$
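For small $n$ the uniqueness claim can be confirmed by exhaustive search; e.g., for $n = 3$ there are only $2^8 = 256$ Boolean functions, and exactly one attains the maximum (a brute-force sketch with our own helper names):

```python
from itertools import product

# Exhaustive check of Theorem 32 for n = 3: among all f: {-1,1}^3 -> {-1,1},
# the unique maximizer of sum_i fhat(i) = E[f(x)(x_1+x_2+x_3)] is Maj_3.

points = list(product([-1, 1], repeat=3))

def degree1_sum(values):
    # values: f's outputs listed in the order of `points`
    return sum(v * sum(x) for v, x in zip(values, points)) / len(points)

tables = list(product([-1, 1], repeat=len(points)))   # all 256 truth tables
best = max(degree1_sum(t) for t in tables)
maximizers = [t for t in tables if degree1_sum(t) == best]
maj_table = tuple(1 if sum(x) > 0 else -1 for x in points)

print(best, len(maximizers), maximizers[0] == maj_table)  # 1.5 1 True
```

The maximum value $1.5$ is exactly $\mathop{\bf E}[|{\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + {\boldsymbol{x}}_3|]$, consistent with the proof.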
Let’s now take a look at more analytic expressions for the total influence. By definition, if $f : \{-1,1\}^n \to {\mathbb R}$ then \begin{equation} \label{eqn:tinf-gradient} \mathbf{I}[f] = \sum_{i=1}^n \mathbf{Inf}_i[f] = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{D}_i f({\boldsymbol{x}})^2] = \mathop{\bf E}_{{\boldsymbol{x}}}\left[\sum_{i=1}^n \mathrm{D}_i f({\boldsymbol{x}})^2\right]. \end{equation} This motivates the following definition:
Definition 33: The (discrete) gradient operator $\nabla$ maps the function $f : \{-1,1\}^n \to {\mathbb R}$ to the function $\nabla f : \{-1,1\}^n \to {\mathbb R}^n$ defined by \[ \nabla f(x) = (\mathrm{D}_1 f(x), \mathrm{D}_2 f(x), \dots, \mathrm{D}_n f(x)). \]
Note that for $f : \{-1,1\}^n \to \{-1,1\}$ we have $\|\nabla f(x)\|_2^2 = \mathrm{sens}_f(x)$, where $\| \cdot \|_2$ is the usual Euclidean norm in ${\mathbb R}^n$. In general, from \eqref{eqn:tinf-gradient} we deduce:
An alternative analytic definition involves introducing the
Laplacian: Definition 35: The Laplacian operator $\mathrm{L}$ is the linear operator on functions $f : \{-1,1\}^n \to {\mathbb R}$ defined by $\mathrm{L} = \sum_{i=1}^n \mathrm{L}_i$.
In the exercises you are asked to verify the following:
$\displaystyle \mathrm{L} f (x) = (n/2)\bigl(f(x) - \mathop{\mathrm{avg}}_{i \in [n]} \{f(x^{\oplus i})\}\bigr)$;
$\displaystyle \mathrm{L} f (x) = f(x) \cdot \mathrm{sens}_f(x) \quad$ if $f : \{-1,1\}^n \to \{-1,1\}$;
$\displaystyle \mathrm{L} f = \sum_{S \subseteq [n]} |S|\,\widehat{f}(S)\,\chi_S$;
$\displaystyle \langle f, \mathrm{L} f \rangle = \mathbf{I}[f]$.
We can obtain a Fourier formula for the total influence of a function using Theorem 19; when we sum that theorem over all $i \in [n]$ the Fourier weight $\widehat{f}(S)^2$ is counted exactly $|S|$ times. Hence:
Theorem 37: For $f : \{-1,1\}^n \to {\mathbb R}$, \begin{equation} \label{eqn:total-influence-formula} \mathbf{I}[f] = \sum_{S \subseteq [n]} |S| \widehat{f}(S)^2 = \sum_{k=0}^n k \cdot \mathbf{W}^{k}[f]. \end{equation} For $f : \{-1,1\}^n \to \{-1,1\}$ we can express this using the spectral sample: \[ \mathbf{I}[f] = \mathop{\bf E}_{\boldsymbol{S} \sim \mathscr{S}_{f}}[|\boldsymbol{S}|]. \]
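This Fourier formula can be verified numerically against the combinatorial definition, again for $\mathrm{Maj}_3$ (helper names are ours); the same computation also exhibits $\mathop{\bf Var}[f] \leq \mathbf{I}[f]$:

```python
from itertools import product, combinations
from math import prod

# Check that I[f] computed from pivotal coordinates equals
# sum_S |S| * fhat(S)^2, and that Var[f] <= I[f], for f = Maj_3.

points = list(product([-1, 1], repeat=3))
f = {x: (1 if sum(x) > 0 else -1) for x in points}

def fhat(S):
    """Fourier coefficient: E[f(x) * prod_{i in S} x_i]."""
    return sum(f[x] * prod(x[i] for i in S) for x in points) / len(points)

def flip(x, i):
    return tuple(-v if j == i else v for j, v in enumerate(x))

subsets = [S for r in range(4) for S in combinations(range(3), r)]
I_fourier = sum(len(S) * fhat(S) ** 2 for S in subsets)
I_sens = sum(sum(1 for i in range(3) if f[flip(x, i)] != f[x])
             for x in points) / len(points)
variance = 1 - fhat(()) ** 2        # E[f^2] = 1 since f is +-1 valued

print(I_fourier, I_sens, variance)  # 1.5 1.5 1.0
```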
Thus the total influence of $f : \{-1,1\}^n \to \{-1,1\}$ also measures the average “height” or degree of its Fourier weights.
Finally, from Proposition 1.13 we have $\mathop{\bf Var}[f] = \sum_{k > 0} \mathbf{W}^{k}[f]$; comparing this with \eqref{eqn:total-influence-formula} we immediately deduce a simple but important fact called the
Poincaré inequality. Poincaré Inequality: For any $f : \{-1,1\}^n \to {\mathbb R}$, $\mathop{\bf Var}[f] \leq \mathbf{I}[f]$.
Equality holds in the Poincaré inequality if and only if all of $f$’s Fourier weight is at degrees $0$ and $1$; i.e., $\mathbf{W}^{\leq 1}[f] = \mathop{\bf E}[f^2]$. For boolean-valued $f : \{-1,1\}^n \to \{-1,1\}$, Exercise 1.19 tells us this can only occur if $f = \pm 1$ or $f = \pm \chi_i$ for some $i$.
For boolean-valued $f : \{-1,1\}^n \to \{-1,1\}$, the Poincaré inequality can be viewed as an (edge-)isoperimetric inequality, or
(edge-)expansion bound, for the Hamming cube. If we think of $f$ as the indicator function for a set $A \subseteq \{-1,1\}^n$ of “measure” $\alpha = |A|/2^n$, then $\mathop{\bf Var}[f] = 4\alpha(1-\alpha)$ (Fact 1.14) whereas $\mathbf{I}[f]$ is $n$ times the (fractional) size of $A$’s edge boundary. In particular, the Poincaré inequality says that subsets $A \subseteq \{-1,1\}^n$ of measure $\alpha = 1/2$ must have edge boundary at least as large as those of the dictator sets.
For $\alpha \notin \{0, 1/2, 1\}$ the Poincaré inequality is not sharp as an edge-isoperimetric inequality for the Hamming cube; for small $\alpha$ even the asymptotic dependence is not optimal. Precisely optimal edge-isoperimetric results (and also vertex-isoperimetric results) are known for the Hamming cube. The following simplified theorem is optimal for $\alpha$ of the form $2^{-i}$:
This result illustrates an important recurring concept in the analysis of boolean functions: the Hamming cube is a “small-set expander”. Roughly speaking, this is the idea that “small” subsets $A \subseteq \{-1,1\}^n$ have unusually large “boundary size”.
State Gauss' reciprocity law for two distinct odd primes $p$ and $q$. Hence, given an odd prime $p$, prove carefully that the following statements are equivalent:
(i) $p\neq5$ and there is an integer $k$ such that $p$ divides $5-k^2$; (ii) $p$ is congruent to $\pm1\mod{5}$.
I am asked to solve the above question. I have stated Gauss's reciprocity law below.
For distinct odd primes $p$ and $q$, the theorem states: $$\operatorname{Leg}(q,p)=\begin{cases}\operatorname{Leg}(p,q)&\text{if $p\equiv1$ or $q\equiv1\mod4$},\\-\operatorname{Leg}(p,q)&\text{if $p\equiv-1$ and $q\equiv-1\mod4$}.\end{cases}$$
I am stuck when I try to do the second bit. Can anyone give me some hints on how to start with the first statement?
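For what it's worth, a quick computational sanity check of the equivalence (my own script, not a proof) may help convince you the statement is right before you try to prove it:

```python
# Sanity check (not a proof): for odd primes p, "p != 5 and p divides
# 5 - k^2 for some integer k" should hold exactly when p = +-1 (mod 5).

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def stmt1(p):
    # p | 5 - k^2 for some k, i.e. 5 is a square mod p
    return p != 5 and any((5 - k * k) % p == 0 for k in range(p))

def stmt2(p):
    return p % 5 in (1, 4)   # p congruent to +-1 mod 5

odd_primes = [p for p in range(3, 200, 2) if is_prime(p)]
print(all(stmt1(p) == stmt2(p) for p in odd_primes))  # True
```

Note that $5 \equiv 1 \pmod 4$, which is exactly the case of reciprocity where $\operatorname{Leg}(5,p)=\operatorname{Leg}(p,5)$, so statement (i) reduces to a condition on $p \bmod 5$.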
The spirit of this question comes from "Ordinary Monte Carlo", also known as "good old-fashioned Monte Carlo"
Suppose I have a random variable $X$, with
$$\mu := E[X]\\ \sigma^2:=Var[X] $$
Both are unknown values, because the probability distribution function of $X$ is unknown (or the computations are intractable).
Either way, suppose we can somehow
simulate $n$ draws $X_1,X_2,\dots,X_n$ (these are independent and identically distributed) from the distribution of $X$. Let us define the sample parameters
$$ \hat{\mu}_n := \frac{1}{n}\sum_{i=1}^{n}X_i\\ \hat{\sigma}_n^2 : = \frac{1}{n}\sum_{i=1}^{n}(X_i-\hat{\mu}_n)^2 $$
According to the Central Limit Theorem, as $n$ becomes very large, the sample mean $\hat{\mu}_n$ will closely obey a normal distribution
$$ \hat{\mu}_n \sim N\left(\mu,\frac{\sigma^2}{n}\right) $$
Before we can calculate confidence intervals, the author states that since we do not know $\sigma^2$, we will make the approximation $\sigma^2 \approx \hat{\sigma}_n^2$, or more precisely use the unbiased estimate $\sigma^2 \approx \frac{n}{n-1}\hat{\sigma}_n^2$, and we can proceed from there using standard techniques.
Now, while the author mentions the importance of $n$ sufficiently large (
number of draws per simulation), there is no mention about the number of simulations and its effect on our confidence.
Is there any advantage of running $k$ simulations (performing $n$ draws each time) to obtain several sample means $\hat{\mu}_{n,1}, \hat{\mu}_{n,2}, \dots \hat{\mu}_{n,k}$, and then use the means of the means to improve our estimates and confidence regarding the unknown $\mu,\sigma$ of $X$?
Or does it suffice to just draw $n$ samples from $X$ in a single simulation, as long as $n$ is sufficiently large?
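A small simulation may clarify the question (a sketch under stated assumptions: Exponential(1), with $\mu = \sigma^2 = 1$, stands in for the unknown distribution of $X$). Both estimators below are averages of $nk$ i.i.d. draws, so both have variance $\sigma^2/(nk)$: averaging $k$ batch means gains nothing over one pooled run of the same total size.

```python
import random
import statistics

# Compare k batches of n draws against one pooled run of n*k draws.
# Exponential(1) (mu = sigma^2 = 1) stands in for the unknown X.
random.seed(0)
n, k = 10_000, 10

batch_means = []
for _ in range(k):
    draws = [random.expovariate(1.0) for _ in range(n)]
    batch_means.append(statistics.fmean(draws))
mean_of_means = statistics.fmean(batch_means)       # mean of the k batch means

pooled_mean = statistics.fmean(
    random.expovariate(1.0) for _ in range(n * k))  # one run of n*k draws

# Both estimators use n*k i.i.d. draws, hence both ~ N(mu, sigma^2/(n*k)).
print(round(mean_of_means, 3), round(pooled_mean, 3))
```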
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
How to Use Circular Ports in the RF Module
The
Port boundary condition in the RF Module, an add-on to the COMSOL Multiphysics® software, can be used to launch and absorb electromagnetic energy. We explain how to set up a circular waveguide port and review the analytical solution that defines the port mode field. We also analyze a polarized circular port for power transmission with respect to port orientation, and then extend the model to include higher-order modes.

Circular Port Reference Axis for Describing Degenerate Modes
To simulate wave propagation in a circular waveguide, we need to set up the excitation and termination via boundary conditions that describe the mode field. However, circular ports exhibit degeneracy, which yields uncertainty in the orientation of the mode field. Let’s begin our discussion on how we can use the
Circular Port Reference Axis subfeature to suppress angular degeneracy in a circular waveguide port. The dominant TE11 mode exhibits degeneracy. To fix the orientation of the mode field, we use the Circular Port Reference Axis subfeature.
First, we run a
Mode Analysis study to find the resonant modes on a simple circle in 2D, which represents our port boundary. Among the modes returned by default, some are simple rotations of the exact same TE11 mode shape about the origin. So, how can we determine which solution is correct? All of them are equally accurate solutions to the equations describing the transverse field components of a circular waveguide (Ref. 1):
E_{\rho}=\frac{-j\omega\mu m}{k_{c}^{2}\rho}(A\cos m\phi - B\sin m\phi)\,J_{m}(k_{c}\rho)\,e^{-j\beta z}

E_{\phi}=\frac{j\omega\mu}{k_{c}}(A\sin m\phi + B\cos m\phi)\,J'_{m}(k_{c}\rho)\,e^{-j\beta z}

H_{\rho}=\frac{-j\beta}{k_{c}}(A\sin m\phi + B\cos m\phi)\,J'_{m}(k_{c}\rho)\,e^{-j\beta z}

H_{\phi}=\frac{-j\beta m}{k_{c}^{2}\rho}(A\cos m\phi - B\sin m\phi)\,J_{m}(k_{c}\rho)\,e^{-j\beta z}
Here, m represents the value of the first mode index, J_m is the Bessel function of the first kind, and J'_m is its derivative. The argument contains the cutoff wavenumber k_c = χ'_mn/a.
The seemingly identical modes arise because circular ports are degenerate. This means that an infinite number of rotations of a given mode field can exist on the same boundary, which can be problematic for describing the orientation of port mode fields with respect to one another. We therefore define a
Circular Port Reference Axis, which is available as a subfeature to the Port node. This feature allows us to select two vertices on the port circumference that define the orientation of fields on the port boundary. The mode field is then defined with respect to this reference axis, and any uncertainty in its orientation is resolved. We may now extend our study of circular ports to 3D. TE11 mode propagating through a circular waveguide. The animation of the contour plot shows the z-component of the E field, created with the full dynamic data extension sequence type. The arrow plot describes the electric mode field on the port boundary.

Modeling a Polarized Circular Waveguide
Let’s consider the Polarized Circular Ports model, available in the RF Application Gallery. This tutorial demonstrates how to excite and terminate a port with degenerate port modes. The structure under study is a straight, circular waveguide surrounded by perfectly conducting walls.
As with any COMSOL Multiphysics model, we start by building the geometry, assigning materials, and then setting up the physics. Our structure here is a simple cylinder filled with air. We model the metallic boundaries on the exterior using the
Perfect Electric Conductor boundary condition. Since this condition assumes a lossless conductor, there is no need to assign a material to these boundaries. Next, we add a Port boundary condition and select the circular boundary at one end of the waveguide. The outer walls of the waveguide are modeled as Perfect Electric Conductor boundaries. Streamlines of the electric fields are shown in red, and magnetic fields in blue. Arrow plots (black) show the direction of power flow from the excitation port to listener ports. Solid lines represent the reference axis for ports 1 and 2 on the near end, and ports 3 and 4 on the far end.
In the Port 1 settings, we start by setting the geometry type to Circular. When using a circular port, the mode type (TE or TM) and mode number must be specified. TE and TM stand for transverse electric and transverse magnetic, respectively; both of these mode types are supported by circular ports. Circular mode numbers are described by two indices, m and n, which are used in the transverse magnetic and electric field equations shown above.
We are interested in the dominant TE11 mode. Therefore, in the Port 1 settings, we set the mode type to TE and the mode number to 11. We select two opposite vertices on the port circumference in the Circular Port Reference Axis subfeature. Now, the important question becomes: How can we terminate this mode at the other end of the waveguide?
Any incident field can be completely terminated by two mutually orthogonal ports with the same mode field shape. We can verify this by expanding the Polarized Circular Waveguide model to study transmittance when the listener ports’ reference axes are rotated with respect to that of the excitation port. In this model, we set up a total of four ports, only one of which is an excitation port (Port 1). Together, mutually orthogonal ports 1 and 2 receive all of the reflected energy, while mutually orthogonal ports 3 and 4 receive all of the transmitted energy.
We run a Parametric Sweep where the reference axes of ports 3 and 4 are rotated together by an angle theta about the origin. This rotation angle is plotted on the x-axis in the plot below. S-parameters are available as built-in expressions, ready for evaluation in postprocessing. The power ratio transmitted to ports 3 and 4 can be evaluated as the magnitude squared of S31 and S41, respectively. We used this relation to evaluate the transmittance at each angle spanned in the plot below. At any angle, the transmittance values sum to one, indicating nearly zero reflection and therefore ideal termination. The reference axis of the first receiving port can be chosen freely as long as the reference axis of the second receiving port is a 90-degree rotation of the first about the waveguide axis.

An Important Consideration: Cutoff Frequency
Anytime we wish to excite a waveguide or port, it is important to consider the cutoff frequency of the structure — that is, the lowest frequency for which a particular mode can propagate. This is true not only for the dominant mode but also higher-order modes. Rectangular, circular, and coaxial ports each have analytical expressions for cutoff frequency. This value is dependent on the size of the structure, the medium inside it, as well as the mode number. Below are the equations for cutoff frequency for both TE and TM modes in a circular waveguide:
f_c=\frac{\chi'_{mn}}{2\pi a \sqrt{\mu\epsilon}} for TE modes
f_c=\frac{\chi_{mn}}{2\pi a \sqrt{\mu\epsilon}} for TM modes
Here,
a is the radius, μ is the permeability, and ε is the permittivity.
The values of χ'_mn are given by the zeros of the derivative of the Bessel function J_m(x); these are needed to determine cutoffs for TE modes. The values of χ_mn are given by the zeros of the Bessel function itself; these are needed to determine cutoffs for TM modes. Luckily, the zeros of the Bessel function and its first derivative are well known, and some of them are listed here.
You may notice that the values in the m = 0 column of the χ'_mn table are identical to the values in the m = 1 column of the χ_mn table. Therefore, the TE_0n and TM_1n modes have identical cutoff frequencies and are referred to as degenerate modes.
Mode Index | m = 0 | m = 1 | m = 2 | m = 3 | m = 4 | m = 5
n = 1 | 2.4049 | 3.8318 | 5.1357 | 6.3802 | 7.5884 | 8.7715
n = 2 | 5.5201 | 7.0156 | 8.4173 | 9.7610 | 11.0647 | 12.3386
n = 3 | 8.6537 | 10.1735 | 11.6199 | 13.0152 | 14.3726 | 15.7002
Zeros χ_mn of the first-kind Bessel function J_m(x), used for TM modes. (Ref. 2)
Mode Index | m = 0 | m = 1 | m = 2 | m = 3 | m = 4 | m = 5
n = 1 | 3.8318 | 1.8412 | 3.0542 | 4.2012 | 5.3175 | 6.4155
n = 2 | 7.0156 | 5.3315 | 6.7062 | 8.0153 | 9.2824 | 10.5199
n = 3 | 10.1735 | 8.5363 | 9.9695 | 11.3459 | 12.6819 | 13.9872
Zeros χ'_mn of the derivative of the first-kind Bessel function J'_m(x), used for TE modes. (Ref. 2)
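As a concrete example, the cutoff formulas and the tabulated zeros above can be combined in a few lines (the 1 cm air-filled guide radius is our illustrative choice, not from the post):

```python
import math

# Cutoff-frequency calculator for a circular waveguide, using a few of the
# Bessel zeros tabulated above. For an air-filled guide, 1/sqrt(mu*eps) ~ c0.
c0 = 299_792_458.0           # speed of light in vacuum, m/s

chi_p = {("TE", 1, 1): 1.8412, ("TE", 2, 1): 3.0542, ("TE", 0, 1): 3.8318}
chi   = {("TM", 0, 1): 2.4049, ("TM", 1, 1): 3.8318}

def f_cutoff(mode, m, n, a):
    """f_c = chi/(2*pi*a*sqrt(mu*eps)) -> chi * c0 / (2*pi*a) in air."""
    z = chi_p[(mode, m, n)] if mode == "TE" else chi[(mode, m, n)]
    return z * c0 / (2 * math.pi * a)

a = 0.01  # radius in meters (1 cm)
for key in (("TE", 1, 1), ("TM", 0, 1), ("TE", 2, 1)):
    print(key, round(f_cutoff(*key, a) / 1e9, 2), "GHz")
# TE11 is the dominant mode: lowest cutoff, about 8.79 GHz for a = 1 cm
```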
The number of modes that can exist in a given waveguide increases with frequency. Below, we show the first 24 modes of a circular port in the order of increasing cutoff frequency.
TE11, TM01, TE21, TM11, TE01, TE31, TM21, TE41, TE12, TM02, TM31, TE51, TE22, TE02, TM12, TE61, TM41, TE32, TM22, TE13, TE71, TM03, TM51, TE42. The first 24 modes of a circular port are shown. For TE modes, a surface plot of the electric field norm and an arrow plot of the magnetic fields are displayed. For TM modes, a surface plot of the magnetic field norm and an arrow plot of the electric fields are displayed.

Automate the Data Collection Process Using Methods
To produce the figure above, we used a powerful tool called methods to accelerate the modeling workflow. A method contains a series of commands. When called, these tasks are run automatically within the software. Here, we use a method to automate and expedite the process of producing the field distributions for the first 24 modes of a circular port. The method performs the following actions sequentially:

1. Calculate the cutoff frequency for a given mode
2. Enter the mode numbers in the Port node settings
3. Run the model at a frequency value just above the cutoff and store the solution at the port boundary
The method loops through this process for each mode, using its respective χ_mn/χ'_mn constant (the values of which are entered in the Parameters node). The zeroes of the Bessel function for each mode number are entered as parameters (left). These parameters are used in method 1 (shown below) to compute the solution sets for each mode number (right), just above the cutoff frequency.

Closing Remarks
This blog post has outlined how to use circular ports for waveguide excitation and termination. While only circular ports are discussed here, remember that cutoff frequency must be considered when using other port types as well. The only difference for rectangular and coaxial ports is their respective cutoff frequency equations, which are still a function of the mode number and port dimensions. You can visualize the mode shapes for these port types efficiently by implementing similar methods.
If you have a question about modeling circular ports, please contact COMSOL Support.
Next Step
Try modeling the polarized circular waveguide featured in this blog post by clicking the button below, which will take you to the Application Gallery. Once there, you can log into your COMSOL Access account and, with a valid software license, download the MPH-file.
References
1. David M. Pozar, Microwave Engineering, John Wiley & Sons, 1998.
2. Constantine A. Balanis, Advanced Engineering Electromagnetics, John Wiley & Sons, 1999.
In mathematics, the
Cauchy–Schwarz inequality (the name of Bunyakovsky is sometimes added), is a useful inequality encountered in many different settings, such as linear algebra, analysis, probability theory, and other areas. It is considered to be one of the most important inequalities in all of mathematics. [1] It has a number of generalizations, among them Hölder's inequality.
The inequality for sums was published by Augustin-Louis Cauchy (1821), while the corresponding inequality for integrals was first stated
[1] byViktor Bunyakovsky (1859) and rediscovered by Hermann Amandus Schwarz (1888). Statement of the inequality
The Cauchy–Schwarz inequality states that for all vectors x and y of an inner product space it is true that

|\langle x, y \rangle|^2 \leq \langle x, x \rangle \cdot \langle y, y \rangle ,

where \langle \cdot , \cdot \rangle is the inner product, also known as the dot product. Equivalently, by taking the square root of both sides, and referring to the norms of the vectors, the inequality is written as

|\langle x, y \rangle| \leq \|x\| \, \|y\| .
Moreover, the two sides are equal if and only if
x and y are linearly dependent (or, in a geometrical sense, they are parallel or one of the vectors' magnitude is zero).
If x and y have an imaginary component, the inner product is the standard inner product and the bar notation is used for complex conjugation, then the inequality may be restated more explicitly as

\left| \sum_{i=1}^n x_i \bar{y}_i \right|^2 \leq \sum_{j=1}^n |x_j|^2 \sum_{k=1}^n |y_k|^2 .

When viewed in this way the numbers x_1, ..., x_n and y_1, ..., y_n are the components of x and y with respect to an orthonormal basis of V.

Even more compactly written:

|\langle x, y \rangle| \leq \|x\| \, \|y\| .
Equality holds if and only if
x and y are linearly dependent, that is, one is a scalar multiple of the other (which includes the case when one or both are zero).
The finite-dimensional case of this inequality for real vectors was proven by Cauchy in 1821, and in 1859 Cauchy's student Bunyakovsky noted that by taking limits one can obtain an integral form of Cauchy's inequality. The general result for an inner product space was obtained by Schwarz in the year 1888.
Proof
Let u, v be arbitrary vectors in a vector space V over F with an inner product, where F is the field of real or complex numbers. We prove the inequality

|\langle u, v \rangle| \leq \left\|u\right\| \left\|v\right\|, \,
and the fact that equality holds only when
u and v are linearly dependent (the fact that conversely one has equality if u and v are linearly dependent is immediate from the properties of the inner product).
If
v = 0 it is clear that we have equality, and in this case u and v are also linearly dependent (regardless of u). We henceforth assume that v is nonzero. Let

z = u - \frac{\langle u, v \rangle}{\langle v, v \rangle} v .
Then, by linearity of the inner product in its first argument, one has

\langle z, v \rangle = \langle u, v \rangle - \frac{\langle u, v \rangle}{\langle v, v \rangle} \langle v, v \rangle = 0 ,

i.e., z is a vector orthogonal to the vector v (indeed, z is the projection of u onto the hyperplane orthogonal to v). We can thus apply the Pythagorean theorem to

u = \frac{\langle u, v \rangle}{\langle v, v \rangle} v + z ,

which gives

\|u\|^2 = \frac{|\langle u, v \rangle|^2}{\|v\|^2} + \|z\|^2 \geq \frac{|\langle u, v \rangle|^2}{\|v\|^2} ,
and, after multiplication by \|v\|^2, the Cauchy–Schwarz inequality. Moreover, if the relation '≥' in the above expression is actually an equality, then \|z\|^2 = 0 and hence z = 0; the definition of z then establishes a relation of linear dependence between u and v. This establishes the theorem.

Notable special cases

R^n
In Euclidean space R^n with the standard inner product, the Cauchy–Schwarz inequality is

\left( \sum_{i=1}^n x_i y_i \right)^2 \leq \left( \sum_{i=1}^n x_i^2 \right) \left( \sum_{i=1}^n y_i^2 \right) .
To prove this form of the inequality, consider the following quadratic polynomial in z:

(x_1 z + y_1)^2 + \cdots + (x_n z + y_n)^2 = \left( \sum_{i=1}^n x_i^2 \right) z^2 + 2 \left( \sum_{i=1}^n x_i y_i \right) z + \sum_{i=1}^n y_i^2 .

Since it is nonnegative, it has at most one real root in z, whence its discriminant is less than or equal to zero, that is,

\left( \sum_{i=1}^n x_i y_i \right)^2 - \left( \sum_{i=1}^n x_i^2 \right) \left( \sum_{i=1}^n y_i^2 \right) \leq 0 ,
which yields the Cauchy–Schwarz inequality.
An equivalent proof for R^n starts with the summation below:

\frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n (x_i y_j - x_j y_i)^2 \geq 0 .

Expanding the brackets we have:

\frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n (x_i y_j - x_j y_i)^2 = \frac{1}{2} \left( \sum_{i=1}^n x_i^2 \sum_{j=1}^n y_j^2 + \sum_{j=1}^n x_j^2 \sum_{i=1}^n y_i^2 - 2 \sum_{i=1}^n x_i y_i \sum_{j=1}^n x_j y_j \right) ;

collecting together identical terms (albeit with different summation indices) we find:

\frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n (x_i y_j - x_j y_i)^2 = \sum_{i=1}^n x_i^2 \sum_{i=1}^n y_i^2 - \left( \sum_{i=1}^n x_i y_i \right)^2 .
Because the left-hand side of the equation is a sum of the squares of real numbers it is greater than or equal to zero, thus:
\sum_{i=1}^n x_i^2 \sum_{i=1}^n y_i^2 - \left( \sum_{i=1}^n x_i y_i \right)^2 \geq 0. This form is usually used when solving school-level math problems.
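The identity behind this proof (a summation form of Lagrange's identity) is easy to verify numerically for sample vectors (our own test values):

```python
import random

# Numerical check:
# sum_i x_i^2 * sum_i y_i^2 - (sum_i x_i y_i)^2
#   = (1/2) * sum_{i,j} (x_i y_j - x_j y_i)^2  >= 0.
random.seed(1)
x = [random.uniform(-1, 1) for _ in range(5)]
y = [random.uniform(-1, 1) for _ in range(5)]

lhs = (sum(a * a for a in x) * sum(b * b for b in y)
       - sum(a * b for a, b in zip(x, y)) ** 2)
rhs = 0.5 * sum((x[i] * y[j] - x[j] * y[i]) ** 2
                for i in range(5) for j in range(5))

print(abs(lhs - rhs) < 1e-12, lhs >= 0)  # True True
```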
Yet another approach when n ≥ 2 (n = 1 is trivial) is to consider the plane containing x and y. More precisely, recoordinatize R^n with any orthonormal basis whose first two vectors span a subspace containing x and y. In this basis only the first two coordinates of x and y are nonzero, and the inequality reduces to the algebra of the dot product in the plane, which is related to the angle between two vectors, from which we obtain the inequality:
|x \cdot y| = \left\|x\right\| \left\|y\right\| |\cos \theta| \leq \left\|x\right\| \left\|y\right\|.
When n = 3 the Cauchy–Schwarz inequality can also be deduced from Lagrange's identity, which takes the form
\langle x, x \rangle \langle y, y \rangle = |\langle x, y \rangle|^2 + |x \times y|^2,
from which the Cauchy–Schwarz inequality readily follows.
Another proof of the general case for n can be done by using the technique used to prove Inequality of arithmetic and geometric means.
L^2

For the inner product space of square-integrable complex-valued functions, one has
\left| \int f(x) \overline{g(x)} \, dx \right|^2 \leq \int |f(x)|^2 \, dx \cdot \int |g(x)|^2 \, dx.
A generalization of this is the Hölder inequality.
Applications
The triangle inequality for the inner product is often shown as a consequence of the Cauchy–Schwarz inequality, as follows: given vectors
x and y:
\begin{align}\|x + y\|^2 & = \langle x + y, x + y \rangle \\& = \|x\|^2 + \langle x, y \rangle + \langle y, x \rangle + \|y\|^2 \\& = \|x\|^2 + 2 \text{ Re} \langle x, y \rangle + \|y\|^2\\& \le \|x\|^2 + 2|\langle x, y \rangle| + \|y\|^2 \\& \le \|x\|^2 + 2\|x\|\|y\| + \|y\|^2 \\& = \left (\|x\| + \|y\|\right)^2.\end{align}
Taking square roots gives the triangle inequality.
The Cauchy–Schwarz inequality allows one to extend the notion of "angle between two vectors" to any real inner product space, by defining:
\cos\theta_{xy}=\frac{\langle x,y\rangle}{\|x\| \|y\|}.
The Cauchy–Schwarz inequality proves that this definition is sensible, by showing that the right-hand side lies in the interval [−1, 1], and justifies the notion that (real) Hilbert spaces are simply generalizations of the Euclidean space.
It can also be used to define an angle in complex inner product spaces, by taking the absolute value of the right-hand side, as is done when extracting a metric from quantum fidelity.
The Cauchy–Schwarz is used to prove that the inner product is a continuous function with respect to the topology induced by the inner product itself.
The Cauchy–Schwarz inequality is usually used to show Bessel's inequality.
Probability theory
For the multivariate case,
\operatorname{Var}(Y) \geq \operatorname{Cov}(Y,X) \operatorname{Var}(X)^{-1} \operatorname{Cov}(X,Y),
in the sense of the positive semidefinite ordering. For the univariate case,
\operatorname{Var}(Y) \geq \frac{|\operatorname{Cov}(X,Y)|^2}{\operatorname{Var}(X)}.
Indeed, for random variables X and Y, the expectation of their product is an inner product. That is,
\langle X, Y \rangle := \operatorname{E}(XY),
and so, by the Cauchy–Schwarz inequality,
|\operatorname{E}(XY)|^2 \leq \operatorname{E}(X^2) \operatorname{E}(Y^2).
Moreover, if μ = E(X) and ν = E(Y), then
\begin{align}|\operatorname{Cov}(X,Y)|^2 &= |\operatorname{E}( (X - \mu)(Y - \nu) )|^2 = | \langle X - \mu, Y - \nu \rangle |^2\\&\leq \langle X - \mu, X - \mu \rangle \langle Y - \nu, Y - \nu \rangle \\& = \operatorname{E}( (X-\mu)^2 ) \operatorname{E}( (Y-\nu)^2 ) \\& = \operatorname{Var}(X) \operatorname{Var}(Y),\end{align}
where Var denotes variance and Cov denotes covariance.
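The variance bound can be illustrated empirically; the simulated data below (correlated Gaussians) is hypothetical.

```python
import random

# Empirical illustration with simulated data (values are arbitrary): the
# sample covariance is an inner product of centered data vectors, so
# Cov(X, Y)^2 <= Var(X) * Var(Y) holds for every sample.
random.seed(42)
n = 1000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [0.5 * x + random.gauss(0.0, 1.0) for x in xs]

mx = sum(xs) / n
my = sum(ys) / n
var_x = sum((x - mx) ** 2 for x in xs) / n
var_y = sum((y - my) ** 2 for y in ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
```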
Generalizations
Various generalizations of the Cauchy–Schwarz inequality exist in the context of operator theory, e.g. for operator-convex functions, and operator algebras, where the domain and/or range of
φ are replaced by a C*-algebra or W*-algebra.
This section lists a few of such inequalities from the operator algebra setting, to give a flavor of results of this type.
Positive functionals on C*- and W*-algebras
One can discuss inner products as positive functionals. Given a Hilbert space L^2(m), m being a finite measure, the inner product < · , · > gives rise to a positive functional φ by
\phi(g) = \int g \, dm.
Since < ƒ, ƒ > ≥ 0, φ(f*f) ≥ 0 for all ƒ in L^2(m), where ƒ* is the pointwise conjugate of ƒ. So φ is positive. Conversely every positive functional φ gives a corresponding inner product < ƒ, g > = φ(g*ƒ). In this language, the Cauchy–Schwarz inequality becomes
| \phi(g^*f) |^2 \leq \phi(f^*f) \phi(g^*g),
which extends verbatim to positive functionals on C*-algebras.
We now give an operator theoretic proof of the Cauchy–Schwarz inequality which passes to the C*-algebra setting. One can see from the proof that the Cauchy–Schwarz inequality is a consequence of the positivity and anti-symmetry axioms of the inner product.
Consider the positive matrix
M =\begin{bmatrix}f^*\\g^*\end{bmatrix}\begin{bmatrix}f & g\end{bmatrix}=\begin{bmatrix}f^*f & f^* g \\g^*f & g^*g\end{bmatrix}.
Since
φ is a positive linear map whose range, the complex numbers C, is a commutative C*-algebra, φ is completely positive. Therefore
M' = (I_2 \otimes \phi)(M) =\begin{bmatrix}\phi(f^*f) & \phi(f^* g) \\\phi(g^*f) & \phi(g^*g)\end{bmatrix}
is a positive 2 × 2 scalar matrix, which implies it has positive determinant:
\phi(f^*f) \phi(g^*g) - | \phi(g^*f) |^2 \geq 0 \quad \text{i.e.} \quad \phi(f^*f) \phi(g^*g) \geq | \phi(g^*f) |^2. \,
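As an illustrative sketch (not part of the source), one can take φ(h) = Σᵢ wᵢ hᵢ with positive weights wᵢ — the positive functional of a discrete finite measure — and confirm the determinant condition numerically:

```python
# phi(h) = sum_i w_i * h_i with w_i > 0 is a positive functional on
# complex-valued "functions" on three points; f, g below are arbitrary
# illustrative vectors, and * denotes pointwise conjugation.
w = [0.2, 0.5, 0.3]
f = [1 + 2j, -0.5 + 1j, 3 - 1j]
g = [2 - 1j, 1 + 0j, -1 + 2j]

def phi(h):
    return sum(wi * hi for wi, hi in zip(w, h))

# Entries of M' = [[phi(f*f), phi(f*g)], [phi(g*f), phi(g*g)]]
phi_ff = phi([abs(a) ** 2 for a in f])
phi_gg = phi([abs(b) ** 2 for b in g])
phi_gf = phi([b.conjugate() * a for a, b in zip(f, g)])

# Positive determinant of M' = the Cauchy-Schwarz inequality for phi
det = phi_ff * phi_gg - abs(phi_gf) ** 2
```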
This is precisely the Cauchy–Schwarz inequality. If
ƒ and g are elements of a C*-algebra, f* and g* denote their respective adjoints.
We can also deduce from above that every positive linear functional is bounded, corresponding to the fact that the inner product is jointly continuous.
Positive maps
Positive functionals are special cases of positive maps. A linear map Φ between C*-algebras is said to be a
positive map if a ≥ 0 implies Φ(a) ≥ 0. It is natural to ask whether inequalities of Schwarz type exist for positive maps. In this more general setting, additional assumptions are usually needed to obtain such results.

Kadison–Schwarz inequality
The following theorem is named after Richard Kadison.
Theorem. If Φ is a unital positive map, then for every normal element a in its domain, we have Φ(a*a) ≥ Φ(a*)Φ(a) and Φ(a*a) ≥ Φ(a)Φ(a*).
This extends the fact φ(a*a) · 1 ≥ φ(a)*φ(a) = |φ(a)|^2, when φ is a linear functional.
The case when a is self-adjoint, i.e. a = a*, is sometimes known as Kadison's inequality.

2-positive maps
When Φ is 2-positive, a stronger assumption than merely positive, one has something that looks very similar to the original Cauchy–Schwarz inequality:
Theorem (Modified Schwarz inequality for 2-positive maps) [2]: For a 2-positive map Φ between C*-algebras, for all a, b in its domain,
\Phi(a)^* \Phi(a) \leq \Vert \Phi(1) \Vert \Phi(a^*a), \quad (1)
\Vert \Phi(a^*b) \Vert^2 \leq \Vert \Phi(a^*a) \Vert \cdot \Vert \Phi(b^*b) \Vert. \quad (2)
A simple argument for (2) is as follows. Consider the positive matrix
M= \begin{bmatrix}a^* & 0 \\b^* & 0\end{bmatrix}\begin{bmatrix}a & b \\0 & 0\end{bmatrix}=\begin{bmatrix}a^*a & a^* b \\b^*a & b^*b\end{bmatrix}.
By 2-positivity of Φ,
(I_2 \otimes \Phi) M = \begin{bmatrix}\Phi(a^*a) & \Phi(a^* b) \\\Phi(b^*a) & \Phi(b^*b)\end{bmatrix}
is positive. The desired inequality then follows from the properties of positive 2 × 2 (operator) matrices.
Part (1) is analogous. One can replace the matrix by
M = \begin{bmatrix}1 & a \\ 0 & 0\end{bmatrix}^* \begin{bmatrix}1 & a \\ 0 & 0\end{bmatrix} = \begin{bmatrix}1 & a \\ a^* & a^*a\end{bmatrix}.
Physics
The general formulation of the Heisenberg uncertainty principle is derived using the Cauchy–Schwarz inequality in the Hilbert space of quantum observables.
If $x^2 + 2xy - y^2 = 6$, then find the minimum value of $(x^2 + y^2)^2$, where $x$ and $y$ are real numbers.
Note by Kiran Patel 6 years, 3 months ago
A much easier and convenient way would be this:
$x^2 - y^2 = 6 - 2xy$

On squaring we have:

$x^4 + y^4 = 36 + 6(xy)^2 - 24xy$

Let $N = (x^2 + y^2)^2$ for some $N \in \mathbb{R}$; obviously $N \geq 0$.

$N = x^4 + y^4 + 2(xy)^2$

Substituting the value $x^4 + y^4 = 36 + 6(xy)^2 - 24xy$:

$N = 36 + 8(xy)^2 - 24xy$

$N = 8\left((xy)^2 - 3xy + \frac{9}{2}\right)$

On completing the square:

$N = 8\left(\left(xy - \frac{3}{2}\right)^2 - \frac{9}{4} + \frac{9}{2}\right) = 8\left(\left(xy - \frac{3}{2}\right)^2 + \frac{9}{4}\right)$

Clearly the minimum occurs at $xy = \frac{3}{2}$.

$\Rightarrow N = 18$ at $xy = \frac{3}{2}$, where $(x, y) = \left(\sqrt{\dfrac{\sqrt{18}+3}{2}}, \sqrt{\dfrac{\sqrt{18}-3}{2}}\right)$.

We get the minimum to be $18$.
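As a sketch (not part of the original solution), the minimum can be confirmed by a brute-force scan along the constraint curve:

```python
import math

# Scan the constraint curve x^2 + 2xy - y^2 = 6.  Solving the quadratic in y
# gives y = x +/- sqrt(2x^2 - 6), which needs x^2 >= 3; the symmetry
# (x, y) -> (-x, -y) preserves both the constraint and (x^2 + y^2)^2, so
# scanning x > 0 is enough.
best = float("inf")
x = math.sqrt(3.0)
while x < 10.0:
    d = math.sqrt(max(2.0 * x * x - 6.0, 0.0))  # max() guards float roundoff
    for y in (x + d, x - d):
        best = min(best, (x * x + y * y) ** 2)
    x += 1e-4
```

The scan's minimum agrees with the algebraic answer of 18 to within the grid resolution.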
As with all inequality questions, you need to verify that equality can actually hold. Simply stating that $xy = \frac{3}{2}$ is not sufficient to guarantee that real values of $x$ and $y$ exist. E.g. you could have complex solutions to the equation.
Yes I was about to do that in the edit.
Can't it be solved using trigonometry?
I think $N \geq 0$ is incorrect. It must be $N > 0$.

Yes, if you take into consideration the first equation $N > 0$ is more accurate. I stated that $N \geq 0$ without considering the first equation.
Let $N = (x^2 + y^2)^2$.

Differentiate the first equation with respect to $x$. We get:

$2x + 2y + 2x\frac{dy}{dx} - 2y\frac{dy}{dx} = 0$

$\frac{dy}{dx} = \frac{x+y}{y-x}$

Similarly differentiating the second equation with respect to $x$:

$\frac{dN}{dx} = 2(x^2+y^2)\left(2x + 2y\frac{dy}{dx}\right)$

In order to minimize $N$ we set $\frac{dN}{dx} = 0$.

As such we have:

$x^2 + y^2 = 0$ or $2x + 2y\frac{dy}{dx} = 0$

Note that the first of the above equations gives us $x = \sqrt{-y^2}$, which is not possible since $x, y \in \mathbb{R}$.

So we have that:

$2x + 2y\frac{dy}{dx} = 0$

Substituting for $\frac{dy}{dx}$ gives us:

$y^2 - x^2 + 2xy = 0$

Now, $y^2 = 2xy + x^2 - 6$

$\Rightarrow 2xy + x^2 - 6 - x^2 + 2xy = 0$

$xy = \frac{3}{2}$

This can be verified to be the minimum value for $xy$ by checking the sign of $\frac{d^2y}{dx^2}$, or maybe we could just use the fact that since it is obvious that $N$ will never reach a maximum, $xy = \frac{3}{2}$ will give the minimum. (I am not exactly sure.)

We can re-write $N = (x^2 + x^2 + 2xy - 6)^2$

$N = 4(x^2 + xy - 3)^2$

Since we have $xy = \frac{3}{2}$:

$x^2 - y^2 = 6 - 3 = 3$

Squaring both sides gives us:

$x^4 + y^4 - 2(xy)^2 = 9$

$x^4 + y^4 = \frac{27}{2}$

Now,

$N = x^4 + y^4 + 2(xy)^2$

$\Rightarrow N = \frac{27}{2} + 2 \cdot \frac{9}{4}$

$N = 18$ is the minimum value of the expression.
I am going to supply an answer that is quite similar to SRKX's (which is very very good) because I want to discuss in more detail a few important things. First, you cannot use a stochastic volatility model for the SDE that you've provided as that's GBM with constant diffusion. However, based on what you've said it's obvious you wish to model a discretized version of the following process:
$$\boxed{dS(t) = S(t)(\mu dt + \sigma(t)dW(t))}$$
where $W(t)$ is a standard scalar Brownian motion under the real-world probability measure, $\mu$ is its constant drift and $\sigma(t)$ is its volatility process which we assume to follow GARCH dynamics.
Doing what we do most of the time in finance research, I'm going to define returns as $R(t):= \ln{\frac{S(t)}{S(t-1)}}$. In the univariate setting, the mean equation of an ARCH/GARCH-type model generally follows:
$$\boxed{R(t) = \mu + \sum_{i=1}^N a_i R(t-i) + b \sigma(t) + \mathbf{c}\mathbf{X(t)} + \sigma(t) (W(t)-W(t-1))}$$
where $a_i$ are the AR(i) coefficients, $b$ is the coefficient of the garch-in-mean, $\mathbf{c}$ is a row vector of coefficients to some exogenous variables represented by column vector $\mathbf{X(t)}$. However, given the process that you've specified, we are restricted to a very specific mean equation:
$$\boxed{R(t) = \mu + \sigma(t)(W(t)-W(t-1)) }$$
where $W(t)-W(t-1) \overset{d}{=} W(1) \sim N(0,1)$. This restriction is not automatically troubling as it is prevalent when modelling DCC-fGARCH (time-varying correlations with family GARCH). However most studies that look to things such as bivariate variance spillovers (e.g., contagion research) or that are doing a study where univariate GARCH is involved (e.g., exchange rate exposure research where market returns and exchange rate returns are exogenous variables in the mean equation) will find this process to be inappropriate for their usage.
Ignoring the empirical issues with the mean equation that follow from the proposed SDE, we can see that the mean equation representation follows from the fact that, at $\mathcal{F}_{0}$ and under real-world $\mathbb{P}$:
$S(t) = e^{\mu t + \sigma(t) W(t)}$
$\therefore \ln{\frac{S(t)}{S(t-1)}} = \mu + \sigma(t) (W(t) - W(t-1))$
$\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \overset{d}{=}\mu + \sigma(t) W(1) $
Luckily, the variance equation is unconstrained, and we can use the GARCH model whose process is defined here. For a discretized econometric representation of GARCH(1,1) we have, as SRKX lays out;
$$\boxed{\sigma_t^2 = \omega + \alpha (W(t-1)-W(t-2))^2\sigma(t-1)^2 + \beta \sigma(t-1)^2}$$
So you should then proceed with your plan of discretization while using the correct mean equation that I boxed. This is the default in the R packages ccgarch, rugarch and fGarch, so it's your lucky day!
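A minimal simulation sketch of the discretized process above; the parameter values are hypothetical, chosen only so that $\alpha + \beta < 1$ (covariance stationarity):

```python
import math
import random

# Sketch of the discretized process (illustrative parameters only):
#   R(t)       = mu + sigma(t) * Z(t),   Z(t) ~ N(0, 1) i.i.d.
#   sigma(t)^2 = omega + alpha * eps(t-1)^2 + beta * sigma(t-1)^2
# where eps(t) = sigma(t) * Z(t) plays the role of sigma(t)(W(t) - W(t-1)).
random.seed(0)
mu, omega, alpha, beta = 0.0005, 1e-6, 0.08, 0.90
T = 10000

sigma2 = omega / (1.0 - alpha - beta)   # start at the unconditional variance
returns = []
for _ in range(T):
    z = random.gauss(0.0, 1.0)
    eps = math.sqrt(sigma2) * z
    returns.append(mu + eps)
    sigma2 = omega + alpha * eps ** 2 + beta * sigma2  # GARCH(1,1) recursion

mean_r = sum(returns) / T
var_r = sum((r - mean_r) ** 2 for r in returns) / T
```

The sample variance of the simulated returns should hover around the unconditional level omega / (1 - alpha - beta).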
A transistor goes into saturation when both the base-emitter and base-collector junctions are forward biased, basically. So if the collector voltage drops below the base voltage, and the emitter voltage is below the base voltage, then the transistor is in saturation.
Consider this Common Emitter Amplifier circuit. If the collector current is high enough, then the voltage drop across the resistor will be big enough to lower the collector voltage below the base voltage. But note that the collector voltage can't go too low, because the base-collector junction will then be like a forward-biased diode! So, you will have a voltage drop across the base-collector junction but it will not be the usual 0.7V, it will be more like 0.4V.
How do you bring it out of saturation? You could reduce the amount of base drive to the transistor (either reduce the voltage \$V_{be}\$ or reduce the current \$I_b\$), which will then reduce the collector current, which means the voltage drop across the collector resistor will be decreased also. This should increase the voltage at the collector and act to bring the transistor out of saturation. In the "extreme" case, this is what is done when you switch off the transistor. The base drive is removed completely. \$V_{be}\$ is zero and so is \$I_b\$. Therefore, \$I_c\$ is zero too, and the collector resistor is like a pull-up, bringing the collector voltage up to \$V_{CC}\$.
A follow-up comment on your statement:

> Does a BJT become saturated by raising Vbe above a certain threshold? I doubt this, because BJTs, as I understand them, are current-controlled, not voltage-controlled.
There are a number of different ways to describe transistor operation. One is to describe the relationship between currents in the different terminals:
$$I_c = \beta I_b$$
$$I_c = \alpha I_e$$
$$I_e = I_b + I_c$$
etc. Looking at it this way, you could say that the collector current is controlled by the base current.
Another way of looking at it would be to describe the relationship between base-emitter voltage and collector current, which is
$$I_c = I_s e^{\frac{V_{be}} {V_T}}$$
Looking at it this way, the collector current is controlled by the base voltage.
This is definitely confusing. It confused me for a long time. The truth is that you cannot really separate the base-emitter voltage from the base current, because they are interrelated. So both views are correct. When trying to understand a particular circuit or transistor configuration, I find it is usually best just to pick whichever model makes it easiest to analyze.
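A small sketch of the two views; the component values (`IS`, `VT`, `BETA`, and the bias network) are typical textbook numbers, not taken from the answer:

```python
import math

# Illustrative model only: IS, VT, BETA and the circuit values below are
# typical textbook numbers, not from the answer above.
IS = 1e-14      # saturation current (A)
VT = 0.025      # thermal voltage at room temperature (V)
BETA = 100.0    # forward current gain

def ic_from_vbe(vbe):
    """Voltage-control view (active region): Ic = Is * exp(Vbe / VT)."""
    return IS * math.exp(vbe / VT)

def ib_from_ic(ic):
    """Current-control view: Ib = Ic / beta."""
    return ic / BETA

def is_saturated(vbe, vcc=5.0, rc=1000.0):
    """Common-emitter stage: the active-region model predicts saturation once
    the collector voltage Vc = Vcc - Ic*Rc would drop below the base voltage
    (at which point Ic is really limited by the circuit, not the exponential)."""
    ic = ic_from_vbe(vbe)
    vc = vcc - ic * rc
    return vc < vbe
```

With these numbers, a base-emitter voltage of 0.6 V leaves the transistor active, while 0.7 V would demand more collector current than Vcc and Rc can supply, driving it into saturation.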
Edit:

> Does a BJT become saturated by allowing Ib to go over a certain threshold? If so, does this threshold depend on the "load" that is connected to the collector? **Is a transistor saturated simply because Ib is high enough that the beta of the transistor is no longer the limiting factor in Ic?**
The bold part is basically exactly right. But the \$I_b\$ threshold is not intrinsic to a particular transistor. It will depend not only on the transistor itself but on the configuration: \$V_{CC}\$, \$R_C\$, \$R_E\$, etc. |
Let $m\in \mathbb{N}$, $p\in [1,\infty]$, $W^{m,p}([0,1])$ the space of all functions $[0,1]\rightarrow \mathbb{R}$ which are $m$ times weakly differentiable with weak derivatives in $L^p$, $$|u|_{W^{m,p}([0,1])}:=\|u^{(m)}\|_{L^p([0,1])}$$ for all $u\in W^{m,p}$ the Sobolev seminorm where $u^{(m)}$ is the $m$-th derivative of $u$, $[n]=\{0,\dots,n\}$, $\ell([n],\mathbb{R})$ the set of all functions $[n]\rightarrow \mathbb{R}$, $\nabla \colon \ell([n],\mathbb{R})\rightarrow \ell([n-1],\mathbb{R})$ defined by $$(\nabla x)_i:=x_{i+1}-x_i,$$ $$\|f\|_{\ell^p}:=\left(\frac{1}{n}\sum_{i\in [n]} |f_i|^p\right)^{\frac{1}{p}}$$ the discrete $p$-norm and $$|f|_{\ell^{m,p}}:=\|n^m\nabla^mf\|_{\ell^p}$$ the discrete Sobolev norm. I want to find a linear operator $E\colon \ell([n],\mathbb{R})\rightarrow W^{m,p}([0,1])$ which is interpolatory, i.e. $Ef(k/n)=f_k$ for all $k\in [n]$, and satisfies $$C_1|f|_{\ell^{m,p}}\leq |Ef|_{W^{m,p}([0,1])}\leq C_2|f|_{\ell^{m,p}}$$for some $C_1,C_2$ independent of $n$ and $f$. I tried spline-interpolation of degree $m$. However I don't know how to prove the statement. Such a theorem would be very useful for me and maybe others to get some high-order approximation estimates.
I prove here that the upper bound for your inequality is true for $m=1$. However, this approach should help you prove what you want in general. (EDIT: This is true in general after checking with Charles Fefferman--see edit at bottom.)
What you're trying to do is similar to recent work by Charles Fefferman, Arie Israel, and Garving Luli. Using your notation, define $X_m \,\colon= W^{m,p}(\mathbb{R})$ equipped with the homogeneous seminorm $|F|_{X_m} := ||F^{(m)}||_{L^p(\mathbb{R})}$, and define $X_m([n])\, \colon= \ell([n],\mathbb{R})$ equipped with the seminorm $|f|_{X_m([n])} \,\colon= \inf\{|F|_{X_m} \, \colon\, F \in X_m,\, F=f \, \text{on}\, [n]\}$. Let $p \in (1,\infty)$. In Sobolev Extension by Linear Operators, one of the theorems Fefferman, Israel, and Luli prove is that there exists a linear map $T \colon X_m([n]) \rightarrow X_m$ such that $Tf = f$ on $[n]$ and $|Tf|_{X_m} \leq C|f|_{X_m([n])}$, where $C$ depends only on $m,n,p$. Note that we also have the trivial fact $|f|_{X_m([n])} \leq |Tf|_{X_m}$. Their result is in fact true for extensions of any subset in $\mathbb{R}^n$ with $p \in (n,\infty)$, not just finite subsets of $\mathbb{R}$ (the restriction on $p$ comes from the Sobolev Embedding Theorem). Recently, they proved in Fitting a Sobolev function to data, that when the subset is finite, $T$ additionally has the important but technical condition of "$\Omega$-assisted bounded depth," where $\Omega$ is a certain set of linear functionals on $X_m([n])$.
Applying this to your problem, define $Y_m\,\colon= W^{m,p}([0,n])$ with $|F|_{Y_m} := ||F^{(m)}||_{L^p([0,n])}$, and define $Y_m([n])\, \colon= \ell([n],\mathbb{R})$ equipped with seminorm $|f|_{Y_m([n])} \, \colon= \inf\{|F|_{Y_m} \, \colon\, F \in Y_m,\, F=f \, \text{on}\, [n]\}$. Note that $|f|_{Y_m([n])} = |f|_{X_m([n])}$. Suppose we have that $|f|_{Y_m([n])} = \big(\sum\limits_{i \in [n-m]} |\nabla^m f|^p\big)^{\frac{1}{p}}$. Then, $$\big(\sum\limits_{i \in [n-m]} |\nabla^m f|^p\big)^{\frac{1}{p}} = |f|_{Y_m([n])} \leq |Tf|_{Y_m} \leq |Tf|_{X_m} \leq C|f|_{X_m([n])} = C\big(\sum\limits_{i \in [n-m]} |\nabla^m f|^p\big)^{\frac{1}{p}}$$ Multiplying in by $n^{m -\frac{1}{p}}$, this implies your desired inequality, using that $$n^{m -\frac{1}{p}}\big(\int\limits_{0}^n |u^{(m)}(x)|^p \,dx\big)^{\frac{1}{p}} = \Big(\int\limits_{0}^1 \Big|\frac{d^m}{dx^m}\big[u(nx)\big]\Big|^p \,dx\Big)^{\frac{1}{p}}$$ for $u \in Y_m$ by scaling and the chain rule.
So, your problem comes down to comparing $|f|_{Y_m([n])}$ and $\big(\sum_{i \in [n-m]} |\nabla^m f|^p\big)^{\frac{1}{p}}$. It is easy to see that $|f|_{Y_1([n])} \leq \big(\sum_{i \in [n-1]} |\nabla f|^p\big)^{\frac{1}{p}}$, which gives the upper bound in your desired inequality for $m=1$. Let $f \in \ell([n],\mathbb{R})$, and let $G_1 \colon [0,n] \rightarrow \mathbb{R}$ be the function whose graph consists of the straight lines connecting $f(i)$ to $f(i+1)$ for $i=0,\dots,n-1$. Then, $G_1 \in Y_1$ and for $z \in (i,i+1)$, $\frac{dG_1}{dx}(z) = (\nabla f)_i$. Then, $|f|_{Y_1([n])} \leq ||G_1^{(1)}||_{L^p(\mathbb{R})} = \big(\sum_{i \in [n-1]} |\nabla f|^p\big)^{\frac{1}{p}}$.
EDIT: I asked Charles Fefferman today, just to make sure the direction I'm going is correct, if your discrete Sobolev norm and the norm induced on $[n]$ by restrictions of extensions are equivalent, and he said that they certainly are. The lower bound follows from considering a polynomial extension, and the upper bound should follow from an argument where you prove it for small subsets and use a partitions of unity to prove it for all of $[n]$. Since the norms on $[n]$ are equivalent, this means that using my little argument above, your problem is true, since it is equivalent to a specific case of Fefferman, Israel, and Luli's result. |
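For intuition, the $m=1$ equality used above can be checked numerically: the piecewise-linear interpolant's derivative is the constant $n(\nabla f)_i$ on each subinterval, so its continuous seminorm matches the discrete one. A sketch (not from the answer):

```python
import random

# m = 1: the piecewise-linear interpolant Ef with nodes k/n has derivative
# n * (f_{i+1} - f_i) on [i/n, (i+1)/n], so |Ef|_{W^{1,p}} = |f|_{l^{1,p}}.
def discrete_seminorm(f, p):
    n = len(f) - 1
    return (sum(abs(n * (f[i + 1] - f[i])) ** p for i in range(n)) / n) ** (1 / p)

def interpolant_seminorm(f, p, samples=1000):
    # midpoint-rule integral of |(Ef)'|^p over [0, 1]; exact here because the
    # integrand is piecewise constant and samples is a multiple of n
    n = len(f) - 1
    total = 0.0
    for j in range(samples):
        t = (j + 0.5) / samples
        i = min(int(t * n), n - 1)          # which subinterval t falls in
        total += abs(n * (f[i + 1] - f[i])) ** p / samples
    return total ** (1 / p)

random.seed(1)
f = [random.random() for _ in range(11)]    # grid values for n = 10
```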
The Open and Closed Sets of a Topological Space Examples 1
Recall from The Open and Closed Sets of a Topological Space page that if $(X, \tau)$ is a topological space then a set $A \subseteq X$ is said to be open if $A \in \tau$ and $A$ is said to be closed if $A^c \in \tau$. Furthermore, if $A$ is both open and closed, then we say that $A$ is clopen.
We will now look at some examples of identifying the open, closed, and clopen sets of a topological space $(X, \tau)$.
Example 1 Let $X = \{ a, b, c, d \}$ and consider the topology $\tau = \{ \emptyset, \{ c \}, \{ a, b \}, \{ c, d \}, \{a, b, c \}, X \}$. What are the open, closed, and clopen sets of $X$ with respect to this topology?
The open sets of $X$ are those sets forming $\tau$:
\begin{align} \quad \emptyset, \{ c \}, \{ a, b \}, \{ c, d \}, \{ a, b, c \}, X \end{align}
The closed sets of $X$ are the complements of all of the open sets:
\begin{align} \quad X, \{ a, b, d \}, \{ c, d \}, \{ a, b \}, \{ d \}, \emptyset \end{align}
The clopen sets of $X$ are the sets that are both open and closed:
\begin{align} \quad \emptyset, \{ a, b \}, \{ c, d \}, X \end{align}
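These classifications can be verified mechanically by representing each subset as a `frozenset`; a small sketch:

```python
# Verify the open/closed/clopen classification of Example 1 by brute force.
X = frozenset("abcd")
tau = [frozenset(s) for s in ["", "c", "ab", "cd", "abc", "abcd"]]

open_sets = set(tau)
closed_sets = {X - A for A in tau}     # complements of the open sets
clopen_sets = open_sets & closed_sets  # sets that are both open and closed
```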
Example 2 Prove that if $X$ is a set and every $A \subseteq X$ is clopen with respect to the topology $\tau$ then $\tau$ is the discrete topology on $X$.
Let $X$ be a set and let every $A \subseteq X$ be clopen. Then every $A \subseteq X$ is open, i.e., every subset of $X$ is open, so $\tau = \mathcal P(X)$. Hence $\tau$ is the discrete topology on $X$.
Example 3 Consider the topological space $(\mathbb{Z}, \tau)$ where $\tau$ is the cofinite topology. Determine whether the set of even integers is open, closed, and/or clopen. Determine whether the set $\mathbb{Z} \setminus \{1, 2, 3 \}$ is open, closed, and/or clopen. Determine whether the set $\{-1, 0, 1 \}$ is open, closed, and/or clopen. Show that any nontrivial subset of $\mathbb{Z}$ is never clopen.
Recall that the cofinite topology $\tau$ on $\mathbb{Z}$ is described by:
\begin{align} \quad \tau = \{ U \subseteq \mathbb{Z} : U = \emptyset \; \text{or} \; U^c \; \text{is finite} \} \end{align}
We first consider the set of even integers which we denote by $E = \{ ..., -2, 0, 2, ... \}$. We see that $E^c$ is the set of odd integers, i.e., $E^c = \{ ..., -3, -1, 1, 3, ... \}$ which is an infinite set. Therefore $E \not \in \tau$ so $E$ is not open. Furthermore, we have that $(E^c)^c = E$ is an infinite set and $E^c \not \in \tau$ so $E$ is not closed either.
We now consider the set $\mathbb{Z} \setminus \{1, 2, 3 \}$. We have that $(\mathbb{Z} \setminus \{1, 2, 3 \})^c = \{1, 2, 3 \}$ which is a finite set. Therefore $\mathbb{Z} \setminus \{1, 2, 3 \} \in \tau$, so $\mathbb{Z} \setminus \{1, 2, 3 \}$ is open. Now consider the complement $(\mathbb{Z} \setminus \{1, 2, 3 \})^c = \{1, 2, 3 \}$. The complement of this set is $(\{1, 2, 3 \})^c = \mathbb{Z} \setminus \{1, 2, 3 \}$ which is an infinite set, so $(\{ 1, 2, 3 \})^c \not \in \tau$. Hence $\mathbb{Z} \setminus \{1, 2, 3 \}$ is not closed.
Lastly we consider the set $\{ -1, 0, 1 \}$. We have that $(\{ -1, 0, 1 \})^c = \mathbb{Z} \setminus \{-1, 0, 1 \}$ which is an infinite set, so $\{-1, 0, 1 \} \not \in \tau$ and hence $\{ -1, 0, 1 \}$ is not open. Now consider the complement $(\{-1, 0, 1\})^c = \mathbb{Z} \setminus \{-1, 0, 1 \}$. The complement of this set is $\{ -1, 0, 1 \}$, which is finite, so $(\{-1, 0, 1\})^c \in \tau$. Hence $\{-1, 0, 1 \}$ is closed.
Lastly, let $A \subseteq \mathbb{Z}$ be a nontrivial subset of $\mathbb{Z}$, i.e., $A \neq \emptyset$ and $A \neq \mathbb{Z}$. Suppose that $A$ is clopen. Then $A$ is both open and closed. Hence by definition, $A$ and $A^c$ are both open. Hence $A^c$ and $A$ are both finite. But $\mathbb{Z} = A \cup A^c$, which implies that $\mathbb{Z}$ is a finite set - which is preposterous since the set of integers is an infinite set! Hence $A$ cannot be clopen.
Euler-Binet Formula/Corollary 1

Corollary to Euler-Binet Formula:

$F_n = \dfrac {\phi^n} {\sqrt 5}$ rounded to the nearest integer

where:

$F_n$ denotes the $n$th Fibonacci number
$\phi$ denotes the golden mean: $\phi = \dfrac {1 + \sqrt 5} 2$
Proof
From Euler-Binet Formula:
$F_n = \dfrac {\phi^n - \hat \phi^n} {\sqrt 5} = \dfrac {\phi^n } {\sqrt 5} - \dfrac {\hat \phi^n} {\sqrt 5}$
But $\size {\dfrac {\hat \phi^n} {\sqrt 5} } < \dfrac 1 2$ for all $n \ge 0$.
Thus $\dfrac {\phi^n } {\sqrt 5}$ differs from $F_n$ by a number less than $\dfrac 1 2$.
Thus the nearest integer to $\dfrac {\phi^n } {\sqrt 5}$ is $F_n$.
$\blacksquare$
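A quick computational check of the corollary for the first several Fibonacci numbers:

```python
import math

# Check: F_n equals phi^n / sqrt(5) rounded to the nearest integer.
phi = (1.0 + math.sqrt(5.0)) / 2.0

def fib(n):
    """F_0 = 0, F_1 = 1, F_n = F_{n-1} + F_{n-2}."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_rounded(n):
    return round(phi ** n / math.sqrt(5.0))
```

Double-precision arithmetic is more than accurate enough here for the first few dozen terms.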
Sources:
- 1986: David Wells: Curious and Interesting Numbers: $5$
- 1997: Donald E. Knuth: The Art of Computer Programming: Volume 1: Fundamental Algorithms (3rd ed.): $\S 1.2.8$: Fibonacci Numbers: $(15)$
- 1997: David Wells: Curious and Interesting Numbers (2nd ed.): $5$ |
No, you cannot skip it. Uninformative priors contain information and for multivariate regression assure that the sum of the probabilities will be unity. In fact, you cannot use a uniform prior on a multivariate regression with three or more independent variables or the sum of the probabilities of your posterior will not equal one. This is likely the reason that Stein's lemma in Frequentist statistics exists. If you try it and you get lucky, then you will get screwy results that warn you something is wrong. If you get unlucky, the results will not permit you to detect the problems.
Let me give you some examples of the information in uninformative prior distributions. For the binomial distribution whose true parameter value is unknown, a simple prior is the beta distribution. This does not mean you should use a beta distribution, merely that it is very convenient. There are three known uninformative priors for the binomial, each a form of the beta distribution. The first will result in a point estimate identical to the Frequentist solution. It is $$p^{-1}(1-p)^{-1}.$$ It provides an unbiased estimator for $p$ in the sense that the maximum a posteriori (MAP) estimator and the Frequentist estimator match for the estimated value of $p$.
If you look at that prior, however, it provides infinite weight on either zero or one and minimal weight to $p=\frac{1}{2}$. It is uninformative in the sense that it does not influence the location of the MAP estimator. It does not weight the tails uniformly though.
The second is the Jeffreys' prior, which is $$p^{-1/2}(1-p)^{-1/2}.$$ As with the first one, it provides infinite weight on either extreme and minimal weight at $p=1/2$. It is a biased estimator in that it adds one half a success and one half of a failure to the ultimate solution. It is equivalent to tossing a coin one time and it lands on its side. The reason to use it is that it permits you to do something that cannot be done in non-Bayesian methods and cannot be done with either uninformative prior, it allows you to transform the variable of interest and get the same statistical results under the transformation as you get in the raw data.
A common transformation is to take the logarithm of data, but $\hat{\mu}_{raw}\ne\exp(\log(\hat{\mu}_{\log}))$ in most cases. If you are careful in your use of the prior, then $\hat{\mu}_{raw}=\exp(\log(\hat{\mu}_{\log}))$, as well as all other moments and intervals. A Jeffreys' prior preserves results across transformations. An important area of research in Bayesian methods are how to create invariant results.
The third common uninformative prior is the uniform distribution, which is also a beta distribution. The uniform distribution is $\Pr(p)=1,\forall{p}\in(0,1)$. This is equivalent to adding one success and one failure to the final answer. A consequence of this is that although each possible solution is equally probable, prior to seeing the data, the expectation of $p$ will be biased toward the center.
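The three priors can be compared concretely through their posterior means; the data below (7 successes in 10 trials) is hypothetical:

```python
from fractions import Fraction as F

# Posterior means for k successes in n trials under the three beta priors
# discussed above.  A Beta(a, b) prior plus binomial data gives a
# Beta(a + k, b + n - k) posterior, whose mean is (a + k) / (a + b + n).
def posterior_mean(a, b, k, n):
    return (a + k) / (a + b + n)

k, n = 7, 10   # hypothetical data

haldane  = posterior_mean(F(0), F(0), k, n)        # p^-1 (1-p)^-1 prior
jeffreys = posterior_mean(F(1, 2), F(1, 2), k, n)  # adds half a success, half a failure
uniform  = posterior_mean(F(1), F(1), k, n)        # adds one success and one failure
```

The first prior reproduces the frequentist estimate $k/n$ exactly, while the uniform prior pulls the expectation toward $1/2$, as described above.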
This is also true for multivariate regression. The more important issue, however, is whether or not you are in possession of prior information. Is there no prior research among any of the variables? It is generally the case that prior information exists, the more important issue is the disciplined incorporation of prior learning.
Let me give you a silly example, that at another level, is not so silly. Imagine you were eating green beans and you wondered how many calories were in the forkful you were holding. You decide to be careful and sample from the can of beans in equal amounts. You have many samples. Strangely, you also own a calorimeter. You want to estimate the calories per gram.
At first you consider using the maximum likelihood estimate(MLE), but then you realize that you could do better. The MLE is equivalent to using a uniform prior over the half plane, since you also have to estimate standard deviation as a nuisance parameter. You realize that you cannot have negative calories, so you only need a quarter plane. That is information, even though it gives equal weight to all positive solutions.
Then you realize you could do better still. The recommended intake for an adult male is 2000 calories, and you do not believe that a person could survive on one forkful of green beans, so you restrict the prior to uniform up to 2000 calories and half-normal above 2000 calories so it quickly tapers off. Of course, you then realize the can itself carries the FDA estimate of the calories, though you are aware there have been significant errors in the FDA estimates for specific isolated foods.
According to the FDA there are .31 calories per gram, but how big should the standard deviation be? You don't entirely trust the FDA estimate, so you reason that if three standard deviations should fit to the left of .31 (down to zero calories), you should allow three more to the right, giving a truncated normal centered at .31 with $3\sigma=.31$.
You have just gone from an uninformative prior to a weakly informative prior. If you collected multiple studies, you could weight them into your prior for a strongly informative prior distribution. This would protect you from happening to get the weird sample of beans by random chance.
The prior is the most difficult thing to define, particularly if you have academic adversaries, as it should be no stronger than their beliefs if you are to convince them. It sounds like you are using conjugate priors in order to get mathematical simplicity.
Although there are giant discussions on this, I will give you a weak heuristic to find your prior. First, gather as much academic research on the topic, Frequentist or Bayesian, as you can. You are looking for sample size, slope, intercept, covariance and variance estimates. If you think a study was very poorly done, then multiply its variance/covariance estimates by some suitable number, such as 25 or 100. This weakens its influence in the process but preserves the estimate of the location: you are keeping poor data by discounting its value, not excluding it entirely. Consider that your first prior, then look at the second study. That study, assuming it does not use the same data set, will be your likelihood. If you are using conjugate priors, update so that your posterior is now the product of the two studies. Now take your third study, treat your last posterior as your new prior and the third study as your likelihood, and form a new posterior. Continue until you run out of studies, and the final posterior will be your prior.
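The study-by-study updating described above can be sketched with a normal-normal conjugate model, where each update is just precision weighting. The estimates, variances, and the 25x down-weighting of the third study below are made-up numbers for illustration:

```python
def combine(mu1, var1, mu2, var2):
    """Conjugate normal-normal update: precision-weighted average of two estimates."""
    precision = 1 / var1 + 1 / var2
    mu = (mu1 / var1 + mu2 / var2) / precision
    return mu, 1 / precision

# (slope estimate, variance) from three hypothetical studies; the third looked
# poorly done, so its variance is multiplied by 25 to weaken its influence
studies = [(1.2, 0.04), (1.0, 0.09), (1.5, 0.16 * 25)]

mu, var = studies[0]              # the first study becomes the initial prior
for m, v in studies[1:]:          # each later study acts as the likelihood
    mu, var = combine(mu, var, m, v)

print(round(mu, 3), round(var, 4))
```

Notice the posterior variance is smaller than any single study's variance, while the down-weighted third study barely moves the estimate.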
Again, if the research isn't exactly on your question, then multiply the variances and covariances by some large enough number that it weakens the impact of the prior research. If some of the research uses different variables, then you will have to make judgment calls, as there is no simple solution.
If you are wed to an uninformative prior because you are afraid of criticism, then you want the parameters of the prior to make a distribution that is as spread out as possible but still proper. That is to say, it integrates to one. You can, for example, give a million-unit standard deviation when you believe the parameter is probably around .01 units. If the prior is diffuse enough, it won't matter, except that it trivially avoids the problem brought about by Stein's lemma.
A solution is to look at the smallest reported digit. If you report out to five digits past the decimal, then you want your variance to be large enough that the prior only impacts the sixth decimal place in the calculation.
There are far better solutions than this, but you really need to sit down with a statistician to do them. Bayesian methods are not DIY methods at first. |
The Open and Closed Sets of a Topological Space Examples 2
Recall from The Open and Closed Sets of a Topological Space page that if $(X, \tau)$ is a topological space then a set $A \subseteq X$ is said to be open if $A \in \tau$ and $A$ is said to be closed if $A^c \in \tau$. Furthermore, if $A$ is both open and closed, then we say that $A$ is clopen.
We will now look at some examples of identifying the open, closed, and clopen sets of a topological space $(X, \tau)$.
Example 1 Consider the topological space $(\mathbb{R}, \tau)$ where $\tau = \{ (-n, n) : n \in \mathbb{Z}, n \geq 1 \}$. What is the largest open set contained in the interval $(-\pi, e)$? Show that every nontrivial subset $A \subseteq \mathbb{R}$ cannot be clopen.
The open sets of $\mathbb{R}$ with respect to the topology $\tau$ above are:
\begin{align} \quad \emptyset, \; (-1, 1), \; (-2, 2), \; (-3, 3), \; ..., \; \mathbb{R} \end{align}
Consider the interval $(-\pi, e) \approx (-3.14159..., 2.71828...)$. We can clearly see that $(-1, 1) \subset (-\pi, e)$ and $(-2, 2) \subset (-\pi, e)$, but $(-3, 3) \not \subset (-\pi, e)$ since, for example, $2.9 \in (-3, 3)$ and $2.9 \not \in (-\pi, e)$. We see that the open sets of $\mathbb{R}$ with respect to the topology $\tau$ are nested, i.e.:
\begin{align} \quad (-1, 1) \subset (-2, 2) \subset (-3, 3) \subset ... \end{align}
Therefore the largest open set contained in the interval $(-\pi, e)$ is $(-2, 2)$.
Now let $A \subseteq \mathbb{R}$ be a nontrivial subset, i.e., $A \neq \emptyset$ and $A \neq \mathbb{R}$. Suppose that $A$ is clopen. Then $A$ and $A^c$ are both open. Since $A$ is open and nontrivial, $A = (-n, n)$ for some $n \in \mathbb{Z}$ with $n \geq 1$. But then $A^c = (-\infty, -n] \cup [n, \infty) \not \in \tau$, which is a contradiction. Therefore $A$ cannot be clopen.
Example 2 Prove that if $(X, \tau)$ is a topological space where $\tau$ is a nested topology then every nontrivial subset $A \subseteq X$ cannot be clopen.
Let $(X, \tau)$ be a topological space and suppose that $\tau$ is a nested topology. Then for $\tau = \{ \emptyset, U_1, U_2, ..., U_n, ..., X \}$ we have that:
\begin{align} \quad \emptyset \subseteq U_1 \subseteq U_2 \subseteq ... \subseteq U_n \subseteq ... \subseteq X \end{align}
Now suppose that $A \subset X$ is a nontrivial subset of $X$. Then $A \neq \emptyset$ and $A \neq X$. Suppose that $A$ is clopen. Then both $A$ and $A^c$ are open. Hence, for some $m, n \in \mathbb{N}$ we have that $A = U_m$ and $A^c = U_n$. But $A \cap A^c = \emptyset$, so $U_m \cap U_n = \emptyset$. But this happens if and only if either $U_m = \emptyset$ or $U_n = \emptyset$ due to the nesting above. Therefore either $A = \emptyset$ or $A^c = \emptyset$ which implies that $A = \emptyset$ or $A = X$ - a contradiction. Hence every nontrivial subset $A \subseteq X$ cannot be clopen.
Example 3 Let $X$ be nonempty finite set and let $(X, \tau)$ be a topological space. Prove that the number of clopen sets of $X$ with respect to the topology $\tau$ is always even.
Let $m$ denote the number of clopen subsets of $X$ with respect to $\tau$. Then clearly $m \geq 2$ since the sets $\emptyset$ and $X$ are always clopen. Let $A$ be an arbitrary clopen set. Then $A$ and $A^c$ are both open, and likewise both closed, so $A^c$ is also a clopen set. Moreover $A \neq A^c$, since $A \cap A^c = \emptyset$ while $A \cup A^c = X \neq \emptyset$. Hence the clopen sets come in disjoint complementary pairs $\{ A, A^c \}$, and so the number of clopen sets with respect to the topology $\tau$ is always even provided that $X$ is a nonempty finite set. |
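The parity argument in Example 3 can be checked by brute force on a small example (the topology below is a made-up illustration):

```python
X = frozenset({1, 2, 3})
# a topology on X: it contains the empty set, X, {1}, and {2, 3},
# and is closed under unions and intersections
tau = {frozenset(), frozenset({1}), frozenset({2, 3}), X}

# clopen sets are the open sets whose complement is also open
clopen = [A for A in tau if X - A in tau]
print(len(clopen))  # 4 — an even number, as the argument predicts
```

Here every clopen set is paired with its complement: $\{\emptyset, X\}$ and $\{\{1\}, \{2,3\}\}$.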
My mnemonic is to remember a simple special case, and heuristically rederive the equations of motion from that. In the case of wave-related equations, you can obtain most of them by just inventing differential equations that describe different properties of plane waves.
Properties of a plane wave
The simplest solution to a wave equation is the plane wave $\Psi(x,t) = \exp(i\mathbf{k}\cdot\mathbf{x}-i\omega t)$. The time-derivative of this is:$$\frac{\partial\Psi}{\partial t} = -i\omega\Psi$$Insert the de Broglie relation $E = \hbar \omega$, and solve for $E$:$$E = i\hbar\frac{\partial}{\partial t}$$
Then try differentiating the plane wave $\Psi$ with respect to position $\mathbf{x}$:$$\nabla \Psi = i\mathbf{k}\Psi$$Now insert the de Broglie relation $\mathbf{p} = \hbar\mathbf{k}$ and solve for $\mathbf{p}$:$$\mathbf{p} = -i\hbar\nabla$$We now have two equations $E = i\hbar\partial_t$ and $\mathbf{p} = -i\hbar\nabla$ relating spacetime derivatives of a wave function to physical observables.
Schroedinger equation (energy)
Remember that the Schroedinger equation connects the time-derivative of a wave function to its energy. Just right-multiply our equation for $E$ above by a wave function $\Psi$, and we get the Schroedinger equation:$^\dagger$$$E\Psi = i\hbar \frac{\partial\Psi}{\partial t}$$
To get the familiar form of the equation for a single particle in a potential $V(\mathbf{x})$, just remember that classically we have $E = \mathbf{p}^2/2m + V(\mathbf{x})$, use $\mathbf{p} = -i\hbar\nabla$ from above, and insert it into the Schroedinger equation:$$\left[-\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{x})\right]\Psi = i\hbar \frac{\partial\Psi}{\partial t}$$
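As a sanity check on the mnemonic, one can verify symbolically (a sketch using SymPy) that a plane wave with the free-particle dispersion $\omega = \hbar k^2/2m$ satisfies the free ($V = 0$) Schroedinger equation:

```python
import sympy as sp

x, t, hbar, m, k = sp.symbols('x t hbar m k', positive=True)
omega = hbar * k**2 / (2 * m)             # E = p^2/(2m) with E = hbar*omega, p = hbar*k
psi = sp.exp(sp.I * (k * x - omega * t))  # plane wave

lhs = sp.I * hbar * sp.diff(psi, t)            # i*hbar dPsi/dt
rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2)  # -(hbar^2/2m) d^2Psi/dx^2
print(sp.simplify(lhs - rhs))  # 0
```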
Helmholtz equation (momentum)
The Helmholtz equation relates the momentum of a wave to its spatial derivative. To obtain it, just square the relation $\mathbf{p} = -i\hbar\nabla$, and right-multiply the result by a wave $\Psi$:$$\mathbf{p}^2\Psi = -\hbar^2\nabla^2\Psi$$To obtain the conventional form, divide the equation by $\hbar^2$, and use the de Broglie relation $\mathbf{p} = \hbar\mathbf{k}$:$$\left[\nabla^2 + \mathbf{k}^2\right]\Psi = 0$$
Wave equation (energy-momentum)
The energy and momentum of a free massless particle are related by $E^2 = \mathbf{p}^2c^2$. Insert $E = i\hbar\partial_t$ and $\mathbf{p} = -i\hbar\nabla$, and we get:$$-\hbar^2 \frac{\partial^2}{\partial t^2} = -c^2\hbar^2\nabla^2$$Divide this by $\hbar^2c^2$ and right-multiply by a wave function, and you've got the wave equation:$$\nabla^2\Psi = \frac{1}{c^2}\frac{\partial^2\Psi}{\partial t^2}$$Using the relativistic notation $\partial^2 = \partial_t^2/c^2-\nabla^2$, we may write the equation more compactly as:$$\partial^2\Psi = 0$$
Klein-Gordon equation (energy-momentum)
To get a wave equation for a free massive particle, we just start from the relativistic expression $E^2 = (\mathbf{p}c)^2 + (mc^2)^2$, and insert the relations $\mathbf{p} = -i\hbar\nabla$ and $E = i\hbar\partial_t$ that we got above:$$-\hbar^2\frac{\partial^2}{\partial t^2} = -\hbar^2c^2\nabla^2 + m^2c^4$$Then you right-multiply this by a wave $\Psi$, and you've got your equation of motion. To get the conventional form, insert $\partial^2 = \partial_t^2/c^2 - \nabla^2$ and rewrite it:$$\partial^2\Psi = -\left(\frac{mc}{\hbar}\right)^2\Psi$$
Where does it all come from?
Although the heuristic derivations above consider only the special case of a plane wave, the relations $E \sim \partial/\partial t$ and $\mathbf{p} \sim \nabla$ have quite deep origins. Technically, we say that the Hamiltonian is the
generator of time translations, and momentum is the generator of space translations. If you have studied classical mechanics, this is related to Noether's theorem, which connects the time-translation invariance of a theory to conservation of energy, and space-translation invariance to conservation of momentum. (Another important quantity is the angular momentum, the generator of rotations, which is related to rotational invariance of the theory.)
$^\dagger$ We usually refer to the operator as $H$ and the eigenvalues as $E$ in quantum mechanics, so just rename $E$ to $H$ to get the standard form of the Schroedinger equation. |
Table of Contents
Topological Complements of Normed Linear Subspaces
Recall from the Algebraic Complements of Linear Subspaces page that if $X$ is a linear space and $M \subset X$ is a linear subspace then another linear subspace $M' \subset X$ is said to be an algebraic complement of $M$ if both:
\begin{align} \quad M \cap M' = \{ 0 \} \quad \mathrm{and} \quad X = M + M' \end{align}
We said that if a linear subspace $M$ has an algebraic complement $M'$ that is finite dimensional, then $M$ is said to be finite co-dimensional. We also proved that every linear subspace $M$ of $X$ has an algebraic complement. When $X$ is a normed linear space, we can look at algebraic complements with particular properties. One important type of algebraic complement is defined below.
Definition: Let $X$ be a normed linear space and let $M \subset X$ be a linear subspace. A Topological Complement of $M$ is an algebraic complement $M'$ that is closed.
The following theorem tells us exactly when $M \subset X$ has a topological complement when $X$ is a Banach space.
Theorem 1: Let $X$ be a Banach space. Then a linear subspace $M \subset X$ has a topological complement if and only if there exists a continuous projection $P : X \to X$ with $P(X) = M$. Proof: $\Rightarrow$ Suppose that $M \subset X$ has a topological complement $M' \subset X$. Then $M \cap M' = \{ 0 \}$, $X = M \oplus M'$, and $M'$ is closed. In particular, every $x \in X$ can be written uniquely as:
\begin{align} \quad x = m + m' \end{align}
where $m \in M$ and $m' \in M'$. Define a function $P : X \to X$ for all $x = m + m' \in X$ by:
\begin{align} \quad P(x) = m \end{align}
We first show that $P$ is linear. Let $x = m + m' \in X$ and $y = n + n' \in X$ (where $n \in M$, $n' \in M'$) and let $\lambda \in \mathbb{C}$. Then:
\begin{align} \quad P(x + \lambda y) = P((m + \lambda n) + (m' + \lambda n')) = m + \lambda n = P(x) + \lambda P(y) \end{align}
So indeed, $P$ is linear. Furthermore, if $x = m + m' \in X$ then:
\begin{align} \quad P(P(x)) = P(m) = m = P(x) \end{align}
Therefore $P$ is idempotent, i.e., a projection. (Continuity of $P$ follows from the closed graph theorem when $M$ is also closed.) Lastly, we show that $P(X) = M$. Clearly $P(X) \subseteq M$. Now let $m \in M$ and take $x = m \in X$. Then $P(x) = m$, so $m \in P(X)$. Thus $M \subseteq P(X)$, and so $P(X) = M$. $\Leftarrow$ Suppose that there exists a continuous projection $P : X \to X$ with $P(X) = M$. Let $M' = \ker P$. Then by the lemma on the Projection/Idempotent Linear Operators page we have that:
\begin{align} \quad X = P(X) \oplus \ker P = M \oplus M' \end{align}
In particular, $M'$ is closed since $M' = \ker P = P^{-1}(\{ 0 \})$ and $P$ is a continuous map. So $M'$ is a topological complement of $M$. $\blacksquare$ |
For each $n = 1, 2, \dots$, suppose that $X_n$ is a continuous random variable with density $$\hspace{10mm}\mathrm{f}(x) = \begin{cases} \frac{1}{2}(1+x)e^{-x}, & \text{if $x \ge 0$ } \\[2ex] 0, & \text{if $x < 0$} \end{cases}$$ Set $Y_n = \min\{X_1, X_2, \dots, X_n\}$. Does $nY_n$ converge in distribution as $n \to \infty$? What is the limiting distribution of $nY_n$? Attempt: I was trying to find the distribution of $nY_n$, but it became too complicated. How should I proceed here? Any help would be much appreciated.
Let's try to find the distribution function of $nY_n$.
$P(nY_n \leq t) = 1 - P(\min \{ X_1, ..., X_n \} > \frac{t}{n}) = 1 - P(X_1 > \frac{t}{n})\cdot ... \cdot P(X_n > \frac{t}{n})$
We can calculate the tail of the random variable $X_1$ (and of all the variables, since I assume they are i.i.d.).
$P(X_1 > \frac{t}{n}) = \int_{\frac{t}{n}}^{\infty} \frac{1}{2}(1+x) \exp (-x) dx$.
Now, try to do this integral by parts. |
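Carrying the hint through, integration by parts gives $P(X_1 > s) = \frac{1}{2}(2+s)e^{-s}$, so $P(nY_n \leq t) = 1 - \left[\left(1+\frac{t}{2n}\right)e^{-t/n}\right]^n \to 1 - e^{-t/2}$, i.e., an exponential limit with rate $\frac{1}{2}$. A quick simulation (sampling $f$ as an equal mixture of $\mathrm{Exp}(1)$ and $\mathrm{Gamma}(2,1)$) supports this:

```python
import math
import random

random.seed(0)

def sample_x():
    # f(x) = (1/2)e^{-x} + (1/2)x e^{-x}: equal mixture of Exp(1) and Gamma(2, 1)
    if random.random() < 0.5:
        return random.expovariate(1.0)
    return random.expovariate(1.0) + random.expovariate(1.0)  # Gamma(2,1) = Exp + Exp

n, trials = 200, 5000
samples = [n * min(sample_x() for _ in range(n)) for _ in range(trials)]

# empirical P(n*Y_n <= 1) versus the claimed limit 1 - exp(-1/2) ~ 0.3935
frac = sum(s <= 1.0 for s in samples) / trials
print(round(frac, 2))
```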
Your comments above lead me to believe that you are asking about the distribution of $X$ marginalized over the classes. This marginal distribution is Gaussian in the first dimension but not in the second.
Since the first and second dimensions are independent they can be treated separately. The means and variances conditional on class in the first dimension are the same for both classes, so the mean and variance marginalized over class are the same as those conditional on class $(\mu_1 = 1,\,\sigma_{1}^2 = 2)$. In the second dimension the marginal distribution is a mixture of Gaussians, which is not itself Gaussian:
$p(x_2) = \frac{1}{2}\text{N}(1,1) + \frac{1}{2}\text{N}(3,1),$
in which $\text{N}(\mu, \sigma^2)$ is the probability density of the Gaussian (aka normal) distribution with mean $\mu$ and variance $\sigma^2$.
The mean of the second dimension variable is
$\text{E}(X_2) = \Pr(C_1)\text{E}(X_2|C_1) + \Pr(C_2) \text{E}(X_2|C_2)$
$\text{E}(X_2) = \frac{1}{2} \cdot 1 + \frac{1}{2} \cdot 3 = 2.$
And now the variance. In general,
$\text{Var}(Y) = \text{E}(Y^2) - \left[\text{E}(Y)\right]^2.$
We'll find it useful to rearrange this as
$\text{E}(Y^2) = \text{Var}(Y) + \left[\text{E}(Y)\right]^2.$
So
$\text{E}(X_2^2) = \Pr(C_1)\text{E}(X_2^2|C_1) + \Pr(C_2) \text{E}(X_2^2|C_2)$
$\text{E}(X_2^2) = \frac{1}{2}\left(\text{Var}(X_2|C_1) + \left[\text{E}(X_2|C_1)\right]^2 + \text{Var}(X_2|C_2) + \left[\text{E}(X_2|C_2)\right]^2\right)$
$\text{E}(X_2^2) = \frac{1}{2}(1 + 1^2 + 1 + 3^2) = 6.$
Now we can get
$\text{Var}(X_2) = \text{E}(X_2^2) - \left[\text{E}(X_2)\right]^2 = 6 - 2^2 = 2.$
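A quick Monte Carlo sanity check of these moment calculations, drawing $X_2$ from the class-conditional Gaussians with equal class probabilities:

```python
import random
import statistics

random.seed(1)
# X2 ~ N(1, 1) given C1 and N(3, 1) given C2, each class with probability 1/2
xs = [random.gauss(1, 1) if random.random() < 0.5 else random.gauss(3, 1)
      for _ in range(200_000)]

print(round(statistics.mean(xs), 2), round(statistics.variance(xs), 2))
```

Both the sample mean and the sample variance come out close to 2, matching the derivation.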
Here's what the distribution of $X_2$ looks like, along with a Gaussian distribution with the same mean and variance. |
Dilution of a series
The inclusion of any finite number of zeros between adjacent terms of a series. For the series
\begin{equation}\label{eq:1} \sum\limits_{k=0}^{\infty}u_k \end{equation}
a diluted series has the form
\begin{equation} u_0+0+\dots+0+u_1+0+\dots+0+u_2+\dots \end{equation} Dilution of a series does not affect convergence of the series, but it may violate summability of the series (after dilution a series \eqref{eq:1} summable to the number $s$ by some summation method may turn out to be not summable at all by this method or may turn out to be summable to a number $a\ne s$).
How to Cite This Entry:
Dilution of a series. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Dilution_of_a_series&oldid=30623 |
Symmetry and nonexistence of positive solutions for fractional systems
1. College of Science, Nanjing Forestry University, Nanjing, Jiangsu 210037, China
2. Department of Mathematical Sciences, Yeshiva University, New York, NY 10033, USA
3. Institute of Mathematics, School of Mathematical Sciences, Nanjing Normal University, Nanjing, Jiangsu 210023, China
The paper concerns the fractional system
$$\left\{ \begin{array}{ll} (-\Delta)^{\alpha/2} u = |x|^{a} v^{p}, & x \in \mathbb{R}^{n}, \\ (-\Delta)^{\alpha/2} v = |x|^{b} u^{q}, & x \in \mathbb{R}^{n}, \\ u \geq 0, \; v \geq 0, & \end{array} \right.$$
where $0 < \alpha < 2$; $a, b \geq 0$; $n \geq 2$; $1 < p < \frac{n+\alpha+a}{n-\alpha}$; and $1 < q < \frac{n+\alpha+b}{n-\alpha}$.
Keywords: the method of moving planes, fractional Laplacian, nonlinear elliptic system, radial symmetry, nonexistence of positive solutions.
Mathematics Subject Classification: 35B06, 35B09, 35B50, 35B53.
Citation: Pei Ma, Yan Li, Jihui Zhang. Symmetry and nonexistence of positive solutions for fractional systems. Communications on Pure & Applied Analysis, 2018, 17 (3): 1053-1070. doi: 10.3934/cpaa.2018051
|
I am given that $a_n \ge 0$ and $\sum a_n$ converges.
I need to show that $\sum \frac{\sqrt{a_n}}{n}$ converges also.
After a long time, I came up with a solution by re-ordering the series into:
$\sum_{a_n < \frac{1}{n^2}} \frac{\sqrt{a_n}}{n} + \sum_{a_n \ge \frac{1}{n^2}} \frac{\sqrt{a_n}}{n}$ which is bounded by $\sum\frac{1}{n^2} + \sum{a_n}$
I was wondering if there is a more basic solution because I don't want to be using ad-hoc methods on an exam. |
This blog article is taken from our book [1].
In most entry-level materials on pairs trading such as in [2], a mean reverting basket is usually constructed by this relationship:
\(P_t - \gamma Q_t = Z_t, \textrm{(eq. 1)}\)
where \(P_t\) is the price of asset \(P\) at time \(t\), \(Q_t\) the price of asset \(Q\) at time \(t\), and \(Z_t\) the price of the mean-reverting asset to trade. One way to find \(\gamma\) is to use cointegration. There are numerous problems with this approach, as detailed in [1]. To mention a few: the identified portfolios are dense; executions involve considerable transaction costs; the resultant portfolios behave like insignificant, non-tradable noise; and cointegration is too stringent and often an unnecessary requirement to satisfy.
This article highlights one important problem: it is much better to work in the space of (log) returns than in the space of prices. Therefore, we would like to build a mean reverting portfolio using a similar relationship to (eq. 1) but in returns rather than in prices.
The Benefits of Using Log Returns
When we compare the prices of two assets, [… TODO …]
A Model for a Mean Reverting Synthetic Asset
Let’s assume prices are log-normally distributed, which is a popular assumption in quantitative finance, especially in options pricing. Then prices are always positive, satisfying the “limited liability” condition of stocks, while the upside is unlimited and may go to infinity [5]. We have:
\(P_t = P_0\exp(r_{P,t}) \\ Q_t = Q_0\exp(r_{Q,t}), \textrm{(eq. 2)}\)
\(r_{P,t}\) is the return for asset \(P\) between times 0 and t; likewise for asset \(Q\).
Instead of applying a relationship, e.g., cointegration (possible but not a very good way), to the pair on prices, we can do it on returns. This is possible because, just like prices, the returns at time t are simply random walks, hence \(I(1)\) series. We have (dropping the time subscript):
\(r_P - \gamma r_Q = Z, \textrm{(eq. 3)}\)
Of course, the \(\gamma\) is a different coefficient; the \(Z\) a different white noise.
Remove the Common Risk Factors
Let’s consider this scenario. Suppose the oil price suddenly drops by half (as is developing in the current market). Exxon Mobil (XOM), being an oil company, follows suit. American Airlines (AAL), on the other hand, saves on fuel costs and may rise. The naive (eq. 3) will show a big disequilibrium and signal a trade on the pair. However, this disequilibrium is spurious. Both XOM and AAL are simply reacting to the new market/oil regime and adjusting their “fair” prices accordingly. (Eq. 3) fails to account for the oil factor common to both companies. Mean reversion trading should trade only on idiosyncratic risk that is not driven by systematic risks.
To improve upon (eq. 3), we need to remove systematic risks or common risk factors from the equation. Let’s consider CAPM. It says:
\(r = r_f + \beta (r_M - r_f) + \epsilon, \textrm{(eq. 4)}\)
The asset return, \(r\), and \(\epsilon\), are normally distributed random variables. The average market return, \(r_M\), and the risk free rate, \(r_f\), are constants.
Substituting (eq. 4) into the L.H.S. of (eq. 3) and grouping some constants, we have:
\((r_P - \beta_P (r_M-r_f)) - \gamma (r_Q - \beta_Q (r_M-r_f)) = \epsilon + \mathrm{constant}\)
To simplify things:
\((r_P - \beta_P r_M) - \gamma (r_Q - \beta_Q r_M) = \epsilon + \gamma_0, \textrm{(eq. 5)}\)
where \(\gamma_0\) is a constant.
(Eq. 5) removes the market/oil effect from the pair. When the market simply reaches a new regime, our pair should not change its value. In general, for \(n\) assets, we have:
\(\gamma_0 + \sum_{i=1}^{n}\gamma_i (r_i - \beta_i r_M) = \epsilon, \textrm{(eq. 6)}\)
For \(n\) assets and \(m\) common risk factors, we have:
\(\gamma_0 + \sum_{i=1}^{n}\gamma_i (r_i - \sum_{j=1}^{m}\beta_{ij}F_j) = \epsilon, \textrm{(eq. 7)}\)
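A minimal numerical sketch of this idea with one factor and two assets (all data simulated; the betas of 1.2 and 0.8 are made-up): estimate each asset's market beta by OLS, subtract the fitted market exposure, and the resulting spread is decorrelated from the market factor, as (eq. 5) intends.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
r_m = rng.normal(0, 0.01, T)        # market factor returns
r_p = 1.2 * r_m + rng.normal(0, 0.002, T)  # asset P: true beta 1.2 plus idiosyncratic noise
r_q = 0.8 * r_m + rng.normal(0, 0.002, T)  # asset Q: true beta 0.8 plus idiosyncratic noise

# estimate betas by OLS against the market factor
beta_p = np.polyfit(r_m, r_p, 1)[0]
beta_q = np.polyfit(r_m, r_q, 1)[0]

# beta-adjusted spread (eq. 5 with gamma = 1): the market move cancels out
spread = (r_p - beta_p * r_m) - (r_q - beta_q * r_m)
print(abs(np.corrcoef(spread, r_m)[0, 1]) < 0.05)  # True
```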
Trade on Dollar Values
It is easy to see that if we use (eq. 1) to trade the pair, then to go long (short) \(Z\), we buy (sell) 1 share of \(P\) and sell (buy) \(\gamma\) shares of \(Q\). How do we trade using (eqs. 3, 5, 6, 7)? When we work in log-return space, we trade, for each stock \(i\), a dollar amount \(\gamma_i\); that is, \(\gamma_i/P_i\) shares, where \(P_i\) is the current price of stock \(i\).
Let’s rewrite (eq. 3) in the price space.
\(\log(P/P_0) - \gamma \log(Q/Q_0) = Z\)
The L.H.S. expands as
\(\log(P/P_0) - \gamma \log(Q/Q_0) = \log(1 + \frac{P-P_0}{P_0}) - \gamma \log(1 + \frac{Q-Q_0}{Q_0})\)
Using the relationship \(\log(1+r) \approx r, r \ll 1\), we have
\(\log(1 + \frac{P-P_0}{P_0}) - \gamma \log(1 + \frac{Q-Q_0}{Q_0}) \approx \frac{P-P_0}{P_0} - \gamma \frac{Q-Q_0}{Q_0} \\ = (\frac{P}{P_0} -1) - \gamma (\frac{Q}{Q_0} -1) \\ = \frac{1}{P_0}P - \gamma \frac{1}{Q_0}Q + \mathrm{constant} \\= Z\)
Dropping the constant, we have:
\(\frac{1}{P_0}P - \gamma \frac{1}{Q_0}Q = Z, \textrm{(eq. 8)}\)
That is, we buy \(\frac{1}{P_0}\) shares of \(P\) at price \(P_0\) and sell \(\frac{\gamma}{Q_0}\) shares of \(Q\) at price \(Q_0\). We can easily extend (eq. 8) to the general case: we trade \(\gamma_i/P_i\) shares of each stock \(i\).
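A trivial helper makes the conversion explicit (the weights and prices below are made up for illustration):

```python
def shares(gammas, prices):
    """Convert log-return weights gamma_i into share counts gamma_i / P_i (eq. 8 generalized)."""
    return [g / p for g, p in zip(gammas, prices)]

# long 1.0 unit of P trading at $80, short gamma = 1.5 units of Q trading at $120
print(shares([1.0, -1.5], [80.0, 120.0]))  # [0.0125, -0.0125]
```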
References:
[1] Numerical Methods in Quantitative Trading, Dr. Haksun Li, Dr. Ken W. Yiu, Dr. Kevin H. Sun
[2] Pairs Trading: Quantitative Methods and Analysis, Ganapathy Vidyamurthy
[3] Identifying Small Mean Reverting Portfolios, Alexandre d’Aspremont
[4] Developing high-frequency equities trading models, Infantino
[5] The Econometrics of Financial Markets, John Y. Campbell, Andrew W. Lo, and A. Craig MacKinlay |
https://doi.org/10.1351/goldbook.Q04968
The square root of the expression in which the sum of squared observations is divided by the number \(n\). It can be calculated by the formula: \[\overline{x}_{\text{q}}=\sqrt{\frac{\sum x_{i}^{2}}{n}}\]
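For instance, a quick numeric sketch of the formula:

```python
import math

def quadratic_mean(xs):
    """Root mean square: sqrt of the mean of the squared observations."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

print(quadratic_mean([1, 2, 3, 4]))  # sqrt(30/4) ~ 2.7386
```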
Note: This quantity is also sometimes directly (but inappropriately) calculated, for example, when an observable is proportional to the square of concentration. The quadratic mean, also known as the root mean square, is sometimes appropriate, however, as in certain of the formulae connected with linear calibration functions. |
Capacitor ESR (Equivalent Series Resistance) measurement is very useful in diagnosing issues with power supplies. In linear power supplies, a high-ESR filter capacitor may cause excessive ripple on the voltage rails and can overheat due to its increased resistance. In low-dropout linear supplies, the ESR of the output capacitors affects loop stability, and excessive ESR in these capacitors can make the power supply unstable, which in turn can lead to out-of-tolerance voltage being applied to the load and cause damage down the road. In switching-mode power supplies, the ESR of capacitors is even more critical. In this blog post I will discuss how to measure the ESR of a capacitor using a function generator and a multimeter. A short video on this topic is also included towards the end.
A typical capacitor can be modeled as an ideal capacitor in series with a resistor: the equivalent series resistance. If we apply an AC voltage to the capacitor under test via a current-limiting resistor, we have the following circuit:
The circuit can be thought of as a simple resistor divider if the frequency of the AC source is sufficiently high, since the reactance of the capacitor is inversely proportional to the applied frequency for any given capacitance. Thus, we can use the measured voltage across the capacitor to calculate the ESR:
\[V_{ESR} \approx V_s \frac{R_{ESR}}{R_{ESR} + R_s} \]
Solving for ESR we get:
\[R_{ESR} \approx \frac{R_s}{\frac{V_s}{V_{ESR}} - 1}\]
If we use a 50 Ohm output impedance function generator to provide the AC input we can hook the capacitor under test directly to the output of the function generator and measure the AC voltage across the capacitor and calculate the ESR using the equation above.
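With the common 50 Ohm generator as the source, the formula above can be sketched in a few lines (the voltages below are illustrative):

```python
def esr(v_s, v_esr, r_s=50.0):
    """ESR from source voltage v_s, measured capacitor voltage v_esr, and source impedance r_s."""
    return r_s / (v_s / v_esr - 1)

print(esr(0.1, 0.05))            # 50.0 ohms: half the source voltage means ESR equals r_s
print(round(esr(0.1, 0.01), 2))  # 5.56 ohms for a 10 mV reading
```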
How do we determine what voltage to use? Since electrolytic capacitors are polarized, we can either use an AC voltage with a fixed DC bias or simply use an AC voltage low enough that the capacitor under test is never subjected to more than its maximum reverse voltage (typically less than 1V). Most ESR meters use this second approach as it is easy to implement and we do not need to worry about the polarity of the measurement. Here we choose 100mV as our measurement voltage. This voltage is chosen because it is below a P/N junction’s forward voltage (roughly 0.2V to 0.7V depending on type), so that we can perform ESR measurements in-circuit.
The graph below plots the calculated capacitor ESR against the measured voltage when a 100mV signal from a 50 Ohm source is used.
Our calculation so far is based on the assumption that the reactance of the capacitor is near zero. So in order to get the most accurate result, it is important to choose a measurement frequency, based on the capacitor value, at which the reactance is negligible. Recall that the reactance of a capacitor is:
\[X_c = -\frac{1}{2\pi f C}\]
If we ignore the sign and fix the reactance, we can obtain the relationship between capacitance and frequency. The following graph shows such relationships for three reactance values (0.5, 1, and 2 Ohm).
This graph is useful in determining the minimum frequency needed for the measurement for a given capacitance in order for the reactance to stay below a predetermined value. For instance, if we have a 10 uF capacitor, the minimum frequency needed to ensure that the reactance is under 2 Ohm is roughly 8 kHz. If we want the reactance to be less than 1 Ohm, then the minimum frequency needed is around 16 kHz. And if we want to reduce the reactance further to 0.5 Ohm, we will need to adjust the frequency to above 30 kHz.
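A small helper reproduces the figures quoted above directly from the reactance formula (frequencies in Hz; the values are approximate):

```python
import math

def min_frequency(c, x_max):
    """Smallest frequency at which a capacitance c (farads) has reactance magnitude below x_max (ohms)."""
    return 1.0 / (2 * math.pi * x_max * c)

print(round(min_frequency(10e-6, 2)))    # 7958  (~8 kHz for 10 uF at 2 ohm, as in the text)
print(round(min_frequency(10e-6, 0.5)))  # 31831 (~30 kHz for 10 uF at 0.5 ohm)
```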
While higher frequencies are in theory better for ESR measurement because of the reduced capacitive reactance, they are not always desirable. This is because the reactance due to the inductance in the circuit increases proportionally with input frequency, and this reactance could significantly skew the measurement result. So for large filter capacitors the frequency used is typically between 1 and 5 kHz, and for smaller capacitors higher frequencies between 10 kHz and 50 kHz can be used. Note that in order to measure the high-frequency AC voltage accurately, you will need to use a multimeter with comparable bandwidth (e.g. Keithley 197). |
Unitary operations are only a special case of quantum operations, which are linear, completely positive maps ("channels") that map density operators to density operators. This becomes obvious in the Kraus representation of the channel, $$\Phi(\rho)=\sum_{i=1}^n K_i \rho K_i^\dagger,$$ where the so-called Kraus operators $K_i$ fulfill $\sum_{i=1}^n K_i^\dagger K_i = \mathbb{1}$ ...
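As a concrete sketch of a non-unitary channel in Kraus form, the code below uses the amplitude-damping channel (the damping probability is illustrative), checking the Kraus completeness relation and that the output is still a valid density operator:

```python
import numpy as np

g = 0.3  # damping probability (illustrative)
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]])  # amplitude-damping Kraus operators
K1 = np.array([[0, np.sqrt(g)], [0, 0]])

# completeness relation: sum_i K_i^dagger K_i = identity (trace preservation)
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))

rho = np.array([[0.5, 0.5], [0.5, 0.5]])  # density matrix of |+><+|
rho_out = K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T
print(np.trace(rho_out).real)  # 1.0 — still a valid density operator
```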
There are a lot of different ways of looking at qubits, and the state vector formalism is just one of them. In a general linear-algebraic sense a measurement is projection onto a basis. Here I will provide insight with an example from the Pauli observable point of view, that is, the usual circuit model of QC. Firstly, it's of interest which basis the state ...
The question is, how did you get to your final state? The magic is in the gate operations that transformed your initial state to your final state. If we knew the final state to begin with, we wouldn't need a quantum computer - we'd have the answer already and could, as you suggest, simply sample from the corresponding probability distribution. Unlike ...
Entangling measurements are powerful. In fact, they are so powerful that universal quantum computation can be performed by sequences of entangling measurements only (i.e., without extra need for unitary gates or special input state preparations): Nielsen showed that universal quantum computation is possible given a quantum memory and the ability to perform ...
Short Answer: Quantum operations do not need to be unitary. In fact, many quantum algorithms and protocols make use of non-unitarity. Long Answer: Measurements are arguably the most obvious example of non-unitary transitions being a fundamental component of algorithms (in the sense that a "measurement" is equivalent to sampling from the probability ...
At risk of going off-topic from quantum computing and into physics, I'll answer what I think is a relevant subquestion of this topic, and use it to inform the discussion of unitary gates in quantum computing. The question here is: Why do we want unitarity in quantum gates? The less specific answer is as above, it gives us 'reversibility', or as ...
There have been efforts to construct "floating point" representations of small rotations of qubit states, such as: Floating Point Representations in Quantum Circuit Synthesis. But there doesn't seem to be any international standard like the one you mentioned, i.e. IEEE 754. IEEE 7130 - Standard for Quantum Computing Definitions is an ongoing project. ...
Break the problem in parts.Say we have already sent $\mid 00 \rangle$ to $\frac{1}{\sqrt{3}} \mid 00 \rangle + \frac{\sqrt{2}}{\sqrt{3}}\mid 01 \rangle$. We can send that to $\frac{1}{\sqrt{3}} \mid 00 \rangle + (\frac{1}{2} (1+i))\frac{\sqrt{2}}{\sqrt{3}}\mid 01 \rangle + (\frac{1}{2} (1-i))\frac{\sqrt{2}}{\sqrt{3}}\mid 10 \rangle$ by a $\sqrt{SWAP}$. ...
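The $\sqrt{SWAP}$ step above can be checked numerically. A NumPy sketch, using the basis ordering $|00\rangle, |01\rangle, |10\rangle, |11\rangle$:

```python
import numpy as np

# sqrt(SWAP) in the computational basis |00>, |01>, |10>, |11>.
a, b = (1 + 1j) / 2, (1 - 1j) / 2
SQRT_SWAP = np.array([[1, 0, 0, 0],
                      [0, a, b, 0],
                      [0, b, a, 0],
                      [0, 0, 0, 1]])

# (1/sqrt 3)|00> + (sqrt 2/sqrt 3)|01>
psi = np.array([1 / np.sqrt(3), np.sqrt(2 / 3), 0, 0])
out = SQRT_SWAP @ psi
# Expected: (1/sqrt 3)|00> + ((1+i)/2)(sqrt 2/sqrt 3)|01> + ((1-i)/2)(sqrt 2/sqrt 3)|10>
expected = np.array([1 / np.sqrt(3), a * np.sqrt(2 / 3), b * np.sqrt(2 / 3), 0])
```

Squaring `SQRT_SWAP` recovers the full SWAP gate, which is a quick sanity check on the matrix.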
In simpler terms your question is: if noise/decoherence keeps entering the computation, how can a big computation possibly survive? The key concept you're missing is quantum error correction, which can pump noise/decoherence back out of the system. Of particular practical interest is the surface code.
Apart from the formal result about #P-hardness, there's something worth touching on, about the nature of strong simulation itself. I'll comment first on strong simulation, and then specifically on the quantum case. 1. Strong simulation even of classical randomised computation is hard. Strong simulation is a very powerful concept — not only in the fact ...
There are several misconceptions here, most of them originating from exposure to only the pure state formalism of quantum mechanics, so let's address them one by one: "All quantum operations must be unitary to allow reversibility, but what about measurement?" This is false. In general, the states of a quantum system are not just vectors in a Hilbert space $...
We simply translate the binary result of a qubit measurement to our guess whether it's the first state or the second, calculate the probability of success for every possible measurement of the qubit, and then find the maximum of a function of two variables (on the two-sphere). First, something that we won't really need, the precise description of the ...
Suppose that, prior to measurement, your $n$-qubit system is in some state $\lvert \psi \rangle \in \mathcal H_2^{\otimes n}$, where $\mathcal H_2 \cong \mathbb C^2$ is the Hilbert space of a single qubit. Write$$ \lvert \psi \rangle = \sum_{x \in \{0,1\}^n} u_x \lvert x \rangle $$for some coefficients $u_x \in \mathbb C$ such that $\sum_x \lvert u_x \...
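In code, the Born-rule probabilities implicit in this expansion are just $|u_x|^2$ for each bitstring $x$. A small stdlib-only sketch (the Bell-state example is hypothetical, not from the original answer):

```python
from math import sqrt

def outcome_probs(amplitudes):
    # For |psi> = sum_x u_x |x>, P(x) = |u_x|^2 (normalised amplitudes assumed).
    # Keys are the bitstrings x in the computational basis ordering.
    n_bits = (len(amplitudes) - 1).bit_length()
    return {format(x, f'0{n_bits}b'): abs(u) ** 2 for x, u in enumerate(amplitudes)}

# Example: the Bell state (|00> + |11>) / sqrt(2)
probs = outcome_probs([1 / sqrt(2), 0, 0, 1 / sqrt(2)])
```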
Let's step back from QC for a moment and think about a textbook example: the projector onto a position eigenstate $|x\rangle$. This projective measurement is obviously unphysical, as the eigenstates $|x\rangle$ are themselves unphysical due to the uncertainty principle. The real measurement of position, then, is one with some uncertainty. One can treat this either ...
A $1$-qubit system, in general, can be in a state $a|0\rangle+b|1\rangle$ where $|0\rangle$ and $|1\rangle$ are basis vectors of a two dimensional complex vector space. The standard basis for measurement here is $\{|0\rangle,|1\rangle\}$. When you are measuring in this basis, with $\frac{|a|^2}{|a|^2+|b|^2}\times 100\%$ probability you will find that the ...
I'll tell you how to create any two-qubit pure state you might ever be interested in. Hopefully you can use it to generate the state you want. Using a single qubit rotation followed by a cnot, it is possible to create states of the form$$ \alpha \, |0\rangle \otimes |0\rangle + \beta \, |1\rangle \otimes |1\rangle .$$Then you can apply an arbitrary ...
Here is how you might go about designing such a circuit.$\def\ket#1{\lvert#1\rangle}$Suppose that you would like to produce the state $\ket{\psi} = \tfrac{1}{\sqrt 3} \bigl( \ket{00} + \ket{01} + \ket{10} \bigr)$. Note the normalisation of ${\small 1}/\small \sqrt 3$, which is necessary for $\ket{\psi}$ to be a unit vector. If we want to consider a ...
An observable only needs to be Hermitian, and can have any real eigenvalues. They don't even need to be distinct eigenvalues: if there are repeated eigenvalues, we say that the eigenspace for that eigenvalue is degenerate. (In the case of observables on a qubit, having a repeated eigenvalue makes the observable rather uninteresting, because absolutely all ...
So, Bob is given the following state (also called the maximally-mixed state):$\rho = \frac{1}{2}|0\rangle\langle 0| + \frac{1}{2}|1\rangle\langle 1| = \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{bmatrix}$As you noticed, one nice feature of density matrices is they enable us to capture the uncertainty of an outcome of a measurement and ...
Yes. Remember that you require several properties of a projective measurement, including $P_i^2=P_i$ for each projector, and$$\sum_iP_i=\mathbb{I}.$$The first of these shows you that the $P_i$ have eigenvalues 0 and 1. Now take a $|\phi\rangle$ that is an eigenvector of eigenvalue 1 of a particular projector $P_i$. Use this in the identity relation:$$\...
For density matrices $\rho_A$ and $\rho_B$ having eigenvalues $\lambda^{\left(A\right)}$ and $\lambda^{\left(B\right)}$, \begin{align}S\left(\rho_A\otimes\rho_B\right) &= -\rho_A\otimes\rho_B\ln\left(\rho_A\otimes\rho_B\right)\\&= -\sum_{j, k}\lambda^{\left(A\right)}_j\lambda^{\left(B\right)}_k\ln\left(\lambda^{\left(A\right)}_j\lambda^{\left(B\...
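The additivity this computation establishes, $S(\rho_A\otimes\rho_B)=S(\rho_A)+S(\rho_B)$, is easy to check numerically. A NumPy sketch with made-up diagonal states (natural logarithm, matching the derivation above):

```python
import numpy as np

def entropy(rho):
    # Von Neumann entropy S(rho) = -sum_j lam_j ln(lam_j); zero
    # eigenvalues contribute nothing and are dropped.
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log(lam)).sum())

rho_A = np.diag([0.25, 0.75])     # arbitrary example states
rho_B = np.diag([0.5, 0.5])
joint = np.kron(rho_A, rho_B)
# entropy(joint) equals entropy(rho_A) + entropy(rho_B)
```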
Less formally stated than the other answers, but for beginners I like the intuitive method outlined by Prof. Vazirani in this video. Suppose you have a general two-qubit state:$|\psi\rangle = \begin{bmatrix} \alpha_{00} \\ \alpha_{01} \\ \alpha_{10} \\ \alpha_{11} \end{bmatrix} = \alpha_{00}|00\rangle + \alpha_{01}|01\rangle + \alpha_{10}|10\rangle + \...
You define the projectors$$P_0=|0\rangle\langle 0|=\left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array}\right)\qquad P_1=|1\rangle\langle 1|=\left(\begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array}\right).$$For any state $|\psi\rangle$, the probability of getting answer $x$ is $p_x=\langle\psi|P_x|\psi\rangle$ and, after the measurement, the ...
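A short NumPy sketch of the two formulas just stated: the probability $p_x=\langle\psi|P_x|\psi\rangle$ and the renormalised post-measurement state $P_x|\psi\rangle/\sqrt{p_x}$ (the $|+\rangle$ example is an illustration, not from the original answer):

```python
import numpy as np

P = [np.array([[1, 0], [0, 0]]),   # P_0 = |0><0|
     np.array([[0, 0], [0, 1]])]   # P_1 = |1><1|

def measure(psi):
    # Returns a list of (p_x, post-measurement state) for x = 0, 1.
    results = []
    for Px in P:
        p = float(np.real(psi.conj() @ Px @ psi))   # Born rule
        post = Px @ psi / np.sqrt(p) if p > 0 else None
        results.append((p, post))
    return results

outcomes = measure(np.array([1.0, 1.0]) / np.sqrt(2))   # the |+> state
```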
I am not really sure what you mean by "unmeasuring" a qubit, but if you mean to recover the qubit that was measured by manipulating the post-measurement state, then I am afraid that the answer is no. When a quantum state is measured, its superposition state is collapsed to one of the possible outcomes of the measurement, and so the qubit is ...
I'll add a small bit complementing the other answers, just about the idea of measurement. Measurement is usually taken as a postulate of quantum mechanics. There are usually some preceding postulates about Hilbert spaces, but following that: Every measurable physical quantity $A$ is described by an operator $\hat{A}$ acting on a Hilbert space $\mathcal{H}$. ...
Talking about bases such as $\left|0\rangle\langle0\right|$ and $\left|1\rangle\langle1\right|$ (or the equivalent vector notation $\left|0\right>$ and $\left|1\right>$, which I'll use in this answer) at the same time as 'horizontal' and 'vertical' are, to a fair extent (pardon the pun) orthogonal concepts.On a Bloch sphere, there are 3 different ...
I'm going to go for an intuitive answer here, as requested. Let's go in steps: Your input is (often?) classical, so up to that point we're good. Then you start doing quantum operations and achieve, for example, quantum superpositions between different states. Here you're right, you cannot look to check if you're doing OK, and that indeed is a problem, or ...
Is that correct? Is it [not] necessary to measure the ancilla qubits in Shor's algorithm? Correct, it is not necessary to measure the ancillae. This is easily seen by appealing to the no-communication theorem. If measuring the ancillae could affect the success of the algorithm, you could communicate faster-than-light by starting the algorithm many times, ...
You probably want to look at old posts about Simon's algorithm, such as the rather complete explanation I gave here, or talking more specifically about the number of times the algorithm has to be repeated.Yes, you have to repeat the algorithm several times to get different pieces of classical data, which you then process classically to get your final ... |
Is the usual \(\leq\) ordering on the set \(\mathbb{R}\) of real numbers a total order?
1. For any \\( a \in \mathbb{R} \\), \\( a \le a \\) (reflexivity)
2. For any \\( a, b, c \in \mathbb{R} \\), \\( a \le b \\) and \\( b \le c \\) implies \\( a \le c \\) (transitivity)
3. For any \\( a, b \in \mathbb{R} \\), \\( a \le b \\) and \\( b \le a \\) implies \\( a = b \\) (antisymmetry)
4. For any \\( a, b \in \mathbb{R} \\), we have either \\( a \le b \\) or \\( b \le a \\) (totality)
So, yes.
Perhaps, due to our interest in things categorical, we can enjoy (instead of Cauchy sequence methods) to see the order of the (extended) real line as the [Dedekind-MacNeille_completion of the rationals](https://en.wikipedia.org/wiki/Dedekind%E2%80%93MacNeille_completion#Examples). Matthew has told us interesting things about it [before](https://forum.azimuthproject.org/discussion/comment/16714/#Comment_16714).
Hausdorff, on his part, in the book I mentioned [here](https://forum.azimuthproject.org/discussion/comment/16154/#Comment_16154), [says](https://books.google.es/books?id=M_skkA3r-QAC&pg=PA85&dq=each+everywhere+dense+type&hl=en&sa=X&ved=0ahUKEwjLkJao-9DaAhWD2SwKHVrkBcIQ6AEIKTAA#v=onepage&q=each%20everywhere%20dense%20type&f=false) that any total order, dense, and without \\( (\omega,\omega^*) \\) [gaps](https://en.wikipedia.org/wiki/Hausdorff_gap), has embedded the real line. I don't have a handy reference for an isomorphism instead of an embedding ("everywhere dense" just means dense here).
I believe the [hyperreal numbers](https://en.wikipedia.org/wiki/Hyperreal_number) give an example of a dense total order that embeds the reals without being isomorphic to it. (I can’t speak to the gaps condition though, and it’s just plausible that they’re isomorphic at the level of mere posets rather than ordered fields.)
That's an interesting question, Jonathan.
[Jonathan Castello](https://forum.azimuthproject.org/profile/2316/Jonathan%20Castello)
> I believe the hyperreal numbers give an example of a dense total order that embeds the reals without being isomorphic to it. (I can’t speak to the gaps condition though, and it’s just plausible that they’re isomorphic at the level of mere posets rather than ordered fields.)
In fact, while they are not isomorphic as lattices, they are in fact isomorphic as mere posets as you intuited.
First, we can observe that \\(|\mathbb{R}| = |^\ast \mathbb{R}|\\). This is because \\(^\ast \mathbb{R}\\) embeds \\(\mathbb{R}\\) and is constructed from countably infinitely many copies of \\(\mathbb{R}\\) and taking a [quotient algebra](https://en.wikipedia.org/wiki/Quotient_algebra) modulo a free ultra-filter. We have been talking about quotient algebras and filters in a couple other threads.
Next, observe that all [unbounded dense linear orders](https://en.wikipedia.org/wiki/Dense_order) of cardinality \\(\aleph_0\\) are isomorphic. This is due to a rather old theorem credited to George Cantor. Next, apply the [Morley categoricity theorem](https://en.wikipedia.org/wiki/Morley%27s_categoricity_theorem). From this we have that all unbounded dense linear orders with cardinality \\(\kappa \geq \aleph_0\\) are isomorphic. This is referred to in model theory as *\\(\kappa\\)-categoricity*.
Since the hyperreals and the reals have the same cardinality, they are isomorphic as unbounded dense linear orders.
**Puzzle MD 1:** Prove Cantor's theorem that all countable unbounded dense linear orders are isomorphic.
Hi Matthew, nice application of the categoricity theorem! One question if I may. You said:
> In fact, while they are not isomorphic as lattices, they are in fact isomorphic as mere posets as you intuited.
But in my understanding the lattice and poset structure is inter-translatable as in [here](https://en.wikipedia.org/wiki/Lattice_(order)#Connection_between_the_two_definitions). Can two lattices be isomorphic and their associated posets not?
(EDIT: I clearly have no idea what I'm saying and I should probably take a nap. Disregard this post.)
> Can two lattices be isomorphic and their associated posets not?
If two lattices are isomorphic preserving *infima* and *suprema*, ie *limits*, then they are order isomorphic.
The reals and hyperreals provide a rather confusing counter example to the converse. I am admittedly struggling with this myself, as it is highly non-constructive.
From model theory we have two maps \\(\phi : \mathbb{R} \to\, ^\ast \mathbb{R} \\) and \\(\psi :\, ^\ast\mathbb{R} \to \mathbb{R} \\) such that:
- if \\(x \leq_{\mathbb{R}} y\\) then \\(\phi(x) \leq_{^\ast \mathbb{R}} \phi(y)\\)
- if \\(p \leq_{^\ast \mathbb{R}} q\\) then \\(\psi(p) \leq_{\mathbb{R}} \psi(q)\\)
- \\(\psi(\phi(x)) = x\\) and \\(\phi(\psi(p)) = p\\)
Now consider \\(\\{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \\}\\).
The hyperreals famously violate the [Archimedean property](https://en.wikipedia.org/wiki/Archimedean_property). Because of this \\(\bigwedge_{^\ast \mathbb{R}} \\{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \\}\\) does not exist.
On the other hand, if we consider \\( \bigwedge_{\mathbb{R}} \\{ \psi(x) \, : \, x \in \mathbb{R} \text{ and } 0 < x \\}\\), that *does* exist by the completeness of the real numbers (as it is bounded below by \\(\psi(0)\\)).
Hence
$$
\bigwedge_{\mathbb{R}} \\{ \psi(x) \, : \, x \in \mathbb{R} \text{ and } 0 < x \\} \neq \psi\left(\bigwedge_{^\ast\mathbb{R}} \\{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \\}\right)
$$
So \\(\psi\\) *cannot* be a complete lattice homomorphism, even though it is part of an order isomorphism.
However, just to complicate matters, I believe that \\(\phi\\) and \\(\psi\\) form a mere *lattice* isomorphism, preserving finite meets and joins. |
A free-floating planet candidate from the OGLE and KMTNet surveys
(2017)
Current microlensing surveys are sensitive to free-floating planets down to Earth-mass objects. All published microlensing events attributed to unbound planets were identified based on their short timescale (below 2 d), ...
OGLE-2016-BLG-1190Lb: First Spitzer Bulge Planet Lies Near the Planet/Brown-Dwarf Boundary
(2017)
We report the discovery of OGLE-2016-BLG-1190Lb, which is likely to be the first Spitzer microlensing planet in the Galactic bulge/bar, an assignation that can be confirmed by two epochs of high-resolution imaging of the ...
OGLE-2015-BLG-1459L: The Challenges of Exo-Moon Microlensing
(2017)
We show that dense OGLE and KMTNet $I$-band survey data require four bodies (sources plus lenses) to explain the microlensing light curve of OGLE-2015-BLG-1459. However, these can equally well consist of three lenses ...
OGLE-2017-BLG-1130: The First Binary Gravitational Microlens Detected From Spitzer Only
(2018)
We analyze the binary gravitational microlensing event OGLE-2017-BLG-1130 (mass ratio q~0.45), the first published case in which the binary anomaly was only detected by the Spitzer Space Telescope. This event provides ...
OGLE-2017-BLG-1434Lb: Eighth q < 1 * 10^-4 Mass-Ratio Microlens Planet Confirms Turnover in Planet Mass-Ratio Function
(2018)
We report the discovery of a cold Super-Earth planet (m_p=4.4 +/- 0.5 M_Earth) orbiting a low-mass (M=0.23 +/- 0.03 M_Sun) M dwarf at projected separation a_perp = 1.18 +/- 0.10 AU, i.e., about 1.9 times the snow line. ...
OGLE-2017-BLG-0373Lb: A Jovian Mass-Ratio Planet Exposes A New Accidental Microlensing Degeneracy
(2018)
We report the discovery of microlensing planet OGLE-2017-BLG-0373Lb. We show that while the planet-host system has an unambiguous microlens topology, there are two geometries within this topology that fit the data equally ...
OGLE-2017-BLG-1522: A giant planet around a brown dwarf located in the Galactic bulge
(2018)
We report the discovery of a giant planet in the OGLE-2017-BLG-1522 microlensing event. The planetary perturbations were clearly identified by high-cadence survey experiments despite the relatively short event timescale ...
Spitzer Opens New Path to Break Classic Degeneracy for Jupiter-Mass Microlensing Planet OGLE-2017-BLG-1140Lb
(2018)
We analyze the combined Spitzer and ground-based data for OGLE-2017-BLG-1140 and show that the event was generated by a Jupiter-class (m_p\simeq 1.6 M_jup) planet orbiting a mid-late M dwarf (M\simeq 0.2 M_\odot) that ...
OGLE-2016-BLG-1266: A Probable Brown-Dwarf/Planet Binary at the Deuterium Fusion Limit
(2018)
We report the discovery, via the microlensing method, of a new very-low-mass binary system. By combining measurements from Earth and from the Spitzer telescope in Earth-trailing orbit, we are able to measure the ...
KMT-2016-BLG-0212: First KMTNet-Only Discovery of a Substellar Companion
(2018)
We present the analysis of KMT-2016-BLG-0212, a low flux-variation $(I_{\rm flux-var}\sim 20$) microlensing event, which is well-covered by high-cadence data from the three Korea Microlensing Telescope Network (KMTNet) ... |
Let $d_1<d_2<\dots<d_k$ be integers. Then the number of integers $n\leq x$, such that $n+d_1, n+d_2, \ldots, n+d_k$ are simultaneously prime, is bounded above by $$ \mathfrak{S}(d_1, \ldots, d_k) (Ck)^k\frac{x}{\log^k x}, $$ where $\mathfrak{S}(d_1, \ldots, d_k)$ is a singular series.
For $k$ fixed this follows without too much trouble from Selberg's sieve or the large sieve. Very precise results of this type are given e.g. in the book by Halberstam and Richert. However, for my application I need $k$ of order $\sqrt{\log\log x}$. With some effort the large sieve should still work. However, this question appears so natural that certainly someone already solved this problem.
So my first question is: is there a reference for a result as above which is uniform in $k$ (at least up to $(\log\log x)^{1/2+\epsilon}$)?
The second question is: From which point onward can one replace the factor $k^k$ by something like $k^{o(k)}$? I guess that $k^{k/2}$ would already be pretty difficult, at least for not too large $k$.
Third question: When bounding prime $k$-tuples, for which values of $k$ should one switch from Selberg's sieve to the large sieve, and for which value of $k$ from the large sieve to the larger sieve? From the work of Elsholtz it is clear that the larger sieve is best for $k$ of order $\log x$, which is probably the right order, but the change between Selberg's sieve and the large sieve appears less clear.
Jan-Christoph Schlage-Puchta |
Definition:Set Union/Family of Sets/Two Sets
Definition
From the definition of the union of $S_i$:
$\displaystyle \bigcup_{i \mathop \in I} S_i := \set {x: \exists i \in I: x \in S_i}$
it follows that:
$\displaystyle \bigcup \set {S_\alpha, S_\beta} := S_\alpha \cup S_\beta$

Sources

- 1960: Paul R. Halmos: Naive Set Theory: $\S 9$: Families
- 1975: T.S. Blyth: Set Theory and Abstract Algebra: $\S 6$. Indexed families; partitions; equivalence relations
- 1975: Bert Mendelson: Introduction to Topology (3rd ed.): Chapter $1$: Theory of Sets: $\S 4$: Indexed Families of Sets
- 1993: Keith Devlin: The Joy of Sets: Fundamentals of Contemporary Set Theory (2nd ed.): $\S 1$: Naive Set Theory: $\S 1.4$: Sets of Sets
- 1999: András Hajnal and Peter Hamburger: Set Theory: $1$. Notation, Conventions: $12$ |
I recently read this in an article by Battig and Jarrow, "the first fundamental theorem relates the notion of no arbitrage to the existence of an equivalent martingale measure, while the second fundamental theorem relates the notion of market completeness to the uniqueness of the equivalent martingale measure" can someone explain the difference between the two fundamental theorems? As easy and palatable as possible, preferably.
Let $\Omega$ be the outcome space at some future date and fix a specific outcome $\omega \in \Omega$. Now consider a portfolio that gives one unit of currency if $\omega$ happens and zero otherwise, i.e., with payoff $\mathbb{I}(\omega)$. Any other payoff function can be given as a linear combination of these portfolios. The price of this portfolio today is
\begin{equation} q_\omega = E^\mathbb{Q}[\mathbb{I}(\omega)] = \mathbb{Q}[\omega] \end{equation}
The price is simply the probability of $\omega$ happening under an equivalent martingale measure $\mathbb{Q}[\omega]$. The no-arbitrage condition tells us that the price $q_\omega $ should be unique as long as $\mathbb{I}(\omega)$ can be replicated (FTAP1). If $\mathbb{I}(\omega)$ can be replicated for all possible $\omega \in \Omega$ (completeness), then all $\mathbb{Q}[\omega]$ should also be unique (FTAP2).
As I discussed in another answer, in the case of a stock with
two possible moves, + and -, we have market completeness: there is a unique risk-neutral measure obtained from the fact that there is a unique straight line through two given points.
For a market with
three possible moves (say +1, 0, -1) and just one stock, it turns out that there is more than one risk-neutral measure; i.e., the market is not complete. This corresponds to the fact that given three points in the plane $P_i(x_i,y_i)$, $1\le i\le 3$ with $x_1<x_2<x_3$, there is more than one straight line that lies between the two lines $\ell_{12}$ through $P_1$ and $P_2$, and $\ell_{13}$ through $P_1$ and $P_3$.
Namely, there are many lines that go through $P_1$ but have an intermediate slope between the slopes of $\ell_{12}$ and $\ell_{13}$.
If you add another stock to the market, it becomes complete again as far as I recall.
This all gets much more technical when you have continuous time and arbitrary real numbers as moves for the stock price. But it is still just a matter of whether the risk-neutral measure exists and is unique. |
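The two- vs three-move picture above can be sketched numerically in a hypothetical one-period market with zero interest rate: with two moves the martingale condition pins down a unique measure, while with three it leaves a one-parameter family (market incompleteness).

```python
def binomial_measure(s0, up, down):
    # Unique q solving q*up + (1-q)*down = s0 (zero interest rate):
    # two moves, one equation, one unknown -> complete market.
    return (s0 - down) / (up - down)

def trinomial_measures(s0, up, mid, down, grid=9):
    # Martingale condition q*up + m*mid + (1 - q - m)*down = s0 with the
    # middle-move probability m left free: many valid measures.
    measures = []
    for i in range(1, grid):
        m = i / grid
        q = (s0 - down - m * (mid - down)) / (up - down)
        if q > 0 and 1 - q - m > 0:
            measures.append((q, m, 1 - q - m))
    return measures

q = binomial_measure(100, 110, 90)              # q = 0.5, unique
family = trinomial_measures(100, 110, 100, 90)  # several valid measures
```

Every tuple in `family` satisfies the martingale condition, so none can be ruled out by no-arbitrage alone; that is exactly the non-uniqueness the second fundamental theorem ties to incompleteness.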
Your motors do not develop the thrust, they spin rotors which develop thrust. Practically speaking, it is probably better to create an empirical model (or look-up table) based on measurements of the thrust versus motor voltage -- see Bending Unit 22's answer.
However, if you really want to predict the thrust, you'll need a model of your rotor. Then you can use blade element momentum theory (BEMT) to estimate the thrust based on the rotor properties -- chord distribution, twist distribution, airfoil lift and drag profiles, etc.
You can also reduce the model to a simple equation like this:
$T = \frac{1}{4 \pi^2} K_T \rho D^4 \omega^2$
Where $T$ is the thrust, $\rho$ is the air density, $D$ is the diameter of your rotor, $\omega$ is the rotor angular speed, and $K_T$ is the thrust coefficient defined as a function of the advance ratio, $\zeta$.
$K_T = C_{T1} \zeta + C_{T2}$
$\zeta = \frac{2 \pi \left( v - v_{\infty} \right)}{\omega D}$
The coefficients $C_{T1}$ and $C_{T2}$ are specific to the rotor blade, and $\zeta$ is a function of the relative vehicle speed along the thrust direction with respect to the surrounding air ($v - v_{\infty}$).
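Putting the three equations together, a minimal sketch looks like the following. All numeric values here (air density aside) are made-up placeholders, not real rotor coefficients:

```python
import math

def thrust(omega, D, rho=1.225, v=0.0, v_inf=0.0, C_T1=-0.1, C_T2=0.1):
    # T = (1 / 4 pi^2) * K_T * rho * D^4 * omega^2
    # K_T = C_T1 * zeta + C_T2
    # zeta = 2 pi (v - v_inf) / (omega * D)
    zeta = 2 * math.pi * (v - v_inf) / (omega * D)
    K_T = C_T1 * zeta + C_T2
    return K_T * rho * D ** 4 * omega ** 2 / (4 * math.pi ** 2)

# Hover (v = v_inf): zeta = 0, so K_T reduces to C_T2.
T_hover = thrust(omega=500.0, D=0.25)
```

With a negative $C_{T1}$ (typical, since thrust falls off as the vehicle climbs into its own inflow), forward speed along the thrust axis reduces the predicted thrust relative to hover.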
Note that this simplified model is typically more applicable to underwater propellers, but might work for your quadrotor as well. However, BEMT is the usual approach for modeling rotorcraft thrust.
But to apply any model you will need to know the properties of your rotor, and they can be difficult to find (especially considering the properties are probably not constant along the blade). This is why an empirical method is likely your best option to avoid unnecessary work. |
Lovas and Andai (https://arxiv.org/abs/1610.01410) have recently established that the separability probability (ratio of separable volume to total volume) for the nine-dimensional convex set of two-re[al]bit states (representable by $4 \times 4$ “density matrices” with real entries) is $\frac{29}{64}$. The measure employed was the Hilbert-Schmidt one. Building upon this work of Lovas and Andai, strong evidence has been adduced that the corresponding result for the (standard) fifteen-dimensional convex set of two-qubit states (representable by density matrices with off-diagonal complex entries) is $\frac{8}{33}$ (https://arxiv.org/abs/1701.01973). (A density matrix is Hermitian, positive definite with unit trace.) Further, with respect to a different measure (one of the class of “monotone” ones), the two-qubit separability probability appears, quite strikingly, to be $1-\frac{256}{27 \pi^2}=1-\frac{2^8}{3^3 \pi^2}$. Further, exact values appear to have been found for higher-dimensional sets of states, endowed with Hilbert-Schmidt measure, such as $\frac{26}{323}$ for the 27-dimensional set of two-quater[nionic]bit states.
Now, perhaps the measure upon the quantum states of greatest interest is the Bures (minimal monotone) one (https://arxiv.org/abs/1410.6883). But exact computations pertaining to it appear to be more challenging. Lower-dimensional analyses (having set numbers of entries of the density matrices to zero) have yielded certain exact separability probabilities such as $\frac{1}{4}, \frac{1}{2}, \sqrt{2}-1$ (https://arxiv.org/abs/quant-ph/9911058). Efforts to estimate/determine the (15-dimensional) two-qubit Bures separability probability have been reported in https://arxiv.org/abs/quant-ph/0308037.
Recently (https://arxiv.org/abs/1809.09040, secs. X.B, XI), we have undertaken large-scale numerical simulations—employing both random and quasi-random [low-discrepancy] sequences of points—in this matter. Based on 4,372,000,000 randomly-generated points, we have obtained an estimate of 0.0733181. Further based on ongoing quasi-randomly-generated (sixty-four-dimensional) points, for which convergence should be stronger, we have obtained independent estimates of 0.0733181 and (for a larger sample) 0.0733117.
One approach to the suggestion of possible associated exact formulas, is to feed the estimates into http://www.wolframalpha.com and/or https://isc.carma.newcastle.edu.au (the Inverse Symbolic Calculator) and let it issue candidate expressions.
Certainly, $\frac{8}{11 \pi^2} \approx 0.073688$ would qualify as “elegant”, as well as $\frac{11}{150} \approx 0.073333$, but they do not seem to have the precision required. Also, since in the two cases mentioned above, we have the “entanglement probabilities” of $\frac{25}{33} =1 -\frac{8}{33}$ and $\frac{27}{256 \pi^2} =1-(1-\frac{27}{256 \pi^2})$, it might be insightful to think in such terms.
Bengtsson and Zyczkowski (p. 415 of “Geometry of Quantum States: An Introduction to Quantum Entanglement” [2017]) have observed “that the Bures volume of the set of mixed states is equal to the volume of an $(N^2-1)$-dimensional hemisphere of radius $R_B=\frac{1}{2}$”. It is also noted there that $R_B$ times the area-volume ratio asymptotically increases with the dimensionality $D=N^2-1$, which is typical for hemispheres. The Bures volume of the set of $N \times N$ density matrices is given by $\frac{2^{1-N^2} \pi ^{\frac{N^2}{2}}}{\Gamma \left(\frac{N^2}{2}\right)}$, which for $N=4$ gives $\frac{\pi ^8}{165150720}=\frac{\pi^8}{2^{19} \cdot 3^2 \cdot 5 \cdot 7}$.
Additionally, we have similarly investigated the two-rebit Bures separability probability question, with estimates being obtained of 0.1570934 and (larger sample) 0.1570971. But our level of confidence that some exact simple elegant formula exists for this probability is certainly not as high, based upon the Lovas-Andai two-rebit result for the particular monotone metric they studied.
Strongly related, with a slightly different focus, to this issue is my question Estimate/determine Bures separability probabilities making use of corresponding Hilbert-Schmidt probabilities |
Show that $$\{S_n=\sum_{k=1}^n \frac{1}{k!}\}$$ is convergent by showing that $S_n$ is a Cauchy sequence.
The hint given is $r!\geq2^{r-1}$ and $\sum_{r=1}^{\infty} \frac{1}{2^{r-1}}=2$
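The hint can be checked numerically — a quick sketch (the tail bound $2^{1-m}$ follows from $k! \geq 2^{k-1}$; the function names are my own):

```python
from math import factorial

def S(n):
    """Partial sum S_n = sum_{k=1}^{n} 1/k!"""
    return sum(1 / factorial(k) for k in range(1, n + 1))

# Using k! >= 2^(k-1), the tail satisfies
# |S_n - S_m| <= sum_{k>m} 2^{-(k-1)} = 2^(1-m)  for n > m.
for m, n in [(5, 10), (10, 20), (20, 40)]:
    gap = abs(S(n) - S(m))
    bound = 2.0 ** (1 - m)
    print(f"m={m:2d} n={n:2d}  |S_n - S_m| = {gap:.3e}  <=  2^(1-m) = {bound:.3e}")
```

Since the bound goes to zero as $m \to \infty$, the partial sums form a Cauchy sequence (the limit is $e - 1$).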
I know the definition of a Cauchy sequence: for each $\epsilon>0$ there exists a natural number $N$ such that $m,n\geq N$ implies that $|s_n-s_m|<\epsilon$
I also want to ask: what is meant by $s_m$ here? |
Because I am waiting for my graphing calculator to ship, I need a quick-and-dirty way to calculate logarithms on a four-function calculator (for when I need to keep my laptop away from where I work).
By modifying the common $n^{th}$ root on a four-function calculator trick, I have found the following curves easy-to-remember and fairly accurate (2-3 significant figures).
$$\ln x \approx 2^{10}(x^{\frac{1}{2^{10}}}-1)$$
$$\log x \approx 444(x^{\frac{1}{2^{10}}}-1)$$
Now, I would prefer something along the lines of
$$\log_bx \approx k(x^{\frac{1}{2^{10}}}-1)$$
where $k$ is something that relates to the base $b$ of the logarithm.
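One way to obtain such a $k$ (my derivation, using the first-order expansion $x^{1/N} \approx 1 + \frac{\ln x}{N}$): since $\log_b x = \ln x / \ln b$, taking $k = 2^{10}/\ln b$ reproduces both constants above ($2^{10}/\ln e = 1024$ and $2^{10}/\ln 10 \approx 444.7$). A sketch:

```python
import math

N = 2 ** 10  # 2^10: x^(1/N) is ten repeated square-root presses

def approx_log(x, base):
    # log_b(x) = ln(x)/ln(b) ~ (N/ln b) * (x^(1/N) - 1), i.e. k = N/ln(base).
    k = N / math.log(base)
    return k * (x ** (1.0 / N) - 1.0)

print(N / math.log(10))        # ~444.7, matching the 444 in the question
print(approx_log(1000.0, 10))  # ~3, i.e. log10(1000)
```

Note this still requires one natural logarithm per base to find $k$, consistent with the "without more logarithms" caveat in the edit below.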
I realize that this may not be possible, or will be so unintuitive that it would be easier to calculate two natural logarithms and make use of that memory key. But hey, curiosity is the lust of the mind.
Edit: Just realized that it's not possible (second edit: without more logarithms), as setting up the equations
$$10^{k_1}=444$$ $$e^{k_1}=1024$$
just yield parallel lines. Any other suggestions on how to make $k$ for any base easy to solve for? |
Let $M$ be a smooth manifold of dimension $n$.
My notes say
Theorem: A subset $S$ of $M$ can be given the structure of a smooth manifold of dimension $k$ such that $S$ is an embedded submanifold of $M$ (i.e. the inclusion map $\iota:S\hookrightarrow M$ is a smooth embedding) if and only if for each point $p$ in $M$ there exists a smooth chart $(U,\phi)$ for $M$ centered at $p$ and $k$-adapted to $S$ (i.e. $U\cap S=\emptyset$ or $\phi(U\cap S)=\{x\in \phi(U):x^{k+1}=\dots=x^n=0\}$). (Or equivalently: there exists a smooth atlas for $M$ which is $k$-adapted to $S$, meaning that for each smooth chart $(U,\phi)$ in that atlas, $(U,\phi)$ is $k$-adapted to $S$.)
Now, in the proof we only show that if I take a point $p$ in $S$ (not, in general, in $M$ as stated above!) then there exists a smooth chart $(U_p,\phi_p)$ for $M$ centered at $p$ and $k$-adapted to $S$. But if I now consider $\{(U_p,\phi_p)\}_{p\in S}$, I might not have an atlas for $M$. (I only know that $S\subseteq\bigcup_{p\in S}U_p$, not that $M=\bigcup_{p\in S}U_p$.)
So, is the statement of the above theorem not quite correct, or am I missing something? How could I complete (if I can!) the set $\{ (U_p,\phi_p)\}_{p\in S}$ to obtain a smooth atlas for $M$
such that each chart is $k$-adapted to $S$ ?
Or should I modify the statement in
Theorem: A subset $S$ of $M$ can be given the structure of a smooth manifold of dimension $k$ such that $S$ is an embedded submanifold of $M$ if and only if for each point $p$ in $S$ there exists a smooth chart $(U,\phi)$ for $M$ centered at $p$ and $k$-adapted to $S$ ? |
Brauer group
Revision as of 13:34, 26 April 2012

Brauer group of a field $k$. $\newcommand{\Br}{\mathrm{Br}}$
The group of classes of finite-dimensional central simple algebras (cf. Central simple algebra) over $k$ with respect to the equivalence defined as follows. Two central simple $k$-algebras $A$ and $B$ of finite $k$-dimension are equivalent if there exist positive integers $m$ and $n$ such that the tensor products $A \otimes_k M_m(k)$ and $B \otimes_k M_n(k)$ are isomorphic $k$-algebras (here $M_r(k)$ is the algebra of square matrices of size $r$ over $k$). The tensor multiplication of algebras induces an Abelian group structure on the set of equivalence classes of finite-dimensional central simple algebras. This group is also known as the Brauer group of $k$ and is denoted by $\Br(k)$. The zero element of this group is the class of full matrix algebras, while the element inverse to the class of an algebra $A$ is the class of its opposite algebra. Each non-zero class contains, up to isomorphism, exactly one division algebra over $k$ (i.e. a skew-field over $k$).
Brauer groups were defined and studied in several publications by R. Brauer, E. Noether, A. Albert, H. Hasse and others, starting in the 1920s (see, for example, [De]). The most complete results, including the computation of the Brauer group, were obtained for number fields in connection with the development of class field theory. The general form of the reciprocity law is formulated in terms of Brauer groups.
The Brauer group is zero for any separably-closed field and any finite field. For the field of real numbers the Brauer group is a cyclic group of order two and its non-zero element is the class of the quaternion algebra. If $k$ is the field of $p$-adic numbers or any locally compact field that is complete with respect to a discrete valuation, then its Brauer group is isomorphic to $\mathbb{Q}/\mathbb{Z}$, where $\mathbb{Q}$ is the additive group of rational numbers and $\mathbb{Z}$ is the additive group of integers. This fact is of importance in local class-field theory.
Let $k$ be an algebraic number field of finite degree or a field of algebraic functions in one variable with a finite field of constants. Then there exists an exact sequence of groups
$$ 0 \to \Br(k) \to \sum_v \Br(k_v) \to \mathbb{Q}/\mathbb{Z} \to 0,$$
where $v$ runs through all places (valuations) of the field $k$, $k_v$ are the respective completions of $k$, and the homomorphism $\Br(k) \to \sum_v \Br(k_v)$ is induced by the natural embeddings $k \to k_v$. The image of an element of $\Br(k)$ in $\Br(k_v)$ is called its local invariant, and the homomorphism $\sum_v \Br(k_v) \to \mathbb{Q}/\mathbb{Z}$ is the summation of local invariants. This fact is established in global class-field theory.
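As a standard worked illustration (my addition, not from the article): the class of the Hamilton quaternion algebra $\left(\frac{-1,-1}{\mathbb{Q}}\right)$ in $\Br(\mathbb{Q})$ has non-trivial local invariants only at $v=2$ and $v=\infty$, and these sum to zero in $\mathbb{Q}/\mathbb{Z}$:

```latex
\mathrm{inv}_2\!\left(\tfrac{-1,-1}{\mathbb{Q}}\right) = \tfrac{1}{2}, \qquad
\mathrm{inv}_\infty\!\left(\tfrac{-1,-1}{\mathbb{Q}}\right) = \tfrac{1}{2}, \qquad
\sum_v \mathrm{inv}_v = \tfrac{1}{2} + \tfrac{1}{2} \equiv 0 \pmod{\mathbb{Z}},
```

consistent with exactness of the sequence: a non-trivial quaternion algebra over $\mathbb{Q}$ must ramify at an even number of places.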
If $k$ is a field of algebraic functions in one variable over an algebraically closed field of constants, then its Brauer group is zero (Tsen's theorem). The case of an arbitrary field of constants is treated in [Fa] and in [Gr].
The Brauer group depends functorially on $k$, i.e. if $K$ is an extension of the field $k$, a homomorphism $\Br(k)\to \Br(K)$ is defined. Its kernel, denoted by $\Br(K/k)$, consists of classes of algebras splitting over $K$.
The construction of crossed products with the aid of factor systems [Ch] results in a cohomological interpretation of Brauer groups. For any normal extension $K/k$ there exists an isomorphism
$$\Br(K/k) \simeq H^2(K, K^*)$$
where $H^2(K, K^*)$ is the second Galois cohomology group with coefficients in the multiplicative group $K^*$ of $K$. Moreover, the group $\Br(k)$ is isomorphic to $H^2(\bar{k},\bar{k}^*)$, where $\bar{k}$ is the separable closure of $k$. A central simple algebra is assigned its class in the Brauer group by the coboundary operator
$$\delta: H^1(K,\mathrm{PGL}(n,K)) \to H^2(K,K^*)$$
in the cohomology sequence corresponding to the exact group sequence
$$1 \to K^* \to \mathrm{GL}_n(K) \to \mathrm{PGL}_n(K) \to 1$$
where $\mathrm{GL}_n(K)$ and $\mathrm{PGL}_n(K)$ are the linear and the projective matrix groups of size $n$. Here the set $H^1(K,\mathrm{PGL}_n(K))$ is interpreted as the set of $k$-isomorphism classes of central simple algebras of dimension $n^2$ over the field $k$ which split over $K$, or as the set of classes of $k$-isomorphic Brauer–Severi varieties of dimension $n-1$, possessing a point that is rational over $K$ (cf. Brauer–Severi variety).
All Brauer groups are torsion groups. The order of any of its elements is a divisor of the number $n$, where $n$ is the rank of the skew-field representing this element.
The cohomological interpretation of the Brauer group makes it possible to consider it as the group of classes of extensions of the Galois group of the separable closure $\bar{k}/k$ by the group $\bar{k}^*$.
A generalization of the concept of a Brauer group is the Brauer–Grothendieck group, whose definition is analogous to that of the Brauer group, except that the central simple algebras are replaced by Azumaya algebras [Gr]. An algebra $A$ over a commutative ring $R$ is an Azumaya algebra if it is finitely generated and central over $R$ and separable.
References
[Bo] N. Bourbaki, "Elements of mathematics. Algebra: Algebraic structures. Linear algebra", 1, Addison-Wesley (1974) pp. Chapt. 1;2 (Translated from French) MR0354207
[CaFr] J.W.S. Cassels (ed.) A. Fröhlich (ed.), Algebraic number theory, Acad. Press (1967) MR0215665 Zbl 0153.07403
[Ch] N.G. Chebotarev, "Introduction to the theory of algebra...", Moscow-Leningrad (1949) (In Russian)
[De] M. Deuring, "Algebren", Springer (1935) MR0228526 Zbl 0011.19801 Zbl 61.0118.01
[Fa] D.K. Faddeev, "On the theory of algebras over fields of algebraic functions in one variable" Vestnik Leningrad. Univ. : 7 (1957) pp. 45–51 (In Russian) (English summary) MR89191
[Gr] A. Grothendieck, "Le groupe de Brauer I, II, III" A. Grothendieck (ed.) J. Giraud (ed.) et al. (ed.), Dix exposés sur la cohomologie des schémas, North-Holland & Masson (1968) pp. 46–188
[Mi] J.S. Milne, "Etale cohomology", Princeton Univ. Press (1980) MR0559531 Zbl 0433.14012
[Se] J.-P. Serre, "Cohomologie Galoisienne", Springer (1964) MR0180551 Zbl 0128.26303

Comments
A recent result in the theory of Brauer groups is a theorem of Merkuryev and Suslin [Su], which in its simplest form asserts that $\Br(k)$ is generated by the classes of algebras that split over a cyclic extension of $k$, provided that $k$ is a field of characteristic zero containing all roots of unity. The proof is based on the close relationship between the theory of Brauer groups and algebraic K-theory.
References
[Su] A. Suslin, "Plenary address" A.M. Gleason (ed.), Proc. Internat. Congress Mathematicians (Berkeley, 1986), Amer. Math. Soc. (1987) pp. 1195–1209

How to Cite This Entry:
Brauer group.
Encyclopedia of Mathematics.URL: http://www.encyclopediaofmath.org/index.php?title=Brauer_group&oldid=25498 |
First, let me correct your definition of $\widehat{LT}\!\!_3$: it is the class of depth-3
polynomial size threshold circuits with polynomially bounded weights.
Polynomial size threshold circuits can be converted to polynomial size Boolean circuits. To show this, it suffices to show that any single threshold gate can be computed by a polynomial size Boolean circuit. The proof goes along the following lines:
Every threshold gate is equivalent to a threshold gate whose weights have bit-length $O(m\log m)$, where $m$ is the number of inputs. This can be shown using linear programming (see below). Adding $m$ numbers of bit-length $O(m\log m)$ can be accomplished using a Boolean circuit of size polynomial in $m$.
In our case $m$ is at most the number of gates in the original threshold circuit, which is polynomial (in $n$, the number of input bits). Therefore the Boolean circuit equivalent to any single threshold gate has polynomial size.
The foregoing shows that $\widehat{LT}\!\!_3 \subseteq \mathrm{P/poly}$. Since $\mathrm{P/poly} \subseteq \mathrm{NP/poly}$, it is also true that $\widehat{LT}\!\!_3 \subseteq \mathrm{NP/poly}$, but the paper's use of $\mathrm{NP/poly}$ seems to be a typo.
Finally, let us indicate how to show that every threshold gate on $m$ inputs is equivalent to one in which the weights have bit-length $O(m\log m)$. Consider the following linear program. The variables are $c_1,\ldots,c_m,\theta$. For each $x_1,\ldots,x_m \in \{0,1\}^m$, if the threshold gate outputs 1 on this input, we add the constraint$$ \sum_{i=1}^m c_i x_i \geq \theta + 1, $$and if it outputs 0 then we add the constraint$$ \sum_{i=1}^m c_i x_i \leq \theta - 1. $$It is not too hard to check that this LP is feasible, essentially using the parameters of the threshold gate (possibly scaled). On the other hand, LP theory tells us that there exists a basic feasible solution, which is obtained by choosing $m+1$ linearly independent inequalities and treating them as equations. Cramer's rule, which expresses the solution as a ratio of determinants, completes the proof (exercise). |
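The linear-program constraints described above can be written out concretely. Below is a sketch for a 3-input majority gate — the gate, weights, and threshold are my illustrative choices, not from the answer:

```python
from itertools import product

def majority(bits):
    # Example threshold gate: 3-input majority (outputs 1 iff >= 2 ones).
    return int(sum(bits) >= 2)

# Constraints from the answer: for each input x, either sum(c_i x_i) >= theta + 1
# (gate outputs 1) or sum(c_i x_i) <= theta - 1 (gate outputs 0).
def feasible(c, theta, gate, m):
    for x in product([0, 1], repeat=m):
        s = sum(ci * xi for ci, xi in zip(c, x))
        if gate(x) == 1 and not s >= theta + 1:
            return False
        if gate(x) == 0 and not s <= theta - 1:
            return False
    return True

# Tiny integer weights suffice here; in general bit-length O(m log m) is enough.
print(feasible((2, 2, 2), 3, majority, 3))  # True
print(feasible((1, 1, 1), 1, majority, 3))  # False: the +1/-1 margins fail
```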
Let us consider the Gaussian model $\mathcal{N}(\mu,\sigma^2)$, where both $\mu$ and $\sigma$ are unknown. I have learnt that (for example, from Amari's information geometry book) the exponential families always have efficient estimators (the ones which achieve Cramer-Rao lower bound) for its parameters.
We know that $\bar{X_n}=\frac{1}{n}\sum_{i=1}^n X_i$ is an unbiased and efficient estimator of $\mu$.
Also $S_{n-1}^2=\frac{1}{n − 1}\sum_{i=1}^n(X_i − \bar{X_n})^2$ is an unbiased estimator for $\sigma^2$; however this is not efficient.
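The inefficiency of $S_{n-1}^2$ can be checked numerically. Under normality, $\mathrm{Var}(S_{n-1}^2)=2\sigma^4/(n-1)$, strictly above the Cramér–Rao bound $2\sigma^4/n$; the simulation below is my own sketch (with the arbitrary choice $n=5$, $\sigma^2=1$):

```python
import random
import statistics

random.seed(0)
n, sigma2, trials = 5, 1.0, 40000

# Sample variance S^2_{n-1} over many synthetic Gaussian samples.
estimates = []
for _ in range(trials):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    estimates.append(statistics.variance(xs))  # unbiased: divides by n-1

var_S2 = statistics.variance(estimates)
exact = 2 * sigma2**2 / (n - 1)  # known variance of S^2 under normality
crlb = 2 * sigma2**2 / n         # Cramer-Rao bound for sigma^2

print(var_S2, exact, crlb)  # var_S2 ~ 0.5 = exact, strictly above crlb = 0.4
```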
Question: Would the limit as $n$ tends to $\infty$ of $S_{n-1}^2$ be an efficient estimator? If so, what is the limit? Does $\sigma^2$ have an efficient estimator at all? If not, doesn't this contradict the statement in the first paragraph?
More generally, when we say an exponential family has efficient estimators, is it meant that the efficient estimator need not exist for a finite sample but always exists in the asymptotic sense?
Can someone clarify this?
Note: If this or a related question has been discussed already, I request that someone please direct me to that question. |
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC
(Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(American Physical Society, 2013-02)
The transverse momentum ($p_T$) distribution of primary charged particles is measured in non single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ...
Measurement of azimuthal correlations of D mesons and charged particles in pp collisions at $\sqrt{s}=7$ TeV and p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-04)
The azimuthal correlations of D mesons and charged particles were measured with the ALICE detector in pp collisions at $\sqrt{s}=7$ TeV and p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV at the Large Hadron Collider. ...
Production of $\pi^0$ and $\eta$ mesons up to high transverse momentum in pp collisions at 2.76 TeV
(Springer, 2017-05)
The invariant differential cross sections for inclusive $\pi^{0}$ and $\eta$ mesons at midrapidity were measured in pp collisions at $\sqrt{s}=2.76$ TeV for transverse momenta $0.4<p_{\rm T}<40$ GeV/$c$ and $0.6<p_{\rm ...
Measurement of the production of high-$p_{\rm T}$ electrons from heavy-flavour hadron decays in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(Elsevier, 2017-08)
Electrons from heavy-flavour hadron decays (charm and beauty) were measured with the ALICE detector in Pb–Pb collisions at a centre-of-mass of energy $\sqrt{s_{\rm NN}}=2.76$ TeV. The transverse momentum ($p_{\rm T}$) ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Flow dominance and factorization of transverse momentum correlations in Pb-Pb collisions at the LHC
(American Physical Society, 2017-04)
We present the first measurement of the two-particle transverse momentum differential correlation function, $P_2\equiv\langle \Delta p_{\rm T} \Delta p_{\rm T} \rangle /\langle p_{\rm T} \rangle^2$, in Pb-Pb collisions at ...
$\phi$-meson production at forward rapidity in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV and in pp collisions at $\sqrt{s}$ = 2.76 TeV
(Elsevier, 2017-03)
The first measurement of $\phi$-meson production in p-Pb collisions at a nucleon-nucleon centre-of-mass energy $\sqrt{s_{\rm NN}}$ = 5.02 TeV has been performed with the ALICE apparatus at the LHC. $\phi$ mesons have been ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... |
Deformation Retract Subspaces of a Topological Space
Recall from the Retract Subspaces of a Topological Space page that if $X$ is a topological space and $A \subset X$ is a topological subspace then $A$ is said to be a retract of $X$ if there exists a continuous function $r : X \to A$ such that $r \circ \mathrm{in} = \mathrm{id}_A$ where $\mathrm{in} : A \to X$ is the inclusion map.
We will now define another type of retract called a deformation retract.
Definition: Let $X$ be a topological space and let $A \subset X$ be a topological subspace. Then $A$ is said to be a Deformation Retract of $X$ if there exists a continuous function $r : X \to A$ such that $r \circ \mathrm{in} = \mathrm{id}_A$ and $\mathrm{in} \circ r$ is homotopic to $\mathrm{id}_X$. By the definition above, every deformation retract is a retract.
For the first example, consider the space $A = S^1$ which is the unit circle and $X = S^1 \times [0, 1]$ which is the unit cylinder. Then $A$ is a deformation retract of $X$:
For another example, consider the space $A = \{ p \}$ and $X = \{ p, q \}$ where $p, q \in \mathbb{R}^2$ and $p \neq q$ and equip $X$ with the discrete topology. Then $A$ is NOT a deformation retract of $X$ since $A$ is connected and $X$ is disconnected. |
The Union and Intersection of Two Topologies
Recall that if $X$ is a set then a topology $\tau$ is a collection of subsets of $X$ that satisfy three conditions - $\emptyset, X \in \tau$, given any arbitrary collection of subsets from $\tau$ we have that the union of this arbitrary collection is in $\tau$, and given any finite collection of subsets from $\tau$ we have that the intersection of this finite collection is in $\tau$.
Now suppose that we have two distinct topologies on $X$, say $\tau_1$ and $\tau_2$. What can we necessarily say about $\tau_1 \cup \tau_2$ and $\tau_1 \cap \tau_2$? Are these collections of subsets of $X$ necessarily topologies?
For example, consider the finite set $X = \{ a, b, c \}$ and the topologies $\tau_1 = \{ \emptyset, \{ a \}, \{ a, b \}, X \}$ and $\tau_2 = \{ \emptyset, \{ b, c \}, X \}$. The union of $\tau_1$ and $\tau_2$ is given as:
\begin{align} \quad \tau_1 \cup \tau_2 = \{ \emptyset, \{ a \}, \{ a, b \}, \{ b, c \}, X \} \end{align}
However, $\tau_1 \cup \tau_2$ is NOT a topology on $X$ since $\{a, b \} \cap \{ b, c \} = \{ b \} \not \in \tau_1 \cup \tau_2$. With this counterexample, we see that $\tau_1 \cup \tau_2$ need not be a topology.
Now, the intersection of $\tau_1$ and $\tau_2$ is given as:
\begin{align} \quad \tau_1 \cap \tau_2 = \{ \emptyset, X \} \end{align}
In this case, we see that $\tau_1 \cap \tau_2$ is merely the indiscrete topology on $X$, i.e., $\tau_1 \cap \tau_2$ is a topology. In fact, as we will see in the following theorem, if $\tau_1$ and $\tau_2$ are both topologies on $X$ then $\tau_1 \cap \tau_2$ is also a topology on $X$.
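For the finite example above, all three topology conditions can be checked exhaustively — a brute-force sketch (the helper function is my own, not from the text):

```python
from itertools import chain, combinations

X = frozenset({'a', 'b', 'c'})

def is_topology(tau, X):
    tau = {frozenset(s) for s in tau}
    if frozenset() not in tau or X not in tau:
        return False
    subsets = list(tau)
    # On a finite set, closure under all unions and finite intersections
    # can be checked over every combination of members.
    for r in range(2, len(subsets) + 1):
        for combo in combinations(subsets, r):
            u = frozenset(chain.from_iterable(combo))
            i = frozenset.intersection(*combo)
            if u not in tau or i not in tau:
                return False
    return True

t1 = {frozenset(), frozenset({'a'}), frozenset({'a', 'b'}), X}
t2 = {frozenset(), frozenset({'b', 'c'}), X}

print(is_topology(t1, X), is_topology(t2, X))  # True True
print(is_topology(t1 | t2, X))                 # False: {b} is missing
print(is_topology(t1 & t2, X))                 # True: the indiscrete topology
```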
Theorem 1: Let $X$ be a set and let $\tau_1$ and $\tau_2$ both be topologies on $X$. Then $\tau_1 \cap \tau_2$ is a topology on $X$. Proof: To show that $\tau_1 \cap \tau_2$ is a topology on $X$ we must verify that all three conditions hold. Since $\emptyset, X \in \tau_1$ and $\emptyset, X \in \tau_2$ by the definition of $\tau_1$ and $\tau_2$ being topologies, we see that $\emptyset, X \in \tau_1 \cap \tau_2$, so the first condition is satisfied. Now let $\{ U_i \}_{i \in I}$ be a collection of sets such that $U_i \in \tau_1 \cap \tau_2$ for each $i \in I$, for some index set $I$. Then $U_i \in \tau_1$ and $U_i \in \tau_2$ for each $i \in I$. So $\bigcup_{i \in I} U_i \in \tau_1$ and $\bigcup_{i \in I} U_i \in \tau_2$. Hence $\bigcup_{i \in I} U_i \in \tau_1 \cap \tau_2$, and the second condition is satisfied. Now let $U_1, U_2, ..., U_n \in \tau_1 \cap \tau_2$. Then $U_i \in \tau_1$ and $U_i \in \tau_2$ for each $i \in \{1, 2, ..., n \}$. Therefore $\bigcap_{i=1}^{n} U_i \in \tau_1$ and $\bigcap_{i=1}^{n} U_i \in \tau_2$. So $\bigcap_{i=1}^{n} U_i \in \tau_1 \cap \tau_2$, and the third condition is satisfied. Hence $\tau_1 \cap \tau_2$ is a topology on $X$. $\blacksquare$
Corollary 1: Let $X$ be a set and let $\tau_1, \tau_2, ..., \tau_m$ be topologies on $X$. Then $\bigcap_{i=1}^{m} \tau_i$ is a topology on $X$. Proof: By Theorem 1, $\tau_1 \cap \tau_2$ is a topology on $X$. Inductively, if $\bigcap_{i=1}^{j} \tau_i$ is a topology on $X$ for some $j < m$, then by Theorem 1 so is $\left ( \bigcap_{i=1}^{j} \tau_i \right ) \cap \tau_{j+1} = \bigcap_{i=1}^{j+1} \tau_i$. Hence $\bigcap_{i=1}^{m} \tau_i$ is a topology on $X$. $\blacksquare$ |
The von Neumann-Morgenstern (vNM) utility function takes the form \begin{equation}U(p,x)=\sum_{i=1}^np_iu(x_i)\end{equation}where $x=(x_1,\dots,x_n)$ with $x_i$ being the (monetary) payoff associated with outcome $i$ and $p=(p_1,\dots,p_n)$ with $p_i$ being the probability that $i$ occurs.
In behavioral economics, generalizations of the vNM utility usually happen in either (or both) of the following two channels:
Non-linear probability weighting: instead of using $p_i$'s, which can be considered objective probabilities, a model may use
decision weights $w_i$'s which map the objective probability distribution into a new distribution.
Reference dependent utility function: instead of $u(x_i)$, which depends only on the payoff in outcome $i$, a model may consider a utility function $u(x_i,r)$ that depends on both $x_i$ and some reference payoff $r$.
Modifications made through either of these will give rise to a non-expected utility function, which is supposed to improve the model's descriptive accuracy of people's decision under risk.
Example of 1: Rank-Dependent Utility
Suppose $x_1<x_2<\cdots<x_n$, and calculate decision weights ($w_i$'s) as follows: \begin{equation}w_i=\pi\left(\sum_{j=i}^np_j\right)-\pi\left(\sum_{j=i+1}^np_j\right),\end{equation}where $\pi(\cdot)$ is called a probability weighting function (PWF). A common PWF is the Tversky-Kahneman PWF: \begin{equation}\pi(t)=\frac{t^\gamma}{(t^\gamma+(1-t)^\gamma)^{1/\gamma}}.\end{equation}Then the rank-dependent utility with T-K PWF is \begin{equation}RDU(p,x;\gamma)=\sum_{i=1}^nw_iu(x_i).\end{equation}Note that $w_i=p_i$ if $\gamma=1$, so that vNM EU is a special case of RDU.
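A minimal sketch of the RDU formula above (the linear utility $u$ and the two-outcome prospect are my illustration choices; $\gamma=0.61$ is a commonly quoted Tversky-Kahneman estimate for gains):

```python
def tk_pwf(t, gamma):
    # Tversky-Kahneman probability weighting function pi(t).
    return t**gamma / (t**gamma + (1 - t)**gamma) ** (1 / gamma)

def rdu(p, x, gamma, u=lambda z: z):
    # Outcomes must be sorted ascending: x[0] < x[1] < ... < x[-1].
    value = 0.0
    for i in range(len(p)):
        tail = sum(p[i:])           # p_i + ... + p_n
        tail_next = sum(p[i + 1:])  # p_{i+1} + ... + p_n (0 when i = n-1)
        w = tk_pwf(tail, gamma) - tk_pwf(tail_next, gamma)
        value += w * u(x[i])
    return value

p, x = [0.5, 0.5], [0.0, 100.0]
print(rdu(p, x, gamma=1.0))   # 50.0: gamma = 1 recovers expected utility
print(rdu(p, x, gamma=0.61))  # below 50: pi(0.5) < 0.5 underweights the gain
```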
Example of 2: Disappointment Aversion
\begin{equation}DAU(p,x;\overline U)=\sum_{i=1}^np_i[u(x_i)-D(u(x_i)-\overline U)],\end{equation}where $D(\cdot)$ is a function that captures how disappointment ($u(x_i)<\overline U$) or elation ($u(x_i)>\overline U$) affects an individual's evaluation of a prospect. Here, the reference utility level $\overline U$ can be something like the certainty equivalent level of utility. EU can also be derived as a special case from DAU by requiring that $D(\cdot)$ be a constant function.
Example of 1+2: Prospect Theory
Prospect theory (PT) incorporates both non-linear probability weighting and reference dependence. Again, assume $x_1<\cdots<x_n$. PT distinguishes between gains ($x_i>r$) and losses ($x_i<r$). For gains, decision weights are given by \begin{equation}w_i=\pi^+(p_i+\cdots+p_n)-\pi^+(p_{i+1}+\cdots+p_n),\end{equation}while for losses the decision weights cumulate from the worst outcome: \begin{equation}w_i=\pi^-(p_1+\cdots+p_i)-\pi^-(p_1+\cdots+p_{i-1}),\end{equation}where $\pi^+(\cdot)$ and $\pi^-(\cdot)$ are both PWFs. For instance, one can have both PWFs be the Tversky-Kahneman form, but with different parameters. In addition, the utility function also takes a reference dependent form: \begin{equation}u(x_i,r)=\begin{cases}(x_i-r)^\alpha &\text{if }x_i\ge r\\-\lambda(r-x_i)^\beta &\text{if }x_i<r\end{cases}\end{equation} where $\alpha,\beta\in(0,1)$ are parameters of risk aversion (differentiated between gains and losses) and $\lambda>0$ is the coefficient of loss aversion. Thus, the non-expected utility function under PT is \begin{equation}V(p,x;\text{parameters})=\sum_{i=1}^nw_iu(x_i,r)\end{equation} |
Arbitrary Topological Products of Topological Spaces
Recall that if $X$ and $Y$ are both topological spaces and $X \times Y$ denotes the Cartesian product (or simply just product) of these two spaces, then the product topology on $X \times Y$ is the topology $\tau$ whose basis is given by:
\begin{align} \quad \mathcal B = \{ U \times V : U \: \mathrm{is \: open \: in} \: X \: \mathrm{and} \: V \: \mathrm{is \: open \: in} \: Y \} \end{align}
The product $X \times Y$ with the product topology $\tau$ is called the topological product of the spaces $X$ and $Y$.
Similarly, if $X_1, X_2, ..., X_n$ is a finite collection of topological spaces and $\displaystyle{\prod_{i=1}^{n} X_i = X_1 \times X_2 \times ... \times X_n}$, then the product topology on $\displaystyle{\prod_{i=1}^{n} X_i}$ is the topology $\tau$ whose basis is given by:
\begin{align} \quad \mathcal B = \left \{ \prod_{i=1}^{n} U_i : U_i \: \mathrm{is \: open \: in} \: X_i \: \mathrm{for \: each} \: i \in \{ 1, 2, ..., n \} \right \} \end{align}
Notice that the order of the topological spaces that constitute the topological product matter to some degree. If we instead have an infinite number of topological spaces and we want to consider the resulting topological product then a clear order in the notation above may not make sense (especially if we're considering an uncountable number of topological spaces). Consequentially we define arbitrary topological products in a different manner.
Let $\{ X_i \}_{i \in I}$ be an arbitrary collection of topological spaces where $I$ is an indexing set, and let $\displaystyle{\prod_{i \in I} X_i}$ be the Cartesian product of these spaces. We can think of the elements in $\displaystyle{\prod_{i \in I} X_i}$ as functions $\displaystyle{f : I \to \bigcup_{i \in I} X_i}$ with $f(i) = x_i \in X_i$ for all $i \in I$, or as unordered "sequences" $(x_i)_{i \in I}$ where $x_i \in X_i$.
Definition: Let $\{ X_i \}_{i \in I}$ be an arbitrary collection of topological spaces and let $\displaystyle{\prod_{i \in I} X_i}$ be the Cartesian product of these spaces. The Product Topology on $\displaystyle{\prod_{i \in I} X_i}$ is the initial topology $\tau$ induced by the projection maps $\displaystyle{p_i : \prod_{i \in I} X_i \to X_i}$. The corresponding Topological Product is the topological space $\displaystyle{\left ( \prod_{i \in I} X_i, \tau \right )}$.
Recall that the initial topology induced by a collection of maps $\{ f_i : i \in I \}$ has the following subbasis:
\begin{align} \quad \mathcal S = \{ f_i^{-1}(U) : i \in I, \: U \: \mathrm{is \: open \: in} \: X_i \} \end{align}
So the initial topology on $\displaystyle{\prod_{i \in I} X_i}$ induced by the collection of projection maps $\{ p_i : i \in I \}$ has the following subbasis:
\begin{align} \quad \mathcal S = \{ p_i^{-1}(U) : i \in I, \: U \: \mathrm{is \: open \: in} \: X_i \} \end{align}
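For two small finite spaces, one can verify exhaustively that the finite intersections of this subbasis coincide with the rectangle basis $\{ U \times V \}$ from the finite case — a sketch with spaces of my own choosing:

```python
from itertools import product, combinations

# Two tiny spaces (my example): X with the Sierpinski topology, Y discrete.
X, tau_X = ('a', 'b'), [set(), {'a'}, {'a', 'b'}]
Y, tau_Y = (0, 1), [set(), {0}, {1}, {0, 1}]

points = set(product(X, Y))  # the Cartesian product X x Y

# Subbasis: preimages p_X^{-1}(U) and p_Y^{-1}(V) under the projections.
subbasis = [{pt for pt in points if pt[0] in U} for U in tau_X] \
         + [{pt for pt in points if pt[1] in V} for V in tau_Y]

# Basis: all finite intersections of subbasis elements.
basis = set()
for r in range(1, len(subbasis) + 1):
    for combo in combinations(subbasis, r):
        basis.add(frozenset(set.intersection(*map(set, combo))))

# Rectangles U x V with U, V open -- the product basis from the finite case.
rectangles = {frozenset((u, v) for u in U for v in V)
              for U in tau_X for V in tau_Y}

print(basis == rectangles)  # True: both generate the same product topology
```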
In other words, the elements of the subbasis $\mathcal S$ are the products $\displaystyle{\prod_{i \in I} U_i}$ in which $U_{i_0}$ is an open set in $X_{i_0}$ for a single index $i_0 \in I$, and $U_i = X_i$ for every $i \neq i_0$. |
Measurement of the $D_s^+ \to \ell^+\nu_\ell$ branching fractions and the decay constant $f_{D_s^+}$
(University of Groningen, 2016)
Observation of $e^+e^- \to \gamma\chi_{c1,2}$ near $\sqrt{s}$ = 4.42 and 4.6 GeV
(2016)
BEPC-BES. Based on data samples collected with the BESIII detector operating at the BEPCII storage ring at center-of-mass energies $\sqrt{s} >$ 4.4 GeV, the processes $e^+e^- \to \gamma\chi_{c1,2}$ are observed for the first time. With ...
Study of $J/\psi \to p\bar{p}\phi$ at BESIII
(2016)
Using a data sample of $1.31\times10^{9}$ $J/\psi$ events accumulated with the BESIII detector, the decay $J/\psi \to p\bar{p}\phi$ is studied via two decay modes, $\phi \to K_S^0 K_L^0$ and $\phi \to K^+ K^-$. The branching fraction of $J/\psi \to p\bar{p}\phi$ is measured to be $\mathcal{B}(J/\psi \to p\bar{p}\phi)=[5 ...
Measurements of the center-of-mass energies at BESIII via the di-muon process
(2016)
From 2011 to 2014, the BESIII experiment collected about 5 fb$^{-1}$ data at center-of-mass energies around 4 GeV for the studies of the charmonium-like and higher excited charmonium states. By analyzing the di-muon process ...
Measurement of azimuthal asymmetries in inclusive charged dipion production in $e^+e^-$ annihilations at $\sqrt{s}$ = 3.65 GeV
(APS Physics, 2016)
We present a measurement of the azimuthal asymmetries of two charged pions in the inclusive process $e^+e^- \to \pi\pi X$ based on a data set of 62 pb$^{-1}$ at the center-of-mass energy of 3.65 GeV collected with the BESIII ...
Study of $D^+ \to K^-\pi^+e^+\nu_e$
(APS Physics, 2016)
Observation of $e^+e^- \to \eta' J/\psi$ at center-of-mass energies between 4.189 and 4.600 GeV
(APS Physics, 2016)
The process $e^{+}e^{-}\to \eta^{\prime} J/\psi$ is observed for the first time with a statistical significance of $8.6\sigma$ at center-of-mass energy $\sqrt{s} = 4.226$ GeV and $7.3\sigma$ at $\sqrt{s} = 4.258$ GeV using ... |
Here is how to implement your solution. Let $A = \langle Q, q_0, F, \delta \rangle$ be a DFA for $L$. We will construct an NFA $A' = \langle Q', q'_0, F', \delta' \rangle$ as follows:
$Q' = \{q'_0\} \cup Q^3$. The state $(q_1,q_2,q_3)$ means that we have guessed that when $A$ finishes reading the first copy of $w$, it will be in state $q_1$; the first copy of $A$, started at $q_0$, is at state $q_2$; and the second copy of $A$, started at $q_1$, is at state $q_3$.
$F' = \{(q_1,q_1,q_2) : q_1 \in Q, q_2 \in F\}$. Thus we accept if the first copy of $A$ is in the guessed state, and the second copy of $A$ is at an accepting state.
$\delta'(q'_0,\epsilon) = \{(q,q_0,q) : q \in Q\}$. This initializes the simulation of the two copies of $A$.
$\delta'((q_1,q_2,q_3),a) = \{(q_1,\delta(q_2,a),\delta(q_3,a))\}$. This simulates both copies of $A$, while keeping the guessed state.
We leave to the reader the formal proof that $L(A') = \sqrt{L(A)}$.
Here is another solution, which creates a DFA. We now run $|Q|$ copies of $A$ in parallel, starting at each state of $A$:
$Q' = Q^Q$.
$q'_0 = q \mapsto q$, the identity function.
$\delta'(f,a) = q \mapsto \delta(f(q),a)$.
$F' = \{ f \in Q' : f(f(q_0)) \in F \}$.
What is the meaning of the condition $f(f(q_0)) \in F$? After reading a word $w$, the automaton $A'$ is in a state $f$ given by $f(q) = \delta(q,w)$. Thus $f(f(q_0)) = \delta(\delta(q_0,w),w) = \delta(q_0,w^2)$. |
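A sketch of the second construction on a concrete example of my own choosing — the unary language $L = \{a^n : n \equiv 1 \pmod 3\}$, for which $\sqrt{L} = \{a^n : 2n \equiv 1 \pmod 3\} = \{a^n : n \equiv 2 \pmod 3\}$:

```python
# Example DFA (my choice): count a's modulo 3, accept when the count is 1.
Q, q0, F = {0, 1, 2}, 0, {1}

def delta(q, a):
    return (q + 1) % 3

def run(q, w):
    for a in w:
        q = delta(q, a)
    return q

def accepts_sqrt(w):
    # After reading w, the Q^Q automaton knows f(q) = delta-hat(q, w) for all q;
    # acceptance is f(f(q0)) in F, i.e. delta-hat(q0, ww) in F.
    f = {q: run(q, w) for q in Q}
    return f[f[q0]] in F

for n in range(7):
    w = 'a' * n
    assert accepts_sqrt(w) == (run(q0, w + w) in F)  # agrees with checking ww
    print(n, accepts_sqrt(w))  # True exactly when n % 3 == 2
```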
So, I have to show that the function $f\colon \mathbb{R} \to \mathbb{R}$ given by:
$f(x) = \begin{cases} -x & \text{if $x < 0$,} \\ 2 & \text{if $x \geq 0$,} \end{cases}$
is $\mathcal{B}/\mathcal{B}$-measurable. And I'm not quite sure how to do it. Going by the definition, I'd have to show that
$f^{-1}(A) \in \mathcal{B}, \quad \forall A \in \mathcal{B}$.
I've already shown that
$\{f \geq a\} = \begin{cases} \mathbb{R} & \text{if $a\leq 0$,} \\ (-\infty , -a] \cup [0,\infty) & \text{if $0<a\leq 2$,} \\ (-\infty , -a] & \text{if $a>2$,} \end{cases}$
and I'm thinking that I can use this to show measurability, but I'm not sure? Since all of the above are Borel sets, does it follow that $f^{-1}$ of any of the above are also Borel sets?
Any help would be much appreciated! |
Let $L|K$ be a finite field extension with basis $\{\beta_1, ...,\beta_n\}$ and $M\supset L$ a field. If $\mathcal{S}$ is an arbitrary subset of $M$ and $L(\mathcal{S}), K(\mathcal{S})$ are the fields obtained by adjoining $\mathcal{S}$ to $L$ and $K$ respectively, prove that: $$\text{dim}(L(\mathcal{S})|K(\mathcal{S}))\leq n$$
Here is what I've done so far: I need to prove that an arbitrary element $\frac{F(\alpha_1, ..., \alpha_k)}{G(\alpha_1, ..., \alpha_k)} \in L(\mathcal{S})$ (where $F, G\in L[X_1, ..., X_k]$, $G\neq 0$ and $\alpha_1, ..., \alpha_k \in \mathcal{S}$) can be written in the form:
$$\sum_{i=1}^{n}\frac{r_i(\alpha_1, ..., \alpha_k)}{s_i(\alpha_1, ..., \alpha_k)} \beta_i, \text{ where } \frac{r_i}{s_i} \in K(X_1, ..., X_k)$$
By writing $F(X_1, ..., X_k)=\sum_{i=1}^{n}p_i \beta_i$ and $G(X_1, ..., X_k)=\sum_{i=1}^{n}q_i \beta_i$ where $p_i, q_i \in K[X_1, ..., X_k]$, we need to find elements $r_i, s_i \in K[X_1, ..., X_k]$ such that:
$$\frac{\sum_{i=1}^{n}p_i \beta_i}{\sum_{i=1}^{n}q_i \beta_i}=\sum_{i=1}^{n}\frac{r_i}{s_i}\beta_i$$
I thought about multiplying the above equation by $\sum_{i=1}^{n}q_i \beta_i$, but then I don't know how to deal with the products $\beta_i\beta_j$ which arise when expanding the product.
Is there a simpler way to do this? Thanks! |
We shall show that there is a simple and perhaps unexpected relationship between the arithmetic, geometric and harmonic means of two numbers, the sides of a right-angled triangle and the Golden Ratio.
First we'll define these terms and then indicate a simple proof of the formula which connects them all. As usual with NRICH articles, we'll leave readers a little work to do for themselves so we suggest you find a pen and paper.
Take any two numbers $a$ and $b$, where $0 < a < b$. The arithmetic mean (AM) is ${1 \over 2}(a+b)$; the geometric mean (GM) is $\sqrt{ab}$; and the harmonic mean (HM) is the reciprocal of the arithmetic mean of the reciprocals of the numbers. The HM is $${1\over {1\over 2}({1\over a} + {1\over b})}$$ which simplifies to $2ab/(a+b)$.
Consider the circle centre A and the tangent to the circle from the point M touching the circle at the point G, with PM = $a$, QM = $b$ and $a > b > 0$. You can work out the radius of the circle and show the length AM is $(a+b)/2$ (the arithmetic mean of $a$ and $b$). Using Pythagoras' theorem you can show the length GM is $\sqrt{ab}$ (the geometric mean of $a$ and $b$). Using the fact that the triangles AGM and GHM are similar you can show that HM is $2ab/(a+b)$ (the harmonic mean of $a$ and $b$).
The diagram shows that AM > GM > HM.
The Golden Ratio is said to give aesthetically pleasing proportions when you take a rectangle which is such that if you remove a square from it you have a rectangle of the same proportions.
The Golden Ratio $\phi$ is such that $\phi : 1 = 1 : (\phi - 1)$ so that $\phi^2 - \phi - 1 = 0$ and hence $\phi = {1\over 2}(1 + \sqrt{5})$.
In the diagram above used to illustrate the inequality AM > GM > HM, these three lengths did not appear in the same triangle. It can be shown that the AM, GM and HM of $a$ and $b$ can be the lengths of the sides of a right-angled triangle if and only if \begin{equation*} a = b\phi^3\end{equation*}
where $\phi = {1 \over 2}(1+\sqrt{5})$ , the Golden Ratio.
All you need to do to prove this remarkable result is to use elementary algebra to find $a/b$ where $[AM]^2 = [GM]^2 + [HM]^2$, that is
$$ \left[{(a+b)\over2}\right]^2 = \left[\sqrt{ab}\right]^2 +\left[{2ab\over(a+b)}\right]^2. $$
The details of this proof are given in the solution to the problem Pythagorean Golden Means. |
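If you just want to convince yourself numerically before working through the algebra, a few lines suffice (this checks the claim, it is not the proof):

```python
# Numerical check that with a = b * phi**3 the three means form a
# right-angled triangle: AM^2 = GM^2 + HM^2.
phi = (1 + 5 ** 0.5) / 2
b = 1.0
a = b * phi ** 3
am = (a + b) / 2          # arithmetic mean
gm = (a * b) ** 0.5       # geometric mean
hm = 2 * a * b / (a + b)  # harmonic mean
print(abs(am ** 2 - (gm ** 2 + hm ** 2)) < 1e-12)  # True
```

As a pleasing aside, with these values the harmonic mean itself comes out numerically equal to $\phi$.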
It is easier to understand Taleb's distinction between 'soft' and 'hard' American options if we understand from the beginning that he is talking about FX options, or options on other assets that behave similarly.
Under the Garman-Kohlhagen valuation model, the price of a European FX call is:
$$ C_{e}(S(\tau), K, \tau, r, r^f, \sigma ) = e^{-r^{f}\tau}S(\tau)N(d_+) - e^{-r\tau}KN(d_{\_}) \\d_{+} = \frac{1}{\sigma \sqrt{\tau}}\Big[\log{\frac{S(\tau)}{K}}+(r-r^f+\frac{1}{2}\sigma^2)\tau\Big] \\d_{\_} = d_{+} - \sigma \sqrt{\tau}$$
where:
- $S(\tau)$ is the exchange rate in units of domestic currency ($\$$) per $N_f$ units of foreign currency
- $N_f$ is a foreign notional amount; its actual value is not important
- $K = \$80$ is the option strike price in units of domestic currency per $N_f$ units of foreign currency
- $\sigma = 0.157$ is the volatility of the exchange rate $S(\tau)$
- $\tau$ is time to expiration
- $r = 0.06$ is the domestic currency risk-free rate
The American FX call price $C_a$ is greater than or equal to $C_e$ for any type of option. But for FX options this bound can be tightened further because under the Garman-Kohlhagen model a European call may have negative time value, i.e. cost less than its intrinsic value. Obviously, an American option can't cost less than intrinsic value due to its early exercise feature, so:
$$ C_{a}(S, K, \tau, r, r^f, \sigma ) \geq \max (C_{e}(S, K, \tau, r, r^f, \sigma ), S-K)$$
It is instructive to see how the low boundary of the price of the American option depends on $r^f$ assuming the other parameters fixed as in Taleb's 'soft' American option example, i.e. $\tau= \tau_0 = \frac{90}{360}$, $S(\tau_0) = \$100$:
When the foreign risk-free rate $r^f$ is less than $\approx 5\%$, the time value of both the American and the European option is greater than zero. This value is lost when the American option is exercised early and, generally speaking, we would have to take this into consideration, but we will not, since Taleb most likely assumes that this is not the case (see below).
Instead, if the foreign interest rate is greater than $5\%$, the European FX call costs less than its intrinsic value while the cost of the American one is equal to its intrinsic value. That's what Taleb probably refers to when he emphasizes that the option is an American one:
Assume also that the 3-month 80 call is worth $20, at least if it is American.
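The pricing formula and the lower bound above can be checked numerically. This is a sketch using only the Python standard library; the parameter values are the ones from Taleb's example, and the helper names (`gk_call`, `norm_cdf`) are mine, not from any pricing library:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def gk_call(S, K, tau, r, rf, sigma):
    """Garman-Kohlhagen price of a European FX call."""
    d_plus = (log(S / K) + (r - rf + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d_minus = d_plus - sigma * sqrt(tau)
    return exp(-rf * tau) * S * norm_cdf(d_plus) - exp(-r * tau) * K * norm_cdf(d_minus)

# Taleb's numbers: S = 100, K = 80, tau = 90/360, r = 6%, sigma = 0.157.
S, K, tau, r, sigma = 100.0, 80.0, 90.0 / 360.0, 0.06, 0.157
for rf in (0.00, 0.06, 0.10):
    C_e = gk_call(S, K, tau, r, rf, sigma)
    # the American lower bound is max(C_e, intrinsic value S - K)
    print(rf, round(C_e, 2), round(max(C_e, S - K), 2))
```

For low $r^f$ the European price stays above the intrinsic value of 20, while for $r^f$ around 10% it drops below it, so the American lower bound becomes the intrinsic value itself.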
Assume (as Taleb does) that the price of the put available on the market is negligible and $r^f$ is greater than $5\%$. Compare final payoffs in domestic currency for the two scenarios proposed by Taleb:
1. We keep the call till expiration and receive payoff $V_{call}$.
2. We exercise the call, borrowing $K$ and receiving $N_f$ units of foreign currency, buy a put with strike $K$, keep this position till expiration of the put and then exchange $N_f$ back to domestic currency. The received payoff is $V_{N_f + \text{put}}$.
Thus:
$$V_{call} = \max(S(0) - K,0) $$
Here $S(0)$ is the exchange rate at the time of expiration ($\tau = 0$).
Then:
$$ V_{N_f + \text{put}} = \underbrace{N_f\frac{\max(S(0), K)}{N_f} - K}_{N_f + \text{put} = \max(S(0) - K,0) = V_\text{call}} + \underbrace{N_f (e^{r^f \tau_0}-1)\frac{S(0)}{N_f} - K(e^{r\tau_0}-1)}_\text{the interest earned} \tag{1} \label{one}$$
Here $N_f (e^{r^f \tau_0}-1)$ is the interest earned in units of foreign currency and $\frac{S(0)}{N_f}$ is the exchange rate in units of domestic currency per 1 (not $N_f$!) unit of foreign currency.
It is "the interest earned" part of $ V_{N_f + \text{put}}$ which is used by Taleb to gauge 'hardness' of the FX option. In turn, we can further split it into:
$$ N_f(e^{r^f \tau_0}-1)\frac{S(0)}{N_f} - K(e^{r\tau_0}-1) = S(0)(e^{r^f \tau_0}-1) - K(e^{r\tau_0}-1) \\ \approx \tau_0 S(0)r^f - \tau_0 Kr = \underbrace{\tau_0 (S(0) - K)r}_\text{financing of intrinsic value} + \underbrace{\tau_0 S(0)(r^f-r)}_\text{carry cost of the underlying } \tag{2} \label{two} $$
So if "financing of intrinsic value" is much greater than "carry costs of the underlying": $$\tau_0 (S(0) - K)r \gg \tau_0 S(0)(r^f-r)$$ then American option is 'soft'. Otherwise it is 'hard'.
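A quick numerical sanity check of this split, with parameter values following Taleb's example (a sketch; the variable names are mine): the exact interest-earned term and its linearized decomposition agree closely.

```python
from math import exp

# Compare the exact "interest earned" term with its linearized split into
# "financing of intrinsic value" plus "carry cost of the underlying".
tau0, S0, K, r = 90.0 / 360.0, 100.0, 80.0, 0.06
for rf in (0.06, 0.10):
    exact = S0 * (exp(rf * tau0) - 1) - K * (exp(r * tau0) - 1)
    financing = tau0 * (S0 - K) * r      # financing of intrinsic value
    carry = tau0 * S0 * (rf - r)         # carry cost of the underlying
    print(rf, round(exact, 4), round(financing + carry, 4))
```

At $r^f = r$ the carry term vanishes and only the financing of intrinsic value remains, which is exactly the case Taleb uses.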
Now everything is ready to answer your questions.
Q1. Taleb's distinction between 'soft' and 'hard' options is easier to understand assuming that he is talking about FX options. An FX option is a 'call' option in one currency and 'put' in the other simultaneously, so you can assume that 'Soft American Option' may be 'put' as well as 'call'.
It is the relationship between foreign and domestic interest rates, not the growth or change of the exchange rate, that defines the 'softness' of the American option as described above.
It is interesting that Hull appreciates this fact:
In general, call options on high-interest currencies and put options on low-interest currencies are the most likely to be exercised prior to maturity.
but then gives what appears to be an incorrect explanation to this fact:
The reason is that a high-interest currency is expected to depreciate and a low-interest currency is expected to appreciate.
Q2. Taleb assumes that:

1. foreign and domestic interest rates are the same ($r^f \approx r = 0.06$);
2. the final exchange rate $S(0)$ is equal to \$100 per $N_f$ units of foreign currency.
Then he calculates "financing of intrinsic value" as in formula $\eqref{two}$ above; the "carry cost of the underlying" is zero in this case. Forgoing early exercise would forfeit this interest.
Q3. Replication might have better costs if "the interest earned" component in $\eqref{one}$ is positive. It is likely the case when the call is very deep in-the-money, volatility is low and $r^f \geq r$. If $r^f < r$, "the intelligent operator" will need to take into consideration the loss of time value due to early exercise (i.e. the difference between the intrinsic value and the market price of the exercised option).
Sets of the First and Second Categories in a Topological Space
Recall from the Dense and Nowhere Dense Sets in a Topological Space page that if $(X, \tau)$ is a topological space then a set $A \subseteq X$ is said to be dense in $X$ if the intersection of $A$ with all open sets (except for the empty set) is nonempty, that is, for all $U \in \tau \setminus \{ \emptyset \}$ we have that:

\begin{align} \quad A \cap U \neq \emptyset \end{align}
Furthermore, $A$ is said to be nowhere dense if the interior of the closure of $A$ is empty, that is:

\begin{align} \quad \mathrm{int} \left ( \bar{A} \right ) = \emptyset \end{align}
We will now look at two very important definitions regarding whether an arbitrary set $A \subseteq X$ can either be written as the union of a countable collection of nowhere dense subsets of $X$ or not.
Definition: Let $(X, \tau)$ be a topological space. A set $A \subseteq X$ is said to be of The First Category or Meager if $A$ can be expressed as the union of a countable number of nowhere dense subsets of $X$. If $A$ cannot be expressed as such a union, then $A$ is said to be of The Second Category or Nonmeager.
Note that in general it is much easier to show that a set $A \subseteq X$ of a topological space $(X, \tau)$ is of the first category, since we only need to find a countable collection of nowhere dense subsets, say $\{ A_1, A_2, ... \}$ (possibly finite), where each $A_i$ is nowhere dense, such that:

\begin{align} \quad A = \bigcup_{i=1}^{\infty} A_i \end{align}
Showing that $A \subseteq X$ is of the second category is much more difficult since we must show that no such union of a countable collection of nowhere dense subsets from $X$ equals $A$.
For an example of a set of the first category, consider the topological space $(\mathbb{R}, \tau)$ where $\tau$ is the usual topology of open intervals, and consider the set $\mathbb{Q} \subseteq \mathbb{R}$ of rational numbers. We already know that the set of rational numbers is countable, so the following union is a union of a countable collection of subsets of $X$:

\begin{align} \quad \mathbb{Q} = \bigcup_{q \in \mathbb{Q}} \{ q \} \end{align}
Each of the sets $\{ q \}$ is nowhere dense. Therefore $\mathbb{Q}$ can be expressed as the union of a countable collection of nowhere dense subsets of $X$, so $\mathbb{Q}$ is of the first category. |
Reflexive: A binary relation $R$ is reflexive if $a\ R\ a$ for every $a$.
(1) Is it true that $xx = 1$ for all $x \in \Bbb Z$?
(2) Is it true that $|x| = |x|$ for all $x \in \Bbb Z$?
(3) Is it true that $p\mid p$ for all $p \in \Bbb Z$?
(4) Is it true that $x^2 + y^2 = x^2 + y^2$ for every $(x,y) \in \Bbb R^2$?
Irreflexive: A binary relation $R$ is irreflexive if $a\ R\ a$ is never true for any $a$.
are any of the statements above FALSE for every $x,p$, or $(x,y)$?
Symmetric: A binary relation $R$ is symmetric if $b\ R\ a$ is true whenever $a\ R\ b$ is true.
(1) If $xy = 1$, is it always true that $yx = 1$?
(2) if $|x| = |y|$, is it always true that $|y| = |x|$?
(3) if $p\mid q$, is it always true that $q\mid p$?
(4) if $x^2 + y^2 = p^2 + q^2$, is it always true that $p^2 + q^2 = x^2 + y^2$?
Antisymmetric: $R$ is antisymmetric if whenever $a\ R\ b$ and $a \ne b$, it is false that $b\ R\ a$. (Some definitions may not include the restriction that $a \ne b$ - check your textbook or notes to find out what definition you use. I leave it in so that $\le$ and $\ge$ qualify.)
for any of the conditions above, is it true that when the first holds for unequal values, the second is never true?
Transitive: $R$ is transitive if whenever $a\ R\ b$ and $b\ R\ c$, we also have $a\ R\ c$.
(1) If $xy = 1$ and $yz = 1$, does it follow that $xz = 1$?
(2) if $|x| = |y|$ and $|y| = |z|$, does it follow that $|x| = |z|$?
(3) if $p\mid q$ and $q\mid r$, does it follow that $p\mid r$?
(4) if $x^2 + y^2 = p^2 + q^2$ and $p^2 + q^2 = u^2 + v^2$, does it follow that $x^2 + y^2 = u^2 + v^2$?
Equivalence relation: $R$ is an equivalence relation if it is reflexive, symmetric, and transitive.
Did any of the 4 relations pass these three tests?
Partial order: $R$ is a partial order if it is (either reflexive or irreflexive), antisymmetric, and transitive. (Again, your book may differ as to whether reflexive or irreflexive is required, or if either one will work. Irreflexive partial orders include $<$ and $>$. Reflexive partial orders include $\le$ and $\ge$.)
Are any of the 4 relations transitive and antisymmetric? Of those, are they either reflexive or irreflexive?
Total order: $R$ is a total order if it is a partial order, and satisfies that for all $a \ne b$, either $a\ R\ b$ or $b\ R\ a$ holds.
Of the partial orders, are there any two values in the set on which they are defined, for which the relation does not hold in either order? |
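On a finite window of $\Bbb Z$ these checks can be run mechanically. A sketch (the function names and the sampled range are mine; relation (3) is guarded against division by zero, and a finite sample can only refute a property, not prove it):

```python
from itertools import product

# Property checkers for a binary relation r(a, b) on a finite domain.
def is_reflexive(r, dom):
    return all(r(a, a) for a in dom)

def is_symmetric(r, dom):
    return all(r(b, a) for a, b in product(dom, repeat=2) if r(a, b))

def is_antisymmetric(r, dom):
    return all(not r(b, a) for a, b in product(dom, repeat=2) if r(a, b) and a != b)

def is_transitive(r, dom):
    return all(r(a, c) for a, b, c in product(dom, repeat=3) if r(a, b) and r(b, c))

dom = range(-5, 6)
abs_eq = lambda a, b: abs(a) == abs(b)        # relation (2)
divides = lambda a, b: a != 0 and b % a == 0  # relation (3), guarding a = 0

print(is_reflexive(abs_eq, dom), is_symmetric(abs_eq, dom), is_transitive(abs_eq, dom))
print(is_symmetric(divides, dom))  # divisibility is not symmetric
```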
Consider the following one-shot version of a labour market matching model. Let the labour force be normalized at 1, who, because there is only one period, all start out as unemployed. There is a very large number of firms who can enter the market and search for a worker. Firms who engage in search first have to pay a fixed cost, $k$. If a measure $v$ of firms enters the labour market, a constant returns to scale matching function $m(1,v)$ gives us the total measure of matches in the economy.
Within each match, the firm and the worker bargain for the wage, $w$, so that the workers get a constant proportion of $y$. Denote this proportion by $\beta$, which is interpreted as the bargaining power of the worker. Assume $\frac{k}{y} < 1 - \beta$ for the firm.
Define market tightness as $ b \equiv \frac{1}{v}$ and assume that the arrival rate for a firm is given by: $a_{F} = 1 - e^{-b}$.
Suppose that firms can enter the labour market freely if they pay the entry cost. What is the equilibrium value of $b$? Describe it graphically. Does it always exist? Is it unique?
My solution: The value of a vacancy: $$V = -k + a_{F}(b)(J-V),$$and the value of a filled job: $$J = y-w.$$ If the firms enter the labour market freely then $V=0$. Then from these two equations I am left with this equation $$ 1 - e^{-b} = a_{F}(b) = \frac{k}{y (1-\beta)}.$$
Graphically, the function looks something like this:
From the graph, it looks like that $b^*$ is unique, but how do I know if it always exists or not? |
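A small numerical sketch of the free-entry condition (the parameter values are illustrative, not from the problem): since $a_F(b) = 1 - e^{-b}$ increases strictly from 0 toward 1, a solution exists and is unique exactly when $\frac{k}{y(1-\beta)} < 1$, which is the stated assumption $\frac{k}{y} < 1 - \beta$.

```python
from math import log, exp

# Free entry pins down a_F(b*) = 1 - e^{-b*} = k / ((1 - beta) * y).
k, y, beta = 0.3, 1.0, 0.5              # illustrative numbers
target = k / ((1 - beta) * y)           # must be < 1 for a solution
b_star = -log(1 - target)               # invert 1 - e^{-b} = target
print(b_star, 1 - exp(-b_star))         # matches target
```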
Most gauge transformations in the standard model are easy to see are measurement invariant. Coordinate transformations, SU(3) quark colours, U(1) phase rotations for charged particles all result in no measurable changes. But how does this work for SU(2) rotations in electroweak theory, where...
Mainly I want to know the following thing: electrons when excited they tend to want to go back to ground state, right? One way is by photons, but how does that work? Accelerating charges creates EM waves, but in this case there was no acceleration, right? Or is the term accelerating only a way...
The full question is in the picture. I already solved a) and found 5.6E14 electrons per second. For b) I first found the power of the light by multiplying the intensity with the area: (6.0 W/m^2)(3.5E-4 m^2) = 0.0021 W. Then I tried to use the voltage from the graph but I am not sure which...
I'm a bit confused as to why can't you transmit AC current over a single wire.For instance, say you have an AC generator which induces potential difference at different points of the wire and thus, creating current. Downstream, the wire can be split and applied to a load. When the wire is...
Suppose you have a pair of electrons in the same quantum state, and are thus spin entangled, and they absorb a pair of photons and release them at the same time. How would this affect the photons? Would the photons be entangled? Would it affect the photon spin, and if so, how would it affect the...
Please let me know if the following reaction is possible for high energy electrons colliding with neutrons or neutron-rich nuclei: $$n+e^{-}\to \Delta^{-}+\nu_e.\tag{1}$$ If it is forbidden for some conservation law or for some other reason, please give me an explanation why. This reaction is...
1. Homework StatementLong and thin sample of silicon is stationary illuminated with an intensive optical source which can be described by a generation function ##G(x)=\sum_{m=-\infty}^\infty Kδ(x-ma)## (Dirac comb function). Setting is room temperature and ##L_p## and ##D_p## are given. Find...
If an electron is moving in a circle in a magnetic field, it produces a magnetic field in accordance to the right hand rule. If a proton is moving in a circle in a magnetic field, would it produce a magnetic field in accordance to the left hand equivalent to the right hand rule.
The Atom of Helium is doubly excited in 2p2 1DCan someone explain to me how these energy symbols work? I have a problem with what the 1D means specifically. I know 2p2 means two electrons in the 2p state. The 1 in 1D could be referring to electron being in a singleton, but I don't understand...
I feel like I must be missing something obvious, but I can't figure it out. I have the speed of an electron, and to calculate its frequency i used p = h/λ, then subbed in p =mv and λ= v/f. Giving me the equation f = mv2/h. However, I also could use E = 1/2 mv2 and E = hf to give me the equation...
I am wondering whether electrons have a net motion against an applied constant electric field in a conductor. Intuition tells me that "of course they should", but so far the math has shown me otherwise. Here are my current thoughts: 1) I cannot rely on the obsolete Drude model. What's more...
1. Homework Statement: Suppose an object has a charge of 1 C and gains #9.38 ✕ 10^18# electrons. When another object is brought in contact with the first object (after it gains the electrons), the resulting charge on the second object is 0.9 C. What was the initial charge (in Coulombs)?2...
Theory explains magnetism in iron as a combined effect of magnetic moments of electrons. Now, what is confusing me is that valence electrons in iron are supposed to be free. The valence band and conduction band overlap. So, what kind of orbital and spin-ular momentum do these free electrons...
So I understand in a battery that an anode (such as zinc) and a cathode (such as carbon) are separated by an electrolyte. I also understand that the electrons want to flow into the cathode, but can't get to them, so as soon as a conductor connects the two terminals, current can flow. However...
So, I'm new to electronics and I started to build some circuits with LEDs. I read up on how LEDs work and how they consist of a doped semiconductor material etc. But when I actually went to wire the LED in, it said the anode should be connected to the positive terminal of the power source. I'm...
So in my physics textbook a problem is stated. We are given an external electric field directed downwards of 150N/C. We are then told that an electron is released in the electric field and it moves upwards 520m. Finally we are asked to calculate the change in electric potential energy of the...
Dear all, sorry I made a new post similar to the previous post "Initial conditions..", however, a critical point was missed in the previous discussion:The initial conditions y(0)=1 and y'(0)=0 are fine and help in solving the Schrödinger equation, however, studying free electrons, the equation...
We all think that electric current is the electrons flow without mass transfer in conductor, i.e. charged lepton flow.But charged baryons flow can also deemed as "electric" current, e.g. ionic current.My question is that charged baryons flow can induce magnetic field? Same amperes, then same B...
1. Homework StatementHi! So I stumbled upon this simple "plug n' play" exercise in my Physics textbook. Basically it gives you certain molecules/atoms, and tells you to measure the Electric Charge, and its Mass. Pretty simple, but I hit upon some hickups. Anyway, let's get to it:Find the...
In a diode, we have N side, P side, and a depletion region, made of positive and negative charged sides. N side and P side of the diodes are neutral charge.In N side there are free electrons. In the positive charged side of the depletion region, there are positively ionized atoms that "lack"...
What gives metals their Lustre?One of my books(living science chemistry by arun syamal) says that the electrons achieve a higher energy state by absorbing energy and come back to their ground state by emitting it of which light is a part but my teacher tells that it is due to the crystalline...
The prefix is a bit irrelevantThis is on renormalisation.How do they cancel out? Isn't it adding? So the mass experimental = m + (c2correction) so how do you cancel out the m and correction? I'm new to this area (just finished watching lectures by Richard Feynman, specifically a 4 lecture...
I've been reading about the photoelectric effect, and something got me thinking. If the frequency of light shone onto the metal is below the threshold frequency, no electrons are liberated from the surface of the metal, since electrons absorb quanta of energy, so if that light is shone for a...
The wave-particle duality of light was demonstrated first with Thomas Young's 1801 Interference Experiment...and then more clearly with the Double Slit Experiment. Both of these were done with light (so photons).My question is -- How did we come to understand the same of electrons? Did we...
Light bulbs and cathode ray tubes are structurally similar in some respects. For example, both contain a filament -- in the light bulb, the filament heats up to produce light, while in a cathode ray tube, the filament emits electrons, which are then steered into a target (in a CRT TV, the...
Hello,I have a presentation tomorrow and in a segment, I talk about light absorption. It's more conceptual than technical. I did quite a bit of research on the topic but because of simplifying information I may have butchered the facts and written something wrong. Could anyone please confirm/...
1. Homework Statement: An electron is released from rest in a weak electric field given by -2.8 x 10^-10 N/C in the $\hat{j}$ direction. After the electron has traveled a vertical distance of 1.9 µm, what is its...
Electrons are moving VERY fast. However, they don't have a high drift velocity in a circuit. Why? Is it because every time they advance a little bit they collide with an atom? Or is it because the electric field in a circuit is not strong enough so the electrons don't get pushed enough? |
Prototype-based Machine Learning on Distance Data.
Copyright (C) 2019 - Benjamin Paassen
Machine Learning Research Group Center of Excellence Cognitive Interaction Technology (CITEC) Bielefeld University
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program; if not, see http://www.gnu.org/licenses/.
Introduction
This scikit-learn compatible, Python3 library provides several algorithms to learn prototype models on distance data. At this time, this library features the following algorithms:

- Relational Neural Gas (Hammer and Hasenfuss, 2007) for clustering,
- Relational Generalized Learning Vector Quantization (Hammer, Hofmann, Schleif, and Zhu, 2014) for classification, and
- Median Generalized Learning Vector Quantization (Nebel, Hammer, Frohberg, and Villmann, 2015) for classification.
Refer to the Quickstart Guide for a note on how to use these models and refer to the Background section for more details on the algorithms.
Note that this library is licensed under the GNU General Public License Version 3 (see the Licensing section below). If you intend to use this library in academic work, please cite the respective reference paper.
Installation
This package is available on pypi as `proto_dist_ml`. You can install it via

pip install proto_dist_ml
QuickStart Guide
For an example we recommend taking a look at the demo in the notebook `demo.ipynb`. In general, all models in this library follow the scikit-learn convention, i.e. you need to perform the following steps:

1. Instantiate your model, e.g. via `model = proto_dist_ml.rng.RNG(K)`, where `K` is the number of prototypes.
2. Fit your model to training data, e.g. via `model.fit(D)`, where `D` is the matrix of pairwise distances between your training data points.
3. Perform a prediction for test data, e.g. via `model.predict(D)`, where `D` is the matrix of distances from the test to the training data points.
Background
The basic idea of prototype models is that we can cluster and classify data by assigning them to the cluster/class of the closest prototype, where a prototype is a data point that represents the cluster/class well. In the case of distance data, we can not express a prototype in vectorial form but instead need to use an indirect form, namely a convex combination of existing data points. In other words, our $k$th prototype $\vec w_k$ is defined as

\vec w_k = \sum_{i=1}^m \alpha_{k, i} \cdot \vec x_i \qquad \text{where } \sum_{i=1}^m \alpha_{k, i} = 1 \text{ and } \alpha_{k, i} \geq 0 \quad \forall i

where $\vec x_1, \ldots, \vec x_m$ are the training data points and where $\alpha_{k, 1}, \ldots, \alpha_{k, m}$ are the convex coefficients representing prototype $\vec w_k$. Because the prototype is fully specified by the data and the convex coefficients, we do not need any explicit form for $\vec w_k$ anymore.
To cluster/classify new data, we now need to determine the distance between a data point $x$ and a prototype $w_k$. As it turns out, this distance can also be expressed solely in terms of the convex coefficients and the data-to-data distances. In particular, we obtain:

d(x, w_k)^2 = \sum_{i=1}^m \alpha_{k, i} \cdot d(x, x_i)^2 - \frac{1}{2} \sum_{i=1}^m \sum_{j=1}^m \alpha_{k, i} \cdot \alpha_{k, j} \cdot d(x_i, x_j)^2

In matrix-vector notation we obtain:

d(x, w_k)^2 = {\vec \alpha_k}^T \cdot \vec d^2 - \frac{1}{2} {\vec \alpha_k}^T \cdot D^2 \cdot \vec \alpha_k

where $\vec d^2$ is the vector of squared distances between $x$ and all training data points $x_i$ and where $D^2$ is the matrix of squared distances between the training data points.
The main challenge for distance-based prototype learning is now to optimize the coefficients $\alpha_{k, i}$ according to some meaningful loss function. The loss function and its optimization differ between the algorithms. In more detail, we take the following approaches.
Relational Neural Gas
Relational neural gas (RNG; Hammer and Hasenfuss, 2007) is a clustering approach that tries to optimize the loss function
\sum_{i=1}^m \sum_{k=1}^K h_{i, k} \cdot d(x_i, w_k)^2
where $h_{i, k}$ quantifies how responsible prototype $w_k$ is for data point $x_i$. This term is calculated as follows:

h_{i, k} = \exp(-r_{i, k} / \lambda) \qquad \text{where } r_{i, k} = |\{ l | d(x_i, w_l) < d(x_i, w_k) \}|

In other words, $w_k$ is the $r_{i, k}$-closest prototype to data point $x_i$, and the lower ranked a prototype is (i.e. the closer it is), the higher is its responsibility for the data point. $\lambda$ is a scaling factor that expresses how many prototypes are still considered. Per default, we start with $\lambda = K / 2$ and then anneal $\lambda$ until $\lambda = 0.01$, i.e. only the closest prototype is considered. At that point, the loss above is equivalent to the $K$-means loss.

Given the current values for $h_{i, k}$, optimizing the convex coefficients $\alpha_{k, i}$ is possible in closed form. In particular, we obtain $\alpha_{k, i} = h_{i, k} / \sum_j h_{j, k}$. The RNG training procedure thus consists of three steps which are iterated in each training epoch:

1. Compute the responsibilities $h_{i, k}$.
2. Compute the new convex coefficients $\alpha_{k, i}$.
3. Decrease $\lambda$.
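The per-epoch updates can be sketched as follows. This is a toy NumPy version operating on precomputed squared data-to-prototype distances; the names are mine and not the library's API:

```python
import numpy as np

def rng_epoch(Dp, lam):
    """One RNG update. Dp[i, k] = d(x_i, w_k)^2.
    Returns responsibilities h[i, k] and new coefficients alpha[i, k]."""
    ranks = np.argsort(np.argsort(Dp, axis=1), axis=1)  # r_{i,k}: rank of w_k for x_i
    h = np.exp(-ranks / lam)                            # responsibilities
    alpha = h / h.sum(axis=0, keepdims=True)            # normalize over data points
    return h, alpha

Dp = np.random.rand(6, 2)        # 6 data points, 2 prototypes
h, alpha = rng_epoch(Dp, lam=1.0)
print(alpha.sum(axis=0))         # each prototype's coefficients sum to 1
```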
Relational Generalized Learning Vector Quantization
Relational generalized learning vector quantization (RGLVQ; Hammer, Hofmann, Schleif, and Zhu, 2014) is a classification approach which aims at optimizing the generalized learning vector quantization loss:
\sum_{i=1}^m \Phi\Big(\frac{d_i^+ - d_i^-}{d_i^+ + d_i^-}\Big)
where $d_i^+$ is the distance of data point $x_i$ to the closest prototype with the same label, $d_i^-$ is the distance of data point $x_i$ to the closest prototype with a different label, and $\Phi$ is a squashing function (such as tanh or the logistic function). Note that data point $x_i$ is correctly classified if and only if $d_i^+ - d_i^- < 0$. As such, the GLVQ loss can be regarded as a soft approximation of the classification error.
Note that this loss has the drawback that distances need to be strictly positive in order to guarantee a nonzero denominator. This excludes non-Euclidean distances (i.e. distances which do not correspond to an inner product) because these may imply negative data-to-prototype distances.
We optimize this loss via L-BFGS, restricting the coefficients to be convex in each step. The gradient follows directly from the formula above and the distance formula above. For more details, refer to (Hammer et al., 2014).
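For intuition, the loss itself is cheap to compute once $d_i^+$ and $d_i^-$ are known. A small sketch with $\Phi = \tanh$ (the names are mine):

```python
import numpy as np

# GLVQ loss from per-point distances to the closest correct (d_plus)
# and closest wrong (d_minus) prototype, with Phi = tanh.
def glvq_loss(d_plus, d_minus):
    mu = (d_plus - d_minus) / (d_plus + d_minus)  # in (-1, 1); negative = correct
    return np.tanh(mu).sum()

# A point classified correctly with a wide margin gives a negative term:
print(glvq_loss(np.array([0.2]), np.array([1.0])) < 0)  # True
```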
Median Generalized Learning Vector Quantization
Median generalized learning vector quantization (MGLVQ; Nebel, Hammer, Frohberg, and Villmann, 2015) is a variant of GLVQ that restricts prototypes to be strictly data points, i.e. for each prototype $w_k$ there exists exactly one $i$ such that $\alpha_{k, i} = 1$, and every other coefficient is zero. This has two key advantages. First, it permits non-Euclidean and even asymmetric distances because we do not rely on an interpolation between data points. Second, it is more efficient during classification because we can compute the distances to the prototypes directly and do not need to use the relational distance formula above.
However, MGLVQ is also more challenging to train because we can not perform a smooth gradient method but instead must apply a discrete optimization scheme. In this toolbox, we optimize the GLVQ loss (see above) via greedy hill climbing, i.e. we try to find any prototype-datapoint combination that would reduce the loss and apply the first such combination we find until no such move exists anymore.
Contents
This library contains the following files.
demo.ipynb: A demo script illustrating how to use this library.
LICENSE.md: A copy of the GPLv3 license.
mglvq_test.py: A set of unit tests for mglvq.py.
proto_dist_ml/mglvq.py: An implementation of median generalized learning vector quantization.
proto_dist_ml/rglvq.py: An implementation of relational generalized learning vector quantization.
proto_dist_ml/rng.py: An implementation of relational neural gas.
README.md: This file.
rglvq_test.py: A set of unit tests for rglvq.py.
rng_test.py: A set of unit tests for rng.py.
Licensing
This library is licensed under the GNU General Public License Version 3.
Dependencies

Literature

Hammer, B. & Hasenfuss, A. (2007). Relational Neural Gas. Proceedings of the 30th Annual German Conference on AI (KI 2007), 190-204. doi:10.1007/978-3-540-74565-5_16.

Hammer, B., Hofmann, D., Schleif, F., & Zhu, X. (2014). Learning vector quantization for (dis-)similarities. Neurocomputing, 131, 43-51. doi:10.1016/j.neucom.2013.05.054.

Nebel, D., Hammer, B., Frohberg, K., & Villmann, T. (2015). Median variants of learning vector quantization for learning of dissimilarity data. Neurocomputing, 169, 295-305. doi:10.1016/j.neucom.2014.12.096.
$$Tr(\rho^{AB} (\sigma^A \otimes I/d)) = Tr(\rho^A \sigma^A)$$
I came across the above, but I'm not sure how it's true. I figured they first partial traced out the B subsystem, and then trace A, but I don't see how you are allowed to partial trace out B from both the factors in the arguments. A proof or any intuition on this would be appreciated.
Edit 1:
The notation
$\rho^{AB}$ is a state in Hilbert space $H_A \otimes H_B$
$\sigma^A$ is a state in Hilbert space $H_A$
$\rho^A$ is $\rho^{AB}$ with $B$ subsystem traced out.
$I/d$ is the maximally mixed state on the Hilbert space $H_B$.
I saw this being used in Nielsen and Chuang, section 11.3.4, in the proof of subadditivity of entropy.
Edit 2:
So, I tried to write an answer based on DaftWullie's comment and Алексей Уваров's answer, but I am stuck again.
So, $$\rho^{AB} = \sum_{mnop} \rho_{mnop} |mo\rangle \langle np|$$
Then $$\rho^{A} = \sum_{mno} \rho_{mnoo} |m\rangle \langle n|$$
Let $$\sigma^A = \sum_{ij} \sigma_{ij} |i\rangle \langle j|$$
And $$I/d = \sum_{xy} [I/d]_{xy} |x\rangle \langle y|$$
RHS
$$Tr(\rho^A \sigma^A)\\ = Tr(\sum_{mno} \rho_{mnoo} |m\rangle \langle n|\sum_{ij} \sigma_{ij} |i\rangle \langle j|)\\ = Tr(\sum_{mnoj} \rho_{mnoo} \sigma_{nj} | m \rangle \langle j|)\\ = \sum_{mno} \rho_{mnoo} \sigma_{nm}$$
LHS
$$Tr(\rho^{AB} (\sigma^A \otimes I/d))\\ = Tr(\sum_{mnop} \rho_{mnop} |mo\rangle \langle np| \sum_{ijxy} \sigma_{ij} [I/d]_{xy} |ix\rangle \langle jy|)\\ = Tr(\sum_{mnoxjy}\rho_{mnox} \sigma_{nj} [I/d]_{xy} | mo \rangle \langle jy |)\\ = \sum_{mnox} \rho_{mnox} \sigma_{nm} [I/d]_{xo}\\ = (1/d)\sum_{mny} \rho_{mnyy} \sigma_{nm}$$
Which is the same as the RHS, but there's an extra $1/d$ factor?
Also, am I thinking about this the wrong way? Is there a simpler way to look at this? |
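A quick numerical check (numpy; assuming the identity is meant with $I$ rather than $I/d$) agrees with the index computation above: with $\sigma^A \otimes I$ the two sides match, and inserting $I/d$ does introduce the extra $1/d$ factor:

```python
import numpy as np

def random_density(n, rng):
    # random density matrix from a Ginibre matrix
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

rng = np.random.default_rng(0)
dA, dB = 2, 3
rho_AB = random_density(dA * dB, rng)
sigma_A = random_density(dA, rng)

# partial trace over B: reshape to (dA, dB, dA, dB), trace out axes 1 and 3
rho_A = np.trace(rho_AB.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

lhs = np.trace(rho_AB @ np.kron(sigma_A, np.eye(dB)))
rhs = np.trace(rho_A @ sigma_A)
assert np.isclose(lhs, rhs)  # Tr(rho_AB (sigma ⊗ I)) = Tr(rho_A sigma)

lhs_d = np.trace(rho_AB @ np.kron(sigma_A, np.eye(dB) / dB))
assert np.isclose(lhs_d, rhs / dB)  # with I/d an extra 1/d appears
```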
I was looking for why optical quantum computers don't need "extremely low temperatures" unlike superconducting quantum computers. Superconducting qubits usually work in the frequency range 4 GHz to 10 GHz. The energy associated with a transition frequency $f_{10}$ in quantum mechanics is $E_{10} = h f_{10}$ where $h$ is Planck's constant. Comparing the ...
The existing answer does a good job at describing the state that comes from a SPDC configuration at low conversion efficiency, but it's also worth noting that the single-photon behaviour is not all there is to the process. Thus, in particular, if your conversion efficiency (or your detection time / efficiency / SNR) is good enough that you can detect (and ...
Background: First of all, I'll use $\lvert H\rangle$ as a horizontally polarised state and $\lvert V\rangle$ as a vertically polarised state. There are three modes of light involved in the system: pump (p), taken to be a coherent light source (a laser); as well as signal and idler (s/i), the two generated photons. The Hamiltonian for SPDC is given by $H = \...
Because light, at the right frequencies, interacts weakly with matter. In the quantum regime, this translates to single photons being largely free of the noise and decoherence that is the main obstacle with other QC architectures. The surrounding temperature doesn't disturb the quantum state of a photon as much as it does when the quantum information is ...
A standard reference for linear optical quantum computing is Kok et al. 2009 (quant-ph/0512071). If one qubit is encoded in the polarization degree of freedom of a single photon, and the second qubit in the path degree of freedom of the same photon, then a CNOT gate is trivially implemented by a polarizing beamsplitter. This is a kind of beamsplitter that ...
It appears to be true, up to a point. As I read Scott Aaronson's paper, it says that if you start with 1 photon in each of the first $M$ modes of an interferometer, and find the probability $P_S$ that $s_i$ photons are output in each mode $i\in\{1,\ldots, N\}$ where $\sum_i s_i=M$, it is $$P_S=\frac{|\text{Per}(A)|^2}{s_1!s_2!\ldots s_M!}.$$ So, indeed, ...
According to this UK-oriented report by Gooch and Housego dated May 8, 2018, quantum computing is only one of several main key applications expected to have a market impact: clock technology/timing (e.g. bridging between the optical frequencies typical of atomic clocks and electrical/microwave frequencies typical of timing signals within ...
You cannot efficiently recover the absolute values of the amplitudes, but if you allow for arbitrarily many samples, then you can estimate them to whatever degree of accuracy you like. More specifically, if the input state is a single photon in each of the first $n$ modes, and one is willing to draw an arbitrary number of samples from the output, then it is ...
To start off, I would really suggest you to read this review on "Quantum information with continuous variables" (cv). It covers most of your questions with the cv architecture. Since it is a very big review, I will try to address your questions with what I can remember from reading that paper and glancing over it again now. For discrete variables (dv), as you ...
By photon qubits, I'm assuming that you meant single-photon qubit systems. Can one use squeezed light to effect multi-qubit operations on photon qubits, or are these completely independent approaches? There are two protocols in quantum communication, namely discrete-variable (dv) and continuous-variable (cv). Squeezed light qubits are a part of cv ...
Your question asks two questions that are less-related than you might hope. First, how do we increase the probability of down-conversion occurring? This is fundamentally a question about material properties: the chance per unit length of down-conversion occurring is proportional to $\chi^{(2)}$; if our material of choice doesn't have good phase-matching ...
At Xanadu, we're using integrated quantum photonics to build our photonic quantum computing chips. In this case, we have integrated chips containing waveguides --- these are coupled to lasers to generate input resource states, undergo manipulation on the chip, and then are measured via a variety of detectors available in quantum optics. These can include ...
Yes. The kets themselves can have arbitrary labels, and it's just for you to establish the connection between them and the physical scenario. There's no reason why you can't have the physical scenario you've specified and, indeed, people frequently do.
Here's some relevant work: Optimizing type-I polarization-entangled photons, by Radhika Rangarajan, Michael Goggin, and Paul Kwiat. Abstract: Optical quantum information processing needs ultra-bright sources of entangled photons, especially from synchronizable femtosecond lasers and low-cost cw-diode lasers. Decoherence due to timing information and ...
DanielSank is correct, but I think the answer is actually even more subtle. If there was no loss, there would also be no way for the background radiation to leak into your quantum device. Even if it was initially thermally excited, one could actively reset the state of the qubits. Thus, in addition to thermal excitations of microwave qubits, the fundamental ...
While the Pauli-$Z$ matrix is a 2 x 2 matrix, there are more basis states that need to be considered, namely the vacuum. The atomic basis states are $\left|0\right>$ and $\left|1\right>$, representing the number of excitations in the atom (as it's only a two-level atom, there can't be more than 1 excitation) and this is what $Z$ acts on. Similarly, the ...
This is very much possible, and is a very general technique of how product systems in composite states are coupled, here of the form $|n_1\rangle |n_2\rangle$. This kind of general ket is a solution of Hamiltonian interaction/coupling terms like $V\sim (a_1^\dagger a_2 +h.c)$ which describe the exchange of one quantum (between the two optical cavities ...
The Fixed Point Method for Approximating Roots
We saw on the Fixed Points page that $\alpha$ is a fixed point of $g$ if $\alpha = g(\alpha)$. We will now discuss why fixed points are important in finding roots. Suppose that we want to solve $f(\alpha) = 0$. Then we can define a function $g$ with a fixed point at $\alpha$ in many different ways, so that if $\alpha$ is a fixed point of $g$, then $f$ has a zero at $x = \alpha$. For example, consider the function $f(x) = x^2 - 3x + 1$. Suppose that we want to find a root of $f$. Of course, the easiest method would be applying the quadratic formula; however, let's put that aside for the time being. We want to find $\alpha$ such that $f(\alpha) = 0$, that is:
\begin{align} \quad \alpha^2 - 3\alpha + 1 = 0 \end{align}
Now suppose that we let $g(\alpha) = -\frac{1}{\alpha - 3}$ for $\alpha \neq 3$ (clearly $3$ is not a root of the original function since $f(3) \neq 0$). Then we have that:
\begin{align} \quad \alpha^2 - 3\alpha + 1 = 0 \quad \Leftrightarrow \quad \alpha (\alpha - 3) = -1 \quad \Leftrightarrow \quad \alpha = -\frac{1}{\alpha - 3} = g(\alpha) \end{align}
So $\alpha$ is a fixed point of $g$. Furthermore, $g(\alpha) - \alpha = 0$ precisely when $f(\alpha) = 0$ (for $\alpha \neq 3$), so $\alpha$ is a root of $f$. Thus if we can approximate a fixed point of $g$, then we consequently approximate a root of $f$.
Let $f$ be a continuous function and suppose that there exists a root $\alpha$ of $f$ on the interval $[a, b]$, that is, there exists $\alpha \in [a, b]$ such that $f(\alpha) = 0$. If we can rewrite $f(x) = 0$ as $x = g(x)$ for some continuous function $g$ (note that the form $x = g(x)$ may not be unique), then it is possible that the recursive sequence $x_{n+1} = g(x_n)$ will converge, and if this sequence does converge, say to some $\alpha$, then:
\begin{align} \quad \alpha = \lim_{n \to \infty} x_{n+1} = \lim_{n \to \infty} g(x_n) = g \left ( \lim_{n \to \infty} x_n \right ) = g(\alpha) \end{align}
We are now ready to look at the Fixed Point Method for finding roots of a function.
Theorem 1 (The Fixed Point Method): Suppose that $f$ is a continuous function on $[a, b]$ and that we want to solve $f(x) = 0$ in the form $x = g(x)$ where $g$ is continuous on $[a, b]$. A fixed point $\alpha$ of $g$ is therefore a root of $f$. Step 1: Let $x_0$ be an initial approximation. Step 2: Generate a sequence $\{ x_n \}$ where $x_{n+1} = g(x_n)$ for $n \geq 0$.
If the sequence $\{ x_n \}$ converges, then $\lim_{n \to \infty} x_n = \alpha$.
In Theorem 1 above, the Fixed Point Method may not converge. The following graph represents a scenario for which the sequence $\{ x_n \}$ converges to the fixed point of interest.
Meanwhile, the next graph represents a scenario for which the sequence $\{ x_n \}$ diverges. |
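The iteration in Theorem 1 is easy to try out; here is a minimal sketch for the example above, $g(x) = -\frac{1}{x - 3}$, whose fixed point is the smaller root $\frac{3 - \sqrt{5}}{2} \approx 0.38197$ of $f(x) = x^2 - 3x + 1$:

```python
# Fixed point iteration x_{n+1} = g(x_n) for g(x) = -1/(x - 3),
# whose fixed point is the smaller root (3 - sqrt(5))/2 of
# f(x) = x^2 - 3x + 1.
def fixed_point(g, x0, tol=1e-10, max_iter=100):
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("fixed point iteration did not converge")

g = lambda x: -1.0 / (x - 3.0)
root = fixed_point(g, x0=0.0)
print(root)  # ≈ 0.3819660113, and root**2 - 3*root + 1 ≈ 0
```

The iteration converges here because $|g'(\alpha)| = 1/(\alpha - 3)^2 \approx 0.15 < 1$ near the fixed point; for other rearrangements $x = g(x)$ of the same $f$ the sequence may diverge, as the second graph illustrates.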
For boolean-valued functions $f : \{-1,1\}^n \to \{-1,1\}$ the total influence has several additional interpretations. First, it is often referred to as the
average sensitivity of $f$ because of the following proposition: Proposition 27: For $f : \{-1,1\}^n \to \{-1,1\}$, \[ \mathbf{I}[f] = \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{sens}_f({\boldsymbol{x}})], \] where $\mathrm{sens}_f(x)$ is the sensitivity of $f$ at $x$, defined to be the number of pivotal coordinates for $f$ on input $x$. Proof: \begin{multline*} \mathbf{I}[f] = \sum_{i=1}^n \mathbf{Inf}_i[f] = \sum_{i=1}^n \mathop{\bf Pr}_{{\boldsymbol{x}}}[f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})] \\ = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[\boldsymbol{1}_{f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})}] = \mathop{\bf E}_{{\boldsymbol{x}}}\left[\sum_{i=1}^n \boldsymbol{1}_{f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})}\right] = \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{sens}_f({\boldsymbol{x}})]. \quad \Box \end{multline*}
The total influence of $f : \{-1,1\}^n \to \{-1,1\}$ is also closely related to the size of its
edge boundary; from Fact 14 we deduce: Examples 29 (recall Examples 15): For boolean-valued functions $f : \{-1,1\}^n \to \{-1,1\}$ the total influence ranges between $0$ and $n$. It is minimized by the constant functions $\pm 1$, which have total influence $0$. It is maximized by the parity function $\chi_{[n]}$ and its negation, which have total influence $n$; every coordinate is pivotal on every input for these functions. The dictator functions (and their negations) have total influence $1$. The total influence of $\mathrm{OR}_n$ and $\mathrm{AND}_n$ is very small: $n2^{1-n}$. On the other hand, the total influence of $\mathrm{Maj}_n$ is fairly large: roughly $\sqrt{2/\pi}\sqrt{n}$ for large $n$.
By virtue of Proposition 20 we have another interpretation for the total influence of monotone functions: Proposition 30: If $f : \{-1,1\}^n \to \{-1,1\}$ is monotone, then $\mathbf{I}[f] = \sum_{i=1}^n \widehat{f}(i)$.
This sum of the degree-$1$ Fourier coefficients has a natural interpretation in social choice:
Proposition 31: Let $f : \{-1,1\}^n \to \{-1,1\}$ be a voting rule for a $2$-candidate election. Given votes ${\boldsymbol{x}} = ({\boldsymbol{x}}_1, \dots, {\boldsymbol{x}}_n)$, let $\boldsymbol{w}$ be the number of votes which agree with the outcome of the election, $f({\boldsymbol{x}})$. Then \[ \mathop{\bf E}[\boldsymbol{w}] = \frac{n}{2} + \frac12 \sum_{i=1}^n \widehat{f}(i). \] Proof: By the formula for Fourier coefficients, \begin{equation} \label{eqn:deg-1-sum} \sum_{i=1}^n \widehat{f}(i) = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}}) {\boldsymbol{x}}_i] = \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}})({\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n)]. \end{equation} Now ${\boldsymbol{x}}_1 + \cdots + {\boldsymbol{x}}_n$ equals the difference between the number of votes for candidate $1$ and the number of votes for candidate $-1$. Hence $f({\boldsymbol{x}})({\boldsymbol{x}}_1 + \cdots + {\boldsymbol{x}}_n)$ equals the difference between the number of votes for the winner and the number of votes for the loser; i.e., $\boldsymbol{w} - (n-\boldsymbol{w}) = 2\boldsymbol{w} - n$. The result follows. $\Box$
Rousseau [Rou62] suggested that the ideal voting rule is one which maximizes the number of votes which agree with the outcome. Here we show that the majority rule has this property (at least when $n$ is odd): Theorem 32: The unique maximizers of $\sum_{i=1}^n \widehat{f}(i)$ among all $f : \{-1,1\}^n \to \{-1,1\}$ are the majority functions. In particular, $\mathbf{I}[f] \leq \mathbf{I}[\mathrm{Maj}_n] = \sqrt{2/\pi}\sqrt{n} + O(n^{-1/2})$ for all monotone $f$. Proof: From \eqref{eqn:deg-1-sum}, \[ \sum_{i=1}^n \widehat{f}(i) = \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}})({\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n)] \leq \mathop{\bf E}_{{\boldsymbol{x}}}[|{\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n|], \] since $f({\boldsymbol{x}}) \in \{-1,1\}$ always. Equality holds if and only if $f(x) = \mathrm{sgn}(x_1 + \cdots + x_n)$ whenever $x_1 + \cdots + x_n \neq 0$. The second statement of the theorem follows from Proposition 30 and Exercise 18 in this chapter. $\Box$
Let’s now take a look at more analytic expressions for the total influence. By definition, if $f : \{-1,1\}^n \to {\mathbb R}$ then \begin{equation} \label{eqn:tinf-gradient} \mathbf{I}[f] = \sum_{i=1}^n \mathbf{Inf}_i[f] = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{D}_i f({\boldsymbol{x}})^2] = \mathop{\bf E}_{{\boldsymbol{x}}}\left[\sum_{i=1}^n \mathrm{D}_i f({\boldsymbol{x}})^2\right]. \end{equation} This motivates the following definition:
Definition 33: The (discrete) gradient operator $\nabla$ maps the function $f : \{-1,1\}^n \to {\mathbb R}$ to the function $\nabla f : \{-1,1\}^n \to {\mathbb R}^n$ defined by \[ \nabla f(x) = (\mathrm{D}_1 f(x), \mathrm{D}_2 f(x), \dots, \mathrm{D}_n f(x)). \]
Note that for $f : \{-1,1\}^n \to \{-1,1\}$ we have $\|\nabla f(x)\|_2^2 = \mathrm{sens}_f(x)$, where $\| \cdot \|_2$ is the usual Euclidean norm in ${\mathbb R}^n$. In general, from \eqref{eqn:tinf-gradient} we deduce: Proposition 34: For $f : \{-1,1\}^n \to {\mathbb R}$, $\mathbf{I}[f] = \mathop{\bf E}_{{\boldsymbol{x}}}[\|\nabla f({\boldsymbol{x}})\|_2^2]$.
An alternative analytic definition involves introducing the Laplacian: Definition 35: The Laplacian operator $\mathrm{L}$ is the linear operator on functions $f : \{-1,1\}^n \to {\mathbb R}$ defined by $\mathrm{L} = \sum_{i=1}^n \mathrm{L}_i$.
In the exercises you are asked to verify the following:
$\displaystyle \mathrm{L} f (x) = (n/2)\bigl(f(x) - \mathop{\mathrm{avg}}_{i \in [n]} \{f(x^{\oplus i})\}\bigr)$, $\displaystyle \mathrm{L} f (x) = f(x) \cdot \mathrm{sens}_f(x) \quad$ if $f : \{-1,1\}^n \to \{-1,1\}$, $\displaystyle \mathrm{L} f = \sum_{S \subseteq [n]} |S|\,\widehat{f}(S)\,\chi_S$, $\displaystyle \langle f, \mathrm{L} f \rangle = \mathbf{I}[f]$.
We can obtain a Fourier formula for the total influence of a function using Theorem 19; when we sum that theorem over all $i \in [n]$ the Fourier weight $\widehat{f}(S)^2$ is counted exactly $|S|$ times. Hence:
Theorem 37: For $f : \{-1,1\}^n \to {\mathbb R}$, \begin{equation} \label{eqn:total-influence-formula} \mathbf{I}[f] = \sum_{S \subseteq [n]} |S| \widehat{f}(S)^2 = \sum_{k=0}^n k \cdot \mathbf{W}^{k}[f]. \end{equation} For $f : \{-1,1\}^n \to \{-1,1\}$ we can express this using the spectral sample: \[ \mathbf{I}[f] = \mathop{\bf E}_{\boldsymbol{S} \sim \mathscr{S}_{f}}[|\boldsymbol{S}|]. \]
Thus the total influence of $f : \{-1,1\}^n \to \{-1,1\}$ also measures the average “height” or degree of its Fourier weights.
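For small $n$ one can check Proposition 27 and Theorem 37 against each other by brute force; a sketch for $\mathrm{Maj}_3$, whose total influence is $3/2$:

```python
import itertools
from math import prod

def total_influence(f, n):
    # total influence as average sensitivity E_x[sens_f(x)] (Proposition 27)
    pts = list(itertools.product([-1, 1], repeat=n))
    flips = sum(
        f(x) != f(x[:i] + (-x[i],) + x[i + 1:])
        for x in pts for i in range(n)
    )
    return flips / len(pts)

def fourier_influence(f, n):
    # total influence as sum over S of |S| * fhat(S)^2 (Theorem 37)
    pts = list(itertools.product([-1, 1], repeat=n))
    acc = 0.0
    subsets = itertools.chain.from_iterable(
        itertools.combinations(range(n), k) for k in range(n + 1))
    for S in subsets:
        fhat = sum(f(x) * prod(x[i] for i in S) for x in pts) / len(pts)
        acc += len(S) * fhat ** 2
    return acc

maj3 = lambda x: 1 if sum(x) > 0 else -1
print(total_influence(maj3, 3), fourier_influence(maj3, 3))  # 1.5 1.5
```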
Finally, from Proposition 1.13 we have $\mathop{\bf Var}[f] = \sum_{k > 0} \mathbf{W}^{k}[f]$; comparing this with \eqref{eqn:total-influence-formula} we immediately deduce a simple but important fact called the
Poincaré inequality. Poincaré Inequality: For any $f : \{-1,1\}^n \to {\mathbb R}$, $\mathop{\bf Var}[f] \leq \mathbf{I}[f]$.
Equality holds in the Poincaré inequality if and only if all of $f$’s Fourier weight is at degrees $0$ and $1$; i.e., $\mathbf{W}^{\leq 1}[f] = \mathop{\bf E}[f^2]$. For boolean-valued $f : \{-1,1\}^n \to \{-1,1\}$, Exercise 1.19 tells us this can only occur if $f = \pm 1$ or $f = \pm \chi_i$ for some $i$.
For boolean-valued $f : \{-1,1\}^n \to \{-1,1\}$, the Poincaré inequality can be viewed as an (edge-)isoperimetric inequality, or (edge-)expansion bound, for the Hamming cube. If we think of $f$ as the ($\pm 1$-valued) indicator of a set $A \subseteq \{-1,1\}^n$ of "measure" $\alpha = |A|/2^n$, then $\mathop{\bf Var}[f] = 4\alpha(1-\alpha)$ (Fact 1.14), whereas $\mathbf{I}[f]$ is $n$ times the (fractional) size of $A$'s edge boundary. In particular, the Poincaré inequality says that subsets $A \subseteq \{-1,1\}^n$ of measure $\alpha = 1/2$ must have edge boundary at least as large as those of the dictator sets.
For $\alpha \notin \{0, 1/2, 1\}$ the Poincaré inequality is not sharp as an edge-isoperimetric inequality for the Hamming cube; for small $\alpha$ even the asymptotic dependence is not optimal. Precisely optimal edge-isoperimetric results (and also vertex-isoperimetric results) are known for the Hamming cube. The following simplified theorem is optimal for $\alpha$ of the form $2^{-i}$: Theorem: Let $f : \{-1,1\}^n \to \{-1,1\}$ and let $\alpha = \min\{\mathop{\bf Pr}[f = 1], \mathop{\bf Pr}[f = -1]\}$. Then $\mathbf{I}[f] \geq 2\alpha \log_2(1/\alpha)$.
This result illustrates an important recurring concept in the analysis of boolean functions: the Hamming cube is a “small-set expander”. Roughly speaking, this is the idea that “small” subsets $A \subseteq \{-1,1\}^n$ have unusually large “boundary size”. |
In the context of a mean-variance framework, consider an optimizing investor who chooses at time $T$ portfolio weights $w$ so as to maximize the quadratic objective function:
$$U(w) = E[R_p] - \frac{\gamma}{2}Var[R_p]= w'\mu - \frac{\gamma}{2}w'Vw$$
where $E$ and $Var$ denote the mean and variance of the uncertain portfolio rate of return $R_p = w'R_{T+1}$ to be realized at time $T + 1$, and $\gamma$ is the relative risk aversion coefficient. The optimal portfolio weights will be:
$$w^* = \frac{1}{\gamma}V^{-1}\mu $$
Could I have a reference that proves this result? Preferably a textbook that builds up to it.
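For completeness, the first-order condition already gives the result; a sketch of the unconstrained derivation (not a substitute for the requested reference):

```latex
\nabla_w U(w) = \mu - \gamma V w = 0
\quad\Longrightarrow\quad
w^* = \frac{1}{\gamma} V^{-1} \mu ,
```

and since the Hessian $\nabla_w^2 U = -\gamma V$ is negative definite whenever the covariance matrix $V$ is positive definite and $\gamma > 0$, this stationary point is the global maximum.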
This module shows how the HP 10bII+ financial calculator can be used to price a call option with the Black-Scholes (1973) model. A key feature of this calculator is the ability to return the normal lower tail probability for the value z. The alternative is often a probability table.
The HP 10bII+ financial calculator is approved for use in the GARP FRM exams, but not the CFA exams.
According to the Black-Scholes (1973) model, the theoretical price \(C\) of a European call option on a non-dividend-paying stock is $$\begin{equation} C=S_0 N(d_1)-Xe^{-rT}N(d_2) \end{equation}$$ where$$d_1=\frac {\log \left( \frac{S_0}{X} \right) + \left( r+ \frac {\sigma^2} {2} \right )T}{\sigma \sqrt{T}} $$ $$d_2=\frac {\log \left( \frac{S_0}{X} \right) + \left( r - \frac {\sigma^2} {2} \right )T}{\sigma \sqrt{T}} = d_1 - \sigma \sqrt{T}$$
In equation 1, \(S_0\) is the stock price at time 0, \(X\) is the exercise price of the option, \(r\) is the risk free interest rate, \(\sigma\) represents the annual volatility of the underlying asset, and \(T\) is the time to expiration of the option. Further discussion and examples in Excel can be found at: Black-Scholes option pricing
The Black-Scholes model on the HP 10bII+. Example: The stock price at time 0, six months before the expiration date of the option, is $42.00, the option exercise price is $40.00, the rate of interest on a government bond with 6 months to expiration is 5%, and the annual volatility of the underlying stock is 20%.
Calculation of the call price can be completed as a five-step process: step 1, \(d_1\); step 2, \(d_2\); step 3, \(N(d_1)\); step 4, \(N(d_2)\); and step 5, \(C\). To point the way, the call price from equation 1 is $4.08.
We need some planning first, because intermediate results are assigned to memory registers in the calculator.
The order of calculation is influenced by my exposure to the HP 12C (reverse polish notation) calculator. It uses a memory stack, so numbered registers can be reduced, but it does not have a normal lower tail probability function.
Replacing the variable names with their values will help in performing the calculation: \(d_1=\frac {\log \left( \frac{42.00}{40.00} \right) + \left( 0.05+ \frac {0.2^2} {2} \right )0.5}{0.2 \sqrt{0.5}}\). The numbered sections below follow the major stages of the calculation, and intermediate results are assigned to memory registers as follows: the numerator of \(d_1\) is stored in register 0 at step 9; the denominator of \(d_1\) in register 3 at step 12; the value of \(d_1\) in register 0 at step 15; \(N(d_1)\) in register 1 at step 17; \(N(d_2)\) in register 2 at step 22; and an intermediate result for \(C\) in register 4 at step 26.
Calculator mode: Chain
Display digits: 4
1a. \(d_1\) numerator
# HP 10bII+ Keystrokes Comment Display reads 1. 0.2 [Orange SHIFT down] [x²] \(\sigma^2\) [0.0400] 2. [\(\div\)] 2 Divide by 2 3. [+] 0.05 Add the rate 4. [x] 0.5 [=] Multiply by time and display the result [0.0350] 5. [Orange SHIFT down] [STO] 0 Store the displayed value in register 0 6. 42 [\(\div\)] 40 [=] Divide the stock price by the exercise price, and display the result [1.0500] 7. [Orange SHIFT down] [LN] Take the natural log of the value in the display [0.0488] 8. [+] [RCL] 0 [=] Add the displayed value to the recalled value from register 0 [0.0838] 9. [Orange SHIFT down] [STO] 0 Store the displayed value in register 0 [0.0838] 1b. \(d_1\) denominator
# HP 10bII+ Keystrokes Comment Display reads 10. 0.5 [Orange SHIFT down] [√x] Take the square root of time [0.7071] 11. [x] 0.2 [=] Multiply by 0.2 and display the result
[0.1414]
12. [Orange SHIFT down] [STO] 3 Store the displayed value in register 3 (for use later in \(d_2\))
[0.1414]
1c. \(d_1\)
# HP 10bII+ Keystrokes Comment Display reads 13. [Orange SHIFT down] [1/x] Take the reciprocal of the value in the display
[7.0711]
14. [x] [RCL] 0 [=] Multiply by the recalled value from register 0 [0.5925] 15. [Orange SHIFT down] [STO] 0 Store the displayed value \(d_1\) in register 0 (for use later in \(d_2\)) [0.5925] 2. \(N(d_1)\)
# HP 10bII+ Keystrokes Comment Display reads 16. [Blue SHIFT up] [ Z⇆P] Calculate the cumulative normal probability for the value in the display
[0.7232]
17. [Orange SHIFT down] [STO] 1 Store the displayed value \(N(d_1)\) in register 1 (for use later in C) [0.7232] 3. \(d_2\)
# HP 10bII+ Keystrokes Comment Display reads 18. [RCL] 0 Recall \(d_1\) from register 0
[0.5925]
19. [-] [RCL] 3 Subtract the value of \(\sigma \sqrt{T}\) recalled from register 3 [0.1414] 20. [=] Display the value of \(d_2\) [0.4511] 4. \(N(d_2)\)
# HP 10bII+ Keystrokes Comment Display reads 21. [Blue SHIFT up] [ Z⇆P] Calculate the cumulative normal probability for the value in the display
[0.6740]
22. [Orange SHIFT down] [STO] 2 Store the displayed value \(N(d_2)\) in register 2 (for use later in C) [0.6740] 5. C
# HP 10bII+ Keystrokes Comment Display reads 23. 0.05 [+/-] [x] 0.5 [=] Calculate the exponent: multiply the negative rate by time [-0.0250] 24. [Orange SHIFT down] [e^x] Raise e to (negative rate x time) [0.9753] 25. [x] [RCL] 2 [x] 40 [=] Multiply the displayed value by the recalled value from register 2, then multiply by the exercise price and display the result [26.2955] 26. [Orange SHIFT down] [STO] 4 Store the intermediate result in register 4 (for use in step 28)
[26.2955] 27. 42 [x] [RCL] 1 [=] Multiply the stock price by the value in register 1 [30.3760] 28. [-] [RCL] 4 [=] From the displayed value subtract the recalled value from register 4 [4.0805] |
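The keystroke sequence can be cross-checked with a short script; this sketch implements equation 1 directly, using `math.erf` for the normal lower-tail probability:

```python
from math import log, sqrt, exp, erf

def norm_cdf(z):
    # standard normal lower-tail probability N(z)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def bs_call(S0, X, r, sigma, T):
    # Black-Scholes (1973) price of a European call on a
    # non-dividend-paying stock (equation 1 above)
    d1 = (log(S0 / X) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - X * exp(-r * T) * norm_cdf(d2)

print(round(bs_call(42.0, 40.0, 0.05, 0.20, 0.5), 2))  # ≈ 4.08
```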
Preprints (rote Reihe) des Fachbereichs Mathematik, year of publication: 1996
271
The paper deals with parallel-machine and open-shop scheduling problems with preemptions and an arbitrary nondecreasing objective function. An approach to describe the solution region for these problems and to reduce them to minimization problems on polytopes is proposed. Properties of the solution regions for certain problems are investigated. It is proved that open-shop problems with unit processing times are equivalent to certain parallel-machine problems, where preemption is allowed at arbitrary time. A polynomial algorithm is presented transforming a schedule of one type into a schedule of the other type.
301
We extend the methods of geometric invariant theory to actions of non-reductive groups in the case of homomorphisms between decomposable sheaves whose automorphism groups are non-reductive. Given a linearization of the natural action of the group Aut(E) x Aut(F) on Hom(E,F), a homomorphism is called stable if its orbit with respect to the unipotent radical is contained in the stable locus with respect to the natural reductive subgroup of the automorphism group. We establish effective numerical conditions for a linearization such that the corresponding open set of semi-stable homomorphisms admits a good and projective quotient in the sense of geometric invariant theory, and such that this quotient is in addition a geometric quotient on the set of stable homomorphisms.
270
274
This paper investigates the convergence of the Lanczos method for computing the smallest eigenpair of a self-adjoint elliptic differential operator via inverse iteration (without shifts). Superlinear convergence rates are established, and their sharpness is investigated for a simple model problem. These results are illustrated numerically for a more difficult problem.
283
A regularization Levenberg-Marquardt scheme, with applications to inverse groundwater filtration problems (1996)
The first part of this paper studies a Levenberg-Marquardt scheme for nonlinear inverse problems where the corresponding Lagrange (or regularization) parameter is chosen from an inexact Newton strategy. While the convergence analysis of standard implementations based on trust region strategies always requires the invertibility of the Fréchet derivative of the nonlinear operator at the exact solution, the new Levenberg-Marquardt scheme is suitable for ill-posed problems as long as the Taylor remainder is of second order in the interpolating metric between the range and domain topologies. Estimates of this type are established in the second part of the paper for ill-posed parameter identification problems arising in inverse groundwater hydrology. Both transient and steady-state data are investigated. Finally, the numerical performance of the new Levenberg-Marquardt scheme is studied and compared to a usual implementation on a realistic but synthetic 2D model problem from the engineering literature.
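The basic Levenberg-Marquardt update the abstract builds on can be sketched generically; this shows only the standard step, not the paper's inexact-Newton choice of the regularization parameter, and the toy problem is invented for illustration:

```python
import numpy as np

def levenberg_marquardt_step(F, J, x, y, lam):
    """One Levenberg-Marquardt update for the residual y - F(x):
    solve (J^T J + lam * I) h = J^T (y - F(x)) and return x + h,
    where lam plays the role of the Lagrange/regularization parameter."""
    Jx = J(x)
    r = y - F(x)
    A = Jx.T @ Jx + lam * np.eye(x.size)
    h = np.linalg.solve(A, Jx.T @ r)
    return x + h

# toy nonlinear problem: F(x) = [x0^2, x0*x1], exact solution (2, 3)
F = lambda x: np.array([x[0] ** 2, x[0] * x[1]])
J = lambda x: np.array([[2 * x[0], 0.0], [x[1], x[0]]])
y = np.array([4.0, 6.0])
x = np.array([1.0, 1.0])
for _ in range(50):
    x = levenberg_marquardt_step(F, J, x, y, lam=1e-3)
print(x)  # close to (2, 3)
```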
280
This paper develops truncated Newton methods as an appropriate tool for nonlinear inverse problems which are ill-posed in the sense of Hadamard. In each Newton step an approximate solution for the linearized problem is computed with the conjugate gradient method as an inner iteration. The conjugate gradient iteration is terminated when the residual has been reduced to a prescribed percentage. Under certain assumptions on the nonlinear operator it is shown that the algorithm converges and is stable if the discrepancy principle is used to terminate the outer iteration. These assumptions are fulfilled , e.g., for the inverse problem of identifying the diffusion coefficient in a parabolic differential equation from distributed data.
277
A convergence rate is established for nonstationary iterated Tikhonov regularization, applied to ill-posed problems involving closed, densely defined linear operators, under general conditions on the iteration parameters. It is also shown that an order-optimal accuracy is attained when a certain a posteriori stopping rule is used to determine the iteration number.
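In the finite-dimensional case the nonstationary iteration is easy to state: step $k$ solves $(K^*K + \alpha_k I)x_k = K^*y + \alpha_k x_{k-1}$ for a chosen parameter sequence $(\alpha_k)$. A sketch (the toy problem is invented for illustration):

```python
import numpy as np

def iterated_tikhonov(K, y, alphas, x0=None):
    # nonstationary iterated Tikhonov regularization:
    # (K^T K + alpha_k I) x_k = K^T y + alpha_k x_{k-1}
    n = K.shape[1]
    x = np.zeros(n) if x0 is None else x0
    for a in alphas:
        x = np.linalg.solve(K.T @ K + a * np.eye(n), K.T @ y + a * x)
    return x

# mildly ill-conditioned toy problem with noiseless data
rng = np.random.default_rng(1)
K = rng.normal(size=(20, 5)) @ np.diag([1, 0.5, 0.1, 0.01, 0.001])
x_true = np.ones(5)
y = K @ x_true
x = iterated_tikhonov(K, y, alphas=[10.0 ** (-k) for k in range(1, 8)])
print(np.linalg.norm(K @ x - y))  # residual is small
```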
275
276
Let \(a_1,\dots,a_n\) be independent random points in \(\mathbb{R}^d\), spherically symmetrically but not necessarily identically distributed. Let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_n\), and for any \(k\)-dimensional subspace \(L\subseteq \mathbb{R}^d\) let \(\mathrm{Vol}_L(X) := \lambda_k(L\cap X)\) be the volume of \(X\cap L\) with respect to the \(k\)-dimensional Lebesgue measure \(\lambda_k\), \(k=1,\dots,d\). Furthermore, let \(F^{(i)}(t) := \Pr(\Vert a_i \Vert_2\leq t)\), \(t \in \mathbb{R}^+_0\), be the radial distribution function of \(a_i\). We prove that the expectation functional \(\Phi_L(F^{(1)}, F^{(2)},\dots, F^{(n)}) := E(\mathrm{Vol}_L(X))\) is strictly decreasing in each argument, i.e. if \(F^{(i)}(t) \le G^{(i)}(t)\), \(t \in \mathbb{R}^+_0\), but \(F^{(i)} \not\equiv G^{(i)}\), we show \(\Phi(\dots, F^{(i)}, \dots) > \Phi(\dots,G^{(i)},\dots)\). The proof is done in the more general framework of continuous and \(f\)-additive polytope functionals.
282
Let \(a_1,\dots,a_m\) be random points in \(\mathbb{R}^n\) that are independent and identically distributed spherically symmetrically in \(\mathbb{R}^n\). Moreover, let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_m\) and let \(L_k\) be an arbitrary \(k\)-dimensional subspace of \(\mathbb{R}^n\) with \(2\le k\le n-1\). Let \(X_k\) be the orthogonal projection image of \(X\) in \(L_k\). We call those vertices of \(X\) whose projection images in \(L_k\) are vertices of \(X_k\) as well the shadow vertices of \(X\) with respect to the subspace \(L_k\). We derive a distribution-independent sharp upper bound for the expected number of shadow vertices of \(X\) in \(L_k\).
Tangent measure distributions were introduced by Bandt and Graf as a means to describe the local geometry of self-similar sets generated by iteration of contractive similitudes. In this paper we study the tangent measure distributions of hyperbolic Cantor sets generated by contractive mappings, which are not similitudes. We show that the tangent measure distributions of these sets equipped with either Hausdorff or Gibbs measure are unique almost everywhere and give an explicit formula describing them as probability distributions on the set of limit models of Bedford and Fisher.
It is shown that Tikhonov regularization for an ill-posed operator equation \(Kx = y\) using a possibly unbounded regularizing operator \(L\) yields an order-optimal algorithm with respect to a certain stability set when the regularization parameter is chosen according to Morozov's discrepancy principle. A more realistic error estimate is derived when the operators \(K\) and \(L\) are related to a Hilbert scale in a suitable manner. The result includes known error estimates for ordinary Tikhonov regularization and also the estimates available under the Hilbert scale approach.
A polynomial function \(f : L \to L\) of a lattice \(\mathcal{L} = (L; \land, \lor)\) is generated from the identity function \(id(x)=x\) and the constant functions \(c_a(x) = a\) (for every \(x \in L\)), \(a \in L\), by applying the operations \(\land, \lor\) finitely often. Every polynomial function in one or also in several variables is a monotone function of \(\mathcal{L}\). If every monotone function of \(\mathcal{L}\) is a polynomial function, then \(\mathcal{L}\) is called order-polynomially complete. In this paper we give a new characterization of finite order-polynomially complete lattices. We consider doubly irreducible monotone functions and point out their relation to tolerances, especially to central relations. We introduce chain-compatible lattices and show that they have a non-trivial congruence if they contain a finite interval and an infinite chain. The consequences are two new results. A modular lattice \(\mathcal{L}\) with a finite interval is order-polynomially complete if and only if \(\mathcal{L}\) is a finite projective geometry. If \(\mathcal{L}\) is a simple modular lattice of infinite length, then every nontrivial interval is of infinite length and has the same cardinality as any other nontrivial interval of \(\mathcal{L}\). In the last sections we show the descriptive power of polynomial functions of lattices and present several applications in geometry.
Fix a non-empty open domain $\Omega\subseteq \mathbb{R}^d$ with compact closure, and a finite Borel measure $\mu$ on its closure $\overline{\Omega}$.
In Halmos' book it is shown that:
Classical Result: For any bounded function $f\in L^p_{\mu}(\Omega;\mathbb{R})$ and every $\epsilon >0$, there exists a continuous function $g$ such that $$\int_{x \in \Omega} |f(x)-g(x)|^p \mu(dx) < \epsilon.$$ Reference/Question: Is there an analogue of this result for Musielak–Orlicz spaces?
More specifically, I'm wondering: if $p:\Omega\rightarrow \mathbb{R}$ is a measurable function satisfying the usual conditions (for example, see this paper) and $f \in L^{p(x)}_{\mu}(\Omega)$, then for every $\epsilon>0$, can we find a continuous function $g$ on $\overline{\Omega}$ such that $$ \|f-g\|_{\mu,p}<\epsilon? $$
Background: Here $$ L^{p(x)}_{\mu}(\Omega)\triangleq \left\{f:\Omega \rightarrow \mathbb{R}: f \mbox{ is measurable and } \int_{x \in \overline{\Omega}} |f(x)|^{p(x)} \mu(dx)<\infty\right\},$$ which can be seen to be a Musielak–Orlicz space under the Luxemburg norm $\|\cdot\|_{\mu,p}$ defined by $$\|f\|_{\mu,p}\triangleq \inf\left\{\lambda >0 : \int_{x \in \overline{\Omega}} \left(\frac{|f(x)|}{\lambda}\right)^{p(x)} \mu(dx) \leq 1\right\}.$$
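As a numerical illustration (my own sketch, not part of the question above), the Luxemburg norm on a discrete measure space can be approximated by bisecting on $\lambda$, using the fact that the modular $\lambda \mapsto \int (|f|/\lambda)^{p(x)}\,d\mu$ is decreasing in $\lambda$:

```python
def luxemburg_norm(f, p, mu, tol=1e-10):
    """Approximate the Luxemburg norm ||f||_{mu,p} for a discrete measure.

    f, p, mu are lists: f[i] is the function value, p[i] the (variable)
    exponent, and mu[i] the measure of the i-th atom.
    """
    def modular(lam):
        # rho(f/lam) = sum_i |f_i / lam|^{p_i} * mu_i
        return sum((abs(fi) / lam) ** pi * mi for fi, pi, mi in zip(f, p, mu))

    # The modular is decreasing in lam, so bisect for modular(lam) <= 1.
    lo, hi = tol, 1.0
    while modular(hi) > 1:          # grow the bracket until feasible
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if modular(mid) <= 1:
            hi = mid
        else:
            lo = mid
    return hi

# With constant exponent p = 2 and counting measure this reduces to the
# ordinary L^2 norm: ||(3, 4)||_2 = 5.
n = luxemburg_norm([3.0, 4.0], [2, 2], [1.0, 1.0])
```

The bisection converges because the modular is continuous and strictly decreasing wherever it is finite, so the infimum in the definition is attained in the limit.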
The Countable Complement Topology
Recall from the Topological Spaces page that a set $X$ and a collection $\tau$ of subsets of $X$, together denoted $(X, \tau)$, is called a topological space if:
1. $\emptyset \in \tau$ and $X \in \tau$, i.e., the empty set and the whole set are contained in $\tau$.
2. If $U_i \in \tau$ for all $i \in I$ where $I$ is some index set, then $\displaystyle{\bigcup_{i \in I} U_i \in \tau}$, i.e., the union of any arbitrary collection of subsets from $\tau$ is contained in $\tau$.
3. If $U_1, U_2, ..., U_n \in \tau$, then $\displaystyle{\bigcap_{i=1}^{n} U_i \in \tau}$, i.e., the intersection of any finite collection of subsets from $\tau$ is contained in $\tau$.
We will now look at a topology that is similar to The Cofinite Topology.
Definition: Let $X$ be a nonempty set. The Countable Complement Topology on $X$ is the collection of subsets $\tau = \{ \emptyset \} \cup \{ U \subseteq X : U^c \: \mathrm{is \: countable} \}$.
Let's verify that $\tau$ is a topology.
For the first condition, clearly $\emptyset \in \tau$. Furthermore, we note that $X^c = X \setminus X = \emptyset$ which is a countable set, so $X \in \tau$.
For the second condition, let $\{ U_i \}_{i \in I}$ be an arbitrary collection of subsets of $X$ from $\tau$. If every $U_i = \emptyset$ then $\displaystyle{\bigcup_{i \in I} U_i = \emptyset \in \tau}$. Otherwise some $U_j \neq \emptyset$, so $U_j^c$ is countable, and by De Morgan's Laws we have that:

(1)
\begin{align} \quad \left ( \bigcup_{i \in I} U_i \right )^c = \bigcap_{i \in I} U_i^c \subseteq U_j^c \end{align}

A subset of a countable set is countable, so $\displaystyle{\left ( \bigcup_{i \in I} U_i \right )^c}$ is countable and so $\displaystyle{\bigcup_{i \in I} U_i \in \tau}$.
For the third condition, let $U_1, U_2, ..., U_n$ be a finite collection of subsets of $X$ in $\tau$. If some $U_i = \emptyset$ then $\displaystyle{\bigcap_{i=1}^{n} U_i = \emptyset \in \tau}$. Otherwise $U_i^c$ is countable for each $i \in \{ 1, 2, ..., n \}$, and by De Morgan's Laws we have that:

(2)
\begin{align} \quad \left ( \bigcap_{i=1}^{n} U_i \right )^c = \bigcup_{i=1}^{n} U_i^c \end{align}

A finite union of countable sets is countable, so $\displaystyle{\left ( \bigcap_{i=1}^{n} U_i \right )^c}$ is countable and so $\displaystyle{\bigcap_{i=1}^{n} U_i \in \tau}$.
Therefore $(X, \tau)$ is a topological space.
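As a quick finite sanity check (an illustration added here, not part of the original proof), the two De Morgan identities used above can be verified with Python sets:

```python
# Finite check of the De Morgan identities used in the proof:
# (U_1 ∪ U_2 ∪ ...)^c = U_1^c ∩ U_2^c ∩ ...  and dually for intersections.
X = set(range(8))
sets = [{0, 1, 2}, {2, 3, 4}, {1, 4, 5, 6}]

def comp(A):
    # complement relative to the ambient set X
    return X - A

union = set().union(*sets)
inter = set.intersection(*sets)

# complement of a union equals the intersection of the complements
assert comp(union) == set.intersection(*[comp(A) for A in sets])
# complement of an intersection equals the union of the complements
assert comp(inter) == set().union(*[comp(A) for A in sets])
```

Of course this only exercises a finite example; the identities themselves hold for arbitrary index sets, which is what the proof relies on.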
One disadvantage of the fact that you have posted 5 identical answers (1, 2, 3, 4, 5) is that if other users have some comments about the website you created, they will post them in all these places. If you have some place online where you would like to receive feedback, you should probably also add a link to that. — Martin Sleziak, 1 min ago
BTW your program looks very interesting, in particular the way to enter mathematics.
One thing that seems to be missing is documentation (at least I did not find it).
This means that it is not explained anywhere: 1) How a search query is entered. 2) What the search engine actually looks for.
For example, upon entering $\frac xy$, will it also find $\frac{\alpha}{\beta}$? Or even $\alpha/\beta$? What about $\frac{x_1}{x_2}$?
*******
Is it possible to save a link to a particular search query? For example, in Google I am able to use a link such as: google.com/search?q=approach0+xyz A feature like that would be useful for posting bug reports.
When I try to click on "raw query", I get curl -v https://approach0.xyz/search/search-relay.php?q='%24%5Cfrac%7Bx%7D%7By%7D%24' But pasting the link into the browser does not do what I expected it to.
*******
If I copy-paste a search query into your search engine, it does not work. For example, if I copy $\frac xy$ and paste it, I do not get what I would expect. This means I have to type every query. The possibility to paste would be useful for long formulas. Here is what I get after pasting this particular string:
I was not able to enter integrals with bounds, such as $\int_0^1$. This is what I get instead:
One thing which we should keep in mind is that duplicates might be useful. They improve the chance that another user will find the question, since with each duplicate another copy with somewhat different phrasing of the title is added. So if you spent reasonable time by searching and did not find...
In comments and other answers it was mentioned that there are some other search engines which could be better when searching for mathematical expressions. But I think that nowadays several pages use LaTeX syntax (Wikipedia, this site, to mention just two important examples). Additionally, som...
@MartinSleziak Thank you so much for your comments and suggestions here. I have taken a brief look at your feedback; I really love it and will seriously look into those points and improve approach0. Give me just some minutes, I will answer/reply to your feedback in our chat. — Wei Zhong, 1 min ago
I still think that it would be useful if you added to your post where you want to receive feedback from math.SE users. (I suppose I was not the only person to try it.) Especially since you wrote: "I am hoping someone interested can join and form a community to push this project forward, "
BTW those animations with examples of searching look really cool.
@MartinSleziak Thanks to your advice, I have appended more information to my posted answers. Will reply to you shortly in chat. — Wei Zhong, 29 secs ago
We are an open-source project hosted on GitHub: http://github.com/approach0. Welcome to send any feedback on our GitHub issues page!
@MartinSleziak Currently it has only documentation for developers (approach0.xyz/docs); hopefully this project will accelerate its release process when people get involved. But I will list this as an important TODO before publishing approach0.xyz. By that time I hope there will be a helpful guide page for new users.
@MartinSleziak Yes, $x+y$ will find $a+b$ too; IMHO this is the very basic requirement for a math-aware search engine. Actually, approach0 will look into expression structure and symbolic alpha-equivalence too. For now, $x_1$ will not match $x$ because approach0 considers them not structurally identical, but you can use a wildcard to match $x_1$ just by entering a question mark "?" or \qvar{x} in a math formula. As for your example, entering $\frac{\qvar{x}}{\qvar{y}}$ is enough to match it.
@MartinSleziak As for the query link, it needs more explanation. Technically, the way you mentioned that Google is using is an HTTP GET method, but for mathematics a GET request may not be appropriate since a query has structure; a developer would usually use an HTTP POST request with a JSON-encoded body instead. This makes development much easier because JSON is richly structured and makes it easy to separate math keywords.
@MartinSleziak Right now there are two solutions for "query link" problem you addressed. First is to use browser back/forward button to navigate among query history.
@MartinSleziak Second is to use the command-line tool 'curl' to get search results from a particular query link (you can actually see that in the browser, but it is in the developer tools, such as the network inspection tab of Chrome). I agree it would be helpful to add a GET query link for users to refer to a query; I will write this point in the project TODO and improve this later. (It just needs some extra effort though.)
@MartinSleziak Yes, if you search \alpha, you will get all \alpha documents ranked top; different symbols such as "a", "b" are ranked after exact matches.
@MartinSleziak Approach0 plans to add a "Symbol Pad" just like what www.symbolab.com and searchonmath.com are using. This will help users input Greek symbols even if they do not remember how to spell them.
@MartinSleziak Yes, you can; Greek letters are tokenized to the same thing as normal alphabetic characters.
@MartinSleziak As for integral upper bounds, I think it is a problem in a JavaScript plugin approach0 is using; I also observe this issue. The only workaround is to use the arrow keys to move the cursor to the rightmost position and hit '^' so it goes to the upper-bound edit.
@MartinSleziak Yes, it has a threshold now, but this is easy to adjust from the source code. Most importantly, I have ONLY 1000 pages indexed, which means only 30,000 posts from Math Stack Exchange. This is a very small number, but I will index more posts/pages when search-engine efficiency and relevance are tuned.
@MartinSleziak As I mentioned, the index is too small currently. You will probably get what you want when this project develops to the next stage, which is to enlarge the index and publish.
@MartinSleziak Thank you for all your suggestions. Currently I just hope more developers get to know this project; indeed, this is my side project, and development progress can be very slow due to my time constraints. But I believe in its usefulness and will spend my spare time developing it until it is published.
So, we would not have polls like: "What is your favorite calculus textbook?" — GEdgar, 2 hours ago
@GEdgar I'd say this goes under "tools." But perhaps it could be made explicit. — quid, 1 hour ago
@quid I think that the type of question mentioned in GEdgar's comment is closer to book-recommendations, which are valid questions on the main. (Although not formulated like that.) I also think that his comment was tongue-in-cheek. (Although it is a bit more difficult for me to detect sarcasm, as I am not a native speaker.) — Martin Sleziak, 57 mins ago
"What is your favorite calculus textbook?" is opinion-based and/or too broad for main. If at all, it is a "poll." On tex.se they have polls about favorite editor/distro/fonts etc., while actual questions on these are still on-topic on main. Beyond that, it is not clear why a question about which software one uses should be a valid poll while the question which book one uses is not. — quid, 7 mins ago
@quid I will reply here, since I do not want to digress in the comments too much from the topic of that question.
Certainly I agree that "What is your favorite calculus textbook?" would not be suitable for the main. Which is why I wrote in my comment: "Although not formulated like that".
Book recommendations are certainly accepted on the main site, if they are formulated in the proper way.
If there is a community poll and somebody suggests the question from GEdgar's comment, I will be perfectly OK with it. But I thought that his comment was simply a playful remark pointing out that there are plenty of "polls" of this type on the main (although there should not be). I guess some examples can be found here or here.
Perhaps it is better to link search results directly on MSE here and here, since in the Google search results it is not immediately visible that many of those questions are closed.
Of course, I might be wrong - it is possible that GEdgar's comment was meant seriously.
I saw this kind of poll for the first time on TeX.SE. The poll there concentrated on the TeXnical side of things. If you look at the questions there, they are asking about TeX distributions, packages, tools used for graphs and diagrams, etc.
Academia.SE has some questions which could be classified as "demographic" (including gender).
@quid From what I heard, it stands for Kašpar, Melichar and Baltazár, as the answer there says. In Slovakia you would see G+M+B, where G stands for Gašpar.
But that is only anecdotal.
And if I am to believe Slovak Wikipedia it should be Christus mansionem benedicat.
From the Wikipedia article: "Nad dvere kňaz píše C+M+B (Christus mansionem benedicat - Kristus nech žehná tento dom). Toto sa však často chybne vysvetľuje ako 20-G+M+B-16 podľa začiatočných písmen údajných mien troch kráľov."
My attempt at an English translation: The priest writes on the door C+M+B (Christus mansionem benedicat - Let the Christ bless this house). A mistaken explanation is often given that it is G+M+B, following the names of the three wise men.
As you can see there, Christus mansionem benedicat is translated to Slovak as "Kristus nech žehná tento dom". In Czech it would be "Kristus ať žehná tomuto domu" (I believe). So K+M+B cannot come from initial letters of the translation.
It seems that they have also other interpretations in Poland.
"A tradition in Poland and German-speaking Catholic areas is the writing of the three kings' initials (C+M+B or C M B, or K+M+B in those areas where Caspar is spelled Kaspar) above the main door of Catholic homes in chalk. This is a new year's blessing for the occupants, and the initials are also believed to stand for "Christus mansionem benedicat" ("May/Let Christ Bless This House"). Depending on the city or town, this will happen sometime between Christmas and the Epiphany, with most municipalities celebrating closer to the Epiphany."
BTW in the village where I come from the priest writes those letters on houses every year during Christmas. I do not remember seeing them on a church, as in Najib's question.
In Germany, the Czech Republic and Austria the Epiphany singing is performed at or close to Epiphany (January 6) and has developed into a nationwide custom, where the children of both sexes call on every door and are given sweets and money for charity projects of Caritas, Kindermissionswerk or Dreikönigsaktion[2] - mostly in aid of poorer children in other countries.[3]
A tradition in most of Central Europe involves writing a blessing above the main door of the home. For instance if the year is 2014, it would be "20 * C + M + B + 14". The initials refer to the Latin phrase "Christus mansionem benedicat" (= May Christ bless this house); folkloristically they are often interpreted as the names of the Three Wise Men (Caspar, Melchior, Balthasar).
In Catholic parts of Germany and in Austria, this is done by the Sternsinger (literally "Star singers"). After having sung their songs, recited a poem, and collected donations for children in poorer parts of the world, they will chalk the blessing on the top of the door frame or place a sticker with the blessing.
On Slovakia specifically it says there:
The biggest carol singing campaign in Slovakia is Dobrá Novina (English: "Good News"). It is also one of the biggest charity campaigns by young people in the country. Dobrá Novina is organized by the youth organization eRko.
OFDM belongs to the class of multicarrier modulation schemes. OFDM decomposes the transmission frequency band into a group of narrower contiguous subbands (carriers), and each carrier is individually modulated. You can implement this type of modulation with an inverse fast Fourier transform (IFFT). By using narrow orthogonal subcarriers, the OFDM signal gains robustness over a frequency-selective fading channel and eliminates adjacent subcarrier crosstalk.
At the receiving end, you can demodulate the OFDM signal with a fast Fourier transform (FFT) and equalize it with a complex gain at each subcarrier. Combining OFDM with MIMO can improve communication speed without increasing the frequency band.
In the figure above, the waveforms of single-carrier modulation and multicarrier modulation are represented in the frequency domain (top) and the time domain (bottom). Since multiple data streams can be transmitted simultaneously on multiple carriers, OFDM is not influenced by noise to the same degree as single-carrier modulation. That's because the time per symbol can be lengthened by a factor of the number of carriers.
The Principles of OFDM
An OFDM signal aggregates the information in orthogonal single-carrier frequency-domain waveforms into a time-domain waveform that can be transmitted over the air. The subcarriers use QPSK or QAM as the primary modulation method.
The inverse discrete Fourier transform equation for this is:
$$f(x) = { 1 \over N} \sum_{t=0}^{N-1} F(t) e^{i \frac{2 \pi xt}{N}} $$
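As a minimal, standard-agnostic sketch (my own illustration; the parameters are arbitrary), the equation above can be exercised with NumPy: QPSK symbols are placed on N subcarriers, modulated with an IFFT, and recovered with an FFT over an ideal channel:

```python
import numpy as np

# Illustrative baseband OFDM round trip (not tied to any standard):
# map bits to QPSK, place one symbol per subcarrier, modulate with an
# IFFT, and demodulate with an FFT at the receiver.
rng = np.random.default_rng(0)
N = 64                                   # number of subcarriers
bits = rng.integers(0, 2, size=2 * N)    # 2 bits per QPSK symbol

# QPSK mapping: (b0, b1) -> (1-2*b0) + 1j*(1-2*b1), unit average power
qpsk = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

tx = np.fft.ifft(qpsk, N)                # time-domain OFDM symbol
rx = np.fft.fft(tx, N)                   # receiver: back to subcarriers

demod_bits = np.empty(2 * N, dtype=int)
demod_bits[0::2] = (rx.real < 0).astype(int)
demod_bits[1::2] = (rx.imag < 0).astype(int)

assert np.array_equal(bits, demod_bits)  # ideal channel: no bit errors
```

With a real channel, equalization and a cyclic prefix would be inserted between the IFFT and FFT stages, as described in the following sections.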
In OFDM, the subcarriers are spaced at intervals of 1/symbol time, so that when the amplitude of one subcarrier reaches its maximum, the amplitude of every other subcarrier is zero, thereby preventing interference between the subcarriers.
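This orthogonality can be checked numerically. In the sketch below (an illustration with arbitrary parameters), sampled complex exponentials spaced one FFT bin apart, i.e. at the 1/symbol-time interval, have pairwise zero inner product over one symbol period:

```python
import numpy as np

# Numeric check of subcarrier orthogonality: complex exponentials at
# frequencies k/N (k = 0..N-1) sampled over one symbol period form an
# orthonormal family, so their Gram matrix is the identity.
N = 64
n = np.arange(N)
carriers = [np.exp(2j * np.pi * k * n / N) for k in range(N)]

gram = np.array([[np.vdot(a, b) / N for b in carriers] for a in carriers])
assert np.allclose(gram, np.eye(N))      # orthonormal over one symbol
```

Any other spacing would leave nonzero off-diagonal entries, which is exactly the intercarrier interference that the 1/symbol-time spacing avoids.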
Moreover, OFDM as a multicarrier transmission is effective in multipath environments because the influence of multipath is concentrated on specific subcarriers, whereas in a single-carrier transmission the multipath affects the whole band.
The arrival-time difference between the direct wave and the reflected wave increases when the signal is transmitted over a long range. In that situation, more subcarriers are used than in a smaller service range.
OFDM Technology in 5G Systems
During the specification of the 5G standard, various technologies based on OFDM were considered. CP-OFDM (cyclic prefix OFDM) is used in LTE and was also selected for the 3GPP Release 15 standard. This technique adds a guard signal called a cyclic prefix to the beginning of the OFDM symbol: the data from a certain period at the trailing end of the OFDM symbol is copied and inserted at the beginning of the symbol as the cyclic prefix. CP-OFDM thereby suppresses intersymbol interference (ISI) and intercarrier interference (ICI).
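A sketch of why the cyclic prefix enables one-tap equalization (my own illustration with arbitrary parameters): prepending the symbol's tail turns the channel's linear convolution into a circular one, so each subcarrier sees a single complex gain H[k] that can be divided out:

```python
import numpy as np

# Cyclic-prefix OFDM over a 3-tap multipath channel. After the CP is
# stripped, the received block equals the circular convolution of the
# transmitted symbol with the channel, so the FFT diagonalizes the
# channel and each subcarrier is equalized by one complex division.
rng = np.random.default_rng(1)
N, cp = 64, 16
h = np.array([1.0, 0.5, 0.25])           # channel taps (length <= cp)

sym = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
tx = np.fft.ifft(sym, N)
tx_cp = np.concatenate([tx[-cp:], tx])   # cyclic prefix: tail copied to front

rx = np.convolve(tx_cp, h)[:cp + N]      # linear convolution with channel
rx = rx[cp:]                             # strip CP: what remains is circular

H = np.fft.fft(h, N)                     # per-subcarrier complex gains
eq = np.fft.fft(rx, N) / H               # one-tap frequency-domain equalizer
assert np.allclose(eq, sym)              # symbols recovered exactly
```

The only requirement is that the CP be at least as long as the channel's delay spread; otherwise the tail of the previous symbol leaks into the current one (ISI) and the circular-convolution identity breaks.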
Pros and Cons of OFDM

Advantages of OFDM
Multiple users can be assigned to OFDM subcarriers. The spectrum is used efficiently because the subcarriers are orthogonal (spaced at intervals of 1/symbol time). OFDM is resistant to transmission distortion due to multipath, making demodulation possible with error correction and without a complicated equalizer.
Disadvantages of OFDM
Because the amplitude of the signal varies widely, OFDM has a high peak-to-average power ratio (PAPR): the amplifier must either be operated with a transmit power well below its maximum or be designed with a wide dynamic range. In addition, when the carrier spacing is narrow, OFDM becomes less robust against Doppler shift.
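The PAPR penalty can be made concrete with a short simulation (illustrative parameters; exact figures depend on the waveform): constant-envelope QPSK has 0 dB PAPR, while an OFDM symbol built from the same constellation typically shows a PAPR of several dB:

```python
import numpy as np

# Compare the peak-to-average power ratio of a single-carrier QPSK
# burst with that of an OFDM symbol carrying the same constellation.
# OFDM sums many independent subcarriers, so its envelope fluctuates
# far more than the constant-modulus single-carrier signal.
rng = np.random.default_rng(2)
N = 256
qpsk = 1 - 2 * rng.integers(0, 2, (2, N))
symbols = (qpsk[0] + 1j * qpsk[1]) / np.sqrt(2)   # unit-power QPSK

def papr_db(x):
    # ratio of peak instantaneous power to average power, in dB
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

single_carrier = symbols                      # constant envelope: 0 dB
ofdm = np.fft.ifft(symbols, N) * np.sqrt(N)   # scaled to unit average power

print(f"single-carrier PAPR: {papr_db(single_carrier):.1f} dB")
print(f"OFDM PAPR:           {papr_db(ofdm):.1f} dB")
```

The multi-dB gap is what forces the amplifier back-off (or wide dynamic range) described above; PAPR-reduction techniques such as clipping or tone reservation trade distortion for a smaller back-off.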
OFDM Using MATLAB
MATLAB® and related toolboxes, including Communications Toolbox™, WLAN Toolbox™, LTE Toolbox™, and 5G Toolbox™, provide functions to implement, analyze, and test OFDM waveforms and perform link simulation. The toolboxes also provide end-to-end transmitter/receiver system models with configurable parameters and wireless channel models to help evaluate the wireless systems that use OFDM waveforms. Specifically, as a part of wireless communication system design, you can use these OFDM capabilities to analyze link performance, robustness, system architecture options, channel effects, channel estimation, channel equalization, signal synchronization, and subcarrier modulation selections.
MATLAB functions and Simulink® blocks for OFDM modulation provide adjustable parameters such as training signals, pilot signals, zero padding, cyclic prefix, and FFT length.
It is also possible to generate and analyze standard-compliant and custom OFDM waveforms over the air by using the Wireless Waveform Generator app in Communications Toolbox with Instrument Control Toolbox™ to connect MATLAB to RF test and measurement instruments.
Leader Election Using Loneliness Detection

Abstract
We consider the problem of leader election (LE) in single-hop radio networks with synchronized time slots for transmitting and receiving messages. We assume that the actual number \(n\) of processes is unknown, while the size \(u\) of the ID space is known, but possibly much larger. We consider two types of collision detection: strong (SCD), whereby all processes detect collisions, and weak (WCD), whereby only non-transmitting processes detect collisions.
We introduce loneliness detection (LD) as a key subproblem for solving LE in WCD systems. LD informs all processes whether the system contains exactly one process or more than one. We show that LD captures the difference in power between SCD and WCD, by providing an implementation of SCD over WCD and LD. We present two algorithms that solve deterministic and probabilistic LD in WCD systems with time costs of \(\mathcal{O}(\log \frac{u}{n})\) and \(\mathcal{O}(\min( \log \frac{u}{n}, \frac{\log (1/\epsilon)}{n}))\), respectively, where \(\epsilon\) is the error probability. We also provide matching lower bounds.
We present two algorithms that solve deterministic and probabilistic LE in SCD systems with time costs of \(\mathcal{O}(\log u)\) and \(\mathcal{O}(\min ( \log u, \log \log n + \log (\frac{1}{\epsilon})))\), respectively, where \(\epsilon\) is the error probability. We provide matching lower bounds.
Alpha Decay of Np235
Gindler, J. E.; Engelkemeir, D. W. 1960-01-01
Alpha-decay valley of 106 Te
Poenaru, D.; Gherghescu, R.; Greiner, W. 1998-01-01
Alpha decay of fission isomers
Borstnik, N. M. 1999-01-01
Alpha Decay of Nonaxial Nuclei
Rafiqullah, A. K. 1962-01-01
Alpha-decay properties of 261 Bh
Kojouharov, I.; Heßberger, F.; Hofmann, S. et al. 2010-01-01
Decay studies of 288–287115 alpha-decay chains
Sushil Kumar; Shagun Thakur; Rajesh Kumar 2009-01-01
Alpha decay of Europium isotopes
Tavares, O A P; Medeiros, E L 2007-01-01
Alpha Decay of Spheroidal Nuclei
Rasmussen, John O.; Segall, Benjamin 1956-01-01
Alpha Decay to Vibrational States
Bjørnholm, S.; Lederer, M.; Asaro, F. et al. 1963-01-01
Alpha decay of U231 to levels in Th227
Liang, C. F.; Sheline, R. K.; Paris, P. et al. 1994-01-01
Alpha decay as a fission-like process
M Ivascu; A Sandulescu; D N Poenaru 1999-01-01
Alpha-Decay Energies of Polonium Isotopes
Karraker, D. G.; Ghiorso, A.; Templeton, D. H. 1951-01-01
Alpha-Decay Theory and a Surface Well Potential
Winslow, George H. 1954-01-01
Alpha-decay half-life of 221 Fr in different environments
Fynbo, H.; Fraile, L.; Jeppesen, H. et al. 2007-01-01
Alpha decay as a test of the elastic channel wavefunction
Fliessbach, T. 1999-01-01
Alpha decay, cluster decay and spontaneous fission in 294–326122 isotopes
K P Santhosh; R K Biju 2008-01-01
Theory of Alpha Decay. I
Devaney, Joseph J. 1953-01-01
Alpha decay studies of Pa-230, Pa-228, Pa-226 and their descendants
Maccoy, Jerome D. 1964-01-01
Alpha-decay properties of Fr205,206,207,208: Identification of Frm206
Ritchie, B. G.; Toth, K. S.; Carter, H. K. et al. 1981-01-01
Exotic decay model and alpha decay studies
Shanmugam, G.; Kamalaharan, B. 1990-01-01
Alpha-decay lifetimes semiempirical relationship including shell effects
Gherghescu, R. A.; Poenaru, D. N.; Carjan, N. 2007-01-01
Alpha decay of Pb186 and Hg184: The influence of mixing of 0+ states on α-decay transition probabilities
Wauters, J.; Bijnens, N.; Folger, H. et al. 1994-01-01
Alpha decay of 221 Rn and 217Po; level structure of 217 Po and the 213 Pb ground state
Liang, C.; Paris, P.; Alexa, P. et al. 2004-01-01
Alpha-Decay Properties of Some Erbium Isotopes near the 82-Neutron Closed Shell
Macfarlane, Ronald D.; Griffioen, Roger D. 1963-01-01
Alpha Decay Properties of some Holmium Isotopes near the 82-Neutron Closed Shell
Alpha-Decay Properties of Some Francium Isotopes Near the 126-Neutron Closed Shell
Griffioen, Roger D.; Macfarlane, Ronald D. 1964-01-01
Alpha-Decay Properties of Some Lutetium and Hafnium Isotopes Near the 82-Neutron Closed Shell
Macfarlane, Ronald D. 1965-01-01
Alpha-Decay Barrier Penetrabilities with an Exponential Nuclear Potential: Even-Even Nuclei
Rasmussen, John O. 1959-01-01
Alpha-Decay Properties of Some Thulium and Ytterbium Isotopes Near the 82-Neutron Closed Shell
Macfarlane, Ronald D. 1964-01-01
Alpha-decay damage and recrystallization in zircon: evidence for an intermediate state from infrared spectroscopy
J Schlüter; P Leggo; I Farnan et al. 2000-01-01
Alpha decay and nuclear deformation: the case for favoured alpha transitions of even-even emitters*
Unknown 2000-01-01
Microscopic description of the anisotropy in alpha decay
Delion, D. S.; Insolia, A.; Liotta, R. J. 1994-01-01
N15 cluster states in triton transfer and their alpha decay
Liendo, J. A.; Fletcher, N. R.; Caussyn, D. D. et al. 1994-01-01
Linear Polarization Measurements Of Gamma Rays Following Alpha Decay
Moore, E. F.; Teh, K.; Jones, G. D. et al. 2005-01-01
Analysis of nuclear materials by energy dispersive X-ray fluorescence and spectral effects of alpha decay
Worley, Christopher 2009-01-01
Odd-proton alpha-decay systematics and nuclear structure just beyond 208 Pb
Liang, C.; Sheline, R.; Paris, P. 2002-01-01
Study of the Alpha-Decay Chain for 194Rn with Relativistic Mean-Field Theory
Sheng Zong-Qiang; Guo Jian-You 2008-01-01
Decay Times, Fluorescent Efficiencies, and Energy Storage Properties for Various Substances with Gamma-Ray or Alpha-Particle Excitation
Bittman, Loran; Furst, Milton; Kallmann, Hartmut 1952-01-01
Systematics of alpha-decay half-life: new evaluations for alpha-emitter nuclides
Tavares, O A P; Rodrigues, M M N; Medeiros, E L et al. 2006-01-01
Consistency of Nuclear Radii of Even-Even Nuclei from Alpha-Decay Theory
Perlman, I.; Ypsilantis, T. J. 1950-01-01
Antisymmetrisation in alpha decay and alpha transfer
M Rhoades-Brown; D F Jackson 1999-01-01
Theoretical Studies of the Alpha Decay of U233
Chasman, R. R.; Rasmussen, J. O. 1959-01-01
Electron Capture and Alpha Decay of Np235
Hoff, Richard W.; Olsen, James L.; Mann, Lloyd G. 1956-01-01
Alpha-gamma decay studies of 255 No
Kindler, B.; Yeremin, A.; Kojouharov, I. et al. 2006-01-01
Gamma-rays emitted in the alpha-decay of 245 Cm
Moody, K. 2009-01-01
A study of 223Th levels populated by alpha -decay
A M Y El-Lawindy; O Naviliat-Cuncic; D L Watson et al. 1999-01-01
Alpha-gamma decay studies of 255 Rf, 251 No and 247 Fm
Streicher, B.; Leino, M.; Yeremin, A. et al. 2006-01-01
Numerical Solutions of the Curium-242 Alpha-Decay Wave Equation
Rasmussen, John O.; Hansen, Eldon R. 1958-01-01
Alpha-Particle and Gamma-Ray Spectra of the U230 Decay Series
Asaro, Frank; Perlman, I. 1956-01-01
Alpha radioactivity above Sn100 including the decay of I108
Page, R. D.; Woods, P. J.; Cunningham, R. A. et al. 1994-01-01
Cryogenic measurement of alpha decay in a 4π absorber
Sang-Jun Lee; Min Kyu Lee; Yong Sic Jang et al. 2010-01-01
Effective liquid drop description for alpha decay of atomic nuclei
O Rodríguez; F García; O A P Tavares et al. 1999-01-01
Microscopic description of alpha decay of deformed nuclei
Insolia, A.; Curutchet, P.; Liotta, R. J. et al. 1991-01-01
The systematic study of spontaneous fission versus alpha decay of superheavy nuclei
K P Santhosh; R K Biju; Sabina Sahadevan 2009-01-01
Identity between the widths from alpha decay and alpha elastic scattering
Marquez, Luis 1982-01-01
Directional correlation studies of alpha decay, hyperfine interaction and internal conversion
Falk, Fredrik 1970-01-01
Role of configuration mixing on absolute alpha-decay width in Po isotopes
Janouch, F. A.; Liotta, R. J. 1982-01-01
A Mean-Field Approach to Alpha Decay Hindrance Factors
Wyss, R.; Karlgren, D. 2005-01-01
Deformation and alpha-decay anisotropies
G D Jones; M W Kermode; N Rowley 1999-01-01
Properties of $\alpha$-decay to ground and excited states of heavy nuclei
Wang, Y.; Peng, B.; Gu, J. et al. 2010-01-01
A model of nonlocal potential and alpha decay of deformed even nuclei
Chaudhury, M. L. 2001-01-01
Beta decay of N18 to alpha particle emitting states in O18 and a proposed search for parity violation in O18
Zhao, Z.; Gai, M.; Lund, B. J. et al. 1989-01-01
The analysis of predictability of $\alpha$-decay half-life formulae and the $\alpha$ partial half-lives of some exotic nuclei
Dasgupta-Schubert, N.; Tamez, V.; Reyes, M. 2009-01-01
Alpha-decay characteristics of neutron-deficient 190, 192Po nuclei and alpha branching ratios of 186, 188Pb isotopes
Unknown 1999-01-01
Pressure-dependent decay of the 2p5-3p configuration in neon excited by alpha particles
P Lindblom; K Aho; O Solin et al. 1999-01-01
Angular correlations and widths for alpha-particle decay in the reaction Li7(12C,15N*→α+11Bg.s.)α
Liendo, J. A.; Fletcher, N. R.; Towers, E. E. et al. 1995-01-01
Determination of the Dalitz plot parameter $ \alpha$ for the decay $ \eta \rightarrow 3\pi^{0}$ with the Crystal Ball at MAMI-B
Lugert, S.; Braghieri, A.; Heid, E. et al. 2009-01-01
The beta -delayed alpha spectrum of 16N and the low-energy extrapolation of the 12C( alpha , gamma )16O cross section
J M D'Auria; A Chen; K P Jackson et al. 1999-01-01
The Complex Alpha-Spectra of Am241 and Cm242
Asaro, Frank; Reynolds, F. L.; Perlman, I. 1952-01-01
The decay of the pair correlation function in simple fluids: long- versus short-ranged potentials
D C Hoyle; R J F Leote de Carvalho; J R Henderson et al. 1999-01-01
The Alpha Spectra of Cm242, Cm243, and Cm244
Asaro, Frank; Thompson, S. G.; Perlman, I. 1953-01-01
Systematics of Alpha-Radioactivity
Perlman, I.; Ghiorso, A.; Seaborg, G. T. 1950-01-01
Stability of 244–260 Fm isotopes against alpha and cluster radioactivity
Biju, R.; Santhosh, K.; Sahadevan, Sabina 2009-01-01
Relativistic mean field study of the newly discovered α-decay chain of 287115
Unknown 2004-01-01
Reflection asymmetry in the structure and spectroscopy of 224 Ac
Sheline, Raymond; Liang, C.; Kvasil, J. et al. 2004-01-01
Polonium–lead extractions to determine the best method for the quantification of clean lead used in low-background radiation detectors
Finn, Erin; Schulte, S.; Miley, S. et al. 2009-01-01
On the modification of methods of nuclear chronometry in astrophysics and geophysics
Dolinska, Marina; Olkhovsky, Vladislav 2010-01-01
L-shell auto-ionization in the decay chain 210Bi-210Po-206Pb
R D Scott; R Wellum 1999-01-01
Study of P-parity nonconservation for [gamma]-quanta with E[gamma] = 0.478 MeV in the exit channel of the reaction 10B(n,[alpha])7Li* - [gamma](M1) - 7Li
Vesna, V. A. 1996-01-01
Identification of the 109 Xe and 105 Te α-decay chain
Hamilton, J. H.; Grzywacz, R.; Liddick, S. N. et al. 2007-01-01
Half-life predictions for decay modes of superheavy nuclei
General energy decay rates for a weakly damped Timoshenko system
Mustafa, M.; Messaoudi, S. 2010-01-01
Enrico Fermi's Discovery of Neutron-Induced Artificial Radioactivity: The Influence of His Theory of Beta Decay
Robotti, Nadia; Guerra, Francesco 2009-01-01
Correlations between the alpha particles and ejectiles in the 208 MeV N14 on Nb93 reaction at three different ejectile angles
Fukuda, T.; Ishihara, M.; Tanaka, M. et al. 1983-01-01
Beta Decay of Li8
Alburger, D. E.; Donovan, P. F.; Wilkinson, D. H. 1963-01-01
Benford's law and half-lives of unstable nuclei
Ni, Dongdong; Ren, Zhongzhou 2008-01-01
Anisotropy of favoured alpha transitions producing even-even deformed nuclei
Tavares, O. 1997-01-01
Angular analysis of bremsstrahlung in α-decay
Olkhovsky, V.; Maydanyuk, S. 2006-01-01
Analysis of Nuclear Material by Alpha Spectroscopy with a Transition-Edge Microcalorimeter
Rabin, M.; Beall, J.; Dry, D. et al. 2008-01-01
Analysis of Long-Range Alpha-Emission Data
Griffioen, R. D.; Rasmussen, J. O. 1961-01-01
A Study of the Alpha-Particles from Po with a Cyclotron-Magnet Alpha-Ray Spectrograph
Chang, W. Y. 1946-01-01
Decay rates of Volterra equations on ℝ^N
Gatti, Stefania; Conti, Monica; Pata, Vittorino 2007-01-01
Decay properties of neutron-deficient isotopes of elements from Z = 101 to Z = 108
Hofmann, S.; Streicher, B.; Šáro, Š. et al. 2009-01-01
Decay of Be9* (2.43-Mev State)
Henley, E. M.; Kunz, P. D. 1960-01-01
Decay Properties of Pu235, Pu237, and a New Isotope Pu233
Thomas, T. Darrah; Vandenbosch, Robert; Glass, Richard A. et al. 1957-01-01
Alpha-particle transfer from 6Li to 28Si leading to high excitation of 32S
Khlebnikov, S V; Belov, S E; Chengbo, Li et al. 2006-01-01
Alpha-Radioactivity in the 82-Neutron Region
Rasmussen Jr., J. O.; Thompson, S. G.; Ghiorso, A. 1953-01-01
Alpha-Particle Emission in the Decays of B12 and N12
Wilkinson, D. H.; Alburger, D. E.; Gallmann, A. et al. 1963-01-01
Alpha-Helium Method for Determining Geological Ages
Evans, Robley D.; Goodman, Clark 1944-01-01
Alpha-Alpha Angular Correlations in B11(p, αα)He4
Geer, E. H.; Nelson, E. B.; Wolicki, E. A. 1955-01-01
The vector space $ \mathbb{V}$ is introduced in Simple tensor algebra I, where several rules are provided. To keep things concrete, we represent the vectors $ \boldsymbol{x}, \boldsymbol{y}$ by the scalar coefficients $ \{{x_1},{x_2},{x_3} \}$ and $ \{{y_1},{y_2},{y_3} \}$ in $ \mathbb{E}^3$, respectively. The composition of two vectors $ \boldsymbol{x}, \boldsymbol{y} \in\mathbb{E}^3$ is expanded as follows.
Dot product ($ \cdot $):
$${\boldsymbol{x}} \cdot {\boldsymbol{y}} = {x_i}{{\boldsymbol{g}}^i} \cdot {y_j}{{\boldsymbol{g}}^j} = {x_i}{y_j}{{\boldsymbol{g}}^i} \cdot {{\boldsymbol{g}}^j} = {x_i}{y_j}{g^{ij}} = {x^i}{y_i} = \left[ {\begin{array}{*{20}{c}}
{{x_1}}&{{x_2}}&{{x_3}} \end{array}} \right]\left[ {\begin{array}{*{20}{c}} {{y_1}}\\ {{y_2}}\\ {{y_3}} \end{array}} \right] = \sum\limits_{i = 1}^3 {{x_i}} {y_i}$$
The dot product is also called the scalar product, or the inner product under Cartesian coordinates; it encodes the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them.
Cross product ($ \times$):
$${\boldsymbol{x}} \times {\boldsymbol{y}} = {x_i}{{\boldsymbol{g}}^i} \times {y_j}{{\boldsymbol{g}}^j} = {x_i}{y_j}{{\boldsymbol{g}}^i} \times {{\boldsymbol{g}}^j} = {x_i}{y_j}{\varepsilon ^{ijk}}g{{\boldsymbol{g}}_k} = \left| {\begin{array}{*{20}{c}}
{{x_1}}&{{x_2}}&{{x_3}}\\ {{y_1}}&{{y_2}}&{{y_3}}\\ {{{\boldsymbol{e}}_1}}&{{{\boldsymbol{e}}_2}}&{{{\boldsymbol{e}}_3}} \end{array}} \right|$$
The cross product is also called the vector product; its magnitude equals the area of the parallelogram with the two vectors as sides. Now, a new composition of two vectors $ \boldsymbol{x}, \boldsymbol{y} \in\mathbb{E}^3$ is introduced here.
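As an illustrative check (not part of the original post; the helper names are ad hoc), the two component formulas above can be evaluated directly for Cartesian components:

```python
# Illustrative helpers for the component formulas above (ad hoc names).
def dot(x, y):
    # sum_i x_i y_i, assuming an orthonormal Cartesian basis
    return sum(xi * yi for xi, yi in zip(x, y))

def cross(x, y):
    # cofactor expansion of the determinant along the basis-vector row
    return [x[1] * y[2] - x[2] * y[1],
            x[2] * y[0] - x[0] * y[2],
            x[0] * y[1] - x[1] * y[0]]

x, y = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]  # e_1 and e_2
print(dot(x, y))    # 0.0, since the vectors are orthogonal
print(cross(x, y))  # [0.0, 0.0, 1.0], i.e. e_3, unit parallelogram area
```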
This is not a tutorial; please refer to textbooks on tensor algebra for details.
Path Connectedness of Arbitrary Topological Products
One very nice property of an arbitrary collection of path connected topological spaces is that the resulting topological product is also path connected as we will prove in the following theorem.
Theorem 1: Let $\{ X_i \}_{i \in I}$ be an arbitrary collection of path connected topological spaces. Then the topological product $\displaystyle{\prod_{i \in I} X_i}$ is path connected. Proof:For each $j \in I$, let $\displaystyle{p_j : \prod_{i \in I} X_i \to X_j}$ denote the projection maps defined for all $\displaystyle{(x_i)_{i \in I} \in \prod_{i \in I} X_i}$ by:
\begin{align} \quad p_j((x_i)_{i \in I}) = x_j \end{align}
Now, let $\displaystyle{(x_i)_{i \in I}, (y_i)_{i \in I} \in \prod_{i \in I} X_i}$. Since each $X_j$ is path connected, there exist paths $\alpha_j : [0, 1] \to X_j$ such that $\alpha_j(0) = p_j((x_i)_{i \in I}) = x_j$ and $\alpha_j(1) = p_j((y_i)_{i \in I}) = y_j$. We now define a path $\displaystyle{\alpha : [0, 1] \to \prod_{i \in I} X_i}$ for all $x \in [0, 1]$ by:
\begin{align} \quad \alpha(x) = (\alpha_j(x))_{j \in I} \end{align}
Note that $\alpha$ is continuous since $p_j \circ \alpha = \alpha_j$ is continuous for each $j \in I$, by the universal property of the product topology. Moreover we see that:
\begin{align} \quad \alpha(0) = (\alpha_j(0))_{j \in I} = (x_j)_{j \in I} = (x_i)_{i \in I} \end{align}
\begin{align} \quad \alpha(1) = (\alpha_j(1))_{j \in I} = (y_j)_{j \in I} = (y_i)_{i \in I} \end{align}
So $\displaystyle{\prod_{i \in I} X_i}$ is path connected. $\blacksquare$ |
What is a ramjet? Was it used on the SR-71 Blackbird?
Aviation Stack Exchange is a question and answer site for aircraft pilots, mechanics, and enthusiasts.
A jet engine compresses air, heats it by mixing it with fuel and burns it, and lets the heated air escape at the end, where it accelerates to more than its initial speed in a convergent-divergent nozzle because the density of the heated gas is lower, thus needing a higher volume at the same pressure.
By converting the kinetic energy of the flow into pressure (potential energy), the intake creates high-pressure air to feed the engine. This is called pressure recovery, and it increases with the square of flow speed. The calculation below assumes a pressure ratio of 1 at Mach 0.5, which is on the high side for the flow speed near the compressor face in a jet engine intake.
Note that in static conditions the air needs to be accelerated, so the intake pressure is only 84% of ambient pressure, and at Mach 0.85, the maximum speed of airliners, the intake pressure is 1.37 times the ambient pressure. But at supersonic speed things really take off: pressure recovery for the Concorde was already 6 at Mach 2.0, and for the SR-71 it was 40 at Mach 3.2. If you want a more mathematical approach, the equation for isentropic compression gives: $$p_0 = p_{\infty}\cdot\frac{(1.2\cdot Ma^2)^{3.5}}{\left(1+\frac{5}{6}\cdot(Ma^2-1)\right)^{2.5}}$$ The odd exponents have to do with the ratio of specific heats $\kappa$ of air: 3.5 is actually $\frac{\kappa}{\kappa-1}$ and 2.5 is $\frac{1}{\kappa-1}$. Real compression ratios are slightly below those of the ideal isentropic compression due to friction, but not by much.
The exact equation used for the plot above is produced by calculating the ratio to the intake Mach number directly, this time with $\kappa$ = 1.405:$$\frac{p_{intake}}{p_{\infty}} = \left(0.2025\cdot Ma^2 \cdot\left(1-\left(\frac{Ma_{intake}}{Ma_{\infty}}\right)^2\right) + 1\right)^{3.469}$$
Thus, you already get the compression ratio of a J-47, an early turbojet engine, at Mach 2, and that of a GE90, a modern turbofan engine, at Mach 3.2. Beyond that, it does not make much sense to complicate the engine with turbomachinery: just let the ram pressure give you the compression for thrust generation. You need, however, to speed up the vehicle by other means first, because the possible thrust is proportional to the pressure recovery, or the square of airspeed. No speed, no thrust!
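As a rough numerical sketch (not from the original answer; the function names are ad hoc), the plot equation quoted above with κ = 1.405 reproduces the figures given for the static case and for Mach 0.85:

```python
KAPPA = 1.405  # ratio of specific heats used for the quoted plot equation

def pressure_recovery(ma, ma_intake=0.5):
    # p_intake / p_ambient when slowing the flow from Mach `ma` down to
    # `ma_intake` at the compressor face (the plot equation above)
    k = (KAPPA - 1.0) / 2.0    # ~0.2025
    e = KAPPA / (KAPPA - 1.0)  # ~3.469
    return (k * ma**2 * (1.0 - (ma_intake / ma)**2) + 1.0) ** e

def static_ratio(ma_intake=0.5):
    # static case: the air must be accelerated from rest to `ma_intake`,
    # so the intake pressure falls below ambient
    k = (KAPPA - 1.0) / 2.0
    e = KAPPA / (KAPPA - 1.0)
    return 1.0 / (k * ma_intake**2 + 1.0) ** e

print(round(static_ratio(), 2))           # 0.84 -> "84% of ambient"
print(round(pressure_recovery(0.85), 2))  # 1.37 at typical airliner speed
```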
You might have read claims that the J-58 of the SR-71 was a ramjet. This is only half true. Below Mach 2 it worked as a regular turbojet, but it had bypass tubes which ducted some air from the fourth stage of the compressor around the later compressor stages, the combustion chambers and the turbine, directly into the afterburner. That air was compressed only in the intake and fed straight to a combustion area and through a convergent-divergent nozzle, so this part worked like a ramjet. The rest of the air still passed through the core engine to keep it running.
A better example of a ramjet-powered aircraft is the Lockheed D-21 reconnaissance drone, which used a Marquardt RJ-43 ramjet for propulsion. Its cruise speed was Mach 3.7, back 50 years ago!
Note that the same trick which makes a ramjet possible can be used to reduce cooling drag on high-speed piston aircraft. A well-designed cooling duct slows and compresses incoming air, then heats it by letting it flow through a radiator. The heated air has a higher exit speed, resulting in jet thrust which can compensate for cooling drag at higher speeds. The Republic XF-12, a much underappreciated design, made exemplary use of this technique.
The ramjet is conceptually the simplest jet engine. It is a duct in which fuel is burned, creating a hot jet that provides thrust; it is known as an aerothermodynamic duct, since it is no more than a duct in which a thermodynamic cycle is performed. Ramjets are no longer used for airplanes, as they cannot provide thrust at zero airspeed (there is no compressor behind the diffuser) and modern turbofans are much more efficient. A ramjet is composed of a diffuser, a burner and a nozzle: it uses dynamic compression of ram air in its inlet, and the hot jet then expands in a convergent-divergent nozzle, as it is supersonic.
Airplanes like the Blackbird used ramjet-like operation to reach higher Mach numbers, engaging it only once they were already supersonic.
But the short answer:
A ramjet, sometimes referred to as a flying stovepipe or an athodyd (an abbreviation of aero thermodynamic duct), is a form of airbreathing jet engine that uses the engine's forward motion to compress incoming air without an axial compressor.
In other words, it uses the vehicle's forward motion to compress the oncoming air by "ramming" it into the engine. Since they have no way of drawing air in, they do not work in a static (not forward-moving) situation. They often work best at supersonic speeds. Aside from the device used to pump fuel into the engine, they have essentially no moving parts.
A ramjet is a jet engine in which ram air pressure, generated by the forward motion of the air vehicle is employed to compress air before fuel is mixed with it and burned to produce thrust through an increase in temperature and pressure of the expanding gases. This means that a ramjet cannot generate static thrust or operate efficiently below a certain airspeed. Ramjet engines are usually designed to operate at supersonic speeds. A scramjet, or supersonic combustion ramjet is a ramjet in which the airflow through the engine combustion section is supersonic. |
I'd like to gather information and references on the following functional equation for power series $$f(f(x))=x+f(x)^2,$$$$f(x)=\sum_{k=1}^\infty c_k x^k$$
(so $c_0=0$ is imposed).
First things that can be established quickly:
it has a unique solution in $\mathbb{R}[[x]]$, as the coefficients are recursively determined; its formal inverse is $f^{-1}(x)=-f(-x)$, as both solve uniquely the same functional equation; since the equation may be rewritten $f(x)=f^{-1}(x)+x^2$, it also follows that $f(x)+f(-x)=x^2$, the even part of $f$ is just $x^2/2$, and $c_2$ is the only non-zero coefficient of even degree; from the recursive formula for the coefficients, they appear to be integer multiples of negative powers of $2$ (see below the recursive formula). Remark. It seems (but I did not try to prove it) that $2^{k-1}c_k$ is an integer for all $k$, and that $(-1)^k c_{2k-1} > 0$ for all $k\geq 2$.
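For what it's worth, the first coefficients can be checked with exact rational arithmetic (a throwaway script, not part of the question; all names are ad hoc):

```python
from fractions import Fraction

N = 10  # truncation order

def mul(a, b):
    # product of two power series given as coefficient lists, truncated at N
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj and i + j < N:
                    c[i + j] += ai * bj
    return c

def compose(outer, inner):
    # outer(inner(x)) truncated at N; inner has zero constant term
    result = [Fraction(0)] * N
    power = [Fraction(0)] * N
    power[0] = Fraction(1)
    for k in range(1, N):
        power = mul(power, inner)
        for i in range(N):
            result[i] += outer[k] * power[i]
    return result

c = [Fraction(0)] * N
c[1] = Fraction(1)  # from matching the x-coefficient: c_1^2 = 1
for n in range(2, N):
    # c_n enters the order-n coefficient of f(f(x)) linearly with factor 2,
    # so the residual of x + f^2 - f(f) at order n (with c_n = 0) yields it
    lhs = compose(c, c)
    rhs = mul(c, c)
    c[n] = (rhs[n] - lhs[n]) / 2

print([str(ck) for ck in c[1:7]])  # ['1', '1/2', '1/4', '0', '-1/8', '0']
```

The observed pattern (denominators dividing $2^{k-1}$, vanishing even-degree coefficients beyond $c_2$) holds for the computed range.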
Question: how to see in a quick way that this series has a positive radius of convergence, and possibly to compute or to estimate it?
[updated] A more reasonable question, after the numeric results and various comments, seems rather to be: how to prove that this series does not converge.
Note that the radius of convergence has to be finite, otherwise $f$ would be an automorphism of $\mathbb{C}$. Yes, of course I did evaluate the first coefficients and put them in OEIS, getting the sequence of numerators A107700; unfortunately, it has no further information.
Motivation. I want to understand a simple discrete dynamical system on $\mathbb{R}^2$, namely the diffeomorphism $\phi: (x,y)\mapsto (y, x+y^2)$, which has a unique fixed point at the origin. It is not hard to show that the stable manifold and the unstable manifold of $\phi$ are $$W^s(\phi)=\mathrm{graph}\big( g_{|(-\infty,\ 0]}\big)$$$$W^u(\phi)=\mathrm{graph}\big( g_{|[0, \ +\infty)}\big)$$
for a certain continuous, strictly increasing function $g:\mathbb{R}\to\mathbb{R}$, that solves the above functional equation. Therefore, knowing that the power series solution has a positive radius of convergence immediately implies that it coincides locally with $g$ (indeed, if $f$ converges we have $f(x)=x+x^2/2+o(x^2)$ at $x=0$ so its graph on $x\le0$ is included in $W^s$, and its graph on $x\ge0$ is included in $W^u$: therefore the whole graph of $f$ would be included in the graph of $g$,implying that $f$ coincides locally with $g$). If this is the case, $g$ is then analytic everywhere, for suitable iterates of $\phi$ give analytic diffeomorphism of any large portion of the graph of $g$ with a small portion close to the origin.
One may also argue the other way, showing directly that $g$ is analytic, which would imply the convergence of $f$. Although it seems feasible, the latter argument would look a bit indirect way, and in that case I'd like to make sure there is no easy direct way of working on the coefficients (of course, it may happen that $g$ is not analytic and $f$ is not convergent).
Details: equating the coefficients on both sides of the equation for $f$, one has for the 2-jet$$c_1^2x+(c_1c_2+c_2c_1^2)x^2 =x + c_1^2x^2,$$whence $c_1=1$ and $c_2=\frac 1 2;$ and for $n>2$ $$2c_n=\sum_{1\le j\le n-1}c_jc_{n-j}\,-\sum_{1 < r < n \atop \|k\|_1=n}c_rc_{k_1}\dots c_{k_r}.$$ More details: since it may be of interest, let me add the argument to see $W^s(\phi)$ and $W^u(\phi)$ as graphs.
Since $\phi$ is conjugate to $\phi^{-1}=J\phi J $ by the linear involution $J:(x,y)\mapsto (-y,-x)$, we have $W^u(\phi):=W^s(\phi^{-1})=J\ W^s(\phi)$, and it suffices to study $\Gamma:=W^s(\phi)$. For any $(a,b)\in\mathbb{R}^2$ we have $\phi^n(a,b)=(x_n,x_{n+1})$, with $x_0=a$, $x_1=b$, and $x_{n+1}=x_n^2+x_{n-1}$ for all $n\in\mathbb{N}$. From this it is easy to see that $x_{2n}$ and $x_{2n+1}$ are both increasing; moreover, $x_{2n}$ is bounded above iff $x_{2n+1}$ is bounded above, iff $x_{2n}$ converges, iff $x_n\to 0$, iff $x_n\le 0 $ for all $n\in\mathbb{N}$.
As a consequence $(a,b)\in \Gamma$ iff $\phi^n(a,b)\in Q:=(-\infty,0]\times(-\infty,0]$, whence $ \Gamma=\cap_{ n\in\mathbb{N}} \phi^{-n}(Q)$. The latter is a nested intersection of connected unbounded closed sets containing the origin, therefore such is $\Gamma$ too.
In particular, for any $a\leq 0$ there exists at least one $b\leq 0$ such that $(a,b)\in \Gamma$: to prove that $b$ is unique, that is, that $\Gamma$ is a graph over $(-\infty,0]$, the argument is as follows. Consider the function $V:\Gamma\times\Gamma\to\mathbb{R}$ such that $V(p,p'):=(a-a')(b-b')$ for all $p:=(a,b)$ and $p':=(a',b')$ in $\Gamma$.
Showing that $\Gamma$ is the graph of a strictly increasing function is equivalent to showing that $V(p,p')>0$ for all pairs of distinct points $p\neq p'$ in $\Gamma$.
By direct computation we have $V\big(\phi(p),\phi(p')\big)\leq V(p,p')$ and $\big(\phi(p)-\phi(p')\big)^2\geq \|p-p'\|^2+2V(p,p')(b+b')$. Now, if a pair $(p,p')\in\Gamma\times\Gamma$ has $V(p,p')\le0$, then also by induction $V\big(\phi^n(p),\phi^n(p')\big)\leq 0$ and $\big(\phi^n(p)-\phi^n(p')\big)^2\geq \|p-p'\|^2$ for all $n$, so $p=p'$ since both $\phi^n(p)$ and $\phi^n(p')$ tend to $0$. This proves that $\Gamma$ is a graph of a strictly increasing function $g:\mathbb{R}\to\mathbb{R}$: since it is connected, $g$ is also continuous. Of course the fact that $\Gamma$ is $\phi$-invariant implies that $g$ solves the functional equation. |
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling
@heather well, there's a spectrum
so, there's things like New Journal of Physics and Physical Review X
which are the open-access branch of existing academic-society publishers
As far as the intensity of a single-photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density$$u=\frac{\hbar\omega}{V}$$is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di...
Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." — tparker3 mins ago
> A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service”
for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty
> for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing)
@0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals.
@BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work...
@BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions.
Alternative Plan: 1. Read Vol 1 of Hormander. 2. Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley.
I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea.
@EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results...
Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry.Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town...
@EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit. |
Absorbent Sets
Definition: Let $X$ be a linear space and let $E \subseteq X$. Then $E$ is said to be Absorbent or an Absorbing Set if for every $x \in X$ there exists a $\lambda > 0$ such that $\lambda x \in E$.
Proposition 1: Let $X$ be a seminormed linear space. Then the open unit ball $B(0, 1) = \{ x \in X : p(x) < 1 \}$ and the closed unit ball $\bar{B}(0, 1) = \{ x \in X : p(x) \leq 1 \}$ are both absorbent sets. Proof:Let $x \in X$. If $p(x) = 0$ then by definition, $x \in B(0, 1)$. So assume that $p(x) \neq 0$. Let $\lambda = \frac{1}{2p(x)} > 0$. Then:
\begin{align} \quad p(\lambda x) = p \left ( \frac{1}{2p(x)} x \right ) = \frac{1}{2p(x)} p(x) = \frac{1}{2} < 1 \end{align}
This shows that $\lambda x \in B(0, 1)$. So for every $x \in X$ there exists a $\lambda > 0$ such that $\lambda x \in B(0, 1)$. Thus $B(0, 1)$ is absorbent. A similar argument shows that $\bar{B}(0, 1)$ is also absorbent. $\blacksquare$ |
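As a toy illustration (not part of the note; the seminorm and all names are made up), the scaling chosen in the proof of Proposition 1 can be checked numerically for the seminorm $p(x) = |x_1|$ on $\mathbb{R}^2$:

```python
def p(x):
    # a seminorm on R^2 that vanishes on the whole x_2-axis
    return abs(x[0])

def absorbing_scalar(x):
    # the lambda from the proof of Proposition 1: any positive value
    # works when p(x) = 0; otherwise take 1 / (2 p(x))
    return 1.0 if p(x) == 0 else 1.0 / (2.0 * p(x))

x = (3.0, 5.0)
lam = absorbing_scalar(x)
scaled = (lam * x[0], lam * x[1])
print(p(scaled))  # 0.5 < 1, so lam * x lies in the open unit ball
```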
Bifurcation and multiplicity results for a class of $n\times n$ $p$-Laplacian system
1. Department of Mathematics, Indian Institute of Technology Madras, Chennai-600036, India
2. Department of Mathematics and Statistics, University of North Carolina at Greensboro, Greensboro, NC 27412, USA
3. Department of Mathematics, Wayne State University, Detroit, MI 48202, USA
We study positive solutions of the $n\times n$ $p$-Laplacian system
$\begin{equation*}\begin{cases}-\left(\varphi_{p_1}(u_1')\right)' = \lambda h_1(t) \left(u_1^{p_1-1-\alpha_1}+f_1(u_2)\right),\quad t\in (0,1),\\-\left(\varphi_{p_2}(u_2')\right)' = \lambda h_2(t) \left(u_2^{p_2-1-\alpha_2}+f_2(u_3)\right),\quad t\in (0,1),\\\quad\quad\quad\vdots\qquad\,\: =\quad\quad\quad\quad\quad\quad \vdots\\-\left(\varphi_{p_n}(u_n')\right)' = \lambda h_n(t) \left(u_n^{p_n-1-\alpha_n}+f_n(u_1)\right),~~\, t\in (0,1),\\\quad\,\,\,\, u_j(0)=0=u_j(1); ~~ j=1,2,\dots,n, \\ \end{cases}\end{equation*}$
where $\lambda$ is a positive parameter, $p_j>1$, $\alpha_j\in(0,p_j-1)$, $\varphi_{p_j}(w)=|w|^{p_j-2}w$, and $h_j \in C((0,1),(0, \infty))\cap L^1((0,1),(0,\infty))$ for $j=1,2,\dots,n$; the nonlinearities $f_j:[0,\infty)\rightarrow[0,\infty)$, $j=1,2,\dots,n$, satisfy $f_j(0)=0$. Bifurcation and multiplicity results are established in terms of the parameter $\lambda>0$.
Mathematics Subject Classification: Primary: 34B16, 34B18; Secondary: 35J57.
Citation: Mohan Mallick, R. Shivaji, Byungjae Son, S. Sundar. Bifurcation and multiplicity results for a class of $n\times n$ $p$-Laplacian system. Communications on Pure & Applied Analysis, 2018, 17 (3) : 1295-1304. doi: 10.3934/cpaa.2018062
Research | Open Access | Published: Reconstruction of the Volterra-type integro-differential operator from nodal points. Boundary Value Problems, volume 2018, Article number: 47 (2018)
Abstract
In this work, the Sturm–Liouville problem perturbated by a Volterra-type integro-differential operator is studied. We give a uniqueness theorem and an algorithm to reconstruct the potential of the problem from nodal points (zeros of eigenfunctions).
Introduction
We consider the boundary value problem L generated by the convolution-type Sturm–Liouville integro-differential operator
with boundary conditions
and with the discontinuity conditions
where λ is the spectral parameter; α is a positive real constant; \(q(x)\) and \(M(x,t)\) are real-valued functions from the classes \(L_{2}(0,\pi)\) and \(W_{2}^{1}(0,\pi)\), respectively. Without loss of generality, we assume that \(\int_{0}^{\pi} ( q(x)+M(x,x) ) \,dx=0\).
The first result on the inverse nodal Sturm–Liouville problem was given by McLaughlin in [1], who proved that the potential of the considered problem can be uniquely determined from a given dense subset of the zeros of the eigenfunctions, called nodal points. In 1989, Hald and McLaughlin studied more general boundary conditions and gave numerical schemes for the reconstruction of the potential from a given dense subset of nodal points [2]. Yang provided an algorithm to determine the coefficients of the Sturm–Liouville problem from given nodal points in [3]. Inverse nodal problems for different types of operators have been studied extensively in several papers (see [4–14] and [15]).
Inverse problems for integro-differential operators and other classes of nonlocal operators are more difficult to investigate, and the classical methods are often not applicable to such problems. At present, studies concerning the perturbation of a differential operator by a Volterra-type integral operator, namely the integro-differential operator, continue to be performed and are beginning to occupy a significant place in the literature (see [16–21] and [22]). The inverse nodal problem for this type of operator was first discussed in [23]. That study showed that the potential function can be determined from nodal points when the coefficient of the integral operator is known. The inverse nodal problem for Dirac-type integro-differential operators was first investigated in [24], where it is shown that the coefficients of the differential part of the operator can be determined from nodal points, and that nodal points also give partial information about the integral part.
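As a small numerical illustration (not from the paper: it uses the unperturbed problem \(-y''=\lambda y\), \(y(0)=y(\pi)=0\), with no integral term and no discontinuity; all names are ad hoc), the nodal points of an eigenfunction can be located by integrating the equation and finding sign changes:

```python
import math

def shoot(lam, q, h=1e-3):
    # integrate -y'' + q(x) y = lam y on [0, pi] with y(0) = 0, y'(0) = 1
    # by classical RK4; returns the mesh and the solution values
    def f(x, y):
        return (q(x) - lam) * y  # y'' = (q - lam) y
    xs, ys = [0.0], [0.0]
    x, y, yp = 0.0, 0.0, 1.0
    for _ in range(int(math.pi / h)):
        k1y, k1p = yp, f(x, y)
        k2y, k2p = yp + h / 2 * k1p, f(x + h / 2, y + h / 2 * k1y)
        k3y, k3p = yp + h / 2 * k2p, f(x + h / 2, y + h / 2 * k2y)
        k4y, k4p = yp + h * k3p, f(x + h, y + h * k3y)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        yp += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
        x += h
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = shoot(lam=16.0, q=lambda x: 0.0)  # eigenfunction sin(4x)/4
nodes = []
for i in range(len(ys) - 1):
    if ys[i] * ys[i + 1] < 0:  # a sign change brackets a nodal point
        t = ys[i] / (ys[i] - ys[i + 1])
        nodes.append(xs[i] + t * (xs[i + 1] - xs[i]))
print([round(nd, 4) for nd in nodes])  # interior zeros near pi/4, pi/2, 3*pi/4
```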
Preliminaries
Let \(\varphi(x,\lambda)\) be the solution of (1) with the initial conditions
and the jump conditions (4).
We have the following integral equations of the solution of (1): for \(x<\frac{\pi}{2}\),
for \(x>\frac{\pi}{2}\),
where \(\alpha^{\pm}=\frac{1}{2} ( \alpha\pm\frac{1}{\alpha} ) \). By virtue of the above equations, we have the following asymptotic relations for sufficiently large \(\vert \lambda \vert \): for \(x<\frac{\pi}{2}\),
for \(x>\frac{\pi}{2}\),
where \(I_{1}(x)=2h+\int_{0}^{x}q(t)\,dt+\int_{0}^{x}M(t,t)\,dt\), \(I_{2}(x)=\int_{0}^{\pi/2} ( q(t)+M(t,t) ) \,dt-\int_{\pi /2}^{x} ( q(t)+M(t,t) ) \,dt\), and \(\tau= \vert \operatorname{Im}\sqrt {\lambda} \vert \).
Define a function \(\Delta(\lambda)\) as follows:
This entire function is called the characteristic function of the problem L, and its zeros are the eigenvalues of the problem L. For sufficiently large \(\vert \lambda \vert \), by virtue of (8) and (9), we have the following asymptotic formula:
where
It can be easily shown that the sequence \(\{ \lambda_{n} \} _{n\geq0}\) satisfies the following asymptotic relation for \(n\rightarrow \infty\):
and
where \(\mu_{n}=\delta_{1}+(-1)^{n}\delta_{2}\).
Lemma 1 The eigenfunction \(\varphi(x,\lambda_{n})\) corresponding to the eigenvalue \(\lambda_{n}\) has exactly n zeros \(\{ x_{n}^{j}: n\geq1,\ j=\overline{0,n-1} \} \), namely nodal points, in \(( 0,\pi ) \), such that \(0< x_{n}^{0}< x_{n}^{1}<\cdots<x_{n}^{n-1}<\pi\), and the numbers \(\{ x_{n}^{j} \} \) have the following asymptotic formulae for sufficiently large n: for \(x_{n}^{j}\in(0,\frac{\pi}{2})\), and for \(x_{n}^{j}\in(\frac{\pi}{2},\pi)\), where \(\rho_{n}=\alpha^{+}+ ( -1 ) ^{n}\alpha^{-}\). Proof
for sufficiently large n, uniformly in x. Since the zeros of the eigenfunctions are nodal points, from \(\varphi(x_{n}^{j},\lambda_{n})=0\), we get
which implies that
which is equivalent to
for \(x_{n}^{j}>\frac{\pi}{2}\). The Taylor expansion of the arctangent yields
If we divide both sides of this equality by \(\sqrt{\lambda_{n}}\) and take account of the asymptotic formula of \(\sqrt{\lambda_{n}}\), we get
Let \(\Omega=\Omega_{0}\cup\Omega_{1}\) be the set of zeros of the eigenfunctions, i.e., \(\Omega_{0}= \{ x_{n}^{j}:n=2k,\ k\in \mathbb{Z} \} \), \(\Omega_{1}= \{ x_{n}^{j}:n=2k+1,\ k\in \mathbb{Z} \} \). For each fixed \(x\in ( 0,\pi ) \), there exists a sequence \(( x_{n}^{j(n)} ) \subset\Omega_{m}\) (\(m=0,1\)) which converges to x. Therefore, from Lemma 1, we can show that the following finite limits exist:
where
where \(\sigma_{m}=\frac{(-1)^{m}\alpha^{-} ( \mu_{m}-2I_{1}(\frac {\pi}{2})+2h ) }{\rho_{m}}\). Put
The following theorem shows that if one of \(q(x)\) or \(M(x,x)\) is given, then the other one can be determined uniquely by using a dense subset of the given nodal set.
Theorem 1 The given dense subset of the nodal set \(\Omega_{0}\) (or \(\Omega_{1}\)) uniquely determines \(q(x)+M(x,x)\) a.e. on \(( 0,\pi ) \) and the coefficients h, H, and α of the boundary and discontinuity conditions. \(q(x)+M(x,x)\) and the constants h, H, and α can be constructed by the following formulae: 1. For each fixed \(x\in(0,\pi)\), choose a sequence \(( x_{n}^{j(n)} ) \subset\Omega_{0}\), i.e., \(\lim_{n\rightarrow \infty}x_{n}^{j(n)}=x\); 2. Find \(F_{m}(x)\) from equation (19) and calculate $$\begin{gathered} h=\frac{F_{0}(0)}{2}, \\ \mu_{0}=-F_{0}(\pi)+F_{0}(0)-F_{0} \biggl(\frac{\pi }{2}-0 \biggr)+F_{0} \biggl(\frac{\pi}{2}+0 \biggr), \\ q(x)+M(x,x) =F_{0}^{\prime}(x)+\frac{\mu_{0}}{\pi}, \\ \alpha=\sqrt{\frac{F_{0}(0)-2F_{0}(\frac{\pi}{2}-0)}{F_{0}(0)-2F_{0}(\frac{\pi}{2}+0)}}, \\ 2I_{1} \biggl( \frac{\pi}{2} \biggr) =-F_{0}( \pi )+F_{0}(0)+F_{0} \biggl(\frac{\pi}{2}-0 \biggr)+F_{0} \biggl(\frac{\pi}{2}+0 \biggr), \\ H=\frac{\alpha^{+} ( \mu_{0}-2h ) +2\alpha^{-} ( I_{1} ( \pi/2 ) -F_{0}(0) ) }{ ( \alpha^{+} ) ^{2}-2\alpha^{-}}.\end{gathered} $$ Example 1
Consider the following BVP:
where \(q(x)\) and \(M(x,t)\) are real-valued functions from the class \(L_{2}(0,\pi)\) and \(W_{2}^{1}(0,\pi)\), respectively, and
h, H, α are unknown coefficients assumed to satisfy the conditions of the problem L. Let \(\{ x_{n}^{j} \} \) be the zeros of the eigenfunction of the considered problem in \((0,\pi)\) with the following asymptotics: If \(x_{n}^{j}\in ( 0,\frac{\pi}{2} ) \),
If \(x_{n}^{j}\in ( \frac{\pi}{2},\pi ) \),
then we can calculate that
According to Theorem 1,
Conclusion
In this paper we have investigated the discontinuous inverse nodal problem for a Volterra-type integro-differential operator. We showed that if one of \(q(x)\) or \(M(x,x)\) is given, then the other can be determined uniquely by using only the given nodal points.
References

1. McLaughlin, J.R.: Inverse spectral theory using nodal points as data—a uniqueness result. J. Differ. Equ. 73, 354–362 (1988)
2. Hald, O.H., McLaughlin, J.R.: Solutions of inverse nodal problems. Inverse Probl. 5, 307–347 (1989)
3. Yang, X.-F.: A solution of the nodal problem. Inverse Probl. 13, 203–213 (1997)
4. Browne, P.J., Sleeman, B.D.: Inverse nodal problem for Sturm–Liouville equation with eigenparameter dependent boundary conditions. Inverse Probl. 12, 377–381 (1996)
5. Buterin, S.A., Shieh, C.T.: Inverse nodal problem for differential pencils. Appl. Math. Lett. 22, 1240–1247 (2009)
6. Buterin, S.A., Shieh, C.T.: Incomplete inverse spectral and nodal problems for differential pencil. Results Math. 62, 167–179 (2012)
7. Cheng, Y.H., Law, C.-K., Tsay, J.: Remarks on a new inverse nodal problem. J. Math. Anal. Appl. 248, 145–155 (2000)
8. Guo, Y., Wei, G.: Inverse problems: dense nodal subset on an interior subinterval. J. Differ. Equ. 255, 2002–2017 (2013)
9. Law, C.K., Shen, C.L., Yang, C.F.: The inverse nodal problem on the smoothness of the potential function. Inverse Probl. 15(1), 253–263 (1999). Erratum: Inverse Probl. 17, 361–363 (2001)
10. Ozkan, A.S., Keskin, B.: Inverse nodal problems for Sturm–Liouville equation with eigenparameter dependent boundary and jump conditions. Inverse Probl. Sci. Eng. 23(8), 1306–1312 (2015)
11. Shieh, C.-T., Yurko, V.A.: Inverse nodal and inverse spectral problems for discontinuous boundary value problems. J. Math. Anal. Appl. 347, 266–272 (2008)
12. Yang, X.-F.: A new inverse nodal problem. J. Differ. Equ. 169, 633–653 (2001)
13. Yang, C.-F., Yang, X.-P.: Inverse nodal problems for the Sturm–Liouville equation with polynomially dependent on the eigenparameter. Inverse Probl. Sci. Eng. 19(7), 951–961 (2011)
14. Yang, C.-F.: Inverse nodal problems of discontinuous Sturm–Liouville operator. J. Differ. Equ. 254, 1992–2014 (2013)
15. Yang, C.-F., Pivovarchik, V.N.: Inverse nodal problem for Dirac system with spectral parameter in boundary conditions. Complex Anal. Oper. Theory 7, 1211–1230 (2013)
16. Buterin, S.A.: On the reconstruction of a convolution perturbation of the Sturm–Liouville operator from the spectrum. Differ. Equ. 46, 150–154 (2010)
17. Buterin, S.A.: On an inverse spectral problem for first-order integro-differential operators with discontinuities. Appl. Math. Lett. 78, 65–71 (2018)
18. Freiling, G., Yurko, V.A.: Inverse Sturm–Liouville Problems and Their Applications. Nova Science, New York (2001)
19. Kuryshova, Y.V.: Inverse spectral problem for integro-differential operators. Math. Notes 81(6), 767–777 (2007)
20. Wu, B., Yu, J.: Uniqueness of an inverse problem for an integro-differential equation related to the Basset problem. Bound. Value Probl. 2014, Article ID 229 (2014)
21. Yurko, V.A.: An inverse problem for integro-differential operators. Math. Notes 50(5–6), 1188–1197 (1991)
22. Yurko, V.A.: Inverse problems for second order integro-differential operators with discontinuities. Appl. Math. Lett. 74, 1–6 (2017)
23. Kuryshova, Y.V., Shieh, C.T.: An inverse nodal problem for integro-differential operators. J. Inverse Ill-Posed Probl. 18, 357–369 (2010)
24. Keskin, B., Ozkan, A.S.: Inverse nodal problems for Dirac-type integro-differential operators. J. Differ. Equ. 263, 8838–8847 (2017)
25. Buterin, S.A.: The inverse problem of recovering the Volterra convolution operator from the incomplete spectrum of its rank-one perturbation. Inverse Probl. 22, 2223–2236 (2006)
26. Buterin, S.A.: On an inverse spectral problem for a convolution integro-differential operator. Results Math. 50, 173–181 (2007)

Availability of data and materials
Not applicable.
Funding
Not applicable.
Ethics declarations

Ethics approval and consent to participate
Not applicable.
Competing interests
The author declares that he has no competing interests.
Additional information

Abbreviations
Not applicable.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
I'll assume we want a cryptographic hash giving security in the Random Oracle Model; collision-resistance and preimage-resistance follows. Collision-resistance alone rules out CRC, regardless of size.
The standard technique would be to split the file into blocks, distribute them with their indexes to the participants, each of which hashes its blocks; then hash the hashes concatenated in order of increasing index, or use a Merkle tree, to form the hash of the whole file. However, with blocks distributed in a haphazard manner (as in the question), most block hashes need to be exchanged, which can get sizable; and the distributed computation of the final hash is slightly hard to organize.
Rather, we can group the block hashes (made dependent on their index) using an order-independent hash, applied by each participant over all the hashes of the blocks s/he is responsible for, then again to obtain the final hash. This simplifies the organization, and saves bandwidth when there are more than a few blocks per participant: like 8 in the following simple example using $\Bbb Z_p^*$, but I conjecture that the overhead can be made negligible down to 1 block per participant using an Elliptic Curve Group instead.
For a 256-bit hash, marginally more costly than a regular one for large files, we'll use:
Some 512-bit hash $H$, e.g. $H=\operatorname{SHA-512}$. Some 2048-bit prime $p$ making the Discrete Logarithm Problem in $\mathbb Z_p^*$ (conjecturally) hard; see final section. Some public block size $b$ multiple of $2^{12}$ bit (512 bytes), e.g. $b=2^{23}$ for blocks of 1 MiB. Implicit conversion from integer to bitstring and back, per big-endian convention.
To hash a file of $s$ bits (with $s\le2^{62}b$, which is more than ample):
Split the file into $\lceil s/b\rceil$ blocks $B_i$ of size $b$ bit, except for the last which may be smaller (but non-empty), with $0\le i<\lceil s/b\rceil$. Distribute the blocks $B_i$ and indexes $i$, such that each block is assigned to a participant only once. Have each participant $j$ perform: $f_j\gets1$ For each block $B_i$ assigned to participant $j$ $h_i\gets H(B_i)$. That's a 512-bit bitstring characteristic of $B_i$. $g_i\gets H(h_i\mathbin\|\widetilde{4i})\mathbin\|H(h_i\mathbin\|\widetilde{4i+1})\mathbin\|H(h_i\mathbin\|\widetilde{4i+2})\mathbin\|H(h_i\mathbin\|\widetilde{4i+3})$ where $\widetilde{\;n\;}$ is the representation of integer $n$ as a 64-bit bitstring. Since function $H$ returns 512 bits, the concatenation of the 4 hashes makes $g_i$ a 2048-bit bitstring, characteristic of $B_i$ and $i$. $f_j\gets f_j\cdot g_i\bmod p$. $f_j$ is a 2048-bit bitstring characteristic of the $B_i$ and $i$ assigned to participant $j$. If $j\ne 0$, transmit that $f_j$ to participant $0$. Participant $0$ performs: $f\gets f_0$ When receiving $f_j$ with $j\ne 0$ $f\gets f\cdot f_j\bmod p$. $h\gets H(f)$ truncated to its first 256 bits, where $f$ is represented as a 2048-bit bitstring when applying $H$. Send $h$ to all participants.
Absent message alteration or loss, $h$ is independent of how the blocks have been distributed. That is a 256-bit bitstring characteristic of the whole file, computed in a largely distributed manner. The computation of $f$ and $h$ could be distributed too, at a small extra cost in message exchange.
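For concreteness, here is a single-process Python sketch of the scheme (function names are mine; in a real deployment each `participant_hash` call would run on a different machine). It uses SHA-512 for $H$ and, for $p$, the RFC 3526 prime quoted at the end of this answer; the final digest is invariant under regrouping and reordering of the blocks among participants, but still depends on each block's index:

```python
import hashlib

# RFC 3526 2048-bit MODP prime (hex constant quoted at the end of this answer)
P = int(
    "ffffffffffffffffc90fdaa22168c234c4c6628b80dc1cd129024e088a67cc74"
    "020bbea63b139b22514a08798e3404ddef9519b3cd3a431b302b0a6df25f1437"
    "4fe1356d6d51c245e485b576625e7ec6f44c42e9a637ed6b0bff5cb6f406b7ed"
    "ee386bfb5a899fa5ae9f24117c4b1fe649286651ece45b3dc2007cb8a163bf05"
    "98da48361c55d39a69163fa8fd24cf5f83655d23dca3ad961c62f356208552bb"
    "9ed529077096966d670c354e4abc9804f1746c08ca18217c32905e462e36ce3b"
    "e39e772c180e86039b2783a2ec07a28fb5c55df06f4c52c9de2bcbf695581718"
    "3995497cea956ae515d2261898fa051015728e5a8aacaa68ffffffffffffffff",
    16)

def block_term(block: bytes, i: int) -> int:
    """g_i: a 2048-bit integer characteristic of block B_i and its index i."""
    h = hashlib.sha512(block).digest()                      # h_i = H(B_i)
    g = b"".join(hashlib.sha512(h + (4 * i + k).to_bytes(8, "big")).digest()
                 for k in range(4))                         # 4 x 512 = 2048 bits
    return int.from_bytes(g, "big")

def participant_hash(assigned: dict) -> int:
    """f_j: product mod p over the {index: block} pairs of one participant."""
    f = 1
    for i, block in assigned.items():
        f = f * block_term(block, i) % P
    return f

def combine(partials) -> bytes:
    """Participant 0's final step: 256-bit hash combining all the f_j."""
    f = 1
    for fj in partials:
        f = f * fj % P
    return hashlib.sha512(f.to_bytes(256, "big")).digest()[:32]
```

Since multiplication mod $p$ is commutative and associative, any partition of the indexed blocks among participants yields the same $f$, hence the same $h$.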
The order-independent hash is borrowed from the multiplicative one in Dwaine Clarke, Srinivas Devadas, Marten van Dijk, Blaise Gassend, G. Edward Suh,
Incremental Multiset Hash Functions and Their Application to Memory Integrity Checking, in proceedings of AsiaCrypt 2003, which is given a security reduction in appendix C. The security of the whole construction should follow.
On the choice of $p$: our requirement is hardness of the DLP in $\Bbb Z_p^*$, as in classic Diffie-Hellman key exchange. We need a 2048-bit safe prime, with no special form $p=2^k\pm s$ that could make SNFS easier. Customarily, a nothing-up-my-sleeve number based on the bits of some transcendental mathematical constant is used, as good-enough assurance that $p$ is of no special form.
That can be $p=\lfloor2^{2046}\pi\rfloor+3617739$. The construction uses the first 2048 bits of the binary representation of $\pi$, then increments until hitting a safe prime. Hexadecimal value:
c90fdaa22168c234c4c6628b80dc1cd129024e088a67cc74020bbea63b139b22514a08798e3404ddef9519b3cd3a431b302b0a6df25f14374fe1356d6d51c245e485b576625e7ec6f44c42e9a637ed6b0bff5cb6f406b7edee386bfb5a899fa5ae9f24117c4b1fe649286651ece45b3dc2007cb8a163bf0598da48361c55d39a69163fa8fd24cf5f83655d23dca3ad961c62f356208552bb9ed529077096966d670c354e4abc9804f1746c08ca18217c32905e462e36ce3be39e772c180e86039b2783a2ec07a28fb5c55df06f4c52c9de2bcbf6955817183995497cea956ae515d2261898fa051015728e5a8aaac42dad33170d04507a33a85521abdf53ee2f
As pointed out by Squeamish Ossifrage in a comment, we could use the 2048-bit MODP group proposed by RFC 3526: $p=2^{2048}-2^{1984}-1+2^{64}\cdot\lfloor2^{1918}\pi+124476\rfloor$. That similarly uses as many of the first bits of the binary representation of $\pi$ as possible, but by construction has the 66 high-order bits (including two from $\pi\approx 3$) and 64 low-order bits set. The high-order bits simplify the choice of dividend limbs in Euclidean division by the classical method, while the low-order bits simplify Montgomery reduction. This is believed to be few enough forced bits not to allow a huge speedup of the DLP.
ffffffffffffffffc90fdaa22168c234c4c6628b80dc1cd129024e088a67cc74020bbea63b139b22514a08798e3404ddef9519b3cd3a431b302b0a6df25f14374fe1356d6d51c245e485b576625e7ec6f44c42e9a637ed6b0bff5cb6f406b7edee386bfb5a899fa5ae9f24117c4b1fe649286651ece45b3dc2007cb8a163bf0598da48361c55d39a69163fa8fd24cf5f83655d23dca3ad961c62f356208552bb9ed529077096966d670c354e4abc9804f1746c08ca18217c32905e462e36ce3be39e772c180e86039b2783a2ec07a28fb5c55df06f4c52c9de2bcbf6955817183995497cea956ae515d2261898fa051015728e5a8aacaa68ffffffffffffffff |
Transform to $y_i=\sin^2 x_i$. Then $\sum_iy_i=1$, and with $\sin x_i=\sqrt{y_i}$ and $\cos x_i=\sqrt{1-y_i}$ the inequality becomes
$$\sum_i\left(\sqrt{1-y_i}-3\sqrt{y_i}\right)\ge0\;.$$
The left-hand side is $10$ times the average value of $f(y)=\sqrt{1-y}-3\sqrt y$ at the $y_i$. The graph of $f$ has an inflection point at $y=1/\left(1+3^{-2/3}\right)\approx0.675$ and is convex to its left. Either none or one of the $y_i$ can be to its right. If none are, the average value of $f$ is greater than or equal to the value at the average, $f(1/10)=0$. If one is, say $y_1$, then we can bound the average value of the remaining ones by the value at their average, so in this case
$$\sum_i\left(\sqrt{1-y_i}-3\sqrt{y_i}\right)\ge\sqrt{1-y_1}-3\sqrt{y_1}+9\sqrt{1-(1-y_1)/9}-27\sqrt{(1-y_1)/9}\;.$$
This is non-negative with a single root at $1/10$, so the inequality holds in both cases. |
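This case analysis is easy to check numerically. In the sketch below (names mine), `bound(y1)` is the right-hand side of the last display, i.e. $f(y_1)+9f\bigl((1-y_1)/9\bigr)$:

```python
import math

def f(y):
    # The summand after the substitution y_i = sin^2 x_i
    return math.sqrt(1 - y) - 3 * math.sqrt(y)

def bound(y1):
    """Value of the sum when one variable equals y1 and the other nine
    are replaced by their average (1 - y1)/9."""
    return f(y1) + 9 * f((1 - y1) / 9)
```

Sampling `bound` over a grid on $(0,1)$ confirms that it is non-negative and vanishes only near $y_1=1/10$, the equality case where all $y_i=1/10$.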
I need to write the tensorial representation of the Controlled Swap Gate. What I have written is $\operatorname{CSWAP}=|0\rangle\langle0|\otimes I\otimes I+|1\rangle\langle1|\otimes U$, where $U$ is the matrix of the SWAP transformation, that is $$|00\rangle\to 1|00\rangle+0|01\rangle+0|10\rangle +0|11\rangle $$ $$ |01\rangle\to 0|00\rangle+0|01\rangle+1|10\rangle +0|11\rangle $$ $$|10\rangle\to 0|00\rangle+1|01\rangle+0|10\rangle +0|11\rangle $$ $$|11\rangle\to 0|00\rangle+0|01\rangle+0|10\rangle +1|11\rangle,$$ so the matrix becomes $$U=\begin{bmatrix} 1 &0&0&0\\ 0&0&1&0\\ 0&1&0&0\\ 0&0&0&1 \end{bmatrix},$$ Is this the correct implementation? Just to add a bit: how do I write this $2$-qubit gate as a tensor product of $1$-qubit gates so that there is uniformity in the equation above?
SWAP is a two-qubit gate and needs to be written as $$ \text{SWAP}=|00\rangle\langle 00|+|11\rangle\langle 11|+|01\rangle\langle 10|+|10\rangle\langle 01|. $$ If you want to write this in terms of Pauli-operators, for example, you might write $$ \text{SWAP}=\frac12\left(\mathbb{I}\otimes\mathbb{I}+Z\otimes Z+X\otimes X+Y\otimes Y\right). $$
Yes, that is the correct matrix representation of the CSWAP gate (also often referred to as the Fredkin gate).
Regarding writing it as a "tensor product of $1$-qubit gates", the only missing step is writing the swap in braket notation, which you can do as follows:$$\operatorname{SWAP}=\lvert00\rangle\!\langle00\rvert+\lvert11\rangle\!\langle11\rvert+\lvert01\rangle\!\langle10\rvert+\lvert10\rangle\!\langle01\rvert,$$so that the overall gate reads\begin{align}\operatorname{CSWAP}&=|0\rangle\!\langle0|\otimes I\otimes I+|1\rangle\!\langle1|\otimes \operatorname{SWAP}\\&=|0\rangle\!\langle0|\otimes I\otimes I+|1\rangle\!\langle1|\otimes(\lvert00\rangle\!\langle00\rvert+\lvert11\rangle\!\langle11\rvert+\lvert01\rangle\!\langle10\rvert+\lvert10\rangle\!\langle01\rvert).\end{align}
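All of this is easy to verify numerically. The NumPy sketch below builds $\operatorname{CSWAP}$ from the projectors via Kronecker products (ordering convention: the control is the leftmost qubit) and also checks the Pauli decomposition of SWAP mentioned above:

```python
import numpy as np

I2 = np.eye(2)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])
P0 = np.array([[1, 0], [0, 0]])   # |0><0|
P1 = np.array([[0, 0], [0, 1]])   # |1><1|

# CSWAP = |0><0| (x) I (x) I + |1><1| (x) SWAP
CSWAP = np.kron(P0, np.eye(4)) + np.kron(P1, SWAP)

# Pauli decomposition: SWAP = (I(x)I + X(x)X + Y(x)Y + Z(x)Z)/2
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
SWAP_pauli = 0.5 * (np.kron(I2, I2) + np.kron(X, X)
                    + np.kron(Y, Y) + np.kron(Z, Z))
```

With basis states indexed as $|abc\rangle \mapsto 4a+2b+c$, the resulting $8\times 8$ matrix is a permutation swapping indices 5 ($|101\rangle$) and 6 ($|110\rangle$), as expected.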
Since you're writing what look to be Riemann integrals, I presume we're keeping to the case where $X$ is a continuous random variable.
If $m$ is a value for which $\int_{-\infty}^m f_X(x)\, dx =\frac12$ then $m$ is a median of $X$ (if the density is $>0$ in a neighborhood of $m$, then $m$ will be unique: the median).
Since here the upper limit of the integral is given as $m=E(X)$, this amounts - as Nick Cox pointed out in comments - to saying "is there a distribution for which the mean differs from the median", and the obvious thing to do when searching for a counterexample is to try some distributions that are skew*.
Here's an obvious example: the exponential distribution, which has its median at $\ln 2$ times the mean (i.e. about 70% of the mean).
You might like to also consider the density
$$f(x) = \cases{ \begin{array}{lr} 0, & \text{for } x\leq 0\\ 2x, & \text{for } 0< x\leq 1\\ 0, & \text{for } x> 1 \end{array}} $$
as a simple one to try for yourself (since the integrations are particularly simple).
*(not every asymmetric distribution has mean different from median, however) |
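Both examples are easy to check numerically; the sketch below (function name mine) locates the median by bisection on the CDF so it can be compared with the mean:

```python
import math

def median_from_cdf(cdf, lo, hi, tol=1e-12):
    """Find m with cdf(m) = 1/2 by bisection (cdf assumed increasing)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cdf(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Exponential(1): mean 1, median ln 2 ~ 0.693 (about 70% of the mean)
m_exp = median_from_cdf(lambda x: 1 - math.exp(-x), 0.0, 50.0)

# Density f(x) = 2x on (0, 1): CDF x^2, mean 2/3, median 1/sqrt(2) ~ 0.707
m_tri = median_from_cdf(lambda x: x * x, 0.0, 1.0)
```

In both cases the median differs from the mean, as the integrations confirm analytically.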
Metrizability of Finite Topological Products
Suppose that $\{ X_1, X_2, ..., X_n \}$ is a finite collection of metrizable topological spaces. As we will see in the following theorem, the topological product $\displaystyle{\prod_{i=1}^{n} X_i}$ will also be metrizable.
Theorem 1: Let $\{ X_1, X_2, ..., X_n \}$ be a finite collection of topological spaces. If $X_i$ is metrizable for each $i \in \{ 1, 2, ..., n \}$ then the topological product $\displaystyle{\prod_{i=1}^{n} X_i}$ is also metrizable. Proof:Let $\{ X_1, X_2, ..., X_n \}$ be a finite collection of metrizable topological spaces. Then there exist metrics $d_i : X_i \times X_i \to [0, \infty)$ such that the open sets in the metric spaces $(X_i, d_i)$ give the topology on the topological spaces $(X_i, \tau_i)$ for all $i \in \{ 1, 2, ..., n \}$. Consider the finite topological product $\displaystyle{\prod_{i=1}^{n} X_i}$. Define a metric $\displaystyle{d : \prod_{i=1}^{n} X_i \times \prod_{i=1}^{n} X_i \to [0, \infty)}$ for all $\displaystyle{\mathbf{x} = (x_1, x_2, ..., x_n), \mathbf{y} = (y_1, y_2, ..., y_n) \in \prod_{i=1}^{n} X_i}$ by:
\begin{align} \quad d(\mathbf{x}, \mathbf{y}) = \sqrt{[d_1(x_1, y_1)]^2 + [d_2(x_2, y_2)]^2 + ... + [d_n(x_n, y_n)]^2} \end{align}
It's not hard to check that $d$ is indeed a metric. We now need to show that the open sets in $\displaystyle{\left ( \prod_{i=1}^{n} X_i, \tau \right )}$ (where $\tau$ is the product topology) are the same as the sets in the metric space $\displaystyle{\left ( \prod_{i=1}^{n} X_i, d \right )}$. Let $\displaystyle{U = \prod_{i=1}^{n} U_i \subseteq \prod_{i=1}^{n} X_i}$ be an open set with respect to the metric space above. Let $\mathbf{x} \in U$. Since $U$ is open, there exists a ball centered at $\mathbf{x}$ with radius $r > 0$ such that:
\begin{align} \quad \mathbf{x} \in B(\mathbf{x}, r) = \left \{ \mathbf{y} \in \prod_{i=1}^{n} X_i : d(\mathbf{x}, \mathbf{y}) < r \right \} \subseteq U \end{align}
So if $\mathbf{y} \in B(\mathbf{x}, r)$ then:
\begin{align} \quad d(\mathbf{x}, \mathbf{y}) = \sqrt{[d_1(x_1, y_1)]^2 + [d_2(x_2, y_2)]^2 + ... + [d_n(x_n, y_n)]^2} < r \end{align}
This is satisfied whenever $d_i(x_i, y_i) < \frac{r}{n}$ for all $i \in \{ 1, 2, ..., n \}$, so take $r^* = \frac{r}{n}$. Since each $X_i$ is metrizable and since $U$ is open in $\displaystyle{\prod_{i=1}^{n} X_i}$ we have that each $U_i$ is open in $X_i$, and so for $\mathbf{x} \in U$ we have that $x_i \in U_i$ and the ball $B \left (x_i, r^* \right )$ is such that:
\begin{align} \quad x_i \in B \left ( x_i, r^* \right) \subseteq U_i \end{align}
Let $\displaystyle{V = \prod_{i=1}^{n} B(x_i, r^*)}$. Then $V$ is an open neighbourhood of $\mathbf{x}$ in the topological product $\displaystyle{\prod_{i=1}^{n} X_i}$ and furthermore:
\begin{align} \quad \mathbf{x} \in V = \prod_{i=1}^{n} B(x_i, r^*) \subseteq \prod_{i=1}^{n} U_i = U \end{align}
This shows that the open sets in the metric space $\displaystyle{\left ( \prod_{i=1}^{n} X_i, d \right )}$ are the same as the open sets in the topological product $\displaystyle{\left ( \prod_{i=1}^{n} X_i, \tau \right )}$, so $\displaystyle{\prod_{i=1}^{n} X_i}$ is metrizable. |
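The product metric $d$ defined in the proof can be sanity-checked numerically. In the sketch below (names mine), the component metrics — the usual metric on $\mathbb{R}$ and a discrete metric — are illustrative choices:

```python
import math
import random

def product_metric(metrics):
    """d((x_1,...,x_n), (y_1,...,y_n)) = sqrt(sum_i d_i(x_i, y_i)^2)."""
    def d(x, y):
        return math.sqrt(sum(di(xi, yi) ** 2
                             for di, xi, yi in zip(metrics, x, y)))
    return d

d1 = lambda a, b: abs(a - b)               # standard metric on R
d2 = lambda a, b: 0.0 if a == b else 1.0   # discrete metric on {a, b, c}
d = product_metric([d1, d2])
```

Checking the metric axioms (identity, symmetry, triangle inequality) on random triples of points gives quick evidence that $d$ is indeed a metric on the product.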
The Algorithm for The Gauss-Seidel Iteration Method
We will now look at the algorithm for the Gauss-Seidel Iteration method for solving the system of equations $Ax = b$.
Let $A$ be an $n \times n$ matrix and let $b$ be an $n \times 1$ matrix. Suppose that a solution exists to the linear system $Ax = b$. Obtain an initial approximation $x^{(0)} = \begin{bmatrix} x_1^{(0)}\\ x_2^{(0)}\\ \vdots \\ x_n^{(0)}\\ \end{bmatrix}$ of the solution to this system, as well as a maximum number of iterations for the Gauss-Seidel method and a desired level of accuracy $\epsilon$.
For each $k = 1, 2, ...$ up to the maximum number of iterations prescribed:
Step 1: Obtain the approximations $x^{(k)}$ componentwise by:

\begin{align} \quad x_i^{(k)} = \frac{1}{a_{ii}} \left ( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \right ), \quad i = 1, 2, ..., n \end{align}
Step 2: Check the accuracy of the approximations. Check to see if:

\begin{align} \quad \frac{\| x^{(k)} - x^{(k-1)} \|}{\| x^{(k)} \|} < \epsilon \end{align}

If the above is true, then stop the iteration process; $x^{(k)}$ is an approximation of the solution with the desired accuracy. If not, then continue the iterations to obtain successive approximations of the solution. If after the maximum number of iterations the accuracy $\epsilon$ is not achieved, then stop the iteration process and print out a failure message.
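The steps above can be sketched as a minimal NumPy implementation (the stopping test is the relative change between sweeps, one reasonable reading of the accuracy check in Step 2):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=100):
    """Solve Ax = b by Gauss-Seidel iteration.

    A must have nonzero diagonal entries; convergence is guaranteed e.g.
    when A is strictly diagonally dominant."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s1 = A[i, :i] @ x[:i]              # components updated this sweep
            s2 = A[i, i + 1:] @ x_old[i + 1:]  # components from previous sweep
            x[i] = (b[i] - s1 - s2) / A[i, i]
        if np.linalg.norm(x - x_old) < tol * max(np.linalg.norm(x), 1.0):
            return x, k
    raise RuntimeError("failed to reach the desired accuracy "
                       "within the maximum number of iterations")
```

For a strictly diagonally dominant system the iterates converge to the solution of $Ax = b$ in a handful of sweeps.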
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism' - the way in which we match up these two objects, to see that they look the same.
For example, any two of these squares look the same after you rotate and/or reflect them:
An isomorphism between two of these squares is a process of rotating and/or reflecting the first so it looks just like the second.
As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse:
Definition. Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an inverse of \(f\) is a morphism \(g: y \to x\) such that

$$ g \circ f = 1_x $$

and

$$ f \circ g = 1_y . $$
I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\).
Puzzle 140. Prove that any morphism has at most one inverse. Puzzle 141. Give an example of a morphism in some category that has more than one left inverse. Puzzle 142. Give an example of a morphism in some category that has more than one right inverse.
Now we're ready for isomorphisms!
Definition. A morphism \(f : x \to y\) is an isomorphism if it has an inverse. Definition. Two objects \(x,y\) in a category \(\mathcal{C}\) are isomorphic if there exists an isomorphism \(f : x \to y\).
Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like!
What's an isomorphism in the category \(\mathbf{3}\)? Remember, this is a free category on a graph:
The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2:
$$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1:
$$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms:
$$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism!
In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism.
We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a preorder is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A poset is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\). Puzzle 143. Show that if a category \(\mathcal{C}\) is a preorder, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic. Puzzle 144. Show that if a category \(\mathcal{C}\) is a poset, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then both \(f\) and \(g\) are identity morphisms, so \(x = y\).
Puzzle 144 says that in a poset, the only isomorphisms are identities.
Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions.
Puzzle 145. Show that every isomorphism in \(\mathbf{Set}\) is a bijection, that is, a function that is one-to-one and onto. Puzzle 146. Show that every bijection is an isomorphism in \(\mathbf{Set}\).
So, in \(\mathbf{Set}\) the isomorphisms are the bijections! So, there are lots of them.
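For finite sets you can experiment with this concretely. In the Python sketch below (names mine), a function between finite sets is a dict, and the inverse — when it exists — is built by flipping pairs; composing with the inverse on either side gives the identity, exactly as in the definition of an isomorphism:

```python
def inverse(f):
    """Return the inverse of a finite function (a dict) if it is a
    bijection onto its set of values; raise if it is not injective."""
    g = {}
    for x, y in f.items():
        if y in g:
            raise ValueError("not injective, so no inverse exists")
        g[y] = x
    return g

f = {1: 'a', 2: 'b', 3: 'c'}   # a bijection from {1,2,3} to {'a','b','c'}
g = inverse(f)
# g o f = identity on {1,2,3}, and f o g = identity on {'a','b','c'}
```

A non-injective function such as `{1: 'a', 2: 'a'}` has no inverse, matching the fact that only bijections are isomorphisms in \(\mathbf{Set}\).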
One more example:
Definition. If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a natural isomorphism.
This name makes sense! The objects in the so-called 'functor category' \(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the isomorphisms deserve to be called 'natural isomorphisms'.
But what are they like?
Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism
$$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes:
Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism
$$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that
$$ \beta \circ \alpha = 1_F \quad \textrm{ and } \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means
$$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\).
In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\).
But the converse is true, too! It takes a little more work to prove, but not much. So, I'll leave it as a puzzle. Puzzle 147. Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism.
Doing this will help you understand natural isomorphisms. But you also need examples!
Puzzle 148. Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily equal!
We should talk about this. |
Find an extension $L/K$ of number fields with Galois group $G$ and respective rings of integers $O_L$ and $O_K$ for each of the following requirements:
The decomposition group $G_q$ of some prime ideal $q$ of $O_L$ over $p = q \cap O_K$ is not a normal subgroup of $G$.
$G=I_q\times I_{q'}$ is the direct product of two nontrivial inertia subgroups $I_q$ and $I_{q'}$, where $q, q'$ are prime ideals of $O_L$
The inertia group of $I_q$ is not cyclic for a prime ideal $q$ of $O_L$.
The only examples I know how to work with (like $\mathbb{Q}(i)$, $\mathbb{Q}(\sqrt[3]{2})$ or simple cyclotomic extensions) apparently are not enough for this exercise. Is there some strategy to find these examples?
Blinker ship 1
testitemqlstudop wrote: The front end moves at c/2, the back end moves at 12c/13:
Code: Select all
x = 79, y = 15, rule = B3/S23 10b4o$10bo3bo$10bo$b2o8bo$2ob2o$b4o3bo4bobo$2b2o3bo5bo2bo6bo5bo5bo5bo 5bo5bo5bo5bo5bo4b3o$6bo2bo6bo6bo5bo5bo5bo5bo5bo5bo5bo5bo4bobo$2b2o3bo 5bo2bo6bo5bo5bo5bo5bo5bo5bo5bo5bo4b3o$b4o3bo4bobo$2ob2o$b2o8bo$10bo$ 10bo3bo$10b4o!
Discord: Ian07#6028
testitemqlstudop wrote: 12c/13
That is actually 12c/26.
Things to work on:
- Find a (7,1)c/8 ship in a Non-totalistic rule
- Finish a rule with ships with period >= f_e_0(n) (in progress)
testitemqlstudop wrote: 12c/13
AforAmpere wrote: That is actually 12c/26.
Wow, that's actually close to being a c/2 in its own right.
EDIT: Crystal:
Code: Select all
x = 500, y = 502, rule = B3/S23485bo$484bobo$484bobo$485bo5$487bo$486bobo$481bo4bobo$480bobo4bo9b2o$479bo2bo13bo2bo$480b2o9b2o4b2o$490bo2bo$470bo20b2o$469bobo$469bobo$470bo$487b2o$487bobo$488bo2$472b2o$472b2o2$482b2o$481bo2bo$482b2o7$465bo$465b2o$464bobo14$449bo$449b2o$448bobo14$433bo$433b2o$432bobo14$417bo$417b2o$416bobo14$401bo$401b2o$400bobo14$385bo$385b2o$384bobo14$369bo$369b2o$368bobo14$353bo$353b2o$352bobo14$337bo$337b2o$336bobo14$321bo$321b2o$320bobo14$305bo$305b2o$304bobo14$289bo$289b2o$288bobo14$273bo$273b2o$272bobo14$257bo$257b2o$256bobo14$241bo$241b2o$240bobo14$225bo$225b2o$224bobo14$209bo$209b2o$208bobo14$193bo$193b2o$192bobo14$177bo$177b2o$176bobo14$161bo$161b2o$160bobo14$145bo$145b2o$144bobo14$129bo$129b2o$128bobo14$113bo$113b2o$112bobo14$97bo$97b2o$96bobo14$81bo$81b2o$80bobo14$65bo$65b2o$64bobo14$49bo$49b2o$48bobo14$33bo$33b2o$32bobo14$17bo$17b2o$16bobo14$bo$b2o$obo!
'Sir, I exist!'
'However,' replied the universe,
'The fact has not created in me
A sense of obligation.'" -Stephen Crane
Code: Select all
x = 14, y = 18, rule = B3/S2311bobo$10bo2bo$11bobo4$9b2o$9b2o3$3b3o$2o4bo$o2bo3bo$6bo$5bo$3bo$3bo$2b2o!
Code: Select all
x = 18, y = 11, rule = B3/S23bo$2bo$3o4$11bo$10b2o$11bo4bo$15bobo$16bo!
Code: Select all
x = 20, y = 20, rule = B3/S23
10b2o3b2o$10b2o3b2o$6bo$2o3b2o$2o2bo10bo$5bo3b2o2b2obo$5bo3bo6b2o2$2o$2o3bo3b2o2b2o$5b2o2b2o3bo3b2o$18b2o2$2b2o6bo3bo$3bob2o2b2o3bo$4bo10bo2b2o$13b2o3b2o$13bo$3b2o3b2o$3b2o3b2o!
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
I know, but what I realized was that the boats end up in the same place. Maybe not a coincidence. Also, wiki tags provide links to the wiki?
2718281828 wrote: A phi-spark + a domino spark give a copy of it plus two tubs (see gen. 22). That gave me an idea:
Code: Select all
x = 5, y = 8, rule = LifeHistory 3.2E2$2.A$.3A$A.A.A$A.A.A$.3A$2.A!
Code: Select all
x = 25, y = 50, rule = B3/S23
10b2o$10b2o4$15b2o2$14bo$13b3o$12bobobo$12bobobo$13b3o$14bo$2o$2o21$23b2o$23b2o$10bo$9b3o$8bobobo$8bobobo$9b3o$10bo2$8b2o4$13b2o$13b2o!
Moosey Posts:2493 Joined:January 27th, 2019, 5:54 pm Location:A house, or perhaps the OCA board.
Entity Valkyrie wrote: Is this B-to-TL known?
I can't verify, but I'm tempted to say yes -- it's a reaction between a B and a block.
Code: Select all
x = 7, y = 9, rule = B3/S23 5b2o$5b2o4$2o$b2o$2o$o!
Technically, though, as the B leaves its block, it's not a conduit. You could string it up to conduit 1, however, to get a true conduit.
Code: Select all
x = 6, y = 3, rule = Life
bo2bo$2b2o$6o!
If for some reason you needed to know, my PGP key ID is
DB271923.
Code: Select all
x = 22, y = 6, rule = LifeHistory
.2D16.2C$ACA15.C2.C$2.C15.C2.C$3.C$2.C.C14.2CD$3.C17.A!
Hdjensofjfnen wrote: Close to p3 but not really. Dim hope that it can be hassled.
Also:
Code: Select all
x = 22, y = 6, rule = LifeHistory .2D16.2C$ACA15.C2.C$2.C15.C2.C$3.C$2.C.C14.2CD$3.C17.A!
Code: Select all
x = 6, y = 6, rule = B3/S23
2o$obo$bo$3b3o2$4bo!
#C [[ STOP 3 ]]
Stabilised:
Code: Select all
x = 23, y = 12, rule = B3/S23
10b2o$10bobo$11bo$13b3o2$14bo5bo$12bo2bo2bo2bo$5b2o15bo$b2ob4ob2ob2ob4ob2o$o15b2o$bo2bo2bo2bo$2bo6bo!
The 70th NAI-ve guy is watching
No-one can trust Entity Valkyrie, except Dean Hickerson.
I will paste all the LifeWiki stuff in several RLEs, and never use LifeWiki again.
Code: Select all
x = 13, y = 5, rule = B3/S23
3o2$9b3o$9bo2bo$10b2o!
Code: Select all
x = 7, y = 10, rule = B3/S23
b2ob2o$bo3bo$2b3o2$7o$o2bo2bo2$2b3o$2bobo$2bobo!
Code: Select all
x = 7, y = 10, rule = B3/S23
3b2o$2bo2bo$2b3o2$7o$o2bo2bo3$2b3o$2bo2bo!
But it’s probably known.
Moosey wrote: Is this near-inaccessible pi eater known? I suppose it's not awful; pre-pis work.
This is Tanner's pi-catcher, used in his reduction of the period-45 glider gun. While the house induction coil achieves the smallest population, the symmetric variant shown in the wiki achieves the minimal bounding box.
Code: Select all
x = 7, y = 10, rule = B3/S23 b2ob2o$bo3bo$2b3o2$7o$o2bo2bo2$2b3o$2bobo$2bobo!
Those displayed on lifewiki fit differently; this might reduce some conduits.
Code: Select all
x = 7, y = 10, rule = B3/S23 3b2o$2bo2bo$2b3o2$7o$o2bo2bo3$2b3o$2bo2bo!
But it’s probably known.
Code: Select all
x = 95, y = 82, rule = B3/S2348b2o3b2o$46b3obo2b2o$45bo4bo$45bo2b2ob4o$44b2obobobo2bo$45bobobobo$45bobob2o$46bo2$59b2o$50bo8bo$49b3o5bobo$48b5o4b2o$47b2o3b2o$46b3o3b3o$47b2o3b2o$48b5o$49b3o$48b3o$47b2o$48bo$45b3o$45bo5$6b2o7bo7b2o$7bo7b3o5b2o$7bobo8bo$8b2o7b2o2$20bo$19bobo48bo$5bo12bo3bo46b2o$5b3o10bo3bo8b2ob2o33bobo$8bo9bo3bo8b2obo2bo$7b2o10bobo12bob2o$20bo13bo$33b2o$30bobo2b2o$2b2ob2o23b2o2bo2bo$o2bob2o28b2o$2obo$3bo86bo$3b2o67b2o14b5o$b2o2bobo65bo13bo5bo$o2bo2b2o65bobo12b3o2bo$b2o16b2o53b2o15bob2o$19bo68b4o2bo$13b2o5b3o7bo52b2o3bo3b2o$13b2o7bo8b2o50b2o4b3o$30b2o59bo$91bob2o$90b2ob2o2$72b2o$55b2o7bo8b2o7b2o$55b2o5b3o7bo9bo$61bo21b3o$61b2o22bo4$74bo$72b3o$71bo$71b2o3$62bo$61bobo9b2ob2o$60bo3bo8b2obo2bo$60bo3bo11bob2o$60bo3bo11bo$61bobo11b2o$62bo9bobo2b2o$72b2o2bo2bo$59b2o16b2o$60bo$57b3o5b2o$57bo7b2o!
Code: Select all
x = 20, y = 21, rule = B3/S23
15bo$15bo$15bo3$14b2o$14b2o$18b2o$14b2o2b2o$14b2o9$b2o$obo$2bo!
I am not sure the space has been fully explored to find clusters that are not exactly the same but can be used for later collisions with single gliders. Add catalysts, and eventually you get a stable reflector, but catalysts also take up diagonals. Anyway... I am trying to find something useful this way. Still trying.
Here are some collisions between gliders and natural clusters that produce a glider, a spaceship, and not much additional debris. I don't know if all or any are new. Is anyone else looking for these? They are not very useful but could be part of a slow synthesis.
MWSS + glider
Code: Select all
x = 11, y = 11, rule = B3/S23
6b3o2$3b2o$2bo2bo$3b2o4bo$8bobo$8bobo$9bo2$b2o$obo$2bo$!
Code: Select all
x = 16, y = 12, rule = B3/S23
13bo$13bobo$13b2o2$6b2o$5bo2bo$2o3bobo$2o4bo3$5bo$4bobo$4bobo$5bo$!
Code: Select all
x = 13, y = 10, rule = B3/S23
5b2o$2o3b2o$2o3$3b2o$2bo2bo$2bobo$3bo6b2o$10bobo$10bo$!
Code: Select all
x = 13, y = 10, rule = B3/S23
5b2o$2o3b2o$2o$10bo$10bobo$3b2o5b2o$2bo2bo$2bobo$3bo$!
Code: Select all
x = 13, y = 9, rule = B3/S23
11b2o$10bobo$6b2o3bo$5bobo$5b2o5$b2o$obo$2bo$!
Code: Select all
x = 17, y = 9, rule = B3/S23
11b3o$b2o4b2o$obo4bobo$2bo5bo3$15bo$14bobo$10b2o2bobo$10b2o3bo$!
pcallahan wrote: Here is a useless collision that could be thought of as glider bowling. I can't do much with that, but it is rare to find a collision that preserves all the components. The initial cluster is "natural" in the sense that I got it by generating small starting seeds. Before we had examples of stable reflectors, it was speculated that a reflector could be a magic glider-preserving self-restoring cluster (with the parts back in the right place, unlike this). It is very unlikely once you get beyond blocks and blinkers, and those don't work as reflectors.
Here's a similar reaction that produces a whole lot of extra junk:
Code: Select all
x = 20, y = 21, rule = B3/S23 15bo$15bo$15bo3$14b2o$14b2o$18b2o$14b2o2b2o$14b2o9$b2o$obo$2bo!
...
Code: Select all
x = 8, y = 11, rule = B3/S23
2o$2o3bo$5bo$5bo5$5b3o$5bo$6bo!
Code: Select all
x = 15, y = 13, rule = B3/S23
13bo$12bo$12b3o8$2o$2o4b2o$6b2o!
Moosey wrote: Here's a similar reaction that produces a whole lot of extra junk:
Well, the point is not to produce extra junk. If you could keep the block and blinker in the same place along with extra junk, that would be more interesting, though also useless.
Here's a possibly useful resettable glider shifter that I found around 1994 (first as far as I know) and forgot about for years.
Code: Select all
x = 81, y = 80, rule = B3/S23
2bo$obo$b2o67$72b2o5b2o$72b2o5b2o5$79b2o$79b2o$67b2o$66bobo$68bo!
Code: Select all
x = 19, y = 14, rule = B3/S23
$8b2o$8b2o5$15b2o$15b2o$3b2o$2bobo$4bo!
Code: Select all
x = 37, y = 35, rule = B3/S23
35b2o$31b2o2b2o$31b2o6$34b2o$34b2o13$12b2o$11bobo$13bo8$2b2o$bobo$3bo!
Code: Select all
x = 73, y = 12, rule = B3/S23
17b2o35b2o$16bo2bo33bo2bo$12bo4b2o4bo25bo4b2o4bo$12bobo6bobo25bobo6bobo$2o11bobo4bobo11b2ob2o11bobo4bobo11b2o$2o11bo2bo2bo2bo11b2ob2o11bo2bo2bo2bo11b2o$13bobo4bobo27bobo4bobo$12bobo6bobo25bobo6bobo$12bo10bo25bo10bo$17b2o$17b2o35b2o$54b2o!
1. Eats block
2. Slightly different stabilization involving a block
Is the usual \(\leq\) ordering on the set \(\mathbb{R}\) of real numbers a total order?
1. Reflexivity holds
2. For any \\( a, b, c \in \tt{R} \\) \\( a \le b \\) and \\( b \le c \\) implies \\( a \le c \\)
3. For any \\( a, b \in \tt{R} \\), \\( a \le b \\) and \\( b \le a \\) implies \\( a = b \\)
4. For any \\( a, b \in \\tt{R} \\), we have either \\( a \le b \\) or \\( b \le a \\)
So, yes.
Perhaps, due to our interest in things categorical, we can enjoy (instead of Cauchy sequence methods) to see the order of the (extended) real line as the [Dedekind-MacNeille_completion of the rationals](https://en.wikipedia.org/wiki/Dedekind%E2%80%93MacNeille_completion#Examples). Matthew has told us interesting things about it [before](https://forum.azimuthproject.org/discussion/comment/16714/#Comment_16714).
Hausdorff, on his part, in the book I mentioned [here](https://forum.azimuthproject.org/discussion/comment/16154/#Comment_16154), [says](https://books.google.es/books?id=M_skkA3r-QAC&pg=PA85&dq=each+everywhere+dense+type&hl=en&sa=X&ved=0ahUKEwjLkJao-9DaAhWD2SwKHVrkBcIQ6AEIKTAA#v=onepage&q=each%20everywhere%20dense%20type&f=false) that any total order, dense, and without \\( (\omega,\omega^*) \\) [gaps](https://en.wikipedia.org/wiki/Hausdorff_gap), has embedded the real line. I don't have a handy reference for an isomorphism instead of an embedding ("everywhere dense" just means dense here).
I believe the [hyperreal numbers](https://en.wikipedia.org/wiki/Hyperreal_number) give an example of a dense total order that embeds the reals without being isomorphic to it. (I can’t speak to the gaps condition though, and it’s just plausible that they’re isomorphic at the level of mere posets rather than ordered fields.)
That's an interesting question, Jonathan.
[Jonathan Castello](https://forum.azimuthproject.org/profile/2316/Jonathan%20Castello)
> I believe the hyperreal numbers give an example of a dense total order that embeds the reals without being isomorphic to it. (I can’t speak to the gaps condition though, and it’s just plausible that they’re isomorphic at the level of mere posets rather than ordered fields.)
In fact, while they are not isomorphic as lattices, they are in fact isomorphic as mere posets as you intuited.
First, we can observe that \\(|\mathbb{R}| = |^\ast \mathbb{R}|\\). This is because \\(^\ast \mathbb{R}\\) embeds \\(\mathbb{R}\\) and is constructed from countably infinitely many copies of \\(\mathbb{R}\\) by taking a [quotient algebra](https://en.wikipedia.org/wiki/Quotient_algebra) modulo a free ultrafilter. We have been talking about quotient algebras and filters in a couple other threads.
Next, observe that all [unbounded dense linear orders](https://en.wikipedia.org/wiki/Dense_order) of cardinality \\(\aleph_0\\) are isomorphic. This is due to a rather old theorem credited to George Cantor. Next, apply the [Morley categoricity theorem](https://en.wikipedia.org/wiki/Morley%27s_categoricity_theorem). From this we have that all unbounded dense linear orders with cardinality \\(\kappa \geq \aleph_0\\) are isomorphic. This is referred to in model theory as *\\(\kappa\\)-categoricity*.
Since the hyperreals and the reals have the same cardinality, they are isomorphic as unbounded dense linear orders.
**Puzzle MD 1:** Prove Cantor's theorem that all countable unbounded dense linear orders are isomorphic.
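Puzzle MD 1 is classically solved by Cantor's back-and-forth argument. As an editorial sketch (not from the thread; the enumerations and helper names are illustrative), here is a finite prefix of that construction in Python, building an order-preserving partial bijection between the rationals and the dyadic rationals — the "forth" steps exhaust one enumeration, the "back" steps the other:

```python
from fractions import Fraction as F

def gap(pairs, side, x):
    """Bounds the partner of x must respect, given the partial map `pairs`."""
    lo = [q if side == 0 else p for p, q in pairs if (p if side == 0 else q) < x]
    hi = [q if side == 0 else p for p, q in pairs if (p if side == 0 else q) > x]
    return (max(lo) if lo else None, min(hi) if hi else None)

def pick(lo, hi):
    """A point strictly inside the gap; midpoints keep dyadics dyadic."""
    if lo is None and hi is None:
        return F(0)
    if lo is None:
        return hi - 1
    if hi is None:
        return lo + 1
    return (lo + hi) / 2

# Enumerations of two countable unbounded dense orders: Q and the dyadics.
rationals = [F(n, d) * s for d in range(1, 6) for n in range(6) for s in (1, -1)]
dyadics = [F(n, 2**k) * s for k in range(4) for n in range(9) for s in (1, -1)]

pairs = []
for a, b in zip(rationals, dyadics):
    if all(p != a for p, _ in pairs):              # "forth": match a
        pairs.append((a, pick(*gap(pairs, 0, a))))
    if all(q != b for _, q in pairs):              # "back": match b
        pairs.append((pick(*gap(pairs, 1, b)), b))

# The finite partial map is an order isomorphism on its domain.
for p1, q1 in pairs:
    for p2, q2 in pairs:
        assert (p1 < p2) == (q1 < q2)
```

Density of both orders is what guarantees `pick` always finds a witness, which is exactly the crux of the full proof.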
Hi Matthew, nice application of the categoricity theorem! One question if I may. You said:
> In fact, while they are not isomorphic as lattices, they are in fact isomorphic as mere posets as you intuited.
But in my understanding the lattice and poset structure is inter-translatable as in [here](https://en.wikipedia.org/wiki/Lattice_(order)#Connection_between_the_two_definitions). Can two lattices be isomorphic and their associated posets not?
(EDIT: I clearly have no idea what I'm saying and I should probably take a nap. Disregard this post.)
> Can two lattices be isomorphic and their associated posets not?

If two lattices are isomorphic preserving *infima* and *suprema*, i.e. *limits*, then they are order isomorphic.
The reals and hyperreals provide a rather confusing counterexample to the converse. I am admittedly struggling with this myself, as it is highly non-constructive.
From model theory we have two maps \\(\phi : \mathbb{R} \to\, ^\ast \mathbb{R} \\) and \\(\psi :\, ^\ast\mathbb{R} \to \mathbb{R} \\) such that:
- if \\(x \leq_{\mathbb{R}} y\\) then \\(\phi(x) \leq_{^\ast \mathbb{R}} \phi(y)\\)
- if \\(p \leq_{^\ast \mathbb{R}} q\\) then \\(\psi(p) \leq_{\mathbb{R}} \psi(q)\\)
- \\(\psi(\phi(x)) = x\\) and \\(\phi(\psi(p)) = p\\)

Now consider \\(\\{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \\}\\).
The hyperreals famously violate the [Archimedean property](https://en.wikipedia.org/wiki/Archimedean_property). Because of this \\(\bigwedge_{^\ast \mathbb{R}} \\{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \\}\\) does not exist.
On the other hand, if we consider \\( \bigwedge_{\mathbb{R}} \\{ \psi(x) \, : \, x \in \mathbb{R} \text{ and } 0 < x \\}\\), that *does* exist by the completeness of the real numbers (as it is bounded below by \\(\psi(0)\\)).
Hence
$$
\bigwedge_{\mathbb{R}} \\{ \psi(x) \, : \, x \in \mathbb{R} \text{ and } 0 < x \\} \neq \psi\left(\bigwedge_{^\ast\mathbb{R}} \\{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \\}\right)
$$
So \\(\psi\\) *cannot* be a complete lattice homomorphism, even though it is part of an order isomorphism.
However, just to complicate matters, I believe that \\(\phi\\) and \\(\psi\\) are a mere *lattice* isomorphism, preserving finite meets and joins.
Mathematical proof
There are different ways of proving a mathematical theorem.
Proof by Induction
One type of proof is called proof by induction. This is usually used to prove a theorem that is true for all numbers. There are 4 steps in a proof by induction.
1. State that the proof will be by induction, and state which variable will be used in the induction step.
2. Prove that the statement is true for some beginning case.
3. Assume that for some value n = n₀ the statement is true and has all of the properties listed in the statement. This is called the induction hypothesis.
4. Show that the statement is true for the next value, n₀ + 1.
Once that is shown, it means that for any value of n that is picked, the statement is true for the next one. Since it's true for some beginning case (usually n = 1), it's true for the next one (n = 2). And since it's true for 2, it must be true for 3. And since it's true for 3, it must be true for 4, and so on. Induction shows that it is always true, precisely because it's true for whatever comes after any given number.
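As an editorial aside, identities of the kind proved by induction can also be spot-checked by computer. Here the sum identity used in this article's worked example, 2(1 + 2 + ... + n) = n(n+1), is verified for the first hundred values of n:

```python
# The running total mirrors the induction: each pass extends the base case
# n = 1 by one step, n -> n+1, and the identity must keep holding.
total = 0
for n in range(1, 101):
    total += n                      # total = 1 + 2 + ... + n
    assert 2 * total == n * (n + 1)
```

A check like this is not a proof, but a failure would immediately refute the claimed identity.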
An example of proof by induction:
Prove that for all natural numbers n, 2(1 + 2 + 3 + ... + (n-1) + n) = n(n+1).
Proof: First, the statement can be written as "for all natural numbers n, 2[math]\sum_{k=1}^n k[/math] = n(n+1)".
By induction on n:
First, for n = 1: 2[math]\sum_{k=1}^1 k[/math] = 2(1) = 1(1+1), so this is true.
Next, assume that for some n = n₀ the statement is true. That is, 2[math]\sum_{k=1}^{n_0} k[/math] = n₀(n₀+1).
Then for n = n₀+1, 2[math]\sum_{k=1}^{n_0+1} k[/math] can be rewritten as 2(n₀+1) + 2[math]\sum_{k=1}^{n_0} k[/math].
Since 2[math]\sum_{k=1}^{n_0} k[/math] = n₀(n₀+1), this equals 2(n₀+1) + n₀(n₀+1) = (n₀+1)(n₀+2) = (n₀+1)((n₀+1)+1), which is the statement for n = n₀+1.
Proof by Contradiction
Proof by contradiction is a way of proving a mathematical theorem by assuming the statement is false and showing that this assumption leads to a logical contradiction. That is, if one of the results of the theorem is assumed to be false, the rest of the proof cannot hold.
When proving a theorem by way of contradiction, it is important to note that in the beginning of the proof. This is usually abbreviated BWOC. When the contradiction appears in the proof, there is usually an X made with 4 lines instead of 2 placed next to that line.
Other Examples of Proofs
Prove that [math]x=\frac{-b\pm\sqrt{b^2-4ac\ }}{2a}\![/math] follows from [math]ax^2+bx+c=0\![/math].
This proof uses the identity [math]x^2+2xy+y^2 = (x+y)^2\![/math].
Dividing the quadratic equation [math]ax^2+bx+c=0\![/math] by a (which is allowed because a is non-zero) gives:
[math]x^2 + \frac{b}{a} x + \frac{c}{a}=0\![/math]
or
[math]x^2 + \frac{b}{a} x= -\frac{c}{a} \qquad (1)\![/math]
The quadratic equation is now in a form in which completing the square can be done. To "complete the square" is to find some number k so that
[math]x^2 + \frac{b}{a}x + k = x^2+2xy+y^2\![/math]
for another number y. In order for these equations to be true, [math]y = \frac{b}{2a}\![/math] and [math]k = y^2\![/math], so [math] k = \frac{b^2}{4a^2}\![/math].
Adding this number to equation (1) makes
[math]x^2+\frac{b}{a}x+\frac{b^2}{4a^2}=-\frac{c}{a}+\frac{b^2}{4a^2}\![/math].
The left side is now a perfect square because
[math]x^2+\frac{b}{a}x+\frac{b^2}{4a^2} = \left( x + \frac{b}{2a} \right)^2\![/math]
The right side can be written as a single fraction with common denominator 4a²:
[math]\left(x+\frac{b}{2a}\right)^2=\frac{b^2-4ac}{4a^2}\![/math].
Taking the square root of both sides gives
[math]\left|x+\frac{b}{2a}\right| = \frac{\sqrt{b^2-4ac\ }}{|2a|}\Rightarrow x+\frac{b}{2a}=\pm\frac{\sqrt{b^2-4ac\ }}{2a}\![/math].
Getting x by itself gives
[math]x=-\frac{b}{2a}\pm\frac{\sqrt{b^2-4ac\ }}{2a}=\frac{-b\pm\sqrt{b^2-4ac\ }}{2a}\![/math].
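As an editorial sanity check of the derivation, the formula's outputs can be substituted back into the original equation (assuming a ≠ 0 and a non-negative discriminant):

```python
import math

# Roots from the derived formula x = (-b ± sqrt(b^2 - 4ac)) / (2a).
def quadratic_roots(a, b, c):
    r = math.sqrt(b * b - 4 * a * c)
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))

# Each root must satisfy a*x^2 + b*x + c = 0 (up to rounding).
for a, b, c in [(1, -3, 2), (2, 5, -3), (1, 2, 1)]:
    for x in quadratic_roots(a, b, c):
        assert abs(a * x * x + b * x + c) < 1e-9
```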
Let $f: (M, d) \rightarrow (N, \rho)$ be uniformly continuous. Prove or disprove that if M is complete, then $f(M)$ is complete.
If I am asking a previously posted question, please accept my apologies and tell me to bugger off. I saw a similar problem but the solution was dealing with a Bi-Lipschitz function or some such business.
I believe this statement to be true and here is a rough sketch of my reasoning:
Since $f$ is uniformly continuous, $f$ maps Cauchy sequences to Cauchy sequences. Let $(x_n)$ be a Cauchy sequence in $M$. Since $M$ is complete, $x_n \rightarrow x \in M$. Again, by $f$'s uniform continuity, $(f(x_n))$ is Cauchy in $N$ and $f(x_n) \rightarrow f(x) \in f(M)$. Thus $f(M)$ is complete.
By the way, I am studying for an exam. This is certainly not homework. I gladly accept your criticisms. Thank you in advance for your help.
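As an editorial probe (not part of the original question): the sketch above starts from a Cauchy sequence in $M$, but completeness of $f(M)$ requires handling arbitrary Cauchy sequences in $f(M)$, whose preimages need not be Cauchy. Numerically, $f = \arctan$ on the complete space $\mathbb{R}$ illustrates the worry:

```python
import math

# arctan is uniformly continuous on R (its derivative is bounded by 1), and
# its image is (-pi/2, pi/2). The image points below come from the wildly
# non-Cauchy inputs 10, 100, ..., yet they bunch up toward pi/2.
seq = [math.atan(10 ** k) for k in range(1, 8)]
gaps = [seq[i + 1] - seq[i] for i in range(len(seq) - 1)]
assert all(0 < g2 < g1 for g1, g2 in zip(gaps, gaps[1:]))  # gaps shrink: Cauchy-like
assert all(s < math.pi / 2 for s in seq)                   # but pi/2 is never attained
```

Whether this settles the prove-or-disprove is left open here; the point is only that the proof sketch's starting assumption may not cover every Cauchy sequence in $f(M)$.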
To better help you understand the actual usage of the query word (or its translation) in idiomatic English, we have prepared a large number of English example sentences taken from original English texts for your reference.
Our results show that STUD-interval and ABC-interval are both second-order correct.
For von Mises population M(μ,k), we find that six bootstrap confidence intervals are second-order correct like the approximate normal confidence interval, and STUD-interval is third-order correct.
In the note the author gives a correct proof of the theorem on sunsets in spaces of bounded linear operators given in the author's another paper.
Compared with the serial mode, the experimental results of the post synthesized simulation show that the design method is correct and effective.
Based on the sampling data, a correct approach to the boundary effect on aggregation index was put forward to the spatial pattern analysis of the broad-leaved Korean pine forest in its different stages of development.
Of special interest are the Mellin operators of differentiation and integration, more correctly of anti-differentiation, enabling one to establish the fundamental theorem of the differential and integral calculus in the Mellin frame.
The effectiveness of accounting correctly for the geometry of the sphere in the wavelet analysis of full-sky CMB data is demonstrated by the highly significant detections of physical processes and effects that are made in these reviewed works.
The analyzed results from the measured data verify that with this new method the target can be detected correctly from wide Doppler spectrum.
The experimental results show that the method not only can correctly detect the fuzzy edge and exiguous edge but can evidently improve the searching efficiency of fuzzy clustering algorithm based on IEA.
The simulation results indicate that the dynamical sliding-mode controller can not only track the given trajectory correctly but also reduce the chattering of sliding-mode control system considerably.
We study certain naturally-defined analytic domains in the complexified group HC which are invariant under left and right translation by H?.
Xi defined a partition of Wf into canonical right cells and the right order ≤R on the set of cells.
The bases are necessarily generalized Haar functions and the partial sums are a martingale closed on the right by f.
More precisely, we study the existence of \varepsilon } {f(y)} K(x,y)\left( {1 - \frac{\varepsilon }{{\left| {x - y} \right|}}} \right)^\alpha dy a.e.$$
The Ces\'aro operator $\mathcal{C}_{\alpha}$ is defined by \begin{equation*} (\mathcal{C}_{\alpha}f)(x) = \int_{0}^{1}t^{-1}f\left( t^{-1}x \right)\alpha (1-t)^{\alpha -1}\,dt~, \end{equation*} where $f$ denotes a function on $\mathbb{R}$.
更多
Let Ro and R1 be two Kempf-Ness sets arising from moment maps induced by strictly plurisubharmonic, K-invariant, proper functions.
It is shown here that there are proper subsets of ${\cal M}$ that also form a complete set of translation invariants, and these subsets are characterized.
The existence and uniqueness of optional and predictable projections of setvalued measurable processes are proved under proper circumstances.
Benson proper efficiency in the nearly cone-subconvexlike vector optimization with set-valued functions
Under the assumption of nearly cone-subconvexlikeness, a Lagrangian multiplier theorem on Benson proper efficiency is presented.
I have the same problem as the guy in this question, but in this case the input voltage varies from 20 volts to 28 volts.
I would like a current of 20A to flow through the mosfet without it getting really hot. Which mosfet should I use?
Select a MOSFET that can block the maximum voltage
In the transistor's datasheet, usually under the section absolute maximum ratings, will be \$V_{DSS}\$ (drain-source voltage). If the voltage from the drain to the source exceeds this, the MOSFET will probably be damaged. So, calculate the maximum voltage that could be experienced in your circuit, considering also the possibility of switching transients, then add a margin of at least 20% for robustness.
Look for a MOSFET (using the parametric search tool of your vendor or manufacturer) with \$V_{DSS}\$ around this value. A device with \$V_{DSS}\$ higher than you need will work fine, but will probably be more expensive, slower to switch, or less efficient than one with a lower (but not too low!) \$V_{DSS}\$.
Calculate resistive losses
The MOSFET, when on, looks much like a resistor. The datasheet will specify \$R_{DS(on)}\$, the resistance between drain and source when the MOSFET is all the way on. Determine the maximum current your MOSFET will have to pass, and you can calculate the resistive losses in the MOSFET just as you would for a resistor:
$$ P = I^2 R_{DS(on)} $$
Calculate switching losses
MOSFETs (all transistors, really) take time to switch. During this time, there will be high current and high voltage in the MOSFET simultaneously, which means high losses in the MOSFET (\$P=IE\$) for that brief period. If you are not doing PWM or similar, then the time you spend switching relative to the time you spend on or off will be very small, and switching losses will be negligible. If this is not the case, then you must consider switching losses in your calculations. That's enough for another question, so either don't do PWM, or add a healthy margin to your power calculations (say, 50%) to be safe.
Calculate junction temperature
You now have a number that represents the power dissipation in the transistor, in watts. Compare this against the maximum power dissipation in the absolute maximum ratings. If you have exceeded this, you can't use this MOSFET no matter how big your heatsink is. Otherwise, you can use this MOSFET, but you may need a heatsink.
Rule of thumb: If it's less than \$1W\$, and your device is in a TO-220 package, you are probably good. If you want to be robust and you want to be able to touch the thing without getting a burn, you will want less than \$0.5W\$, or add a small heatsink.
You must absolutely not exceed the maximum junction temperature listed in absolute maximum ratings. It's probably in the neighborhood of \$175^\circ C\$. The junction temperature is a function of the ambient temperature, the power dissipation, and the thermal resistance from the junction to ambient.
You know the ambient temperature (look at a thermometer) and the power dissipation (you already calculated). To calculate the total thermal resistance, add the thermal resistance of all the things between the junction and ambient. If you plan to operate with no heatsink at all, the datasheet probably lists \$R_{\theta JA}\$ or junction-to-ambient thermal resistance. If you do use a heatsink, that heatsink's datasheet will specify its thermal resistance. Add to this the junction-to-case thermal resistance from the transistor datasheet, and also the thermal resistance for the interface between the heatsink and the transistor case (typical values usually in the transistor datasheet; for a TO-220 with thermal grease, \$0.5^\circ C/W\$ is typical).
So now you have your total thermal resistance \$R_\theta\$ in \$^\circ C/W\$, your total power dissipation in watts, and your maximum junction temperature \$T_{J(max)}\$ and ambient temperature \$T_A\$ in \$^\circ C\$. Your transistor will not be destroyed if:
$$ T_{J(max)} > T_A + P \cdot R_\theta $$
If that's true, you are good. Otherwise, get a bigger heatsink, a transistor with a lower \$R_{DS(on)}\$, reduce the ambient temperature, or reduce the current in the transistor.
Example
Let's do these calculations with FQP50N06. No particular reason other than I had the datasheet on my desktop.
Maximum \$V_{DSS}\$ is 60V. This is safely above the 28V in your circuit.
Maximum \$I_D\$ is 35.4A. This is safely above the 20A in your circuit.
The current will be 20A and the datasheet says \$R_{DS(on)}\$ could be as high as \$0.022\Omega\$. That means my power dissipation will be
$$(20\:\mathrm A)^2 \cdot 0.022\:\Omega = 8.8\:\mathrm W $$
I'm going to assume you are not doing PWM, so switching losses are negligible.
Can I run this without a heatsink? Let's say I want it to work with ambient temperatures as high as \$40^\circ C\$. The datasheet says the junction-to-ambient thermal resistance is \$62.5^\circ \mathrm C/\mathrm W\$. So, the junction temperature will be:
$$ 40^\circ \mathrm C + 8.8\:\mathrm W \cdot 62.5^\circ \mathrm C/\mathrm W = 590^\circ \mathrm C $$
This is well above the maximum junction temperature specified in the datasheet, \$175^\circ \mathrm C\$, so you have a problem. You could solve that by using a larger heatsink, which reduces the thermal resistance. Or you could find a transistor with a lower \$R_{DS(on)}\$, which will reduce the power the heatsink must dissipate.
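The selection arithmetic can be scripted for reuse. This sketch recomputes the worked example using only numbers quoted above (20 A, 22 mΩ, 40 °C ambient, 62.5 °C/W junction-to-ambient) and considers conduction losses only:

```python
# Conduction loss: P = I^2 * R_DS(on)
def conduction_loss_w(i_a, r_ds_on_ohm):
    return i_a ** 2 * r_ds_on_ohm

# Junction temperature: T_J = T_A + P * R_theta
def junction_temp_c(t_ambient_c, p_w, r_theta_c_per_w):
    return t_ambient_c + p_w * r_theta_c_per_w

p = conduction_loss_w(20, 0.022)       # 8.8 W at 20 A through 22 mOhm
tj = junction_temp_c(40, p, 62.5)      # no heatsink: R_theta_JA = 62.5 C/W
assert tj > 175                        # ~590 C, far beyond Tj(max) = 175 C
```

Swapping in a heatsink's thermal resistance for `r_theta_c_per_w` answers the "how big a heatsink" question the same way.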
As part of a problem-set I'm self-studying, I'm trying to interpret the graph of $f(x)=\frac{ax+b}{cx+d}$ as a transformation of the graph of $y=\frac{1}{x}$, including determining what restrictions should be placed on $a, b, c,$ and $d$ for the interpretation to remain valid.
For context, the preceding problem was as follows: Sketch the graph of $f(x)=\frac{3x-7}{x-2}$ by applying transformations to the graph of $y=\frac{1}{x}$. We used polynomial long division to show $f(x)=3-\frac{1}{x-2}$, yielding the following transformations: a horizontal shift right by $2$ units, a reflection across the $x$-axis, and a vertical shift up by $3$ units.
Applying the same approach to $f(x)=\frac{ax+b}{cx+d}$, I did some long division and came up with, $$f(x)=\frac{a}{c}+\frac{bc-ad}{c(cx+d)}=\frac{a}{c}+\left(\frac{bc-ad}{c}\right)\left(\frac{1}{cx+d}\right)$$
I'm not totally confident in my interpretation of the transformations involved, but here's what I've got:
Clearly, we must have $c\ne 0$, and the first transformation of $y=\frac{1}{x}$ is $y=(\frac{1}{c})(\frac{1}{x})=\frac{1}{cx}$, which could be viewed as either a vertical or horizontal scaling. If $c>1$, it shrinks the graph, compressing by a factor of $c$. If $0<c<1$, it grows by a factor of $\frac{1}{c}$ (again, either horizontally or vertically; for this graph the two are equivalent). Trivially, if $c=1$, it remains as is. A similar trio of cases for negative values of $c$ would yield similar results together with a reflection (and like the scaling, the reflection could be applied either vertically or horizontally).
I suspect it makes more sense to interpret the above as a horizontal transformation, which can then be followed by a horizontal shift of $\frac{d}{c}$ units, yielding $y=\frac{1}{c(x+d/c)}=\frac{1}{cx+d}$. If $\frac{d}{c}>0$, everything shifts to the left, if $\frac{d}{c}<0$ it shifts to the right, and if $d=0$ it all stays put.
The next transformation scales everything vertically by $\frac{bc-ad}{c}$, and the final transformation applies a vertical shift of $\frac{a}{c}$.
Does that sound about right? I didn't come up with any restrictions on $a,b,c,$ and $d$, beyond $c\ne 0$. I feel a bit less than competent when handling horizontal graph transformations in general. Dealing with shifts is easy enough, and I can handle simple scalings, but when they all combine, and possibly involve reflections as well, I start struggling to keep my head above water. |
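One way to gain confidence in the long-division identity itself (separate from the transformation interpretation) is to compare the two forms numerically; a sketch using the $a=3$, $b=-7$, $c=1$, $d=-2$ values from the preceding problem:

```python
# Numerical check of the identity f(x) = a/c + ((bc - ad)/c) * 1/(cx + d),
# using the concrete coefficients from the earlier problem.
a, b, c, d = 3.0, -7.0, 1.0, -2.0

def f(x):
    return (a * x + b) / (c * x + d)

def g(x):
    # a/c + ((bc - ad)/c) * (1 / (cx + d))
    return a / c + ((b * c - a * d) / c) * (1.0 / (c * x + d))

for x in [-5.0, 0.0, 1.0, 3.0, 10.0]:   # avoid the pole at x = -d/c = 2
    assert abs(f(x) - g(x)) < 1e-12
print("identity holds at all sampled points")
```

Any other choice of coefficients with $c\ne 0$ passes the same check, which supports the claim that $c\ne 0$ is the only restriction needed for the decomposition.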
Search
Now showing items 1-1 of 1
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
How does $E=mc^2$ put an upper limit on the velocity of a body? I have read some articles on the speed of light and they just tell me that it is the maximum velocity that can be acquired by any particle. How is it so? What is violated if $v>c$?
$E_0 = m_0 c^2$ is only the equation for the "rest energy" of a particle/object.
The full equation for the kinetic energy of a moving particle is actually:
$$E-E_0 = \gamma m_0c^2 - m_0c^2,$$ where $\gamma$ is defined as $\gamma = \frac{1}{\sqrt{1 - (v/c)^2}}, $
where $v$ is the relative velocity of the particle.
An "intuitive" answer to the question can be seen by noticing that the particle's energy approaches $\infty$ as its velocity approaches the speed of light. Thus, in order for the particle to move faster than the speed of light would require it to attain infinite kinetic energy, which can't happen.
To complete bclifford's answer, our current equation for the energy-momentum of a particle is $E^2=p^2c^2+m^2c^4$, which for a massive particle reduces to $E=\gamma mc^2$, where $\gamma$ is the Lorentz factor obtained from his transformations.
Hence, for a particle like the photon, with $m=0$, this equation yields $E=pc$, which says that the photon carries momentum.
For particles at rest, $p=0$ which gives the rest energy $mc^2$ of the massive object.
The great usefulness of this equation is that it forbids massive objects from being accelerated to $c$, as that would require infinite energy; it requires massless particles to travel at $c$ always; and it requires hypothetical faster-than-light particles to travel above $c$ always...
You can consider $pc$ and $mc^2$ as the opposite and adjacent sides of a right-angled triangle and the energy along the hypotenuse. No matter how fast a massive object moves, the hypotenuse is always greater than the other two sides, (i.e.) it can never quite reach $c$...
Just to visualise the other answers, here is a plot of the kinetic energy of a body in relativistic and nonrelativistic mechanics (note the log scale on the vertical axis):
You can see that as the speed approaches the speed of light the energy required according to special relativity shoots up compared to what nonrelativistic mechanics would say. It requires an infinite amount of energy for any massive body to reach the speed of light.
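The blow-up near $v=c$ described above is easy to reproduce numerically; a sketch comparing relativistic kinetic energy $(\gamma-1)mc^2$ with the Newtonian $\frac{1}{2}mv^2$ (the ratio is independent of $m$ and $c$, so no units are assumed):

```python
import math

def kinetic_energy_ratio(beta):
    """Ratio of relativistic KE, (gamma - 1) m c^2, to Newtonian KE,
    (1/2) m v^2, at speed v = beta * c. Mass and c cancel out."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) / (0.5 * beta ** 2)

for beta in [0.1, 0.5, 0.9, 0.99, 0.999]:
    print(f"v/c = {beta}: E_rel / E_newton = {kinetic_energy_ratio(beta):.3f}")
```

At low speeds the ratio is close to 1 (Newtonian mechanics is a good approximation), and it diverges as $v/c \to 1$, which is the plot's message in numbers.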
It doesn't. The equation $E = mc^2$ and the fact that no physical object can be accelerated past the speed of light are two entirely separate conclusions of special relativity.
The reason $c$ is an upper bound on the speed of an object has to do with the Lorentz transformations. These are the mathematical expressions that relate positions and times as measured by one observer to positions and times as measured by another observer. Now, suppose an object starts at rest with respect to observer A, and then accelerates until it is at rest with respect to observer B, which is moving at a speed $v$ relative to A. There has to be some Lorentz transformation you can use to convert between A's measurements and B's measurements, or equivalently, between the reference frame of the object pre-acceleration and its reference frame post-acceleration. But there is no Lorentz transformation that will take you from a reference frame in which an object is going slower than light to a reference frame where the same object is going faster than light or at the speed of light.
(Technically, that argument is a little hand-wavy, but it should get the main point across.) |
I am aware that there are a couple of well-known proofs of this theorem, but I'm specifically grappling with the proof given in Fraleigh's
A First Course in Abstract Algebra (Theorem 9.15 in the textbook).
Let $s$ be a permutation in the symmetric group of degree $n$, and let $t$ be a transposition $(i,j)$ in the same group. If $n$ is $1$ or infinite, we are done. Otherwise, ....[details of the proof omitted.] (We use the right-to-left convention to multiply permutations.)
Okay, we have shown that the number of orbits of $s$ and $ts$ differ by 1. This part, I understand. But I don't understand how to infer the theorem from here. I would be very grateful if someone can help me clear my blind spot. Thank you so much!
Added by Dylan. Here is Fraleigh's explanation (please don't sue me):
We have shown that the number of orbits of $\tau \sigma$ differs from the number of orbits of $\sigma$ by $1$. The identity permutation $\iota$ has $n$ orbits, because each element is the only member of its orbit. Now the number of orbits of a given permutation $\sigma \in S_n$ differs from $n$ by either an even or odd number, but not both. Thus it is impossible to write $$ \sigma = \tau_1 \tau_2 \cdots \tau_m \iota $$ where the $\tau_k$ are transpositions, in two ways, once with $m$ even and once with $m$ odd. $\qquad \diamond$ |
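The orbit-counting argument above can be illustrated computationally; a sketch using zero-indexed permutations (`orbit_count` is my own helper, not Fraleigh's notation):

```python
def orbit_count(perm):
    """Count the orbits (cycles, fixed points included) of a permutation
    of {0, ..., n-1} given as a list with perm[i] = image of i."""
    n = len(perm)
    seen = [False] * n
    count = 0
    for i in range(n):
        if not seen[i]:
            count += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return count

n = 5
identity = list(range(n))     # the identity has n orbits
swap01 = [1, 0, 2, 3, 4]      # one transposition: n - 1 orbits
print(orbit_count(identity))  # 5
print(orbit_count(swap01))    # 4
# Each further transposition changes the orbit count by exactly 1, so the
# parity of (n - orbit count) is an invariant of the permutation, matching
# the parity of any transposition decomposition.
```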
Talk:Absolute continuity
Could I suggest using $\lambda$ rather than $\mathcal L$ for Lebesgue measure, since:
- it is very commonly used, almost standard
- it would be consistent with the notation for a general measure, $\mu$
- calligraphic is being used already for $\sigma$-algebras
--Jjg 12:57, 30 July 2012 (CEST)
Between metric setting and References I would like to type the following lines. But for some reason which is mysterious to me, any time I try the page comes out a mess... Camillo 10:45, 10 August 2012 (CEST)
if for every $\varepsilon > 0$ there is a $\delta > 0$ such that, for any $a_1<b_1<a_2<b_2<\ldots < a_n<b_n \in I$ with $\sum_i |a_i -b_i| <\delta$, we have \[ \sum_i d (f (b_i), f(a_i)) <\varepsilon\, . \] Absolute continuity guarantees uniform continuity. As for real valued functions, there is a characterization through an appropriate notion of derivative.

Theorem 1. A continuous function $f$ is absolutely continuous if and only if there is a function $g\in L^1_{loc} (I, \mathbb R)$ such that \begin{equation}\label{e:metric} d (f(b), f(a))\leq \int_a^b g(t)\, dt \qquad \forall a<b\in I\, \end{equation} (cp. with ). This theorem motivates the following

Definition 2. If $f:I\to X$ is absolutely continuous and $I$ is compact, the metric derivative of $f$ is the function $g\in L^1$ with the smallest $L^1$ norm such that \eqref{e:metric} holds (cp. with ).

OK, I found a way around. But there must be some bug: it seems that whenever I write the symbol "bigger" then things get messed up (now even on THIS page). Camillo 10:57, 10 August 2012 (CEST)

But I did not understand what is the problem. Messed up? On this page? Where? And what was the way around? --Boris Tsirelson 13:06, 10 August 2012 (CEST)

How to Cite This Entry:
Absolute continuity.
Encyclopedia of Mathematics.URL: http://www.encyclopediaofmath.org/index.php?title=Absolute_continuity&oldid=27473 |
Search
Now showing items 1-9 of 9
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV
(Springer, 2012-10)
The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV
(Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...
Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...
Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at root s=7 TeV
(Springer-verlag, 2012-11)
The ALICE experiment at the LHC has studied J/$\psi$ production at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity $L_{int}$ = 5.6 nb$^{-1}$. The fraction ...
Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV
(Springer, 2012-09)
The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ...
Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ...
Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-03)
The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...
Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV
(American Physical Society, 2012-12)
The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ... |
Fundamental Sets of Solutions to a Linear Homogeneous System of First Order ODEs
Definition: A Fundamental Set of Solutions to the linear homogeneous system of first order ODEs $\mathbf{x}' = A(t) \mathbf{x}$ on $J = (a, b)$ is a set $\{ \phi^{[1]}, \phi^{[2]}, ..., \phi^{[n]} \}$ of linearly independent solutions to this system on $J$.
For example, consider the following linear homogeneous system of $2$ first order ODEs:

\begin{align} \quad x_1' = x_1 \quad , \quad x_2' = 2x_2 \end{align}
We can easily solve this system. For the first differential equation:

\begin{align} \quad \frac{dx_1}{dt} = x_1 \implies \int \frac{dx_1}{x_1} = \int dt \implies \ln |x_1| = t + C \implies x_1 = C_1 e^t \end{align}
Where $C_1 = e^C > 0$.
For the second differential equation:

\begin{align} \quad \frac{dx_2}{dt} = 2x_2 \implies \int \frac{dx_2}{x_2} = \int 2 \: dt \implies \ln |x_2| = 2t + C \implies x_2 = C_2 e^{2t} \end{align}
Where $C_2 = e^C > 0$.
Note that in fact $C_1, C_2$ can be any real numbers; we are not restricted to $C_1, C_2 > 0$.
Now by taking $C_1 = 1$, $C_2 = 0$ we get that $\phi^{[1]} = \begin{bmatrix} e^t\\ 0 \end{bmatrix}$ is a solution to this system. Also, by taking $C_1 = 0$ and $C_2 = 1$ we get that $\phi^{[2]} = \begin{bmatrix} 0\\ e^{2t} \end{bmatrix}$ is a solution to this system.
We will now show that $\{ \phi^{[1]}, \phi^{[2]} \}$ is a fundamental set of solutions to this system on all of $\mathbb{R}$. Let $\alpha, \beta \in \mathbb{R}$ and consider the following equation:

\begin{align} \quad \alpha \phi^{[1]} + \beta \phi^{[2]} = \begin{bmatrix} \alpha e^t \\ \beta e^{2t} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \end{align}
The equation above implies that $\alpha e^t = 0$ and $\beta e^{2t} = 0$ for all $t \in \mathbb{R}$. Since $e^t, e^{2t} > 0$ for all $t \in \mathbb{R}$ this implies that $\alpha, \beta = 0$. So $\{ \phi^{[1]}, \phi^{[2]} \}$ is a linearly independent set of solutions to this system and so $\{ \phi^{[1]}, \phi^{[2]} \}$ is a fundamental set of solutions to this system on $\mathbb{R}$. |
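Assuming the system is $x_1' = x_1$, $x_2' = 2x_2$ (consistent with the solutions $e^t$ and $e^{2t}$ above), the two claims — that $\phi^{[1]}, \phi^{[2]}$ solve the system and that they are linearly independent — can be checked numerically; a sketch:

```python
import math

# Assumed system (inferred from the solutions above): x1' = x1, x2' = 2*x2.
def phi1(t):
    return (math.exp(t), 0.0)

def phi2(t):
    return (0.0, math.exp(2.0 * t))

def wronskian(t):
    a, b = phi1(t)
    c, d = phi2(t)
    return a * d - b * c  # det of the solution matrix [phi1 | phi2]

h = 1e-6
for t in [-1.0, 0.0, 2.0]:
    # central-difference checks that each phi solves the system
    d1 = (phi1(t + h)[0] - phi1(t - h)[0]) / (2.0 * h)
    assert abs(d1 - phi1(t)[0]) < 1e-4        # x1' = x1
    d2 = (phi2(t + h)[1] - phi2(t - h)[1]) / (2.0 * h)
    assert abs(d2 - 2.0 * phi2(t)[1]) < 1e-4  # x2' = 2*x2
    assert wronskian(t) > 0.0                 # equals e^{3t}, never zero
print("phi1, phi2 form a fundamental set on R")
```

A nonvanishing Wronskian at every $t$ is the numerical counterpart of the linear-independence argument in the text.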
Homeomorphisms on Topological Spaces Examples 1
Recall from the Homeomorphisms on Topological Spaces page that if $X$ and $Y$ are topological spaces then a bijective map $f : X \to Y$ is said to be a homeomorphism of these two spaces if $f$ is also open and continuous, or equivalently, both $f$ and $f^{-1}$ are continuous.
We will now look at some examples of homeomorphic topological spaces.
Example 1 Show that the topological spaces $(0, 1)$ and $(1, \infty)$ (with their topologies being the unions of open balls resulting from the usual Euclidean metric on these subsets of $\mathbb{R}$) are homeomorphic.
To show that these two topological spaces are homeomorphic we must find a continuous bijection $f : X \to Y$ such that $f^{-1}$ is also continuous.
Consider the following function $f : (0, 1) \to (1, \infty)$ given by:

\begin{align} \quad f(x) = \frac{1}{x} \end{align}
We first show that $f$ is a bijection. Let $x, y \in (0, 1)$ and suppose that $f(x) = f(y)$. Then:

\begin{align} \quad \frac{1}{x} = \frac{1}{y} \end{align}
Cross multiplying gives us $x = y$, so $f$ is injective.
Now let $b \in (1, \infty)$. Since $b > 1$ we have that $0 < \frac{1}{b} < 1$, and so let $a = \frac{1}{b}$. Then:

\begin{align} \quad f(a) = \frac{1}{a} = \frac{1}{1/b} = b \end{align}
So for all $b \in (1, \infty)$ there exists an $a \in (0, 1)$ such that $f(a) = b$, so $f$ is surjective.
It's not hard to see that $f$ is a continuous map. Furthermore, $f^{-1} : (1, \infty) \to (0, 1)$ is also given by $f^{-1}(x) = \frac{1}{x}$ (which is continuous), and so $f$ is a homeomorphism between $(0, 1)$ and $(1, \infty)$, so these spaces are homeomorphic.
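Example 1 can also be spot-checked numerically; a short sketch (the sample points are arbitrary):

```python
# Spot-check Example 1: f(x) = 1/x maps (0, 1) into (1, infinity),
# and applying it twice returns the starting point (f is its own inverse).
def f(x):
    return 1.0 / x

for x in [0.01, 0.25, 0.5, 0.9, 0.999]:
    y = f(x)
    assert y > 1.0                 # image lies in (1, infinity)
    assert abs(f(y) - x) < 1e-12   # round trip recovers x
print("f passes the bijection round-trip check")
```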
Example 2 Show that the spaces $(-r, r)$, $r > 0$ and $\mathbb{R}$ with the topologies obtained by the unions of open balls with respect to the usual Euclidean metric are homeomorphic.
Consider a function $f : (- r, r) \to \mathbb{R}$ given, for example, by $f(x) = \tan \left ( \frac{\pi x}{2r} \right )$. Then $f$ is clearly continuous, and it should be equally clear that $f^{-1}$ is continuous.
Therefore $f$ is a homeomorphism between $(-r, r)$ and $\mathbb{R}$ so these spaces are homeomorphic. |
With $\operatorname{gd}(x)$ we denote the so-called Gudermannian function, see for example this MathWorld.
I know the closed-form of the following integral for the more simple cases of integers $n,m\geq 1$ $$\int_0^\infty\frac{(\operatorname{gd}(x))^n}{(\cosh(x))^m}dx.\tag{1}$$
I don't know whether this family appears in the literature.
Question. Can you provide me a summary of the strategy to get the closed form of an integral in $(1)$, for example with $n=3$ and $m=4$? All the tedious calculations are not required, nor is the exact closed form: what is required is the strategy to get the result. Thanks in advance.
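Though not a strategy for the closed form, a numerical value for the $n=3$, $m=4$ case is easy to obtain and useful for checking any candidate answer; a sketch using the identity $\operatorname{gd}(x) = 2\arctan(\tanh(x/2))$ (the `simpson` helper is my own):

```python
import math

def gd(x):
    # Gudermannian function: gd(x) = 2 * arctan(tanh(x / 2))
    return 2.0 * math.atan(math.tanh(x / 2.0))

def integrand(x, n=3, m=4):
    return gd(x) ** n / math.cosh(x) ** m

def simpson(f, a, b, steps=2000):
    # composite Simpson's rule; steps must be even
    h = (b - a) / steps
    s = f(a) + f(b)
    for k in range(1, steps):
        s += f(a + k * h) * (4 if k % 2 else 2)
    return s * h / 3.0

# The integrand decays like e^{-4x}, so truncating at x = 40 is harmless.
value = simpson(integrand, 0.0, 40.0)
print(f"n=3, m=4 integral ~ {value:.10f}")
```

Since $\operatorname{gd}(x) < \pi/2$ and $\int_0^\infty \operatorname{sech}^4 x \, dx = 2/3$, the value is bounded above by $(\pi/2)^3 \cdot 2/3 \approx 2.58$, a quick consistency check on the quadrature.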