EuDML | On the equivalence of Quillen's and Swan's K-theories.
Ebeling, Paul; Keune, Frans
Ebeling, Paul, and Keune, Frans. "On the equivalence of Quillen's and Swan's K-theories." Georgian Mathematical Journal 9.4 (2002): 691-702. <http://eudml.org/doc/50469>.
@article{Ebeling2002,
  author = {Ebeling, Paul and Keune, Frans},
  title = {On the equivalence of Quillen's and Swan's K-theories},
  journal = {Georgian Mathematical Journal},
  volume = {9},
  number = {4},
  pages = {691--702},
  year = {2002},
}
Keywords: simplicial resolutions; non-abelian derived functors |
EuDML | A closer look at lattice points in rational simplices.
Beck, Matthias
Beck, Matthias. "A closer look at lattice points in rational simplices." The Electronic Journal of Combinatorics 6.1 (1999): Research Paper R37, 9 p. <http://eudml.org/doc/120187>.
@article{Beck1999,
  author = {Beck, Matthias},
  title = {A closer look at lattice points in rational simplices},
  journal = {The Electronic Journal of Combinatorics},
  volume = {6},
  number = {1},
  pages = {Research Paper R37, 9 p.},
  year = {1999},
}
Keywords: lattice point; n-dimensional polytopes; simplex; quasipolynomials |
15 February 2006 Agmon-Kato-Kuroda theorems for a large class of perturbations
Alexandru D. Ionescu,1 Wilhelm Schlag2
1Department of Mathematics, University of Wisconsin–Madison
We prove asymptotic completeness for operators of the form H=-\Delta +L on {L}^{2}\left({\mathbb{R}}^{d}\right), d\ge 2, where L is an admissible perturbation. Our class of admissible perturbations contains multiplication operators defined by real-valued potentials V\in {L}^{q}\left({\mathbb{R}}^{d}\right), q\in \left[d/2,\left(d+1\right)/2\right] (if d=2, then we require q\in \left(1,3/2\right]), as well as real-valued potentials V satisfying a global Kato condition. The class of admissible perturbations also contains first-order differential operators of the form \stackrel{\to }{a}\cdot \nabla -\nabla \cdot \stackrel{\to }{a} for suitable vector potentials \stackrel{\to }{a}. Our main technical statement is a new limiting absorption principle, which we prove using techniques from harmonic analysis related to the Stein-Tomas restriction theorem.
Alexandru D. Ionescu, Wilhelm Schlag. "Agmon-Kato-Kuroda theorems for a large class of perturbations." Duke Math. J. 131 (3): 397-440, 15 February 2006. https://doi.org/10.1215/S0012-7094-06-13131-9 |
Validity range for uncertain real (ureal) parameters - MATLAB getLimits
Validity Range for Uncertain Parameters
Validity range for uncertain real (ureal) parameters
[ActLims,NormLims] = getLimits(ublk)
When the uncertainty range of a ureal parameter is not centered at its nominal value, there are restrictions on the range of values the parameter can take. For robust stability analysis, these restrictions mean that the smallest destabilizing perturbation of the parameter may be out of the reach of the specified ureal model. Use getLimits to find out the range of actual and normalized values that a ureal parameter can take.
[ActLims,NormLims] = getLimits(ublk) computes the intervals of actual and normalized values that an uncertain real parameter can take. For meaningful analysis results, the actual and normalized values of ublk must remain in these intervals. Values outside these intervals are essentially meaningless. In other words, ActLims and NormLims are the ranges of validity of the uncertainty model for real parameters.
Create a ureal uncertain parameter with range centered at the nominal value.
ublk = ureal('a',1,'range',[-1 3])
ublk =
Uncertain real parameter "a" with nominal value 1 and range [-1,3].
For such a parameter, b = 0 (see Algorithms), so there is no constraint on the values that the actual uncertainty (ublk) and the normalized uncertainty (Δ) can take. Use getLimits to confirm the ranges of the actual and normalized uncertainty.
ActLims = 1×2
  -Inf   Inf
NormLims = 1×2
  -Inf   Inf
Skew the uncertainty range to the right of the nominal value (DL < DR).
ublk.PlusMinus = [-1 2]
Uncertain real parameter "a" with nominal value 1 and range [0,3].
Now, the values that ublk and Δ can take for analysis purposes are limited.
ActLims = 1×2
  -3.0000       Inf
NormLims = 1×2
     -Inf    3.0000
ublk — Uncertain real parameter
Uncertain real parameter, specified as a ureal object.
ActLims — Limits on actual uncertainty
Limits on the actual uncertainty range taken by ublk for analysis purposes, returned as a 2-element vector of the form [min,max]. When the uncertainty range specified in ublk is centered on the nominal value, ActLims = [-Inf,Inf].
NormLims — Limits on normalized uncertainty
Limits on the normalized uncertainty range of ublk used for analysis purposes, returned as a 2-element vector of the form [min,max]. When the uncertainty range specified in ublk is centered on the nominal value, NormLims = [-Inf,Inf].
Analysis functions such as robstab and robgain model uncertain real parameters as:
u={u}_{nom}+\frac{a\Delta }{1-b\Delta },\text{ }a>0,
where u is the actual value, unom is the nominal value, and Δ is the normalized value. When the uncertainty range is centered at the nominal value, there are no restrictions on the values u or Δ can take. However, when the uncertainty range is skewed, there are limitations on these values. To ensure continuity, the analysis functions restrict the values Δ and u to the ranges:
\begin{array}{l}\Delta <\frac{1}{|b|},\text{ }u>{u}_{nom}-\left|\frac{a}{b}\right|,\text{ for }DL<DR\\ \Delta >-\frac{1}{|b|},\text{ }u<{u}_{nom}+\left|\frac{a}{b}\right|,\text{ for }DL>DR,\end{array}
where DL and DR define the uncertainty range of u, [unom–DL,unom+DR]. Note that b and DR–DL always have the same sign.
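The parameters a and b are not given explicitly here, but assuming the standard normalization (Δ = −1, 0, +1 map to u_nom − DL, u_nom, u_nom + DR), the fractional transformation above forces b = (DR − DL)/(DR + DL) and a = 2·DL·DR/(DL + DR). A hedged Python sketch (our own helper, not a MathWorks API) that reproduces the limits for the skewed example with nominal value 1 and range [0,3]:

```python
def ureal_limits(u_nom, DL, DR):
    """Compute (a, b) of u = u_nom + a*delta/(1 - b*delta) and the validity
    limits, assuming delta = -1, 0, +1 map to u_nom - DL, u_nom, u_nom + DR."""
    b = (DR - DL) / (DR + DL)          # same sign as DR - DL
    a = 2.0 * DL * DR / (DL + DR)      # a > 0
    inf = float("inf")
    if b == 0:                         # centered range: no restriction
        return a, b, (-inf, inf), (-inf, inf)
    if b > 0:                          # DL < DR: delta < 1/|b|, u > u_nom - |a/b|
        return a, b, (u_nom - abs(a / b), inf), (-inf, 1.0 / abs(b))
    else:                              # DL > DR: delta > -1/|b|, u < u_nom + |a/b|
        return a, b, (-inf, u_nom + abs(a / b)), (-1.0 / abs(b), inf)

a, b, act, norm = ureal_limits(1.0, 1.0, 2.0)   # nominal 1, range [0, 3]
print(act, norm)   # ActLims = [-3, Inf], NormLims = [-Inf, 3], as in the example
```

As a sanity check on the assumed normalization, u(Δ=1) = 1 + (4/3)/(1 − 1/3) = 3 and u(Δ=−1) = 1 − (4/3)/(1 + 1/3) = 0, recovering the endpoints of the range [0,3].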
normalized2actual | ureal | actual2normalized |
Local field - Wikipedia
A locally compact topological field
In mathematics, a field K is called a (non-Archimedean) local field if it is complete with respect to a topology induced by a discrete valuation v and if its residue field k is finite.[1] Equivalently, a local field is a locally compact topological field with respect to a non-discrete topology.[2] Sometimes, the real numbers R and the complex numbers C (with their standard topologies) are also defined to be local fields; this is the convention adopted below. Given a local field, the valuation defined on it can be of either of two types, each of which corresponds to one of the two basic types of local fields: those in which the valuation is Archimedean and those in which it is not. In the first case, one calls the local field an Archimedean local field; in the second case, one calls it a non-Archimedean local field.[3] Local fields arise naturally in number theory as completions of global fields.[4]
While Archimedean local fields have been quite well known in mathematics for at least 250 years, the first examples of non-Archimedean local fields, the fields of p-adic numbers for positive prime integer p, were introduced by Kurt Hensel at the end of the 19th century.
Every local field is isomorphic (as a topological field) to one of the following:[3]
Archimedean local fields (characteristic zero): the real numbers R, and the complex numbers C.
Non-Archimedean local fields of characteristic zero: finite extensions of the p-adic numbers Qp (where p is any prime number).
Non-Archimedean local fields of characteristic p (for p any given prime number): the field of formal Laurent series Fq((T)) over a finite field Fq, where q is a power of p.
In particular, and importantly in number theory, classes of local fields show up as the completions of algebraic number fields with respect to the discrete valuations corresponding to their maximal ideals. Research papers in modern number theory often consider a more general notion, requiring only that the residue field be perfect of positive characteristic, not necessarily finite.[5] This article uses the former definition.
Induced absolute value
Given an absolute value |·| on a field K, the following topology can be defined on K: for a positive real number m, define the subset Bm of K by
{\displaystyle B_{m}:=\{a\in K:|a|\leq m\}.}
Then the sets b+Bm make up a neighbourhood basis of b in K.
Conversely, a topological field with a non-discrete locally compact topology has an absolute value defining its topology. It can be constructed using the Haar measure of the additive group of the field.
Basic features of non-Archimedean local fields
For a non-Archimedean local field F (with absolute value denoted by |·|), the following objects are important:
its ring of integers
{\displaystyle {\mathcal {O}}=\{a\in F:|a|\leq 1\}}
which is a discrete valuation ring, is the closed unit ball of F, and is compact;
the units in its ring of integers
{\displaystyle {\mathcal {O}}^{\times }=\{a\in F:|a|=1\}}
which forms a group and is the unit sphere of F;
the unique non-zero prime ideal
{\displaystyle {\mathfrak {m}}}
in its ring of integers which is its open unit ball
{\displaystyle \{a\in F:|a|<1\}}
a generator {\displaystyle \varpi } of {\displaystyle {\mathfrak {m}}}, called a uniformizer of {\displaystyle F};
its residue field
{\displaystyle k={\mathcal {O}}/{\mathfrak {m}}}
which is finite (since it is compact and discrete).
Every non-zero element a of F can be written as a = ϖ^n u with u a unit and n a unique integer. The normalized valuation of F is the surjective function v : F → Z ∪ {∞} defined by sending a non-zero a to the unique integer n such that a = ϖ^n u with u a unit, and by sending 0 to ∞. If q is the cardinality of the residue field, the absolute value on F induced by its structure as a local field is given by:[6]
{\displaystyle |a|=q^{-v(a)}.}
An equivalent and very important definition of a non-Archimedean local field is that it is a field that is complete with respect to a discrete valuation and whose residue field is finite.
The p-adic numbers: the ring of integers of Qp is the ring of p-adic integers Zp. Its prime ideal is pZp and its residue field is Z/pZ. Every non-zero element of Qp can be written as u p^n where u is a unit in Zp and n is an integer; then v(u p^n) = n for the normalized valuation.
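Restricted to non-zero integers, the normalized valuation and the induced absolute value on Qp can be sketched in a few lines of Python (helper names are ours):

```python
def v_p(a, p):
    """Normalized p-adic valuation of a non-zero integer: the exact
    power of p dividing a, i.e. a = p**n * u with u a p-adic unit."""
    n = 0
    while a % p == 0:
        a //= p
        n += 1
    return n

def abs_p(a, p):
    """Induced absolute value |a| = q**(-v(a)); for Q_p the residue
    field is Z/pZ, so q = p."""
    return p ** (-v_p(a, p))

print(v_p(12, 2), abs_p(12, 2))   # 12 = 2**2 * 3, so v = 2 and |12|_2 = 0.25
# the ultrametric inequality |a + b| <= max(|a|, |b|) holds:
assert abs_p(12 + 20, 2) <= max(abs_p(12, 2), abs_p(20, 2))
```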
The formal Laurent series over a finite field: the ring of integers of Fq((T)) is the ring of formal power series Fq[[T]]. Its maximal ideal is (T) (i.e. the power series whose constant term is zero) and its residue field is Fq. Its normalized valuation is related to the (lower) degree of a formal Laurent series as follows:
{\displaystyle v\left(\sum _{i=-m}^{\infty }a_{i}T^{i}\right)=-m}
(where a−m is non-zero).
The formal Laurent series over the complex numbers is not a local field. For example, its residue field is C[[T]]/(T) = C, which is not finite.
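In coordinates, the valuation of a formal Laurent series is simply its lowest exponent with non-zero coefficient. A minimal sketch, representing truncated series over F_p as Python dicts mapping exponent to coefficient (a simplification of the full ring F_q((T))):

```python
def v(series):
    """Valuation of a non-zero truncated Laurent series {exponent: coefficient}:
    the smallest exponent carrying a non-zero coefficient (the lower degree)."""
    return min(e for e, c in series.items() if c != 0)

def mul(f, g, p):
    """Product of two truncated series, coefficients reduced mod p."""
    h = {}
    for ef, cf in f.items():
        for eg, cg in g.items():
            h[ef + eg] = (h.get(ef + eg, 0) + cf * cg) % p
    return h

f = {-3: 1, 1: 2}   # f = T**-3 + 2*T over F_5, so v(f) = -3
g = {0: 4, 2: 1}    # g = 4 + T**2, so v(g) = 0
print(v(f), v(g))                  # -3 0
# valuations add under multiplication: v(f*g) = v(f) + v(g)
assert v(mul(f, g, 5)) == v(f) + v(g)
```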
Higher unit groups
The nth higher unit group of a non-Archimedean local field F is
{\displaystyle U^{(n)}=1+{\mathfrak {m}}^{n}=\left\{u\in {\mathcal {O}}^{\times }:u\equiv 1\,(\mathrm {mod} \,{\mathfrak {m}}^{n})\right\}}
for n ≥ 1. The group U(1) is called the group of principal units, and any element of it is called a principal unit. The full unit group
{\displaystyle {\mathcal {O}}^{\times }}
is denoted U(0).
The higher unit groups form a decreasing filtration of the unit group
{\displaystyle {\mathcal {O}}^{\times }\supseteq U^{(1)}\supseteq U^{(2)}\supseteq \cdots }
whose quotients are given by
{\displaystyle {\mathcal {O}}^{\times }/U^{(n)}\cong \left({\mathcal {O}}/{\mathfrak {m}}^{n}\right)^{\times }{\text{ and }}\,U^{(n)}/U^{(n+1)}\approx {\mathcal {O}}/{\mathfrak {m}}}
for n ≥ 1.[7] (Here "{\displaystyle \approx }" means a non-canonical isomorphism.)
Structure of the unit group
The multiplicative group of non-zero elements of a non-Archimedean local field F is isomorphic to
{\displaystyle F^{\times }\cong (\varpi )\times \mu _{q-1}\times U^{(1)}}
where q is the order of the residue field, and μq−1 is the group of (q−1)st roots of unity (in F). Its structure as an abelian group depends on its characteristic:
If F has positive characteristic p, then
{\displaystyle F^{\times }\cong \mathbf {Z} \oplus \mathbf {Z} /{(q-1)}\oplus \mathbf {Z} _{p}^{\mathbf {N} }}
where N denotes the natural numbers;
If F has characteristic zero (i.e. it is a finite extension of Qp of degree d), then
{\displaystyle F^{\times }\cong \mathbf {Z} \oplus \mathbf {Z} /(q-1)\oplus \mathbf {Z} /p^{a}\oplus \mathbf {Z} _{p}^{d}}
where a ≥ 0 is defined so that the group of p-power roots of unity in F is
{\displaystyle \mu _{p^{a}}}
Theory of local fields
This theory includes the study of types of local fields, extensions of local fields using Hensel's lemma, Galois extensions of local fields, ramification group filtrations of Galois groups of local fields, the behavior of the norm map on local fields, the local reciprocity homomorphism and the existence theorem of local class field theory, the local Langlands correspondence, Hodge-Tate theory (also called p-adic Hodge theory), and explicit formulas for the Hilbert symbol in local class field theory; see e.g.[9]
Higher-dimensional local fields
Main article: Higher local field
A local field is sometimes called a one-dimensional local field.
A non-Archimedean local field can be viewed as the field of fractions of the completion of the local ring of a one-dimensional arithmetic scheme of rank 1 at its non-singular point.
For a non-negative integer n, an n-dimensional local field is a complete discrete valuation field whose residue field is an (n − 1)-dimensional local field.[5] Depending on the definition of local field, a zero-dimensional local field is then either a finite field (with the definition used in this article), or a perfect field of positive characteristic.
From the geometric point of view, n-dimensional local fields with last finite residue field are naturally associated to a complete flag of subschemes of an n-dimensional arithmetic scheme.
^ Cassels & Fröhlich 1967, p. 129, Ch. VI, Intro.
^ Milne 2020, p. 127, Remark 7.49.
^ Neukirch 1999, p. 134, Sec. 5.
^ Fesenko & Vostokov 2002, Def. 1.4.6.
^ Weil 1995, Ch. I, Theorem 6.
^ Neukirch 1999, p. 122.
^ Neukirch 1999, Theorem II.5.7.
^ Fesenko & Vostokov 2002, Chapters 1-4, 7.
Fesenko, Ivan B.; Vostokov, Sergei V. (2002), Local fields and their extensions, Translations of Mathematical Monographs, vol. 121 (Second ed.), Providence, RI: American Mathematical Society, ISBN 978-0-8218-3259-2, MR 1915966
Milne, James S. (2020), Algebraic Number Theory (3.08 ed.)
Neukirch, Jürgen (1999). Algebraic Number Theory. Vol. 322. Translated by Schappacher, Norbert. Berlin: Springer-Verlag. ISBN 978-3-540-65399-8. MR 1697859. Zbl 0956.11021.
Weil, André (1995), Basic number theory, Classics in Mathematics, Berlin, Heidelberg: Springer-Verlag, ISBN 3-540-58655-5
"Local field", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Retrieved from "https://en.wikipedia.org/w/index.php?title=Local_field&oldid=1087300683" |
Let \pi : E \to M be a fiber bundle with base dimension n, and let \pi^{\infty} : J^{\infty}(E) \to M denote the infinite jet bundle of E, with adapted coordinates (x^{i}, u^{\alpha}, u_{i}^{\alpha}, u_{ij}^{\alpha}, \ldots, u_{ij\cdots k}^{\alpha}, \ldots). The contact 1-forms are \Theta^{\alpha} = du^{\alpha} - u_{\ell}^{\alpha}\, dx^{\ell} (together with their prolongations). Let \Omega^{(n,s)}(J^{\infty}(E)) denote the space of biforms of horizontal degree n and vertical (contact) degree s. For \omega \in \Omega^{(n,s)}(J^{\infty}(E)) with s \ge 1, the Euler operators give E_{\alpha}(\omega) \in \Omega^{(n,s-1)}(J^{\infty}(E)), and the interior Euler operator I : \Omega^{(n,s)}(J^{\infty}(E)) \to \Omega^{(n,s)}(J^{\infty}(E)) is defined by
I(\omega) = \frac{1}{s}\,\Theta^{\alpha} \wedge E_{\alpha}(\omega).
The operator I has three key properties. First, if \eta is a biform of degree (n-1,s), then I(d_{H}\eta) = 0, where d_{H}\eta denotes the horizontal exterior derivative of \eta. Second, conversely, if \omega is a biform of degree (n,s) with I(\omega) = 0, then there is a biform \eta of degree (n-1,s) with \omega = d_{H}\eta. Third, I is a projection operator: I \circ I = I. For a biform \omega of degree (n,s), the output is I(\omega).
Example 1. Work on the jet space J^{3}(E) for the bundle E with projection (x,u) \to x, so that the biform coframe is dx, \Theta, \Theta_{x}, \Theta_{xx}, \Theta_{xxx}. First define a biform \omega_{1} of degree (1,1),
\omega_{1} := a\, dx \wedge \Theta + b\, dx \wedge \Theta_{x} + c\, dx \wedge \Theta_{xx} + d(x)\, dx \wedge \Theta_{xxx}.
Applying the interior Euler operator gives (subscripts denote total derivatives)
I(\omega_{1}) = \left(a - b_{x} + c_{xx} - d_{xxx}\right) dx \wedge \Theta.
Next define a biform \omega_{2} of degree (1,2),
\omega_{2} := a\, dx \wedge \Theta \wedge \Theta_{x} + b\, dx \wedge \Theta \wedge \Theta_{xx} + c\, dx \wedge \Theta_{x} \wedge \Theta_{xx},
and set \omega_{3} := I(\omega_{2}):
\omega_{3} = \left(a - b_{x} - \tfrac{1}{2}c_{xx}\right) dx \wedge \Theta \wedge \Theta_{x} - \tfrac{3}{2}c_{x}\, dx \wedge \Theta \wedge \Theta_{xx} - c\, dx \wedge \Theta \wedge \Theta_{xxx}.
Applying I to \omega_{3} returns \omega_{3} unchanged, verifying on this example that I \circ I = I.
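The coefficient of I(\omega_{1}) is the alternating Euler sum a - D_x b + D_x^2 c - D_x^3 d, and the property I(d_H\eta) = 0 can be checked directly against it: if \eta = f\,\Theta + g\,\Theta_x + h\,\Theta_{xx}, then (up to one sign convention for d_H) the coefficients of d_H\eta are (D_x f, f + D_x g, g + D_x h, h), and the alternating sum telescopes to zero. A sympy sketch of this cancellation (our own, not part of the Maple worksheet), restricting f, g, h to functions of x for simplicity:

```python
import sympy as sp

x = sp.symbols('x')
f, g, h = (sp.Function(name)(x) for name in 'fgh')
Dx = lambda w: sp.diff(w, x)   # total derivative (all dependence is through x here)

# coefficients of d_H(f*Theta + g*Theta_x + h*Theta_xx) on
# dx^Theta, dx^Theta_x, dx^Theta_xx, dx^Theta_xxx (one sign convention):
a, b, c, d = Dx(f), f + Dx(g), g + Dx(h), h

# interior Euler operator coefficient: the alternating sum a - Dx b + Dx^2 c - Dx^3 d
euler_sum = a - Dx(b) + Dx(Dx(c)) - Dx(Dx(Dx(d)))
assert sp.simplify(euler_sum) == 0   # I(d_H eta) = 0 in coordinates
```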
Example 2. Now work on the jet space J^{3}(E) for the bundle E with projection (x,y,u,v) \to (x,y). Define a biform \omega_{4} of degree (2,1),
\omega_{4} := dx \wedge dy \wedge \left(a\,\Theta^{1} + b\,\Theta^{2} + c\,\Theta^{1}_{x} + d\,\Theta^{1}_{y} + e\,\Theta^{2}_{x} + f\,\Theta^{2}_{y}\right).
Then
I(\omega_{4}) = \left(a - c_{x} - d_{y}\right) dx \wedge dy \wedge \Theta^{1} + \left(b - e_{x} - f_{y}\right) dx \wedge dy \wedge \Theta^{2}.
Next define a biform \omega_{5} of degree (2,2),
\omega_{5} := a\, dx \wedge dy \wedge \Theta^{1}_{x} \wedge \Theta^{2}_{x}.
Then
I(\omega_{5}) = -\tfrac{1}{2}a_{x}\, dx \wedge dy \wedge \Theta^{1} \wedge \Theta^{2}_{x} - \tfrac{1}{2}a\, dx \wedge dy \wedge \Theta^{1} \wedge \Theta^{2}_{xx} + \tfrac{1}{2}a_{x}\, dx \wedge dy \wedge \Theta^{2} \wedge \Theta^{1}_{x} + \tfrac{1}{2}a\, dx \wedge dy \wedge \Theta^{2} \wedge \Theta^{1}_{xx}.
Example 3. Finally, verify the property I(d_{H}\eta) = 0. Define a biform \eta of degree (1,3),
\eta := u_{x}\, dx \wedge \Theta^{1}_{y} \wedge \Theta^{2}_{x} \wedge \Theta^{1}_{xx},
and set \omega_{6} := d_{H}\eta, a biform of degree (2,3). Applying the interior Euler operator gives the zero biform, I(\omega_{6}) = 0, as required. |
From Erasmus Alvey Darwin [May 1844 – 1 October 1846]1
diagramme: 14½ miles base with 200 ft gives 0° 7′ 48″; 35 miles with 674 feet gives 0° 10′ 53″
Dear Charles.
Swale2 sent here Lady Willogby’s Diary3 which I have transferred to Gower St4 to take down to you. I hope Emma was none the worse for her journey & Granny5 had a beautiful day for hers—
Yours. E D Saturday
The date range is set by the publication of the first volume of Lady Willoughby's diary in May 1844 (Literary Advertiser) and the completion of South America. This letter is associated with two covers addressed to CD by E. A. Darwin, both with further calculations by E. A. Darwin: 14½ miles 188 feet rise gives 7′ 20″; 486 feet rise in 35 miles (2025 yards to mile) gives an angle 0° 7′ 52″; 100 feet in 14½ miles gives 0° 3′ 5″. CD has annotated these covers: 7′ 20″] 'p 182, p. 185' added pencil; 486 feet … 35 miles] '((842 is the rise of the bottom of the lava in the 35 miles))' added ink; 0° 3′ 5″] '1° 22′ (77 & 85 fathoms)' added ink. The calculations relate to the inclination of lava flows in the valley of the Santa Cruz River described by CD in South America, pp. 116–17. According to CD's 'Journal' (Correspondence vol. 3, Appendix II) he worked on the book from 27 July 1844 to 1 October 1846.
Ralph and William H. Swale, Booksellers, 21 Great Russell Street, London.
[Rathbone] 1844–8, a fictitious diary.
The Hensleigh Wedgwoods lived at 16 Gower Street.
Susan Darwin, CD’s sister.
[Rathbone, Hannah Mary]. 1844–8. So much of the diary of Lady Willoughby as relates to her domestic history, and to the eventful period of the reign of Charles the First. 2 vols. London.
Sends calculations of angles of elevation [of sea-bottom, for South America?].
Swale has sent Lady Willoughby’s diary, which EAD will forward to CD.
ALS 2pp †, Amem † |
We study the gradient flow for the total variation functional, which arises in image processing and geometric applications. We propose a variational inequality weak formulation for the gradient flow, and establish well-posedness of the problem by the energy method. The main idea of our approach is to exploit the relationship between the regularized gradient flow (characterized by a small positive parameter
\epsilon
), the minimal surface flow [21], and the prescribed mean curvature flow [16]. Since our approach is constructive and variational, finite element methods can be naturally applied to approximate weak solutions of the limiting gradient flow problem. We propose a fully discrete finite element method and establish convergence to the regularized gradient flow problem as
h,k\to 0
, and to the total variation gradient flow problem as
h,k,\epsilon \to 0
in general cases. Provided that the regularized gradient flow problem possesses strong solutions, which is proved possible if the datum functions are regular enough, we establish practical a priori error estimates for the fully discrete finite element solution, in particular, by focusing on the dependence of the error bounds on the regularization parameter
\epsilon
. Optimal order error bounds are derived for the numerical solution under the mesh relation
k=O\left({h}^{2}\right)
. In particular, it is shown that all error bounds depend on
\frac{1}{\epsilon }
only in some lower polynomial order for small
\epsilon
Classification : 35B25, 35K57, 35Q99, 65M60, 65M12
Keywords: bounded variation, gradient flow, variational inequality, equations of prescribed mean curvature and minimal surface, fully discrete scheme, finite element method
author = {Feng, Xiaobing and Prohl, Andreas},
title = {Analysis of total variation flow and its finite element approximations},
TI - Analysis of total variation flow and its finite element approximations
Feng, Xiaobing; Prohl, Andreas. Analysis of total variation flow and its finite element approximations. ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, Tome 37 (2003) no. 3, pp. 533-556. doi : 10.1051/m2an:2003041. http://www.numdam.org/articles/10.1051/m2an:2003041/
[1] L. Ambrosio, N. Fusco and D. Pallara, Functions of bounded variation and free discontinuity problems. The Clarendon Press Oxford University Press, New York (2000). | MR 1857292 | Zbl 0957.49001
[2] F. Andreu, C. Ballester, V. Caselles and J.M. Mazón, The Dirichlet problem for the total variation flow. J. Funct. Anal. 180 (2001) 347-403. | Zbl 0973.35109
[3] F. Andreu, C. Ballester, V. Caselles and J.M. Mazón, Minimizing total variation flow. Differential Integral Equations 14 (2001) 321-360. | Zbl 1020.35037
[4] F. Andreu, V. Caselles, J.I. Díaz and J.M. Mazón, Some qualitative properties for the total variation flow. J. Funct. Anal. 188 (2002) 516-547. | Zbl 1042.35018
[5] G. Bellettini and V. Caselles, The total variation flow in
{𝐑}^{N}
. J. Differential Equations (accepted). | Zbl 1036.35099
[6] S.C. Brenner and L.R. Scott, The mathematical theory of finite element methods. Springer-Verlag, New York, 2nd ed. (2002). | MR 1894376 | Zbl 0804.65101
[7] H. Brézis, Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert. North-Holland Publishing Co., Amsterdam, North-Holland Math. Stud., No. 5. Notas de Matemática (50) (1973). | MR 348562 | Zbl 0252.47055
[8] E. Casas, K. Kunisch and C. Pola, Regularization by functions of bounded variation and applications to image enhancement. Appl. Math. Optim. 40 (1999) 229-257. | Zbl 0942.49014
[9] A. Chambolle and P.-L. Lions, Image recovery via total variation minimization and related problems. Numer. Math. 76 (1997) 167-188. | Zbl 0874.68299
[10] T. Chan and J. Shen, On the role of the BV image model in image restoration. Tech. Report CAM 02-14, Department of Mathematics, UCLA (2002). | MR 2011710 | Zbl 1035.94501
[11] T.F. Chan, G.H. Golub and P. Mulet, A nonlinear primal-dual method for total variation-based image restoration. SIAM J. Sci. Comput. 20 (1999) 1964-1977 (electronic). | Zbl 0929.68118
[12] P.G. Ciarlet, The finite element method for elliptic problems. North-Holland Publishing Co., Amsterdam, Stud. Math. Appl. 4 (1978). | MR 520174 | Zbl 0383.65058
[13] M.G. Crandall and T.M. Liggett, Generation of semi-groups of nonlinear transformations on general Banach spaces. Amer. J. Math. 93 (1971) 265-298. | Zbl 0226.47038
[14] D.C. Dobson and C.R. Vogel, Convergence of an iterative method for total variation denoising. SIAM J. Numer. Anal. 34 (1997) 1779-1791. | Zbl 0898.65034
[15] C. Gerhardt, Boundary value problems for surfaces of prescribed mean curvature. J. Math. Pures Appl. 58 (1979) 75-109. | Zbl 0413.35024
[16] C. Gerhardt, Evolutionary surfaces of prescribed mean curvature. J. Differential Equations 36 (1980) 139-172. | Zbl 0485.35053
[17] D. Gilbarg and N.S. Trudinger, Elliptic partial differential equations of second order. Springer-Verlag, Berlin (2001). Reprint of the 1998 ed. | MR 1814364 | Zbl 1042.35002
[18] E. Giusti, Minimal surfaces and functions of bounded variation. Birkhäuser Verlag, Basel (1984). | MR 775682 | Zbl 0545.49018
[19] R. Hardt and X. Zhou, An evolution problem for linear growth functionals. Comm. Partial Differential Equations 19 (1994) 1879-1907. | Zbl 0811.35061
[20] C. Johnson and V. Thomée, Error estimates for a finite element approximation of a minimal surface. Math. Comp. 29 (1975) 343-349. | Zbl 0302.65086
[21] A. Lichnewsky and R. Temam, Pseudosolutions of the time-dependent minimal surface problem. J. Differential Equations 30 (1978) 340-364. | Zbl 0368.49016
[22] J.-L. Lions, Quelques méthodes de résolution des problèmes aux limites non linéaires. Dunod (1969). | MR 259693 | Zbl 0189.40603
[23] R. Rannacher, Some asymptotic error estimates for finite element approximation of minimal surfaces. RAIRO Anal. Numér. 11 (1977) 181-196. | Numdam | Zbl 0356.35034
[24] L. Rudin, S. Osher and E. Fatemi, Nonlinear total variation based noise removal algorithms. Phys. D 60 (1992) 259-268. | Zbl 0780.49028
[25] J. Simon, Compact sets in the space {L}^{p}\left(0,T;B\right). Ann. Mat. Pura Appl. 146 (1987) 65-96. | Zbl 0629.46031
[26] M. Struwe, Applications to nonlinear partial differential equations and Hamiltonian systems, in Variational methods. Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics (Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics), Vol. 34. Springer-Verlag, Berlin, 3rd ed. (2000). | MR 1736116 | Zbl 0939.49001 |
Difference between revisions of "Known infinite families of quadratic APN polynomials over GF(2^n)" - Boolean Functions
<td><math>L(x)^{2^i}x+L(x)x^{2^i}</math></td>
<td><math>n=km, m>1, \gcd(n,i)=1, L(x)=\sum_{j=0}^{k-1}a_jx^{2^{jm}}</math> satisfies the conditions in Theorem 3.6 of [7]</td>
<td><ref>Villa I, Budaghyan L, Calderini M, Carlet C, Coulter R. Constructing APN functions through isotopic shift. Cryptology ePrint Archive, Report 2018/769</ref></td>
<td><ref>Budaghyan L, Calderini M, Carlet C, Coulter R, Villa I. Constructing APN functions through isotopic shift. Cryptology ePrint Archive, Report 2018/769</ref></td>
{\displaystyle N^{\circ }}
C1-C2
{\displaystyle x^{2^{s}+1}+u^{2^{k}-1}x^{2^{ik}+2^{mk+s}}}
{\displaystyle n=pk,\gcd(k,3)=\gcd(s,3k)=1,p\in \{3,4\}}
{\displaystyle i=sk{\bmod {p}},m=p-i,n\geq 12,u{\text{ primitive in }}\mathbb {F} _{2^{n}}^{*}}
{\displaystyle sx^{q+1}+x^{2^{i}+1}+x^{q(2^{i}+1)}+cx^{2^{i}q+1}+c^{q}x^{2^{i}+q}}
{\displaystyle q=2^{m},n=2m,gcd(i,m)=1,c\in \mathbb {F} _{2^{n}},s\in \mathbb {F} _{2^{n}}\setminus \mathbb {F} _{q}}
{\displaystyle X^{2^{i}+1}+cX^{2^{i}}+c^{q}X+1{\text{ has no solution }}x{\text{ s.t. }}x^{q+1}=1}
{\displaystyle x^{3}+a^{-1}\mathrm {Tr} _{n}(a^{3}x^{9})}
{\displaystyle a\neq 0}
{\displaystyle x^{3}+a^{-1}\mathrm {Tr} _{n}^{3}(a^{3}x^{9}+a^{6}x^{18})}
{\displaystyle 3|n}
{\displaystyle a\neq 0}
{\displaystyle x^{3}+a^{-1}\mathrm {Tr} _{n}^{3}(a^{6}x^{18}+a^{12}x^{36})}
{\displaystyle 3|n,a\neq 0}
{\displaystyle ux^{2^{s}+1}+u^{2^{k}}x^{2^{-k}+2^{k+s}}+vx^{2^{-k}+1}+wu^{2^{k}+1}x^{2^{s}+2^{k+s}}}
{\displaystyle n=3k,\gcd(k,3)=\gcd(s,3k)=1,v,w\in \mathbb {F} _{2^{k}}}
{\displaystyle vw\neq 1,3|(k+s),u{\text{ primitive in }}\mathbb {F} _{2^{n}}^{*}}
{\displaystyle (x+x^{2{^{m}}})^{2^{k}+1}+u'(ux+u^{2^{m}}x^{2^{m}})^{(2^{k}+1)2^{i}}+u(x+x^{2^{m}})(ux+u^{2^{m}}x^{2^{m}})}
{\displaystyle n=2m,m\geqslant 2}
{\displaystyle \gcd(k,m)=1}
{\displaystyle i\geqslant 2}
{\displaystyle u{\text{ primitive in }}\mathbb {F} _{2^{n}}^{*},u'\in \mathbb {F} _{2^{m}}{\text{ not a cube }}}
{\displaystyle L(x)^{2^{i}}x+L(x)x^{2^{i}}}
{\displaystyle n=km,m>1,\gcd(n,i)=1,L(x)=\sum _{j=0}^{k-1}a_{j}x^{2^{jm}}}
satisfies the conditions in Theorem 3.6 of [7]
{\displaystyle ut(x)(x^{q}+x)+t(x)^{2^{2i}+2^{3i}}+at(x)^{2^{2i}}(x^{q}+x)^{2^{i}}+b(x^{q}+x)^{2^{i}+1}}
{\displaystyle n=2m,q=2^{m},\gcd(m,i)=1,t(x)=u^{q}x+x^{q}u}
{\displaystyle X^{2^{i}+1}+aX+b{\mbox{ has no solution over }}\mathbb {F} _{2^{m}}}
{\displaystyle x^{3}+a(x^{2^{i}+1})^{2^{k}}+bx^{3\cdot 2^{m}}+c(x^{2^{i+m}+2^{m}})^{2^{k}}}
{\displaystyle n=2m=10,(a,b,c)=(\beta ,1,0,0),i=3,k=2,\beta {\text{ primitive in }}\mathbb {F} _{2^{2}}}
{\displaystyle n=2m,m\ odd,3\nmid m,(a,b,c)=(\beta ,\beta ^{2},1),\beta {\text{ primitive in }}\mathbb {F} _{2^{2}}}
{\displaystyle i\in \{m-2,m,2m-1,(m-2)^{-1}\mod n\}}
↑ Budaghyan L, Calderini M, Carlet C, Coulter R, Villa I. Constructing APN functions through isotopic shift. Cryptology ePrint Archive, Report 2018/769
↑ Taniguchi H. On some quadratic APN functions. Des. Codes Cryptogr. 2019, https://doi.org/10.1007/s10623-018-00598-2
Tumeken's heka - OSRS Wiki
Tumeken's heka
Tumeken's heka is a temporary piece of content found only in unrestricted worlds.
The contents of this page refer to a piece of content only available on temporary unrestricted worlds as a means to test it before release.
Its design is subject to change, and the graphics are used as a placeholder.
Tumeken's heka (uncharged)
Wield, Charge
A wand once used by Tumeken himself. It is currently uncharged.
Item infobox (both versions): released 12 August 2021; members-only; examine (charged): "A wand once used by Tumeken himself."; examine (uncharged): "A wand once used by Tumeken himself. It is currently uncharged."; value 150,000 coins (high alchemy 90,000, low alchemy 60,000); weight 0.198 kg; item IDs 25987 (charged) and 25989 (uncharged).
Tumeken's heka is a powered wand requiring 85 Magic to equip. It belonged to Tumeken, the head of the Menaphite Pantheon, and was given to Osmumten by Elidinis after Tumeken's death in an ancient war. It is only available from the supplies table in an unrestricted world as a test item for players and is one of the proposed rewards for the Tombs of Amascut raid.
Like powered staves, Tumeken's heka has a built-in magic spell that can be used regardless of the spellbook the player is using and cannot be used to autocast any other combat spells. However, the heka has a unique attack effect in which every fourth hit will fire a charged attack that deals significantly more damage, albeit with a longer delay after the attack. The first three attacks have an attack speed of two ticks (1.2 seconds), with the fourth attack having an attack speed of four ticks (2.4 seconds).
The spell cannot be used against other players in the Wilderness. The only way that the spell can be used against other players is during the Castle Wars, Soul Wars and TzHaar Fight Pit minigames, as well as the Clan Wars minigame if the clan leaders enable it.
A player using Tumeken's heka's built in spell.
Combat stats (identical for both versions): attack bonuses: stab +0, slash +0, crush +0, magic +25, ranged −4. Defence bonuses: stab +2, slash +3, crush +1, magic +20, ranged +0. Other: strength +0, ranged strength +0, magic damage +0%, prayer +0. Equipment slot: weapon; combat style: magic; attack range: 6 squares.
Category: Powered Wand
Accurate Magic Accurate Magic and Hitpoints +3 Magic
Longrange Magic Longrange Magic, Defence, and Hitpoints +1 Magic, +3 Defence, and +2 Attack range
In order to use the wand's magic spell, it must be charged with soul and chaos runes, holding up to 20,000 charges when fully charged. Each cast requires 1 soul rune and 3 chaos runes, costing 380 coins per cast. The total cost to fully charge the wand with 20,000 soul and 60,000 chaos runes is 7,600,000.
Players can uncharge the wand whenever and wherever they choose, getting all of their runes back.
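The charge arithmetic above is easy to verify; a minimal sketch (the 380-coin per-cast cost is the page's figure and varies with rune prices):

```python
# Charge-cost arithmetic for Tumeken's heka, as quoted above.
COST_PER_CAST = 380   # coins: 1 soul rune + 3 chaos runes (page's figure)
MAX_CHARGES = 20_000

soul_runes = MAX_CHARGES * 1   # 1 soul rune per cast
chaos_runes = MAX_CHARGES * 3  # 3 chaos runes per cast
full_charge_cost = MAX_CHARGES * COST_PER_CAST

print(soul_runes, chaos_runes, full_charge_cost)  # 20000 60000 7600000
```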
Maximum hit
The maximum hit dealt by Tumeken's heka is dependent on the player's current Magic level; the formula for the heka's fast and charged attacks are as follows. Note that any magic damage bonuses are applied after the base max hit is calculated.
{\displaystyle {\begin{aligned}Fast&={\frac {Magic}{6}}+2\\Charged&={\frac {Magic-25}{2}}\\\end{aligned}}}
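The formulas above can be evaluated directly; a sketch assuming the usual convention that base max hits are rounded down (the flooring and the helper name are our assumptions):

```python
def heka_max_hits(magic_level):
    """Base max hits (fast, charged) before magic damage bonuses,
    per the formulas above; flooring assumed, as is usual for max hits."""
    fast = magic_level // 6 + 2
    charged = (magic_level - 25) // 2
    return fast, charged

print(heka_max_hits(85))  # (16, 30) at the level required to wield
print(heka_max_hits(99))  # (18, 37) at level 99 Magic
```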
The Magic requirement to equip Tumeken's heka was increased from level 84 to 85.
The base max hit of the heka's fast attack has been increased by 1.
A heka, also known as a crook, is a hooked sceptre associated with the god Osiris and was a symbol of rule in Ancient Egypt.
Elder Weapon Beta
Theatre of Blood Rewards Testing
Vampyrium vambraces
Nightmare Reward beta
Equipment Rebalancing Beta
Black d'hide armour
Arceuus Spellbook Beta
Resurrect spells
ToA Weapons Beta
Masori equipment
Elidinis' ward (broken)
Keris partisan of breaching
Nex Weapons Beta
Virtus robes
Arena Reward Beta
Maoma's headgear
Koriff's headgear
Saika's headgear
Calamity chests
Calamity breeches
Blighted wave sack
Blighted surge sack
Centurion cuirass
Scroll of imbuing
Wristbands of the arena (imbued)
Humble Chivalry
MatGcd - Maple Help
Home : Support : Online Help : Mathematics : Linear Algebra : LinearAlgebra Package : Modular Subpackage : MatGcd
compute mod m GCD from Matrix of coefficients
MatGcd(m, A, nrow)
mod m Matrix; each row stores the coefficients of a polynomial
number of rows in A containing polynomial coefficients
The MatGcd function computes the GCD of the nrow polynomials formed by multiplication of the input Matrix A by the Vector
[1,x,{x}^{2},...]
. It is capable of computing the mod m GCD of more than two polynomials simultaneously.
Each polynomial must be stored in a row of the input Matrix, in order of increasing degree for the columns. For example, the polynomial
{x}^{2}+2x+3
is stored in a row as [3, 2, 1].
On successful completion, the degree of the GCD is returned, and the coefficients of the GCD are returned in the first row of A.
Note: The returned GCD is not normalized to the leading coefficient 1, as the leading coefficient is required for some modular reconstruction techniques.
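The row-per-polynomial convention is straightforward to mirror outside Maple. The sketch below (plain Python, not the Modular package's compiled routine) runs the Euclidean algorithm mod a prime m over coefficient rows stored in order of increasing degree and, like MatGcd, returns the degree together with the unnormalized GCD coefficients:

```python
def poly_mod_gcd(m, rows):
    """GCD mod a prime m of polynomials given as coefficient rows in
    order of increasing degree (the MatGcd storage convention).
    Returns (degree, coefficients); the result is not normalized to be
    monic, matching the note above."""
    def trim(p):
        # Reduce mod m and strip trailing (leading-degree) zeros.
        p = [c % m for c in p]
        while len(p) > 1 and p[-1] == 0:
            p = p[:-1]
        return p

    def poly_rem(a, b):
        # Remainder of a divided by b mod m (b nonzero and trimmed).
        inv = pow(b[-1], -1, m)  # leading-coefficient inverse (m prime)
        a = trim(a)
        while len(a) >= len(b) and any(a):
            q = (a[-1] * inv) % m
            shift = len(a) - len(b)
            for i, c in enumerate(b):
                a[shift + i] = (a[shift + i] - q * c) % m
            a = trim(a)
        return a

    g = trim(rows[0])
    for row in rows[1:]:
        a, b = g, trim(row)
        while any(b):
            a, b = b, poly_rem(a, b)
        g = a
    return len(g) - 1, g
```

As with MatGcd, the result is determined only up to a nonzero scalar mod m.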
This command is part of the LinearAlgebra[Modular] package, so it can be used in the form MatGcd(..) only after executing the command with(LinearAlgebra[Modular]). However, it can always be used in the form LinearAlgebra[Modular][MatGcd](..).
\mathrm{with}\left(\mathrm{LinearAlgebra}[\mathrm{Modular}]\right):
p≔97
p ≔ 97
An example of three polynomials with a known GCD.
G≔\mathrm{randpoly}\left(x,\mathrm{degree}=2,\mathrm{coeffs}=\mathrm{rand}\left(0..p-1\right)\right)
G ≔ 92 x^2 + 44 x + 95
\mathrm{Ac}≔\mathrm{randpoly}\left(x,\mathrm{degree}=2,\mathrm{coeffs}=\mathrm{rand}\left(0..p-1\right)\right):
A≔\mathrm{expand}\left(G\mathrm{Ac}\right)
A ≔ 460 x^4 + 5556 x^3 + 6983 x^2 + 7402 x + 4085
\mathrm{Bc}≔\mathrm{randpoly}\left(x,\mathrm{degree}=3,\mathrm{coeffs}=\mathrm{rand}\left(0..p-1\right)\right):
B≔\mathrm{expand}\left(G\mathrm{Bc}\right)
B ≔ 3404 x^5 + 7884 x^4 + 8899 x^3 + 16344 x^2 + 6650 x + 9025
\mathrm{Cc}≔\mathrm{randpoly}\left(x,\mathrm{degree}=1,\mathrm{coeffs}=\mathrm{rand}\left(0..p-1\right)\right):
C≔\mathrm{expand}\left(G\mathrm{Cc}\right)
C ≔ 1472 x^3 + 8892 x^2 + 5436 x + 8455
\mathrm{cfs}≔[[\mathrm{seq}\left(\mathrm{coeff}\left(A,x,i\right),i=0..\mathrm{degree}\left(A,x\right)\right)],[\mathrm{seq}\left(\mathrm{coeff}\left(B,x,i\right),i=0..\mathrm{degree}\left(B,x\right)\right)],[\mathrm{seq}\left(\mathrm{coeff}\left(C,x,i\right),i=0..\mathrm{degree}\left(C,x\right)\right)]]
cfs ≔ [[4085, 7402, 6983, 5556, 460], [9025, 6650, 16344, 8899, 7884, 3404], [8455, 5436, 8892, 1472]]
M≔\mathrm{Mod}\left(p,\mathrm{cfs},\mathrm{float}[8]\right)
M ≔ Matrix([[11., 30., 96., 27., 72., 0.], [4., 54., 48., 72., 27., 9.], [16., 4., 65., 17., 0., 0.]])
\mathrm{gdeg}≔\mathrm{MatGcd}\left(p,M,3\right):
M,\mathrm{gdeg}
Matrix([[95., 44., 92., 0., 0., 0.], [0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 0.]]), 2
g≔\mathrm{add}\left(\mathrm{trunc}\left(M[1,i+1]\right){x}^{i},i=0..\mathrm{gdeg}\right)
g ≔ 92 x^2 + 44 x + 95
\mathrm{modp}\left(\mathrm{Expand}\left(\frac{G}{\mathrm{lcoeff}\left(G,x\right)}\mathrm{lcoeff}\left(g,x\right)\right),p\right)
92 x^2 + 44 x + 95
An example of a trivial GCD.
A≔\mathrm{randpoly}\left(x,\mathrm{degree}=5\right)
A ≔ 62 x^5 - 82 x^4 + 80 x^3 - 44 x^2 + 71 x - 17
B≔\mathrm{randpoly}\left(x,\mathrm{degree}=4\right)
B ≔ -75 x^4 - 10 x^3 - 7 x^2 - 40 x + 42
\mathrm{cfs}≔[[\mathrm{seq}\left(\mathrm{coeff}\left(A,x,i\right),i=0..\mathrm{degree}\left(A,x\right)\right)],[\mathrm{seq}\left(\mathrm{coeff}\left(B,x,i\right),i=0..\mathrm{degree}\left(B,x\right)\right)]]:
M≔\mathrm{Mod}\left(p,\mathrm{cfs},\mathrm{integer}[]\right)
M ≔ Matrix([[80, 71, 53, 80, 15, 62], [42, 57, 90, 87, 22, 0]])
\mathrm{MatGcd}\left(p,M,2\right)
0
M
Matrix([[75, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]])
CYK Parsing - Cohen Courses
This is a Method page for CYK Parsing.
The Cocke–Younger–Kasami (CYK) algorithm is an algorithm for parsing with probabilistic context-free grammars (PCFGs). It is bottom-up and makes use of dynamic programming.
The input to the algorithm is a grammar
{\displaystyle G}
in Chomsky Normal Form, and a sentence
{\displaystyle X}
. The idea behind the parsing algorithm is to recursively build parses from the bottom up, maintaining a chart C[i, j] that contains the productions generating X[i] ... X[j].
for each i = 1 to n
for each production A -> a
if a == X[i]
add (A -> a, 0) to C[i,i]
for each L = 2 to n
for each i = 1 to n-L+1
for each j = 1 to L - 1
for each production A -> A1 A2
if A1 in C[i,i+j-1] and A2 in C[i+j,i+L-1]
add (A -> A1 A2, i+j-1) to C[i,i+L-1]
We can recover the parse trees by traversing through the chart, which contains the production rule used and the point where the subtrees are split. The probability of a parse tree is simply the product of the probability of each production rule that we have used.
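The chart-filling loop can be transcribed directly. Below is a minimal non-probabilistic sketch, assuming the CNF grammar is given as lists of unary and binary rules; backpointers are stored so a parse can be recovered, and the toy grammar is illustrative:

```python
def cyk_parse(words, unary, binary):
    """CYK chart parser for a grammar in Chomsky Normal Form.

    unary:  list of (A, a) terminal rules
    binary: list of (A, B, C) rules
    chart[i][j] maps each nonterminal deriving words[i..j] (inclusive)
    to a backpointer (children, split) for recovering the parse."""
    n = len(words)
    chart = [[{} for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        for A, a in unary:
            if a == w:
                chart[i][i][A] = (a, None)
    for span in range(2, n + 1):            # span length
        for i in range(n - span + 1):       # left edge
            j = i + span - 1                # right edge
            for k in range(i, j):           # split: [i..k] and [k+1..j]
                for A, B, C in binary:
                    if B in chart[i][k] and C in chart[k + 1][j]:
                        chart[i][j].setdefault(A, ((B, C), k))
    return chart

# Toy grammar: S -> A B, A -> 'a', B -> 'b'
chart = cyk_parse(["a", "b"], [("A", "a"), ("B", "b")], [("S", "A", "B")])
print("S" in chart[0][1])  # True
```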
CKY parsing was briefly covered in Fall 2011 11-763 Class meeting on Natural Language Parsing.
Daniel H. Younger (1967). Recognition and parsing of context-free languages in time n³. Information and Control 10(2): 189–208. One of the original authors of the CYK algorithm
Inside outside algorithm uses CKY parsing to calculate expected counts of production rules being used.
The estimation of stochastic context-free grammars using the Inside-Outside algorithm
EuDML | Frame characterizations of Besov and Triebel-Lizorkin spaces on spaces of homogeneous type and their applications.
Frame characterizations of Besov and Triebel-Lizorkin spaces on spaces of homogeneous type and their applications.
Yang, Dachun. "Frame characterizations of Besov and Triebel-Lizorkin spaces on spaces of homogeneous type and their applications.." Georgian Mathematical Journal 9.3 (2002): 567-590. <http://eudml.org/doc/50022>.
author = {Yang, Dachun},
keywords = {homogeneous type spaces; \left({x}_{0},r,\beta ,\gamma \right)-type test function; Besov spaces; Triebel-Lizorkin spaces},
title = {Frame characterizations of Besov and Triebel-Lizorkin spaces on spaces of homogeneous type and their applications.},
AU - Yang, Dachun
TI - Frame characterizations of Besov and Triebel-Lizorkin spaces on spaces of homogeneous type and their applications.
KW - homogeneous type spaces; \left({x}_{0},r,\beta ,\gamma \right)-type test function; Besov spaces; Triebel-Lizorkin spaces
homogeneous type spaces, \left({x}_{0},r,\beta ,\gamma \right)-type test function, Besov spaces, Triebel-Lizorkin spaces
Compensate for frequency offset of PAM, PSK, or QAM signal - MATLAB - MathWorks France
MaximumFrequencyOffset
Specific to comm.CoarseFrequencyCompensator
Compensate for Frequency Offset in QPSK Signal
Correlation-Based Estimation
FFT-Based Estimation
Compensate for frequency offset of PAM, PSK, or QAM signal
The comm.CoarseFrequencyCompensator System object™ compensates for the frequency offset of received signals using an open-loop technique.
To compensate for the frequency offset of a PAM, PSK, or QAM signal:
Create the comm.CoarseFrequencyCompensator object and set its properties.
coarseFreqComp = comm.CoarseFrequencyCompensator
coarseFreqComp = comm.CoarseFrequencyCompensator(Name,Value)
coarseFreqComp = comm.CoarseFrequencyCompensator creates a coarse frequency offset compensator System object. This object uses an open-loop technique to estimate and compensate for the carrier frequency offset in a received signal. For more information about the estimation algorithm options, see Algorithms.
coarseFreqComp = comm.CoarseFrequencyCompensator(Name,Value) specifies properties using one or more name-value arguments. For example, Modulation='QPSK' specifies quadrature phase-shift keying modulation.
Modulation type, specified as one of the following:
'BPSK' – Binary phase shift keying
'QPSK' – Quadrature phase shift keying
'OQPSK' – Offset quadrature phase shift keying
'8PSK' – 8-phase shift keying
'PAM' – Pulse amplitude modulation
'QAM' – Quadrature amplitude modulation
Algorithm — Algorithm used to estimate frequency offset
'FFT-based' (default) | 'Correlation-based'
Algorithm used to estimate the frequency offset, specified as 'FFT-based' or 'Correlation-based'.
To enable this property, set Modulation to 'BPSK', 'QPSK', '8PSK', or 'PAM'. This table shows the valid combinations of the modulation type and the estimation algorithm.
Modulation: BPSK, QPSK, 8PSK, PAM; FFT-based: Yes; Correlation-based: Yes
Modulation: OQPSK, QAM; FFT-based: Yes; Correlation-based: No
Use the correlation-based algorithm for HDL implementations and for other situations in which you want to avoid using an FFT.
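For intuition, the FFT-based approach can be sketched in a few lines of NumPy. This outlines the standard open-loop technique rather than the toolbox's implementation; the function name and test values are illustrative:

```python
import numpy as np

def coarse_cfo_estimate(x, fs, M):
    """FFT-based coarse frequency-offset estimate (a sketch of the
    standard open-loop technique, not the toolbox's implementation).

    Raising an M-PSK signal to the M-th power strips the modulation,
    leaving a tone at M times the carrier offset; the FFT peak of x**M
    then locates the offset. Resolution is roughly fs / len(x) / M."""
    spectrum = np.fft.fft(x ** M)
    freqs = np.fft.fftfreq(len(x), d=1 / fs)
    return freqs[np.argmax(np.abs(spectrum))] / M

# QPSK-style (M = 4) tone test: a pure -4 kHz offset with no symbols.
fs, f_off = 80_000.0, -4_000.0
t = np.arange(4096) / fs
x = np.exp(2j * np.pi * f_off * t)
print(coarse_cfo_estimate(x, fs, 4))  # close to -4000, within one FFT bin / M
```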
FrequencyResolution — Frequency resolution
Frequency resolution for the offset frequency estimation in hertz, specified as a positive scalar. This property establishes the FFT length used to perform spectral analysis and must be less than the sample rate.
MaximumFrequencyOffset — Maximum measurable frequency offset
Maximum measurable frequency offset in hertz, specified as a positive scalar.
The value of this property must be less than fsamp / M. For more details, see Correlation-Based Estimation.
To enable this property, set the Algorithm property to 'Correlation-based'.
Sample rate in samples per second, specified as a positive scalar.
4 (default) | even integer, greater than or equal to 4
Samples per symbol, specified as an even positive integer greater than or equal to 4.
To enable this property, set Modulation to 'OQPSK'.
y = coarseFreqComp(x)
[y,estimate] = coarseFreqComp(x)
y = coarseFreqComp(x) returns a signal that compensates for the carrier frequency offset of the input signal.
[y,estimate] = coarseFreqComp(x) returns a scalar estimate of the frequency offset.
y — Compensated output signal
Compensated output signal, returned as a complex column vector with the same dimensions and data type as the input x.
estimate — Estimate of frequency offset
Estimate of the frequency offset, returned as a scalar.
info — Characteristic information about coarse frequency compensator
Compensate for a 4 kHz frequency offset imposed on a noisy QPSK signal.
Set up the example parameters.
nSym = 2048; % Number of input symbols
sps = 4; % Samples per symbol
nSamp = nSym*sps; % Number of samples
fs = 80000; % Sampling frequency (Hz)
Create a square root raised cosine transmit filter.
txFilter = comm.RaisedCosineTransmitFilter( ...
    'FilterSpanInSymbols',8, ...
    'OutputSamplesPerSymbol',sps);
Create a phase frequency offset object to introduce the 4 kHz frequency offset.
freqOffset = comm.PhaseFrequencyOffset( ...
    'FrequencyOffset',-4000, ...
    'SampleRate',fs);
Create a coarse frequency compensator object to compensate for the offset.
freqComp = comm.CoarseFrequencyCompensator( ...
    'Modulation','QPSK', ...
    'SampleRate',fs, ...
    'FrequencyResolution',1);
Generate QPSK symbols, filter the modulated data, pass the signal through an AWGN channel, and apply the frequency offset.
data = randi([0 3],nSym,1);
txSig = txFilter(pskmod(data,4,pi/4));
rxSig = awgn(txSig,25,'measured');
offsetData = freqOffset(rxSig);
Compensate for the frequency offset using the coarse frequency compensator. When the frequency offset is large, applying coarse frequency compensation prior to receive filtering is beneficial because the receive filter would otherwise suppress signal energy lying outside its bandwidth.
[compensatedData,estFreqOffset] = freqComp(offsetData);
Display the estimate of the frequency offset.
estFreqOffset
estFreqOffset = -4.0001e+03
Return information about the coarse frequency compensator System object. To obtain the FFT length, you must call the coarse frequency compensator System object before calling the info object function.
freqCompInfo = info(freqComp)
freqCompInfo = struct with fields:
FFTLength: 131072
Algorithm: 'FFT-based'
Create a spectrum analyzer object and plot the offset and compensated spectra. Verify that the compensated signal has a center frequency at 0 Hz and that the offset signal has a center frequency at -4 kHz.
specAnal = dsp.SpectrumAnalyzer('SampleRate',fs,'ShowLegend',true, ...
'ChannelNames',{'Offset Signal','Compensated Signal'});
specAnal([offsetData compensatedData])
Use the correlation-based estimation algorithm to estimate the frequency offset for PSK and PAM signals. To determine the frequency offset, Δf, the algorithm performs a maximum likelihood (ML) estimation of the complex-valued oscillation exp(j2πΔft), as described in [1]. The observed signal, rk, is represented as
{r}_{k}={e}^{j\left(2\pi \Delta fk{\text{T}}_{s}+\theta \right)},\text{\hspace{0.17em}}1\le k\le N
Ts is the sampling interval, θ is an unknown random phase, and N is the number of samples. The ML estimation of the frequency offset is equivalent to seeking the maximum of the likelihood function,
\Lambda \left(\Delta f\right)\approx {|\sum _{i=1}^{N}{r}_{i}{e}^{-j2\pi \Delta fi{T}_{s}}|}^{2}=\sum _{k=1}^{N}\sum _{m=1}^{N}{r}_{k}{r}_{m}^{*}{e}^{-j2\pi \Delta f{T}_{s}\left(k-m\right)}
After simplifying, the problem is expressed as a discrete Fourier transform, weighted by a parabolic windowing function. It is expressed as
\mathrm{Im}\left\{\sum _{k=1}^{N-1}k\left(N-k\right)R\left(k\right){e}^{-j2\pi \Delta \stackrel{^}{f}k{T}_{s}}\right\}=0
R(k) denotes the estimated autocorrelation of the sequence rk and is represented as
R\left(k\right)\triangleq \frac{1}{N-k}\sum _{i=k+1}^{N}{r}_{i}\text{\hspace{0.17em}}{r}_{i-k}^{*},\text{\hspace{0.17em}}0\le k\le N-1
The term k(N–k) is the parabolic windowing function. In [1], it is shown that R(k) is a poor estimate of the autocorrelation of rk when k = 0 or when k is close to N. Consequently, the windowing function can be expressed as a rectangular sequence of 1s for k = 1, 2, ..., L, where L ≤ N – 1. The result is a modified ML estimation strategy in which
\mathrm{Im}\left\{\sum _{k=1}^{L}R\left(k\right){e}^{-j2\pi \Delta \stackrel{^}{f}k{T}_{s}}\right\}=0
This equation results in an estimate of \Delta \stackrel{^}{f}, given by
\Delta \stackrel{^}{f}\cong \frac{{f}_{samp}}{\pi \left(L+1\right)}\mathrm{arg}\left\{\sum _{k=1}^{L}R\left(k\right)\right\}
The sampling frequency, fsamp, is the reciprocal of Ts. The number of elements used to compute the autocorrelation sequence, L, is determined as
L=\mathrm{round}\left(\frac{{f}_{samp}}{{f}_{max}}\right)-1
fmax is the maximum expected frequency offset, and round is the nearest-integer function. The frequency offset estimate improves when L ≥ 7, which leads to the recommendation that fmax ≤ fsamp / (4M).
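The modified estimator above can be sketched in a few lines of Python. This is an illustrative, noise-free sketch of the correlation-based method, not the MathWorks implementation; the function name and test values are arbitrary.

```python
import cmath
import math

def coarse_freq_est_corr(r, fsamp, fmax):
    """Correlation-based (Luise-Reggiannini style) frequency-offset estimate:
    delta_f ~= fsamp / (pi * (L + 1)) * arg(sum_{k=1}^{L} R(k))."""
    N = len(r)
    L = round(fsamp / fmax) - 1                    # number of autocorrelation lags
    def R(k):                                      # estimated autocorrelation at lag k
        return sum(r[i] * r[i - k].conjugate() for i in range(k, N)) / (N - k)
    s = sum(R(k) for k in range(1, L + 1))
    return fsamp / (math.pi * (L + 1)) * cmath.phase(s)

# noise-free complex tone with a 100 Hz offset, sampled at 8 kHz
fs, df = 8000.0, 100.0
r = [cmath.exp(1j * (2 * math.pi * df * k / fs + 0.3)) for k in range(256)]
est = coarse_freq_est_corr(r, fs, fmax=500.0)      # recovers ~100 Hz
```

With a pure tone, the lag products cancel the unknown phase θ, so the estimate is essentially exact; noise and modulation degrade it gracefully.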
FFT-based estimation algorithms can be used to estimate the frequency offset for all modulation types. The comm.CoarseFrequencyCompensator System object uses two variations.
For BPSK, QPSK, 8PSK, PAM, or QAM modulation, the comm.CoarseFrequencyCompensator System object uses the FFT-based algorithm described in [2]. The algorithm estimates
\Delta \stackrel{^}{f}
by using a periodogram of the mth power of the received signal and is given as
\Delta \stackrel{^}{f}=\frac{{f}_{samp}}{N\cdot m}\mathrm{arg}\underset{f}{\mathrm{max}}|\sum _{k=0}^{N-1}{r}^{m}\left(k\right){e}^{-j2\pi fk{T}_{s}}|,\text{ }\left(-\frac{{R}_{sym}}{2}\le f\le \frac{{R}_{sym}}{2}\right)
where m is the modulation order, r(k) is the received sequence, Rsym is the symbol rate, and N is the number of samples. The algorithm searches for a frequency that maximizes the time average of the mth power of the received signal multiplied by various frequencies in the range of [–Rsym/2, Rsym/2]. Because the form of the algorithm is the definition of the discrete Fourier transform of rm(t), searching for a frequency that maximizes the time average is equivalent to searching for a peak line in the spectrum of rm(t). The number of points required by the FFT is
N={2}^{⌈{\mathrm{log}}_{2}\left(\frac{{f}_{samp}}{{f}_{r}}\right)⌉}
where fr is the desired frequency resolution.
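The periodogram search can be sketched directly from the formula above. The brute-force DFT below stands in for the FFT, and the names and test values are illustrative assumptions, not the MathWorks implementation.

```python
import cmath
import math

def coarse_freq_est_fft(r, fsamp, m, nfft):
    """Locate the peak of the periodogram of the m-th power of r and
    map the winning bin back to a frequency-offset estimate."""
    rm = [x ** m for x in r]                   # wipes out m-ary PSK modulation
    best_bin, best_mag = 0, -1.0
    for b in range(nfft):                      # brute-force DFT; use an FFT in practice
        X = sum(rm[k] * cmath.exp(-2j * math.pi * k * b / nfft)
                for k in range(len(rm)))
        if abs(X) > best_mag:
            best_bin, best_mag = b, abs(X)
    if best_bin > nfft // 2:                   # map bin index to a signed frequency
        best_bin -= nfft
    return fsamp * best_bin / (nfft * m)

fs, df, m = 8000.0, 250.0, 4                   # m = 4 for QPSK
r = [cmath.exp(1j * 2 * math.pi * df * k / fs) for k in range(128)]
est = coarse_freq_est_fft(r, fs, m, nfft=256)  # recovers 250 Hz
```

The division by m at the end undoes the frequency scaling introduced by raising the signal to the m-th power.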
For OQPSK modulation, the comm.CoarseFrequencyCompensator System object uses the FFT-based algorithm described in [4]. The algorithm searches for spectral peaks at ±200 kHz around the symbol rate. This technique locates the desired peaks in the presence of interference from spectral content around baseband frequencies due to filtering.
[1] Luise, M., and R. Reggiannini. “Carrier Frequency Recovery in All-Digital Modems for Burst-Mode Transmissions.” IEEE® Transactions on Communications, vol. 43, no. 2/3/4, Feb. 1995, pp. 1169–78.
[2] Wang, Y., et al. “Non-Data-Aided Feedforward Carrier Frequency Offset Estimators for QAM Constellations: A Nonlinear Least-Squares Approach.” EURASIP Journal on Advances in Signal Processing, vol. 2004, no. 13, Dec. 2004, p. 856139.
[3] Nakagawa, Tadao, et al. “Non-Data-Aided Wide-Range Frequency Offset Estimator for QAM Optical Coherent Receivers.” Optical Fiber Communication Conference/National Fiber Optic Engineers Conference 2011, OSA, 2011, p. OMJ1.
[4] Olds, Jonathan. Designing an OQPSK demodulator.
comm.PhaseFrequencyOffset | comm.CarrierSynchronizer | dsp.FFT
Determine rotation vector from quaternion
Quaternions to Rotation Angles
The Quaternions to Rotation Angles block converts the four-element quaternion vector (q0, q1, q2, q3), into the rotation described by the three rotation angles (R1, R2, R3). The block generates the conversion by comparing elements in the direction cosine matrix (DCM) as a function of the rotation angles. The rotation used in this block is a passive transformation between two coordinate systems. The elements in the DCM are functions of a unit quaternion vector. Aerospace Blockset™ uses quaternions that are defined using the scalar-first convention. This block normalizes all quaternion inputs. For more information on the direction cosine matrix, see Algorithms.
For the ZYX, ZXY, YXZ, YZX, XYZ, and XZY rotations, the block generates an R2 angle that lies between ±pi/2 radians, and R1 and R3 angles that lie between ±pi radians.
For the ZYZ, ZXZ, YXY, YZY, XYX, and XZX rotations, the block generates an R2 angle that lies between 0 and pi radians, and R1 and R3 angles that lie between ±pi radians. However, in the latter case, when R2 is 0, R3 is set to 0 radians.
Rotation angles, returned as a 3-by-1 vector, in radians.
Output rotation order for the three rotation angles.
Values: ZYX | ZYZ | ZXY | ZXZ | YXZ | YXY | YZX | YZY | XYZ | XYX | XZY | XZX
The elements in the DCM are functions of a unit quaternion vector. For example, for the rotation order z-y-x, the DCM is defined as:
DCM=\left[\begin{array}{lll}\mathrm{cos}\theta \mathrm{cos}\psi \hfill & \mathrm{cos}\theta \mathrm{sin}\psi \hfill & -\mathrm{sin}\theta \hfill \\ \left(\mathrm{sin}\varphi \mathrm{sin}\theta \mathrm{cos}\psi -\mathrm{cos}\varphi \mathrm{sin}\psi \right)\hfill & \left(\mathrm{sin}\varphi \mathrm{sin}\theta \mathrm{sin}\psi +\mathrm{cos}\varphi \mathrm{cos}\psi \right)\hfill & \mathrm{sin}\varphi \mathrm{cos}\theta \hfill \\ \left(\mathrm{cos}\varphi \mathrm{sin}\theta \mathrm{cos}\psi +\mathrm{sin}\varphi \mathrm{sin}\psi \right)\hfill & \left(\mathrm{cos}\varphi \mathrm{sin}\theta \mathrm{sin}\psi -\mathrm{sin}\varphi \mathrm{cos}\psi \right)\hfill & \mathrm{cos}\varphi \mathrm{cos}\theta \hfill \end{array}\right]
The DCM defined by a unit quaternion vector is:
DCM=\left[\begin{array}{lll}\left({q}_{0}^{2}+{q}_{1}^{2}-{q}_{2}^{2}-{q}_{3}^{2}\right)\hfill & 2\left({q}_{1}{q}_{2}+{q}_{0}{q}_{3}\right)\hfill & 2\left({q}_{1}{q}_{3}-{q}_{0}{q}_{2}\right)\hfill \\ 2\left({q}_{1}{q}_{2}-{q}_{0}{q}_{3}\right)\hfill & \left({q}_{0}^{2}-{q}_{1}^{2}+{q}_{2}^{2}-{q}_{3}^{2}\right)\hfill & 2\left({q}_{2}{q}_{3}+{q}_{0}{q}_{1}\right)\hfill \\ 2\left({q}_{1}{q}_{3}+{q}_{0}{q}_{2}\right)\hfill & 2\left({q}_{2}{q}_{3}-{q}_{0}{q}_{1}\right)\hfill & \left({q}_{0}^{2}-{q}_{1}^{2}-{q}_{2}^{2}+{q}_{3}^{2}\right)\hfill \end{array}\right]
From the preceding equation, you can derive the following relationships between DCM elements and individual rotation angles for a ZYX rotation order:
\begin{array}{c}\varphi =\text{atan}\left(DCM\left(2,3\right),DCM\left(3,3\right)\right)\\ =\text{atan}\left(2\left({q}_{2}{q}_{3}+{q}_{0}{q}_{1}\right),\left({q}_{0}^{2}-{q}_{1}^{2}-{q}_{2}^{2}+{q}_{3}^{2}\right)\right)\\ \theta =\text{asin}\left(-DCM\left(1,3\right)\right)\\ =\text{asin}\left(-2\left({q}_{1}{q}_{3}-{q}_{0}{q}_{2}\right)\right)\\ \psi =\text{atan}\left(DCM\left(1,2\right),DCM\left(1,1\right)\right)\\ =\text{atan}\left(2\left({q}_{1}{q}_{2}+{q}_{0}{q}_{3}\right),\left({q}_{0}^{2}+{q}_{1}^{2}-{q}_{2}^{2}-{q}_{3}^{2}\right)\right)\end{array}
where Ψ is R1, Θ is R2, and Φ is R3.
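The ZYX relations above translate directly into code. The following is a minimal Python sketch, not the Simulink block itself; the function name is arbitrary. It normalizes the quaternion first, as the block does.

```python
import math

def quat_to_zyx(q0, q1, q2, q3):
    """Scalar-first quaternion -> (R1, R2, R3) = (psi, theta, phi)
    for the ZYX rotation order, via the DCM element relations."""
    n = math.sqrt(q0*q0 + q1*q1 + q2*q2 + q3*q3)   # normalize the input
    q0, q1, q2, q3 = q0/n, q1/n, q2/n, q3/n
    psi   = math.atan2(2*(q1*q2 + q0*q3), q0*q0 + q1*q1 - q2*q2 - q3*q3)  # R1
    theta = math.asin(-2*(q1*q3 - q0*q2))                                 # R2
    phi   = math.atan2(2*(q2*q3 + q0*q1), q0*q0 - q1*q1 - q2*q2 + q3*q3)  # R3
    return psi, theta, phi

# 90-degree rotation about z: q = (cos 45deg, 0, 0, sin 45deg)
psi, theta, phi = quat_to_zyx(math.cos(math.pi/4), 0.0, 0.0, math.sin(math.pi/4))
```

For this input, psi is pi/2 while theta and phi are zero, matching a pure z-axis rotation.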
Direction Cosine Matrix to Rotation Angles | Direction Cosine Matrix to Quaternions | Quaternions to Direction Cosine Matrix | Rotation Angles to Direction Cosine Matrix | Rotation Angles to Quaternions
A Comparison of Turbulence Levels From Particle Image Velocimetry and Constant Temperature Anemometry Downstream of a Low-Pressure Turbine Cascade at High-Speed Flow Conditions | J. Turbomach. | ASME Digital Collection
Silvio Chemnitz,
Email: silvio.chemnitz@unibw.de
Chemnitz, S., and Niehuis, R. (June 30, 2020). "A Comparison of Turbulence Levels From Particle Image Velocimetry and Constant Temperature Anemometry Downstream of a Low-Pressure Turbine Cascade at High-Speed Flow Conditions." ASME. J. Turbomach. July 2020; 142(7): 071008. https://doi.org/10.1115/1.4046272
The development and verification of new turbulence models for Reynolds-averaged Navier–Stokes (RANS) equation-based numerical methods require reliable experimental data together with a deep understanding of the underlying turbulence mechanisms. Highly accurate turbulence measurements are normally limited to simplified test cases under optimal experimental conditions. This work presents comprehensive three-dimensional data of turbulent flow quantities, comparing advanced constant temperature anemometry (CTA) and stereoscopic particle image velocimetry (PIV) methods under realistic test conditions. The experiments are conducted downstream of a linear low-pressure turbine cascade at engine-relevant high-speed operating conditions. The special combination of high subsonic Mach number and low Reynolds number results in a low-density test environment that is challenging for all applied measurement techniques. Detailed discussions of the influences affecting the measured result for each specific measuring technique are given. The presented time-mean fields and total turbulence data demonstrate, with average deviations of ΔTu < 0.4% and ΔC/Cref < 0.9%, extraordinarily good agreement between the results from the triple-sensor hot-wire probe and the 2D3C-PIV setup. Most differences between PIV and CTA can be explained by the finite probe size and individual geometry.
turbine aerodynamic design, measurement techniques, turbine blade and measurement advancements
Cascades (Fluid dynamics), Flow (Dynamics), Particulate matter, Pressure, Probes, Temperature, Turbines, Turbulence, Wakes, Wire, Density, Calibration
Sample size determination - Wikipedia
Estimation of a proportion
A relatively simple situation is estimation of a proportion. If X is the count of "positive" observations in a sample of size n, the proportion is estimated by
{\displaystyle {\hat {p}}=X/n}
The variance of this estimator is proportional to
{\displaystyle p(1-p)}
and for sufficiently large n, the distribution of
{\displaystyle {\hat {p}}}
will be closely approximated by a normal distribution.[1] Using this and the Wald method for the binomial distribution yields a confidence interval of the form
{\displaystyle \left({\widehat {p}}-Z{\sqrt {\frac {0.25}{n}}},\quad {\widehat {p}}+Z{\sqrt {\frac {0.25}{n}}}\right)}
where Z is the standard Z-score for the desired confidence level and 0.25 is the worst-case value of p(1 − p), attained at p = 1/2. If this interval is to have total width W, setting
{\displaystyle Z{\sqrt {\frac {0.25}{n}}}=W/2}
and solving for n yields
{\displaystyle n={\frac {Z^{2}}{W^{2}}}}
If an anticipated value of p is available, the condition
{\displaystyle Z{\sqrt {\frac {p(1-p)}{n}}}=W/2}
gives instead
{\displaystyle n={\frac {4Z^{2}p(1-p)}{W^{2}}}}
For example, for a 95% confidence level Z ≈ 1.96, and the conservative interval is
{\displaystyle \left({\widehat {p}}-1.96{\sqrt {\frac {0.25}{n}}},{\widehat {p}}+1.96{\sqrt {\frac {0.25}{n}}}\right)}
Approximating 1.96 by 2, the equation for its total width,
{\displaystyle 4{\sqrt {\frac {0.25}{n}}}=W}
can be solved for n, yielding[2][3] n = 4/W² = 1/B², where B is the error bound on the estimate, i.e., the estimate is usually given as within ± B. So, for B = 10% one requires n = 100; for B = 5% one needs n = 400; for B = 3% the requirement approximates to n = 1000; while for B = 1% a sample size of n = 10000 is required. These numbers are quoted often in news reports of opinion polls and other sample surveys. Since n is the minimum number of sample points needed, reported sample sizes are rounded up, and the number of respondents must lie on or above that minimum.
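The arithmetic above is easy to wrap in a helper; the sketch below uses arbitrary names. With the conservative p = 1/2 and Z ≈ 2, it reproduces the quoted n = 1/B² figures.

```python
import math

def sample_size_proportion(W, Z=1.96, p=None):
    """Minimum n for a Wald interval of total width W.
    p=None uses the conservative worst case p(1-p) = 0.25."""
    var = 0.25 if p is None else p * (1 - p)
    return math.ceil(4 * Z * Z * var / (W * W))

n_b5 = sample_size_proportion(W=0.10, Z=2.0)   # B = 5%  -> 400
n_b1 = sample_size_proportion(W=0.02, Z=2.0)   # B = 1%  -> 10000
```

Note that W = 2B: B is the half-width (margin of error), while W is the full interval width.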
Estimation of a mean
When estimating the population mean of a normally distributed variable with known standard deviation σ, the standard error of the sample mean is
{\displaystyle {\frac {\sigma }{\sqrt {n}}}.}
This yields a confidence interval of the form
{\displaystyle \left({\bar {x}}-{\frac {Z\sigma }{\sqrt {n}}},\quad {\bar {x}}+{\frac {Z\sigma }{\sqrt {n}}}\right)}
If this interval is to have total width W, the condition
{\displaystyle {\frac {Z\sigma }{\sqrt {n}}}=W/2}
can be solved for n, yielding
{\displaystyle n={\frac {4Z^{2}\sigma ^{2}}{W^{2}}}}
For example, estimating with 95% confidence (Z = 1.96) a mean with σ = 15 to within a total interval width of W = 6 requires
{\displaystyle {\frac {4\times 1.96^{2}\times 15^{2}}{6^{2}}}=96.04}
observations, which rounds up to a sample size of 97.
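The same calculation as code — a small sketch with arbitrary names:

```python
import math

def sample_size_mean(sigma, W, Z=1.96):
    """Minimum n so a known-sigma confidence interval for the mean
    has total width W: n = 4 Z^2 sigma^2 / W^2, rounded up."""
    return math.ceil(4 * Z * Z * sigma * sigma / (W * W))

n = sample_size_mean(sigma=15, W=6)            # 96.04 -> 97
```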
Required sample sizes for hypothesis tests
A reference table (not reproduced here) can be used in a two-sample t-test to estimate the sample sizes of an experimental group and a control group that are of equal size; that is, the total number of individuals in the trial is twice that of the number given, with a desired significance level of 0.05.[4] The parameters used are the desired statistical power and the standardized effect size.
Mead's resource equation
Mead's resource equation is often used for estimating sample sizes of laboratory animals, as well as in many other laboratory experiments. It may not be as accurate as other methods, but it gives a hint of the appropriate sample size when parameters such as expected standard deviations or expected differences in values between groups are unknown or very hard to estimate.[5]
The equation is:[5]
{\displaystyle E=N-B-T,}
where N is the total degrees of freedom (one less than the total number of individuals), B the blocking (environmental) degrees of freedom, T the treatment degrees of freedom, and E the remaining error degrees of freedom, which should roughly lie between 10 and 20.
For example, if a study using laboratory animals is planned with four treatment groups (T=3), with eight animals per group, making 32 animals total (N=31), without any further stratification (B=0), then E would equal 28, which is above the cutoff of 20, indicating that sample size may be a bit too large, and six animals per group might be more appropriate.[6]
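The bookkeeping is a single subtraction; a sketch with arbitrary names:

```python
def mead_error_df(N, B, T):
    """Mead's resource equation E = N - B - T, with every term expressed
    in degrees of freedom; E between roughly 10 and 20 is considered adequate."""
    return N - B - T

# four treatment groups (T = 3), eight animals each: N = 32 - 1 = 31, no blocking
E = mead_error_df(N=31, B=0, T=3)              # -> 28, above the cutoff of 20
```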
By contrast, the required sample size for a given power can be derived directly. Consider testing the null hypothesis
{\displaystyle H_{0}:\mu =0}
against the alternative
{\displaystyle H_{a}:\mu =\mu ^{*}}
for a normally distributed variable with known standard deviation σ. The test rejects the null hypothesis when the sample mean
{\displaystyle {\bar {x}}}
exceeds the critical value
{\displaystyle z_{\alpha }\sigma /{\sqrt {n}}}
so that
{\displaystyle \Pr({\bar {x}}>z_{\alpha }\sigma /{\sqrt {n}}\mid H_{0})=\alpha }
To achieve power of at least 1 − β, the sample size must also satisfy
{\displaystyle \Pr({\bar {x}}>z_{\alpha }\sigma /{\sqrt {n}}\mid H_{a})\geq 1-\beta }
which holds when
{\displaystyle n\geq \left({\frac {z_{\alpha }+\Phi ^{-1}(1-\beta )}{\mu ^{*}/\sigma }}\right)^{2}}
where
{\displaystyle \Phi }
is the standard normal cumulative distribution function.
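The final inequality can be evaluated with the standard library's normal distribution. The following is a sketch assuming known σ and a one-sided test; the function name and defaults are arbitrary.

```python
import math
from statistics import NormalDist

def sample_size_one_sided(mu_star, sigma, alpha=0.05, beta=0.20):
    """Smallest n with n >= ((z_alpha + Phi^{-1}(1 - beta)) / (mu*/sigma))^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)  # critical value under H0
    z_beta = NormalDist().inv_cdf(1 - beta)    # power requirement under Ha
    return math.ceil(((z_alpha + z_beta) / (mu_star / sigma)) ** 2)

# detect a mean of half a standard deviation with alpha = 0.05 and 80% power
n = sample_size_one_sided(mu_star=0.5, sigma=1.0)
```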
Stratified sample size
There are many reasons to use stratified sampling:[7] to decrease variances of sample estimates, to use partly non-random methods, or to study strata individually. A useful, partly non-random method would be to sample individuals where easily accessible, but, where not, sample clusters to save travel costs.[8]
With stratified sampling, the sample mean is the weighted combination
{\displaystyle {\bar {x}}_{w}=\sum _{h=1}^{H}W_{h}{\bar {x}}_{h},}
with variance
{\displaystyle \operatorname {Var} ({\bar {x}}_{w})=\sum _{h=1}^{H}W_{h}^{2}\operatorname {Var} ({\bar {x}}_{h}).}
Here
{\displaystyle W_{h}}
is the weight of stratum h, with
{\displaystyle W_{h}=N_{h}/N}
for proportional representation, and the total sample size is
{\displaystyle n=\sum n_{h}}
When sampling without replacement within each stratum, the variance becomes
{\displaystyle \operatorname {Var} ({\bar {x}}_{w})=\sum _{h=1}^{H}W_{h}^{2}\operatorname {Var} ({\bar {x}}_{h})\left({\frac {1}{n_{h}}}-{\frac {1}{N_{h}}}\right),}
which, for a fixed total sample size, is minimized by the allocation
{\displaystyle n_{h}/N_{h}=kS_{h}}
where
{\displaystyle S_{h}={\sqrt {\operatorname {Var} ({\bar {x}}_{h})}}}
and
{\displaystyle k}
is a constant chosen such that
{\displaystyle \sum {n_{h}}=n}
If, in addition, sampling a unit in stratum h has cost
{\displaystyle C_{h}}
the cost-optimal sampling fraction is
{\displaystyle {\frac {n_{h}}{N_{h}}}={\frac {KS_{h}}{\sqrt {C_{h}}}},}
where
{\displaystyle K}
is again chosen such that
{\displaystyle \sum {n_{h}}=n}
Equivalently,
{\displaystyle n_{h}={\frac {K'W_{h}S_{h}}{\sqrt {C_{h}}}}.}
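The cost-aware allocation rule can be sketched as follows; all names and the two-stratum example are illustrative.

```python
import math

def optimal_allocation(n, W, S, C):
    """Split a total sample size n across strata with n_h proportional to
    W_h * S_h / sqrt(C_h), per the cost-optimal rule above."""
    raw = [w * s / math.sqrt(c) for w, s, c in zip(W, S, C)]
    scale = n / sum(raw)                       # plays the role of the constant K'
    return [scale * r for r in raw]

# two strata: equal weights and costs, stratum 2 twice as variable
alloc = optimal_allocation(300, W=[0.5, 0.5], S=[1.0, 2.0], C=[1.0, 1.0])
```

With equal costs this reduces to Neyman allocation: the more variable stratum receives proportionally more of the sample.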
Qualitative research
Sample size determination in qualitative studies takes a different approach. It is generally a subjective judgment, taken as the research proceeds.[13] One approach is to continue to include further participants or material until saturation is reached.[14] The number needed to reach saturation has been investigated empirically.[15][16][17][18]
There is a paucity of reliable guidance on estimating sample sizes before starting the research, with a range of suggestions given.[16][19][20][21] A tool akin to a quantitative power calculation, based on the negative binomial distribution, has been suggested for thematic analysis.[22][21]
^ NIST/SEMATECH, "7.2.4.2. Sample sizes required", e-Handbook of Statistical Methods.
^ "Inference for Regression". utdallas.edu.
^ "Confidence Interval for a Proportion" Archived 2011-08-23 at the Wayback Machine
^ a b Chapter 13, page 215, in: Kenny, David A. (1987). Statistics for the social and behavioral sciences. Boston: Little, Brown. ISBN 978-0-316-48915-7.
^ a b Kirkwood, James; Robert Hubrecht (2010). The UFAW Handbook on the Care and Management of Laboratory and Other Research Animals. Wiley-Blackwell. p. 29. ISBN 978-1-4051-7523-4. online Page 29
^ Isogenic.info > Resource equation by Michael FW Festing. Updated Sept. 2006
^ Kish (1965, Section 3.1)
^ Kish (1965), p. 148.
^ Kish (1965), p. 78.
^ Sandelowski, M. (1995). Sample size in qualitative research. Research in Nursing & Health, 18, 179–183
^ Glaser, B. (1965). The constant comparative method of qualitative analysis. Social Problems, 12, 436–445
^ Francis, Jill J.; Johnston, Marie; Robertson, Clare; Glidewell, Liz; Entwistle, Vikki; Eccles, Martin P.; Grimshaw, Jeremy M. (2010). "What is an adequate sample size? Operationalising data saturation for theory-based interview studies" (PDF). Psychology & Health. 25 (10): 1229–1245. doi:10.1080/08870440903194015. PMID 20204937. S2CID 28152749.
^ a b Guest, Greg; Bunce, Arwen; Johnson, Laura (2006). "How Many Interviews Are Enough?". Field Methods. 18: 59–82. doi:10.1177/1525822X05279903. S2CID 62237589.
^ Wright, Adam; Maloney, Francine L.; Feblowitz, Joshua C. (2011). "Clinician attitudes toward and use of electronic problem lists: A thematic analysis". BMC Medical Informatics and Decision Making. 11: 36. doi:10.1186/1472-6947-11-36. PMC 3120635. PMID 21612639.
^ Mason, Mark (2010). "Sample Size and Saturation in PhD Studies Using Qualitative Interviews". Forum Qualitative Sozialforschung. 11 (3): 8.
^ Emmel, N. (2013). Sampling and choosing cases in qualitative research: A realist approach. London: Sage.
^ Onwuegbuzie, Anthony J.; Leech, Nancy L. (2007). "A Call for Qualitative Power Analyses". Quality & Quantity. 41: 105–121. doi:10.1007/s11135-005-1098-1. S2CID 62179911.
^ a b Fugard AJB; Potts HWW (10 February 2015). "Supporting thinking on sample sizes for thematic analyses: A quantitative tool" (PDF). International Journal of Social Research Methodology. 18 (6): 669–684. doi:10.1080/13645579.2015.1005453. S2CID 59047474.
^ Galvin R (2015). How many interviews are enough? Do qualitative interviews in building energy consumption research produce reliable knowledge? Journal of Building Engineering, 1:2–12.
Kish, L. (1965). Survey Sampling. Wiley. ISBN 978-0-471-48900-9.
Which of the following should be most viscous? — Competitive Exam India
(a) C{H}_{3}OH
(b) C{H}_{3}C{H}_{2}OH
(c) C{H}_{3}OC{H}_{3}
(d) HOC{H}_{2}C{H}_{2}OH
Answer: (d) HOC{H}_{2}C{H}_{2}OH (ethylene glycol). It carries two –OH groups, so it forms the most extensive hydrogen-bonding network of the four and is therefore the most viscous.
Lorentz force - Citizendium
In physics, the Lorentz force is the force on an electrically charged particle that moves through combined magnetic and electric fields.
The Lorentz force has two vector components, one proportional to the magnetic field and one proportional to the electric field. These components must be added vectorially to obtain the total force.
1. The strength (absolute value) of the magnetic component is proportional to four factors: the charge q of the particle, the speed v of the particle, the intensity B of the magnetic induction, and the sine of the angle between the vectors v and B. The direction of the magnetic component is given by the right hand rule: put your right hand along v with fingers pointing in the direction of v and the open palm toward the vector B. Stretch the thumb of your right hand, then the Lorentz force is along it, pointing from your wrist to the tip of your thumb.
2. The electric component of the Lorentz force is equal to q•E (charge of the particle times the electric field). It is in the same direction as E for positively charged particles and in the opposite direction of E for negatively charged particles.
The force is named after the Dutch physicist Hendrik Antoon Lorentz, who gave its equation in 1892.[1]
The Lorentz force F is given by the expression
{\displaystyle \mathbf {F} =q(\mathbf {E} +k\mathbf {v} \times \mathbf {B} ),}
where k is a constant depending on the units. In SI units k = 1; in Gaussian units k = 1/c, where c is the speed of light in vacuum (299 792 458 m s−1 exactly). The quantity q is the electric charge of the particle and v is its velocity. The vector B is the magnetic induction. The product v × B is the vector product, also referred to as the cross product.
Like any vector field, the electric field E appearing in the Lorentz force F is the sum of a longitudinal (curl-free) component and a transverse (divergence-free) component. The following form holds when the Coulomb gauge ∇ • A = 0 is chosen,
{\displaystyle \mathbf {E} (\mathbf {r} ,t)=-{\boldsymbol {\nabla }}V(\mathbf {r} ,t)-k{\frac {\partial \mathbf {A} (\mathbf {r} ,t)}{\partial t}},}
where V is a scalar (electric) potential and the (magnetic) vector potential A is connected to B via
{\displaystyle \mathbf {B} (\mathbf {r} ,t)={\boldsymbol {\nabla }}\times \mathbf {A} (\mathbf {r} ,t).}
The operator ∇ acting on V gives the gradient of V, while ∇ × A is the curl of A. Since ∇ × (∇ V) = 0 and ∇ • A = 0, the components of E are indeed curl-free and divergence-free, respectively.
Note that the Lorentz force does not depend on the medium; the electric force does not contain the electric permittivity ε, and the magnetic force does not contain the magnetic permeability μ.
If B is static (does not depend on time) then A is also static and
{\displaystyle \mathbf {E} =-{\boldsymbol {\nabla }}V\quad {\hbox{and}}\quad \mathbf {F} =-q{\boldsymbol {\nabla }}V.}
Non-relativistically, the electric field E may be absent (zero) while B is static and non-zero; the Lorentz force is then given by,
{\displaystyle \mathbf {F} =k\,q\,\mathbf {v} \times \mathbf {B} ,\quad {\hbox{with}}\quad F=k\,q\,v\,B\sin \alpha ,}
where k = 1 for SI units and 1/c for Gaussian units and α the angle between v and B. The italic, non-bold, quantities are the strengths (lengths) of the corresponding vectors
{\displaystyle F\equiv |\mathbf {F} |,\quad v\equiv |\mathbf {v} |,\quad B\equiv |\mathbf {B} |.}
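The force law is a one-liner per component. A small Python sketch (not part of the original article; the names are arbitrary):

```python
def lorentz_force(q, E, v, B, k=1.0):
    """F = q (E + k v x B); k = 1 in SI units, 1/c in Gaussian units."""
    cross = (v[1]*B[2] - v[2]*B[1],            # (v x B)_x
             v[2]*B[0] - v[0]*B[2],            # (v x B)_y
             v[0]*B[1] - v[1]*B[0])            # (v x B)_z
    return tuple(q * (E[i] + k * cross[i]) for i in range(3))

# unit charge moving along x through B along z: force along -y (right-hand rule)
F = lorentz_force(1.0, E=(0.0, 0.0, 0.0), v=(2.0, 0.0, 0.0), B=(0.0, 0.0, 3.0))
```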
The Lorentz force as a vector (cross) product was given by Oliver Heaviside in 1889, three years before Lorentz.[2]
In special relativity the Lorentz force transforms as a four-vector under a Lorentz transformation. Because relativistically the fields E and B are components of the same second rank tensor, a Lorentz transformation gives a linear combination of E and B, and hence in relativity theory these two fields do not have an independent existence.[3]
↑ H. A. Lorentz, La théorie électromagnétique de Maxwell et son application aux corps mouvants [The electromagnetic theory of Maxwell and its application to moving bodies], Archives néerlandaises des Sciences exactes et naturelles, vol. 25 p. 363 (1892).
↑ E. Whittaker, A History of the Theories of Aether and Electricity, vol. I, 2nd edition, Nelson, London (1951). Reprinted by the American Institute of Physics, (1987). p. 310. It is of interest to note that James Clerk Maxwell gave the expression for the Lorentz force already in his historic memoir of 1865. (J. Clerk Maxwell, A Dynamical Theory of the Electromagnetic Field, Phil. Trans. Roy. Soc., vol. 155, pp. 459 - 512 (1865) online)
↑ J. D. Jackson, Classical Electrodynamics, John Wiley, New York, 2nd ed. (1975), p. 553
Analytical Chemistry: Uses Of Ammonium Hydroxide And Sodium Hydroxide, Studymaterial: ICSE Class 10 CHEMISTRY, Concise Chemistry 10 - Meritnation
In the qualitative analysis of compounds, their colour helps in their identification. The table given below shows some examples of colourless and coloured ions.

Colourless cations                    Coloured cations
Cation      Symbol                    Cation      Symbol   Colour
Ammonium    {\mathrm{NH}}_{4}^{+}     Cupric      Cu2+     Blue
Sodium      Na+                       Ferrous     Fe2+     Light green
Potassium   K+                        Ferric      Fe3+     Yellowish-brown
Calcium     Ca2+                      Nickel      Ni2+     Green
Magnesium   Mg2+                      Chromium    Cr3+     Green
Generate CRC code bits and append them to input data
The NR CRC Encoder block calculates and generates a short, fixed-length binary sequence, known as the cyclic redundancy check (CRC) checksum, appends it to each frame of streaming data samples, and outputs CRC-encoded data. The block accepts and returns a data sample stream with accompanying control signals. The control signals indicate the validity of the samples and the boundaries of the frame.
The block supports scalar and vector inputs and outputs data as either a scalar or vector based on the input data. To achieve higher throughput, the block accepts a binary vector or unsigned integer scalar input and implements a parallel architecture. The input data width must be less than or equal to the length of the CRC polynomial, and the length of the CRC polynomial must be divisible by the input data width. The block supports all CRC polynomials specified in the 5G new radio (NR) standard 3GPP TS 38.212 [1]. When you select the CRC24C polynomial, the block supports a dynamic CRC mask.
Input data, specified as a binary scalar, binary vector, or unsigned integer scalar.
Vector – Specify a vector of binary values of size N. For this case, the block supports Boolean and ufix1 data.
To enable this port, set the CRC type parameter to CRC24C and select the Enable CRC mask input port parameter.
CRC-encoded data with appended CRC checksum, returned as a scalar or vector. The output data type and size are the same as the input data.
Select the type of CRC. Each CRC type indicates a polynomial, as shown in this table.
These CRC polynomials are specified according to the 5G NR standard 3GPP TS 38.212 [1].
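Functionally, CRC generation is long division over GF(2): divide the message (extended with zeros) by the generator polynomial and append the remainder. The sketch below shows the serial reference behaviour with a toy 3-bit generator; it is not the block's parallel hardware architecture, and the polynomial is illustrative rather than one of the 3GPP generators.

```python
def crc_append(bits, poly_bits):
    """Append the CRC of `bits`, computed by GF(2) long division with the
    generator `poly_bits` (MSB first, leading 1 included)."""
    crc_len = len(poly_bits) - 1
    reg = list(bits) + [0] * crc_len           # message followed by crc_len zeros
    for i in range(len(bits)):                 # serial long division
        if reg[i]:
            for j, p in enumerate(poly_bits):
                reg[i + j] ^= p
    return list(bits) + reg[-crc_len:]         # message with remainder appended

# toy generator x^3 + x + 1 -> 1011; message 1101 gets checksum 001
out = crc_append([1, 1, 0, 1], [1, 0, 1, 1])
```

Re-dividing the returned codeword by the same generator leaves a zero remainder, which is how the receiver checks it.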
{X}^{\prime }={F}_{W}\otimes X\oplus D.
The latency of the block varies with the CRC polynomial length, the type of input (scalar or vector), and the data width of the input. The latency of the block is calculated from the start of the input frame to the start of the output frame by using the formula (CRCLength/N) + 3 clock cycles, where N is the input data width.
The frame gap between two frames (that is, from ctrl.end of the first frame to ctrl.start of the next frame) must be greater than the length of the CRC polynomial plus the latency of the first frame. With continuous inputs, the block discards the first frame and starts processing the next frame.
This figure shows a sample output and latency of the NR CRC Encoder block when you specify a scalar input of data type ufix8 with a data width of 8, and set the CRC type parameter to CRC16. The latency of the block is 5 clock cycles, as shown in this figure.
This figure shows a sample output and latency of the NR CRC Encoder block when you specify a vector input of data type ufix1 with a data width 3, set the CRC type parameter to CRC24C, and select the Enable CRC mask input port parameter. The latency of the block is 11 clock cycles, as shown in this figure.
This table shows the resource and performance data synthesis results of the block when the CRC type parameter is set to CRC24A for a scalar input of data type ufix1, scalar input of data type ufix24, and 24-by-1 vector input of data type ufix1. The generated HDL is targeted to the Xilinx® Zynq®-7000 ZC706 evaluation board.
To Susan Darwin 3[–4] September 1845
Wednesday 3rd Sept., 1845.
Please to thank Jos1 for the Railway Dividend; and further ask him how it comes, that as additional shares were bought in our three Railways in July of this year, the last Dividends in all three have been the same as hitherto. It is long since I have written to you, and now I am going to write such a letter, as I verily believe no other family in Britain would care to receive, viz., all about our household and money affairs; but you have often said that you like such particulars. First, however, I am sorry to say, that poor Emma is more uncomfortable to-day than before: but her teeth are better than two days: she really has had a most suffering time and it has been so provoking that no one could come here to comfort her: Elizabeth2 would have been such a pleasure to her. When we shall move, and what we shall do, must remain in the clouds.3 Erasmus is here yet; he must have found it woefully dull for I also have not been up to my average: but as he was to have gone on Saturday and then on Monday and willingly stayed, we have the real pleasure to think, wonderful as it is, that Down is not now duller to him than Park St. I have taken my Bismuth regularly, I think it has not done me quite so much good, as before;4 but I am recovering from too much exertion with my Journal: I am extremely pleased my Father likes the new edition.
I have just balanced my ½
years accounts and feel exactly as if some one had given me one or two hundred per annum: this last half year, our expenses with some extras has only been 456£, that is excluding the new Garden wall; so that allowing Christmas half year to be about a 100£ more, we are living on about 1000£ per annum: moreover this last year, subtracting extraordinary receipts, has been 1400£ so that we are as rich as Jews. Caroline5 always foresaw that our expenditure would fall. We are now undertaking some great earthworks; making a new walk in the K. Garden; and removing the mound under the Yews, on which the evergreens, we found did badly, and which, as Erasmus has always insisted was a great blemish in hiding part of the Field and the old Scotch-firs; and now that we have Sale’s corner, we do not want it for shelter. We are making a mound which will be excavated by all the family, viz., in front of the door out of the house, between two of the Lime Trees: we find the winds from the N. intolerable, and we retain the view from the grass mound and in walking down to the orchard. It will make the place much snugger, though a great blemish till the evergreens grow on it. Erasmus has been of the utmost service, in scheming and in actually working; making creases in the turf, striking circles, driving stakes, and such jobs. He has tired me out several times.6
Thursday morning. I had not time to finish my foolish letter yesterday, so I will today: Emma intends lying in bed till Luncheon, so that I shall not be able to say how she really is. Our grandest scheme, is the making our schoolroom and one (or as I think it will turn out) two small bedrooms. Mr Cresy is making a plan and he assures me all shall be done for 300£. The servants complained to me, what a nuisance it was to them to have the passage for everything only through the Kitchen: again Parslow’s pantry is too small to be tidy, and some small room is terribly wanted to put strangers into (as you have often insisted on) and all these things will be effected by our plan; and besides there is another advantage equally great. If it is done for 350£, which with Murray 150£ I can pay out of my income I shall think it worth while. It seemed so selfish making the house so luxurious for ourselves and not comfortable for our servants, that I was determined if possible to effect their wishes; and had we not built a schoolroom and bedroom; we should have had only two spare bed-rooms; so that for instance, we could never have had anyone to meet the Hensleighs7 and their children. So I hope the Shrewsbury conclave will not condemn me for extreme extravagance: though now that we are reading aloud Walter Scott’s life,8 I sometimes think that we are following his road to ruin at a snail-like pace. We have had some more turmoil in the village (though I have not yet been involved): old Price has been agitating building a wall across the pool, but thank Heavens he has at last aroused everybodies anger, except Sir Johns: Capt. Crosse told him the old women would hoot him through the village:9 and Mr. Smith cut short his usual rigmarole of his “having no selfish motives” by asking him, “if it is not for yourself, who the devil is it for?” Mr. 
Ainslie, the new Methodist resident at old Cockle’s10 house is also litigious and has been altering the road illegally; and defies us all, casting in our teeth that we allowed Mr. Price
Josiah Wedgwood III, Emma Darwin’s brother.
Sarah Elizabeth (Elizabeth) Wedgwood, Emma’s elder sister.
CD and Emma had been planning a visit to Shrewsbury followed by a tour to York and Lincolnshire, intending to start at the end of August (see letter to J. D. Hooker, [11–12 July 1845]). They eventually left on 15 September.
See Colp 1977, p. 37, for an account of the therapeutic uses of bismuth.
Caroline Wedgwood, CD’s sister.
Atkins 1974, pp. 24–6, gives further details of the work undertaken at this time at Down. CD’s Account Book (Down House MS) records payments made to Isaac Laslett for a shed and walls, to William Sales for ‘Permission to build wall’, and sums for extra labour between July and September.
Hensleigh and Fanny Mackintosh Wedgwood.
Edward Price, Sir John William Lubbock, and Captain Thomas Crosse. See Correspondence vol. 2, letter to [Susan? Darwin], [1843 – 8 March 1846].
Edgar Cockell, surgeon and apothecary of Down.
"All about household and money matters." The family is now living on about £1000 per annum. Plans a new walk and additions to the house. |
Stable periodic constant mean curvature surfaces and mesoscopic phase separation | EMS Press
We establish the existence and uniqueness of the solution to some inner obstacle problems for a coupling of a multidimensional quasilinear first-order hyperbolic equation set in a region
\Omega_h
with a quasilinear parabolic one set in the complementary
\Omega_p=\Omega\backslash \Omega_h
. The mathematical problem is motivated by physical models for infiltration processes with saturation thresholds.
Antonio Ros, Stable periodic constant mean curvature surfaces and mesoscopic phase separation. Interfaces Free Bound. 9 (2007), no. 3, pp. 355–365 |
Four-quadrant inverse tangent of fixed-point values - MATLAB atan2 - MathWorks Italia
Calculate Arctangent of Fixed-Point Input Values
Four-Quadrant Arctangent
Four-quadrant inverse tangent of fixed-point values
z = atan2(y,x) returns the four-quadrant arctangent of fi inputs y and x.
Use the atan2 function to calculate the arctangent of unsigned and signed fixed-point input values.
Unsigned Input Values
This example uses unsigned, 16-bit word length values.
Signed Input Values
This example uses signed, 16-bit word length values.
y-coordinates, specified as a scalar, vector, matrix, or multidimensional array.
y and x can be real-valued, signed or unsigned scalars, vectors, matrices, or N-dimensional arrays containing fixed-point values. The inputs y and x must be the same size. If they are not the same size, at least one input must be a scalar value. Valid data types of y and x are:
fi fixed-point with binary point scaling
fi scaled double with binary point scaling
x-coordinates, specified as a scalar, vector, matrix, or multidimensional array.
z — Four-quadrant arctangent
Four-quadrant arctangent, returned as a scalar, vector, matrix, or multidimensional array.
z is the four-quadrant arctangent of y and x. The numerictype of z depends on the signedness of y and x:
If either y or x is signed, then z is a signed, fixed-point number in the range [–pi,pi]. It has a 16-bit word length and 13-bit fraction length (numerictype(1,16,13)).
If both y and x are unsigned, then z is an unsigned, fixed-point number in the range [0,pi/2]. It has a 16-bit word length and 15-bit fraction length (numerictype(0,16,15)).
The output, z, is always associated with the default fimath.
The four-quadrant arctangent is defined as follows, with respect to the atan function:
\text{atan2}\left(y,x\right)=\left\{\begin{array}{ll}\text{atan}\left(\frac{y}{x}\right)& x>0\\ \pi +\text{atan}\left(\frac{y}{x}\right)& y\ge 0,\text{ }x<0\\ -\pi +\text{atan}\left(\frac{y}{x}\right)& y<0,\text{ }x<0\\ \frac{\pi }{2}& y>0,\text{ }x=0\\ -\frac{\pi }{2}& y<0,\text{ }x=0\\ 0& y=0,\text{ }x=0\end{array}\right.
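As a quick check, the piecewise definition can be evaluated directly in floating point (a Python sketch; the block itself operates on fixed-point types) and compared against the standard library's four-quadrant arctangent.

```python
import math

def atan2_piecewise(y, x):
    """Four-quadrant arctangent, following the piecewise definition above."""
    if x > 0:
        return math.atan(y / x)
    if x < 0 and y >= 0:
        return math.pi + math.atan(y / x)
    if x < 0 and y < 0:
        return -math.pi + math.atan(y / x)
    if x == 0 and y > 0:
        return math.pi / 2
    if x == 0 and y < 0:
        return -math.pi / 2
    return 0.0  # y == 0, x == 0
```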
The atan2 function computes the four-quadrant arctangent of fixed-point inputs using an 8-bit lookup table as follows:
Divide the smaller input absolute value by the larger to get an unsigned, fractional, fixed-point, 16-bit ratio between 0 and 1. Using the input with the larger absolute value as the divisor guarantees a ratio between 0 and 1.
The signs of the y and x inputs determine the quadrant in which the resulting angle lies.
Compute the table index from the 8 most-significant bits of the 16-bit, unsigned, stored integer value of the ratio.
Use the 8 least-significant bits to interpolate linearly between the two adjacent table entries. This interpolation produces a value in the range [0, pi/4).
Perform octant correction on the resulting angle, based on the values of the original y and x inputs.
This arctangent calculation is accurate only to within the top 16 most-significant bits of the input.
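A rough floating-point model of the lookup-table procedure might look as follows. This is a sketch only: the table size matches the 8-bit index described above, but rounding, fixed-point word lengths, and octant-correction details are simplified, and the function name is hypothetical.

```python
import math

# Arctangent table over ratio r in [0, 1]: 256 intervals, with one extra
# entry at r = 1 so linear interpolation has a right-hand neighbor.
TABLE = [math.atan(i / 256) for i in range(257)]

def atan2_lut(y, x):
    """Sketch of a LUT-based four-quadrant arctangent (simplified model)."""
    if x == 0 and y == 0:
        return 0.0
    ay, ax = abs(y), abs(x)
    num, den = (ay, ax) if ay <= ax else (ax, ay)   # larger value is the divisor
    scaled = (num / den) * 256
    i = int(scaled)                                  # table index (high bits)
    frac = scaled - i                                # interpolation (low bits)
    if i >= 256:
        angle = TABLE[256]                           # ratio exactly 1 -> pi/4
    else:
        angle = TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])
    # Octant correction based on the original inputs:
    if ay > ax:
        angle = math.pi / 2 - angle
    if x < 0:
        angle = math.pi - angle
    return -angle if y < 0 else angle
```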
The atan2 function ignores and discards any fimath attached to the inputs. The output, z, is always associated with the default fimath. |
The homotopy Gerstenhaber algebra of Hochschild cochains of a regular algebra is formal | EMS Press
JournalsjncgVol. 1, No. 1pp. 1–25
The homotopy Gerstenhaber algebra of Hochschild cochains of a regular algebra is formal
The solution of Deligne's conjecture on Hochschild cochains and the formality of the operad of little disks provide us with a natural homotopy Gerstenhaber algebra structure on the Hochschild cochains of an associative algebra. In this paper we construct a natural chain of quasi-isomorphisms of homotopy Gerstenhaber algebras between the Hochschild cochain complex C•(A) of a regular commutative algebra A over a field
\mathbb{K}
of characteristic zero and the Gerstenhaber algebra of multiderivations of A. Unlike the original approach of the second author based on the computation of obstructions our method allows us to avoid the bulky Gelfand–Fuchs trick and prove the formality of the homotopy Gerstenhaber algebra structure on the sheaf of polydifferential operators on a smooth algebraic variety, a complex manifold, and a smooth real manifold.
Vasily Dolgushev, Dmitry Tamarkin, Boris Tsygan, The homotopy Gerstenhaber algebra of Hochschild cochains of a regular algebra is formal. J. Noncommut. Geom. 1 (2007), no. 1, pp. 1–25 |
Radiochemistry - Citizendium
1 Main decay modes
2 Activation analysis
3 Biochemical uses
4.1 Chemical form of the actinides
4.2 Movement of colloids
4.2.1 Normal background
4.3 Action of microorganisms
Radiochemistry is the chemistry of radioactive materials, in which radioactive isotopes of elements are used to study the properties and chemical reactions of non-radioactive isotopes (within radiochemistry, a substance whose isotopes are stable is often described as inactive). Much of radiochemistry deals with the use of radioactivity to study ordinary chemical reactions. Radiochemistry includes the study of both natural and man-made radioisotopes. Many naturally occurring substances contain radioactive elements, including uranium (U), radium (Ra) and thorium (Th).
Main decay modes
All radioisotopes are unstable isotopes of elements; they undergo nuclear decay and emit some form of radiation. The radiation emitted can be one of three types, called alpha, beta, or gamma radiation.
{\displaystyle \alpha }
(alpha) radiation - the emission of an alpha particle, which contains two protons and two neutrons, from an atomic nucleus. When this occurs, the atom’s atomic mass decreases by 4 units and atomic number decreases by 2.
{\displaystyle \beta }
(beta) radiation - the transmutation of a neutron inside the nucleus into a proton and an electron, after which the electron is emitted from the nucleus.
{\displaystyle \gamma }
(gamma) radiation - the emission of electromagnetic energy (such as X-rays) from the nucleus of an atom. This usually occurs during alpha or beta radioactive decay.
These three types of radiation can be distinguished by their difference in penetrating power.
Alpha radiation, consisting of helium nuclei, can be blocked quite easily by a few centimetres of air or a piece of paper. Beta radiation, consisting of electrons, can be blocked by an aluminium sheet just a few millimetres thick. Gamma radiation, consisting of massless, chargeless, high-energy photons, is the most penetrating and requires an appreciable amount of heavy-metal radiation shielding (usually lead- or barium-based) to reduce its intensity.
By neutron irradiation of objects it is possible to induce radioactivity; this activation of stable isotopes to create radioisotopes is the basis of neutron activation analysis. One of the most interesting objects studied in this way is hair from Napoleon's head, which has been examined for its arsenic content.[1]
A series of different experimental methods have been designed to enable the measurement of different elements in different matrices. To reduce the effect of the matrix it is common to use the chemical extraction of the wanted element and/or to allow the radioactivity due to the matrix elements to decay before the measurement of the radioactivity.
In this diagram the matrix atoms are shown in black, the element of interest is in blue while the interfering element is shown in green. The radioisotopes formed by the element of interest and the interfering element are shown in magenta and red respectively. In the first picture a sample containing black (matrix), green and blue atoms is obtained, the sample is processed to extract only the blue and the green atoms, then the atoms are irradiated with neutrons to render the atoms radioactive. After allowing the radioactivity due to the short lived isotopes to decay (magenta) the red radioisotope is counted
The effects of a series of different cooling times can be seen if a hypothetical sample which contains sodium, uranium and cobalt in a 100:10:1 ratio is subjected to a very short pulse of thermal neutrons. In the following bar chart the radioactivity due to each of these three elements is shown; note that uranium-239 decays very quickly to neptunium (239Np), which decays with a half-life of 2.36 days.
In this diagram the sodium activity (sodium-24) is on the left, the neptunium-239 activity in the centre and the cobalt-60 activity is on the right
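The effect of cooling time can be reproduced numerically with simple exponential decay, A(t) = A0 · 2^(−t/t½). The half-lives below are standard values (about 15.0 h for 24Na, 2.36 d for 239Np, 5.27 y for 60Co); the initial activities in the example are hypothetical, for illustration only.

```python
# Relative activity of three radioisotopes after a cooling period,
# assuming simple exponential decay A(t) = A0 * 2**(-t / half_life).
# Half-lives: Na-24 about 15.0 h, Np-239 2.36 d, Co-60 5.27 y.
HALF_LIFE_H = {"Na-24": 15.0, "Np-239": 2.36 * 24, "Co-60": 5.27 * 365.25 * 24}

def activity(a0, isotope, hours):
    """Activity after `hours` of cooling, given initial activity a0."""
    return a0 * 2 ** (-hours / HALF_LIFE_H[isotope])

# After a week of cooling the short-lived Na-24 is essentially gone,
# Np-239 has dropped substantially, and Co-60 is nearly unchanged.
```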
Biochemical uses
One biological application is the study of DNA using radioactive phosphorus (32P). In these experiments, stable phosphorus is replaced by the chemically identical radioactive 32P, and the resulting radioactivity is used in analysis of the molecules and their behaviour.
Another example is the work done on the methylation of elements such as sulfur, selenium, tellurium and polonium by living organisms. Bacteria can convert these elements into volatile compounds;[2] it is thought that methylcobalamin (a methylated form of vitamin B12) alkylates these elements to create the dimethyls. A combination of cobaloxime and inorganic polonium in sterile water forms a volatile polonium compound, while a control experiment which did not contain the cobalt compound did not form the volatile polonium compound.[3] For the sulfur work the isotope 35S was used, while for polonium 207Po was used. In some related work, by the addition of 57Co to the bacterial culture, followed by isolation of the cobalamin from the bacteria (and the measurement of the radioactivity of the isolated cobalamin), it was shown that the bacteria convert available cobalt into methylcobalamin.
In the first picture the polonium is added to the culture of the bacteria, by the time that the second picture is shown the bacteria have taken up some of the polonium. In the third picture the bacteria have converted some of the inorganic polonium into the dimethyl form
Radiochemistry also includes the study of the behaviour of radioisotopes in the environment; for instance, a forest or grass fire can make radioisotopes become mobile again.[4] In these experiments, fires were started in the exclusion zone around Chernobyl and the radioactivity in the air downwind was measured.
It is important to note that many processes can release radioactivity into the environment. For example, the action of cosmic rays on the air is responsible for the formation of radioisotopes (such as 14C and 32P); the decay of 226Ra forms 222Rn, a gas which can diffuse through rocks before entering buildings[5][6][7] and dissolve in water, thus entering drinking water;[8] in addition, human activities such as bomb tests, accidents (for example [9]) and normal releases from industry have resulted in the release of radioactivity.
Chemical form of the actinides
The environmental chemistry of some radioactive elements such as plutonium is complicated by the fact that solutions of this element can undergo disproportionation,[10] and as a result many different oxidation states can coexist at once. Some work has been done on identifying the oxidation state and coordination number of plutonium and the other actinides under different conditions.[4] This includes work both on solutions of relatively simple complexes[11] and on colloids.[12] Two of the key matrices are soil/rocks and concrete; in these systems the chemical properties of plutonium have been studied using methods such as EXAFS and XANES.[13][5][6]
The thermodynamics of the actinides controls much of their environmental chemistry; a series of reviews on uranium and the other actinides has been published by the NEA.[14]
Movement of colloids
While binding of a metal to the surfaces of soil particles can prevent its movement through a layer of soil, soil particles which bear the radioactive metal can themselves migrate as colloids. This has been shown using soil particles labelled with 134Cs, which were able to move through cracks in the soil.[15]
Radioactivity is present everywhere (and has been since the formation of the earth). According to the IAEA, one kilogram of soil typically contains the following amounts of natural radioisotopes: 370 Bq of 40K (typical range 100-700 Bq), 25 Bq of 226Ra (typical range 10-50 Bq), 25 Bq of 238U (typical range 10-50 Bq) and 25 Bq of 232Th (typical range 7-50 Bq).[16]
Action of microorganisms
The action of microorganisms can fix uranium; Thermoanaerobacter, for instance, can use chromium(VI), iron(III), cobalt(III), manganese(IV) and uranium(VI) as electron acceptors, while acetate, glucose, hydrogen, lactate, pyruvate, succinate, and xylose can act as electron donors for its metabolism. In this way the metals can be reduced to form magnetite (Fe3O4), siderite (FeCO3), rhodochrosite (MnCO3), and uraninite (UO2).[17] Other researchers have also worked on the fixing of uranium using bacteria.[7][8][9] Livens et al. (working at Manchester) have suggested that the reason why Geobacter sulfurreducens can reduce UO22+ cations to uranium dioxide is that the bacteria reduce the uranyl cations to UO2+, which then undergoes disproportionation to form UO22+ and UO2. This reasoning was based (at least in part) on the observation that NpO2+ is not converted to an insoluble neptunium oxide by the bacteria.[18]
↑ Smith H et al. (1962) Nature 194:725-6.
↑ Momoshima N, Li-X. Song, Osaki S, Maeda Y (2002) Biologically induced Po emission from fresh water. J Envir Radioactivity 63:187-97.
↑ Momoshima N, Li-X. Song, Osaki S, Maeda Y (2001) Formation and emission of volatile polonium compound by microbial activity and polonium methylation with methylcobalamin. Envir Sci Technol, 35:2956-60.
↑ Yoschenko VI et al (2006) Resuspension and redistribution of radionuclides during grassland and forest fires in the Chernobyl exclusion zone: part I. Fire experiments J Envir Radioact 86:143-63 PMID 16213067
↑ Janja Vaupotič and Ivan Kobal, "Effective doses in schools based on nanosize radon progeny aerosols", Atmospheric Environment, 2006, 40, 7494-7507
↑ Michael Durand, Building and Environment, "Indoor air pollution caused by geothermal gases", 2006, 41, 1607-1610
↑ Boffetta P "Human cancer from environmental pollutants: The epidemiological evidence", Mutation Research/Genetic Toxicology and Environmental Mutagenesis, 2006, 608, 157-162
↑ Forte M et al. The measurement of radioactivity in Italian drinking waters. Microchemical J, 2007, 85:98-102
↑ Pöllänen R et al. "Multi-technique characterization of a nuclear bomb particle from the Palomares accident", J Envir Radioactivity, 2006, 90:15-28
↑ Rabideau, S.W., J Am Chem Society, 1957, 79, 6350-6353
↑ Allen PG et al. Investigation of aquo and chloro complexes of UO22+, NpO2+, Np4+, and Pu3+ by X-ray absorption fine structure spectroscopy. Inorganic Chemistry, 1997, 36, 4676-4683
↑ Clark DL et al. (1998) Identification of the limiting species in the plutonium(IV) carbonate system. Solid state and solution molecular structure of the [Pu(CO3)5]6- ion. Inorganic Chemistry 37:2893-9
↑ Rothe J et al. XAFS and LIBD investigation of the formation and structure of colloidal Pu(IV) hydrolysis products. Inorganic Chem, 2004, 43:4708-18
↑ Duff MC et al. (1999) Mineral associations and average oxidation states of sorbed Pu on tuff. Environ Sci Technol 33:2163-9
↑ For a review on americium and uranium see [1] and [2]
↑ Whicker RD, Ibrahim SA (2006) Vertical migration of 134Cs bearing soil particles in arid soils: implications for plutonium redistribution", J Envir Radioactivity 88:171-88.
↑ Generic Procedures for Assessment and Response during a Radiological Emergency, IAEA TECDOC Series number 1162, published in 2000 [3]
↑ Roh Y et al. (2002) Isolation and Characterization of Metal-Reducing Thermoanaerobacter Strains from Deep Subsurface Environments of the Piceance Basin, Colorado. Applied and Environmental Microbiology 68:6013-20.
↑ Renshaw JC et al. (2005) Environ Sci Technol 39:5657-60.
Retrieved from "https://citizendium.org/wiki/index.php?title=Radiochemistry&oldid=6989"
An electrostatic particle accelerator is a particle accelerator in which charged particles are accelerated to a high energy by a static high voltage potential. This contrasts with the other major category of particle accelerator, oscillating field particle accelerators, in which the particles are accelerated by oscillating electric fields.
Owing to their simpler design, electrostatic types were the first particle accelerators. The two most common types are the Van de Graaff generator invented by Robert Van de Graaff in 1929, and the Cockcroft-Walton accelerator invented by John Cockcroft and Ernest Walton in 1932. The maximum particle energy produced by electrostatic accelerators is limited by the maximum voltage which can be achieved by the machine. This is in turn limited by insulation breakdown to a few megavolts. Oscillating accelerators do not have this limitation, so they can achieve higher particle energies than electrostatic machines.
The advantages of electrostatic accelerators over oscillating field machines include lower cost, the ability to produce continuous beams, and higher beam currents that make them useful to industry. As such, they are by far the most widely used particle accelerators, with industrial applications such as plastic shrink wrap production, high power X-ray machines, radiation therapy in medicine, radioisotope production, ion implanters in semiconductor production, and sterilization. Many universities worldwide have electrostatic accelerators for research purposes. High energy oscillating field accelerators usually incorporate an electrostatic machine as their first stage, to accelerate particles to a high enough velocity to inject into the main accelerator.
Electrostatic accelerators are a subset of linear accelerators (linacs). While all linacs accelerate particles in a straight line, electrostatic accelerators use a fixed accelerating field from a single high voltage source, while radiofrequency linacs use oscillating electric fields across a series of accelerating gaps.
Electrostatic accelerators have a wide array of applications in science and industry. In the realm of fundamental research, they are used to provide beams of atomic nuclei for research at energies up to several hundreds of MeV. They are also used as the initial stage of most large multi-stage machines, such as the Large Hadron Collider.
In industry and materials science they are used to produce ion beams for materials modification, including ion implantation and ion beam mixing. There are also a number of materials analysis techniques based on electrostatic acceleration of heavy ions, including Rutherford backscattering spectrometry (RBS), particle-induced X-ray emission (PIXE), accelerator mass spectrometry (AMS), Elastic recoil detection (ERD), and others.
Although these machines primarily accelerate atomic nuclei, there are a number of compact machines used to accelerate electrons for industrial purposes including sterilization of medical instruments, x-ray production, and silicon wafer production.[1]
Single-ended machines
Tandem accelerators
Particle energy
A particle of charge q accelerated through a potential difference V gains a kinetic energy E = qV. In a tandem accelerator the particle is accelerated twice by the same high voltage, so the energy gained is 2qV. Since particle charges are multiples of the elementary charge, e = 1.6×10⁻¹⁹ C, particle energies are conveniently expressed in electronvolts (eV): a particle carrying charge q (in units of e) accelerated through a potential of V volts gains an energy E of qV electronvolts.
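The energy relations above amount to a one-line calculation. A sketch (hypothetical helper names; the tandem factor of 2 assumes the same charge magnitude on both acceleration stages):

```python
# Energy gained by a charged particle in an electrostatic accelerator:
# E = q*V (single-ended) or 2*q*V (simplified tandem model).
E_CHARGE = 1.6e-19  # elementary charge in coulombs

def particle_energy_ev(charge_state, voltage_volts, tandem=False):
    """Energy in electronvolts for a particle of the given charge state
    (in units of the elementary charge) accelerated through voltage_volts."""
    factor = 2 if tandem else 1
    return factor * charge_state * voltage_volts

def particle_energy_joules(charge_state, voltage_volts, tandem=False):
    return particle_energy_ev(charge_state, voltage_volts, tandem) * E_CHARGE

# A proton (charge state 1) through 5 MV: 5 MeV single-ended, 10 MeV tandem.
```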
^ Hinterberger, F. "Electrostatic Accelerators" (PDF). CERN. Retrieved 10 May 2022.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Electrostatic_particle_accelerator&oldid=1088744085" |
From W. E. Darwin 10 November [1866]1
Thank you for your letter, it is just as well to know what one has to look forward to. All I can say is that I do’nt believe there is one father in five hundred thousand, who starting with what you did, would leave his sons as well off as you will us boys. If the profits of the Bank at all keep up to £500 a year I shall be able to save considerably, so that beside the £5000 I may consider I get great advantage out of Claythorpe Farm.2
I have already made my will, I did so on becoming Aunt Catherine’s executor; & I had a clause to include landed property put in; but I will see that it is all right3 I have left everything straight back to you as the simplest plan. I am going to Claythorpe on the 27th or 28th. & shall sleep at Uncle Ras’s where I hope you may all be.4
My love to Mama & Hen.5 | Your affect son | W. E. Darwin
Uncle Ras will send you a transfer for signature; as Uncle Langton expressed a wish to have his £1000 for Edmund in this stock, I thought none of the legatees would object in his case.6
Mr Salt wants to know whether you would like a Mortgage of £15,000 at 4 per Cent, on Mr Childe’s estate, as H. Parker has called up Dr Darwin’s mortgage of that amount7 I should have thought you ought to get 4¼
now a days
The year is established by the relationship between this letter and the letter to W. E. Darwin, 8 November [1866] (Correspondence vol. 14).
William had inherited Claythorpe Farm in Lincolnshire from his aunt Susan Elizabeth Darwin, who died in October 1866; in his letter of 8 November [1866] (Correspondence vol. 14), CD had told William how much he intended to deduct from his own bequests to William on this account, and how much William could expect to inherit from the older generation overall. William was a partner in the Southampton and Hampshire Bank.
In his letter of 8 November [1866] (Correspondence vol. 14), CD had advised William to make a will. William was an executor for his aunt, CD’s sister Emily Catherine Langton, who had died in February 1866.
The Darwins stayed with Erasmus Alvey Darwin, CD’s brother, from 22 to 29 November 1866 (Correspondence vol. 14, Appendix II).
Emily Catherine Langton had left her stepson Edmund Langton £1000 in her will. Edmund was Charles Langton’s son by his marriage to Emma Darwin’s sister Charlotte.
William Lacon Childe of Kinlet Hall in Shropshire had borrowed £60,000 from CD’s father, Robert Waring Darwin (Shropshire Archives, SA D3651/B/47/1/1/1/1/6). Henry Parker was the son of CD’s sister Marianne. Thomas Salt was CD’s solicitor in Shrewsbury.
Has made will. Discusses financial arrangements and asks whether CD would like a mortgage. |
Parameterize Gain Schedules - MATLAB & Simulink
Basis Function Parameterization
Tunable Gain Surfaces
Tunable Surfaces in Simulink
Tunable Surfaces in MATLAB
Typically, gain-scheduled control systems in Simulink® use lookup tables or MATLAB Function blocks to specify gain values as a function of the scheduling variables. For tuning, you replace these blocks by parametric gain surfaces. A parametric gain surface is a basis-function expansion whose coefficients are tunable. For example, you can model a time-varying gain k(t) as a cubic polynomial in t:
k(t) = k0 + k1t + k2t² + k3t³.
Here, k0,...,k3 are tunable coefficients. When you parameterize scheduled gains in this way, systune can tune the gain-surface coefficients to meet your control objectives at a representative set of operating conditions. For applications where gains vary smoothly with the scheduling variables, this approach provides explicit formulas for the gains, which the software can write directly to MATLAB Function blocks. When you use lookup tables, this approach lets you tune a few coefficients rather than many individual lookup-table entries, drastically reducing the number of parameters and ensuring smooth transitions between operating points.
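For instance, evaluating the cubic time-varying gain above is just a polynomial in the four tunable coefficients (an illustrative Python sketch with hypothetical names, not the systune workflow):

```python
def gain_cubic(t, k):
    """Evaluate k(t) = k0 + k1*t + k2*t^2 + k3*t^3 for coefficients k = (k0..k3).
    The coefficients are the tunable parameters of this gain schedule."""
    k0, k1, k2, k3 = k
    return k0 + k1 * t + k2 * t**2 + k3 * t**3
```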
In a gain-scheduled controller, the scheduled gains are functions of the scheduling variables, σ. For example, a gain-scheduled PI controller has the form:
C\left(s,\sigma \right)={K}_{p}\left(\sigma \right)+\frac{{K}_{i}\left(\sigma \right)}{s}.
Tuning this controller requires determining the functional forms Kp(σ) and Ki(σ) that yield the best system performance over the operating range of σ values. However, tuning arbitrary functions is difficult. Therefore, it is necessary either to consider the function values at only a finite set of points, or restrict the generality of the functions themselves.
In the first approach, you choose a collection of design points, σ, and tune the gains Kp and Ki independently at each design point. The resulting set of gain values is stored in a lookup table driven by the scheduling variables, σ. A drawback of this approach is that tuning might yield substantially different values for neighboring design points, causing undesirable jumps when transitioning from one operating point to another.
Alternatively, you can model the gains as smooth functions of σ, but restrict the generality of such functions by using specific basis function expansions. For example, suppose σ is a scalar variable. You can model Kp(σ) as a quadratic function of σ:
{K}_{p}\left(\sigma \right)={k}_{0}+{k}_{1}\sigma +{k}_{2}{\sigma }^{2}.
After tuning, this parametric gain defines a curve over the range of σ; the specific shape of the curve depends on the tuned coefficient values and the range of σ.
Or, suppose that σ consists of two scheduling variables, α and V. Then, you can model Kp(σ) as a bilinear function of α and V:
{K}_{p}\left(\alpha ,V\right)={k}_{0}+{k}_{1}\alpha +{k}_{2}V+{k}_{3}\alpha V.
After tuning, this parametric gain defines a surface over the ranges of α and V. Here too, the specific shape depends on the tuned coefficient values and the ranges of the scheduling variables.
For tuning gain schedules with systune, you use a parametric gain surface that is a particular expansion of the gain in basis functions of σ:
K\left(\sigma \right)={K}_{0}+{K}_{1}{F}_{1}\left(n\left(\sigma \right)\right)+\dots +{K}_{M}{F}_{M}\left(n\left(\sigma \right)\right).
The basis functions F1,...,FM are user-selected and fixed. These functions operate on n(σ), where n is a function that scales and normalizes the scheduling variables to the interval [–1,1] (or an interval you specify). The coefficients of the expansion, K0,...,KM, are the tunable parameters of the gain surface. K0,...,KM can be scalar or matrix-valued, depending on the I/O size of the gain K(σ). The choice of basis function is problem-dependent, but in general, try low-order polynomial expansions first.
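To make this expansion concrete, here is a short sketch in Python (for illustration only; the coefficient values K0..K3 are hypothetical stand-ins for values that systune would tune) of a bilinear gain surface in two normalized scheduling variables:

```python
# Illustrative sketch of a parametric gain surface (not MathWorks code).

def normalize(sigma, lo, hi):
    """Scale a scheduling variable from its operating range [lo, hi] to [-1, 1]."""
    return (2.0 * sigma - (lo + hi)) / (hi - lo)

K0, K1, K2, K3 = 1.2, -0.4, 0.3, 0.05   # hypothetical tuned coefficients

def gain_surface(alpha, V):
    """Bilinear surface K = K0 + K1*x + K2*y + K3*x*y in normalized variables."""
    x = normalize(alpha, 0.0, 15.0)      # angle of incidence, 0..15 deg
    y = normalize(V, 300.0, 700.0)       # speed, 300..700 m/s
    return K0 + K1 * x + K2 * y + K3 * x * y

# At the center of the operating ranges both normalized variables are zero,
# so the gain reduces to the constant coefficient K0.
print(gain_surface(7.5, 500.0))   # -> 1.2
```

The last line illustrates why the constant coefficient doubles as an initial gain value: at the center of the scheduling ranges, all basis functions vanish and the gain equals K0.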
Use the tunableSurface command to construct a tunable model of a gain surface sampled over a grid of design points (σ values). For example, consider the gain with bilinear dependence on two scheduling variables, α and V:
{K}_{p}\left(\alpha ,V\right)={K}_{0}+{K}_{1}\alpha +{K}_{2}V+{K}_{3}\alpha V.
Suppose that α is an angle of incidence that ranges from 0° to 15°, and V is a speed that ranges from 300 m/s to 700 m/s. Create a grid of design points that span these ranges. These design points must match the parameter values at which you sample your varying or nonlinear plant. (See Plant Models for Gain-Scheduled Controller Tuning.)
[alpha,V] = ndgrid(0:5:15,300:100:700);
Specify the basis functions for the parameterization of this surface, α, V, and αV. The tunableSurface command expects the basis functions to be arranged as a vector of functions of two input variables. You can use an anonymous function to express the basis functions.
shapefcn = @(alpha,V)[alpha,V,alpha*V];
Alternatively, use polyBasis, fourierBasis, or ndBasis to generate basis functions of as many scheduling variables as you need.
Create the tunable surface using the design points and basis functions.
domain = struct('alpha',alpha,'V',V);
Kp = tunableSurface('Kp',1,domain,shapefcn);
Kp is a tunable model of the gain surface. tunableSurface parameterizes the surface as:
{K}_{p}\left(\alpha ,V\right)={\overline{K}}_{0}+{\overline{K}}_{1}\overline{\alpha }+{\overline{K}}_{2}\overline{V}+{\overline{K}}_{3}\overline{\alpha }\overline{V},
\overline{\alpha }=\frac{\alpha -7.5}{7.5},\text{ }\overline{V}=\frac{V-500}{200}.
The surface is expressed in terms of the normalized variables,
\left(\overline{\alpha },\overline{V}\right)\in {\left[-1,1\right]}^{2}
rather than in terms of α and V. This normalization, which tunableSurface performs by default, improves the conditioning of the optimization performed by systune. If needed, you can change the default scaling and normalization. (See tunableSurface).
The second input argument to tunableSurface specifies the initial value of the constant coefficient, K0. By default, K0 is the gain when all the scheduling variables are at the center of their ranges. tunableSurface takes the I/O dimensions of the gain surface from K0. Therefore, you can create array-valued tunable gains by providing an array for that input.
Karr = tunableSurface('Karr',ones(2),domain,shapefcn);
Karr is a 2-by-2 matrix in which each entry is a bilinear function of the scheduling variables with independent coefficients.
In your Simulink model, you model gain schedules using lookup table blocks, MATLAB Function blocks, or Matrix Interpolation blocks, as described in Model Gain-Scheduled Control Systems in Simulink. To tune these gain surfaces, use tunableSurface to create a gain surface for each block. In the slTuner interface to the model, designate each gain schedule as a block to tune, and set its parameterization to the corresponding gain surface. For instance, the rct_CSTR model includes a gain-scheduled PI controller, the Concentration controller subsystem, in which the gains Kp and Ki vary with the scheduling variable Cr.
To tune the lookup tables Kp and Ki, create a tunable surface for each one. Suppose that CrEQ is the vector of design points, and that you expect the gains to vary quadratically with Cr.
TuningGrid = struct('Cr',CrEQ);
ShapeFcn = @(Cr) [Cr , Cr^2];
Kp = tunableSurface('Kp',0,TuningGrid,ShapeFcn);
Ki = tunableSurface('Ki',-2,TuningGrid,ShapeFcn);
Suppose that you have an array Gd of linearizations of the plant subsystem, CSTR, at each of the design points in CrEQ. (See Plant Models for Gain-Scheduled Controller Tuning.) Create an slTuner interface that substitutes this array for the plant subsystem and designates the two lookup-table blocks for tuning.
BlockSubs = struct('Name','rct_CSTR/CSTR','Value',Gd);
ST0 = slTuner('rct_CSTR',{'Kp','Ki'},BlockSubs);
Finally, use the tunable surfaces to parameterize the lookup tables.
ST0.setBlockParam('Kp',Kp,'Ki',Ki);
When you tune ST0, systune tunes the coefficients of the tunable surfaces Kp and Ki, so that each tunable surface represents the tuned relationship between Cr and the corresponding gain. When you use writeBlockValue to write the tuned values back to the blocks for validation, the software automatically generates tuned lookup-table data by evaluating the tunable surfaces at the breakpoints you specify in the corresponding blocks.
For more details about this example, see Gain-Scheduled Control of a Chemical Reactor.
For a control system modeled in MATLAB®, use tunable surfaces to construct more complex gain-scheduled control elements, such as gain-scheduled PID controllers, filters, or state-space controllers. For example, suppose that you create two gain surfaces Kp and Ki using tunableSurface. The following command constructs a tunable gain-scheduled PI controller.
C0 = pid(Kp,Ki);
Similarly, suppose that you create four matrix-valued gain surfaces A, B, C, D. The following command constructs a tunable gain-scheduled state-space controller.
C1 = ss(A,B,C,D);
You then incorporate the gain-scheduled controller into a generalized model of your entire control system. For example, suppose G is an array of models of your plant sampled at the design points that are specified in Kp. Then, the following command builds a tunable model of a gain-scheduled single-loop control system with proportional control.
T0 = feedback(G*Kp,1)
When you interconnect a tunable surface with other LTI models, the resulting model is an array of generalized genss models. The design points in the tunable surface determine the dimensions of the array, so each entry in the array represents the system at the corresponding scheduling-variable values. The SamplingGrid property of the array stores those design points.
Kp: Parametric 1x4 matrix, 1 occurrences.
Type "ss(T0)" to see the current value, "get(T0)" to see all properties, and
"T0.Blocks" to interact with the blocks.
The resulting generalized model has tunable blocks corresponding to the gain surfaces used to create the model. In this example, the system has one gain surface, Kp, which has the four tunable coefficients corresponding to K0, K1, K2, and K3. Therefore, the tunable block is a vector-valued realp parameter with four entries.
When you tune the control system with systune, the software tunes the coefficients for each of the design points specified in the tunable surface.
For an example illustrating the entire workflow in MATLAB, see the section "Controller Tuning in MATLAB" in Gain-Scheduled Control of a Chemical Reactor. |
EUDML | Order properties of compact maps on L^p-spaces associated with von Neumann algebras.
Erwin Neuhardt
Neuhardt, Erwin. "Order properties of compact maps on L^p-spaces associated with von Neumann algebras." Mathematica Scandinavica 66.1 (1990): 110-116. <http://eudml.org/doc/167096>.
@article{Neuhardt1990,
author = {Neuhardt, Erwin},
title = {Order properties of compact maps on L^p-spaces associated with von Neumann algebras},
keywords = {compact linear maps on non-commutative L^p-spaces; von Neumann algebras; completely positive compact map; order ideal},
}
Articles by Erwin Neuhardt |
Algebraic Riccati equation - Wikipedia
Nonlinear equation which arises on linear optimal control problems
An algebraic Riccati equation is a type of nonlinear equation that arises in the context of infinite-horizon optimal control problems in continuous time or discrete time.
A typical algebraic Riccati equation is similar to one of the following:
the continuous time algebraic Riccati equation (CARE):
{\displaystyle A^{T}P+PA-PBR^{-1}B^{T}P+Q=0\,}
or the discrete time algebraic Riccati equation (DARE):
{\displaystyle P=A^{T}PA-(A^{T}PB)(R+B^{T}PB)^{-1}(B^{T}PA)+Q.\,}
Though generally this equation can have many solutions, it is usually specified that we want to obtain the unique stabilizing solution, if such a solution exists.
The name Riccati is given to these equations because of their relation to the Riccati differential equation. Indeed, the CARE is verified by the time invariant solutions of the associated matrix valued Riccati differential equation. As for the DARE, it is verified by the time invariant solutions of the matrix valued Riccati difference equation (which is the analogue of the Riccati differential equation in the context of discrete time LQR).
Context of the discrete-time algebraic Riccati equation
In infinite-horizon optimal control problems, one cares about the value of some variable of interest arbitrarily far into the future, and one must optimally choose a value of a controlled variable right now, knowing that one will also behave optimally at all times in the future. The optimal current values of the problem's control variables at any time can be found using the solution of the Riccati equation and the current observations on evolving state variables. With multiple state variables and multiple control variables, the Riccati equation will be a matrix equation.
The algebraic Riccati equation determines the solution of the infinite-horizon time-invariant Linear-Quadratic Regulator problem (LQR) as well as that of the infinite horizon time-invariant Linear-Quadratic-Gaussian control problem (LQG). These are two of the most fundamental problems in control theory.
A typical specification of the discrete-time linear quadratic control problem is to minimize
{\displaystyle \sum _{t=1}^{T}(y_{t}^{T}Qy_{t}+u_{t}^{T}Ru_{t})}
subject to the state equation
{\displaystyle y_{t}=Ay_{t-1}+Bu_{t-1},}
where y is an n × 1 vector of state variables, u is a k × 1 vector of control variables, A is the n × n state transition matrix, B is the n × k matrix of control multipliers, Q (n × n) is a symmetric positive semi-definite state cost matrix, and R (k × k) is a symmetric positive definite control cost matrix.
Induction backwards in time can be used to obtain the optimal control solution at each time,[1]
{\displaystyle u_{t}^{*}=-(B^{T}P_{t}B+R)^{-1}(B^{T}P_{t}A)y_{t-1},}
with the symmetric positive definite cost-to-go matrix P evolving backwards in time from
{\displaystyle P_{T}=Q}
{\displaystyle P_{t-1}=Q+A^{T}P_{t}A-A^{T}P_{t}B(B^{T}P_{t}B+R)^{-1}B^{T}P_{t}A,\,}
which is known as the discrete-time dynamic Riccati equation of this problem. The steady-state characterization of P, relevant for the infinite-horizon problem in which T goes to infinity, can be found by iterating the dynamic equation repeatedly until it converges; then P is characterized by removing the time subscripts from the dynamic equation.
Usually solvers try to find the unique stabilizing solution, if such a solution exists. A solution is stabilizing if using it for controlling the associated LQR system makes the closed loop system stable.
For the CARE, the control is
{\displaystyle K=R^{-1}B^{T}P}
and the closed loop state transfer matrix is
{\displaystyle A-BK=A-BR^{-1}B^{T}P}
which is stable if and only if all of its eigenvalues have strictly negative real part.
For the DARE, the control is
{\displaystyle K=(R+B^{T}PB)^{-1}B^{T}PA}
and the closed loop state transfer matrix is
{\displaystyle A-BK=A-B(R+B^{T}PB)^{-1}B^{T}PA}
which is stable if and only if all of its eigenvalues are strictly inside the unit circle of the complex plane.
A solution to the algebraic Riccati equation can be obtained by matrix factorizations or by iterating on the Riccati equation. One type of iteration can be obtained in the discrete time case by using the dynamic Riccati equation that arises in the finite-horizon problem: in the latter type of problem each iteration of the value of the matrix is relevant for optimal choice at each period that is a finite distance in time from a final time period, and if it is iterated infinitely far back in time it converges to the specific matrix that is relevant for optimal choice an infinite length of time prior to a final period—that is, for when there is an infinite horizon.
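This fixed-point iteration is easy to sketch numerically. The following Python/NumPy example (with illustrative matrices, not taken from any specific source) iterates the dynamic Riccati equation to convergence and checks that the result satisfies the DARE and stabilizes the closed loop:

```python
import numpy as np

# Illustrative double-integrator system and cost weights
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Iterate P_{t-1} = Q + A'PA - A'PB (R + B'PB)^{-1} B'PA from P_T = Q
P = Q.copy()
for _ in range(10000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain
    P_next = Q + A.T @ P @ A - A.T @ P @ B @ K
    if np.allclose(P_next, P, rtol=0.0, atol=1e-12):
        P = P_next
        break
    P = P_next

# At the fixed point, P satisfies the discrete-time algebraic Riccati equation
residual = A.T @ P @ A - P - A.T @ P @ B @ K + Q
print(np.linalg.norm(residual) < 1e-8)                      # -> True

# Stabilizing property: eigenvalues of A - BK lie inside the unit circle
print(np.max(np.abs(np.linalg.eigvals(A - B @ K))) < 1.0)   # -> True
```

Because (A, B) is stabilizable and the cost is detectable here, the backward iteration converges linearly to the unique stabilizing solution.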
It is also possible to find the solution by finding the eigendecomposition of a larger system. For the CARE, we define the Hamiltonian matrix
{\displaystyle Z={\begin{pmatrix}A&-BR^{-1}B^{T}\\-Q&-A^{T}\end{pmatrix}}.}
Since {\displaystyle Z} is Hamiltonian, if it does not have any eigenvalues on the imaginary axis, then exactly half of its eigenvalues have a negative real part. If we denote the
{\displaystyle 2n\times n}
matrix whose columns form a basis of the corresponding invariant subspace, in block-matrix notation, as
{\displaystyle {\begin{pmatrix}U_{1,1}\\U_{2,1}\end{pmatrix}},}
then
{\displaystyle P=U_{2,1}U_{1,1}^{-1}}
is a solution of the Riccati equation; furthermore, the eigenvalues of
{\displaystyle A-BR^{-1}B^{T}P}
are the eigenvalues of {\displaystyle Z} with negative real part.
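A NumPy sketch of this recipe, for an illustrative 2-state system (SciPy's solve_continuous_are would give the same answer): build Z, take a basis of its stable invariant subspace, and form P = U21 U11^{-1}:

```python
import numpy as np

# Illustrative system: a 2-state plant with one input
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

S = B @ np.linalg.solve(R, B.T)     # S = B R^{-1} B'
Z = np.block([[A, -S],
              [-Q, -A.T]])          # Hamiltonian matrix

# Basis of the stable invariant subspace (eigenvalues with negative real part)
w, V = np.linalg.eig(Z)
U = V[:, w.real < 0]
U11, U21 = U[:2, :], U[2:, :]
P = np.real(U21 @ np.linalg.inv(U11))   # stabilizing solution of the CARE

# Check: A'P + PA - P B R^{-1} B' P + Q = 0, and A - B R^{-1} B' P is stable
residual = A.T @ P + P @ A - P @ S @ P + Q
print(np.linalg.norm(residual) < 1e-8)                   # -> True
print(np.max(np.linalg.eigvals(A - S @ P).real) < 0.0)   # -> True
```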
For the DARE, when {\displaystyle A} is invertible, we define the symplectic matrix
{\displaystyle Z={\begin{pmatrix}A+BR^{-1}B^{T}(A^{-1})^{T}Q&-BR^{-1}B^{T}(A^{-1})^{T}\\-(A^{-1})^{T}Q&(A^{-1})^{T}\end{pmatrix}}.}
Since {\displaystyle Z} is symplectic, if it does not have any eigenvalues on the unit circle, then exactly half of its eigenvalues are inside the unit circle. If we denote by
{\displaystyle {\begin{pmatrix}U_{1,1}\\U_{2,1}\end{pmatrix}}}
the {\displaystyle 2n\times n} matrix whose columns form a basis of the corresponding invariant subspace, where {\displaystyle U_{1,1}} and {\displaystyle U_{2,1}} result from the decomposition[2]
{\displaystyle Z={\begin{pmatrix}U_{1,1}&U_{1,2}\\U_{2,1}&U_{2,2}\end{pmatrix}}{\begin{pmatrix}\Lambda _{1,1}&\Lambda _{1,2}\\0&\Lambda _{2,2}\end{pmatrix}}{\begin{pmatrix}U_{1,1}^{T}&U_{2,1}^{T}\\U_{1,2}^{T}&U_{2,2}^{T}\end{pmatrix}},}
then
{\displaystyle P=U_{2,1}U_{1,1}^{-1}}
is a solution of the Riccati equation; furthermore, the eigenvalues of
{\displaystyle A-B(R+B^{T}PB)^{-1}B^{T}PA}
are the eigenvalues of {\displaystyle Z} which are inside the unit circle.
^ Chow, Gregory (1975). Analysis and Control of Dynamic Economic Systems. New York: John Wiley & Sons. ISBN 0-471-15616-7.
^ William Arnold; Alan Laub (1984). "Generalized Eigenproblem Algorithms and Software for Algebraic Riccati Equations".
Peter Lancaster; Leiba Rodman (1995), Algebraic Riccati equations, Oxford University Press, p. 504, ISBN 0-19-853795-6
Alan J. Laub, A Schur method for solving algebraic Riccati equations
CARE solver help of MATLAB Control toolbox.
DARE solver help of MATLAB Control toolbox.
Online CARE solver for arbitrary sized matrices.
Python CARE and DARE solvers.
Mathematica function to solve the continuous-time algebraic Riccati equation.
Mathematica function to solve the discrete-time algebraic Riccati equation.
Forced Convection Heat Transfer in Channels With Rib Turbulators Inclined at 45 deg | J. Turbomach. | ASME Digital Collection
Giovanni Tanda (Full Professor) and Roberto Abram (Ph.D. student)
DIPTEM/TEC, via all’Opera Pia 15a, I-16145 Genova, Italy
Tanda, G., and Abram, R. (January 29, 2009). "Forced Convection Heat Transfer in Channels With Rib Turbulators Inclined at 45 deg." ASME. J. Turbomach. April 2009; 131(2): 021012. https://doi.org/10.1115/1.2987241
Local and average Nusselt numbers and friction factors are presented for rectangular channels with an aspect ratio of 5 and angled rib turbulators inclined at 45 deg with parallel orientations on one and two surfaces of the channel. The convective fluid was air, and the Reynolds number varied from 9000 to 35,500. The ratio of rib height to hydraulic diameter was 0.09, with the rib pitch-to-height ratio equal to 13.33 or 6.66. Experiments were based on the use of heating foils (for the attainment of uniform heat flux condition) and of the steady-state liquid crystal thermography (for the identification of isotherm lines and the reconstruction of local heat transfer coefficient). Local results showed quasiperiodic profiles of Nusselt number in the streamwise direction, whose features were strongly affected by the value of rib pitch and by the spanwise coordinate. For all the investigated geometries a heat transfer augmentation, relative to the fully developed smooth channel, was found; when inclined rib turbulators were placed on two opposite surfaces of the channel, the full-surface Nusselt number was higher (by 10–19%) than that for the one-ribbed wall channel, but pressure drop penalties also increased by a factor of about 3. For both the one- and two-ribbed wall channels, the best heat transfer performance for a constant pumping power, in the explored range of Reynolds number, was generally achieved by the larger rib pitch-to-height ratio
(= 13.33).
channel flow, convection, engines, friction, gas turbines, turbulence
Flow (Dynamics), Friction, Heat transfer, Heat transfer coefficients, Liquid crystals, Reynolds number, Heat flux, Heating, Temperature, Pressure drop, Fluids, Forced convection
Forced Convection Heat Transfer From Surfaces Roughened by Transverse Ribs
Experimental Heat Transfer and Pressure Drop With Two-Dimensional Turbulence Promoter Applied to Two Opposite Walls of a Square Tube
Turbulent Convective Heat Transfer From Rough Surfaces With Two-Dimensional Rectangular Ribs
Enhanced Heat Transfer in a Flat Rectangular Duct With Streamwise-Periodic Disturbances at One Principal Wall
Experimental Heat Transfer and Friction in Channels Roughened With Angled, V-Shaped, and Discrete Ribs on Two Opposite Walls
45 deg Staggered Rib Heat Transfer Coefficient Measurements in a Square Channel
Experimental and Numerical Study of Developed Flow and Heat Transfer in Coolant Channels With 45 deg Ribs
Heat (Mass) Transfer in a Diagonally Oriented Rotating Two-Pass Channel With Rib-Roughened Walls
Local Heat/Mass Transfer Measurements in a Rectangular Duct With Discrete Ribs
Heat Transfer in Rotating Rectangular Cooling Channels (AR=4) With Angled Ribs
Heat Transfer and Friction in Channels With Very High Blockage 45° Staggered Turbulators
Proceedings of the ASME Turbo Expo 2003 Power for Land, Sea and Air
A Numerical Study of Flow and Heat Transfer in Rotating Rectangular Channels (AR=4) with 45 deg Rib Turbulators by Reynolds Stress Turbulence Model
Flow and Heat Transfer in Straight Cooling Passages With Inclined Ribs on Opposite Walls: An Experimental and Computational Study
Heat Transfer in 1:4 Rectangular Passages With Rotation
Spatially Resolved Surface Heat Transfer for Parallel Rib Turbulators With 45 deg Orientations Including Test Surface Conduction Analysis
Thermal Performance of Angled, V-Shaped and W-Shaped Rib Turbulators in Rotating Rectangular Cooling Channels (AR=4:1)
Heat Transfer in Two-Pass Rotating Rectangular Channels (AR=1:2 and AR=1:4) With 45 deg Angled Rib Turbulators
Experimental and Numerical Investigation of Convection Heat Transfer in a Rectangular Channel With Angled Ribs
Rib Spacing Effect on Heat Transfer in Rotating Two-Pass Ribbed Channels (AR=1:2)
Effect of Rib Spacing on Heat Transfer in a Two-Pass Rectangular Channel (AR=1:4) With a Sharp Entrance at High Rotation Number
Proceedings of the ASME Turbo Expo 2008 for Land, Sea and Air
Turbulent Flow Heat Transfer and Friction in a Rectangular Channel With Varying Numbers of Ribbed Walls
Heat Transfer and Friction Behaviours in Rectangular Channels With Varying Number of Ribbed Walls
Heat Transfer and Pressure Drop Measurements in Rib-Roughened Rectangular Ducts Distribution in Rectangular Ducts With V-Shaped Ribs
Heat Transfer and Pressure Drop in a Rectangular Channel With Diamond-Shaped Elements
Experiments on Turbulent Heat Transfer in an Asymmetrically Heated Rectangular Duct |
Differentially 4-uniform permutations - Boolean Functions
Known constructions of differentially 4-uniform permutations of {\displaystyle \mathbb {F} _{2^{n}}} include the following families, each listed with its conditions:
{\displaystyle x^{2^{i}+1}} (Gold), with {\displaystyle gcd(i,n)=2,n=2t}
{\displaystyle x^{2^{2i}-2^{i}+1}} (Kasami), with {\displaystyle gcd(i,n)=2,n=2t}
{\displaystyle x^{2^{n}-2}} (inverse), with {\displaystyle n=2t}
{\displaystyle x^{2^{2t}-2^{t}+1}}, with {\displaystyle n=4t}
{\displaystyle \alpha x^{2^{s}+1}+\alpha ^{2^{t}}x^{2^{-t}+2^{t+s}}}, with {\displaystyle n=3t}, {\displaystyle t/2} odd, {\displaystyle gcd(n,s)=2,3|t+s}, and {\displaystyle \alpha } a primitive element of {\displaystyle \mathbb {F} _{2^{n}}}
{\displaystyle x^{-1}+\mathrm {Tr} _{1}^{n}(x+(x^{-1}+1)^{-1})}, with {\displaystyle n=2t}
{\displaystyle x^{-1}+\mathrm {Tr} _{1}^{n}(x^{-3(2^{k}+1)}+(x^{-1}+1)^{3(2^{k}+1)})}, with {\displaystyle n=2t} and {\displaystyle 2\leq k\leq t-1}
{\displaystyle L_{u}(F^{-1}(x))|_{H_{u}}}, with {\displaystyle n=2t,} where {\displaystyle F(x)} is a permutation of {\displaystyle {\mathbb {F} }_{2^{n+1}}}, {\displaystyle u\in {\mathbb {F} }_{2^{n+1}}^{*},} {\displaystyle L_{u}(x)=F(x)+F(x+u)+F(u),} and {\displaystyle H_{u}=\{L_{u}(x)|x\in {\mathbb {F} }_{2^{n+1}}\}}
{\displaystyle \displaystyle \sum _{i=0}^{2^{n}-3}x^{i}}, with {\displaystyle n=2t}
{\displaystyle x^{-1}+t(x^{2^{s}}+x)^{2^{sn}-1}}, for suitable {\displaystyle s,n} and {\displaystyle t\in {\mathbb {F} }_{2^{s}}^{*}}
{\displaystyle x^{2^{k}+1}+t(x^{2^{s}}+x)^{2^{sn}-1}}, for suitable {\displaystyle n,s}, with {\displaystyle t\in {\mathbb {F} }_{2^{s}}^{*},gcd(k,sn)=1}
{\displaystyle (x,x_{n})\mapsto ((1+x_{n})x^{-1}+x_{n}\alpha x^{-1},f(x,x_{n}))} in dimension {\displaystyle n}, with {\displaystyle x,\alpha \in {\mathbb {F} }_{2^{n-1}},x_{n}\in {\mathbb {F} }_{2},\mathrm {Tr} _{1}^{n-1}(\alpha )=\mathrm {Tr} _{1}^{n-1}\left({\frac {1}{\alpha }}\right)=1,} where {\displaystyle f(x,x_{n})} is an {\displaystyle (n,1)}-function
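The defining property can be checked by exhaustive search for small n. The following Python sketch (representing GF(2^6) with the irreducible polynomial x^6 + x + 1, an arbitrary choice made here for illustration) verifies that the inverse function x^{2^n-2} is a differentially 4-uniform permutation for the even dimension n = 6:

```python
# Differential uniformity of F(x) = x^(2^n - 2) over GF(2^n), here n = 6.
# Field elements are the integers 0..63; reduction is modulo x^6 + x + 1.
N = 6
MOD = 0b1000011          # x^6 + x + 1

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^6) with on-the-fly reduction."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= MOD
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def F(x):
    """The inverse function x^(2^n - 2), with F(0) = 0."""
    return gf_pow(x, (1 << N) - 2) if x else 0

size = 1 << N
# delta_F = max over a != 0 and b of #{x : F(x + a) + F(x) = b}
delta = 0
for a in range(1, size):
    counts = [0] * size
    for x in range(size):
        counts[F(x ^ a) ^ F(x)] += 1
    delta = max(delta, max(counts))

print(sorted(F(x) for x in range(size)) == list(range(size)))  # permutation -> True
print(delta)   # -> 4, as expected for the inverse map in even dimension
```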
Magnetic Field in Two-Pole Electric Motor - MATLAB & Simulink - MathWorks 한êµ
The magnetic permeability of air and of copper are both close to the magnetic permeability of a vacuum, μ = μ0. The magnetic permeability of the stator and the rotor is μ = 5000μ0. The current density J is 0 everywhere except in the coil, where it is 10 A/m2.
The geometry of the problem makes the magnetic vector potential A symmetric with respect to the y-axis and antisymmetric with respect to the x-axis. Therefore, you can limit the domain to x ≥ 0, y ≥ 0, with the default boundary condition
n\cdot \left(\frac{1}{\mathrm{\mu }}\nabla A\right)=0
15 June 2017 Large greatest common divisor sums and extreme values of the Riemann zeta function
Andriy Bondarenko, Kristian Seip
It is shown that the maximum of
|\zeta \left(1/2+it\right)|
on the interval
{T}^{1/2}\le t\le T
is at least
exp\left(\left(1/\sqrt{2}+o\left(1\right)\right)\sqrt{logTlogloglogT/loglogT}\right).
Our proof uses Soundararajan’s resonance method and a certain large greatest common divisor sum. The method of proof shows that the absolute constant
A
in the inequality
{sup }_{1\le {n}_{1}<\cdots <{n}_{N}}{\sum }_{k,\ell =1}^{N}\frac{gcd\left({n}_{k},{n}_{\ell }\right)}{\sqrt{{n}_{k}{n}_{\ell }}}\ll Nexp\left(A\sqrt{\frac{logNlogloglogN}{loglogN}}\right),
established in a recent paper of ours, cannot be taken smaller than 1.
Andriy Bondarenko. Kristian Seip. "Large greatest common divisor sums and extreme values of the Riemann zeta function." Duke Math. J. 166 (9) 1685 - 1701, 15 June 2017. https://doi.org/10.1215/00127094-0000005X
Received: 5 August 2015; Revised: 23 July 2016; Published: 15 June 2017
Keywords: Extreme values , greatest common divisor sums , Riemann zeta function
"meV", "keV", "MeV", "GeV", "TeV" and "PeV" redirect here. For other uses, see MEV, KEV, GEV, TEV and PEV.
In physics, an electronvolt (symbol eV, also written electron-volt and electron volt) is the measure of an amount of kinetic energy gained by a single electron accelerating from rest through an electric potential difference of one volt in vacuum. When used as a unit of energy, the numerical value of 1 eV in joules (symbol J) is equivalent to the numerical value of the charge of an electron in coulombs (symbol C). Under the 2019 redefinition of the SI base units, this sets 1 eV equal to the exact value 1.602176634×10−19 J.[1]
Historically, the electronvolt was devised as a standard unit of measure through its usefulness in electrostatic particle accelerator sciences, because a particle with electric charge q gains an energy E = qV after passing through a voltage of V. Since q must be an integer multiple of the elementary charge e for any isolated particle, the gained energy in units of electronvolts conveniently equals that integer times the voltage.
It is a common unit of energy within physics, widely used in solid state, atomic, nuclear, and particle physics, and high-energy astrophysics. It is commonly used with the metric prefixes milli-, kilo-, mega-, giga-, tera-, peta- or exa- (meV, keV, MeV, GeV, TeV, PeV and EeV respectively). In some older documents, and in the name Bevatron, the symbol BeV is used, which stands for billion (109) electronvolts; it is equivalent to the GeV.
Energy: 1 eV = 1.602176634×10−19 J
Momentum: 1 eV/c = 5.344286×10−28 kg·m/s
Temperature: 1 eV/kB = 1.160451812×104 K
An electronvolt is the amount of kinetic energy gained or lost by a single electron accelerating from rest through an electric potential difference of one volt in vacuum. Hence, it has a value of one volt, 1 J/C, multiplied by the electron's elementary charge e, 1.602176634×10−19 C.[2] Therefore, one electronvolt is equal to 1.602176634×10−19 J.[3]
The electronvolt, as opposed to the volt, is not an SI unit. The electronvolt (eV) is a unit of energy whereas the volt (V) is the derived SI unit of electric potential. The SI unit for energy is the joule (J).
By mass–energy equivalence, the electronvolt corresponds to a unit of mass. It is common in particle physics, where units of mass and energy are often interchanged, to express mass in units of eV/c2, where c is the speed of light in vacuum (from E = mc2). It is common to informally express mass in terms of eV as a unit of mass, effectively using a system of natural units with c set to 1.[4] The kilogram equivalent of 1 eV/c2 is:
{\displaystyle 1\;{\text{eV}}/c^{2}={\frac {(1.602\ 176\ 634\times 10^{-19}\;{\text{C}})\cdot 1\,{\text{V}}}{(2.99\ 792\ 458\times 10^{8}\;\mathrm {m/s} )^{2}}}=1.782\ 661\ 92\times 10^{-36}\;{\text{kg}}.}
For example, an electron and a positron, each with a mass of 0.511 MeV/c2, can annihilate to yield 1.022 MeV of energy. The proton has a mass of 0.938 GeV/c2. In general, the masses of all hadrons are of the order of 1 GeV/c2, which makes the gigaelectronvolt a convenient unit of mass for particle physics:
1 GeV/c2 = 1.78266192×10−27 kg.
The unified atomic mass unit (u), almost exactly 1 gram divided by the Avogadro number, is almost the mass of a hydrogen atom, which is mostly the mass of the proton. To convert to electronvolts, use the formula:
1 u = 931.49410242 MeV/c2.
Momentum
By dividing a particle's kinetic energy in electronvolts by the fundamental constant c (the speed of light), one can describe the particle's momentum in units of eV/c.[5] In natural units where the fundamental velocity constant c is 1, the c may informally be omitted to express momentum as electronvolts.
The energy–momentum relation in natural units,
{\displaystyle E^{2}=p^{2}+m_{0}^{2}}
, is a Pythagorean equation that can be visualized as a right triangle where the total energy
{\displaystyle E}
is the hypotenuse and the momentum
{\displaystyle p}
{\displaystyle m_{0}}
are the two legs.
The energy momentum relation
{\displaystyle E^{2}=p^{2}c^{2}+m_{0}^{2}c^{4}}
in natural units (with
{\displaystyle c=1}
{\displaystyle E^{2}=p^{2}+m_{0}^{2}}
is a Pythagorean equation. When a relatively high energy is applied to a particle with relatively low rest mass, it can be approximated as
{\displaystyle E\simeq p}
in high-energy physics such that an applied energy in units of eV conveniently results in an approximately equivalent change of momentum in units of eV/c.
The dimensions of momentum units are T−1LM. The dimensions of energy units are T−2L2M. Dividing the units of energy (such as eV) by a fundamental constant (such as the speed of light) that has units of velocity (T−1L) facilitates the required conversion for using energy units to describe momentum.
For example, if the momentum p of an electron is said to be 1 GeV, then the conversion to MKS system of units can be achieved by:
{\displaystyle p=1\;{\text{GeV}}/c={\frac {(1\times 10^{9})\cdot (1.602\ 176\ 634\times 10^{-19}\;{\text{C}})\cdot (1\;{\text{V}})}{2.99\ 792\ 458\times 10^{8}\;{\text{m}}/{\text{s}}}}=5.344\ 286\times 10^{-19}\;{\text{kg}}\cdot {\text{m}}/{\text{s}}.}
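The same conversion, sketched in Python with the exact SI values of the elementary charge and the speed of light:

```python
e = 1.602176634e-19   # elementary charge in coulombs (exact by definition)
c = 2.99792458e8      # speed of light in m/s (exact by definition)

def gev_per_c_to_si(p_gev):
    """Convert a momentum given in GeV/c to kg*m/s."""
    return p_gev * 1e9 * e / c

print(f"{gev_per_c_to_si(1.0):.6e}")   # -> 5.344286e-19 (kg*m/s)
```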
In particle physics, a system of natural units in which the speed of light in vacuum c and the reduced Planck constant ħ are dimensionless and equal to unity is widely used: c = ħ = 1. In these units, both distances and times are expressed in inverse energy units (while energy and mass are expressed in the same units, see mass–energy equivalence). In particular, particle scattering lengths are often presented in units of inverse particle masses.
{\displaystyle \hbar ={\frac {h}{2\pi }}=1.054\ 571\ 817\ 646\times 10^{-34}\ {\mbox{J s}}=6.582\ 119\ 569\ 509\times 10^{-16}\ {\mbox{eV s}}.}
The above relations also allow expressing the mean lifetime τ of an unstable particle (in seconds) in terms of its decay width Γ (in eV) via Γ = ħ/τ. For example, the B0 meson has a lifetime of 1.530(9) picoseconds, a mean decay length of cτ = 459.7 μm, and a decay width of (4.302±0.025)×10−4 eV.
Energy in electronvolts is sometimes expressed through the wavelength of light with photons of the same energy:
{\displaystyle {\frac {1\;{\text{eV}}}{hc}}={\frac {1.602\ 176\ 634\times 10^{-19}\;{\text{J}}}{(2.99\ 792\ 458\times 10^{10}\;{\text{cm}}/{\text{s}})\cdot (6.62\ 607\ 015\times 10^{-34}\;{\text{J}}\cdot {\text{s}})}}\thickapprox 8065.5439\;{\text{cm}}^{-1}.}
In certain fields, such as plasma physics, it is convenient to use the electronvolt to express temperature. The electronvolt is divided by the Boltzmann constant to convert to the Kelvin scale:
{\displaystyle {1eV \over k_{\text{B}}}={1.602\ 176\ 634\times 10^{-19}{\text{ J}} \over 1.380\ 649\times 10^{-23}{\text{ J/K}}}=11\ 604.518\ 12{\text{ K}}.}
Here, kB is the Boltzmann constant, K is the kelvin, J is the joule, and eV is the electronvolt.
The kB factor is implied when using the electronvolt to express temperature; for example, a typical magnetic confinement fusion plasma is 15 keV (kilo-electronvolts), which is equal to 174 MK (million kelvin).
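As a quick numerical check of the temperature conversion above (constants are the exact SI values):

```python
# Convert an energy in eV to an equivalent temperature in kelvin: T = E / k_B.
e = 1.602176634e-19   # J per eV (exact)
k_B = 1.380649e-23    # Boltzmann constant, J/K (exact)

def eV_to_kelvin(E_eV: float) -> float:
    return E_eV * e / k_B

print(f"1 eV   -> {eV_to_kelvin(1.0):,.3f} K")   # ≈ 11,604.518 K
print(f"15 keV -> {eV_to_kelvin(15e3):.3e} K")   # ≈ 1.74e8 K, i.e. 174 MK
```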
{\displaystyle E=h\nu ={\frac {hc}{\lambda }}={\frac {(4.135\ 667\ 516\times 10^{-15}\,\mathrm {eV\,s} )(299\,792\,458\,\mathrm {m/s} )}{\lambda }}}
where h is the Planck constant, c is the speed of light. This reduces to[1]
{\displaystyle {\begin{aligned}E\mathrm {(eV)} &=4.135\ 667\ 516\,{\mbox{feVs}}\cdot \nu \ {\mbox{(PHz)}}\\[4pt]&={\frac {1\ 239.841\ 93\,{\mbox{eV}}\,{\mbox{nm}}}{\lambda \ {\mbox{(nm)}}}}.\end{aligned}}}
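The photon-energy relation above can be checked numerically; the constants below are the ones used in the text:

```python
# E(eV) = hc / λ, with hc ≈ 1239.84 eV·nm.
h = 4.135667516e-15   # Planck constant, eV·s (value used in the text)
c = 2.99792458e17     # speed of light, nm/s

hc = h * c            # eV·nm
print(f"hc = {hc:.5f} eV·nm")

def photon_energy_eV(wavelength_nm: float) -> float:
    return hc / wavelength_nm

print(photon_energy_eV(532.0))  # a 532 nm green laser photon ≈ 2.33 eV
```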
Scattering experiments
Energy comparisons
Photon energy vs. photon frequency, in electronvolts. The energy of a photon varies only with its frequency, the two being related by the Planck constant. This contrasts with a massive particle, whose energy depends on its velocity and rest mass.[6][7][8] Legend:
γ: gamma rays; HX: hard X-rays; SX: soft X-rays; EUV: extreme ultraviolet; NUV: near ultraviolet; visible light; NIR: near infrared; MIR: mid infrared; FIR: far infrared.
Radio waves: EHF (extremely high freq.), SHF (super high freq.), UHF (ultra high freq.), VHF (very high freq.), HF (high freq.), MF (medium freq.), LF (low freq.), VLF (very low freq.), VF/ULF (voice freq.), SLF (super low freq.), ELF (extremely low freq.).
5.25×1032 eV total energy released from a 20 kt nuclear fission device
1.22×1028 eV the Planck energy
10 YeV (1×1025 eV) approximate grand unification energy
~624 EeV (6.24×1020 eV) energy consumed by a single 100-watt light bulb in one second (100 W = 100 J/s ≈ 6.24×1020 eV/s)
300 EeV (3×1020 eV = ~50 J) The first ultra-high-energy cosmic ray particle observed, the so-called Oh-My-God particle.[9]
2 PeV two petaelectronvolts, the most high-energetic neutrino detected by the IceCube neutrino telescope in Antarctica[10]
14 TeV designed proton center-of-mass collision energy at the Large Hadron Collider (operated at 3.5 TeV since its start on 30 March 2010, reached 13 TeV in May 2015)
1 TeV a trillion electronvolts, or 1.602×10−7 J, about the kinetic energy of a flying mosquito[11]
172 GeV rest energy of top quark, the heaviest measured elementary particle
125.1±0.2 GeV energy corresponding to the mass of the Higgs boson, as measured by two separate detectors at the LHC to a certainty better than 5 sigma[12]
210 MeV average energy released in fission of one Pu-239 atom
200 MeV approximate average energy released in the nuclear fission fragments of one U-235 atom.
105.7 MeV rest energy of a muon
17.6 MeV average energy released in the nuclear fusion of deuterium and tritium to form He-4; this is 0.41 PJ per kilogram of product produced
2 MeV approximate average energy of a nuclear fission neutron released from one U-235 atom.
1.9 MeV rest energy of up quark, the lowest mass quark.
1 MeV (1.602×10−13 J) about twice the rest energy of an electron
1 to 10 keV approximate thermal temperature, kBT, in nuclear fusion systems, like the core of the sun, magnetically confined plasma, inertial confinement and nuclear weapons
13.6 eV the energy required to ionize atomic hydrogen; molecular bond energies are on the order of 1 eV to 10 eV per bond
1.6 eV to 3.4 eV the photon energy of visible light
1.1 eV energy Eg required to break a covalent bond in silicon
720 meV energy Eg required to break a covalent bond in germanium
< 120 meV approximate rest energy of neutrinos (sum of 3 flavors)[13]
25 meV thermal energy, kBT, at room temperature; one air molecule has an average kinetic energy of 38 meV
230 μeV thermal energy, kBT, of the cosmic microwave background
Per mole
One mole of particles given 1 eV of energy has approximately 96.5 kJ of energy – this corresponds to the Faraday constant (F ≈ 96485 C mol−1), where the energy in joules of n moles of particles each with energy E eV is equal to E·F·n.
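The Faraday-constant relation above can be checked directly (exact SI constants; the helper name is illustrative):

```python
# Energy of n moles of particles, each carrying E electronvolts: E · F · n joules.
e = 1.602176634e-19   # elementary charge, C (exact)
N_A = 6.02214076e23   # Avogadro constant, 1/mol (exact)
F = e * N_A           # Faraday constant, C/mol

def mole_energy_joules(E_eV: float, n_mol: float = 1.0) -> float:
    return E_eV * F * n_mol

print(F)                        # ≈ 96485.33 C/mol
print(mole_energy_joules(1.0))  # ≈ 96485 J ≈ 96.5 kJ for one mole at 1 eV each
```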
^ a b "CODATA Value: Planck constant in eV s". Archived from the original on 22 January 2015. Retrieved 30 March 2015.
^ "2018 CODATA Value: elementary charge". The NIST Reference on Constants, Units, and Uncertainty. NIST. 20 May 2019. Retrieved 2019-05-20.
^ "2018 CODATA Value: electron volt". The NIST Reference on Constants, Units, and Uncertainty. NIST. 20 May 2019. Retrieved 2019-05-20.
^ Bibcode:1983QJRAS..24...24B
^ "Units in particle physics". Associate Teacher Institute Toolkit. Fermilab. 22 March 2002. Archived from the original on 14 May 2011. Retrieved 13 February 2011.
^ Elert, Glenn. "Electromagnetic Spectrum, The Physics Hypertextbook". hypertextbook.com. Archived from the original on 2016-07-29. Retrieved 2016-07-30.
^ "Definition of frequency bands on". Vlf.it. Archived from the original on 2010-04-30. Retrieved 2010-10-16.
^ Open Questions in Physics. Archived 2014-08-08 at the Wayback Machine German Electron-Synchrotron. A Research Centre of the Helmholtz Association. Updated March 2006 by JCB. Original by John Baez.
^ "A growing astrophysical neutrino signal in IceCube now features a 2-PeV neutrino". Archived from the original on 2015-03-19.
^ Glossary Archived 2014-09-15 at the Wayback Machine - CMS Collaboration, CERN
^ ATLAS; CMS (26 March 2015). "Combined Measurement of the Higgs Boson Mass in pp Collisions at √s=7 and 8 TeV with the ATLAS and CMS Experiments". Physical Review Letters. 114 (19): 191803. arXiv:1503.07589. Bibcode:2015PhRvL.114s1803A. doi:10.1103/PhysRevLett.114.191803. PMID 26024162.
^ Mertens, Susanne (2016). "Direct neutrino mass experiments". Journal of Physics: Conference Series. 718 (2): 022013. arXiv:1605.01579. Bibcode:2016JPhCS.718b2013M. doi:10.1088/1742-6596/718/2/022013. S2CID 56355240.
Active-Disturbance-Rejection-Control for Temperature Control of the HVAC System
The heating, ventilation, and air conditioning (HVAC) system is significant to the energy efficiency of buildings. In this paper, temperature control of an HVAC system is studied for the winter operation season. Physical models of the zone, the fan, the heating coil and the sensor are built. HVAC is a non-linear, strongly disturbed and coupled system. Linear active-disturbance-rejection control is an appropriate control algorithm which can cope with limited model information and strong disturbances, and has a relatively fixed structure and a simple tuning process for the controller parameters. Active-disturbance-rejection control of the HVAC system is proposed, and simulations were carried out in Matlab/Simulink. The results show that linear active-disturbance-rejection control outperformed PID and integral-fuzzy controllers in rise time, overshoot and response time to a step disturbance. The study can provide a fundamental basis for the control of air-conditioning systems subject to strong disturbances and requiring high precision.
HVAC System, Linear Active-Disturbance-Rejection Control, PID Control, Integral-Fuzzy Control, Temperature Control
Huang, C. , Li, C. and Ma, X. (2018) Active-Disturbance-Rejection-Control for Temperature Control of the HVAC System. Intelligent Control and Automation, 9, 1-9. doi: 10.4236/ica.2018.91001.
{\rho }_{a}{C}_{pa}V\frac{\text{d}{T}_{z}}{\text{d}t}={m}_{a}{C}_{pa}\left({T}_{s}-{T}_{z}\right)+KF\left({T}_{o}-{T}_{z}\right)+q\left(t\right)
{T}_{z}\left(s\right)={G}_{z}\left(s\right)\left[\lambda T\left(s\right)+\gamma {T}_{o}\left(s\right)+q\left(s\right)\right]
{G}_{z}\left(s\right)=\frac{1}{{\rho }_{a}{C}_{pa}Vs+{m}_{a}{C}_{pa}+KF}=\frac{1}{45.23s+60.24}
\lambda ={m}_{a}{C}_{pa}
\gamma =KF
{G}_{F}\left(s\right)=\frac{1}{s+1}
{G}_{S}\left(s\right)=\frac{1}{s+1}
{C}_{ah}\frac{\text{d}{T}_{co}}{\text{d}t}={f}_{sw}{\rho }_{w}{C}_{pw}\left({T}_{wi}-{T}_{wo}\right)+{\left(UA\right)}_{ah}\left({T}_{o}-{T}_{co}\right)+{f}_{sa}{\rho }_{a}{C}_{pa}\left({T}_{m}-{T}_{co}\right)
{T}_{co}\left(s\right)={G}_{A}\left(s\right)\left[\alpha {T}_{wi}\left(s\right)+{\left(UA\right)}_{ah}{T}_{o}\left(s\right)+\beta {T}_{m}\left(s\right)\right]
{G}_{A}\left(s\right)=\frac{1}{{C}_{ah}s+\left[{\left(UA\right)}_{ah}+\beta \right]}=\frac{1}{4.5s+0.28}
{G}_{p}\left(s\right)=\frac{1}{0.63s+0.21}
\left\{\begin{array}{l}{\stackrel{˙}{z}}_{1}={z}_{2}+{\beta }_{1}\left(-{z}_{1}+y\right)+{b}_{0}u\\ {\stackrel{˙}{z}}_{2}={\beta }_{2}\left(-{z}_{1}+y\right)\end{array}
where {\beta }_{1} and {\beta }_{2} are the observer gains, and {z}_{1} and {z}_{2} are the observer states: {z}_{1} tracks the output y and {z}_{2} estimates the total disturbance.
\left\{\begin{array}{l}u=\left(-{z}_{2}+{u}_{0}\right)/{b}_{0}\\ {u}_{0}={k}_{p}\left({T}_{r}-{z}_{1}\right)\end{array}
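The extended state observer and control law above can be sketched as a short discrete-time simulation. This is a minimal illustration rather than the paper's code: it assumes the zone model G_z(s) = 1/(45.23 s + 60.24) as the plant, forward-Euler integration, and hypothetical bandwidth-style tunings (β1 = 2ωo, β2 = ωo²) not taken from the paper:

```python
# Minimal linear ADRC (LESO + proportional law) on the zone model
# dT/dt = (u + d - 60.24*T) / 45.23, with assumed tuning values.
dt, t_end = 0.01, 120.0
Tr = 20.0                 # temperature setpoint
b0 = 1.0 / 45.23          # plant input gain estimate
wo, kp = 5.0, 1.0         # observer bandwidth and controller gain (assumed)
beta1, beta2 = 2 * wo, wo ** 2

T = 0.0                   # zone temperature (plant state)
z1 = z2 = 0.0             # LESO states: z1 -> y, z2 -> total disturbance
d = 5.0                   # unknown constant disturbance acting on the plant

t = 0.0
while t < t_end:
    y = T
    u0 = kp * (Tr - z1)
    u = (u0 - z2) / b0                       # disturbance-cancelling control law
    e_obs = y - z1                           # observer innovation
    z1 += dt * (z2 + beta1 * e_obs + b0 * u) # LESO update
    z2 += dt * (beta2 * e_obs)
    T += dt * ((u + d - 60.24 * T) / 45.23)  # plant update (forward Euler)
    t += dt

print(f"T after {t_end:.0f} s: {T:.3f} (setpoint {Tr})")
```

With the observer estimating the total disturbance in z2, the control law cancels it and the output settles on the setpoint with no steady-state error.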
The inputs of the fuzzy controller are the error e and its rate of change \text{d}e/\text{d}t, with e\in \left[-2,2\right] and \text{d}e/\text{d}t\in \left[-0.5,0.5\right].
Beam (nautical)
Width of a ship at its widest point
Graphical representation of the dimensions used to describe a ship. Dimension "b" is the beam at waterline.
The beam of a ship is its width at its widest point. The maximum beam (BMAX) is the distance between planes passing through the outer extremities of the ship, beam of the hull (BH) only includes permanently fixed parts of the hull, and beam at waterline (BWL) is the maximum width where the hull intersects the surface of the water.[1]
Generally speaking, the wider the beam of a ship (or boat), the more initial stability it has, at the expense of secondary stability in the event of a capsize, where more energy is required to right the vessel from its inverted position. A ship that heels on her beam ends has her deck beams nearly vertical.[2]
Typical length-to-beam ratios (aspect ratios) for small sailboats are from 2:1 (dinghies to trailerable sailboats around 20 ft or 6 m) to 5:1 (racing sailboats over 30 ft or 10 m).
Rowing shells designed for flatwater racing may have length to beam ratios as high as 30:1,[3] while a coracle has a ratio of almost 1:1 – it is nearly circular.
{\displaystyle Beam=LOA^{\frac {2}{3}}+1}
where LOA is the length overall and all lengths are in feet.
For a standard 27 ft (8.2 m) yacht: the cube root of 27 is 3, 3 squared is 9 plus 1 = 10. The beam of many 27 ft monohulls is 10 ft (3.05 m).
For a Volvo Open 70 yacht: 70.5 to the power of 2/3 = 17 plus 1 = 18. The beam is often around 18 ft (5.5 m).
For a 741 ft (226 m) long ship: the cube root is 9, and 9 squared is 81, plus 1. The beam will usually be around 82 ft (25 m), e.g. Seawaymax.
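The rule of thumb and the three worked examples above can be captured in a tiny function; this is a sketch of the approximation given in the text, with all lengths in feet:

```python
def rule_of_thumb_beam(loa_ft: float) -> float:
    """Approximate beam (ft) from length overall (ft): LOA^(2/3) + 1."""
    return loa_ft ** (2.0 / 3.0) + 1.0

for loa in (27, 70.5, 741):
    print(f"LOA {loa} ft -> beam ≈ {rule_of_thumb_beam(loa):.1f} ft")
```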
As catamarans have more than one hull, there is a different beam calculation for this kind of vessel.
BOC stands for Beam On Centerline. This term is typically used in conjunction with LOA (length overall). The ratio LOA/BOC is used to estimate the stability of multihull vessels: the lower the ratio, the greater the boat's stability.
The BOC is measured as follows. For a catamaran: the perpendicular distance from the centerline of one hull to the centerline of the other hull, measured at deck level. For a trimaran: the perpendicular distance between the centerline of the main hull and the centerline of either ama, measured at deck level.
Other meanings of 'beam' in the nautical context are:
Beam – a timber similar in use to a floor joist, which runs horizontally from one side of the hull to the other athwartships.
Carlin – similar to a beam, except running in a fore and aft direction.
Beam – the direction across the vessel, perpendicular to fore-and-aft; something lying in that direction is said to be abeam.
^ "ISO 8666:2016". International Organization for Standardization. July 2016. Retrieved 31 March 2020.
^ "Definition of BEAM-ENDS". www.merriam-webster.com. Retrieved 2020-06-05.
^ Science News Online: Ivars Peterson's MathTrek (7/17/99): Row Your Boat
Writeup SVATTT 2021 (ASCIS 2021)
The challenges in the 2 sections of this CTF (crypto and misc) are all built from practical ideas, so they are not as hard as in a usual CTF.
A challenge about a digital certificate problem, just the basic things. Full source code can be found here
Reading the source code, I figured out that there is a route /flag which will tell us the flag of this challenge, but only admin can access the content of the flag
if session["role"] == ROLE_ADMIN:
flag = "ASCIS{xxxxxx}"
There is a register function, but we can't register as admin. It only allows us to register as a normal user
However, examining the source code carefully, I found that there is another way to log in without an admin account: the /logincert route
# This function only for admin
@app.route("/logincert", methods=('GET', 'POST'))
def logincert():
split_tup = os.path.splitext(uploaded_file.filename)
if split_tup[1] != ".pem":
flash('Cert file is invalid')
return render_template('logincert.html')
username = validate_certificate(uploaded_file)
flash('Login cert is invalid!')
session["role"] = ROLE_ADMIN
Notice the line username = validate_certificate(uploaded_file). Following the code, it leads us to the verify_certificate_chain(cert_pem, trusted_certs) function in the file certutils.py.
def verify_certificate_chain(cert_pem, trusted_certs):
certificate = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)
# parse ceritificate information
clientcert = CertInfo(certificate)
# get subject common name
subject = clientcert.subject_cn
issuer = clientcert.issuer_cn
# Check if subject is admin user
if subject != "admin":
raise Exception("Not trusted user")
# validate issuer
if issuer != "ca":
raise Exception("Not trusted ca")
thumbprint = clientcert.digest_sha256.decode('utf-8')
#TODO: validate thumbprint
#Create a certificate store and add your trusted certs
store = crypto.X509Store()
# Assuming the certificates are in PEM format in a trusted_certs list
for _cert in trusted_certs:
cert_file = open(_cert, 'r')
cert_data = cert_file.read()
client_certificate = crypto.load_certificate(crypto.FILETYPE_PEM, cert_data)
store.add_cert(client_certificate)
# Create a certificate context using the store
store_ctx = crypto.X509StoreContext(store, certificate)
# Verify the certificate signature; verify_certificate() returns None if it can validate the certificate
try:
    store_ctx.verify_certificate()
    return subject
except crypto.X509StoreContextError:
    print("[+] Debug certificate validation failed")
For someone who, like me, does not really understand the details of digital certificates, the code can be a bit overwhelming and confusing. However, just by paying attention to the output and the requirements in the 2 if statements, I can draw the 2 following conclusions.
There are 2 requirements to successfully log in by certificate: the subject must be admin and the issuer must be ca. It's really easy~~
After passing these 2 requirements, it returns the subject, which is actually admin. Now we will have an admin session, and thus get the flag.
So, let's go on to create a digital certificate.
2. Create digital certificate
I use openssl on a Kali Linux machine to create the digital certificate. The idea is simple:
Create a certificate owned by ca → subject = ca and issuer = ca
Create another certificate owned by admin → subject = admin and issuer = admin
Sign the second certificate with the first certificate → subject = admin and issuer = ca
For the technical details, follow the steps below
Create a RSA key pair for ca certificate
$ openssl genrsa -out ca.key 2048
Create a certificate owned by ca. Leave all other fields blank and fill in the Common Name as ca
$ openssl req -new -x509 -days 1826 -key ca.key -out ca.crt
After creation, the CA certificate looks like this
Create a RSA key pair for admin certificate
$ openssl genrsa -out ia.key 2048
Create a certificate owned by admin. Leave all other fields blank and fill in the Common Name as admin
$ openssl req -new -key ia.key -out ia.csr
Common Name (e.g. server FQDN or YOUR name) []:admin
The created certificate request will look as follows
5. Sign the second certificate by the first certificate
$ openssl x509 -req -days 730 -in ia.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out ia.crt
subject=C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = admin
The signed certificate will look as follows
Finally, convert the certificate to .pem format to fit the code requirement
$ openssl x509 -in ia.crt -out ia.pem -outform PEM
3. Submit the digital certificate and get the flag
Submit file ia.pem to server
Logon!
And get the flag!
ConfuseOne
This is a blackbox crypto challenge! There is no source code, and it is actually the most practical one.
First of all, just register and log in. Browsing the web, I see the profile page has a suspicious line, which is "You are not admin"
The flag may be in this page, but we need to log in as admin, which is impossible normally. Now, intercepting the request, I see the site uses a JWT token. Trying to decode it, I get the following result
The point is that this token is signed with the RS256 algorithm. I googled the current vulnerabilities of JWT tokens and found a critical one related to it, which lets us forge the token by changing the algorithm from RS256 to HS256. You can learn more about the attack here, as I will not reinvent the wheel
The last problem: which parameter's value do we need to change? Looking at the JWT token, the most susceptible parameter is username, as the other parameters are either undetermined or trivial for authorization. So, we need to change the value of username to admin to get the flag!
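The algorithm-confusion forgery can also be done by hand. The sketch below is hedged: public_key_pem is a placeholder standing in for the actual PEM public key pulled from the server, and the token is assembled manually because modern JWT libraries such as PyJWT refuse to use an asymmetric public key as an HMAC secret:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> bytes:
    # JWT uses unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=")

# placeholder: in the real attack this is the exact PEM public key from the server
public_key_pem = b"-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----\n"

header = b64url(json.dumps({"typ": "JWT", "alg": "HS256"}, separators=(",", ":")).encode())
payload = b64url(json.dumps(
    {"data": {"id": "137", "username": "admin"}}, separators=(",", ":")).encode())
signing_input = header + b"." + payload

# sign with HMAC-SHA256 keyed on the *public* key bytes; a server confused
# into HS256 mode will verify the signature with those same bytes
signature = b64url(hmac.new(public_key_pem, signing_input, hashlib.sha256).digest())
forged = (signing_input + b"." + signature).decode()
print(forged)
```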
2. Attack and get the flag
I use TokenBreaker on GitHub to help me perform this attack. All I need is to pull out the server's public key, which is easy to get with openssl
a. Get public key
The technical steps are as follows
Connect to the server using openssl to get the public certificate
$ openssl s_client -connect 139.180.213.39:443
MIIDETCCAfmgAwIBAgIUaYCW/HwHq1b/axHRKM0BpixnwugwDQYJKoZIhvcNAQEL
BQAwGDEWMBQGA1UEAwwNY3J5cHRvMjAwLmNvbTAeFw0yMTEwMTQwMjM2NDBaFw0y
MjEwMTQwMjM2NDBaMBgxFjAUBgNVBAMMDWNyeXB0bzIwMC5jb20wggEiMA0GCSqG
SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDunk8oVD+9cKXT96aOdl/xZ5RqCpxsStFT
f8l/DW2/m4X5scbhq8Qhco0Mvns75KYtCWAKSvwCzgTSMDcO1/Fzt6xRI4EZPtVS
WE2Mq0VffFCYAzS6q07XWbFZ2tyFqbi/Xudh7tAA6TI098AGHKLjWZDJCA/ZbiQJ
u+7XL1y7TjCWBOEmrcWS7G1Cte1oUhUFfXygmskiTpxX+r3ABJuXT9FZcWu8ZMhl
fMGp/y00sBDCp8xxAcIl/D5lAUzWKyyxW5g46s5WSRHkGpxX/uQUGMwV/WM3/199
uvtVkQri88toQMzd03sWKJJZxuvJpwpw8vi/rbnB4c5/4wfuFjtHAgMBAAGjUzBR
MB0GA1UdDgQWBBTmW/TdQlcea4S2DtpxVqa6n6jYFTAfBgNVHSMEGDAWgBTmW/Td
Qlcea4S2DtpxVqa6n6jYFTAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUA
A4IBAQBUBWMa50jaKO5GtqdCe2jLhfmEgtc6iLr+XO8jGsK2OzaHTHO9N/mjDOJ0
AAdINbCO2qfYsXBLTgzBLiAsE+IuzfxIiTmzVoLhiV0iWuy1NMXMEy1khAtVjdkx
D1zxCdCw/xe70tmGEfVFGF45OPkdsbDa3fr6tSF2Cl7ZXehdpxuzogWAqV4zqn49
XqLzZvB5gL5LbsbjzoUImce0eIxHgrkxM1RurgyN5EwV+SxkXCGxTmdMHI3Gzebf
t5xM393St030npRIRiAIpiLZUX7Yh7+PU079rE0wHtNvqorW+CrGD92TtYS7IufT
E9PrY2ghO453/QM0jW/E429p/aha
subject=CN = crypto200.com
issuer=CN = crypto200.com
Save the certificate to a file.
Export public key from the certificate
$ openssl x509 -pubkey -noout -in cf.pem > pubkey.pem
The public key will look as follows
$ cat pubkey.pem | xxd -p | tr -d "\n"
2d2d2d2d2d424547494e205055424c4943204b45592d2d2d2d2d0a4d494942496a414e42676b71686b6947397730424151454641414f43415138414d49494243674b4341514541377035504b46512f7658436c302f656d6a6e5a660a38576555616771636245725255332f4a66773174763575462b6248473461764549584b4e444c35374f2b536d4c516c67436b723841733445306a4133447466780a6337657355534f4247543756556c684e6a4b7446583378516d414d307571744f31316d785764726368616d347631376e59653751414f6b794e506641426879690a34316d51795167503257346b4362767531793963753034776c6754684a7133466b7578745172587461464956425831386f4a724a496b3663562f7139774153620a6c302f5257584672764754495a587a42716638744e4c41517771664d635148434a66772b5a51464d31697373735675594f4f724f566b6b5235427163562f376b0a46426a4d4666316a4e2f396666627237565a454b3476504c6145444d33644e3746696953576362727961634b63504c34763632357765484f662b4d48376859370a52774944415141420a2d2d2d2d2d454e44205055424c4943204b45592d2d2d2d2d0a
Run the TokenBreaker tool to generate the new token. Remember to change the value of username to admin. The new token is shorter than the original one.
$ python3 RsaToHmac.py -t eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJpYXQiOjE2MzQ0ODE0OTIsIm5iZiI6MTYzNDQ4MjQ5MiwiZXhwIjoxNjM0NDg3NDkyLCJkYXRhIjp7ImlkIjoiMTM3IiwidXNlcm5hbWUiOiJ0aGFuZ3BkMTEiLCJlbWFpbCI6InRoYW5ncGQxMUBnbWFpbC5jb20ifX0.n7t8HqHsWYCdR4fk_-VPgRHtJuNKb1DGQPAGWcNrlaxjaRnft8fbPUOLBmgUD1xY6Xp0OL4ov4BuhvbzbOvjrAbzfjXq4MEDiadDxnObQr9c3gPrB82uoY3YyVqtg_TXa8yfz5HMWsMGpKg5QjRNVqWYCqF1-6-LNuLkp54mjPeJctcQHVONCy8tIpCR08E9_G4vpLEEYBPcXPkcD44FH56xnNUlMpDkTayhv5wZ-2nPuFiBsuNP_glp-6abAsDgMSbSHLSQc-mPEecTVx929lNHCjhzFIFqXEFdNNXt3Y3JWdx-VXIIUM2yfxKkubV8NCn8s9nfwXpbIMfIPA9rPQ -p pubkey.pem
___ ___ _ _ _ _ __ __ _ ___
| _ \/ __| /_\ | |_ ___ | || | \/ | /_\ / __|
| /\__ \/ _ \ | _/ _ \ | __ | |\/| |/ _ \ (__
|_|_\|___/_/ \_\ \__\___/ |_||_|_| |_/_/ \_\___|
[*] Decoded Header value: {"typ":"JWT","alg":"RS256"}
[*] Decode Payload value: {"iat":1634481492,"nbf":1634482492,"exp":1634487492,"data":{"id":"137","username":"thangpd11","email":"thangpd11@gmail.com"}}
[*] New header value with HMAC: {"typ":"JWT","alg":"HS256"}
[<] Modify Header? [y/N]:
[<] Enter Your Payload value: {"iat":1634481492,"nbf":1634482492,"exp":1634487492,"data":{"id":"137","username":"admin","email":"thangpd11@gmail.com"}}
[+] Successfully Encoded Token: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2MzQ0ODE0OTIsIm5iZiI6MTYzNDQ4MjQ5MiwiZXhwIjoxNjM0NDg3NDkyLCJkYXRhIjp7ImlkIjoiMTM3IiwidXNlcm5hbWUiOiJhZG1pbiIsImVtYWlsIjoidGhhbmdwZDExQGdtYWlsLmNvbSJ9fQ.HuhcSbgVOFqbUGfY9KQ2g4thh_v4TuQioNujlMiXNOY
Replace the token value in the Burp request, then submit to get the flag
This is the last challenge, but actually the easiest one ever. The idea of this challenge is just flipping one bit, so that the role changes from user (1) to admin (0); then we get the flag. Full source code can be found here
This challenge is the same as the EasyOne challenge: we also need to log in as admin, but cannot by normal registration. The difference is that there is no /logincert anymore.
Following the code, the login_required function tells us that it extracts the user role from the authtoken value in the cookies.
Following the code further, I see that the authtoken value is encrypted using AES in CFB mode.
The authtoken value is generated after we log on. I registered with the username thangpd11, so the role byte will lie in the first block.
2. Bit Flipping Attack on AES CFB
I take the following figure from Google to help illustrate the attack. The concept is very similar to the bit-flipping attack on AES CBC (the classic one).
The decryption process of the first block can be expressed as the mathematical formula
P_{1} = E(IV) \oplus C_{1}
E(IV) = P_{1} \oplus C_{1}
The point is that we want to change P_{1} to another P_{1}^{'}. We simply need to change the C_{1} value to another C_{1}^{'} value, such that the E(IV) value remains the same. In mathematical form:
E(IV) = P_{1} \oplus C_{1} = P_{1}^{'} \oplus C_{1}^{'}
which gives
C_{1}^{'} = P_{1} \oplus C_{1} \oplus P_{1}^{'}
3. Exploit and get the flag
With that in mind, I code the exploit as follows. As I'm lazy, I don't code the full exploit; instead I change the cookie value manually through Burp Suite
from base64 import b64decode, b64encode
from Crypto.Cipher import AES

def xor(a: bytes, b: bytes):
    return bytes([_a ^ _b for _a, _b in zip(a, b)])

username = 'thangpd11'
usernamebytes = username.encode()

token = b64decode(b'tzRxbyN82l8uJK06ZdSQSI+kc1x1vnjPTLXL6w==')  # authtoken in cookies value
iv = token[:AES.block_size]
cipherbytes = token[AES.block_size:]

# known plaintext layout: 2-byte little-endian username length, username, 1-byte role
old_role = 1  # user
new_role = 0  # admin
plainbytes = len(usernamebytes).to_bytes(2, "little") + usernamebytes + old_role.to_bytes(1, "little")
plainbytes_new = len(usernamebytes).to_bytes(2, "little") + usernamebytes + new_role.to_bytes(1, "little")

cipherbytes_new = xor(xor(plainbytes_new, plainbytes), cipherbytes)
ciphertext_new = b64encode(iv + cipherbytes_new)
print(ciphertext_new)
The new authtoken is tzRxbyN82l8uJK06ZdSQSI+kc1x1vnjPTLXL6g==. Let's submit it and get the flag.
This challenge is designed based on the practicalities of the covid-19 epidemic. It is as simple as building a system that automatically scans QR codes.
The ID Number, Name and Expiry date must be entered to check, just as the description of the challenge says.
The work is simple (although I couldn't solve it during the challenge time 😭, thus losing the ticket for the final round). Connect to the server, save the text as an image, use a QR scanning library to read the QR code, then submit the answer.
The full code can be found here
I can't believe that I was stuck on the part "save the text as image" for nearly 1 hour, while it could be resolved just by downloading the font! The lesson learned is to read the code carefully before googling the error 😭.
Flag: ASCIS{c0r0n4_v1rus_1s_g0n3}
That's all for my first and final ASCIS. A good time and good fun. I would like to thank all of the venerable mentors and teammates for an enthusiastic play!
WriteUp Misc CTF Crypto |
Strip photography
Fixed slit photo of a San Francisco cable car, showing prominent striped background. The vertical axis of the photo is a spatial dimension as with normal photos, but the horizontal axis is a time axis, showing the same point on the street as the cable car passed.
Strip photography, or slit photography, is a photographic technique of capturing a two-dimensional image as a sequence of one-dimensional images over time, in contrast to a normal photo which is a single two-dimensional image (the full field) at one point in time. A moving scene is recorded, over a period of time, using a camera that observes a narrow strip rather than the full field. If the subject is moving through this observed strip at constant speed, they will appear in the finished photo as a visible object. Stationary objects, like the background, will be the same the whole way across the photo and appear as stripes along the time axis; see examples on this page.
Many photographic devices use a form of strip photography due to the use of a rolling shutter for engineering reasons, and exhibit similar effects. This is common both on cheaper cameras with an electronic shutter (more sophisticated electronic shutters are global, not rolling), as well as cameras with mechanical focal-plane shutters.
This technique can be implemented in multiple ways. In film photography, a camera with a vertical slit aperture can either have fixed film and a moving slit, or a fixed slit and moving film. In digital photography, one can use a line sensor, generally one that is moving, as in a rotating line camera, but also an image scanner (flatbed or hand).
The fundamental property of strip photography is that one axis of the photo shows the scene changing over time, while the other axis does not. The simplest method of this is recording a stationary slice, perpendicular to the frame, so one axis of the photo is a spatial dimension (along the slit) and the opposite axis represents time (along the exposure time). For example, a photo finish shows one strip (e.g., the finish line) over time, where the scanning direction (e.g., horizontal) represents time, not space.
If the camera moves during the shot, like when taking a panoramic photo, there is no longer just one space axis and one time axis. Instead, for the example of a camera moving horizontally from left to right to take a panoramic shot of a landscape, the vertical axis is still just a spatial axis, but as you look from left to right along the photo, you see an image that is both further to the right of the subject (a spatial dimension) and later in the shot (the time dimension). The horizontal axis of the photo is therefore a mixed spatial and time axis, and the panorama represents a period of time, not a single instant.
In strip photography, distance is interchanged (or mixed) with time, so width in the scanning direction (say, horizontal) is proportional to time, and thus inversely proportional to speed. Slower-moving objects occupy more time, and thus appear wider, while faster-moving objects are narrower, as they occupy the slit for a shorter period of time. In extreme cases a very rapidly moving object can be captured for only a single strip or even none at all (if discrete capture, as in a digital camera), while a stationary object (notably background) will appear as a horizontal line (a stripe). These differences are particularly notable in cases of movement at differing constant speeds, such as parallax from a train window or differing speeds of traffic (esp. if in differing directions diagonal to the camera) or people walking. Further, in the case of motion towards or away from the camera, size changes, creating additional distortion.
In the case of diagonal motion in the direction of capture and towards or away from the camera, objects flare (expand vertically and compressing horizontally) as they approach and taper (compress vertically and stretch out horizontally) as they recede; this is because (by perspective) objects appear larger the closer they are (approximately as the inverse of distance), with increase in horizontal size yielding faster movement (parallax) and thus decreased size in the strip photograph. These effects are inverse in magnitude (as horizontal shrinks vertical grows, so area does not change), so objects effectively undergo a squeeze mapping, properly an inhomogeneous squeeze (magnitude of squeeze varies with distance to the camera), hence the flared shapes.
In other cases, however, particularly movement not in the direction of capture, very unusual distortions result, resembling smears. These may be compared with surrealism, such as the work of Pablo Picasso or Salvador Dalí. In cases when the exposure time (for a given location on the sensor) is slow relative to movement, this distortion combines with motion blur, yielding soft blurs.
More subtly, for fixed slit photography, as all capture is in a constant direction, there is no perspective in the image. This is conspicuous in long strips of races, where all the racers are viewed directly from the side, rather than from an angle depending on their position. This effect looks the same as the maximum possible perspective distortion in a photo taken from very far away, in which case perspective flattens ("compression distortion").
Vertical strips – so time is horizontal – are most common, and accord with horizontal scanning, as in reading, though horizontal strips – so time is vertical – are also found. In addition to horizontal and vertical strips, other forms are possible, such as radial strips.[3] Aspect ratio varies, with some photos being similar to ordinary photos ("tableau" format), emphasizing a single image, while others are long ("strip" format), emphasizing the passage of time, as in a (single panel) comic strip or traditional scroll paintings.
The most basic use is panoramic photography: capturing a large, static scene that would be difficult to capture with other techniques; scanning cameras are designed for this.
Peripheral photography, notably rollout photography
Photo finish
Slit-scan photography
Strip aerial photography
Sports are a common use of strip photography, both for photo finishes and artistic purposes. It is particularly common for racing, where movement is largely regular and predictable, but by no means limited to it. Due to the movement in sports, which is a combination of movement at a regular rate and at a changing rate, various forms of distortion are possible. An early accidental example of distortion is "Grand Prix de Circuit de la Seine" (June 26, 1912) by Jacques Henri Lartigue, where the skew caused by the vertically traveling slit makes the race car appear to lean forwards, creating a sense of speed.
Strip photography was notably used by George Silk at the US tryouts for the 1960 Summer Olympics.[1][4] Other photographers at Life and Sports Illustrated who used strip photography included John G. Zimmerman, who borrowed Silk's camera to photograph Pete Rose and later photographed basketball players Nate Archibald and Julius Erving with a slit-scan camera for Sports Illustrated, and Neil Leifer, who used it frequently in the 1970s for athletes including Gaylord Perry and Billy Kidd, and for sports such as IndyCar racing.[1][5] More recently, Bill Frakes (assisted by David Callow) captured Marion Jones winning the 100 m event at the 2000 Summer Olympics using a strip camera.[1]
Strip photography can be used for artistic effect, which has been done regularly since the 1960s.[6] In addition to sports, early examples include work by Silk and other Life (later Time–Life) photographers on various subjects,[7][better source needed] such as the cover of the Halloween 1960 issue of Life.[1] William Larson pioneered modern artistic uses of strip photography from the late 1960s. Michael Golembewski has been a practitioner of scanography.[citation needed]
More recently, Jay Mark Johnson has used slit cameras for artistic effect.[8] Adam Magyar used a custom "slit scan" camera to record city traffic over time in his panoramic photo series Urban Flow (2006–2009).[9] In his next project, Stainless (2010–2011), Magyar made use of an industrial line scan camera and custom software to capture panoramic photos of moving subway traffic in major metropolitan cities, including New York, Paris and Tokyo. He later included high speed video from the perspective of the moving subway car, which captured exceptional three-dimensional detail of people waiting on the platform over a very small amount of time.[10][non-primary source needed]
See also: Slit-scan photography
In cinematography, strip photography can be used manually as a special effect, assembling a video sequence strip-by-strip, particularly in science-fiction movies of the 1960s through 1980s – see slit-scan photography.
Alternatively, a digital video can be used as the source, producing either a single strip photograph or an entire video; with the advent of consumer video editing, some amateurs have created such videos, from circa 2008.[11][12][13][14] For clarity, assume that the strips are vertical, so they are lined up horizontally; this is commonly done in actual strip photography, due to the frequency of left–right motion in the real world.[15] In the latter case, this corresponds to treating the video as a three-dimensional array and transposing one spatial variable with the temporal variable,[15] transforming
{\displaystyle (x,y,t)}
{\displaystyle (t,y,x).}
Assuming x starts at 0 on the left, as usual, this corresponds to time increasing from left to right; using the opposite convention, with time increasing from right to left, corresponds to adding a reflection, so
{\displaystyle (x,y,t)}
{\displaystyle (t_{\text{max}}-t,y,x),}
which can also be interpreted as rotation in the
{\displaystyle (x,t)}
plane (followed by a translation).
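Both operations above can be sketched with NumPy, assuming a video stored as an array indexed [t, y, x] (the dimensions below are hypothetical):

```python
import numpy as np

# Hypothetical source video: 120 frames, 90 pixels tall, 160 wide.
rng = np.random.default_rng(0)
video = rng.integers(0, 256, size=(120, 90, 160), dtype=np.uint8)

# (x, y, t) -> (t, y, x): swap the time axis with the horizontal axis.
# Each output frame is the history of one vertical strip of the scene,
# and the output width is the original duration in frames.
strip_video = np.swapaxes(video, 0, 2)          # shape (160, 90, 120)

# Time increasing right-to-left is the same transposition followed by
# the reflection t -> t_max - t:
strip_video_reflected = strip_video[:, :, ::-1]
```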
As with static strip photography, videos can be produced both in "tableau" format (conventional almost square aspect ratio), or in "strip" format (very wide), and in fact can have exactly the same dimensions as the input video if input frames = input x-resolution (so the
{\displaystyle (x,y,t)}
array is square in the
{\displaystyle (x,t)}
dimensions); in this case if input and output frame rate are the same, then the input and output videos will have the same duration as well.
Videos in the wide "strip" format can be arbitrarily wide, possibly too wide to fit on a given display. One way to view them is by panning the image horizontally (displaying only part of the image and panning the frame sideways) while looping through the video.[12] Geometrically this corresponds to replicating the output
{\displaystyle (t,y,x)}
array in the third variable, then cutting diagonally. If the frame advances 1 pixel at a time and the output resolution and frame rate are the same as the input, then the entire strip video will be seen and (with technical assumptions on start and end) the output video will be the same resolution and duration as the input. Rather than looping, which introduces a jump, the video can bounce back and forth between endpoints.[12]
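The panning playback described above can be sketched as a window sliding one pixel per output frame across the wide strip image (the helper name and sizes are illustrative):

```python
import numpy as np

def pan_frames(strip_img, frame_w):
    """Yield frame_w-wide views of a wide strip image, advancing the
    window one pixel per output frame (left to right)."""
    width = strip_img.shape[1]
    for x0 in range(width - frame_w + 1):
        yield strip_img[:, x0:x0 + frame_w]

wide = np.arange(40).reshape(4, 10)   # a tiny 4x10 stand-in strip image
frames = list(pan_frames(wide, 7))    # 10 - 7 + 1 = 4 output frames
```

Bouncing back and forth instead of looping would simply iterate the frames forward and then in reverse.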
Peripheral Portrait photography by Andrew Davidhazy of Sergio Valle Duarte
Strip photography dates to early panoramic cameras in the 19th century, from 1843.[6] It was initially used for technical and scientific purposes, with the Italian scientist Ignazio Porro developing a strip-based camera for mapping in 1853; a similar device was developed by the French inventor Charles Chevallier in the same year.[6] Peripheral photography was pioneered by the British Museum for photographs of Greek vases in the late 19th century. The development of aviation allowed strip aerial photography to replace earlier land-based methods; it was notably used during the Palestine campaign (1915–18).[6] Photo finish cameras were used from 1937 onward,[6] and strip photography was used in synchroballistic photography for ballistics research. Artistic uses have occurred since the 1960s, with the pioneering work of George Silk, and have markedly increased since the 1980s, though irregularly, with practitioners often rediscovering the technique independently, unaware of its history.[6] The articles of Andrew Davidhazy from the 1970s have provided both a scientific background and a technical guide for constructing strip cameras and engaging in strip photography.[6]
N700 series Shinkansen pictured with this technique.
^ a b c d e "Strip Tease: An introduction to the strip camera, how Tom Dahlin made his, and how you can too.", Tom Dahlin, SportsShooter, 2008-08-18
^ "School Photo Day". h2g2. BBC. 2012.
^ Radial strips are found in the work of Ansen Seale, starting from his Vortex series (description)
^ Neil Leifer, Sports! features several such images
^ a b c d e f g "How strip-photography complicated the interpretation of the still photographic image", Maarten Vanvolsem
^ Herman, Judith B. (Oct 15, 2012). "A Very Unusual Camera That Emphasizes Time Over Space". Slate. Retrieved 2012-10-16.
^ Hammer, Joshua (January 8, 2014). "Einstein's Camera: How one renegade photographer is hacking the concept of time". Matter. Retrieved 2015-01-26.
^ Magyar, Adam (August 6, 2014). "Adam Magyar at TEDSalon Berlin 2014". TED Ideas. Retrieved 2015-01-26.
^ "Temporal Video Experiment", Peter Marquardt (lastfuture), July 6th, 2011
^ a b c "Temporal Video Experiment – Making Of" Peter Marquardt (lastfuture), July 6th, 2011
^ a b Alex Hunsley. "8/52: Video space-time transposition". Archived from the original on 21 November 2008.
^ "Video space/time rotation", Alex Hunsley (lardus)
^ a b c d Alex Hunsley. "Video space/time transposition". Archived from the original on 2012-01-19. Alt URL
Vanvolsem, Maarten (July 2011). The Art of Strip Photography: Making Still Images with a Moving Camera. Lieven Gevaert Series. Leuven: Leuven University Press. ISBN 978-90-5867-840-9.
"Basics of Strip Photography", Andrew Davidhazy (Articles)
Wikimedia Commons has media related to Strip photography.
"Grand Prix de Circuit de la Seine", photo
"The Art of Strip Photography", Hilde Van Gelde, 2011 May 27
"One year in one image," By Eirik Solheim |
dispersion - Maple Help
Home : Support : Online Help : Mathematics : Discrete Mathematics : Summation and Difference Equations : LREtools : dispersion
compute dispersion of two polynomials
autodispersion
compute self-dispersion of a polynomial
dispersion(p1, p2, n)
dispersion(p1, p2, n, 'maximal')
autodispersion(p1, n)
polynomials in n with any coefficient type
dispersion computes the set of non-negative integers i such that
\mathrm{gcd}\left(\mathrm{LREtools}[\mathrm{shift}]\left(\mathrm{p1},n,i\right),\mathrm{p2}\right)\ne 1
. If there are no such integers, the function returns FAIL. Effectively, the dispersion consists of the non-negative integer shifts that can be added to the indeterminate of p1 so that the shifted polynomial has a common factor with p2.
If any of the polynomials contain parameters, then the returned answer is the generic dispersion, in other words the dispersion of the polynomials obtained by replacing the parameters by random numbers. As such, the answer returned will be subject to specialization problems.
The optional argument 'maximal' can be used to indicate that only the maximal dispersion is wanted. Returns FAIL if there are no such integers.
autodispersion computes dispersion(p1, subs(n=n-1, p1), n). This function is provided because this quantity can be computed more efficiently than by the equivalent call to dispersion.
This notion originated in the works of Abramov. The algorithm used is based on the work of Yiu-Kwong Man and Francis J. Wright. (See References).
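This is not Maple's implementation, but the definition can be sketched in SymPy along the lines of the Man–Wright resultant idea: candidate shifts i are the integer roots of Res_x(p1(x+i), p2(x)), each checked against the gcd condition.

```python
import sympy as sp

def dispersion_set(p1, p2, x):
    """Illustrative sketch: the set of non-negative integers i with
    gcd(p1(x+i), p2(x)) != 1, or None (mirroring Maple's FAIL)."""
    i = sp.Dummy('i')
    # Integer roots of the resultant are the only possible shifts.
    res = sp.resultant(p1.subs(x, x + i), p2, x)
    hits = set()
    for r in sp.roots(sp.Poly(res, i)):
        if r.is_integer and r >= 0 and sp.gcd(p1.subs(x, x + r), p2) != 1:
            hits.add(int(r))
    return hits or None

x = sp.Symbol('x')
print(dispersion_set(x, x + 4, x))         # {4}, matching the first example
print(dispersion_set(x**2 + 2, x - 6, x))  # None, i.e. FAIL
```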
\mathrm{with}\left(\mathrm{LREtools}\right):
\mathrm{dispersion}\left(x,x+4,x\right)
{\textcolor[rgb]{0,0,1}{4}}
\mathrm{dispersion}\left({x}^{2}+2,x-6,x\right)
\textcolor[rgb]{0,0,1}{\mathrm{FAIL}}
\mathrm{dispersion}\left(\left(x-1\right)\left(x-a\right),\left(x-7\right)\left({x}^{5}-3\right),x\right)
\textcolor[rgb]{0,0,1}{\mathrm{FAIL}}
\mathrm{dispersion}\left(\left(x-7\right)\left({x}^{5}-3\right),\left(x-1\right)\left(x-a\right),x\right)
{\textcolor[rgb]{0,0,1}{6}}
\mathrm{dispersion}\left(\left(x-7\right)\left({x}^{5}-3\right),\left(x-1\right)\left(x-a\right),x,'\mathrm{maximal}'\right)
\textcolor[rgb]{0,0,1}{6}
\mathrm{dispersion}\left({x}^{15}-1,{\left(x+1\right)}^{15}-1,x\right)
{\textcolor[rgb]{0,0,1}{1}}
Abramov, S.A. "On the summation of rational functions." USSR Comp. Math. Phys. 11 (1971): 324-330.
Abramov, S.A. "Rational solutions of linear differential and difference equations with polynomial coefficients." USSR Comp. Math. Phys. 29 (1989): 7-12.
Man, Yiu-Kwong, and Wright, Francis J. "Fast Polynomial Dispersion Computation and its Application to Indefinite Summation." Proceedings of ISSAC '94: 175-180 |
A Comprehensive Hazard Assessment of the Caribbean Region | Bulletin of the Seismological Society of America | GeoScienceWorld
Megan Torpey Zimmerman;
Megan Torpey Zimmerman *
AIR Worldwide, Lafayette City Center, Boston, Massachusetts, U.S.A.
Corresponding author: m.zimmerman@air-worldwide.com
Bingming Shen‐Tu;
Bingming Shen‐Tu
Khosrow Shabestari;
Khosrow Shabestari
Mehrdad Mahdyiar
Megan Torpey Zimmerman, Bingming Shen‐Tu, Khosrow Shabestari, Mehrdad Mahdyiar; A Comprehensive Hazard Assessment of the Caribbean Region. Bulletin of the Seismological Society of America 2022;; 112 (2): 1120–1148. doi: https://doi.org/10.1785/0120210157
We present a probabilistic seismic hazard study for the Caribbean (CAR) that integrates global and regional historic earthquake catalogs, a comprehensive fault database, and geodetic data. To account for the heterogeneity of historic earthquake magnitude types (e.g., mb and mL), we developed regression relationships to convert non-moment magnitudes to moment magnitudes (Mw). We used a combination of areal sources and fault sources to model seismicity across the entire CAR domain, capturing hazard from both shallow and deep earthquakes. Fault sources were modeled using both the characteristic earthquake model of Schwartz and Coppersmith (1984) and the Gutenberg and Richter (1954) exponential magnitude–frequency distribution model, accounting for single- and multi-segment rupture scenarios, as well as balancing of seismic moments constrained by kinematic modeling results. Data from a Global Positioning System survey, in conjunction with earthquake information, were used to balance seismic moments for different source zones. We also incorporated time-dependent rupture probabilities for selected faults that have ruptured in recent large earthquakes. The complex tectonics of the CAR and the lack of local strong-motion data necessitate the use of weighted logic trees of the most up-to-date ground motion prediction equations to account for uncertainty. We present our modeling methodology and hazard results for peak ground acceleration at key return periods, and compare them to recently published regional probabilistic seismic hazard analysis studies.
Bubbling location for F-harmonic maps and inhomogeneous Landau–Lifshitz equations | EMS Press
Bubbling location for F-harmonic maps and inhomogeneous Landau–Lifshitz equations
Salah Najib
Let f be a positive smooth function on a closed Riemann surface (M,g). The f-energy of a map u from M to a Riemannian manifold (N,h) is defined by
E_f(u)=\int_M f|\nabla u|^2\, dV_g.
In this paper, we study the blow-up properties of Palais–Smale sequences for E_f. We show that, if a Palais–Smale sequence is not compact, then it must blow up at some critical points of f. As a consequence, if a solution of the inhomogeneous Landau–Lifshitz system
u_t=u\times\tau_f(u)+\tau_f(u), \quad u: M\rightarrow S^2,
blows up at time \infty, then the blow-up points must be critical points of f.
Salah Najib, Pigong Han, Bubbling location for F-harmonic maps and inhomogeneous Landau–Lifshitz equations. Comment. Math. Helv. 81 (2006), no. 2, pp. 433–448
Computer science - 3D Computer
Computer science is the study of processes that interact with data and that can be represented as data in the form of programs. It enables the use of algorithms to manipulate, store, and communicate digital information. A computer scientist studies the theory of computation and the practice of designing software systems.Its fields can be divided into theoretical and practical disciplines. Computational complexity theory is highly abstract, while computer graphics emphasizes real-world applications. Programming language theory considers approaches to the description of computational processes, while computer programming itself involves the use of programming languages and complex systems. Human–computer interaction considers the challenges in making computers useful, usable, and accessible.
Study of the theoretical foundations of information and computation
Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623.[4] In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner.[5] He may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry[note 1] when he released his simplified arithmometer, which was the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine.[6] He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer".[7] "A crucial step was the adoption of a punched card system derived from the Jacquard loom"[7] making it infinitely programmable.[note 2] In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first computer program.[8] Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business[9] to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as "Babbage's dream come true".[10]
During the 1940s, as new and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors.[11] As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City. The renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world.[12] Ultimately, the close relationship between IBM and the university was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946.[13] Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s.[14][15] The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science department in the United States was formed at Purdue University in 1962.[16] Since practical computers became available, many applications of computing have become distinct areas of study in their own rights.
Computer performance analysis is the study of work flowing through computers with the general goals of improving throughput, controlling response time, using resources efficiently, eliminating bottlenecks, and predicting performance under anticipated peak loads.[51] Benchmarks provide a method of comparing the performance of various subsystems across different chip/system architectures.
Functional programming, a style of building the structure and elements of computer programs that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It is a declarative programming paradigm, which means programming is done with expressions or declarations instead of statements.
Imperative programming, a programming paradigm that uses statements that change a program's state. In much the same way that the imperative mood in natural languages expresses commands, an imperative program consists of commands for the computer to perform. Imperative programming focuses on describing how a program operates.
Object-oriented programming, a programming paradigm based on the concept of "objects", which may contain data, in the form of fields, often known as attributes; and code, in the form of procedures, often known as methods. A feature of objects is that an object's procedures can access and often modify the data fields of the object with which they are associated. Thus Object-oriented computer programs are made out of objects that interact with one another.
Institute of Electrical and Electronics Engineers (IEEE) produces over 30% of the world's literature in the electrical and electronics engineering and computer science fields, publishing well over 100 peer-reviewed journals.
This article uses material from the Wikipedia article "Computer science", which is released under the Creative Commons Attribution-Share-Alike License 3.0. There is a list of all authors in Wikipedia |
Series Active Variable Geometry Suspension Robust Control Based on Full-Vehicle Dynamics | J. Dyn. Sys., Meas., Control. | ASME Digital Collection
Series Active Variable Geometry Suspension Robust Control Based on Full-Vehicle Dynamics
e-mail: cheng.cheng12@imperial.ac.uk
e-mail: s.evangelou@imperial.ac.uk
Contributed by the Dynamic Systems Division of ASME for publication in the JOURNAL OF DYNAMIC SYSTEMS, MEASUREMENT,AND CONTROL. Manuscript received February 21, 2018; final manuscript received November 22, 2018; published online January 14, 2019. Assoc. Editor: Shankar Coimbatore Subramanian.
Cheng, C., and Evangelou, S. A. (January 14, 2019). "Series Active Variable Geometry Suspension Robust Control Based on Full-Vehicle Dynamics." ASME. J. Dyn. Sys., Meas., Control. May 2019; 141(5): 051002. https://doi.org/10.1115/1.4042133
This paper demonstrates the ride comfort and road holding performance enhancement of the new road vehicle series active variable geometry suspension (SAVGS) concept using an H∞ control technique. In contrast with previously reported work that considered simpler quarter-car models, the present work designs and evaluates control systems using full-car dynamics, thereby taking into account the coupled responses from the four independently actuated corners of the vehicle. Thus, the study utilizes a nonlinear full-car model that accurately represents the dynamics and geometry of a high-performance car with the new double-wishbone active suspension concept. The robust H∞ control design exploits the linearized dynamics of the nonlinear model at a trim state, and it is formulated as a disturbance rejection problem that aims to reduce the body vertical accelerations and tire deflections while guaranteeing operation inside the existing physical constraints. The proposed controller is installed on the nonlinear full-car model, and its performance is examined in the frequency and time domains for various operating maneuvers, with respect to the conventional passive suspension and the previously designed SAVGS H∞ control schemes based on simpler vehicle models.
Control equipment, Corners (Structural elements), Dynamics (Mechanics), Geometry, Roads, Tires, Vehicles, Deflection, Wheels, Design, Robust control, Actuators
Study Of Compounds, Studymaterial: ICSE Class 10 CHEMISTRY, Concise Chemistry 10 - Meritnation
Evolved gas (i.e., hydrogen chloride gas) can be dried by passing it through concentrated H2SO4. The dry hydrogen chloride gas is collected by the upward displacement of air, since it is heavier than air; it cannot be collected over water because it is highly soluble in water.
The composition of ammonia was determined by Claude Berthollet in 1785.
In the free state, ammonia is found in air and natural water.
In the combined state, it is obtained by destructive distillation of coal or wood. Also, it is found on the sides of craters and fissures of lava of volcanoes.
It has a trigonal pyramidal structure with nitrogen atom at the apex.
It has three bond pairs and one lone pair of electrons.
Ammonium salts on warming with caustic alkali produce salt, water and ammonia gas as shown in the reactions below:
2{\mathrm{NH}}_{4}\mathrm{Cl} + \mathrm{Ca}{\left(\mathrm{OH}\right)}_{2} \stackrel{\mathrm{\Delta}}{\to} {\mathrm{CaCl}}_{2} + 2{\mathrm{H}}_{2}\mathrm{O} + 2{\mathrm{NH}}_{3}
{\left({\mathrm{NH}}_{4}\right)}_{2}{\mathrm{SO}}_{4} + 2\mathrm{NaOH} \stackrel{\mathrm{\Delta}}{\to} {\mathrm{Na}}_{2}{\mathrm{SO}}_{4} + 2{\mathrm{H}}_{2}\mathrm{O} + 2{\mathrm{NH}}_{3}
Ammonia is formed by the decay of nitrogenous organic matter such as urea.
On a small scale, ammonia is obtained from ammonium salts, which decompose when treated with caustic soda or lime. This reaction forms a metal salt, water and ammonia gas.
In this method, a mixture of ammonium chloride and dry calcium hydroxide is placed in a round-bottomed flask clamped to an iron stand so that its neck tilts downwards. This prevents water vapour formed by condensation from trickling back into the hot flask.
The ammonia gas contains water vapour as an impurity. The water vapour has to be removed because ammonia gas is extremely soluble in water, so the gas is passed through a drier containing quicklime.
The vapour density of ammonia is 8.5. Therefore, it is lighter than air. Thus, it is collected by downward displacement of the air. Also, since ammonia gas is extremely soluble in water, it cannot be collected over water.
Ammonia can also be prepared by treating metal nitrides, such as those of magnesium, sodium and aluminium, with warm water.
In this method, magnesium nitride is placed in a conical flask. Warm water is allowed to trickle on it. This evolves moist ammonia gas.
The ammonia gas contains water vapour as an impurity. The water vapour has to be removed because ammonia gas is extremely soluble in water, so the gas is passed through a drier containing quicklime.
The vapour density of ammonia is 8.5. Therefore, it is lighter than air. Thus, it is collected by downward displacement of air. Also, since ammonia gas is extremely soluble in water, it cannot be collected over water.
The following reactions take place when water reacts with sodium nitride and magnesium nitride:
{\mathrm{Na}}_{3}\mathrm{N} + 3{\mathrm{H}}_{2}\mathrm{O} \to 3\mathrm{NaOH} + {\mathrm{NH}}_{3}
{\mathrm{Mg}}_{3}{\mathrm{N}}_{2} + 6{\mathrm{H}}_{2}\mathrm{O} \to 3\mathrm{Mg}{\left(\mathrm{OH}\right)}_{2} + 2{\mathrm{NH}}_{3}
By dissolving ammonia in water, an aqueous solution of ammonia is obtained. Take water in a container and dip a small portion of the mouth of the funnel in water.
When ammonia dissolves in water faster than it is produced, the pressure above the water level inside the funnel decreases, and water rushes up into the funnel. As the water rises, it loses contact with the rim of the funnel and falls back. The funnel comes in contact with the water again as the water is pushed down by the ammonia produced. This is how ammonia dissolves in water without back-suction.
On large scale, ammonia is obtained by Haber’s process.
A mixture of hydrogen and nitrogen gases in the ratio 3:1 is taken in the compressor and compressed to a pressure of 200 to 900 atm. The compressed mixture is passed over a heated catalyst in the catalyst chamber, where it is maintained at a temperature between 450 and 500 °C. The hot mixture of ammonia gas and unreacted hydrogen and nitrogen coming out of the catalyst chamber is led to cooling pipes in the condenser.
The reaction is reversible and exothermic in nature.
A catalyst of finely divided iron with small amounts of molybdenum as a promoter is used to increase the rate of attainment of equilibrium.
High pressure favours the formation of NH3.
Optimum condition :
Pressure = 200 × 10^5 Pa (about 200 atm)
Temperature ∼ 700 K
Finely divided iron as catalyst and molybdenum or Al2O3 as promoter increase the rate of reaction
The ammonia gas produced is collected either by liquefaction (ammonia has a higher boiling point than nitrogen and hydrogen, so it condenses easily) or by absorption in water (ammonia is highly soluble in water). The unused mixture of hydrogen and nitrogen gases is recompressed and recycled into the catalyst chamber.
1. It is a colourless non-poisonous gas with a characteristic pungent odour. It is lighter than air and extremely soluble in water because of hydrogen bonding.
2. It can be liquefied when cooled to 10 °C under a pressure of 6 atm. It forms white crystals on cooling.
3. Ammonia has a high melting point, boiling point and latent heat of vaporisation and fusion because of its ability to form hydrogen bonds with itself. Since nitrogen has the third-highest electronegativity, ammonia can form hydrogen bonds with itself and also with water.
4. It has basic nature because of the presence of a lone pair of electrons.
5. It acts as a reducing agent
6. It is lighter than air with the vapour density of 8.5.
7. Inhaling this gas causes irritation to the eyes and respiratory system.
8. It is highly soluble in water.
1. Due to high dielectric constant, ammonia is a good solvent for ionic compounds.
2. It is used as a cleaning agent for removing grease in dry cleaning.
3. It is used in the manufacturing of artificial silk.
4. It is used as a laboratory reagent.
Dry ammonia gas (ga… |
1 June 2015 Expanders with respect to Hadamard spaces and random graphs
It is shown that there exist a sequence of 3-regular graphs \left\{{G}_{n}\right\}_{n=1}^{\infty } and a Hadamard space X such that \left\{{G}_{n}\right\}_{n=1}^{\infty } forms an expander sequence with respect to X, yet random regular graphs are not expanders with respect to X. This answers a question of the second author and Silberman. The graphs \left\{{G}_{n}\right\}_{n=1}^{\infty } are also shown to be expanders with respect to random regular graphs, yielding a deterministic sublinear-time constant-factor approximation algorithm for computing the average squared distance in subsets of a random graph. The proof uses the Euclidean cone over a random graph, an auxiliary continuous geometric object that allows for the implementation of martingale methods.
Manor Mendel. Assaf Naor. "Expanders with respect to Hadamard spaces and random graphs." Duke Math. J. 164 (8) 1471 - 1548, 1 June 2015. https://doi.org/10.1215/00127094-3119525
Received: 4 July 2013; Revised: 17 July 2014; Published: 1 June 2015
Secondary: 05C12 , 05C50 , 46B85
Keywords: bi-Lipschitz embeddings , CAT(0) spaces , Euclidean cones , Expanding graphs , Random graphs
Time-Weighted Rate of Return Intro
Formula for TWR
How to Calculate TWR
What Does TWR Tell You?
Examples of Using the TWR
Difference Between TWR and ROR
Limitations of the TWR
Time-Weighted Rate of Return – TWR
What is Time-Weighted Rate of Return – TWR?
The time-weighted rate of return (TWR) is a measure of the compound rate of growth in a portfolio. The TWR measure is often used to compare the returns of investment managers because it eliminates the distorting effects on growth rates created by inflows and outflows of money. The time-weighted return breaks up the return on an investment portfolio into separate intervals based on whether money was added or withdrawn from the fund.
The time-weighted return measure is also called the geometric mean return, which is a complicated way of stating that the returns for each sub-period are multiplied by each other.
Use this formula to determine the compounded rate of growth of your portfolio holdings.
\begin{aligned}&TWR = \left [(1 + HP_{1})\times(1 + HP_{2})\times\dots\times(1 + HP_{n}) \right ] - 1\\&\textbf{where:}\\&TWR = \text{ Time-weighted return}\\&n = \text{ Number of sub-periods}\\&HP =\ \dfrac{\text{End Value} - (\text{Initial Value} + \text{Cash Flow})}{(\text{Initial Value} + \text{Cash Flow})}\\&HP_{n} = \text{ Return for sub-period }n\end{aligned}
Calculate the rate of return for each sub-period by subtracting the beginning balance of the period from the ending balance of the period and divide the result by the beginning balance of the period.
Create a new sub-period for each period that there is a change in cash flow, whether it's a withdrawal or deposit. You'll be left with multiple periods, each with a rate of return. Add 1 to each rate of return, which simply makes negative returns easier to calculate.
Multiply the rates of return for each sub-period by each other, then subtract 1 from the result to obtain the TWR.
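The three steps above can be sketched directly in Python (a minimal illustration; the function names are mine, and the figures are the ones used in the Investor 1 example later in this article):

```python
def holding_period_return(initial_value, end_value, cash_flow=0.0):
    """HP = (End Value - (Initial Value + Cash Flow)) / (Initial Value + Cash Flow)."""
    base = initial_value + cash_flow
    return (end_value - base) / base

def time_weighted_return(hp_returns):
    """TWR = [(1 + HP1) * (1 + HP2) * ... * (1 + HPn)] - 1."""
    product = 1.0
    for hp in hp_returns:
        product *= 1.0 + hp
    return product - 1.0

# $1M grows to $1,162,484; a $100,000 deposit then starts a new
# sub-period, which ends the year at $1,192,328.
hp1 = holding_period_return(1_000_000, 1_162_484)           # ~16.25%
hp2 = holding_period_return(1_162_484, 1_192_328, 100_000)  # ~-5.56%
print(round(time_weighted_return([hp1, hp2]) * 100, 2))     # -> 9.79
```

Note how the deposit enters only through the new sub-period's starting base; the linked product is unaffected by the size of the cash flow itself.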
It can be difficult to determine how much money was earned on a portfolio when there are multiple deposits and withdrawals made over time. Investors can't simply subtract the beginning balance, after the initial deposit, from the ending balance since the ending balance reflects both the rate of return on the investments and any deposits or withdrawals during the time invested in the fund. In other words, deposits and withdrawals distort the value of the return on the portfolio.
The time-weighted return breaks up the return on an investment portfolio into separate intervals based on whether money was added or withdrawn from the fund. The TWR provides the rate of return for each sub-period or interval that had cash flow changes. By isolating the returns that had cash flow changes, the result is more accurate than simply taking the beginning balance and ending balance of the time invested in a fund. The time-weighted return multiplies the returns for each sub-period or holding-period, which links them together showing how the returns are compounded over time.
When calculating the time-weighted rate of return, it is assumed that all cash distributions are reinvested in the portfolio. Daily portfolio valuations are needed whenever there is external cash flow, such as a deposit or a withdrawal, which would denote the start of a new sub-period. In addition, sub-periods must be the same to compare the returns of different portfolios or investments. These periods are then geometrically linked to determine the time-weighted rate of return.
Because investment managers that deal in publicly traded securities do not typically have control over fund investors' cash flows, the time-weighted rate of return is a popular performance measure for these types of funds as opposed to the internal rate of return (IRR), which is more sensitive to cash-flow movements.
The time-weighted return (TWR) multiplies the returns for each sub-period or holding-period, which links them together showing how the returns are compounded over time.
The time-weighted return (TWR) helps eliminate the distorting effects on growth rates created by inflows and outflows of money.
As noted, the time-weighted return eliminates the effects of portfolio cash flows on returns. To see how this works, consider the following two investor scenarios:
Investor 1 invests $1 million into Mutual Fund A on December 31. On August 15 of the following year, their portfolio is valued at $1,162,484. At that point (August 15), they add $100,000 to Mutual Fund A, bringing the total value to $1,262,484.
By the end of the year, the portfolio has decreased in value to $1,192,328. The holding-period return for the first period, from December 31 to August 15, would be calculated as:
Return = ($1,162,484 - $1,000,000) / $1,000,000 = 16.25%
The holding-period return for the second period, from August 15 to December 31, would be calculated as:
Return = ($1,192,328 - ($1,162,484 + $100,000)) / ($1,162,484 + $100,000) = -5.56%
The second sub-period is created following the $100,000 deposit so that the rate of return is calculated reflecting that deposit with its new starting balance of $1,262,484 or ($1,162,484 + $100,000).
The time-weighted return for the two time periods is calculated by multiplying each subperiod's rate of return by each other. The first period is the period leading up to the deposit, and the second period is after the $100,000 deposit.
Time-weighted return = (1 + 16.25%) x (1 + (-5.56%)) - 1 = 9.79%
Investor 2 invests $1 million into Mutual Fund A on December 31. On August 15 of the following year, their portfolio is valued at $1,162,484. At that point (August 15), they withdraw $100,000 from Mutual Fund A, bringing the total value down to $1,062,484.
By the end of the year, the portfolio has decreased in value to $1,003,440. The holding-period return for the second period, from August 15 to December 31, would be calculated as:
Return = ($1,003,440 - ($1,162,484 - $100,000)) / ($1,162,484 - $100,000) = -5.56%
The time-weighted return over the two time periods is calculated by multiplying or geometrically linking these two returns:
Time-weighted return = (1 + 16.25%) x (1 + (-5.56%)) - 1 = 9.79%
As expected, both investors received the same 9.79% time-weighted return, even though one added money and the other withdrew money. Eliminating the cash flow effects is precisely why time-weighted return is an important concept that allows investors to compare the investment returns of their portfolios and any financial product.
However, a simple rate of return calculation does not account for the cash flow differences in the portfolio, whereas the TWR accounts for all deposits and withdrawals in determining the rate of return.
Due to cash flows moving in and out of funds on a daily basis, the TWR can be extremely cumbersome to calculate and track. It's best to use an online calculator or computational software. Another often-used rate of return calculation is the money-weighted rate of return.
EUDML | On powers of p-hyponormal and log-hyponormal operators.
On powers of p-hyponormal and log-hyponormal operators.
Furuta, Takayuki; Yanagida, Masahiro
Furuta, Takayuki, and Yanagida, Masahiro. "On powers of p-hyponormal and log-hyponormal operators.." Journal of Inequalities and Applications [electronic only] 5.4 (2000): 367-380. <http://eudml.org/doc/121179>.
@article{Furuta2000,
author = {Furuta, Takayuki, Yanagida, Masahiro},
keywords = {p-hyponormal operators; log-hyponormal operators; p-hyponormal operators},
title = {On powers of p-hyponormal and log-hyponormal operators.},
AU - Furuta, Takayuki
TI - On powers of p-hyponormal and log-hyponormal operators.
KW - p-hyponormal operators; log-hyponormal operators; p-hyponormal operators
p-hyponormal operators, log-hyponormal operators, p-hyponormal operators
Articles by Furuta
Articles by Yanagida |
To Emma Darwin [9 May 1842]
I am anxious for the post today to hear how you are & how the chicks are.— Yesterday felt quite a blank from not hearing— I hope your teeth have not been plaguing you & poor dear old Doddums temper I hope to hear is better.—
On Saturday I went in City & did a deal of Printing business— I came back gloomy & tired— the government money has gone much quicker than I thought & the expences of the coral-volume are greater being, as far as we can judge from 130£ to 140£.— How I am publish the remainder I know not, without taking 2 or 300£ out of the funds—& what will you say to that.— I am stomachy & be blue deviled— I am daily growing very very old, very very cold & I daresay very sly.—1 I will give you statistics of time spent on my coral-volume, not including all the work on board the Beagle— I commenced it 3 years & 7 months ago, & have done scarcely anything besides— I have actually spent 20 months out of this period on it! & nearly all the remainder sickness & visiting!!!
Catty stops till Saturday; notwithstanding all my boasting of not caring for solitude, I believe I should have been dreary without her.— She went to Foundling Church to hear Bishop Thirlwall preach,2 wh. lasted till
\frac{1}{2}
past two! owing to music & the immortal Fanny stood it all notwithstanding extreme crowd & closeness— Cath. liked sermon & Fanny did not, & I feel sure they differed more than they naturally would have done, to spite each other for their difference over Mr. Scot.—3 C. drank tea in evening there & had very pleasant evening—
I am very glad you have not missed seeing the Langtons. when do they go? I hope I shall see them & the little Doddy Secundus.—4
Ask Brodie where is Key of G. Square?5
The colourist has invented a clever plan to save me looking over the maps.— he counts the circles of each separate colour6 & so necessarily detects every error.—
Yesterday I went at 2 oclock & an hour’s hard talk with Horner on affairs of Geolog Soc & it quite knocked me up & this makes my letter rather blue in its early part.—
After long watching the Postman your letter has at last arrived. you cannot tell how much I enjoy hearing about you all.— How strange poor old Doddy seems to be— I grieve he does not get better; I agree with you it wd be very good to try calomel.— How astonishing your walking round Birth Hill, I believe now the country will do you good— What a nice account you give of Charlottes tranquil maternity— I wish the Baby was livlier,—for liveliness is an extreme charm in bab-chicks—
good bye.— I long to kiss Annie’s botty-wotty | C.D.—
An allusion to one of Harry Wedgwood’s verses—an ‘epitaph’ on Susan Darwin (Emma Darwin (1915) 2: 70 n.): Here the bones of Susan lie, She was old and cold and sly.
Connop Thirlwall.
The Reverend Alexander John Scott, a highly independent and successful preacher to whom Fanny Wedgwood was devoted. See Emma Darwin (1915) 1: 234 n.; Arbuckle 1983, p. 13 n. 18.
Edmund Langton.
The Darwin children’s nurse, Jessie Brodie, took the children for outings to nearby Gordon Square, where the garden was enclosed and residents required a gate key to enter the gardens.
Four colours were used for the maps in Coral reefs representing three types of reefs and active volcanoes (see letter to C. Lyell, 6 [July 1841]).
Arbuckle, Elisabeth Sanders, ed. 1983. Harriet Martineau’s letters to Fanny Wedgwood. Stanford, California: Stanford University Press.
Is "stomachy and be-blue-devilled" because of costs of publishing [Zoology and Coral reefs]. Wonders how the remainder [of the Zoology and Geology of "Beagle"] can be published without taking £200 or £300 out of their personal funds. |
EUDML | Invariants of B-type links via an extension of the Kauffman bracket.
Invariants of B-type links via an extension of the Kauffman bracket.
Kulish, P.P.; Nikitin, A.M.
Kulish, P.P., and Nikitin, A.M.. "Invariants of B-type links via an extension of the Kauffman bracket.." Zapiski Nauchnykh Seminarov POMI 266 (2000): 107-130. <http://eudml.org/doc/230668>.
@article{Kulish2000,
author = {Kulish, P.P., Nikitin, A.M.},
keywords = {Kauffman bracket},
title = {Invariants of B-type links via an extension of the Kauffman bracket.},
AU - Kulish, P.P.
AU - Nikitin, A.M.
TI - Invariants of B-type links via an extension of the Kauffman bracket.
KW - Kauffman bracket
Kauffman bracket
Articles by Kulish
Articles by Nikitin |
EUDML | Invariant operations on manifolds with almost Hermitian symmetric structures. II: Normal Cartan connections.
Invariant operations on manifolds with almost Hermitian symmetric structures. II: Normal Cartan connections.
Čap, A., Slovák, J., and Souček, V.. "Invariant operations on manifolds with almost Hermitian symmetric structures. II: Normal Cartan connections.." Acta Mathematica Universitatis Comenianae. New Series 66.2 (1997): 203-220. <http://eudml.org/doc/123008>.
@article{Čap1997,
author = {Čap, A., Slovák, J., Souček, V.},
keywords = {Cartan connections; AHS-structure; almost Hermitian symmetric structures; prolongation of G-structures; natural operators; prolongation of G-structures},
title = {Invariant operations on manifolds with almost Hermitian symmetric structures. II: Normal Cartan connections.},
AU - Čap, A.
AU - Slovák, J.
AU - Souček, V.
TI - Invariant operations on manifolds with almost Hermitian symmetric structures. II: Normal Cartan connections.
KW - Cartan connections; AHS-structure; almost Hermitian symmetric structures; prolongation of G-structures; natural operators; prolongation of G-structures
Lukáš Krump, Construction of BGG sequences for AHS structures
Cartan connections, AHS-structure, almost Hermitian symmetric structures, prolongation of G-structures, natural operators, prolongation of G-structures
Articles by Čap
Articles by Slovák
Articles by Souček |
EUDML | Harmonic φ-morphisms.
Harmonic φ-morphisms.
Bejan, C. L., and Benyounes, M.. "Harmonic φ-morphisms.." Beiträge zur Algebra und Geometrie 44.2 (2003): 309-321. <http://eudml.org/doc/231556>.
@article{Bejan2003,
author = {Bejan, C. L., Benyounes, M.},
keywords = {harmonic morphism; harmonic map; complete lift metric; tangent bundle; φ-morphism; φ-morphism},
title = {Harmonic φ-morphisms.},
AU - Bejan, C. L.
AU - Benyounes, M.
TI - Harmonic φ-morphisms.
KW - harmonic morphism; harmonic map; complete lift metric; tangent bundle; φ-morphism; φ-morphism
harmonic morphism, harmonic map, complete lift metric, tangent bundle, \varphi-morphism, \phi-morphism
Differential geometric aspects of harmonic maps
Articles by Bejan
Articles by Benyounes |
EUDML | Fixed points of asymptotically regular nonexpansive mappings on nonconvex sets.
Fixed points of asymptotically regular nonexpansive mappings on nonconvex sets.
Kaczor, Wiesława
Kaczor, Wiesława. "Fixed points of asymptotically regular nonexpansive mappings on nonconvex sets.." Abstract and Applied Analysis 2003.2 (2003): 83-91. <http://eudml.org/doc/50616>.
@article{Kaczor2003,
author = {Kaczor, Wiesława},
keywords = {Banach space; fixed-point property; Goebel-Schöneberg theorem; L_p-spaces; asymptotic regularity; L_p-spaces},
title = {Fixed points of asymptotically regular nonexpansive mappings on nonconvex sets.},
AU - Kaczor, Wiesława
TI - Fixed points of asymptotically regular nonexpansive mappings on nonconvex sets.
KW - Banach space; fixed-point property; Goebel-Schöneberg theorem; L_p-spaces; asymptotic regularity; L_p-spaces
Banach space, fixed-point property, Goebel-Schöneberg theorem, {L}_{p}-spaces, asymptotic regularity, {L}_{p}-spaces
Articles by Kaczor |
Negative interest rates - Woodcoin.info
Posted on June 8, 2015 August 21, 2019 by funkenstein
Echonomists have been very busy lately talking about how negative interest rates have arrived, and that this means something or other and has totally surprised someone or other. If you are confused by any of this, allow me to spell it out for you.
In point of fact, all fiat currency is issued at an effective interest rate of negative infinity. What someone charges you to borrow currency from them is another matter. When you borrow fiat, typically it is from somebody who borrowed from somebody else who also borrowed. How much you pay for the new units roughly defines your status in a hierarchy of recipients of these units. This hierarchy has also been described as "interest rate apartheid".
An example or two might as usual help elucidate things further.
If we browse the darknet for newly printed euros, shipped directly to your home, we find several purveyors. Because we expect low quality (if anything at all) from these folks, they offer us the notes at a discount. While a standard euro might go for four or five millies, these euros sell for one or two millies each. And why not? After all, these notes cost their manufacturers (a.k.a. counterfeiters) almost nothing to issue. It makes sense that they can still make some profit even while passing a discount on to us. Thus for a shipment of 100 euros to our house, we repay with 50 euros. The repayment in full, for an amount less than the original amount of the loan, represents a payment with an effective negative interest rate. The business described here is exactly what is happening on a larger scale with all fiat currency purveyors. They can still make really good money with a negative interest rate, because they are paying an original rate of negative infinity percent. Why would anyone be surprised that they can charge a negative interest rate?
Recall the formula for the amount owed on a loan of principal P compounded continually:
A=Pe^{rt}
Where A is the amount owed, P is the principal, r is the interest rate, and t is the time elapsed. So for A to be zero at any time t > 0 and for any principal P, that is, for you to owe nothing at all, r must be negative infinity.
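A quick numeric check of this formula (the principal, rates, and horizon below are arbitrary): a positive r grows the amount owed, a negative r shrinks it toward zero, and only at r = −∞ is the repayment obligation exactly nothing.

```python
import math

def amount_owed(principal, rate, years):
    """A = P * e^(r*t), continuously compounded."""
    return principal * math.exp(rate * years)

print(round(amount_owed(100, 0.05, 10), 2))   # -> 164.87 (5% for 10 years)
print(round(amount_owed(100, -0.10, 10), 2))  # -> 36.79 (a negative rate)
print(amount_owed(100, float('-inf'), 10))    # -> 0.0 (gratis)
```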
So we can see, a negative interest rate is not so strange after all. It's what the sun charges you for UV light. An infinitely negative interest rate is how we say gratis in the language of usury. And so we see clearly that all fiat currency is in fact issued at an infinitely negative interest rate.
Unfortunately this post is rather useless in the department of actionable advice. If somebody is charging you 20% or 200% interest, and you are under contract, you are still going to be under that contract. You will pay up or go bankrupt. If somebody is charging you -10% and you are under contract, you will also have to pay up. How much are people issuing to themselves at
r=-\infty
? I don't have a clue. However it looks like we will find out eventually. That is another tale for another day.
Previous post: Satoshi on academia
Next post: The Stagularity Cometh
2 Replies to “Negative interest rates”
Pingback: The myth of debt based currency – Woodcoin.info
Cory Trevorton says:
I think anything lower than fed rate is considered illegal. Not that it doesn't happen, but just sayin - it ain't for small fry.
Smith and Eisner 2008:Dependency parsing by belief propagation - Cohen Courses
Smith and Eisner 2008:Dependency parsing by belief propagation
4.1 Graphical Models for Dependency Parsing
4.2 Training and Decoding using BP
6.1 Efficiency Evaluation: Comparing to Dynamic Programming (DP)
6.2 Accuracy Evaluation: Higher-Order Non-Projective Parsing
6.3 Accuracy Evaluation: Influences of Global Hard Constraints
Smith, David A. and Jason Eisner (2008). Dependency parsing by belief propagation. Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 145-156, Honolulu, October.
Smith and Eisner 2008
This is a crucial paper that presents a loopy Belief Propagation (BP) method for Dependency Parsing, which can also be easily applied to general problems in Named Entity Recognition, Word Alignment, Shallow Parsing, and Constituent Parsing. The paper formulates the dependency parsing problem as a learning and decoding problem on a graphical model with global constraints. The authors show that BP needs only
{\displaystyle O(n^{3})}
time to perform approximate inference on a graphical model, with second-order features and latent variables incorporated.
This paper first introduces the method of formulating the dependency parsing problem as training and decoding on Markov random fields, then discusses the use of Belief Propagation to lower asymptotic runtime during training and decoding. In this section, we will first summarize the method they use to formulate the problem, then briefly describe the method of using BP for this task. Regarding the detailed BP method for general probabilistic graphical models, please refer to the method page: Belief Propagation.
Graphical Models for Dependency Parsing
The input of the graph will be fully observed word sequence
{\displaystyle \mathbf {W} ={W_{0},W_{1},W_{2},...,W_{n}}}
. The corresponding parts-of-speech tags can be written as
{\displaystyle \mathbf {T} ={T_{0},T_{1},T_{2},...,T_{n}}}
. The dependency arcs between the words i and j can be denoted by
{\displaystyle L_{ij}=true}
, where i represents the parent node, and j as the child. Then, a probability distribution of the assignment of all variables can be represented using the Markov random field (MRF).
{\displaystyle p(\Lambda ){\overset {\underset {\mathrm {def} }{}}{=}}{\frac {1}{Z(\theta )}}\prod _{m}F_{m}(\Lambda )}
Here, we can say that there exists a function that consults a subset of
{\displaystyle \Lambda }
, and the function has either unary, binary, ternary or global degrees. Given a learned parameter
{\displaystyle \theta }
and a factor function
{\displaystyle F_{m}(\Lambda )}
, we can decode the possible dependency parses.
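As a toy sanity check of this factorization (not the paper's parser: the three boolean link variables and the potential values below are invented for illustration), the partition function and the distribution can be computed by brute-force enumeration:

```python
import itertools

# Toy MRF: three boolean "link" variables, one unary factor per variable,
# and one pairwise factor. Potential values are invented for illustration.
def unary(v):
    return 2.0 if v else 1.0          # favours a link being present

def pairwise(a, b):
    return 0.5 if (a and b) else 1.0  # discourages these two links co-occurring

def score(assign):
    a, b, c = assign                  # product of all factors F_m(assign)
    return unary(a) * unary(b) * unary(c) * pairwise(a, b)

assignments = list(itertools.product([False, True], repeat=3))
Z = sum(score(s) for s in assignments)      # partition function Z(theta)
p = {s: score(s) / Z for s in assignments}  # p(Lambda) = (1/Z) prod_m F_m(Lambda)
print(Z)  # -> 21.0
```

Brute force is exponential in the number of variables, which is exactly why the paper resorts to (loopy) BP for approximate inference on realistic graphs.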
Training and Decoding using BP
The basic idea for training is to train a weight vector
{\displaystyle \theta }
that maximizes the log-likelihood of the training data. Note that the difficulty here is to calculate the gradient of the objective function
{\displaystyle \nabla _{\theta }\log Z=\sum _{m}E_{p(\Lambda )}[\nabla _{\theta }F_{m}(\Lambda )]}
Remember that in Belief Propagation, we use an estimate of the marginal distribution, which can be viewed as the belief of the factor
{\displaystyle F_{m}}
, given W and θ. The authors then use stochastic gradient descent with this estimate of the gradient, rather than computing the objective function exactly.
BP is used to compute the conditional probability that a link
{\displaystyle L_{ij}}
is present. The basic idea here is to use BP to propagate the local factors in the graph. Note that the authors have also employed global factors in this work. So, the trick here is to run backward algorithms from marginal distributions (this is very similar to the forward-backward algorithm for extracting posteriors from hidden Markov models). The key component of the decoding task here is to find a single assignment that satisfies the parse TREE constraint. A simple approach would be returning the 1-best tree, the k-best trees, or taking samples. The authors instead take the full marginal distributions at the links and feed them to a maximum spanning tree algorithm. The entire process resembles minimum Bayes risk parsing with a dependency loss function.
The authors used three languages from the 2007 CoNLL Dependency Parsing Shared Task. The English data were converted from the Penn Treebank, with around 1% of links crossing other links. The Danish data contained slightly more crossing arcs (3% in total). Compared to these two languages, Dutch was the most non-projective language (11%).
In the experiment section of this paper, the authors conducted three major experiments. First, they explored whether BP can beat Dynamic Programming (DP), in terms of the efficiency. Secondly, they looked at the non-projective parsing problem, and checked whether high-order features were useful, and how BP could make it tractable. Last but not the least, they have also examined whether global constraints contribute to the accuracy of dependency parsing, under this proposed BP framework. To precisely present the original results in the following subsections, we use the original figures and tables taken from this paper.
Efficiency Evaluation: Comparing to Dynamic Programming (DP)
In Figures 2 and 3, it is clear that BP is much faster than DP under various settings. Comparing Figure 2 with Figure 3 shows that adding a higher order (a more complex model) widens the gap between BP and DP. Figure 4 shows the speed vs. error trade-off: 5 iterations of BP reach the best speed with the lowest error rate. However, note that this comparison was done in a lower-order setting, where the DP approach was still relatively fast.
Accuracy Evaluation: Higher-Order Non-Projective Parsing
In this experiment, the authors attempted to examine whether adding more higher-order features can improve parsing accuracy under the proposed BP framework. Table 2 shows that adding the "NoCross", "Grand", and "ChildSEQ" features makes the system significantly outperform the first-order baseline. Table 2 also shows that even though a hill-climbing variant of DP can improve over the standard DP, running non-projective BP is much faster and slightly more accurate.
Accuracy Evaluation: Influences of Global Hard Constraints
In this final experiment, the authors investigate the influence of global hard constraints. Table 3 shows that using TREE in training is critical in this work, and that global constraints generally improve the overall results.
This paper is related to many papers in three dimensions. First of all, from a natural language parsing perspective, this paper presents a state-of-the-art inference algorithm for dependency parsing. Secondly, from a machine learning and structured prediction point of view, this work is closely related to many other approximation inference algorithms on probabilistic graphical models (e.g. HMMs, CRFs, MRFs, and Bayesian Networks). Finally, the proposed approach might also be applied to other sequential modeling natural language processing tasks, for example, Named Entity Tagging, Parts-of-speech Tagging, and Constituent Parsing. Below shows some of the related papers to this work.
Berg-Kirkpatrick et al, ACL 2010: Painless Unsupervised Learning with Features
Benajiba and Rosso, LREC 2008
T. Meltzer et al., 2005: Globally Optimal Solutions for Energy Minimization in Stereo Vision using Reweighted Belief Propagation, ICCV 2005
R. Szeliski et al., 2008: A comparative study of energy minimization methods for Markov random fields with smoothness-based priors, IEEE Transactions on Pattern Analysis and Machine Intelligence 2008
T. Ott, and R. Stoop, 2006: The neurodynamics of belief propagation on binary markov random fields, NIPS 2006
Retrieved from "http://curtis.ml.cmu.edu/w/courses/index.php?title=Smith_and_Eisner_2008:Dependency_parsing_by_belief_propagation&oldid=9093" |
Could you please make it clear what is an 'A' and what is a 'Lambda' (matrix)? - Murray Wiki
Could you please make it clear what is an 'A' and what is a 'Lambda' (matrix)?
The 'A' matrix is the same 'A' matrix in the state-space model ({\displaystyle {\dot {x}}=Ax\,}). The {\displaystyle \Lambda \,} matrix is a diagonal matrix whose diagonal consists of all the eigenvalues of A.
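In code, the relationship can be illustrated with NumPy (the example A below is arbitrary):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])  # example state matrix (illustrative)
eigvals, V = np.linalg.eig(A)
Lam = np.diag(eigvals)        # the Lambda matrix: eigenvalues of A on the diagonal

# For a diagonalizable A, A = V @ Lam @ V^{-1}; here the eigenvalues are -2 and -1.
assert np.allclose(V @ Lam @ np.linalg.inv(V), A)
```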
Retrieved from "https://murray.cds.caltech.edu/index.php?title=Could_you_please_make_it_clear_what_is_an_%27A%27_and_what_is_a_%27Lambda%27_(matrix)%3F&oldid=8260" |
LUApply - Maple Help
Home : Support : Online Help : Mathematics : Linear Algebra : LinearAlgebra Package : Modular Subpackage : LUApply
apply PLU Decomposition to a mod m Matrix or Vector
LUApply(m, A, pvec, B)
m - modulus
A - mod m Matrix (from LUDecomposition)
pvec - permutation vector (from LUDecomposition)
B - mod m Matrix or Vector representing right-hand side of problem
The LUApply function applies the permutation pvec and the forward and backward substitutions encoded in A directly to the right-hand side mod m Matrix or Vector B, where pvec and A are the output of the LUDecomposition function.
B must have the same number of rows as columns in A.
The function works directly on B, returning the solution in B on successful completion. If the function fails, B can be altered.
LUApply is simply the application of Permute, ForwardSubstitute, and then BackwardSubstitute.
This command is part of the LinearAlgebra[Modular] package, so it can be used in the form LUApply(..) only after executing the command with(LinearAlgebra[Modular]). However, it can always be used in the form LinearAlgebra[Modular][LUApply](..).
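For readers outside Maple, the same pipeline (permute, then forward and backward substitution, all mod p) can be sketched in plain Python. This is an illustrative re-implementation under stated assumptions (Doolittle elimination with partial pivoting, unit lower triangle), not Maple code, and the function names are invented:

```python
# Illustrative pure-Python analogue of LUDecomposition + LUApply over Z/pZ.
def lu_decompose_mod(A, p):
    """In-place PLU decomposition of the n x n matrix A mod p.

    Returns the permutation vector piv; afterwards A holds the unit lower
    triangle (below the diagonal) and U (on and above the diagonal).
    """
    n = len(A)
    piv = list(range(n))
    for k in range(n):
        # partial pivoting: pick a row with an invertible entry in column k
        r = next(i for i in range(k, n) if A[i][k] % p != 0)
        A[k], A[r] = A[r], A[k]
        piv[k], piv[r] = piv[r], piv[k]
        inv = pow(A[k][k], -1, p)  # modular inverse of the pivot (Python >= 3.8)
        for i in range(k + 1, n):
            m = (A[i][k] * inv) % p
            A[i][k] = m            # multiplier stored in the L part
            for j in range(k + 1, n):
                A[i][j] = (A[i][j] - m * A[k][j]) % p
    return piv

def lu_apply_mod(A, piv, b, p):
    """Apply the permutation, then forward and backward substitution, to b."""
    n = len(A)
    y = [b[piv[k]] % p for k in range(n)]  # Permute
    for i in range(n):                     # ForwardSubstitute (unit L)
        for j in range(i):
            y[i] = (y[i] - A[i][j] * y[j]) % p
    x = y
    for i in reversed(range(n)):           # BackwardSubstitute (U)
        for j in range(i + 1, n):
            x[i] = (x[i] - A[i][j] * x[j]) % p
        x[i] = (x[i] * pow(A[i][i], -1, p)) % p
    return x

A = [[2, 3], [1, 4]]
piv = lu_decompose_mod(A, 97)
x = lu_apply_mod(A, piv, [5, 6], 97)
print(x)  # -> [78, 79]  (check: 2*78 + 3*79 = 393 = 5 mod 97)
```

As in the Maple routine, the right-hand side is overwritten in place by the solution, and the decomposition can be reused for many right-hand sides.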
Compute LU decomposition of a random 5 x 5 Matrix, and use LUApply to obtain a solution.
\mathrm{with}\left(\mathrm{LinearAlgebra}[\mathrm{Modular}]\right):
p≔97
\textcolor[rgb]{0,0,1}{p}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{97}
A≔\mathrm{Mod}\left(p,\mathrm{Matrix}\left(5,5,\left(i,j\right)↦\mathrm{rand}\left(\right)\right),\mathrm{integer}[]\right):
A
[\begin{array}{ccccc}\textcolor[rgb]{0,0,1}{77}& \textcolor[rgb]{0,0,1}{96}& \textcolor[rgb]{0,0,1}{10}& \textcolor[rgb]{0,0,1}{86}& \textcolor[rgb]{0,0,1}{58}\\ \textcolor[rgb]{0,0,1}{36}& \textcolor[rgb]{0,0,1}{80}& \textcolor[rgb]{0,0,1}{22}& \textcolor[rgb]{0,0,1}{44}& \textcolor[rgb]{0,0,1}{39}\\ \textcolor[rgb]{0,0,1}{60}& \textcolor[rgb]{0,0,1}{39}& \textcolor[rgb]{0,0,1}{43}& \textcolor[rgb]{0,0,1}{12}& \textcolor[rgb]{0,0,1}{55}\\ \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{24}& \textcolor[rgb]{0,0,1}{71}& \textcolor[rgb]{0,0,1}{45}& \textcolor[rgb]{0,0,1}{29}\\ \textcolor[rgb]{0,0,1}{21}& \textcolor[rgb]{0,0,1}{48}& \textcolor[rgb]{0,0,1}{7}& \textcolor[rgb]{0,0,1}{33}& \textcolor[rgb]{0,0,1}{57}\end{array}]
\mathrm{A2}≔\mathrm{Copy}\left(p,A\right):
\mathrm{pv}≔\mathrm{Vector}\left(4\right):
\mathrm{LUDecomposition}\left(p,\mathrm{A2},\mathrm{pv},0\right):
\mathrm{A2},\mathrm{pv}
[\begin{array}{ccccc}\textcolor[rgb]{0,0,1}{77}& \textcolor[rgb]{0,0,1}{96}& \textcolor[rgb]{0,0,1}{10}& \textcolor[rgb]{0,0,1}{86}& \textcolor[rgb]{0,0,1}{58}\\ \textcolor[rgb]{0,0,1}{37}& \textcolor[rgb]{0,0,1}{20}& \textcolor[rgb]{0,0,1}{40}& \textcolor[rgb]{0,0,1}{63}& \textcolor[rgb]{0,0,1}{27}\\ \textcolor[rgb]{0,0,1}{94}& \textcolor[rgb]{0,0,1}{60}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{79}& \textcolor[rgb]{0,0,1}{64}\\ \textcolor[rgb]{0,0,1}{29}& \textcolor[rgb]{0,0,1}{56}& \textcolor[rgb]{0,0,1}{63}& \textcolor[rgb]{0,0,1}{7}& \textcolor[rgb]{0,0,1}{78}\\ \textcolor[rgb]{0,0,1}{62}& \textcolor[rgb]{0,0,1}{54}& \textcolor[rgb]{0,0,1}{40}& \textcolor[rgb]{0,0,1}{10}& \textcolor[rgb]{0,0,1}{5}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{4}\end{array}]
B≔\mathrm{Mod}\left(p,\mathrm{Matrix}\left(5,2,\left(i,j\right)↦\mathrm{rand}\left(\right)\right),\mathrm{integer}[]\right)
\textcolor[rgb]{0,0,1}{B}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{cc}\textcolor[rgb]{0,0,1}{65}& \textcolor[rgb]{0,0,1}{16}\\ \textcolor[rgb]{0,0,1}{93}& \textcolor[rgb]{0,0,1}{96}\\ \textcolor[rgb]{0,0,1}{71}& \textcolor[rgb]{0,0,1}{44}\\ \textcolor[rgb]{0,0,1}{70}& \textcolor[rgb]{0,0,1}{58}\\ \textcolor[rgb]{0,0,1}{25}& \textcolor[rgb]{0,0,1}{29}\end{array}]
X≔\mathrm{Copy}\left(p,B\right):
\mathrm{LUApply}\left(p,\mathrm{A2},\mathrm{pv},X\right):
X
[\begin{array}{cc}\textcolor[rgb]{0,0,1}{11}& \textcolor[rgb]{0,0,1}{37}\\ \textcolor[rgb]{0,0,1}{12}& \textcolor[rgb]{0,0,1}{20}\\ \textcolor[rgb]{0,0,1}{57}& \textcolor[rgb]{0,0,1}{11}\\ \textcolor[rgb]{0,0,1}{83}& \textcolor[rgb]{0,0,1}{68}\\ \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{11}\end{array}]
\mathrm{Multiply}\left(p,A,X\right)-B
[\begin{array}{cc}\textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{0}\end{array}]
Use float[8] with a nontrivial permutation.
p≔13:
A≔\mathrm{Mod}\left(13,[[0,0,12],[12,0,3],[1,1,1]],\mathrm{float}[8]\right)
\textcolor[rgb]{0,0,1}{A}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{ccc}\textcolor[rgb]{0,0,1}{0.}& \textcolor[rgb]{0,0,1}{0.}& \textcolor[rgb]{0,0,1}{12.}\\ \textcolor[rgb]{0,0,1}{12.}& \textcolor[rgb]{0,0,1}{0.}& \textcolor[rgb]{0,0,1}{3.}\\ \textcolor[rgb]{0,0,1}{1.}& \textcolor[rgb]{0,0,1}{1.}& \textcolor[rgb]{0,0,1}{1.}\end{array}]
\mathrm{A2}≔\mathrm{Copy}\left(p,A\right):
\mathrm{pv}≔\mathrm{Vector}\left(2\right):
\mathrm{LUDecomposition}\left(p,\mathrm{A2},\mathrm{pv},0\right):
\mathrm{A2},\mathrm{pv}
[\begin{array}{ccc}\textcolor[rgb]{0,0,1}{12.}& \textcolor[rgb]{0,0,1}{0.}& \textcolor[rgb]{0,0,1}{3.}\\ \textcolor[rgb]{0,0,1}{12.}& \textcolor[rgb]{0,0,1}{1.}& \textcolor[rgb]{0,0,1}{4.}\\ \textcolor[rgb]{0,0,1}{0.}& \textcolor[rgb]{0,0,1}{0.}& \textcolor[rgb]{0,0,1}{12.}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{3}\end{array}]
Now apply to a random vector and check it.
B≔\mathrm{Mod}\left(p,\mathrm{Vector}\left(3,i↦\mathrm{rand}\left(\right)\right),\mathrm{float}[8]\right)
\textcolor[rgb]{0,0,1}{B}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{c}\textcolor[rgb]{0,0,1}{5.}\\ \textcolor[rgb]{0,0,1}{3.}\\ \textcolor[rgb]{0,0,1}{5.}\end{array}]
X≔\mathrm{Copy}\left(p,B\right):
\mathrm{LUApply}\left(p,\mathrm{A2},\mathrm{pv},X\right):
X
[\begin{array}{c}\textcolor[rgb]{0,0,1}{8.}\\ \textcolor[rgb]{0,0,1}{2.}\\ \textcolor[rgb]{0,0,1}{8.}\end{array}]
\mathrm{Multiply}\left(p,A,X\right)-B
[\begin{array}{c}\textcolor[rgb]{0,0,1}{0.}\\ \textcolor[rgb]{0,0,1}{0.}\\ \textcolor[rgb]{0,0,1}{0.}\end{array}] |
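The permute / forward-substitute / backward-substitute pipeline that LUApply performs can be sketched outside Maple. The following pure-Python analogue only illustrates the same idea, not Maple's implementation; the packed storage format (L multipliers below the diagonal, U on and above it) and the function names are assumptions for illustration.

```python
# Sketch of a mod-p LU solve: factor A (rows swapped per a permutation record),
# then apply the permutation and the two triangular substitutions to b, the
# same three steps LUApply chains together.  Assumes p is prime and A is
# nonsingular mod p; a singular column raises StopIteration.
def lu_decompose_mod(A, p):
    n = len(A)
    A = [row[:] for row in A]
    perm = list(range(n))
    for k in range(n):
        piv = next(i for i in range(k, n) if A[i][k] % p)  # partial pivoting
        A[k], A[piv] = A[piv], A[k]
        perm[k], perm[piv] = perm[piv], perm[k]
        inv = pow(A[k][k], p - 2, p)          # modular inverse via Fermat
        for i in range(k + 1, n):
            m = A[i][k] * inv % p
            A[i][k] = m                        # store the L multiplier in place
            for j in range(k + 1, n):
                A[i][j] = (A[i][j] - m * A[k][j]) % p
    return A, perm

def lu_apply_mod(LU, perm, b, p):
    n = len(LU)
    x = [b[i] for i in perm]                   # Permute
    for i in range(n):                         # ForwardSubstitute (unit lower part)
        x[i] = (x[i] - sum(LU[i][j] * x[j] for j in range(i))) % p
    for i in reversed(range(n)):               # BackwardSubstitute
        x[i] = (x[i] - sum(LU[i][j] * x[j] for j in range(i + 1, n))) % p
        x[i] = x[i] * pow(LU[i][i], p - 2, p) % p
    return x
```

As in the Maple examples, the factorization is computed once and can then be applied to any number of right-hand sides.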
The price of homes (in thousands of dollars) is associated with the number of square feet in a home. Home prices in Smallville can be modeled with the equation
p=150+0.041a
where p is the price in thousands of dollars and a is the area of the house in square feet. Home prices in Fancyville can be modeled with the equation
p=250+0.198a
Ngoc saw a 2800 square foot house listed for \$280{,}000.
How much does a 2800 square foot home cost in Smallville?
The price of a 2800 square foot home in Smallville is 150 + 0.041(2800) = 264.8, that is, \$264{,}800.
The price in Fancyville is 250 + 0.198(2800) = 804.4, that is, \$804{,}400.
Which is closer to the price of the house that Ngoc saw? |
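The comparison can be checked with a short calculation; the function names below are just for illustration, and all prices are in thousands of dollars as in the models above.

```python
# Evaluate both price models at 2800 square feet and see which result
# is closer to the $280,000 listing.
def smallville_price(area_sqft):
    return 150 + 0.041 * area_sqft

def fancyville_price(area_sqft):
    return 250 + 0.198 * area_sqft

area = 2800
listed = 280.0  # $280,000 expressed in thousands of dollars

prices = {
    "Smallville": smallville_price(area),
    "Fancyville": fancyville_price(area),
}
closest = min(prices, key=lambda town: abs(prices[town] - listed))
```

The Smallville price (264.8 thousand) is far closer to the listing than the Fancyville price (804.4 thousand).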
Computes the fast Fourier transform (FFT) for the LTE standard transmission bandwidth of 15 MHz
The FFT 1536 block is designed to support LTE standard transmission bandwidth of 15 MHz. This block is used in LTE OFDM Demodulator block operation. The block accepts input data, along with a valid control signal and outputs streaming data with a samplecontrol bus.
The block provides an architecture suitable for HDL code generation and hardware deployment.
scalar of real or complex values
Input data, specified as a scalar of real or complex values.
Providing more fractional bits in the input word length improves the accuracy of the output.
Data Types: double | single | int8 | int16 | int32 | fixed point
Indicates if the input data is valid. When the input valid is 1 (true), the block captures the value on the input data port. When the input valid is 0 (false), the block ignores the input data samples.
reset — Reset control signal
When this value is 1 (true), the block stops the current calculation and clears all internal states.
Frequency channel output data, returned as a scalar of real or complex values.
When the input is of fixed point data type, the output data type is the same as the input data type. When the input is of integer type, the output data type is of fixed point type.
Specifies the complex multiplier type for HDL implementation. Each multiplication is implemented either with Use 3 multipliers and 5 adders or with Use 4 multipliers and 2 adders. The implementation speed depends on the synthesis tool and the target device that you use.
Specifies the type of rounding method for internal fixed-point calculations. For more information about rounding methods, see Rounding Modes (DSP System Toolbox). When the input is any integer or fixed-point data type, this block uses fixed-point arithmetic for internal calculations. This parameter does not apply when the input data is single or double.
Normalize butterfly output — Output normalization
When you select this parameter, the block divides the output by 1536. This option is useful when you want the output of the block to stay in the same amplitude range as its input. This option is required when the input is of fixed-point type.
When you select this parameter, the output word length increases by 2 bits; when you clear it, the output word length increases by 11 bits.
Select this parameter to enable the reset port.
Application of FFT 1536 block in LTE OFDM Demodulation
Use the FFT 1536 block in LTE OFDM demodulation.
The FFT 1536 block implements a radix-3 decimation-in-time (DIT) algorithm. The input sequence x(n), for all n = {0,1,2,...,1535}, is divided into three DIT sequences, x(3n), x(3n+1), x(3n+2), for all n = {0,1,2,...,511}.
This equation defines the FFT 1536 computation of a given sequence x(n):
X\left(k\right)=\sum _{n=0}^{1535}x\left(n\right){W}_{1536}{}^{nk};k=0,1,2,...,1535
The equation can be implemented by dividing it into three parts, where P(k), Q(k), and R(k) are the N/3-point (FFT 512) DFTs of x(3n), x(3n+1), and x(3n+2), respectively. Here, N = 1536 and k = 0,1,2,...,511.
X\left(k\right)=P\left(k\right)+{W}_{N}{}^{k}Q\left(k\right)+{W}_{N}{}^{2k}R\left(k\right)
X\left(k+N/3\right)=P\left(k\right)+{W}_{3}{}^{1}{W}_{N}{}^{k}Q\left(k\right)+{W}_{3}{}^{2}{W}_{N}{}^{2k}R\left(k\right)
X\left(k+2N/3\right)=P\left(k\right)+{W}_{3}{}^{2}{W}_{N}{}^{k}Q\left(k\right)+{W}_{3}{}^{1}{W}_{N}{}^{2k}R\left(k\right)
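The three-way split above is easy to verify numerically at a smaller size. The sketch below uses N = 12 (so N/3 = 4) instead of 1536, purely to keep the direct DFTs small; it is a plain-Python illustration of the recombination equations, not HDL-oriented code.

```python
import cmath

def dft(x):
    # direct O(N^2) DFT with twiddle factor W_N = exp(-2*pi*i/N)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

N = 12                      # stand-in for 1536; N/3 plays the role of 512
x = [complex(n % 5, -n) for n in range(N)]

P = dft(x[0::3])            # DFT of x(3n)
Q = dft(x[1::3])            # DFT of x(3n+1)
R = dft(x[2::3])            # DFT of x(3n+2)

W = lambda num, k: cmath.exp(-2j * cmath.pi * k / num)
X = [0j] * N
for k in range(N // 3):     # recombine per the three equations above
    X[k]              = P[k] + W(N, k) * Q[k] + W(N, 2 * k) * R[k]
    X[k + N // 3]     = P[k] + W(3, 1) * W(N, k) * Q[k] + W(3, 2) * W(N, 2 * k) * R[k]
    X[k + 2 * N // 3] = P[k] + W(3, 2) * W(N, k) * Q[k] + W(3, 1) * W(N, 2 * k) * R[k]

ref = dft(x)                # reference: one full-length DFT
err = max(abs(a - b) for a, b in zip(X, ref))
```

The recombined spectrum matches the full-length DFT to floating-point precision, which is exactly the property the block exploits by streaming three FFT 512 results through the butterfly stage.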
This diagram shows the internal architecture of the block and how the input sequence streams through the components of the block.
The input sequence x(n) is demultiplexed into three DIT sequences, x(3n), x(3n+1), x(3n+2), each of length 512. Three first-input first-output (FIFO) memories store these sequences. These DIT sequences are serialized and streamed through the FFT 512 block.
This image shows the output waveform of the block when operated with default configuration parameters. The block provides output data after a latency of 3180 clock cycles. The length of the output data between start (Ctrl.(1)) and end (Ctrl.(2)) output control signals is 1536 clock cycles.
The performance of the synthesized HDL code varies with your target and synthesis options. This table shows the resource and performance data synthesis results of the block with default configuration parameters, along with normalization feature enabled, and with an input data in fixdt(1,17,15) format. The generated HDL is targeted to Xilinx® Zynq® XC7Z045-FFG900-2 FPGA board. The design achieves a clock frequency of 355 MHz. |
Structural Causal Models | Causal Flows
Describing Causality
Recall from my previous post that causal inference techniques are largely concerned with distinguishing associative relationships (relationships in which a change in one variable is accompanied by a change in another) from causal relationships (relationships between a cause and an effect, in which the cause is an event that contributes to the production of the effect). To complete these causal inference tasks, it helps to develop a concise language for describing causal and associative relationships: a well-designed analytical language provides the descriptive tools necessary to construct and validate hypothesized causal relationships. Before we go any further in our exploration of causal inference, we must first describe a simple yet expressive notation for hypothesized causal relationships between variables. This notation, referred to in the causal inference literature as Structural Causal Models (SCMs), will simplify our further discussion of the relationship between causality and probability.
The earliest known version of SCMs was introduced by geneticist Sewall Wright1 around 1918, originally for inferring the relative importance of the factors that determine the birth weight of guinea pigs. He used the construction to develop the methodology of path analysis, a technique commonly used for causal inference over layered and complex processes, such as phenotypic inheritance.
Figure 1: Drawings from Wright's 1921 paper, Correlation and Causation. The bottom image presents an ancestor of causal graphs: a representation of structural causal models describing the relationships between a variety of genetic factors and a guinea pig's birth weight. Wright's path tracing rules, shown in the top image, define a set of rules for using a set of associative relationships to generate such a causal graph.
Before diving into the definition of a structural causal model, one must be familiar with directed acyclic graphs (DAGs), which are commonly used to describe the relationships between causes and their corresponding effects. A DAG is a graph, comprised of nodes and edges, in which the direction of an edge determines the relationship between the two nodes it connects. A DAG also has no cycles, that is, no paths of at least one edge that start and end at the same node. Below are some examples of graphs that are DAGs and some that are not:
Figure 2: One graph that is directed and acyclic and two that are not. Note that the undirected graph lacks directed edges (represented by arrows) and thus cannot represent concepts such as causality, in which a cause influences an effect and not the other way around. Also note the purple arrows in the directed cyclic graph, denoting a path from the red node back to itself comprised of 3 edges.
Now that you are armed with an understanding of the structures which define a DAG, I can describe how a structural causal model is constructed.
Constructing A Structural Causal Model
A structural causal model is comprised of three components:
A set of variables describing the state of the universe and how it relates to a particular data set we are provided. These variables come in three kinds: explanatory variables, outcome variables, and unobserved variables. Outcome and explanatory variables are both observed variables, that is, variables describing processes measured in our data set, while unobserved variables are "background processes" for which we do not have observational data. For practical causal inference, it is helpful to distinguish outcome variables, which an analyst is interested in changing via intervention, from explanatory variables, which an analyst believes can be altered in order to cause a desired change. In an SCM, observed variables are represented by an arbitrary single-letter variable name, while unobserved variables are represented by the letter
\textcolor{#A93F55}{u}
, with an arbitrary single letter subscript. For example, for the analysis of the effect of Ice Cream Consumption on Drownings described in my previous post, we can represent explanatory variable Ice Cream Consumption as
\textcolor{#7A28CB}{i}
, outcome variable Drownings as
\textcolor{#EF3E36}{d}
, and unobserved variable Temperature as
\textcolor{#A93F55}{u_t}
Causal relationships, which describe the causal effect variables have on one another. Specifically, causal relationships extend from observed and unobserved variables to observed variables. Such relationships are written using the assignment operator (
\textcolor{#52414C}{\leftarrow}
) and function notation (
\textcolor{#52414C}{f}
) with a subscript labelling the variable they affect. For example, we can represent the causal relationship of an unobserved variable Temperature on an explanatory variable Ice Cream Consumption.
\color{#52414C}\textcolor{#7A28CB}{i} \leftarrow f_i(\textcolor{#A93F55}{u_t})
A probability distribution defined over the unobserved variables in the model, describing the likelihood that each variable takes a particular value.
Structural causal models are tightly linked with directed acyclic graphs, in that the relationships between the observed variables included within an SCM adhere to the same set of restrictions defining DAGs. All causal relationships between said variables must be one-directional, and no variable can have causal influence on itself as the result of a cycle, commonly referred to as a feedback loop. Why must we place such a restriction on SCMs? Hold that question; I will revisit it towards the end of the post.
The SCM of the example presented in my previous post can be represented as follows, in conjunction with an arbitrary probability distribution defined over the unobserved variable Temperature (describing the likelihood that a given month has a particular average monthly temperature). It describes a hypothesized causal effect of Temperature on Ice Cream Consumption, as well as an effect of Temperature and Ice Cream Consumption on Drownings.
\color{#52414C}\textcolor{#7A28CB}{i} \leftarrow f_i(\textcolor{#A93F55}{u_t})
\color{#52414C}\textcolor{#EF3E36}{d} \leftarrow f_d(\textcolor{#A93F55}{u_t}, \textcolor{#7A28CB}{i})
For now, we will focus on how the first two of these components interact to comprise a structural causal model. We will discuss how structural causal models allow us to use probability to infer causal relationships in a future post.
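To make the notation concrete, here is a toy numerical version of the ice-cream SCM above. The particular functional forms, the coefficients, and the normal temperature distribution are my own illustrative assumptions, not part of the model; I deliberately give Ice Cream Consumption zero weight in the drowning mechanism, so any association between the two observed variables comes entirely from the shared unobserved cause.

```python
import random

random.seed(0)

def f_i(u_t):
    # assumed mechanism: warmer months mean more ice cream consumption
    return 2.0 * u_t + 5.0

def f_d(u_t, i):
    # assumed mechanism: temperature drives drownings; ice cream has no effect
    return 0.5 * u_t + 0.0 * i + 1.0

# sample the SCM: draw u_t from P(u_t), then evaluate the assignments
samples = []
for _ in range(1000):
    u_t = random.gauss(20, 5)   # assumed distribution over monthly temperature
    i = f_i(u_t)
    d = f_d(u_t, i)
    samples.append((i, d))

def corr(xs, ys):
    # Pearson correlation, computed from scratch
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy)

rho = corr([s[0] for s in samples], [s[1] for s in samples])
```

Even though ice cream has no causal effect on drownings in this toy model, the sampled data show a near-perfect correlation between the two, because both assignments depend on the same unobserved Temperature.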
As previously mentioned, the relationships between observed variables in a structural causal model adhere to the same set of restrictions which define a directed acyclic graph. Thus, structural causal models are commonly represented with causal graphs, extensions of directed acyclic graphs used to thoroughly communicate hypotheses of causal relationships between variables. The rules defining the construction of these causal graphs are as follows.
Causal Graphs Are The DAG Representations Of Structural Causal Models
Every SCM can be represented as a DAG, with variables represented as nodes and relationships between variables represented as edges. Hypothesized causal relationships amongst outcome and explanatory variables are represented by solid arrows in the direction of causality. For example, the SCM defining a single causal relationship between an explanatory variable and an outcome variable:
\color{#52414C}\textcolor{#EF3E36}{o} \leftarrow f_o(\textcolor{#7A28CB}{e})
can be represented by the following causal graph.
Figure 3: A causal graph representing a single causal relationship.
Causal Graphs Use Dashed Arrows To Represent Causal Relationships From Unobserved Variables
As previously mentioned, unobserved variables represent processes we cannot see in our data and for which we cannot test hypotheses of their causal effect. Thus, we cannot use unobserved variables to explain changes in explanatory and outcome variables. To represent this restricted utility of unobserved variables, we use a dashed line to represent a possible causal relationship from an unobserved variable to an observed variable. For example, the SCM defining a single causal relationship between an unobserved variable and an outcome variable:
\color{#52414C}\textcolor{#7A28CB}{o} \leftarrow f_o(\textcolor{#A93F55}{u})
Figure 4: A causal graph representing an unobserved variable's effect on an observed variable.
To illustrate the utility of this notation, let's use a new example: consider the impact of Aptitude on Years Of Higher Education and Income, with Aptitude being a catch-all term for the traits that influence students to spend more time in school and to earn more later in life. This is a structural model commonly analyzed by labor economics researchers interested in quantifying the value of additional education after high school. Aptitude cannot be easily measured, as there are a variety of factors that affect both educational and socioeconomic outcomes (possible components include innate intelligence, work ethic, cultural values, or greed). In conjunction with an arbitrary probability distribution over Aptitude, the SCM describing these causal relationships is as follows.
\color{#52414C}\textcolor{#7A28CB}{y} \leftarrow f_y(\textcolor{#A93F55}{u_a})
\color{#52414C}\textcolor{#EF3E36}{i} \leftarrow f_i(\textcolor{#A93F55}{u_a}, \textcolor{#7A28CB}{y})
This SCM can also be represented by the following causal graph.
Figure 5: A causal graph representing a hypothesis for the causal relationships that define the effect that higher education has on income.
Causal Graphs Use Bidirectional Arrows To Represent Possible Associative Relationships Between Unobserved Variables
For some analytic strategies over causal graphs, it is helpful to represent a possible associative relationship between two unobserved variables. Since these variables are unobserved and describe processes for which we have no data, it is impossible to infer a causal direction for this associative relationship, or even to ensure that an associative relationship exists. To visualize this ambiguity, we represent these "possible" relationships with a dashed bidirectional arrow when drawing a causal graph. Consider an SCM describing a process in which two unobserved variables have a possible associative relationship, each having an effect on one of two observed variables that are hypothesized to have a causal relationship:
\color{#52414C}\textcolor{#7A28CB}{e} \leftarrow f_e(\textcolor{#A93F55}{u_a})
\color{#52414C}\textcolor{#EF3E36}{o} \leftarrow f_o(\textcolor{#A93F55}{u_b}, \textcolor{#7A28CB}{e})
\color{#52414C}\textcolor{#A93F55}{u_a} \not\!\perp\!\!\!\perp \textcolor{#A93F55}{u_b}
The last line of this SCM represents the possible associative relationship between
\textcolor{#A93F55}{u_a}
and
\textcolor{#A93F55}{u_b}
, where
\not\!\perp\!\!\!\perp
is the mathematical symbol for "not independent". The causal graph of this SCM is as follows.
Figure 6: A causal graph representing a structural model containing two unobserved variables and two observed variables.
Why Must An SCM Define A DAG?
Now that I have presented the structures which define a causal graph, I can answer this question, posed when I first introduced the concept of an SCM. To many, the requirement that edges be one-directional is intuitive, as causal relationships similarly flow in one direction. However, it is not as clear why SCMs must be represented by acyclic graphs. This becomes clearer after analyzing a familiar example. Consider a hypothesized causal relationship between three explanatory variables Buyers (
\textcolor{#7A28CB}{b}
), Sellers (
\textcolor{#7A28CB}{s}
), Marketing Spend (
\textcolor{#7A28CB}{m}
) and an outcome variable Revenue (
\textcolor{#EF3E36}{r}
) described by the following causal relationships.
\color{#52414C}\textcolor{#7A28CB}{m} \leftarrow f_m(\textcolor{#EF3E36}{r})
\color{#52414C}\textcolor{#7A28CB}{s} \leftarrow f_s(\textcolor{#7A28CB}{m})
\color{#52414C}\textcolor{#7A28CB}{b} \leftarrow f_b(\textcolor{#7A28CB}{s})
\color{#52414C}\textcolor{#EF3E36}{r} \leftarrow f_r(\textcolor{#7A28CB}{b})
Which corresponds to the following causal graph.
Figure 7: A directed cyclic graph representing a cyclical causal relationship.
Such a cycle is an example of the virtuous cycle of marketplace dynamics, describing the many moving parts which must be aligned to kick-start a successful marketplace business (please check out Lenny Rachitsky's excellent blog series for more on this topic).
Note that within this graph, which consists of a 4-edge cycle, there exists an edge from Buyers to Revenue, implying that a change in the number of buyers on a platform causes a change in that platform's monthly revenue. In addition, note that there exist edges from Revenue to Marketing Spend, from Marketing Spend to Sellers, and from Sellers to Buyers; thus, a change in monthly revenue can cause a business to change its marketing spend, eventually attracting more buyers to its platform. However, these two opposite causal relationships over the same variables, Buyers and Revenue, contradict the definition of a causal relationship presented in my previous post as a one-directional relationship from a cause to an effect. Thus, we cannot define an SCM from these hypothesized causal relationships.
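The acyclicity requirement can be checked mechanically with a topological sort. The sketch below encodes the marketplace example using Python's standard-library graphlib (the node names are mine): the feedback loop makes ordering impossible, while dropping the closing edge restores a valid DAG.

```python
from graphlib import TopologicalSorter, CycleError

# graphlib maps each node to its set of predecessors, i.e. its direct causes
marketplace = {
    "revenue": {"buyers"},       # r is caused by b
    "buyers": {"sellers"},       # b is caused by s
    "sellers": {"marketing"},    # s is caused by m
    "marketing": {"revenue"},    # m is caused by r -- this closes the loop
}

try:
    list(TopologicalSorter(marketplace).static_order())
    is_dag = True
except CycleError:
    is_dag = False               # the 4-edge cycle is detected here

# removing the feedback edge (Revenue -> Marketing Spend) yields a DAG
acyclic = dict(marketplace, marketing=set())
order = list(TopologicalSorter(acyclic).static_order())
```

A topological order lists every cause before its effects, which is exactly what a cyclic set of hypothesized relationships cannot provide.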
There is significant scholarship regarding analysis of a variation of causal models which allow for cyclic causal graphs 2 3 , and hopefully I’ll get to cover this in a future post.
As we will see in future posts, structural causal models provide a powerful representation of causal relationships, enabling the abstract analyses that often yield practical methodologies for determining causal effects. Causal graphs, the graph-based counterparts of SCMs, are similarly useful to analysts; they facilitate visualization as well as the use of graph theory for causal inference tasks. For example, in future posts I will discuss algorithms for automatically identifying structural causal models from undirected graphs which represent solely associative relationships. In my next post, I will use structural causal models to explain confounding bias, a term describing processes in which unobserved variables have a direct effect on both explanatory and outcome variables.
Wright, S. (1921). “Correlation and causation”. Journal of Agricultural Research. 20: 557–585.↩
Lacerda, Gustavo, et al. “Discovering Cyclic Causal Models by Independent Components Analysis.” ArXiv:1206.3273 [Cs, Stat], June 2012. arXiv.org, http://arxiv.org/abs/1206.3273.↩
Richardson, Thomas S. "A Discovery Algorithm for Directed Cyclic Graphs." ArXiv:1302.3599 [Cs], Feb. 2013. arXiv.org, http://arxiv.org/abs/1302.3599.↩
← Getting In To A Causal Flow
Potential Outcomes Model → |
Combined Effects of Magnetic Field and Thermal Radiation on Fluid Flow and Heat Transfer of Mixed Convection in a Vertical Cylindrical Annulus | J. Heat Transfer | ASME Digital Collection
Institute of Thermal Engineering,
e-mails: heatli@hotmail.com; heatli@dlut.edu.cn
e-mail: wangwei_neu_china@hotmail.com
The State key Laboratory of Refractories and
e-mail: zk_neu@163.com
Contributed by the Heat Transfer Division of ASME for publication in the JOURNAL OF HEAT TRANSFER. Manuscript received December 21, 2014; final manuscript received December 27, 2015; published online March 8, 2016. Assoc. Editor: Sujoy Kumar Saha.
Li, B., Wang, W., and Zhang, J. (March 8, 2016). "Combined Effects of Magnetic Field and Thermal Radiation on Fluid Flow and Heat Transfer of Mixed Convection in a Vertical Cylindrical Annulus." ASME. J. Heat Transfer. June 2016; 138(6): 062501. https://doi.org/10.1115/1.4032609
Magnetohydrodynamic (MHD, also for magnetohydrodynamics) mixed convection of electrically conducting and radiative participating fluid is studied in a differentially heated vertical annulus. The outer cylinder is stationary, and the inner cylinder is rotating at a constant angular speed around its axis. The temperature difference between the two cylindrical walls creates buoyancy force, due to the density variation. A constant axial magnetic field is also imposed to resist the fluid motion. The nonlinear integro-differential equation, which characterizes the radiation transfer, is solved by the discrete ordinates method (DOM). The MHD equations, which describe the magnetic and transport phenomena, are solved by the collocation spectral method (CSM). Detailed numerical results of heat transfer rate, velocity, and temperature fields are presented for
0≤Ha≤100, 0.1≤τL≤10, 0≤ω≤1, and 0.2≤εW≤1. The computational results reveal that the fluid flow and heat transfer are effectively suppressed by the magnetic field as expected. Substantial changes occur in flow patterns as well as in isotherms when the optical thickness and emissivity of the walls vary in the specified ranges. However, the flow structure and the temperature distribution change only slightly when the scattering albedo increases from 0 to 0.5, but a substantial change is observed when it increases to 1.
MHD, Natural convection , Mixed convection
Albedo, Annulus, Electromagnetic scattering, Emissivity, Flow (Dynamics), Fluid dynamics, Fluids, Heat transfer, Magnetic fields, Magnetohydrodynamics, Mixed convection, Radiation (Physics), Radiation scattering, Scattering (Physics), Temperature, Thermal radiation, Temperature distribution, Cylinders, Boundary-value problems
Time signature - Simple English Wikipedia, the free encyclopedia
specification of beats in a musical bar or measure
A time signature is a set of two numbers, one on top of the other one, written right after the key signature in a piece of music. The two numbers in a time signature tell you how many of one kind of note there are in each measure in the song. For example,
{\displaystyle {\frac {4}{4}}}
means that there are four beats in each measure and the quarter note gets one beat.
1 Finding out what certain time signatures mean
2 Time signatures that are used very often
3 Symbols that are used instead of time signatures
Finding out what certain time signatures meanEdit
The number in the top of the time signature tells a player how many of a certain kind of note there are in each measure. The number in the bottom of the time signature tells what kind of note is used a certain number of times in each measure. The number on the bottom of the time signature can be any power of 2. So, 64 could be a number that is put in the bottom of the time signature, but 65 could not be one.[1]
Number on the bottom of the time signature
1 A whole note lasts one beat
2 A half note lasts one beat
4 A quarter note lasts one beat
8 An eighth note lasts one beat
16 A sixteenth note lasts one beat
This table shows different numbers that could be the bottom of a time signature, and what they mean. Note that each kind of note lasts one-half as long as the kind of note above it in the table. For example, if one quarter note lasts one beat, then one eighth note lasts one-half of a beat, because one divided by two is one-half.
Time signatures that are used very oftenEdit
{\displaystyle {\frac {4}{4}}}
Four quarter notes in each measure[2]
{\displaystyle {\frac {3}{4}}}
Three quarter notes in each measure[2]
{\displaystyle {\frac {2}{4}}}
Two quarter notes in each measure[2]
{\displaystyle {\frac {6}{8}}}
Six eighth notes in each measure[2]
Symbols that are used instead of time signaturesEdit
The letter C has been used instead of writing 4/4 time. The "cut time" (alla breve) symbol, a C with a vertical line through it, has been used instead of writing 2/2
time, where every note is cut in half. So, in "cut time", a quarter note, which usually gets one beat, gets one-half of a beat.
↑ Time signature information
↑ 2.0 2.1 2.2 2.3 Connexions time signature summary
EuDML | Torus knots and Dunwoody manifolds.
Torus knots and Dunwoody manifolds.
Aydin, Hüseyin, Gültekyn, Inci, and Mulazzani, Michele. "Torus knots and Dunwoody manifolds.." Sibirskij Matematicheskij Zhurnal 45.1 (2004): 3-10. <http://eudml.org/doc/51021>.
author = {Aydin, Hüseyin, Gültekyn, Inci, Mulazzani, Michele},
keywords = {branched covering; Heegaard decomposition; Dunwoody manifold; cyclic presentation of the fundamental group},
title = {Torus knots and Dunwoody manifolds.},
AU - Aydin, Hüseyin
AU - Gültekyn, Inci
AU - Mulazzani, Michele
TI - Torus knots and Dunwoody manifolds.
KW - branched covering; Heegaard decomposition; Dunwoody manifold; cyclic presentation of the fundamental group
branched covering, Heegaard decomposition, Dunwoody manifold, cyclic presentation of the fundamental group
Special coverings, e.g. branched; S^3
Articles by Gültekyn
Articles by Mulazzani |
Each of these order commands returns one of -1, 0, or 1.
The short-lex and short-rev-lex orders are invariant under concatenation on either side:
ShortLexOrder(u, v) = ShortLexOrder(cat(u, w), cat(v, w))
ShortLexOrder(u, v) = ShortLexOrder(cat(w, u), cat(w, v))
ShortRevLexOrder(u, v) = ShortRevLexOrder(cat(u, w), cat(v, w))
ShortRevLexOrder(u, v) = ShortRevLexOrder(cat(w, u), cat(w, v))
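For readers without Maple, short-lex comparison is easy to sketch. A hypothetical Python analogue of ShortLexOrder (the function name is ours), returning -1, 0, or 1 in the same convention:

```python
def shortlex(u, v):
    """Short-lex order: a shorter string precedes a longer one;
    strings of equal length are compared lexicographically.
    Returns -1, 0, or 1, mirroring the Maple commands' convention."""
    if len(u) != len(v):
        return -1 if len(u) < len(v) else 1
    if u == v:
        return 0
    return -1 if u < v else 1

print(shortlex("abd", "abcd"))  # -1: "abd" is shorter, so it precedes
```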
The left and right recursive path orders, manifest in the LeftRecursivePathOrder(s1, s2) and RightRecursivePathOrder(s1, s2) commands, are defined recursively for two strings s and t. In the left recursive path order, s precedes t if one of the following holds:
s[-1] = t[-1] and s[1..-2] precedes t[1..-2]; or
t[-1] < s[-1] and s[1..-2] precedes t; or
s[-1] < t[-1] and s precedes t[1..-2].
In the right recursive path order, s precedes t if one of the following holds:
s[1] = t[1] and s[2..-1] precedes t[2..-1]; or
t[1] < s[1] and s[2..-1] precedes t; or
s[1] < t[1] and s precedes t[2..-1].
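The left recursive path order as reconstructed above can be sketched in Python (the function name and the empty-string base cases are our assumptions; ties fall through to the "precedes or equals" reading of the recursive cases):

```python
def left_rpo(s, t):
    """Left recursive path order comparator, returning -1, 0, or 1.
    Peels characters from the right end of the strings, following
    the three recursive cases described above."""
    if s == t:
        return 0
    if not s:
        return -1           # assumed base case: empty precedes nonempty
    if not t:
        return 1
    if s[-1] == t[-1]:
        return left_rpo(s[:-1], t[:-1])
    if s[-1] < t[-1]:
        # s precedes t when s precedes-or-equals t with its last char removed
        return -1 if left_rpo(s, t[:-1]) <= 0 else 1
    # t[-1] < s[-1]: s precedes t when s minus its last char strictly precedes t
    return -1 if left_rpo(s[:-1], t) < 0 else 1

print(left_rpo("abc", "abcc"))  # -1, matching LeftRecursivePathOrder("abc", "abcc")
```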
with(StringTools):
LexOrder("abc", "abd");
    -1
LexOrder("abd", "abcd");
    1
ShortLexOrder("abd", "abcd");
    -1
RevLexOrder("bcd", "abd");
    1
RevLexOrder("bcd", "bd");
    1
ShortRevLexOrder("aba", "abc");
    -1
ShortRevLexOrder("abc", "abc");
    0
LeftRecursivePathOrder("abc", "abcc");
    -1
RightRecursivePathOrder("abc", "acc");
    -1
EuDML | The injective hull and the bc-hull of a topological space.
The injective hull and the bc-hull of a topological space.
Ershov, Yuri L.
Ershov, Yuri L.. "The injective hull and the bc-hull of a topological space.." Novi Sad Journal of Mathematics 29.2 (1999): 1-6. <http://eudml.org/doc/230670>.
@article{Ershov1999,
author = {Ershov, Yuri L.},
keywords = {topological T_0-space; bc-hull},
title = {The injective hull and the bc-hull of a topological space.},
AU - Ershov, Yuri L.
TI - The injective hull and the bc-hull of a topological space.
KW - topological T_0-space; bc-hull
topological T_0-space, bc-hull
T_0, T_3
Articles by Ershov |
Let G be a Lie group with identity e, and let M be a manifold. An action of G on M is a map μ : G × M → M such that μ(e, x) = x and μ(a*b, x) = μ(a, μ(b, x)) for all a, b ∈ G and x ∈ M. Associated to μ are, for each a ∈ G, the map μ_{1,a} : M → M defined by μ_{1,a}(x) = μ(a, x) and, for each x ∈ M, the map μ_{2,x} : G → M defined by μ_{2,x}(a) = μ(a, x). The infinitesimal generators of μ are the set Γ_μ of vector fields on M obtained by differentiating the maps μ_{2,x} with respect to the group coordinates on G and evaluating the results at the identity. Given a set Γ of vector fields on M, the Action command constructs an action μ with Γ_μ = Γ; the associated maps μ_{1,a} and μ_{2,x} are then determined by μ. The Lie algebra 𝔤 defined by the vector fields Γ can be computed with the LieAlgebraData command.
with(DifferentialGeometry):
with(GroupActions):
with(LieAlgebras):
with(Library):
Define a manifold M with coordinates [x, y].
DGsetup([x, y], M):
Define the infinitesimal generators Γ.
Gamma := evalDG([D_x, D_y, y*D_x]);
    Γ := [D_x, D_y, y*D_x]
LieAlgebraData(Gamma);
    [[e2, e3] = e1]
DGsetup([z1, z2, z3], G):
mu1 := Action(Gamma, G);
    μ1 := [x = y*z3 + z2*z3 + x + z1, y = z2 + y]
newGamma := InfinitesimalTransformation(mu1, [z1, z2, z3]);
    newGamma := [D_x, D_y, y*D_x]
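The bracket relation that LieAlgebraData reports for Γ = [D_x, D_y, y D_x] can be double-checked by hand. A small SymPy sketch (the coefficient-pair representation of vector fields is our own device, not Maple's):

```python
import sympy as sp

x, y = sp.symbols('x y')
# A vector field a*D_x + b*D_y on the (x, y) plane is stored as the pair (a, b).
e1 = (sp.Integer(1), sp.Integer(0))   # D_x
e2 = (sp.Integer(0), sp.Integer(1))   # D_y
e3 = (y, sp.Integer(0))               # y*D_x

def bracket(X, Y):
    """Lie bracket [X, Y] of two planar vector fields."""
    ax, ay = X
    bx, by = Y
    cx = ax*sp.diff(bx, x) + ay*sp.diff(bx, y) - bx*sp.diff(ax, x) - by*sp.diff(ax, y)
    cy = ax*sp.diff(by, x) + ay*sp.diff(by, y) - bx*sp.diff(ay, x) - by*sp.diff(ay, y)
    return (sp.simplify(cx), sp.simplify(cy))

print(bracket(e2, e3))  # (1, 0), i.e. [e2, e3] = e1, as LieAlgebraData reports
```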
Gamma2 := evalDG([y*D_x, D_x, D_y]);
    Γ2 := [y*D_x, D_x, D_y]
L2 := LieAlgebraData(Gamma2, Alg2);
    L2 := [[e1, e3] = -e2]
DGsetup(L2);
    Lie algebra: Alg2
Adjoint();
    [Matrix([[0, 0, 0], [0, 0, -1], [0, 0, 0]]), Matrix([[0, 0, 0], [0, 0, 0], [0, 0, 0]]), Matrix([[0, 0, 0], [1, 0, 0], [0, 0, 0]])]
mu1, B := Action(Gamma2, G, output = ["ManifoldToManifold", "Basis"]);
    μ1, B := [x = y*z2 + x + z1, y = z3 + y], [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
newGamma := InfinitesimalTransformation(mu1, [z1, z2, z3]);
    newGamma := [D_x, y*D_x, D_y]
map(DGzip, B, Gamma2, "plus");
    [D_x, y*D_x, D_y]
DGsetup([x, y], M):
Gamma3 := Retrieve("Gonzalez-Lopez", 1, [22, 17], manifold = M);
    Γ3 := [D_x, D_y, x*D_y, (1/2)*x^2*D_y, exp(x)*D_y]
DGsetup([z1, z2, z3, z4, z5], G3);
    frame name: G3
mu := Action(Gamma3, G3);
    μ := [x = z5 + x, y = exp(z5 + x)*z1 + z2 + z3*x + (1/2)*x^2*z4 + y]
InfinitesimalTransformation(mu, [z1, z2, z3, z4, z5]);
    [exp(x)*D_y, D_y, x*D_y, (1/2)*x^2*D_y, D_x]
DGsetup([x, y, u, v], M4):
Gamma4 := Retrieve("Petrov", 1, [32, 6], manifold = M4);
    Γ4 := [D_y, D_u, u*D_u + y*D_y - D_x, y*D_u - u*D_y]
DGsetup([z1, z2, z3, z4], G4);
    frame name: G4
mu := Action(Gamma4, G4);
    μ := [x = -z3 + x, y = -sin(z4)*exp(z3)*u + cos(z4)*exp(z3)*y + z1, u = sin(z4)*exp(z3)*y + cos(z4)*exp(z3)*u + z2, v = v]
InfinitesimalTransformation(mu, [z1, z2, z3, z4]);
    [D_y, D_u, u*D_u + y*D_y - D_x, y*D_u - u*D_y]
Loss of k-nearest neighbor classifier - MATLAB loss - MathWorks India
Loss of k-nearest neighbor classifier
L = loss(mdl,Tbl,ResponseVarName) returns a scalar representing how well mdl classifies the data in Tbl when Tbl.ResponseVarName contains the true classifications. If Tbl contains the response variable used to train mdl, then you do not need to specify ResponseVarName.
When computing the loss, the loss function normalizes the class probabilities in Tbl.ResponseVarName to the class probabilities used for training, which are stored in the Prior property of mdl.
The meaning of the classification loss (L) depends on the loss function and weighting scheme, but, in general, better classifiers yield smaller classification loss values. For more details, see Classification Loss.
L = loss(mdl,Tbl,Y) returns a scalar representing how well mdl classifies the data in Tbl when Y contains the true classifications.
When computing the loss, the loss function normalizes the class probabilities in Y to the class probabilities used for training, which are stored in the Prior property of mdl.
L = loss(mdl,X,Y) returns a scalar representing how well mdl classifies the data in X when Y contains the true classifications.
Examine the loss of the classifier for a mean observation classified as 'versicolor'.
Example: loss(mdl,Tbl,'response','LossFun','exponential','Weights','w') returns the weighted exponential loss of mdl classifying the data in Tbl. Here, Tbl.response is the response variable, and Tbl.w is the weight variable.
'mincost' is appropriate for classification scores that are posterior probabilities. By default, k-nearest neighbor models return posterior probabilities as classification scores (see predict).
You can specify a function handle for a custom loss function using @ (for example, @lossfun). Let n be the number of observations in X and K be the number of distinct classes (numel(mdl.ClassNames)). Your custom loss function must have this form:
function lossvalue = lossfun(C,S,W,Cost)
C is an n-by-K logical matrix with rows indicating the class to which the corresponding observation belongs. The column order corresponds to the class order in mdl.ClassNames. Construct C by setting C(p,q) = 1, if observation p is in class q, for each row. Set all other elements of row p to 0.
S is an n-by-K numeric matrix of classification scores. The column order corresponds to the class order in mdl.ClassNames. The argument S is a matrix of classification scores, similar to the output of predict.
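As an illustration of the C/S/W convention (not MathWorks code), a NumPy sketch of a custom loss in the lossfun(C, S, W, Cost) style that computes the weighted misclassification rate:

```python
import numpy as np

def lossfun(C, S, W, Cost=None):
    """Weighted misclassification loss. C is an n-by-K 0/1 matrix marking
    each observation's true class, S an n-by-K score matrix, W a length-n
    weight vector. Cost is accepted for signature compatibility but unused."""
    y_true = C.argmax(axis=1)   # true class index per observation
    y_hat = S.argmax(axis=1)    # predicted class = class with maximal score
    return float(np.sum(W * (y_hat != y_true)))

C = np.array([[1, 0], [0, 1]])              # obs 1 in class 0, obs 2 in class 1
S = np.array([[0.9, 0.1], [0.6, 0.4]])      # obs 2 is scored incorrectly
W = np.array([0.5, 0.5])
print(lossfun(C, S, W))  # 0.5: only the second observation is misclassified
```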
ones(size(X,1),1) (default) | numeric vector | name of a variable in Tbl
If you specify Weights as the name of a variable in Tbl, the name must be a character vector or string scalar. For example, if the weights are stored as Tbl.w, then specify Weights as 'w'. Otherwise, the software treats all columns of Tbl, including Tbl.w, as predictors.
loss normalizes the weights so that observation weights in each class sum to the prior probability of that class. When you supply Weights, loss computes the weighted classification loss.
Overall, the normalized weights satisfy \sum_{j=1}^{n} w_j = 1. The supported losses, written in terms of the classification margin m_j of observation j, are:

Binodeviance: L = \sum_{j=1}^{n} w_j \log\{1 + \exp[-2 m_j]\}.

Misclassification cost: L = \sum_{j=1}^{n} w_j c_{y_j \hat{y}_j}, where \hat{y}_j is the class label corresponding to the class with the maximal score, and c_{y_j \hat{y}_j} is the user-specified cost of classifying an observation into class \hat{y}_j when its true class is y_j.

Misclassified rate: L = \sum_{j=1}^{n} w_j I\{\hat{y}_j \ne y_j\}.

Cross-entropy loss: L = -\sum_{j=1}^{n} \frac{\tilde{w}_j \log(m_j)}{Kn}, where the weights \tilde{w}_j are normalized to sum to n.

Exponential loss: L = \sum_{j=1}^{n} w_j \exp(-m_j).

Hinge loss: L = \sum_{j=1}^{n} w_j \max\{0, 1 - m_j\}.

Logit loss: L = \sum_{j=1}^{n} w_j \log(1 + \exp(-m_j)).

Minimal expected misclassification cost: the expected cost of classifying observation j into class k is \gamma_{jk} = (f(X_j)' C)_k, the predicted label is \hat{y}_j = \operatorname{argmin}_{k=1,...,K} \gamma_{jk}, and L = \sum_{j=1}^{n} w_j c_j, where c_j is the minimal expected cost.

Quadratic loss: L = \sum_{j=1}^{n} w_j (1 - m_j)^2.

The expected misclassification cost per observation is \sum_{i=1}^{K} \hat{P}(i|Xnew(n)) C(j|i), where \hat{P}(i|Xnew(n)) is the posterior probability of class i for observation Xnew(n), and C(j|i) is the true misclassification cost of classifying an observation as j when its true class is i.
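The margin-based losses in the list above differ only in the per-margin term. A NumPy sketch under that reading (the dictionary keys are our labels, and the weights are assumed already normalized):

```python
import numpy as np

def margin_losses(m, w):
    """Evaluate several margin-based classification losses for a vector of
    margins m and observation weights w."""
    return {
        "binodeviance": float(np.sum(w * np.log(1 + np.exp(-2 * m)))),
        "exponential":  float(np.sum(w * np.exp(-m))),
        "hinge":        float(np.sum(w * np.maximum(0, 1 - m))),
        "logit":        float(np.sum(w * np.log(1 + np.exp(-m)))),
        "quadratic":    float(np.sum(w * (1 - m) ** 2)),
    }

# An observation classified correctly with margin 1 incurs zero hinge and
# quadratic loss, but small positive exponential and logit loss.
res = margin_losses(np.array([1.0]), np.array([1.0]))
print(res["hinge"], res["quadratic"])  # 0.0 0.0
```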
loss does not support GPU arrays for ClassificationKNN models with the following specifications:
ClassificationKNN | fitcknn | edge | margin |
e (mathematics) - Citizendium
e is a constant real number equal to 2.71828 18284 59045 23536.... Irrational and transcendental, e is the base of the natural logarithms. The exponential function f(x) = e^x, the inverse of the natural logarithm, is equal to its own derivative, i.e. f'(x) = f(x). More generally, for any differentiable function u,

\frac{d}{dx}(Ke^u) = Ke^u \frac{du}{dx}

for K constant, and

\int Ke^u\,du = Ke^u + C

for K and C constants. For this reason, the exponential function plays a central role in analysis.
e is sometimes called "Euler's number" in honor of the Swiss mathematician Leonhard Euler, who studied it and showed its mathematical importance. It is also sometimes called "Napier's constant" in honor of the Scottish mathematician John Napier, who introduced logarithms.
In 1737, Leonhard Euler proved that e is an irrational number[1], i.e. it cannot be expressed as a fraction, only as an infinite continued fraction. In 1873, Charles Hermite proved it is a transcendental number[1], i.e. it is not a solution of any nonzero polynomial with rational coefficients.
e is the base of the natural logarithms. The exponential function f(x) = Ke^x, for K constant, is equal to its own derivative, i.e. f'(x) = f(x). For any differentiable function u,

\frac{d}{dx}(Ke^u) = Ke^u \frac{du}{dx}

and

\int Ke^u\,du = Ke^u + C

for K and C constants. The solutions of many differential equations are based on these properties.
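The derivative rule d/dx(Ke^u) = Ke^u du/dx can be confirmed symbolically. A quick SymPy check (the choice u = sin x is an arbitrary test case):

```python
import sympy as sp

x, K = sp.symbols('x K')
u = sp.sin(x)                        # any differentiable u(x) works here
lhs = sp.diff(K * sp.exp(u), x)      # d/dx (K e^u)
rhs = K * sp.exp(u) * sp.diff(u, x)  # K e^u du/dx
print(sp.simplify(lhs - rhs))        # 0: the two sides agree
```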
There is no precise date for the discovery of this number[2]. In 1624, Henry Briggs, one of the first to publish a logarithm table, gave its logarithm, but did not formally identify e. In 1661, Christiaan Huygens remarked on the correspondence between the area under the hyperbola and logarithmic functions. In 1683, Jakob Bernoulli studied the limit of (1 + 1/n)^n, but nobody linked that limit to natural logarithms. Finally, in a letter sent to Huygens, Gottfried Leibniz set e as the base of the natural logarithm (even if he named it b).[3]
There are many ways to define e. The most common are probably

e = \lim_{n\rightarrow\infty}\left(1 + \frac{1}{n}\right)^n

e = \frac{1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \frac{1}{5!} + \cdots
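Both definitions can be evaluated numerically. A short Python check (n = 10^6 and 20 series terms are arbitrary cutoffs):

```python
import math

n = 1_000_000
limit_approx = (1 + 1/n) ** n                                  # (1 + 1/n)^n
series_approx = sum(1 / math.factorial(k) for k in range(20))  # sum of 1/k!

# The series converges much faster than the limit: the limit's error is
# roughly e/(2n), while 20 factorial terms already exhaust double precision.
print(limit_approx, series_approx)  # both close to 2.718281828...
```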
↑ 1.0 1.1 Eli Maor, e: The Story of a Number, Princeton University Press, 1994, p.37. ISBN 0-691-05854-7.
↑ John J. O'Connor and Edmund F. Robertson, The number e, MacTutor History of Mathematics archive. Consulted 2008-01-10.
↑ Eli Maor, e: The Story of a Number, Princeton University Press, 1994. ISBN 0-691-05854-7.
↑ This equation is a special case of the exponential function

e^x = \frac{x^0}{0!} + \frac{x^1}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} + \cdots

with x set to 1.
EuDML | Approximately inner automorphisms of semi-finite von Neumann algebras.
Approximately inner automorphisms of semi-finite von Neumann algebras.
Thierry Giordano; James A. Mingo
Giordano, Thierry, and Mingo, James A.. "Approximately inner automorphisms of semi-finite von Neumann algebras.." Mathematica Scandinavica 72.1 (1993): 131-147. <http://eudml.org/doc/167242>.
@article{Giordano1993,
author = {Giordano, Thierry, Mingo, James A.},
keywords = {\sigma-finite semifinite von Neumann algebra; von Neumann algebra in standard form; faithful normal semi-finite trace; approximately inner automorphism; approximately pointwise inner; modular automorphism; faithful normal semifinite weight},
title = {Approximately inner automorphisms of semi-finite von Neumann algebras.},
AU - Giordano, Thierry
AU - Mingo, James A.
TI - Approximately inner automorphisms of semi-finite von Neumann algebras.
KW - \sigma-finite semifinite von Neumann algebra; von Neumann algebra in standard form; faithful normal semi-finite trace; approximately inner automorphism; approximately pointwise inner; modular automorphism; faithful normal semifinite weight
\sigma-finite semifinite von Neumann algebra, von Neumann algebra in standard form, faithful normal semi-finite trace, approximately inner automorphism, approximately pointwise inner, modular automorphism, faithful normal semifinite weight
C^*, W^*
Articles by Thierry Giordano
Articles by James A. Mingo |
EuDML | Lifts of the almost complex structures to T(Osc^2 M).
Lifts of the almost complex structures to T(Osc^2 M)
Atanasiu, Gheorghe; Voicu, Nicoleta
Atanasiu, Gheorghe, and Voicu, Nicoleta. "Lifts of the almost complex structures to .." Novi Sad Journal of Mathematics 29.3 (1999): 35-53. <http://eudml.org/doc/223771>.
@article{Atanasiu1999,
author = {Atanasiu, Gheorghe, Voicu, Nicoleta},
keywords = {bundle of accelerations},
title = {Lifts of the almost complex structures to T(Osc^2 M).},
AU - Atanasiu, Gheorghe
AU - Voicu, Nicoleta
TI - Lifts of the almost complex structures to T(Osc^2 M).
KW - bundle of accelerations
bundle of accelerations
Articles by Atanasiu
Articles by Voicu |
April 2021 A gap theorem for half-conformally flat manifolds
Brian Weber,1 Martin Citoler-Saumell1
1ShanghaiTech University, Shanghai, China
Illinois J. Math. 65(1): 71-96 (April 2021). DOI: 10.1215/00192082-8886951
Abstract fragments: L^2, d\omega = 0, O(r^{-4}).
Brian Weber. Martin Citoler-Saumell. "A gap theorem for half-conformally flat manifolds." Illinois J. Math. 65 (1) 71 - 96, April 2021. https://doi.org/10.1215/00192082-8886951
Received: 1 November 2019; Revised: 5 October 2020; Published: April 2021
Secondary: 53A31 , 53C21
Normal boiling points values are for NH3, - 34C, bromine 58°C, carbon tetrachloride 77°C and…
Normal boiling point values are: NH_3, -34 °C; bromine, 58 °C; carbon tetrachloride, 77 °C; and oxygen, -183 °C. Which of these liquids has the highest vapour pressure at room temperature?
NH_3
Br_2
CCl_4
O_2
Globerson et al. ICML 2007. Exponentiated Gradient Algorithms for Log Linear Structured Prediction - Cohen Courses
Globerson et al. ICML 2007. Exponentiated Gradient Algorithms for Log Linear Structured Prediction
Exponentiated gradient algorithms for log-linear structured prediction, by A. Globerson, T. Y Koo, X. Carreras, M. Collins. In Proceedings of the 24th international conference on Machine learning, 2007.
This paper describes an exponentiated gradient (EG) algorithm for training conditional log-linear models. Conditional log-linear models are used for several key structured prediction tasks such as NER, POS tagging, Parsing. In this paper, they propose a fast & efficient algorithm for optimizing log-linear models, such as CRFs.
The common practice of optimizing the conditional log likelihood of a CRF is often via conjugate-gradient or L-BFGS algorithms (Sha & Pereira, 2003), which typically would require at least one pass through the entire dataset before updating the weight vector. The EG algorithm described in the paper is online, meaning the weight vector can be updated as we see more training data. This is a useful property to have if we do not know the size of the training data in advance.
Consider a supervised learning setting with objects x \in \mathcal{X} and corresponding labels y \in \mathcal{Y}, which may be trees, sequences, or other high-dimensional structures. Also, assume we are given a function \phi(x, y) that maps (x, y) pairs to feature vectors in \mathcal{R}^d. Given a parameter vector \mathbf{w} \in \mathcal{R}^d, a conditional log-linear model defines a distribution over labels as:

p(y|x; \mathbf{w}) = \frac{1}{Z_x} \exp\left(\mathbf{w} \cdot \phi(x, y)\right)

where Z_x is a partition function.
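The distribution p(y|x; w) is a softmax over the per-label scores w·φ(x, y). A NumPy sketch (the max-subtraction is a standard numerical-stability trick, not from the paper):

```python
import numpy as np

def log_linear_probs(scores):
    """p(y|x; w) for a conditional log-linear model, given the vector of
    scores s_y = w . phi(x, y) for each candidate label y."""
    z = np.exp(scores - scores.max())  # shift by the max for stability
    return z / z.sum()                 # divide by the partition function Z_x

p = log_linear_probs(np.array([1.0, 1.0, 1.0]))
print(p)  # uniform distribution over the three labels
```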
The problem of learning \mathbf{w} from the training data is thus finding the \mathbf{w} which maximizes the regularized log-likelihood:

\mathbf{w}^* = \arg\max_{w} \sum_i \log p(y_i|x_i; \mathbf{w}) - \frac{C}{2} \lVert \mathbf{w} \rVert^2

where C is the regularization parameter. The above problem has a convex dual, derived in Lebanon and Lafferty NIPS 2001. With dual variables \alpha_{i,y}, and \alpha = [\alpha_1, \alpha_2, \cdots, \alpha_n], the dual objective is

Q(\alpha) = \sum_i \sum_y \alpha_{i,y} \log \alpha_{i,y} + \frac{1}{2C} \lVert \mathbf{w}(\alpha) \rVert^2

where

\mathbf{w}(\alpha) = \sum_i \sum_y \alpha_{i,y} \left(\phi(x_i, y_i) - \phi(x_i, y)\right)

The dual problem is thus

\alpha^* = \arg\min_{\alpha \in \Delta^n} Q(\alpha)
EG Algorithm
Given a set of distributions \alpha \in \Delta^n, the EG algorithm gives the update equations

\alpha'_{i,y} = \frac{1}{Z_i} \alpha_{i,y} \exp(-\eta \nabla_{i,y})

where

Z_i = \sum_{\hat{y}} \alpha_{i,\hat{y}} \exp(-\eta \nabla_{i,\hat{y}})

and

\nabla_{i,y} = \frac{\partial Q(\alpha)}{\partial \alpha_{i,y}} = 1 + \log \alpha_{i,y} + \frac{1}{C} \mathbf{w}(\alpha) \cdot \left(\phi(x_i, y_i) - \phi(x_i, y)\right)

In batch learning, at each iteration \alpha' is updated simultaneously with all (or a subset of) the available training instances. In online learning, at each iteration a single training instance is chosen and \alpha' is updated using it alone.
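One batch EG step can be sketched in a few lines of NumPy (the function name is ours; computing the gradient of Q is left to the caller):

```python
import numpy as np

def eg_update(alpha, grad, eta):
    """Exponentiated-gradient step: alpha[i, y] is the dual weight of label y
    for training example i, grad is dQ/dalpha, eta is the learning rate.
    Each row is renormalized by its Z_i so it remains a distribution."""
    a = alpha * np.exp(-eta * grad)
    return a / a.sum(axis=1, keepdims=True)

alpha = np.full((2, 3), 1/3)                       # two examples, three labels
alpha2 = eg_update(alpha, np.zeros((2, 3)), 0.5)   # zero gradient: no change
print(alpha2.sum(axis=1))  # each row still sums to 1
```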
Convergence rate of batch algorithm
To get within
{\displaystyle \epsilon }
of the optimum parameters, we would need
{\displaystyle O({\frac {1}{\eta \epsilon }})}
The authors compared the performance of the EG algorithm to conjugated-gradient and L-BFGS methods.
The authors used a subset of the MNIST handwritten digits classification task.
It can be seen that the EG algorithm converges considerably faster than the other methods.
Structured learning (dependency parsing)
The author used the Slovene data in CoNLL-X Shared Task on Multilingual dependency parsing.
It can be seen that the EG algorithm converges faster in terms of objective function and accuracy measures.
In Bartlett et al NIPS 2004, they used the EG algorithm for large margin structured classification.
Physics
\hat{H}|\psi_n(t)\rangle = i\hbar \frac{\partial}{\partial t}|\psi_n(t)\rangle

\frac{1}{c^2} \frac{\partial^2 \phi_n}{\partial t^2} - \nabla^2 \phi_n + \left(\frac{mc}{\hbar}\right)^2 \phi_n = 0
The distinction is clear-cut, but not always obvious. For example, mathematical physics is the application of mathematics in physics. Its methods are mathematical, but its subject is physical.[53] The problems in this field start with a "mathematical model of a physical situation" (system) and a "mathematical description of a physical law" that will be applied to that system. Every mathematical statement used for solving has a hard-to-find physical meaning. The final mathematical solution has an easier-to-find meaning, because it is what the solver is looking for.
{\displaystyle \Xi _{b}^{-}}
". Physical Review Letters. 99 (5): 052001. arXiv:0706.1690v2. Bibcode:2007PhRvL..99e2001A. doi:10.1103/PhysRevLett.99.052001. PMID 17930744. S2CID 11568965.
Modernphysicsfields.svg
A simplified view on fields of modern physics theories. Please note that from a historical point of view, this diagram is very simplified. In fact, when quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic. Many attempts were made to merge quantum mechanics with special relativity via a covariant equation such as the Dirac equation. Relativistic quantum mechanics has since been abandoned in favour of quantum field theory.
Solvay conference 1927.jpg
1927 Solvay Conference on Quantum Mechanics. Photograph by Benjamin Couprie, Institut International de Physique Solvay, Brussels, Belgium.
This picture is also available with names at the bottom.
Physics and other sciences.png
Author/Creator: Saeed Veradi, Licence: CC BY-SA 3.0
Math is prerequisite to Ontology.
Math and Ontology are prerequisite to Physics.
Math, Ontology and Physics are prerequisite to Chemistry.
Lightning in Arlington.jpg
Lightning over Pentagon City in Arlington, Virginia.
Prediction of sound scattering from Schroeder Diffuser.jpg
Author/Creator: Trevor Cox, Licence: CC BY-SA 3.0
Prediction using FDTD
Hubble ultra deep field high rez edit1.jpg
The Hubble Ultra Deep Field, is an image of a small region of space in the constellation Fornax, composited from Hubble Space Telescope data accumulated over a period from September 3, 2003 through January 16, 2004. The patch of sky in which the galaxies reside was chosen because it had a low density of bright stars in the near-field.
Pinhole-camera.svg
SVG redraw of Image:Pinhole-camera.png
Acceleration components.JPG
Components of acceleration vector for planar curve
Feynman'sDiagram.JPG
Author/Creator: The original uploader was Ancheta Wis at English Wikipedia., Licence: CC BY-SA 2.5
Picture of a Feynman diagram, inscribed by Richard P. Feynman to me, in my copy of Volume 3 of his Feynman Lectures on Physics (Quantum Mechanics). Picture taken by self. if you can't read the symbols, they are
{\displaystyle \gamma _{\mu }}
{\displaystyle \gamma _{\mu }}
{\displaystyle 1/q^{2}}
Einstein1921 by F Schmutzer 2.jpg
Albert Einstein during a lecture in Vienna in 1921 (age 42).
CollageFisica.jpg
Some images about Physics:
(from top-left, clockwise)
Refraction of light (which is described by w:en:Optics)
A spintop (whose movement is described by classical mechanics)
The effects of an inelastic collision
Atomic orbitals of hydrogen (which are described by w:en:quantum mechanics)
Lightning (which is an electrical phenomenon)
Galaxies (photo made by the Hubble Space Telescope)
Meissner effect p1390048.jpg
Author/Creator: Mai-Linh Doan, Licence: CC-BY-SA-3.0
Meissner effect: levitation of a magnet above a superconductor
Pahoeoe fountain original.jpg
Image of a Pahoehoe fountain. Original caption: "Arching fountain approximately 10 m high issuing from the western end of the 0740 vents, a series of spatter cones 170 m long, south of Pu‘u Kahaualea. Episodes 2 and 3 were characterized by spatter and cinder cones, such as Pu‘u Halulu, which was 60 m high by episode 3 (photo by J.D. Griggs, 02/25/83, JG928) (picture #004)."
Mathematical Physics and other sciences.png
Author/Creator: Saeed.Veradi, Licence: CC BY-SA 3.0
Ontology is a prerequisite for Physics, but not for Mathematics. It means Physics is ultimately concerned with descriptions of the real world, while mathematics is concerned with abstract patterns, even beyond the real world. Thus Physics statements are synthetic, while Mathematical statements are analytic. Mathematics contains hypotheses, while Physics contains theories. Mathematical statements have to be only logically true, while Physics statements must match observed and experimental data. The distinction is clear-cut, but not always obvious. Mathematical physics is the application of Mathematics to Physics. The problems in this field start with a "mathematical model of a physical situation" and a "mathematical description of a physical law". Every mathematical statement used for a solution has a hard-to-find physical meaning; the final mathematical solution has an easier-to-find meaning.
Military laser experiment.jpg
A military scientist operates a laser in a test environment. The United States Air Force Research Laboratory (AFRL) Directed Energy Directorate conducts research on a variety of solid-state and chemical lasers.
Mission: STS-41-B
Title: Views of the extravehicular activity during STS 41-B
Description: Astronaut Bruce McCandless II, mission specialist, participates in an extravehicular activity (EVA), a few meters away from the cabin of Space Shuttle Challenger. He is using a nitrogen-propelled hand-controlled Manned Maneuvering Unit (MMU). He is performing this EVA without being tethered to the shuttle. The picture shows a cloud-covered view of the Earth in the background.
Bose Einstein condensate.png
Bose–Einstein condensate — In the July 14, 1995 issue of Science Magazine, researchers from JILA reported achieving a temperature far lower than had ever been produced before and creating an entirely new state of matter predicted decades ago by Albert Einstein and Indian physicist Satyendra Nath Bose. Cooling rubidium atoms to less than 170 billionths of a degree above absolute zero caused the individual atoms to condense into a "superatom" behaving as a single entity. The graphic shows three-dimensional successive snap shots in time in which the atoms condensed from less dense red, yellow and green areas into very dense blue to white areas. JILA is jointly operated by NIST and the University of Colorado at Boulder. |
Cardinal invariants associated with predictors II
January, 2001 Cardinal invariants associated with predictors II
We call a function from
{\omega }^{<\omega }
to
\omega
a predictor. A function
f\in {\omega }^{\omega }
is said to be constantly predicted by a predictor
\pi
, if there is an
n<\omega
\forall i<\omega \exists j\in \left[i,i+n\right)\left(f\left(j\right)=\pi \left(f↾j\right)\right)
Let
{\theta }_{\omega }
denote the smallest size of a set
\Phi
of predictors such that every
f\in {\omega }^{\omega }
can be constantly predicted by some predictor in
\Phi
. In [7], we showed that
{\theta }_{\omega }
cof\left(\mathcal{N}\right)
. In the present paper, we will prove that
{\theta }_{\omega }
d
Shizuo KAMO. "Cardinal invariants associated with predictors II." J. Math. Soc. Japan 53 (1) 35 - 57, January, 2001. https://doi.org/10.2969/jmsj/05310035
Keywords: countable support iteration , Predictor , rational perfect tree forcing
Shizuo KAMO "Cardinal invariants associated with predictors II," Journal of the Mathematical Society of Japan, J. Math. Soc. Japan 53(1), 35-57, (January, 2001) |
Proof of Bayes' Rule
A law of probability that describes the proper way to incorporate new evidence into prior probabilities to form an updated probability estimate. Bayesian rationality takes its name from this theorem, as it is regarded as the foundation of consistent rational reasoning under uncertainty. A.k.a. "Bayes's Theorem" or "Bayes's Rule".
{\displaystyle P(A|B)={\frac {P(B|A)\,P(A)}{P(B)}}}
{\displaystyle {\frac {P(A|B)}{P(\neg A|B)}}={\frac {P(A)}{P(\neg A)}}\cdot {\frac {P(B|A)}{P(B|\neg A)}}}
or in words,
{\displaystyle Posterior~odds=Prior~odds\times Likelihood~ratio}
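As a quick worked illustration of the odds form, using hypothetical numbers (prior P(A) = 0.01, P(B|A) = 0.9, P(B|¬A) = 0.1):

```javascript
// Odds form of Bayes' rule: posterior odds = prior odds × likelihood ratio.
function posteriorOdds(priorOdds, likelihoodRatio) {
  return priorOdds * likelihoodRatio;
}
const oddsToProb = odds => odds / (1 + odds); // convert odds back to probability

const pA = 0.01;                      // prior probability of A
const likelihoodRatio = 0.9 / 0.1;    // P(B|A) / P(B|¬A)
const post = oddsToProb(posteriorOdds(pA / (1 - pA), likelihoodRatio));
// agrees with the probability form:
// P(A|B) = P(B|A)P(A) / (P(B|A)P(A) + P(B|¬A)P(¬A)) = 0.009 / 0.108
```

The odds form makes the update a single multiplication, which is why it is often the more convenient statement for chained evidence.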
Bayes' Theorem Illustrated (My Way) by komponisto.
Visualizing Bayes' theorem by Oscar Bonilla
Using Venn pies to illustrate Bayes' theorem by oracleaide
Arbital Guide to Bayes' Rule
A Guide to Bayes’ Theorem – A few links by Alexander Kruel
Retrieved from "http://wiki.lesswrong.com/index.php?title=Bayes%27_theorem&oldid=16840" |
Let's build a rainbow on a canvas from scratch! 🌈📐 - DEV Community
It's been raining for a few days at my place. And even though it actually just stopped raining as I'm writing this post, the sun hardly comes out anymore. It's autumn in the northern hemisphere. The chances of seeing what is probably nature's most colorful phenomenon this year are close to zero. What a pity.
But there's a remedy: Let's just build our own rainbow with JavaScript, some HTML and some mathematics! And no, we're not using any built-in linear gradient functions or CSS today.
But first, I'd like to thank @doekenorg for supporting me via Buy Me A Coffee! Your support is highly appreciated and the coffee was delicious, just the right thing on a cold autumn morning! Thank you!
No built-in linear gradient? How are we going to do this?
With mathematics and a color scheme called HLS. With a few parameters, namely the width and height of the canvas, the angle of the rainbow, which color to start with and which color to end with, we can construct an algorithm that will tell us the exact color of every pixel.
The nice thing: We can also do other things than painting with the result. For example coloring a monospaced text in a rainbow pattern!
HLS? What's that?
Good question! Most people who have worked with CSS have seen RGB values before. RGB stands for "Red, Green, Blue". All colors are mixed by telling the machine the amount of red, green and blue. This is an additive color model (all colors together end up in white); cyan, magenta and yellow, on the other hand, form a subtractive color model (all colors together end up black).
HLS is a bit different. Instead of setting the amount of different colors, we describe the color on a cylinder. HLS stands for "hue, lightness, saturation":
(Image by Wikimedia user SharkD, released under the CC BY-SA 3.0, no changes made to the image)
The lightness determines how bright the color is. 0% always means black, 100% means white. The saturation describes how intense the color is. 0% would mean gray-scale, 100% means the colors are very rich. This image I found on Stackoverflow describes it very well:
Now, the hue part is what's interesting to us. It describes the actual color on a scale from 0 degrees to 360 degrees. For better understanding, the Stackoverflow post I mentioned above also has a very nice illustration for that:
If we want to make a rainbow with HLS, we set the colors as always mid-brightness (not black nor white), full saturation (the colors should be visible and rich) and go around the circle, so from 0 to 360 degrees.
So first, we start with the usual boilerplate: a canvas and a script linking to the rainbow code.
<script src="./rainbow.js"></script>
In there, I start with an array of arrays the same size as the canvas. I want to make this as generic as possible so I can also use it without the canvas or for any other gradient.
* Creates an array of arrays containing a gradient at a given angle.
* @param valueFrom
* @param valueTo
* @returns {any[][]}
const createGradientMatrix = (valueFrom, valueTo, width, height, angle) => {
  let grid = Array(height)
    .fill(null)
    .map(() => Array(width).fill(null))
I also normalize valueTo, so I can use percentages to determine which value I want. For example, 50% should be halfway between valueFrom and valueTo.
const normalizedValueTo = valueTo - valueFrom
Determining the color of a pixel
This is where the mathematics come in. In a gradient, all pixels lie on parallel lines. All pixels on the same line have the same colors. A line is defined as follows:
y = mx + a
Where m is the slope of the line and a describes the offset on the Y axis.
Desmos can illustrate that pretty well:
Now, to create a gradient, we can gradually increase the Y axis offset and start to color the lines differently:
Now, how can we use this to determine the color of each and every pixel?
We need to figure out which line it is on. The only difference between all the lines of the gradient shown with Desmos is the Y axis offset a. We know the coordinates X and Y of the pixel and we know the slope (given by the angle), so we can determine the Y axis offset like this:
a = y - m * x
We can define this as a JS function right away:
* Determines the a of `y = mx + a`
const getYOffset = (x, y, m) => y - m * x
Now we know the line the pixel is on. Next, we need to figure out which color the line has. Remember how we normalized the valueTo in order to figure out a value with percentages? We can do something similar here:
// Some trigonometry to figure out the slope from an angle.
let m = 1 / Math.tan(angle * Math.PI / 180)
if (Math.abs(m) === Infinity) {
m = Number.MAX_SAFE_INTEGER
const minYOffset = getYOffset(width - 1, 0, m)
const maxYOffset = getYOffset(0, height - 1, m)
const normalizedMaxYOffset = maxYOffset - minYOffset
By plugging in the maximum X value (width - 1) and the maximum Y value (height - 1), we can find the range of Y offsets that will occur in this gradient. Now, if we know the X and Y coordinates of a pixel, we can determine its value like so:
const yOffset = getYOffset(x, y, m)
const normalizedYOffset = maxYOffset - yOffset
const percentageOfMaxYOffset = normalizedYOffset / normalizedMaxYOffset
grid[y][x] = percentageOfMaxYOffset * normalizedValueTo
So, this is what's happening now, step by step:
Transform the angle of all lines into the slope of all lines
Do some failover (if (Math.abs(m) === Infinity) ...) to not run into divisions by zero etc.
Determine the maximum Y axis offset we'll encounter
Determine the minimum Y axis offset we'll encounter
Normalize the maximum Y axis offset, so we don't have to deal with negatives
Figure out the Y axis offset of the line that goes through X and Y
Normalize that calculated Y axis offset as well
Figure out how far (in %) this line is in the gradient
Use the calculated % to figure out the color value of the line
Assign the color value to the pixel
Let's do that for every pixel of the grid:
This will yield an array of arrays the size of the canvas with values for each cell between valueFrom and valueTo.
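Putting all the steps together, a self-contained sketch of the whole function could look like this. The final double loop is my reconstruction of the per-pixel iteration described above (it wasn't shown verbatim), and I add valueFrom back in when assigning each cell so that non-zero starting values also work:

```javascript
// The a of `y = mx + a` for the line through (x, y) with slope m
const getYOffset = (x, y, m) => y - m * x

const createGradientMatrix = (valueFrom, valueTo, width, height, angle) => {
  const normalizedValueTo = valueTo - valueFrom
  // trigonometry to turn the angle into a slope, with a division-by-zero guard
  let m = 1 / Math.tan(angle * Math.PI / 180)
  if (Math.abs(m) === Infinity) m = Number.MAX_SAFE_INTEGER
  const minYOffset = getYOffset(width - 1, 0, m)
  const maxYOffset = getYOffset(0, height - 1, m)
  const normalizedMaxYOffset = maxYOffset - minYOffset
  const grid = Array(height).fill(null).map(() => Array(width).fill(null))
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      // how far (in %) this pixel's line is into the gradient
      const percentage = (maxYOffset - getYOffset(x, y, m)) / normalizedMaxYOffset
      grid[y][x] = valueFrom + percentage * normalizedValueTo
    }
  }
  return grid
}
```

With valueFrom = 0 and valueTo = 360 this sweeps the full hue circle, exactly as used for the rainbow below.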
Creating the actual rainbow
Let's use this to create a rainbow:
const canvas = document.querySelector('#canvas')
const context = canvas.getContext('2d')
const grid = createGradientMatrix(0, 360, 400, 400, 65)
grid.forEach((row, y) => row.forEach((cellValue, x) => {
context.fillStyle = 'hsl('+cellValue+', 100%, 50%)'
context.fillRect(x, y, 1, 1)
You can now see that the gradient matrix we've created isn't necessarily for canvasses only. We could also use this to create colored text:
const loremIpsum = 'Lorem ipsum ...' // Really long text here.
const lines = loremIpsum.substring(0, 400).match(/.{1,20}/g)
const loremMatrix = lines.map(l => l.split(''))
const textColorGrid = createGradientMatrix(0, 360, 20, 20, 65)
loremMatrix.forEach((row, y) => row.forEach((ch, x) => {
  loremMatrix[y][x] = `
    <span class="letter" style="color: hsl(${textColorGrid[y][x]}, 100%, 50%);">
      ${ch}
    </span>`
}))
const coloredText = loremMatrix.map(l => l.join('')).join('')
document.querySelector('#text').innerHTML = coloredText
Awesome! And it just started raining again...
(Cover image by Flickr user Ivan, released under CC by 2.0, no changes made to the image)
Nice! I love it when a post has the math.
Can you tell me what math tool you were using in the photos?
Glad you liked it! That's Desmos, you can find it here: desmos.com/calculator?lang=en
Got it from the YT channel "3blue1brown", I love that tool.
I do however admit that I cheated a bit with the gradient in Desmos. I created each linear function twice with different colors and changed the opacity to make it look like a gradient. It does the trick, though :D
Ah, thanks. I watch 3blue1brown too.
I find it much easier to make gradients with WebGL. You should give it a try too :-)
Agreed, there's specialized tools for this that are much simpler to handle!
I wanted to do it from scratch entirely and show how to approach this problem from a mathematical point of view with the most basic tools at hand. This post was meant to be more about thinking patterns, I could've emphasized that more 😅
I didn't mean it as an offence in any way. I enjoy your posts and this one is not an exception 👍
So let me rephrase: those looking to draw heavily optimized 2D graphics, especially gradients should take a look at WebGL, where the computation is performed on GPU directly. It's not as scary as it sounds 😃
No offence taken! Sorry if my answer came over as too defensive, I think I misunderstood your original comment a bit. I'm really glad you enjoy my articles, thank you so much 😀
I should really dive deeper into WebGL, I've seen amazing things built with it. You're absolutely right!
Works mostly with Symfony, Laravel and a sprinkle of WordPress plugins. Testing enthusiast.
Hah, you're very welcome @thormeier ! :-D |
Thermomechanical Fatigue Behavior of a Directionally Solidified Ni-Base Superalloy | J. Eng. Mater. Technol. | ASME Digital Collection
M. M. Shenoy, A. P. Gordon, D. L. McDowell, R. W. Neu
e-mail: rick.neu@me.gatech.edu
J. Eng. Mater. Technol. Jul 2005, 127(3): 325-336 (12 pages)
Shenoy, M. M., Gordon, A. P., McDowell, D. L., and Neu, R. W. (February 2, 2005). "Thermomechanical Fatigue Behavior of a Directionally Solidified Ni-Base Superalloy." ASME. J. Eng. Mater. Technol. July 2005; 127(3): 325–336. https://doi.org/10.1115/1.1924560
A continuum crystal plasticity model is used to simulate the material behavior of a directionally solidified Ni-base superalloy, DS GTD-111, in the longitudinal and transverse orientations. Isothermal uniaxial fatigue tests with hold times and creep tests are conducted at temperatures ranging from room temperature (RT) to 1038°C to characterize the deformation response. The constitutive model is implemented as a User MATerial subroutine (UMAT) in ABAQUS (2003, Hibbitt, Karlsson, and Sorensen, Inc., Providence, RI, v6.3) and a parameter estimation scheme is developed to obtain the material constants. Both in-phase and out-of-phase thermomechanical fatigue tests are conducted. A physically based model is developed for correlating crack initiation life based on the experimental life data and predictions are made using the crack initiation model.
nickel alloys, superalloys, thermomechanical treatment, fatigue cracks, solidification, fatigue testing, creep, creep testing, parameter estimation, Constitutive Modeling, Crack Initiation, Directionally Solidified Superalloy, Fatigue Life Prediction
Constitutive equations, Creep, Deformation, Fatigue, Fracture (Materials), Stress, Superalloys, Temperature, Thermomechanics, Cycles, Crystals, Plasticity, Fatigue testing, Fatigue cracks, Parameter estimation
EUDML | On two conjectures of Hartshorne's.
Thomas Peternell; Daniel Barlet
Peternell, Thomas, and Barlet, Daniel. "On two conjectures of Hartshorne's.." Mathematische Annalen 286.1-3 (1990): 13-26. <http://eudml.org/doc/164625>.
@article{Peternell1990,
author = {Peternell, Thomas, Barlet, Daniel},
keywords = {submanifolds with ample normal bundle; -bundles},
title = {On two conjectures of Hartshorne's.},
AU - Peternell, Thomas
TI - On two conjectures of Hartshorne's.
KW - submanifolds with ample normal bundle; -bundles
submanifolds with ample normal bundle,
{ℙ}^{2}
Deformations of submanifolds and subspaces
q
4-folds
Articles by Thomas Peternell |
Aug. 3d
Having an idle 1/4
of hour, I will write.— I am delighted that your climbing work seems so interesting. At p. 17 (2d Edit.) of my Climbers I mention the spiral shoots of Akebia & Stauntonia (both Menispermeæ); but with one the spirality was clearly connected with very slow growth or ill-health— the poorer the shoot the more spiral it became.2 Very many thanks about Heliotropism.— I hope that you gave my invitation to Sachs, whether or no he accepts it.3
Get name of malvaceous plant which sleep, though I have one good case with Sida, & here leaves turn vertically up at night.—4
The Anagallis seem very odd.—5
I have just succeeded in showing tips of radicles of Tropæolum majus are sensitive to square of card; but failed signally with those of Vegetable marrow, yet with some indication that they are really sensitive.— We must try Horse & Spanish Chesnut.—6
William has sent me your letter to him, & we had a jolly laugh over the difficulty of finding a present for a Professor 5ft. 8 inches high &c. &c7
Your present ought to be something handsome, & I pity you in having to solve such a problem.— “Oh no” is Bernard’s8 favourite expression now, & he brings it in delightfully, reproving us for our nonsense.—
The year is established by the relationship between this letter and the letters from Francis Darwin, 24 and 25 July 1878 and [before 3 August 1878].
Francis had described his own and Julius Sachs’s observations on spiral shoots of Menispermum (the genus of moonseed) and Akebia (the genus of chocolate vines) in his letter of [before 3 August 1878]. Stauntonia and Akebia are now placed in the family Lardizabalaceae, which was formerly a division of the Menispermaceae. CD refers to Climbing plants 2d ed. For more on Sachs’s views on climbing plants, see the letter from Francis Darwin, [before 17 July 1878].
See letter from Francis Darwin, [before 3 August 1878]. CD had asked about heliotropism in moulds and roots in his letter to Francis of 25 July [1878]; he also extended an invitation for Sachs to visit him at Down.
See letter from Francis Darwin, [before 3 August 1878] and nn. 5 and 9. CD described sleep movements in Sida rhombifolia (arrowleaf sida) in Movement in plants, pp. 322–3.
In his letter of [before 3 August 1878], Francis had drawn a sketch of the unusual sleep movements of Anagallis arvensis (scarlet pimpernel).
CD described his experiments on the sensitivity of the tip of the radicle or embryonic root of Tropaeolum majus (nasturtium) in Movement in plants, pp. 167–8. He described similar sensitivity in Aesculus hippocastanum (horse chestnut) in ibid., pp. 172–4, but did not discuss the Spanish chestnut (Castanea sativa).
William Erasmus Darwin. The letter has not been found.
Is pleased FD’s climbing work goes well.
Thanks him for information on heliotropism.
Discusses sleep movements
and his observations on the sensitivity of radicle tips. |
Bonds (BBOND) - bomb.money
The "Pit", where you can interact with the protocol's bonding mechanism
What are BBOND (Bonds)?
Bonds are unique tokens that can be utilized to help stabilize BOMB price around peg (10,000 BOMB = 1 BTC) by reducing the circulating supply of BOMB if the TWAP (time-weighted average price) goes below peg.
When can I buy BBOND (Bonds)?
BBOND can be purchased only during periods of supply contraction and when the TWAP of BOMB is below 1.
At the beginning of every new epoch during contraction periods, BBONDs are issued in the amount of 3% of BOMB's circulating supply, with a max debt amount of 35%. This means that if bonds reach 35% of the circulating supply of BOMB, no more bonds will be issued.
Note that during a zen epoch (when an epoch ends with a TWAP between 1.0 - 1.01), no BBOND will be issued, even though the Boardroom does not print.
BBOND TWAP (time-weighted average price) is based on the BOMB TWAP from the previous epoch as it ends. In other words, the BOMB TWAP is real-time but the BBOND TWAP is not.
Where can I buy BBOND (Bonds)?
You can buy BBONDs if any are available through bomb.money in the Pit. Anyone can buy as many BBONDs as they want as long as they have enough BOMB to pay for them.
There is a limit of available BBONDs per epoch during contraction periods (3% of BOMB's current circulating supply), and are sold first-come-first-serve.
Why should I buy BBOND (Bonds)?
The first and most important reason to buy BBOND is that they help to maintain the peg, but they are not the only measure in place to keep the protocol on track.
BBONDs don't have an expiration date, so you can view them as an investment in the long-term health of the protocol to be redeemed for a premium at a later date.
Incentives for Holding BBOND
The idea is to reward BBOND buyers for helping the protocol, while also protecting the protocol from being manipulated by big players.
So after you buy BBOND using BOMB, you have two possible ways to get your BOMB back:
Sell back your BBOND for BOMB while the peg is between 1 - 1.1 (10,000 BOMB = 1 BTCB) with no redemption bonus. This is in place to prevent an instant dump as soon as peg is recovered.
Sell back your BBOND for BOMB while the peg is above 1.1 (10,000 BOMB = 1 BTCB) with a bonus redemption rate.
The longer you hold, the more both the protocol and you benefit from BBOND.
When BOMB = 0.8, burn 1 BOMB to get 1 BBOND (BBOND price = 0.8)
When BOMB = 1.15, redeem 1 BBOND to get 1.105 BOMB (BBOND price = 1.27)
If I buy BOMB at 0.8, and hold it until 1.15 and then sell, I'm getting +$0.35 per BOMB.
But, if I buy BOMB at 0.8, burn it for BBOND, and redeem it at 1.15, I'm getting 1.105 BOMB * 1.15 (BOMB current price) = 1.27 (+$0.47) per BBOND redeemed.
But, what if getting back to peg is taking too long?
We will adjust our use cases to have different behaviors on contraction and expansion periods to benefit both BOMB and BBOND holders when needed.
What is the formula to calculate the redemption bonus for BBOND?
To encourage the redemption of BBOND for BOMB when BOMB's TWAP > 1.1 and in order to incentivize users to redeem bonds at a higher price, BBOND redemption is designed to be more profitable with a higher BOMB TWAP value. The BBOND to BOMB ratio is 1:R, where R can be calculated using the formula as shown below:
R = 1 + [(BOMB TWAP price - 1) * coeff]
coeff = 0.7
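As a small sketch of the stated formula (assuming, as described above, that the premium only applies when the previous epoch's TWAP is above the 1.1 threshold, and that redemption below it is 1:1):

```javascript
// BBOND -> BOMB redemption ratio: R = 1 + (TWAP - 1) * coeff
function bbondRedemptionRatio(bombTwap, coeff = 0.7) {
  if (bombTwap <= 1.1) return 1 // redeemable 1:1 between 1 and 1.1, no bonus
  return 1 + (bombTwap - 1) * coeff
}

// matches the example above: TWAP 1.15 → 1 BBOND redeems for ≈ 1.105 BOMB
```

The coefficient of 0.7 damps the premium, so the redemption bonus grows more slowly than the TWAP itself.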
When can I swap BBOND for a premium?
You can only redeem BBONDs for a premium when the previous epoch's TWAP is greater than 1.1. |
Node (graph theory) - zxc.wiki
Node (graph theory)
In graph theory, a node (also called a vertex or corner) is an element of the node set of a graph. If G is a graph, its node set is usually denoted V(G) (from the English vertex). A graph consists of the node set V(G) and a corresponding edge set E(G) (English: edge), which describes how the individual nodes of the graph are connected by edges.
A universal node is a node that is adjacent to all other nodes in the graph .
A simplicial node is a node whose neighbors form a clique, i.e. a complete subgraph of the original graph.
An isolated node in an undirected graph is a node without neighbors, i.e. a node of degree zero. In a directed graph, an isolated node has no predecessors and no successors, and thus both its in-degree and out-degree are zero.
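As a small illustration of these three definitions, here is a sketch assuming the graph is stored as a Map from each node to its Set of neighbors (an undirected adjacency map; the names are mine, not standard API):

```javascript
// Universal node: adjacent to every other node in the graph
const isUniversal = (G, v) =>
  [...G.keys()].every(u => u === v || G.get(v).has(u));

// Simplicial node: its neighbors pairwise form a clique
const isSimplicial = (G, v) => {
  const nbrs = [...G.get(v)];
  return nbrs.every((a, i) => nbrs.slice(i + 1).every(b => G.get(a).has(b)));
};

// Isolated node: degree zero
const isIsolated = (G, v) => G.get(v).size === 0;

// toy graph: a triangle 1-2-3 plus an isolated node 4
const G = new Map([
  [1, new Set([2, 3])],
  [2, new Set([1, 3])],
  [3, new Set([1, 2])],
  [4, new Set()],
]);
// node 1 is simplicial (its neighbors 2 and 3 are adjacent) but not
// universal (it is not adjacent to 4); node 4 is isolated
```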
This page is based on the copyrighted Wikipedia article "Knoten_%28Graphentheorie%29" (Authors); it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License. You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA. |
Bananas » Illogicopedia - The nonsensical encyclopedia anyone can mess up
Looking at two examples of bananas, let's be honest, who do you think was closest to being right?
The guy being ridiculed for calling a lot of bananas thousands, or the guy with five bananas in a corridor?
Let's face it, the second guy is closer, after all, 5 is closer to 1000s than
{\displaystyle bananas<1000}
Yum Yum banananananananas Very Yummy!
Aid Epoc Igolli
Here be Bananas
Aid Epoc Igolli | All you need to know on Bananas | Ananab | Ba+Na² | Banala | Banana | Banana bread | Banana dissection | Banana Fandom | Banana Gun | Banana man | Bananaminions | Banana Peel | Banana pudding | Banana surfing | Bananaman | Bananaphone | Bananas | Bananaslicing | Bananna | Bananuh | BONJOOK, The Banana God | Butter milk cow banana | Chocolate bananas | Diskosherist | Fanged bananas | People going Bananas | Scythe banana | The Banana is eeble sonk | The evil one | The man from U.N.C.Y.C.L.O.P.E.D.I.A | Truth About Bananas | BANANA
Retrieved from "http://en.illogicopedia.org/w/index.php?title=Bananas&oldid=285527" |
Phenylalanine ammonia-lyase - WikiProjectMed
PDB rendering based on 1T6J.
Phenylalanine ammonia lyase (EC 4.3.1.24) is an enzyme that catalyzes a reaction converting L-phenylalanine to ammonia and trans-cinnamic acid.[1] Phenylalanine ammonia lyase (PAL) catalyzes the first and committed step in the phenylpropanoid pathway and is therefore involved in the biosynthesis of polyphenol compounds such as flavonoids, phenylpropanoids, and lignin in plants.[2][3] Phenylalanine ammonia lyase is found widely in plants, as well as some bacteria, yeast, and fungi, with isoenzymes existing within many different species. It has a molecular mass in the range of 270–330 kDa.[1][4] The activity of PAL is induced dramatically in response to various stimuli such as tissue wounding, pathogenic attack, light, low temperatures, and hormones.[1][5] PAL has recently been studied for possible therapeutic benefits in humans afflicted with phenylketonuria.[6] It has also been used in the generation of L-phenylalanine as a precursor of the sweetener aspartame.[7]
The enzyme is a member of the ammonia lyase family, which cleaves carbon–nitrogen bonds. Like other lyases, PAL requires only one substrate for the forward reaction, but two for the reverse. It is thought to be mechanistically similar to the related enzyme histidine ammonia-lyase (EC:4.3.1.3, HAL).[8] The systematic name of this enzyme class is L-phenylalanine ammonia-lyase (trans-cinnamate-forming). Previously, it was designated EC 4.3.1.5, but that class has been redesignated as EC 4.3.1.24 (phenylalanine ammonia-lyases), EC 4.3.1.25 (tyrosine ammonia-lyases), and EC 4.3.1.26 (phenylalanine/tyrosine ammonia-lyases). Other names in common use include tyrase, phenylalanine deaminase, tyrosine ammonia-lyase, L-tyrosine ammonia-lyase, phenylalanine ammonium-lyase, PAL, and L-phenylalanine ammonia-lyase.
Phenylalanine ammonia lyase is specific for L-phe, and to a lesser extent, L-tyrosine.[9][10] The reaction catalyzed by PAL is a spontaneous elimination reaction rather than an oxidative deamination.[11]
L-phenylalanine ⇌ trans-cinnamic acid + NH3
The cofactor 3,5-dihydro-5-methyldiene-4H-imidazol-4-one (MIO) is involved in the reaction and sits atop the positive pole of three polar helices in the active site, which helps to increase its electrophilicity.[12] MIO is attacked by the aromatic ring of L-phe, which activates the C-H bond on the β carbon for deprotonation by a basic residue.[13][14] The carbanion intermediate of this E1cB-elimination reaction, which is stabilized by partial positive regions in the active site, then expels ammonia to form the cinnamate alkene. The mechanism of the reaction of PAL is thought to be similar to the mechanism of the related enzyme histidine ammonia lyase.[13]
Proposed autocatalytic formation of MIO cofactor from the tripeptide Ala-Ser-Gly by two water elimination steps.[15]
A dehydroalanine residue was long thought to be the key electrophilic catalytic residue in PAL and HAL, but the active residue was later found instead to be MIO, which is even more electrophilic.[16][17] It is formed by cyclization and dehydration of conserved Ala-Ser-Gly tripeptide segment. The first step of MIO formation is a cyclization-elimination by an intramolecular nucleophilic attack of the nitrogen of Gly204 at the carbonyl group of Ala202. A subsequent water elimination from the side chain of Ser203 completes the system of crossconjugated double bonds.[15] Numbers are given for the phenylalanine ammonia lyase from Petroselinum crispum (PDB 1W27). Although MIO is a polypeptide modification, it was proposed to call it a prosthetic group, because it has the quality of an added organic compound.[8]
PAL is inhibited by trans-cinnamic acid, and, in some species, may be inhibited by trans-cinnamic acid derivatives.[1][18] The unnatural amino acids D-Phe and D-Tyr, the enantiomeric forms of the normal substrate, are competitive inhibitors.[9]
PAL active site
Phenylalanine ammonia lyase is composed of four identical subunits composed mainly of alpha-helices, with pairs of monomers forming a single active site.[17] Catalysis in PAL may be governed by the dipole moments of seven different alpha helices associated with the active site.[19] The active site contains the electrophilic group MIO non-covalently bonded to three helices. Leu266, Asn270, Val269, Leu215, Lys486, and Ile472 are located on the active site helices, while Phe413, Glu496, and Gln500 contribute to the stabilization of the MIO cofactor. The orientation of dipole moments generated by helices within the active site generates an electropositive region for ideal reactivity with MIO. The partially positive regions in the active site may also help stabilize the charge of a carbanion intermediate. PAL is structurally similar to the mechanistically related histidine ammonia lyase, although PAL has approximately 215 additional residues.[17]
Phenylalanine ammonia lyase can perform different functions in different species. It is found mainly in some plants and fungi (e.g. yeast). In fungal and yeast cells, PAL plays an important catabolic role, generating carbon and nitrogen.[2] In plants it is a key biosynthetic enzyme that catalyzes the first step in the synthesis of a variety of polyphenol compounds[2][3] and is mainly involved in defense mechanisms. PAL is involved in 5 metabolic pathways: tyrosine metabolism, phenylalanine metabolism, nitrogen metabolism, phenylpropanoid biosynthesis, and alkaloid biosynthesis.
Enzyme substitution therapy using PAL is being explored as a treatment for phenylketonuria (PKU), an autosomal recessive genetic disorder in humans in which mutations in the phenylalanine hydroxylase (PAH, EC 1.14.16.1) gene inactivate the enzyme.[6] The disorder leaves patients unable to metabolize phenylalanine, causing elevated levels of Phe in the bloodstream (hyperphenylalaninemia) and mental retardation if therapy is not begun at birth.[6]
In May 2018, the FDA approved pegvaliase, a recombinant PEGylated phenylalanine ammonia-lyase for the treatment of PKU that had been developed by Biomarin.[20][21]
Lactuca sativa was investigated by Vàsquez et al. (2017), who found that UV-C treatment increased PAL enzyme activity. This increase results in decreased susceptibility to Botrytis cinerea.[22]
The reverse reaction catalyzed by PAL has been explored for use to convert trans-cinnamic acid to L-phenylalanine, which is a precursor of the sweetener aspartame. This process was developed by Genex Corporation but was never commercially adopted.[23]
Analogous to how aspartame is synthesized, PAL is also used to synthesize unnatural amino acids from various substituted cinnamic acids for research purposes.[24] Steric hindrance from arene substitution limits PAL's utility for this purpose, however.[25] For instance, when Rhodotorula glutinis was used to effect this biotransformation, the enzyme was discovered to be intolerant of all para substituents other than F, presumably due to the element's small atomic radius. Meta and ortho positions were found to be more tolerant of, but still limited by, larger substituents. For instance, the enzyme's active site permitted ortho methoxy substitution but forbade meta ethoxy. Other organisms with different versions of the enzyme may be less limited in this way.[26][27]
As of late 2007, 5 structures have been solved for this class of enzymes, with PDB accession codes 1T6J, 1T6P, 1W27, 1Y2M, and 2NYF.
^ a b c d Camm EL, Towers G (1 May 1973). "Phenylalanine ammonia lyase". Phytochemistry. 12 (5): 961–973. doi:10.1016/0031-9422(73)85001-0.
^ a b c Fritz RR, Hodgins DS, Abell CW (August 1976). "Phenylalanine ammonia-lyase. Induction and purification from yeast and clearance in mammals". The Journal of Biological Chemistry. 251 (15): 4646–50. PMID 985816.
^ a b Tanaka Y, Matsuoka M, Yamanoto N, Ohashi Y, Kano-Murakami Y, Ozeki Y (August 1989). "Structure and characterization of a cDNA clone for phenylalanine ammonia-lyase from cut-injured roots of sweet potato". Plant Physiology. 90 (4): 1403–7. doi:10.1104/pp.90.4.1403. PMC 1061903. PMID 16666943.
^ Appert C, Logemann E, Hahlbrock K, Schmid J, Amrhein N (October 1994). "Structural and catalytic properties of the four phenylalanine ammonia-lyase isoenzymes from parsley (Petroselinum crispum Nym.)". European Journal of Biochemistry. 225 (1): 491–9. doi:10.1111/j.1432-1033.1994.00491.x. PMID 7925471.
^ Hahlbrock K, Grisebach H (1 June 1979). "Enzymic Controls in the Biosynthesis of Lignin and Flavonoids". Annual Review of Plant Physiology. 30 (1): 105–130. doi:10.1146/annurev.pp.30.060179.000541.
^ a b c Sarkissian CN, Gámez A (December 2005). "Phenylalanine ammonia lyase, enzyme substitution therapy for phenylketonuria, where are we now?". Molecular Genetics and Metabolism. 86 Suppl 1: S22-6. doi:10.1016/j.ymgme.2005.06.016. PMID 16165390.
^ Evans C, Hanna K, Conrad D, Peterson W, Misawa M (1 February 1987). "Production of phenylalanine ammonia-lyase (PAL): isolation and evaluation of yeast strains suitable for commercial production of L-phenylalanine". Applied Microbiology and Biotechnology. 25 (5): 406–414. doi:10.1007/BF00253309. S2CID 40066810.
^ a b Schwede TF, Rétey J, Schulz GE (April 1999). "Crystal structure of histidine ammonia-lyase revealing a novel polypeptide modification as the catalytic electrophile". Biochemistry. 38 (17): 5355–61. doi:10.1021/bi982929q. PMID 10220322.
^ a b Hodgins DS (May 1971). "Yeast phenylalanine ammonia-lyase. Purification, properties, and the identification of catalytically essential dehydroalanine". The Journal of Biological Chemistry. 246 (9): 2977–85. PMID 5102931.
^ Barros J, Serrani-Yarce JC, Chen F, Baxter D, Venables BJ, Dixon RA (May 2016). "Role of bifunctional ammonia-lyase in grass cell wall biosynthesis". Nature Plants. 2 (6): 16050. doi:10.1038/nplants.2016.50. PMID 27255834. S2CID 3462127.
^ Koukol J, Conn EE (October 1961). "The metabolism of aromatic compounds in higher plants. IV. Purification and properties of the phenylalanine deaminase of Hordeum vulgare". The Journal of Biological Chemistry. 236: 2692–8. PMID 14458851.
^ Alunni S, Cipiciani A, Fioroni G, Ottavi L (April 2003). "Mechanisms of inhibition of phenylalanine ammonia-lyase by phenol inhibitors and phenol/glycine synergistic inhibitors". Archives of Biochemistry and Biophysics. 412 (2): 170–5. doi:10.1016/s0003-9861(03)00007-9. PMID 12667480.
^ a b Langer B, Langer M, Rétey J (2001). "Methylidene-imidazolone (MIO) from histidine and phenylalanine ammonia-lyase". Advances in Protein Chemistry. 58: 175–214. doi:10.1016/s0065-3233(01)58005-5. ISBN 9780120342587. PMID 11665488.
^ Frey PA, Hegeman AD (2007). "Methylene imidazolone-dependent elimination and addition: phenylalanine ammonia-lyase". Enzymatic Reaction Mechanisms. Oxford University Press. pp. 460–466. ISBN 9780195352740.
^ a b Ritter H, Schulz GE (December 2004). "Structural basis for the entrance into the phenylpropanoid metabolism catalyzed by phenylalanine ammonia-lyase". The Plant Cell. 16 (12): 3426–36. doi:10.1105/tpc.104.025288. PMC 535883. PMID 15548745.
^ Rétey, János (2003). "Discovery and role of methylidene imidazolone, a highly electrophilic prosthetic group". Biochimica et Biophysica Acta (BBA) - Proteins and Proteomics. 1647 (1–2): 179–184. doi:10.1016/S1570-9639(03)00091-8. PMID 12686130.
^ a b c Calabrese JC, Jordan DB, Boodhoo A, Sariaslani S, Vannelli T (September 2004). "Crystal structure of phenylalanine ammonia lyase: multiple helix dipoles implicated in catalysis". Biochemistry. 43 (36): 11403–16. doi:10.1021/bi049053+. PMID 15350127.
^ Sato T, Kiuchi F, Sankawa U (1 January 1982). "Inhibition of phenylalanine ammonia-lyase by cinnamic acid derivatives and related compounds". Phytochemistry. 21 (4): 845–850. doi:10.1016/0031-9422(82)80077-0.
^ Pilbák S, Tomin A, Rétey J, Poppe L (March 2006). "The essential tyrosine-containing loop conformation and the role of the C-terminal multi-helix region in eukaryotic phenylalanine ammonia-lyases". The FEBS Journal. 273 (5): 1004–19. doi:10.1111/j.1742-4658.2006.05127.x. PMID 16478474. S2CID 33002042.
^ Powers M (May 29, 2018). "Biomarin aces final exam: Palynziq gains FDA approval to treat PKU in adults". BioWorld.
^ Levy HL, Sarkissian CN, Stevens RC, Scriver CR (June 2018). "Phenylalanine ammonia lyase (PAL): From discovery to enzyme substitution therapy for phenylketonuria". Molecular Genetics and Metabolism. 124 (4): 223–229. doi:10.1016/j.ymgme.2018.06.002. PMID 29941359.
^ Urban, L.; Chabane Sari, D.; Orsal, B.; Lopes, M.; Miranda, R.; Aarrouf, J. (2018). "UV-C light and pulsed light as alternatives to chemical and biological elicitors for stimulating plant natural defenses against fungal diseases". Scientia Horticulturae. Elsevier. 235: 452–459. doi:10.1016/j.scienta.2018.02.057. ISSN 0304-4238. S2CID 90436989.
^ Straathof AJ, Adlercreutz P (2014). Applied Biocatalysis. CRC Press. p. 146. ISBN 9781482298420.
^ Hughes A (2009). Amino Acids, Peptides and Proteins in Organic Chemistry Volume 1. Weinheim Germany: Wiley VCH. p. 94. ISBN 9783527320967.
^ Renard G, Guilleux J, Bore C, Malta-Valette V, Lerner D (1992). "Synthesis of L-phenylalanine analogs by Rhodotorula glutinis. Bioconversion of cinnamic acids derivatives". Biotechnology Letters. 14 (8): 673–678. doi:10.1007/BF01021641. S2CID 46423586.
^ Lovelock SL, Turner NJ (October 2014). "Bacterial Anabaena variabilis phenylalanine ammonia lyase: a biocatalyst with broad substrate specificity". Bioorganic & Medicinal Chemistry. 22 (20): 5555–7. doi:10.1016/j.bmc.2014.06.035. PMID 25037641.
^ Showa D, Hirobumi A. "Production process of L-phenylalanine derivatives by microorganisms". Google Patents. Hirobumi Central Research Laboratory. Retrieved 20 July 2014.
Koukol J, Conn EE (October 1961). "The metabolism of aromatic compounds in higher plants. IV. Purification and properties of the phenylalanine deaminase of Hordeum vulgare" (PDF). The Journal of Biological Chemistry. 236 (10): 2692–8. PMID 14458851.
Young MR, Neish AC (1966). "Properties of the ammonia-lyases deaminating phenylalanine and related compounds in Triticum sestivum and Pteridium aquilinum". Phytochemistry. 5 (6): 1121–1132. doi:10.1016/S0031-9422(00)86105-1.
Retrieved from "https://mdwiki.org/wiki/Phenylalanine_ammonia-lyase"
Gyro Ball (move) - Bulbapedia, the community-driven Pokémon encyclopedia
Points double if last to appeal.
Gyro Ball (Japanese: ジャイロボール Gyro Ball) is a damage-dealing Steel-type move introduced in Generation IV. It is TM74 from Generation IV to Pokémon Ultra Sun and Ultra Moon and in Pokémon Brilliant Diamond and Shining Pearl. It is TR52 in Pokémon Sword and Shield.
Gyro Ball inflicts more damage the slower the user is compared to the target. Therefore, the larger the relative difference between the user's and target's Speed stat, the greater the damage. The base power is calculated by the following formula:
{\displaystyle BasePower=min(150,{\dfrac {25\times CurrentSpeed_{target}}{CurrentSpeed_{user}}}+1)}
The Speed values used in the calculation take all modifiers into account (including stat stages, paralysis, held items such as the Iron Ball, and Abilities such as Slush Rush). Effects that only modify movement order without affecting Speed (such as Trick Room and Stall) have no effect on Gyro Ball.
If the user of Gyro Ball has a Speed stat that rounds down to 0, the stat will be treated as 1 in the power formula, to avoid having to divide by 0.
The maximum base power that can be reached with Gyro Ball is 150, if the target's Speed is at least 5.96 times the user's Speed.
If the user of Gyro Ball has a Speed stat that rounds down to 0, the move's power is set to 1, entirely ignoring the above formula.
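As a sketch of the power formula above (the function name is illustrative; integer division stands in for the games' rounding, and a Speed of 0 is treated as 1 per the note above):

```python
def gyro_ball_base_power(user_speed: int, target_speed: int) -> int:
    # A Speed stat that rounds down to 0 is treated as 1 (avoids division by zero).
    user_speed = max(1, user_speed)
    # Power grows with the target-to-user Speed ratio, capped at 150.
    return min(150, 25 * target_speed // user_speed + 1)
```

For example, with equal Speeds the power is 26, and a target roughly six times faster than the user reaches the 150 cap.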
Pokémon with Bulletproof are immune to Gyro Ball.
DPPtHGSS The user tackles the foe with a high-speed spin. The slower the user, the greater the damage.*
The user tackles the foe from a high-speed spin. The slower the user, the greater the damage.*
PBR The user tackles the foe with a high-speed spin. The slower the user, the greater the damage.
BWB2W2 The user tackles the target with a high-speed spin. The slower the user, the greater the damage.
SwShBDSPLA The user tackles the target with a high-speed spin. The slower the user compared to the target, the greater the move's power.
In Explorers of Time, Darkness and Sky, Gyro Ball is a move with 1 base power, 88% accuracy, and 12 PP. The user attacks the enemy in front, with the final damage being doubled if the user is suffering from the Half Speed status condition.
Effect: Deals more damage the slower the user is compared to the target.
Physical 2 — 44 52 100% An opponent — This move's power is doubled when the target's Speed is raised.
Prior to Version 2.10.0 (from June 28, 2021): Its power is doubled if the target's Speed has risen.
In Pokémon GO, Gyro Ball is a Charged Attack that has been available since February 16, 2017.
MDTDS Inflicts damage on the target and double damage on a target if the user has a Half Speed status.
BSL てきポケモンに ダメージをあたえる じぶんが どんそくじょうたいだと ダメージが 2ばいになる (Deals damage to an enemy Pokémon; if the user has the Half Speed status, the damage is doubled.)
Conq. The user tackles the target with a high-speed spin. The slower the user, the greater the damage.
MDGtI It damages an enemy. The slower your Travel Speed than that of the enemy, the greater the damage the move causes.
MDRTDX It damages an enemy. The slower your Speed compared to your enemy's, the greater the damage caused by the move.
Spritzee Steelix Golett Hitmontop
The user tackles the foe with a high-speed spin.
Bronzong Bronzong holds out its arms and white balls of energy appear at the ends of them. Bronzong then spins around quickly and it slams into the opponent.
A Coordinator's Bronzong Old Rivals, New Tricks! None
Lickilicky Lickilicky spins at a high speed and spins into the opponent.
Koffing Koffing spins forward at a fast speed and a purple ring surrounds its body vertically, or Koffing spins around and its body becomes surrounded by a light blue aura. It then spins into the opponent.
Bronzor Bronzor spins at a fast speed and three light blue rings surround Bronzor's body. Bronzor then spins into the opponent, hitting the opponent with the top of its head.
Byron's Bronzor Dealing With Defensive Types! None
Glalie Two light blue orbs appear at the end of Glalie's horns and it spins around quickly, slamming into the opponent.
Metagross Metagross pulls its arms back and light blue orbs appear at the end of its four claws. Metagross then spins rapidly and slams into the opponent.
Jigglypuff Light blue orbs appear at the end of Jigglypuff's hands, and its body becomes surrounded by light blue sparkles. Jigglypuff then begins to spin rapidly and slams into the opponent.
Baltoy Two light blue orbs appear at the end of Baltoy's arms and it spins rapidly. It then repeatedly floats into the opponent.
Shuckle Shuckle pulls its head into its shell and its arms and legs glow light blue. It then spins its body rapidly and can float into the air. It then spins into the opponent.
Golurk Golurk flies towards the opponent and holds its arms out like a 'T'. It then starts to spin its body and a line of bright orange energy appears over its fists, and Golurk flies into the opponent.
Juanita's Golurk Black—Victini and Reshiram None
Golett Golett's arms and legs glow light blue and it pulls its head and limbs into its body. It then starts spinning around in the air, then floats down and rolls into the opponent.
Luke's Golett The Club Battle Finale: A Heroes Outcome! None
Magnezone Mangezone's magnets glow light blue, and it starts spinning around rapidly. It then spins into the opponent.
Spritzee Spritzee spins around rapidly into the opponent. While spinning, a light pink aura shines around it.
Avalugg Avalugg spins in a fast counter-clockwise motion. As it spins, a light-blue aura surrounds it, with a bright white light running horizontally across the light-blue aura surrounding Avalugg.
Glalie Four light blue orbs appear at the end of each of Mega Glalie's horns and it spins around quickly, causing two blue rings to form around its body, and slams into the opponent.
Geodude Geodude's hands glow light blue and it spins around fast, with a light-blue ring surrounding it. It then spins into the opponent.
Brock's Geodude When Regions Collide! None
Steelix Some of Steelix's spikes glow light blue and spin around fast, with a light-blue ring surrounding Steelix.
Hitmontop Hitmontop jumps up and spins on his head. He then spins into the opponent.
The user shoots the opponent with a laser.
Bronzor Bronzor's eyes glow brightly and it fires a powerful beam of energy from its body at the opponent.
Mars's Bronzor Belligerent Bronzor Debut
Bronzong Bronzong tackles the opponent; a ball is then left in the middle.
In Pokémon Conquest, Gyro Ball will always do 1 damage if the user is faster than the target. This happens due to how the move's damage is calculated: the target's Speed stat is divided by the user's, and this quotient is rounded down to use as a damage multiplier. A faster user than the target will give a multiplier of 0, causing the move's damage to become 0, finally adjusted to a minimum of 1.
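The Conquest behavior described above can be sketched as follows (the function name and the base damage parameter are illustrative stand-ins):

```python
def conquest_gyro_ball_damage(base_damage: int, user_speed: int, target_speed: int) -> int:
    # Multiplier: the target's Speed divided by the user's, rounded down.
    multiplier = target_speed // user_speed
    # A faster user gives a multiplier of 0, so damage is adjusted up to a minimum of 1.
    return max(1, base_damage * multiplier)
```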
Chinese Cantonese 陀螺球 Tòhlòh Kàuh *
迴轉球 Wùihjyún Kàuh *
螺旋波 Lòhsyùhn Bō *
Mandarin 陀螺球 Tuóluó Qiú *
迴轉球 Huízhuǎn Qiú *
Czech Gyroskop
Dutch Gyrobal
French Gyroballe
German Gyroball
Greek Περιστρόσφαιρα
Indonesian Gyro Ball
Italian Vortexpalla
Korean 자이로볼 Gyroball
Polish Żyroskop
Portuguese Girobola
Romanian Gyro Ball
Russian Гирошар Giroshar
Serbian Žiro lopta
Latin America Giro Bola (DP035-BW119)
Gira Bola (XY074-present)
Spain Giro Bola
Thai ไจโรบอล
Turkish Gyro Ball
Vietnamese Quả Cầu Quay
Retrieved from "https://bulbapedia.bulbagarden.net/w/index.php?title=Gyro_Ball_(move)&oldid=3510125" |
Intel(R) MKL 10.0 (Windows)
Fetching Information of Polynomials via has, sort, degree, type
Faster GraphTheory Algorithms
. (dot) Operator Improvements
Direct Access to Solvers
The Intel(R) Math Kernel Library (Intel(R) MKL) version 10.0 replaces a prior version on 32-bit Windows, and replaces generic BLAS on 64-bit Windows. These highly optimized core routines are used in various places throughout Maple.
The following example highlights how performance in Maple 14 is significantly improved.
M:=LinearAlgebra:-RandomMatrix(2000,datatype=float[8]):
time( M.M );
\textcolor[rgb]{0,0,1}{1.343}
On a 2.13GHz Core2 Duo, running 32-bit Windows, Maple 14 takes 2.44 seconds now compared to 7.95 seconds in Maple 13.02.
On a 2.00GHz dual quad core Xeon, running 64-bit Windows, Maple 14 takes 2.437 seconds now compared to 25.593 seconds in Maple 13.02.
A new implementation of numtheory[cyclotomic] constructs cyclotomic polynomials significantly faster than the previous implementation for large prime factors.
t:= time():
numtheory[cyclotomic](255255,x):
\textcolor[rgb]{0,0,1}{0.228}
This is about 1000 times as fast as in Maple 13. The amount of speedup increases with the size of the problem.
Polynomial multiplication and division over integers can be performed faster than in Maple 13 because of an improved implementation. Multiple threads may also be used for large polynomial multiplications.
The following example is about twenty times as fast as in Maple 13:
f,g := seq(randpoly([x,y,z],dense,degree=30),i=1..2):
assign(p=expand(f*g)):
divide(p,f,'q');
\textcolor[rgb]{0,0,1}{\mathrm{true}}
\textcolor[rgb]{0,0,1}{1.600}
Higher level commands for polynomial arithmetic, such as factor, benefit from the improvements described above. The following example is about 10 times as fast as in Maple 13:
f := expand((1+x+y+z+t)^11)+1:
p:=expand(f*(f+1)):
time(factor(p));
\textcolor[rgb]{0,0,1}{2.828}
Maple 14 adds improved algorithms for multiplication, division, inversion, and gcd of recursive dense polynomials, which is the representation used to compute with polynomials over algebraic number fields.
The new algorithms preallocate memory and run in place, which makes them significantly faster when multiple extensions are present.
The following example is about twice as fast as in Maple 13:
p := modp1(Prime(1)):
alpha := RootOf(Randprime(2,x) mod p):
beta := RootOf(Randprime(5,x,alpha) mod p):
cofs := ()->randpoly([alpha,beta],degree=4,dense):
f,g,h := seq(Expand(randpoly(x,degree=40,dense,coeffs=cofs)) mod p,i=1..3):
time(assign('a'=Expand(f*g) mod p));
\textcolor[rgb]{0,0,1}{0.396}
time(assign('b'=Expand(g*h) mod p));
\textcolor[rgb]{0,0,1}{0.316}
time(assign('g'=Gcd(a,b) mod p));
\textcolor[rgb]{0,0,1}{0.824}
The commands has, sort, degree, and type are now much faster when algebraic numbers or functions are present. A new implementation reduces the time complexity of these commands.
For the examples below, Maple 14's implementation is 3 seconds faster than Maple 13 on the commands concerning F:
c := rand(0..995):
r := RootOf(numtheory[cyclotomic](997,Z),Z):
F := [seq(add(c()*Z^c()*x^i, i=0..10^4-1), j=1..10)]:
G := subs(Z=r, F):
time(map(degree, eval(F,1), x));
\textcolor[rgb]{0,0,1}{0.012}
time(map(degree, eval(G,1), x));
\textcolor[rgb]{0,0,1}{0.012}
time(map(type, eval(F,1), polynom(anything,x)));
\textcolor[rgb]{0,0,1}{0.004}
time(map(type, eval(G,1), polynom(anything,x)));
\textcolor[rgb]{0,0,1}{0.008}
Time and memory efficiency of BipartiteMatching, ConnectedComponents and MaxFlow has been improved in Maple 14 because a new algorithm is employed.
For the example below, Maple 14 is about 200 times faster and takes 150 MB less memory than Maple 13 to perform the computation.
bt:=CompleteBinaryTree(8):
time(BipartiteMatching(bt));
\textcolor[rgb]{0,0,1}{0.048}
For the example below, Maple 14 takes 9 fewer seconds and 250 MB less memory than Maple 13 to perform the computation.
cg:=CycleGraph(4000):
cgc:=GraphComplement(cg):
time(ConnectedComponents(cgc));
\textcolor[rgb]{0,0,1}{0.112}
For the example below, Maple 14 is 9 times faster than Maple 13.
sb:=SoccerBallGraph():
sbw:=MakeWeighted(sb):
time(MaxFlow(sbw,1,20));
\textcolor[rgb]{0,0,1}{0.020}
In addition, basic GraphTheory commands (including CompleteGraph, DisjointUnion, and AddVertex) have been made more efficient for large graphs having many vertices and edges.
We have added support for accelerating LinearAlgebra routines using NVIDIA's CUDA technology. Matrix multiplication can be accelerated for a variety of datatypes and shapes. See the CUDA package and CUDA,supported_routines for more information. Note: the following example will only show a speed up if the computer on which it is run supports CUDA. See CUDA,supported_hardware for more information.
m1 := LinearAlgebra:-RandomMatrix( 2000,2000,outputoptions=[datatype=float[4]] ):
t := time[real]():
to N do mNoCuda := m1.m2: end:
tNoCuda := time[real]()-t;
\textcolor[rgb]{0,0,1}{\mathrm{tNoCuda}}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{24.942}
CUDA:-Enable( true );
t := time[real]():
to N do mCuda := m1.m2: end:
tCuda := time[real]()-t;
\textcolor[rgb]{0,0,1}{\mathrm{tCuda}}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{2.202}
We reduced the overhead of calling the . operator. This is particularly significant when working with small matrices.
In Maple 13, the following commands took over 9 seconds to complete.
A := Matrix([[9,5],[6,4]]):
B := Matrix([[6,7],[3,10]]):
st := time():
to n do to n do A.B; od; od:
t1 := time()-st;
\textcolor[rgb]{0,0,1}{\mathrm{t1}}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{0.072}
Univariate polynomial solving can be accessed directly with the new command SolveTools[Polynomial], and multivariate solving can be accessed with SolveTools[PolynomialSystem]. Both commands can be more efficient than solve on purely polynomial equations, since they avoid a large amount of preprocessing and dispatch overhead, much like SolveTools[Linear]. See Enhancements to Symbolic Capabilities in Maple 14 for details.
Membraneless Fuel Cells - Wikipedia
Membraneless Fuel Cells convert stored chemical energy into electrical energy without the use of a conducting membrane as with other types of fuel cells. In Laminar Flow Fuel Cells (LFFC) this is achieved by exploiting the phenomenon of non-mixing laminar flows where the interface between the two flows works as a proton/ion conductor. The interface allows for high diffusivity and eliminates the need for costly membranes. The operating principles of these cells mean that they can only be built to millimeter-scale sizes. The lack of a membrane means they are cheaper but the size limits their use to portable applications which require small amounts of power.
Another type of membraneless fuel cell is a Mixed Reactant Fuel Cell (MRFC). Unlike LFFCs, MRFCs use a mixed fuel and electrolyte, and are thus not subject to the same limitations. Without a membrane, MRFCs depend on the characteristics of the electrodes to separate the oxidation and reduction reactions. By eliminating the membrane and delivering the reactants as a mixture, MRFCs can potentially be simpler and less costly than conventional fuel cell systems.[1]
The efficiency of fuel cells is generally much higher than that of conventional electricity-producing sources. For example, a fossil fuel power plant system can achieve about 40% electrical conversion efficiency, while a typical nuclear power plant is slightly lower at around 32%. Fuel cell systems are capable of reaching efficiencies in the range of 55%–70%. However, as with any process, fuel cells also experience inherent losses due to their design and manufacturing processes.
Fuel Cell Diagram. Note: Electrolyte can be a polymer or solid oxide
A fuel cell consists of an electrolyte which is placed in between two electrodes – the anode and the cathode. In the simplest case, hydrogen gas passes over the anode, where it is split into protons and electrons. The protons pass through the electrolyte (often NAFION – manufactured by DuPont) across to the oxygen at the cathode. Meanwhile, the free electrons travel around the external circuit to power a given load and then combine with the oxygen and protons at the cathode to form water. Two common types of electrolytes are a proton exchange membrane (PEM) (also known as Polymer Electrolyte Membrane) and a ceramic or solid oxide electrolyte (often used in Solid oxide fuel cells). Although hydrogen and oxygen are very common reactants, a plethora of other reactants exist and have been proven effective.
Hydrogen for fuel cells can be produced in many ways. The most common method in the United States (95% of production) is via gas reforming, specifically of methane,[2] which produces hydrogen from fossil fuels by running them through a high-temperature steam process. Since fossil fuels are primarily composed of carbon and hydrogen molecules of various sizes, various fossil fuels can be utilized; for example, methanol, ethanol, and methane can all be used in the reforming process. Electrolysis and high-temperature combination cycles are also used to produce hydrogen from water, whereby the heat and electricity provide sufficient energy to dissociate the hydrogen and oxygen atoms.
However, since these methods of hydrogen production are often energy- and space-intensive, it is often more convenient to use the chemicals directly in the fuel cell. Direct Methanol Fuel Cells (DMFC's), for example, use methanol as the reactant instead of first using reformation to produce hydrogen. Although DMFC's are not very efficient (~25%),[3] they are energy dense, which means that they are quite suitable for portable power applications. Another advantage over gaseous fuels, as in the H2-O2 cells, is that liquids are much easier to handle, transport, and pump, and often have higher specific energies, allowing for greater power extraction. Gases generally need to be stored in high-pressure containers or cryogenic liquid containers, which is a significant disadvantage compared to liquids.
Membraneless Fuel Cells and Operating Principles
The majority of fuel cell technologies currently employed are either PEM or SOFC cells. However, the electrolyte is often costly and not always completely effective. Although hydrogen technology has significantly evolved, other fossil fuel based cells (such as DMFC's) are still plagued by the shortcomings of proton exchange membranes. For example, fuel crossover means that low concentrations need to be used which limits the available power of the cell. In solid oxide fuel cells, high temperatures are needed which require energy and can also lead to quicker degradation of materials. Membraneless fuel cells offer a solution to these problems.
Laminar Flow
A vortex street around a cylinder. At the beginning of the vortex, both fluids are separate. This indicates laminar flow with minimal mixing. Picture courtesy, Cesareo de La Rosa Siqueira.
LFFC's overcome the problem of unwanted crossover through the manipulation of the Reynolds number, which describes the behavior of a fluid. In general, at low Reynolds numbers, flow is laminar whereas turbulence occurs at a higher Reynolds number. In laminar flow, two fluids will interact primarily through diffusion which means mixing is limited. By choosing the correct fuel and oxidizing agents in LFFC's, protons can be allowed to diffuse from the anode to the cathode across the interface of the two streams.[4] The LFFC's are not limited to a liquid feed and in certain cases, depending on the geometry and reactants, gases can also be advantageous. Current designs inject the fuel and oxidizing agent into two separate streams which flow side by side. The interface between the fluids acts as the electrolytic membrane across which protons diffuse. Membraneless fuel cells offer a cost advantage due to the lack of the electrolytic membrane. Further, a decrease in crossover also increases fuel efficiency resulting in higher power output.
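As a rough numerical illustration of the laminar-flow condition that LFFC's rely on (all values below are assumed examples, not taken from any cited design):

```python
# Illustrative sketch: estimating the Reynolds number for water flowing in a
# small channel, to check the laminar-flow condition. All numbers are assumed
# example values, not from the article.

def reynolds_number(density, velocity, hydraulic_diameter, viscosity):
    """Re = rho * v * D_h / mu (dimensionless)."""
    return density * velocity * hydraulic_diameter / viscosity

# Assumed properties of water at room temperature and a typical microchannel:
rho = 1000.0      # kg/m^3
mu = 1.0e-3       # Pa*s
v = 0.01          # m/s, example mean flow velocity
d_h = 1.0e-3      # m, example hydraulic diameter (1 mm channel)

re = reynolds_number(rho, v, d_h, mu)
print(f"Re = {re:.1f}")  # Re = 10.0, far below the ~2300 laminar threshold
```

At such low Reynolds numbers the two streams interact only by diffusion across their interface, which is exactly the regime these cells exploit.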
Diffusion across the interface is extremely important and can severely affect fuel cell performance. The protons need to be able to diffuse across both the fuel and the oxidizing agent. The diffusion coefficient, a term which describes the ease of diffusion of an element in another medium, can be combined with Fick's laws of diffusion which addresses the effects of a concentration gradient and distance over which diffusion occurs:
J = -D (∂φ/∂x)

where

J is the diffusion flux (mol/(m²·s)),
D is the diffusion coefficient (m²/s),
φ is the concentration (mol/m³), and
x is the diffusion length, i.e. the distance over which diffusion occurs.
In order to increase the diffusion flux, the diffusivity and/or concentration need to be increased while the length needs to be decreased. In DMFC's for example, the thickness of the membrane determines the diffusion length while the concentration is often limited due to crossover. Thus, the diffusion flux is limited. A membraneless fuel cell is theoretically the better option since the diffusion interface across both fluids is extremely thin and using higher concentrations does not result in a drastic effect on crossover.
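The trade-off between diffusion length and flux can be sketched with Fick's first law; the membrane thickness and interface width below are assumed, illustrative values:

```python
# Illustrative sketch (assumed example values, not measured data): comparing
# the steady-state diffusion flux magnitude |J| = D * dphi / dx across a
# polymer membrane versus a thin liquid-liquid interface, per Fick's first law.

def fick_flux(diffusivity, delta_conc, length):
    """Magnitude of steady diffusion flux, |J| = D * dphi / dx (mol/(m^2 s))."""
    return diffusivity * delta_conc / length

D = 1.0e-9          # m^2/s, assumed proton diffusivity in water
delta_phi = 100.0   # mol/m^3, assumed concentration difference

j_membrane = fick_flux(D, delta_phi, 175e-6)   # assumed ~175 um membrane
j_interface = fick_flux(D, delta_phi, 5e-6)    # assumed ~5 um stream interface

print(f"membrane flux : {j_membrane:.2e} mol/(m^2 s)")
print(f"interface flux: {j_interface:.2e} mol/(m^2 s)")
print(f"ratio         : {j_interface / j_membrane:.0f}x")  # 35x higher flux
```

With everything else equal, the flux scales inversely with the diffusion length, which is why the thin inter-stream interface of an LFFC is attractive compared with a membrane.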
In most fuel cell configurations with liquid feeds, the fuel and oxidizing solutions almost always contain water which acts as a diffusion medium. In many hydrogen-oxygen fuel cells, the diffusion of oxygen at the cathode is rate limiting since the diffusivity of oxygen in water is much lower than that of hydrogen.[5][6] As a result, LFFC performance can also be improved by not using aqueous oxygen carriers.
The promise of membraneless fuel cells has been offset by several problems inherent to their designs. Ancillary structures are one of the largest obstacles. For example, pumps are required to maintain laminar flow while gas separators can be needed to supply the correct fuels into the cells. For micro fuel cells, these pumps and separators need to be miniaturized and packaged into a small volume (under 1 cm3). Associated with this process is a so-called "packaging penalty" which results in higher costs. Further, pumping power drastically increases with decreasing size (see Scaling Laws) which is disadvantageous. Efficient packaging methods and/or self-pumping cells (see Research and Development) need to be developed to make this technology viable. Also, while using high concentrations of specific fuels, such as methanol, crossover still occurs. This problem can be partially solved by using a nanoporous separator, lowering fuel concentration[7] or choosing reactants which have a lower tendency towards crossover.
In January 2010, researchers developed a novel method of inducing self-pumping in a membraneless fuel cell. Using formic acid as a fuel and sulfuric acid as an oxidant, CO2 is produced in the reaction in the form of bubbles. The bubbles nucleate and coalesce on the anode. A check valve at the supply end prevents any fuel from entering while the bubbles are growing. The check valve is not mechanical but hydrophobic in nature: by creating microstructures which form specific contact angles with water, fuel cannot be drawn backwards. As the reaction continues, more CO2 is formed while fuel is consumed, and the bubble begins to propagate towards the outlet of the cell. Before the outlet, a hydrophobic vent allows the carbon dioxide to escape while simultaneously ensuring other byproducts (such as water) do not clog the vent. As the carbon dioxide is vented, fresh fuel is drawn in at the same time through the check valve and the cycle begins again. Thus, the fuel cell pumping is regulated by the reaction rate. This type of cell is not a two-stream laminar flow fuel cell: since the formation of bubbles can disrupt two separate laminar flows, a combined stream of fuel and oxidant was used. In laminar conditions, mixing will still not occur. It was found that using selective catalysts (i.e., not platinum) or extremely low flow rates can prevent crossover.[8][9]
Scaling Issues
Membraneless fuel cells are currently being manufactured on the micro scale using fabrication processes found in the MEMS/NEMS area. These cell sizes are suited for the small scale due to the limit of their operating principles. The scale-up of these cells to the 2–10 Watt range has proven difficult[10] since, at large scales, the cells cannot maintain the correct operating conditions.
For example, laminar flow is a necessary condition for these cells. Without laminar flow, crossover would occur and a physical electrolytic membrane would be needed. Maintaining laminar flow is achievable on the macro scale but maintaining a steady Reynolds number is difficult due to variations in pumping. This variation causes fluctuations at the reactant interfaces which can disrupt laminar flow and affect diffusion and crossover. However, self-pumping mechanisms can be difficult and expensive to produce on the macro-scale. In order to take advantage of hydrophobic effects, the surfaces need to be smooth to control the contact angle of water. To produce these surfaces on a large scale, the cost will significantly increase due to the close tolerances which are needed. Also, it is not evident whether using a carbon-dioxide based pumping system on the large scale is viable.
Membraneless fuel cells can utilize self-pumping mechanisms, but this requires fuels which release GHG's (greenhouse gases) and other unwanted products. With an environmentally friendly fuel configuration (such as H2-O2), self-pumping can be difficult, so external pumps are required. However, for a rectangular channel, the required pumping pressure increases in proportion to L−3, where L is the characteristic length of the cell. Thus, by decreasing the size of a cell from 10 cm to 1 cm, the required pressure increases by a factor of 1000. For micro fuel cells, this pumping requirement is a significant burden. In some cases electroosmotic flow can be induced instead, although for liquid media this requires high voltages. Further, with decreasing size, surface tension effects become significantly more important; for the fuel cell configuration with a carbon dioxide generating mechanism, surface tension effects could also increase the pumping requirements drastically.
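The L−3 pressure scaling can be sketched with a wide-rectangular-channel Poiseuille approximation; all values below are assumed for illustration:

```python
# Sketch of the scaling argument in the text (assumed values): for a
# rectangular channel with every dimension scaled by the same factor and a
# fixed volumetric flow rate Q, the laminar pressure drop grows as L**-3.
# Wide-channel Poiseuille approximation: dp ~ 12 * mu * length * Q / (w * h**3).

def pressure_drop(mu, length, q, width, height):
    """Approximate laminar pressure drop in a wide rectangular channel [Pa]."""
    return 12.0 * mu * length * q / (width * height**3)

mu = 1.0e-3   # Pa*s, water
q = 1.0e-9    # m^3/s, assumed fixed flow rate

dp_10cm = pressure_drop(mu, 0.10, q, 0.10, 0.10)  # 10 cm cell
dp_1cm = pressure_drop(mu, 0.01, q, 0.01, 0.01)   # same geometry shrunk 10x

print(f"scale-down penalty: {dp_1cm / dp_10cm:.0f}x")  # 1000x, i.e. L**-3
```

Shrinking every dimension tenfold at fixed flow rate multiplies the required pressure by a factor of a thousand, which is the pumping penalty the text describes.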
Potential Applications of LFFCs
The thermodynamic potential of a fuel cell limits the amount of power that an individual cell can deliver. Therefore, in order to obtain more power, fuel cells must be connected in series or parallel (depending on whether greater voltage or current is desired). For large-scale building and automobile power applications, macro fuel cells can be used because space is not necessarily the limiting constraint. However, for portable devices such as cell phones and laptops, macro fuel cells are often impractical due to their space requirements and lower run times. LFFC's, however, are perfectly suited for these types of applications. The lack of a physical electrolytic membrane and the energy-dense fuels that can be used mean that LFFC's can be produced at lower costs and smaller sizes. In most portable applications, energy density is more important than efficiency due to the low power requirements.
^ "MRFC Technology - Mantra Energy Alternatives". Mantra Energy Alternatives. Retrieved 2015-10-27.
^ Ragheb, Magdi. "Steam Reforming." Lecture. Energy Storage Systems. University of Illinois, 3 Oct. 2010. Web. 12 Oct. 2010. <https://netfiles.uiuc.edu/mragheb/www/NPRE%20498ES%20Energy%20Storage%20Systems/index.htm Archived 2012-12-18 at the Wayback Machine>.
^ Kin, T., W. Shieh, C. Yang, and G. Yu. "Estimating the Methanol Crossover Rate of PEM and the Efficiency of DMFC via a Current Transient Analysis." Journal of Power Sources 161.2 (2006): 1183–186. Print.
^ 1. E.R. Choban, L.J. Markoski, A. Wieckowski, P.J.A. Kenis, Micro-Fluidic Fuel Cell Based on Laminar Flow. J. Power Sources, 2004,128, 54–60.
^ Fukada, Satoshi. "Analysis of Oxygen Reduction Rate in a Proton Exchange Membrane Fuel Cell." Energy Conversion and Management 42.9 (2000): 1121. Print.
^ Verhallen, P., L. Oomen, A. Elsen, and A. Kruger. "The Diffusion Coefficients of Helium, Hydrogen, Oxygen and Nitrogen in Water Determined from the Permeability of a Stagnant Liquid Layer in the Quasi-steady State." Chemical Engineering Science 39.11 (1984): 1535–541. Print.
^ Hollinger, Adam S., R. J. Maloney, L. J. Markoski, P. J. Kenis, R. S. Jayashree, and D. Natarajan. "Nanoporous Separator and Low Fuel Concentration to Minimize Crossover in Direct Methanol Laminar Flow Fuel Cells." Journal of Power Sources 195.11 (2010): 3523–528. Print.
^ D. D. Meng and C.-J. Kim, “Micropumping of liquid by directional growth and selective venting of gas Bubbles”, Lab on a Chip, 8 (2008), pp. 958- 968.
^ Meng, D. D., J. Hur, and C. Kim. "MEMBRANELSS MICRO FUEL CELL CHIP ENABLED BY SELF-PUMPING OF FUEL-OXIDANT MIXTURE." Proc. of 2010 IEEE 23rd International Conference on Micro Electro Mechanical Systems, Wanchai, Hong Kong. Print.
^ Abruna, H., and A. Stroock. "Transport Phenomena and Interfacial Kinetics in Planar Microfluidic Membraneless Fuel Cells." Hydrogen Program. U.S. Department of Energy. Web. 25 Nov. 2010. <http://www.hydrogen.energy.gov/pdfs/review10/bes017_abruna_2010_o_web.pdf>.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Membraneless_Fuel_Cells&oldid=1068840712" |
Dynamics of Patterns | EMS Press
Ideas from dynamical systems have had a profound impact on the way we think about pattern formation. Bifurcation theory, for instance, has helped tremendously in explaining pattern selection in experiments, including Rayleigh--Benard convection and Belousov--Zhabotinsky reactions. However, these results can typically only describe patterns with a given prescribed periodic lattice structure on the plane. Amplitude equations go beyond this limitation: They allow us to investigate the dynamics of slowly varying amplitude modulations of a fixed spatially homogeneous state over large, but finite, time intervals.
Over the past few years, the focus has shifted to situations where neither bifurcation theory nor the amplitude-equation formalism can give enough insight into the formation and the dynamics of patterns. Examples are the dynamical selection of patterns, extracting and describing transient dynamics, the nonlinear stability of patterns in unbounded domains, and the development of efficient numerical techniques to capture specific dynamical effects and behaviours. This workshop brought together researchers who work on these questions from different perspectives and with different techniques, ranging from dynamical systems theory, qualitative analysis of partial differential equations, and bifurcation theory to spectral analysis and numerical methods for patterns.
During the workshop, 25 presentations were given. In addition, three PhD students discussed their projects in shorter talks of 15 minutes each. On Tuesday afternoon, no talks were scheduled; instead, the attendees had the opportunity to discuss more specialized topics in smaller groups. We now describe briefly the main outcomes and new directions that emerged during the workshop.
The formation and interaction of pulses in one space dimension were one central theme of the workshop. Recent efforts aim to describe the interaction of localized pulses analytically and to compute interacting pulses efficiently, using numerical means. Progress was made in particular for pulses that are only weakly localized: in certain circumstances, it is then still possible to capture the interaction of such pulses analytically.
Over the past few years, the \emph{freezing method} has been investigated thoroughly from both analytical and numerical viewpoints. This method computes pulses numerically by separating the shape dynamics from the dynamics on the underlying symmetry group. These developments were discussed together with applications to spiral waves and to propagating pulses in partial differential equations of mixed type.
With all these successes, it became clear, however, that both analytic and numerical understanding of the evolution and interaction of two-dimensional localized spatio-temporal patterns is still very rudimentary.
Spatially extended patterns and their dynamics constituted a further focus. Much recent work has centered on explaining specific phenomena that have been observed experimentally: examples include turbulent stripe patterns in fluid flows, Liesegang precipitation patterns, planar hexagon patches, and vortex dynamics in flows past cylinders. Significant progress was also made in proving spectral and nonlinear stability of spatially extended waves such as rotating waves, spatially homogeneous oscillations, and spatially periodic structures. Furthermore, techniques to assess spectral stability for multi-dimensional fronts were discussed.
Systems with delay form an important class of infinite-dimensional systems that exhibit interesting dynamical patterns. State-dependent delays allow the delay of the system to depend on the prehistory state of that system itself. Hysteresis is one well-known example. State-dependent delays are motivated by important applications, generate a plethora of new dynamical patterns, and present formidable obstacles to analysis. Progress reports included patterns of periodicity in hysteresis, implicitly defined delays, and singularly perturbed equations.
\subsection*{Special Tuesday sessions}
No talks were scheduled on Tuesday afternoon to give participants an additional opportunity for discussion in smaller groups. We briefly report on two group meetings that took place in this setting.
\subsubsection*{Poster discussion: More on pulses, shocks, and their interactions.}
The main intention of the posters during the work session was to discuss among a group of specialists the existence, stability and bifurcation of nonlinear waves that are either periodic in time or in space. The group discussed progress to these kinds of problems that involved various different approaches such as singular perturbation techniques, modulation equations, and pointwise Green's function estimates. Specific topics and the corresponding contributors were: \begin{itemize} \item Busse balloons and bifurcations of spatially periodic patterns (Arjen Doelman, with Harmen von der Ploeg, Jens Rademacher and Sjors van der Stelt); \item Interfaces between rolls in the Swift--Hohenberg equation (Mariana Haragus with Arnd Scheel); \item Delayed bifurcation in a simple reaction-diffusion equation (Tasso Kaper with Peter De Maesschalck and Nikola Popovic); \item Stability of time-periodic viscous shocks (Bj\"orn Sandstede with Margaret Beck and Kevin Zumbrun). \end{itemize} Another intention of the poster discussion was to have interactions between this group and junior researchers who were given the possibility to present more details (in particular numerical results) than during their short talks: \begin{itemize} \item Freezing waves in hyperbolic PDEs (Jens Rottmann-Matthes); \item Numerical decomposition of multistructures (Sabrina Selle). \end{itemize}
\subsubsection*{Global parabolic dynamics.} Progress and discussion addressed two aspects of the global dynamics of semilinear parabolic partial differential equations, mainly on a circle domain. These aspects are the Morse--Smale or Kupka--Smale property, on the one hand, and the characterization of global attractors, on the other hand. The Kupka--Smale property asserts hyperbolicity of all equilibria and periodic orbits, as well as transversality of their stable and unstable manifolds to hold for generic (i.e., for ``most'') nonlinearities. The Morse--Smale property asserts, in addition, the absence of any recurrence beyond periodicity. In such situations, it is conceivable, but still a formidable task, to study the detailed spatio-temporal structure of the patterns arising in the global attractor.
More precisely, we considered the following reaction-diffusion equations on the circle $S^1$:
\[
u_t(x,t) = u_{xx}(x,t) + f(x, u(x,t), u_x(x,t)), \quad (x,t) \in S^1 \times {\bf R}^+,
\]
where $f$ is a regular function from $S^1 \times {\bf R}^2$ to ${\bf R}$. First, we have recalled that these equations satisfy the Poincar\'{e}--Bendixson property [2]. We have also stated the recent result of Czaja and Rocha [1], who showed that the stable and unstable manifolds of hyperbolic periodic orbits intersect transversally.
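As an illustration only (not part of the report), an equation of this type can be integrated numerically by the method of lines with periodic finite differences; the bistable nonlinearity chosen below is an arbitrary, $x$-independent example:

```python
# Minimal numerical sketch of u_t = u_xx + f(u, u_x) on the circle S^1,
# discretized with periodic central differences and explicit Euler steps.
# The nonlinearity f is an illustrative choice, not one from the report.

import numpy as np

def step(u, dx, dt, f):
    """One explicit Euler step on a periodic grid."""
    u_x = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)      # central difference
    u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * (u_xx + f(u, u_x))

n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)           # periodic grid on S^1
dx = x[1] - x[0]
dt = 0.2 * dx**2                      # stability restriction for explicit Euler

f = lambda u, u_x: u - u**3           # example bistable reaction term
u = 0.1 * np.cos(x)                   # small initial perturbation

for _ in range(2000):
    u = step(u, dx, dt, f)

# The cubic term keeps the dynamics bounded between the stable states +/-1.
print(float(np.max(np.abs(u))))
```

Such direct simulations complement the qualitative results above by letting one observe the approach to equilibria and rotating waves on the attractor.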
Genevi{\`e}ve Raugel presented a proof of the Morse--Smale property for the above equation. She first showed genericity (with respect to the non-linearity) of the hyperbolicity of all equilibria and periodic orbits [3]. The main ingredients are the non-increase of the zero number and Sard--Smale theorems. She also showed automatic transversality of the stable and unstable manifolds of equilibria with different Morse indices, the generic non-existence (with respect to the non-linearity) of orbits connecting two equilibria with the same Morse index, etc. [4]. These properties allow one to show that, generically with respect to the non-linearity $f$, the above reaction-diffusion equation on the circle $S^1$ indeed possesses the Morse--Smale property.
In the second part of this discussion, we have considered genericity (with respect to the non-linearity) of hyperbolicity of all equilibria and periodic orbits in the case of a scalar reaction-diffusion equation in higher dimension and explained the results already obtained in a work in progress by P. Brunovsk{\`y}, R. Joly, and G. Raugel. Genevi{\`e}ve Raugel noticed that the same types of arguments and techniques should lead them to the proof of the Kupka--Smale property for parabolic PDEs in the near future.
For $x$-independent nonlinearities $f = f(u,u_x)$ on the circle domain, the Morse--Smale property enters the description of all generic global attractors given in [5]. Carlos Rocha indicated how to characterize the set of $2\pi$-periodic solutions of planar Hamiltonians of the form $u'' + g(u) = 0$ and obtained a useful tool for the description of the associated global attractors. He discussed a permutation characterization for the periodic solutions of the corresponding stationary problems. Essentially, the permutation describes the braid formed by the stationary solutions and traveling waves of the semilinear parabolic equation [6]. When extended to equilibria in the $x$-reversible case $f(u,-p) = f(u,p)$, this characterization describes the precise heteroclinic structure of the above parabolic partial differential equation.
{\bf References\medskip }
\noindent [1] R. Czaja and C. Rocha, {\it Transversality in scalar reaction-diffusion equations on a circle}, J. Diff. Eqs. {\bf 245} (2008), 692--721
\noindent [2] B. Fiedler and J. Mallet-Paret, {\it A Poincar\'{e}--Bendixson theorem for scalar reaction-diffusion equations}, Arch. Rational Mech. Analysis {\bf 107} (1989), 325--345.
\noindent [3] R. Joly and G. Raugel, {\it Generic hyperbolicity of equilibria and periodic orbits of the parabolic equation on the circle}, to appear in Transactions of the AMS.
\noindent [4] R. Joly and G. Raugel, {\it Generic Morse--Smale property for the parabolic equation on the circle}, manuscript.
\noindent [5] B. Fiedler, C. Rocha, M. Wolfrum, {\it Heteroclinic orbits between rotating waves of semilinear parabolic equations on the circle}, J. Diff. Eqs. {\bf 201} (2004), 99--138.
\noindent [6] B. Fiedler, C. Rocha, M. Wolfrum, {\it A permutation characterization of periodic orbits in autonomous Hamiltonians or reversible systems}, Preprint (2009).
Björn Sandstede, Wolf-Jürgen Beyn, Bernold Fiedler, Dynamics of Patterns. Oberwolfach Rep. 5 (2008), no. 4, pp. 3201–3271 |
Coincidence situations for absolutely summing non-linear mappings | EMS Press
Coincidence situations for absolutely summing non-linear mappings
Geraldo Botelho
Situations where every linear operator between certain Banach spaces is absolutely (p;q)-summing are very useful in linear functional analysis. In this paper we investigate situations of this type for three classes (two new and one well known) of nonlinear mappings which are generalizations of the notion of absolutely summing linear operators.
Geraldo Botelho, Daniel Pellegrino, Coincidence situations for absolutely summing non-linear mappings. Port. Math. 64 (2007), no. 2, pp. 175–191 |
Francis turbine | Knowpia
The Francis turbine is a type of water turbine. It is an inward-flow reaction turbine that combines radial and axial flow concepts. Francis turbines are the most common water turbine in use today, and can achieve over 95% efficiency.[1]
Francis inlet scroll at the Grand Coulee Dam
Side-view cutaway of a vertical Francis turbine. Here water enters horizontally in a spiral shaped pipe (spiral case) wrapped around the outside of the turbine's rotating runner and exits vertically down through the center of the turbine.
The process of arriving at the modern Francis runner design took from 1848 to approximately 1920.[1] It became known as the Francis turbine around 1920, being named after British-American engineer James B. Francis who in 1848 created a new turbine design.[1]
Francis turbines are primarily used for electrical power production. The power output of the electric generators generally ranges from just a few kilowatts up to 1000 MW, though mini-hydro installations may be lower. The best performance is seen when the head height is between 100–300 metres (330–980 ft).[2] Penstock diameters are between 1 and 10 m (3.3 and 32.8 ft). The speeds of different turbine units range from 70 to 1000 rpm. A wicket gate around the outside of the turbine's rotating runner controls the rate of water flow through the turbine for different power production rates. Francis turbines are usually mounted with a vertical shaft, to isolate water from the generator. This also facilitates installation and maintenance.[citation needed]
Pawtucket Gatehouse in Lowell, Massachusetts; site of the first Francis turbine
Francis Runner, Grand Coulee Dam
Water wheels of different types have been used for more than 1,000 years to power mills of all types, but they were relatively inefficient. Nineteenth-century efficiency improvements of water turbines allowed them to replace nearly all water wheel applications and compete with steam engines wherever water power was available. After electric generators were developed in the late 1800s, turbines were a natural source of generator power where potential hydropower sources existed.
In 1826 the French engineer Benoit Fourneyron developed a high-efficiency (80%) outward-flow water turbine. Water was directed tangentially through the turbine runner, causing it to spin. Another French engineer Jean-Victor Poncelet designed an inward-flow turbine in about 1820 that used the same principles. S. B. Howd obtained a US patent in 1838 for a similar design.
In 1848 James B. Francis, while working as head engineer of the Locks and Canals company in the water wheel-powered textile factory city of Lowell, Massachusetts,[3] improved on these designs to create more efficient turbines. He applied scientific principles and testing methods to produce a very efficient turbine design. More importantly, his mathematical and graphical calculation methods improved turbine design and engineering. His analytical methods allowed the design of high-efficiency turbines to precisely match a site's water flow and pressure (water head).
Spiral casing: The spiral casing around the runner of the turbine is known as the volute casing or scroll case. Throughout its length, it has numerous openings at regular intervals to allow the working fluid to impinge on the blades of the runner. These openings convert the pressure energy of the fluid into kinetic energy just before the fluid impinges on the blades. This maintains a constant velocity despite the fact that numerous openings have been provided for the fluid to enter the blades, as the cross-sectional area of this casing decreases uniformly along the circumference.
Guide and stay vanes: The primary function of the guide and stay vanes is to convert the pressure energy of the fluid into kinetic energy. It also serves to direct the flow at design angles to the runner blades.
Runner blades: Runner blades are the heart of any turbine. These are the centers where the fluid strikes and the tangential force of the impact produces torque causing the shaft of the turbine to rotate. Close attention to design of blade angles at inlet and outlet is necessary, as these are major parameters affecting power production.
Three Gorges Dam Francis turbine runner, on the Yangtze River, China
The Francis turbine is a type of reaction turbine, a category of turbine in which the working fluid comes to the turbine under immense pressure and the energy is extracted by the turbine blades from the working fluid. A part of the energy is given up by the fluid because of pressure changes occurring on the blades of the turbine, quantified by the expression of degree of reaction, while the remaining part of the energy is extracted by the volute casing of the turbine. At the exit, water acts on the spinning cup-shaped runner features, leaving at low velocity and low swirl with very little kinetic or potential energy left. The turbine's exit tube is shaped to help decelerate the water flow and recover the pressure.
Francis turbine (exterior view) attached to a generator
Cut-away view, with wicket gates (yellow) at minimum flow setting
Cut-away view, with wicket gates (yellow) at full flow setting
Blade efficiency
Ideal velocity diagram, illustrating that in ideal cases the whirl component of outlet velocity is zero and the flow is completely axial
Usually the flow velocity (velocity perpendicular to the tangential direction) remains constant throughout, i.e. V_f1 = V_f2, and is equal to that at the inlet to the draft tube. Using the Euler turbine equation, E/m = e = V_w1 U_1, where e is the energy transfer to the rotor per unit mass of the fluid. From the inlet velocity triangle,

V_w1 = V_f1 cot α_1 and U_1 = V_f1 (cot α_1 + cot β_1),

so that

e = V_f1² cot α_1 (cot α_1 + cot β_1).

The loss of kinetic energy per unit mass at the outlet is V_f2²/2. Therefore, neglecting friction, the blade efficiency becomes

η_b = e / (e + V_f2²/2),

i.e.

η_b = 2 V_f1² cot α_1 (cot α_1 + cot β_1) / (V_f2² + 2 V_f1² cot α_1 (cot α_1 + cot β_1)).
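As a quick numerical check of the formula above (the guide-vane and blade angles below are assumed, illustrative values, not from a real machine):

```python
# Numerical check of the blade-efficiency formula:
# e = Vf1^2 * cot(a1) * (cot(a1) + cot(b1)),  eta_b = e / (e + Vf2^2 / 2),
# with Vf2 = Vf1 (constant flow velocity). Angles are assumed example values.

import math

def cot(theta):
    return 1.0 / math.tan(theta)

def blade_efficiency(v_f1, alpha1, beta1, v_f2=None):
    """Blade efficiency neglecting friction; angles in radians."""
    if v_f2 is None:
        v_f2 = v_f1                      # constant-flow-velocity assumption
    e = v_f1**2 * cot(alpha1) * (cot(alpha1) + cot(beta1))
    return e / (e + v_f2**2 / 2.0)

alpha1 = math.radians(15.0)   # assumed guide-vane angle
beta1 = math.radians(60.0)    # assumed runner-blade inlet angle

eta = blade_efficiency(v_f1=10.0, alpha1=alpha1, beta1=beta1)
print(f"blade efficiency = {eta:.3f}")  # blade efficiency = 0.970
```

The small outlet kinetic-energy loss V_f2²/2 relative to e is what pushes the blade efficiency of a well-designed runner into the high nineties.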
Degree of reaction
Actual velocity diagram, illustrating that the whirl component of the outlet velocity is non-zero
Degree of reaction can be defined as the ratio of pressure energy change in the blades to total energy change of the fluid.[4] This means that it is a ratio indicating the fraction of total change in fluid pressure energy occurring in the blades of the turbine. The rest of the changes occur in the stator blades of the turbines and the volute casing as it has a varying cross-sectional area. For example, if the degree of reaction is given as 50%, that means that half of the total energy change of the fluid is taking place in the rotor blades and the other half is occurring in the stator blades. If the degree of reaction is zero it means that the energy changes due to the rotor blades is zero, leading to a different turbine design called the Pelton Turbine.
R = 1 - (V_1² - V_2²)/(2e) = 1 - (V_1² - V_f2²)/(2e)

The second equality holds since the discharge is radial in a Francis turbine. Substituting the value of e from above and using V_1² - V_f2² = V_f1² cot² α_1 (which follows from V_f2 = V_f1), we get

R = 1 - cot α_1 / (2 (cot α_1 + cot β_1))
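A similar sketch evaluates the degree of reaction for the same assumed, illustrative angles:

```python
# Illustrative check (assumed angles, not a real design) of the
# degree-of-reaction expression R = 1 - cot(a1) / (2 * (cot(a1) + cot(b1)))
# derived above for radial discharge (Vf2 = Vf1).

import math

def cot(theta):
    return 1.0 / math.tan(theta)

def degree_of_reaction(alpha1, beta1):
    """Degree of reaction for radial discharge; angles in radians."""
    return 1.0 - cot(alpha1) / (2.0 * (cot(alpha1) + cot(beta1)))

alpha1 = math.radians(15.0)   # assumed guide-vane angle
beta1 = math.radians(60.0)    # assumed runner-blade inlet angle

R = degree_of_reaction(alpha1, beta1)
print(f"degree of reaction = {R:.3f}")
```

A value strictly between 0 and 1 confirms that part of the pressure drop occurs in the runner and part in the stator and casing, as the text explains; R = 0 would correspond to a pure impulse machine such as the Pelton turbine.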
Small Swiss-made Francis turbine
Francis turbines may be designed for a wide range of heads and flows. This versatility, along with their high efficiency, has made them the most widely used turbine in the world. Francis type units cover a head range from 40 to 600 m (130 to 2,000 ft), and their connected generator output power varies from just a few kilowatts up to 1000 MW. Large Francis turbines are individually designed for each site to operate with the given water flow and water head at the highest possible efficiency, typically over 90% (to 99%[5]).
Sensor fish, a device used to study the impact of fish travelling through Francis and Kaplan turbines
^ a b c Lewis, B J; Cimbala, J M; Wouden, A M (2014-03-01). "Major historical developments in the design of water wheels and Francis hydroturbines". IOP Conference Series: Earth and Environmental Science. 22 (1): 012020. Bibcode:2014E&ES...22a2020L. doi:10.1088/1755-1315/22/1/012020. ISSN 1755-1315.
This article incorporates text from this source, which is available under the CC BY 3.0 license.
^ Paul Breeze, Power Generation Technologies (Third Edition), 2019
^ "Lowell Notes – James B. Francis" (PDF). National Park Service. Archived from the original (PDF) on 2016-03-10.
^ Bansal, RK (2010). A textbook of fluid mechanics and hydraulic machines (Revised ninth ed.). India: Laxmi publications. pp. 880–883.
^ L. Suo, ... H. Xie, in Comprehensive Renewable Energy, 2012
Layton, Edwin T. From Rule of Thumb to Scientific Engineering: James B. Francis and the Invention of the Francis Turbine. NLA Monograph Series. Stony Brook, NY: Research Foundation of the State University of New York, 1992. OCLC 1073565482.
S. M. Yahya, page number 13, fig. 1.14.[full citation needed] |
Heat Transfer Analysis of a Novel Pressurized Air Receiver for Concentrated Solar Power via Combined Cycles | J. Thermal Sci. Eng. Appl. | ASME Digital Collection
Hischier, I., Hess, D., Lipiński, W., Modest, M., and Steinfeld, A. (May 19, 2010). "Heat Transfer Analysis of a Novel Pressurized Air Receiver for Concentrated Solar Power via Combined Cycles." ASME. J. Thermal Sci. Eng. Appl. December 2009; 1(4): 041002. https://doi.org/10.1115/1.4001259
A novel design of a high-temperature pressurized solar air receiver for power generation via combined Brayton–Rankine cycles is proposed. It consists of an annular reticulate porous ceramic (RPC) bounded by two concentric cylinders. The inner cylinder, which serves as the solar absorber, has a cavity-type configuration and a small aperture for the access of concentrated solar radiation. Absorbed heat is transferred by conduction, radiation, and convection to the pressurized air flowing across the RPC. A 2D steady-state energy conservation equation coupling the three modes of heat transfer is formulated and solved by the finite volume technique and by applying the Rosseland diffusion, P1, and Monte Carlo radiation methods. Key results include the temperature distribution and thermal efficiency as a function of the geometrical and operational parameters. For a solar concentration ratio of 3000 suns, the outlet air temperature reaches 1000 °C at 10 bars, yielding a thermal efficiency of 78%.
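As a rough, hypothetical sketch of what "thermal efficiency" means here: the ratio of enthalpy gained by the pressurized air to the concentrated solar power entering the aperture. Every input below (irradiance, aperture diameter, mass flow, cp) is an assumption of this sketch, not a value from the paper, so the result is not expected to reproduce the paper's 78%.

```python
import math

I_DNI = 1000.0         # W/m^2, assumed direct normal irradiance
C = 3000.0             # solar concentration ratio, from the abstract
d_aperture = 0.10      # m, assumed aperture diameter
A_ap = math.pi * d_aperture**2 / 4.0

m_dot = 0.01           # kg/s, assumed air mass flow
cp = 1100.0            # J/(kg K), approximate cp of hot air (assumed)
T_in, T_out = 25.0, 1000.0   # deg C; outlet temperature from the abstract

Q_solar = C * I_DNI * A_ap            # W of concentrated solar input
Q_air = m_dot * cp * (T_out - T_in)   # W absorbed by the air stream
eta_thermal = Q_air / Q_solar
print(f"eta_thermal = {eta_thermal:.2f}")
```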
Brayton cycle, heat transfer, Monte Carlo methods, solar power stations, concentrated solar power, pressurized receiver, cavity receiver, reticulate porous ceramic, Rosseland diffusion, P1, Monte Carlo, radiation, conduction, convection, solar energy
Cavities, Ceramics, Concentrating solar power, Convection, Heat conduction, Heat transfer, Radiation (Physics), Solar energy, Temperature, Combined cycles, Diffusion (Physics), Thermal efficiency, Heat, Cylinders, Design
Cardinal point (optics)
In Gaussian optics, the cardinal points consist of three pairs of points located on the optical axis of a rotationally symmetric, focal, optical system. These are the focal points, the principal points, and the nodal points.[1] For ideal systems, the basic imaging properties such as image size, location, and orientation are completely determined by the locations of the cardinal points; in fact only four points are necessary: the focal points and either the principal or nodal points. The only ideal system that has been achieved in practice is the plane mirror;[2] however, the cardinal points are widely used to approximate the behavior of real optical systems. Cardinal points provide a way to analytically simplify a system with many components, allowing the imaging characteristics of the system to be approximately determined with simple calculations.
The cardinal points of a thick lens in air.
F, F' front and rear focal points,
P, P' front and rear principal points,
V, V' front and rear surface vertices.
The cardinal points lie on the optical axis of the optical system. Each point is defined by the effect the optical system has on rays that pass through that point, in the paraxial approximation. The paraxial approximation assumes that rays travel at shallow angles with respect to the optical axis, so that $\sin \theta \approx \theta$ and $\cos \theta \approx 1$.[3] Aperture effects are ignored: rays that do not pass through the aperture stop of the system are not considered in the discussion below.
Focal points and planes
See also: Focus (optics) and Focal length
The front focal point of an optical system, by definition, has the property that any ray that passes through it will emerge from the system parallel to the optical axis. The rear (or back) focal point of the system has the reverse property: rays that enter the system parallel to the optical axis are focused such that they pass through the rear focal point.
Rays that leave the object with the same angle cross at the back focal plane.
The front and rear (or back) focal planes are defined as the planes, perpendicular to the optic axis, which pass through the front and rear focal points. An object infinitely far from the optical system forms an image at the rear focal plane. For objects a finite distance away, the image is formed at a different location, but rays that leave the object parallel to one another cross at the rear focal plane.
Angle filtering with an aperture at the rear focal plane.
A diaphragm or "stop" at the rear focal plane can be used to filter rays by angle, since:
It only allows rays to pass that are emitted at an angle (relative to the optical axis) that is sufficiently small. (An infinitely small aperture would only allow rays that are emitted along the optical axis to pass.)
No matter where on the object the ray comes from, the ray will pass through the aperture as long as the angle at which it is emitted from the object is small enough.
Note that the aperture must be centered on the optical axis for this to work as indicated. Using a sufficiently small aperture in the focal plane will make the lens telecentric.
Similarly, the allowed range of angles on the output side of the lens can be filtered by putting an aperture at the front focal plane of the lens (or a lens group within the overall lens). This is important for DSLR cameras having CCD sensors. The pixels in these sensors are more sensitive to rays that hit them straight on than to those that strike at an angle. A lens that does not control the angle of incidence at the detector will produce pixel vignetting in the images.
Principal planes and points
Various lens shapes, and the location of the principal planes.
The two principal planes have the property that a ray emerging from the lens appears to have crossed the rear principal plane at the same distance from the axis that the ray appeared to cross the front principal plane, as viewed from the front of the lens. This means that the lens can be treated as if all of the refraction happened at the principal planes, and the linear magnification from one principal plane to the other is +1. The principal planes are crucial in defining the optical properties of the system, since it is the distance of the object and image from the front and rear principal planes that determines the magnification of the system. The principal points are the points where the principal planes cross the optical axis.
If the medium surrounding the optical system has a refractive index of 1 (e.g., air or vacuum), then the distance from the principal planes to their corresponding focal points is just the focal length of the system. In the more general case, the distance to the foci is the focal length multiplied by the index of refraction of the medium.
For a thin lens in air, the principal planes both lie at the location of the lens. The point where they cross the optical axis is sometimes misleadingly called the optical centre of the lens. Note, however, that for a real lens the principal planes do not necessarily pass through the centre of the lens, and in general may not lie inside the lens at all.
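The focal-length and principal-plane relations above can be made concrete with 2×2 ray-transfer (ABCD) matrices, a textbook Gaussian-optics method rather than anything specific to this article; the lens prescription below is illustrative. With the system matrix [[A, B], [C, D]] from front to rear vertex (reduced-angle convention, system in air), f = −1/C, the back focal distance is −A/C, and the principal planes sit at the focal-point positions offset by f.

```python
import numpy as np

def refraction(n1, n2, R):
    """Refraction at a spherical surface; surface power P = (n2 - n1) / R."""
    return np.array([[1.0, 0.0], [-(n2 - n1) / R, 1.0]])

def translation(d, n):
    """Propagation over thickness d in a medium of index n (reduced angles)."""
    return np.array([[1.0, d / n], [0.0, 1.0]])

# Illustrative biconvex lens in air: n = 1.5, radii +50 / -50 mm, 10 mm thick.
n, R1, R2, t = 1.5, 50.0, -50.0, 10.0
M = refraction(n, 1.0, R2) @ translation(t, n) @ refraction(1.0, n, R1)
(A, B), (C, D) = M

f = -1.0 / C        # effective focal length (object and image space in air)
bfd = -A / C        # rear vertex to rear focal point F'
ffd = -D / C        # front vertex to front focal point F, measured forward
pp_rear = bfd - f   # rear principal plane P', relative to the rear vertex
pp_front = ffd - f  # front principal plane P, forward of the front vertex
print(f"f = {f:.2f} mm, BFD = {bfd:.2f} mm, P' at {pp_rear:.2f} mm")
```

For this symmetric lens both principal planes come out negative, i.e. inside the glass, illustrating the caution above that they need not pass through the centre of the lens; and since the lens is in air, the nodal points coincide with the principal points.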
Nodal points
N, N' The front and rear nodal points of a thick lens.
The front and rear nodal points have the property that a ray aimed at one of them will be refracted by the lens such that it appears to have come from the other, and with the same angle with respect to the optical axis. (Angular magnification between nodal points is +1.) The nodal points therefore do for angles what the principal planes do for transverse distance. If the medium on both sides of the optical system is the same (e.g., air), then the front and rear nodal points coincide with the front and rear principal points, respectively.
The nodal points are widely misunderstood in photography, where it is commonly asserted that the light rays "intersect" at "the nodal point", that the iris diaphragm of the lens is located there, and that this is the correct pivot point for panoramic photography, so as to avoid parallax error.[4][5][6] These claims generally arise from confusion about the optics of camera lenses, as well as confusion between the nodal points and the other cardinal points of the system. (A better choice of the point about which to pivot a camera for panoramic photography can be shown to be the centre of the system's entrance pupil.[4][5][6] On the other hand, swing-lens cameras with fixed film position rotate the lens about the rear nodal point to stabilize the image on the film.[6][7])
Surface vertices
In optics, the surface vertices are the points where each optical surface crosses the optical axis. They are important primarily because they are the physically measurable parameters for the position of the optical elements, and so the positions of the cardinal points must be known with respect to the vertices to describe the physical system.
In anatomy, the surface vertices of the eye's lens are called the anterior and posterior poles of the lens.[8]
Modeling optical systems as mathematical transformations
In geometrical optics for each ray entering an optical system a single, unique, ray exits. In mathematical terms, the optical system performs a transformation that maps every object ray to an image ray.[1] The object ray and its associated image ray are said to be conjugate to each other. This term also applies to corresponding pairs of object and image points and planes. The object and image rays and points are considered to be in two distinct optical spaces, object space and image space; additional intermediate optical spaces may be used as well.
Rotationally symmetric optical systems; Optical axis, axial points, and meridional planes
An optical system is rotationally symmetric if its imaging properties are unchanged by any rotation about some axis. This (unique) axis of rotational symmetry is the optical axis of the system. Optical systems can be folded using plane mirrors; the system is still considered to be rotationally symmetric if it possesses rotational symmetry when unfolded. Any point on the optical axis (in any space) is an axial point.
Rotational symmetry greatly simplifies the analysis of optical systems, which otherwise must be analyzed in three dimensions. Rotational symmetry allows the system to be analyzed by considering only rays confined to a single transverse plane containing the optical axis. Such a plane is called a meridional plane; it is a cross-section through the system.
Ideal, rotationally symmetric, optical imaging system
An ideal, rotationally symmetric, optical imaging system must meet three criteria:
All rays "originating" from any object point converge to a single image point (Imaging is stigmatic).
Object planes perpendicular to the optical axis are conjugate to image planes perpendicular to the axis.
The image of an object confined to a plane normal to the axis is geometrically similar to the object.
In some optical systems imaging is stigmatic for one or perhaps a few object points, but to be an ideal system imaging must be stigmatic for every object point.
Unlike rays in mathematics, optical rays extend to infinity in both directions. Rays are real when they are in the part of the optical system to which they apply, and are virtual elsewhere. For example, object rays are real on the object side of the optical system. In stigmatic imaging an object ray intersecting any specific point in object space must be conjugate to an image ray intersecting the conjugate point in image space. A consequence is that every point on an object ray is conjugate to some point on the conjugate image ray.
Geometrical similarity implies the image is a scale model of the object. There is no restriction on the image's orientation. The image may be inverted or otherwise rotated with respect to the object.
Focal and afocal systems, focal points
In afocal systems an object ray parallel to the optical axis is conjugate to an image ray parallel to the optical axis. Such systems have no focal points (hence afocal) and also lack principal and nodal points. The system is focal if an object ray parallel to the axis is conjugate to an image ray that intersects the optical axis. The intersection of the image ray with the optical axis is the focal point F' in image space. Focal systems also have an axial object point F such that any ray through F is conjugate to an image ray parallel to the optical axis. F is the object space focal point of the system.
The transformation between object space and image space is completely defined by the cardinal points of the system, and these points can be used to map any point on the object to its conjugate image point.
^ a b Greivenkamp, John E. (2004). Field Guide to Geometrical Optics. SPIE Field Guides vol. FG01. SPIE. pp. 5–20. ISBN 0-8194-5294-7.
^ Welford, W.T. (1986). Aberrations of Optical Systems. CRC. ISBN 0-85274-564-8.
^ Hecht, Eugene (2002). Optics (4th ed.). Addison Wesley. p. 155. ISBN 0-321-18878-0.
^ a b Kerr, Douglas A. (2005). "The Proper Pivot Point for Panoramic Photography" (PDF). The Pumpkin. Archived from the original (PDF) on 13 May 2006. Retrieved 5 March 2006.
^ a b van Walree, Paul. "Misconceptions in photographic optics". Archived from the original on 19 April 2015. Retrieved 1 January 2007. Item #6.
^ a b c Littlefield, Rik (6 February 2006). "Theory of the "No-Parallax" Point in Panorama Photography" (PDF). ver. 1.0. Retrieved 14 January 2007.
^ Searle, G.F.C. 1912 Revolving Table Method of Measuring Focal Lengths of Optical Systems in "Proceedings of the Optical Convention 1912" pp. 168–171.
^ Gray, Henry (1918). "Anatomy of the Human Body". p. 1019. Retrieved 12 February 2009.
Lambda Research Corporation (2001). OSLO Optics Reference (PDF) (Version 6.1 ed.). Retrieved 5 March 2006. Pages 74–76 define the cardinal points.
Plot (graphics)
Scatterplot of the eruption interval for Old Faithful (a geyser)
Plots play an important role in statistics and data analysis. The procedures here can broadly be split into two parts: quantitative and graphical. Quantitative techniques are the set of statistical procedures that yield numeric or tabular output. Examples of quantitative techniques include:[1]
These and similar techniques are all valuable and are mainstream in terms of classical analysis. There are also many statistical tools generally referred to as graphical techniques. These include:[1]
Graphical procedures such as plots are a short path to gaining insight into a data set in terms of testing assumptions, model selection, model validation, estimator selection, relationship identification, factor effect determination, outlier detection. Statistical graphics give insight into aspects of the underlying structure of the data.[1]
Types of plots
Biplot : These are a type of graph used in statistics. A biplot allows information on both samples and variables of a data matrix to be displayed graphically. Samples are displayed as points while variables are displayed either as vectors, linear axes or nonlinear trajectories. In the case of categorical variables, category level points may be used to represent the levels of a categorical variable. A generalised biplot displays information on both continuous and categorical variables.
Bland–Altman plot : In analytical chemistry and biostatistics this plot is a method of data plotting used in analysing the agreement between two different assays. It is identical to a Tukey mean-difference plot, which is what it is still known as in other fields, but was popularised in medical statistics by Bland and Altman.[2][3]
Bode plots are used in control theory.
Box plot : In descriptive statistics, a boxplot, also known as a box-and-whisker diagram or plot, is a convenient way of graphically depicting groups of numerical data through their five-number summaries (the smallest observation, lower quartile (Q1), median (Q2), upper quartile (Q3), and largest observation). A boxplot may also indicate which observations, if any, might be considered outliers.
Carpet plot : A two-dimensional plot that illustrates the interaction between two and three independent variables and one to three dependent variables.
Comet plot : A two- or three-dimensional animated plot in which the data points are traced on the screen.
Contour plot : A two-dimensional plot which shows the one-dimensional curves, called contour lines on which the plotted quantity q is a constant. Optionally, the plotted values can be color-coded.
Dalitz plot : This a scatterplot often used in particle physics to represent the relative frequency of various (kinematically distinct) manners in which the products of certain (otherwise similar) three-body decays may move apart
Phase path of Duffing oscillator plotted as a comet plot[4]
Animated marker over a 2D plot[4]
Parallel Category Plot
Funnel plot : This is a useful graph designed to check the existence of publication bias in meta-analyses. Funnel plots, introduced by Light and Pillemer in 1994[5] and discussed in detail by Egger and colleagues,[6] are useful adjuncts to meta-analyses. A funnel plot is a scatterplot of treatment effect against a measure of study size. It is used primarily as a visual aid to detecting bias or systematic heterogeneity.
Dot plot (statistics) : A dot chart or dot plot is a statistical chart consisting of group of data points plotted on a simple scale. Dot plots are used for continuous, quantitative, univariate data. Data points may be labelled if there are few of them. Dot plots are one of the simplest plots available, and are suitable for small to moderate sized data sets. They are useful for highlighting clusters and gaps, as well as outliers.
Forest plot : is a graphical display that shows the strength of the evidence in quantitative scientific studies. It was developed for use in medical research as a means of graphically representing a meta-analysis of the results of randomized controlled trials. In the last twenty years, similar meta-analytical techniques have been applied in observational studies (e.g. environmental epidemiology) and forest plots are often used in presenting the results of such studies also.
Galbraith plot : In statistics, a Galbraith plot (also known as Galbraith's radial plot or just radial plot), is one way of displaying several estimates of the same quantity that have different standard errors.[7] It can be used to examine heterogeneity in a meta-analysis, as an alternative or supplement to a forest plot.
Nichols plot : This is a graph used in signal processing in which the logarithm of the magnitude is plotted against the phase of a frequency response on orthogonal axes.
Normal probability plot : The normal probability plot is a graphical technique for assessing whether or not a data set is approximately normally distributed. The data are plotted against a theoretical normal distribution in such a way that the points should form an approximate straight line. Departures from this straight line indicate departures from normality. The normal probability plot is a special case of the probability plot.
Nyquist plot : Plot is used in automatic control and signal processing for assessing the stability of a system with feedback. It is represented by a graph in polar coordinates in which the gain and phase of a frequency response are plotted. The plot of these phasor quantities shows the phase as the angle and the magnitude as the distance from the origin.
Partial regression plot : In applied statistics, a partial regression plot attempts to show the effect of adding another variable to the model (given that one or more independent variables are already in the model). Partial regression plots are also referred to as added variable plots, adjusted variable plots, and individual coefficient plots.
Partial residual plot : In applied statistics, a partial residual plot is a graphical technique that attempts to show the relationship between a given independent variable and the response variable given that other independent variables are also in the model.
Probability plot : The probability plot is a graphical technique for assessing whether or not a data set follows a given distribution such as the normal or Weibull, and for visually estimating the location and scale parameters of the chosen distribution. The data are plotted against a theoretical distribution in such a way that the points should form approximately a straight line. Departures from this straight line indicate departures from the specified distribution.
Q–Q plot : In statistics, a Q–Q plot (Q stands for quantile) is a graphical method for diagnosing differences between the probability distribution of a statistical population from which a random sample has been taken and a comparison distribution. An example of the kind of differences that can be tested for is non-normality of the population distribution.
Recurrence plot : In descriptive statistics and chaos theory, a recurrence plot (RP) is a plot showing, for a given moment in time, the times at which a phase space trajectory visits roughly the same area in the phase space. In other words, it is a graph of $\vec{x}(i) \approx \vec{x}(j)$, showing $i$ on a horizontal axis and $j$ on a vertical axis, where $\vec{x}$ is a phase space trajectory.
Scatterplot : A scatter graph or scatter plot is a type of display using variables for a set of data. The data is displayed as a collection of points, each having the value of one variable determining the position on the horizontal axis and the value of the other variable determining the position on the vertical axis.[8]
Shmoo plot : In electrical engineering, a shmoo plot is a graphical display of the response of a component or system varying over a range of conditions and inputs. Often used to represent the results of the testing of complex electronic systems such as computers, ASICs or microprocessors. The plot usually shows the range of conditions in which the device under test will operate.
Spaghetti plots are a method of viewing data to visualize possible flows through systems. Flows depicted in this manner appear like noodles, hence the coining of this term.[9] This method of statistics was first used to track routing through factories. Visualizing flow in this manner can reduce inefficiency within the flow of a system.
A normal Q–Q plot
Stemplot : A stemplot (or stem-and-leaf plot), in statistics, is a device for presenting quantitative data in a graphical format, similar to a histogram, to assist in visualizing the shape of a distribution. They evolved from Arthur Bowley's work in the early 1900s, and are useful tools in exploratory data analysis. Unlike histograms, stemplots retain the original data to at least two significant digits, and put the data in order, thereby easing the move to order-based inference and non-parametric statistics.
Star plot : A graphical method of displaying multivariate data. Each star represents a single observation. Typically, star plots are generated in a multi-plot format with many stars on each page and each star representing one observation.
Surface plot : In this type of graph, a surface is plotted to fit a set of data triplets (X,Y,Z), where Z is obtained by the function to be plotted, Z = f(X,Y). Usually, the set of X and Y values are equally spaced. Optionally, the plotted values can be color-coded.
Ternary plot : A ternary plot, ternary graph, triangle plot, simplex plot, or de Finetti diagram is a barycentric plot on three variables which sum to a constant. It graphically depicts the ratios of the three variables as positions in an equilateral triangle. It is used in petrology, mineralogy, metallurgy, and other physical sciences to show the compositions of systems composed of three species. In population genetics, it is often called a de Finetti diagram. In game theory, it is often called a simplex plot.
Vector field : Vector field plots (or quiver plots) show the direction and the strength of a vector associated with a 2D or 3D points. They are typically used to show the strength of the gradient over the plane or a surface area.
Violin plot : Violin plots are a method of plotting numeric data. They are similar to box plots, except that they also show the probability density of the data at different values (in the simplest case this could be a histogram). Typically violin plots will include a marker for the median of the data and a box indicating the interquartile range, as in standard box plots. Overlaid on this box plot is a kernel density estimation. Violin plots are available as extensions to a number of software packages, including R through the vioplot library, and Stata through the vioplot add-in.[10]
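The recurrence plot described in the list above can be sketched in a few lines of NumPy: mark the pair (i, j) whenever the trajectory returns close to itself, x(i) ≈ x(j). The sine-wave trajectory and the threshold eps are illustrative choices of this sketch.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """R[i, j] = 1 where |x[i] - x[j]| < eps (pointwise distances)."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

t = np.linspace(0, 4 * np.pi, 200)
R = recurrence_matrix(np.sin(t), eps=0.1)

# The main diagonal is trivially recurrent, and a periodic signal produces
# off-diagonal lines separated by the period.
print(R.shape, R.sum())
```

Plotting R with an image viewer (e.g. as a black-and-white raster) gives the familiar recurrence-plot texture.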
Plots for specific quantities
Arrhenius plot : This plot compares the logarithm of a reaction rate ($\ln(k)$, ordinate axis) plotted against inverse temperature ($1/T$, abscissa). Arrhenius plots are often used to analyze the effect of temperature on the rates of chemical reactions.
Dot plot (bioinformatics) : This plot compares two biological sequences and is a graphical method that allows the identification of regions of close similarity between them. It is a kind of recurrence plot.
Lineweaver–Burk plot : This plot compares the reciprocals of reaction rate and substrate concentration. It is used to represent and determine enzyme kinetics.
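The Arrhenius entry above has a direct numerical counterpart: since ln(k) is linear in 1/T, a straight-line fit recovers the activation energy from the slope, Ea = −slope × R. The rate data below are synthetic, generated from an assumed Ea purely for illustration.

```python
import numpy as np

R_GAS = 8.314        # J/(mol K)
Ea_true = 50_000.0   # J/mol, used only to generate synthetic data
A_pre = 1.0e10       # 1/s, assumed pre-exponential factor

T = np.array([300.0, 320.0, 340.0, 360.0, 380.0])   # K
k = A_pre * np.exp(-Ea_true / (R_GAS * T))          # Arrhenius rate law

# Fit ln(k) against 1/T; the slope is -Ea/R.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_est = -slope * R_GAS
print(f"Ea ~ {Ea_est / 1000:.1f} kJ/mol")   # recovers ~50 kJ/mol
```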
3D plots
Streamtube plot
Types of graphs and their uses vary very widely. A few typical examples are:
Simple graph: Supply and demand curves, simple graphs used in economics to relate supply and demand to price. The graphs can be used together to determine the economic equilibrium (essentially, to solve an equation).
Simple graph used for reading values: the bell-shaped normal or Gaussian probability distribution, from which, for example, the probability of a man's height being in a specified range can be derived, given data for the adult male population.
Very complex graph: the psychrometric chart, relating temperature, pressure, humidity, and other quantities.
Non-rectangular coordinates: the above all use two-dimensional rectangular coordinates; an example of a graph using polar coordinates, sometimes in three dimensions, is the antenna radiation pattern chart, which represents the power radiated in all directions by an antenna of specified type.
List of plotting programs
^ a b c NIST/SEMATECH (2003). "The Role of Graphics". In: e-Handbook of Statistical Methods 6 January 2003 (Date created).
^ Altman DG, Bland JM (1983). "Measurement in medicine: the analysis of method comparison studies". The Statistician. Blackwell Publishing. 32 (3): 307–317. doi:10.2307/2987937. JSTOR 2987937.
^ Bland JM, Altman DG (1986). "Statistical methods for assessing agreement between two methods of clinical measurement". Lancet. 1 (8476): 307–10. doi:10.1016/S0140-6736(86)90837-8. PMID 2868172. S2CID 2844897.
^ a b Simionescu, P.A. (2014). Computer Aided Graphing and Simulation Tools for AutoCAD Users (1st ed.). Boca Raton, FL: CRC Press. ISBN 978-1-4822-5290-3.
^ R. J. Light; D. B. Pillemer (1984). Summing up: The Science of Reviewing Research. Cambridge, Massachusetts.: Harvard University Press.
^ M. Egger, G. Davey Smith, M. Schneider & C. Minder (September 1997). "Bias in meta-analysis detected by a simple, graphical test". BMJ. 315 (7109): 629–634. doi:10.1136/bmj.315.7109.629. PMC 2127453. PMID 9310563.
^ Galbraith, Rex (1988). "Graphical display of estimates having differing standard errors". Technometrics. American Society for Quality. 30 (3): 271–281. doi:10.2307/1270081. JSTOR 1270081.
^ Utts, Jessica M. Seeing Through Statistics 3rd Edition, Thomson Brooks/Cole, 2005, pp 166–167. ISBN 0-534-39402-7
^ Theodore T. Allen (2010). Introduction to Engineering Statistics and Lean Sigma: Statistical Quality Control and Design of Experiments and Systems. Springer. p. 128. ISBN 978-1-84882-999-2. Retrieved 2011-02-17.
^ Hintze Jerry L.; Nelson Ray D. (1998). "Violin Plots: A Box Plot-Density Trace Synergism". The American Statistician. 52 (2): 181–84. doi:10.1080/00031305.1998.10480559.
Dataplot gallery of some useful graphical techniques at itl.nist.gov.
Estimation of Hazard Rate and Mean Residual Life Ordering for Fuzzy Random Variable (2015)
The $L_2$-metric is used to find the distance between triangular fuzzy numbers. The mean and variance of a fuzzy random variable are also determined by this concept. The hazard rate is estimated and its relationship with mean residual life ordering of fuzzy random variable is investigated. Additionally, we have focused on deriving a bivariate characterization of hazard rate ordering which explicitly involves pairwise interchange of two fuzzy random variables $X$ and $Y$.
S. Ramasubramanian and P. Mahendran. "Estimation of Hazard Rate and Mean Residual Life Ordering for Fuzzy Random Variable." Abstr. Appl. Anal. 2015, 1–5 (2015). https://doi.org/10.1155/2015/164795
A characterization theorem for operators on white noise functionals
April, 1999
Dong Myung Chung, Tae Su Chung, Un Cig Ji
The $W$-transform of an operator on white noise functionals is introduced, and characterizations of operators on white noise functionals are given in terms of their $W$-transforms. A simple proof of the analytic characterization theorem for operator symbols, and the convergence of operators, are also discussed.
Dong Myung Chung, Tae Su Chung, and Un Cig Ji. "A characterization theorem for operators on white noise functionals." J. Math. Soc. Japan 51(2), 437–447 (April 1999). https://doi.org/10.2969/jmsj/05120437
Keywords: $S$-transform, $W$-transform, operator symbol, white noise functionals
Octave (electronics) - Wikipedia
Relative unit corresponding to doubling of frequency
In electronics, an octave (symbol: oct) is a logarithmic unit for ratios between frequencies, with one octave corresponding to a doubling of frequency. For example, the frequency one octave above 40 Hz is 80 Hz. The term is derived from the Western musical scale where an octave is a doubling in frequency.[note 1] Specification in terms of octaves is therefore common in audio electronics.
Along with the decade, it is a unit used to describe frequency bands or frequency ratios.[1][2]
Ratios and slopes
A frequency ratio expressed in octaves is the base-2 logarithm (binary logarithm) of the ratio:
{\displaystyle {\text{number of octaves}}=\log _{2}\left({\frac {f_{2}}{f_{1}}}\right)}
An amplifier or filter may be stated to have a frequency response of ±6 dB per octave over a particular frequency range, which signifies that the power gain changes by ±6 decibels (a factor of 4 in power), when the frequency changes by a factor of 2. This slope, or more precisely 10 log10(4) ≈ 6.0206 decibels per octave, corresponds to an amplitude gain proportional to frequency, which is equivalent to ±20 dB per decade (factor of 10 amplitude gain change for a factor of 10 frequency change). This would be a first-order filter.
The distance between the frequencies 20 Hz and 40 Hz is 1 octave. An amplitude of 52 dB at 4 kHz decreases as frequency increases at −2 dB/oct. What is the amplitude at 13 kHz?
{\displaystyle {\text{number of octaves}}=\log _{2}\left({\frac {13}{4}}\right)=1.7}
{\displaystyle {\text{Mag}}_{13{\text{ kHz}}}=52{\text{ dB}}+(1.7{\text{ oct}}\times -2{\text{ dB/oct}})=48.6{\text{ dB}}.\,}
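The worked example above can be checked numerically. A minimal Python sketch (the function names are illustrative, not from any library):

```python
import math

def octaves(f1, f2):
    """Number of octaves between two frequencies: the base-2 log of the ratio."""
    return math.log2(f2 / f1)

def level_after_slope(level_db, f1, f2, slope_db_per_oct):
    """Amplitude after following a constant dB-per-octave slope from f1 to f2."""
    return level_db + octaves(f1, f2) * slope_db_per_oct

n = octaves(4e3, 13e3)                           # log2(13/4), about 1.70 octaves
mag = level_after_slope(52, 4e3, 13e3, -2)       # 52 dB falling at -2 dB/oct
print(round(n, 2), round(mag, 1))                # 1.7 48.6
```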
^ The prefix octa-, denoting eight, refers to the eight notes of a diatonic scale; the association of the word with doubling is solely a matter of customary usage.
^ Levine, William S. (2010). The Control Handbook: Control System Fundamentals, p. 9–29. ISBN 9781420073621/ISBN 9781420073669.
^ Perdikaris, G. (1991). Computer Controlled Systems: Theory and Applications, p. 117. ISBN 9780792314226.
Gay-Lussac's law - Wikipedia
Gay-Lussac's law (also referred to as Amontons's law) states that the pressure of a given mass of gas varies directly with the absolute temperature of the gas when the volume is kept constant.[1] Mathematically, it can be written as:
{\displaystyle {\frac {P}{T}}=k}
. It is a special case of the ideal gas law. Gay-Lussac is recognized for the Pressure Law which established that the pressure of an enclosed gas is directly proportional to its temperature and which he was the first to formulate (c. 1809).[2] He is also sometimes credited[3][4][5] with being the first to publish convincing evidence that shows the relationship between the pressure and temperature of a fixed mass of gas kept at a constant volume.[4]
These laws are also commonly known as the Pressure Law (or Amontons's law) and Dalton's law, respectively.[3][4][5][6]
Law of combining volumes
Under STP, a reaction between three cubic meters of hydrogen gas and one cubic meter of nitrogen gas will produce about two cubic meters of ammonia.
The ratio between the volumes of the reactant gases and the gaseous products can be expressed in simple whole numbers.
For example, Gay-Lussac found that two volumes of hydrogen and one volume of oxygen would react to form two volumes of gaseous water. Based on Gay-Lussac's results, Amedeo Avogadro hypothesized that, at the same temperature and pressure, equal volumes of gas contain equal numbers of molecules (Avogadro's law). This hypothesis meant that the previously stated result
2 volumes of hydrogen + 1 volume of oxygen = 2 volumes of gaseous water
could also be expressed as
2 molecules of hydrogen + 1 molecule of oxygen = 2 molecules of water.
As another example, 100 mL of hydrogen combine with 50 mL of oxygen to give 100 mL of water vapour: hydrogen (100 mL) + oxygen (50 mL) = water vapour (100 mL).
The law of combining gases was made public by Joseph Louis Gay-Lussac in 1808.[7][8] Avogadro's hypothesis, however, was not initially accepted by chemists until the Italian chemist Stanislao Cannizzaro was able to convince the First International Chemical Congress in 1860.[9]
Pressure-temperature law
This law is often referred to as Gay-Lussac's law of pressure–temperature. Between 1800 and 1802, Gay-Lussac discovered the relationship between the pressure and temperature of a fixed mass of gas kept at a constant volume.[10][11][12] He discovered this while building an "air thermometer".
The pressure of a gas of fixed mass and fixed volume is directly proportional to the gas's absolute temperature.
If a gas's temperature increases, then so does its pressure if the mass and volume of the gas are held constant. The law has a particularly simple mathematical form if the temperature is measured on an absolute scale, such as in kelvins. The law can then be expressed mathematically as
{\displaystyle {P}\propto {T}\quad {\text{or}}\quad P=kT,}
{\displaystyle {\frac {P}{T}}=k,}
where P is the pressure of the gas, T is the temperature of the gas (measured in kelvins), and k is a constant. Comparing the same gas under two different conditions gives
{\displaystyle {\frac {P_{1}}{T_{1}}}={\frac {P_{2}}{T_{2}}}\qquad {\text{or}}\qquad P_{1}T_{2}=P_{2}T_{1}.}
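The two-state form of the law is simple enough to evaluate directly. A minimal Python sketch (the helper name is illustrative):

```python
def pressure_at(p1, t1_k, t2_k):
    """Gay-Lussac's law at constant volume: P1/T1 = P2/T2, temperatures in kelvins."""
    return p1 * t2_k / t1_k

# Heating a sealed container of gas from 300 K to 450 K at constant volume
# raises its pressure in the same 1.5x proportion.
print(pressure_at(100.0, 300.0, 450.0))  # 150.0 (kPa, if p1 was in kPa)
```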
Because Amontons discovered the law beforehand, Gay-Lussac's name is now generally associated within chemistry with the law of combining volumes discussed in the section above. Some introductory physics textbooks still define the pressure-temperature relationship as Gay-Lussac's law.[13][14][15] Gay-Lussac primarily investigated the relationship between volume and temperature and published it in 1802, but his work did cover some comparison between pressure and temperature.[16] Given the relative technology available to both men, Amontons was only able to work with air as a gas, whereas Gay-Lussac was able to experiment with multiple types of common gases, such as oxygen, nitrogen, and hydrogen.[17] Gay-Lussac did attribute his findings to Jacques Charles because he used much of Charles's unpublished data from 1787 – hence, the law became known as Charles's law or the Law of Charles and Gay-Lussac.[18]
Gay-Lussac's (Amontons') law, Charles's law, and Boyle's law form the combined gas law. These three gas laws in combination with Avogadro's law can be generalized by the ideal gas law.
Expansion of gases
Gay-Lussac used the formula acquired from ΔV/V = αΔT to define the rate of expansion α for gases. For air he found a relative expansion ΔV/V = 37.50% and obtained a value of α = 37.50%/100 °C = 1/266.66 °C, which indicated that the value of absolute zero was approximately 266.66 °C below 0 °C.[19] The value of the rate of expansion α is approximately the same for all gases, and this is also sometimes referred to as Gay-Lussac's law.
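Gay-Lussac's extrapolation to absolute zero can be reproduced from the numbers above. A minimal Python sketch:

```python
# Gay-Lussac's expansion coefficient for air: dV/V = 37.50% over 100 degrees C,
# so alpha = 0.3750 / 100 per degree C. Extrapolating the volume to zero gives
# an estimate of absolute zero at -1/alpha degrees Celsius.
alpha = 0.3750 / 100          # per degree Celsius
absolute_zero_estimate = -1 / alpha
print(round(absolute_zero_estimate, 2))  # -266.67
```

The modern value is -273.15 °C; the gap reflects the accuracy of Gay-Lussac's measured expansion rate, not an error in the extrapolation.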
Avogadro's law – Relationship between volume and number of moles of a gas at constant temperature and pressure.
^ "Gay-Lussac's Law". LibreTexts. 2016-06-27. Retrieved 5 December 2018.
^ Lagassé, Paul (2016), "Joseph Louis Gay-Lussac", Columbia Electronic Encyclopedia (6th Edition, Q2 ed.), Columbia University, ISBN 978-0787650155 [page needed]
^ a b Palmer, WP (1991), "Philately, Science Teaching and the History of Science" (PDF), Lab Talk, 35 (1): 30–31
^ a b c Holbrow, CH; Amato, JC (2011), "What Gay-Lussac didn't tell us", Am. J. Phys., 79 (1): 17, Bibcode:2011AmJPh..79...17H, doi:10.1119/1.3485034
^ a b Spurgin, CB (1987), "Gay-Lussac's gas-expansivity experiments and the traditional mis-teaching of 'Charles's Law'", Annals of Science, 44 (5): 489–505, doi:10.1080/00033798700200321
^ Crosland MP (1961), "The Origins of Gay-Lussac's Law of Combining Volumes of Gases", Annals of Science, 17 (1): 1, doi:10.1080/00033796100202521
^ Gay-Lussac (1809) "Mémoire sur la combinaison des substances gazeuses, les unes avec les autres" (Memoir on the combination of gaseous substances with each other), Mémoires de la Société d'Arcueil 2: 207–234. Available in English at: Le Moyne College.
^ "Joseph-Louis Gay-Lussac". chemistryexplained.com.
^ Hartley Harold (1966). "Stanislao Cannizzaro, F.R.S. (1826–1910) and the First International Chemical Conference at Karlsruhe". Notes and Records of the Royal Society of London. 21 (1): 56–63. doi:10.1098/rsnr.1966.0006. S2CID 58453894.
^ Barnett, Martin K. (Aug 1941), "A brief history of thermometry", Journal of Chemical Education, 18 (8): 358, Bibcode:1941JChEd..18..358B, doi:10.1021/ed018p358 . Extract.
^ "Thall's History of Gas Laws". Archived from the original on 2010-09-08. Retrieved 2010-07-16.
Amontons, G. (presented 1699, published 1732) "Moyens de substituer commodément l'action du feu à la force des hommes et des chevaux pour mouvoir les machines" (Ways to conveniently substitute the action of fire for the force of men and horses in order to power machines), Mémoires de l’Académie des sciences de Paris, 112–126; see especially pages 113–117.
See also: Fontenelle, B. B. (1743) "Sur une nouvelle proprieté de l'air, et une nouvelle construction de Thermométre" (On a new property of the air and a new construction of thermometer), Histoire de l'Academie royale des sciences, 1–8.
^ Tippens, Paul E. (2007). Physics, 7th ed. McGraw-Hill. 386–387.
^ Cooper, Crystal (Feb. 11, 2010). "Gay-Lussac's Law". Bright Hub Engineering. Retrieved from http://www.brighthubengineering.com/hvac/26213-gay-lussacs-law/ on July 8, 2013.
^ Verma, K.S. - Cengage Physical Chemistry Part 1 - Section 5.6.3
^ Crosland, Maurice P. (2004). Gay-Lussac: Scientist and Bourgeois. Cambridge University Press. 119–120.
^ Asimov, Isaac (1966). Understanding Physics – Motion, Sound, and Heat. Walker and Co. 191–192.
^ Gay-Lussac (1802), "Recherches sur la dilatation des gaz et des vapeurs" (Researches on the expansion of gases and vapors), Annales de Chimie 43: 137–175. On page 157, Gay-Lussac mentions the unpublished findings of Charles: "Avant d'aller plus loin, je dois prévenir que quoique j'eusse reconnu un grand nombre de fois que les gaz oxigène, azote, hydrogène et acide carbonique, et l'air atmosphérique se dilatent également depuis 0° jusqu'a 80°, le cit. Charles avait remarqué depuis 15 ans la même propriété dans ces gaz ; mais n'avant jamais publié ses résultats, c'est par le plus grand hasard que je les ai connus." (Before going further, I should inform [you] that although I had recognized many times that the gases oxygen, nitrogen, hydrogen, and carbonic acid [i.e., carbon dioxide], and atmospheric air also expand from 0° to 80°, citizen Charles had noticed 15 years ago the same property in these gases; but having never published his results, it is by the merest chance that I knew of them.) Available in English at: Le Moyne College.
^ Gay-Lussac (1802). "Recherches sur la dilatation des gaz et des vapeurs". Annales de chimie, ou, Recueil de mémoires concernant la chimie (in French).
Guch, Ian (2003). The Complete Idiot's Guide to Chemistry. Alpha, Penguin Group Inc. ISBN 978-1-59257-101-7.
Mascetta, Joseph A. (1998). How to Prepare for the SAT II Chemistry. Barron's. ISBN 978-0-7641-0331-5.
World of Scientific Discovery on Joseph-Louis Gay-Lussac on Bookrags
Detect errors in input samples using checksum - Simulink
The LTE CRC Decoder block calculates a cyclic redundancy check (CRC) and compares it with the appended checksum, for each frame of streaming data samples. You can select from the polynomials specified by LTE standard TS 36.212 [1]. The block provides a hardware-optimized architecture and interface.
Input sample, specified as a binary scalar, unsigned integer scalar, or binary vector. The vector size must be less than or equal to the length of the polynomial. The CRC length also must be divisible by the vector size. For example, for polynomial type CRC24A, the valid vector sizes are 24, 12, 8, 6, 4, 3, 2, and 1. An integer input is interpreted as a binary word. For example, vector input [0 0 0 1 0 0 1 1] is equivalent to uint8 input 19.
Data Types: single | double | Boolean | ufix1 | uint8 | uint16 | uint32
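The MSB-first interpretation of a binary vector as an unsigned integer, described above, can be sketched in Python (the helper is illustrative, not part of any MathWorks API):

```python
def bits_to_uint(bits):
    """Interpret a binary vector (most significant bit first) as an unsigned integer."""
    value = 0
    for b in bits:
        value = (value << 1) | b  # shift in one bit per element
    return value

# The vector input [0 0 0 1 0 0 1 1] from the text is the binary word 00010011.
print(bits_to_uint([0, 0, 0, 1, 0, 0, 1, 1]))  # 19
```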
Output sample, returned as a binary scalar, unsigned integer scalar, or binary vector of the same data type and size as the input samples. The checksum is removed from the end of the frame.
err — Indicator of checksum mismatch
binary scalar | integer scalar
Indicator of checksum mismatch, returned as a binary scalar or an integer scalar. If you select Full checksum mismatch, this port returns the integer XOR result of the calculated checksum against the appended checksum. The err value is valid when ctrl.end is 1 (true). The data type of this port matches the data type of the input samples.
Full checksum mismatch — Return bit-by-bit mismatch information
When this parameter is not selected, the err port returns a Boolean value indicating whether any checksum bits are mismatched, after applying CRC Mask. When this parameter is selected, the err port returns an integer that represents the locations of bit mismatches in the checksum.
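The two err-port behaviors can be modeled outside Simulink. This is a hedged Python sketch of the described semantics (an illustration, not the block's implementation):

```python
def checksum_mismatch(calculated, appended, full=False):
    """Model of the err port described above.

    With full=True the result is the bitwise XOR of the calculated and
    appended checksums, whose set bits mark the mismatched positions;
    otherwise it is a single boolean indicating any mismatch.
    """
    diff = calculated ^ appended
    return diff if full else diff != 0

print(checksum_mismatch(0b1011, 0b1001, full=True))  # 2 (only bit 1 differs)
print(checksum_mismatch(0b1011, 0b1011))             # False
```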
This parameter appears when Full checksum mismatch is cleared.
Check for CRC Errors in Streaming Samples
This example shows how to use the LTE CRC Decoder block to check encoded data, and how to compare the hardware-friendly design with the results from LTE Toolbox™. The workflow follows these steps:
X′ = F_W ⊗ X ⊕ D.
This waveform shows a 40-sample frame, input two samples at a time, encoded with a CRC16 polynomial. There is no gap between input frames. The output stream has removed the checksum, so there are eight cycles between output frames. The latency of the decoder is 3*CRCLength/InputSize + 5, assuming contiguous valid input samples.
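The latency formula quoted above can be evaluated directly. A small Python sketch (the helper name is illustrative):

```python
def decoder_latency(crc_length, input_size):
    """Decoder latency in cycles: 3*CRCLength/InputSize + 5, assuming
    contiguous valid input samples. The docs require input_size to divide
    crc_length, so integer division is exact here."""
    return 3 * crc_length // input_size + 5

# CRC16 polynomial with two samples per cycle, as in the waveform described above.
print(decoder_latency(16, 2))  # 29
```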
These resource and performance data are the synthesis results from the generated HDL targeted to a Xilinx® Zynq®-7000 ZC706 board. The implementation is for a CRC24 polynomial, with no CRC Mask or output checksum mismatch, and scalar input. The design achieves 526.31 MHz clock frequency.
lteCRCDecode (LTE Toolbox) | lteCRCEncode (LTE Toolbox) |
Plot singular values of frequency response with additional plot customization options - MATLAB sigmaplot
Sigma Plot with Specified Frequency Scale and Units
h = sigmaplot(sys)
h = sigmaplot(sys1,sys2,...,sysN)
h = sigmaplot(sys1,LineSpec1,...,sysN,LineSpecN)
h = sigmaplot(___,w)
h = sigmaplot(___,type)
h = sigmaplot(AX,___)
h = sigmaplot(___,plotoptions)
sigmaplot lets you plot the singular values (SV) of the frequency response of a dynamic system model with a broader range of plot customization options than sigma. You can use sigmaplot to obtain the plot handle and use it to customize the plot, such as modifying the axes labels, limits, and units. You can also use sigmaplot to draw an SV plot on an existing set of axes represented by an axes handle. To customize an existing SV plot using the plot handle:
For more information, see Customizing Response Plots from the Command Line. To create SV plots with default options or to extract the frequency response data, use sigma.
h = sigmaplot(sys) plots the singular values (SV) of the frequency response of the dynamic system model sys and returns the plot handle h to the plot. You can use this handle h to customize the plot with the getoptions and setoptions commands.
h = sigmaplot(sys1,sys2,...,sysN) plots the SV of multiple dynamic systems sys1,sys2,…,sysN on the same plot. All systems must have the same number of inputs and outputs to use this syntax.
h = sigmaplot(sys1,LineSpec1,...,sysN,LineSpecN) sets the line style, marker type, and color for the SV plot of each system. All systems must have the same number of inputs and outputs to use this syntax.
h = sigmaplot(___,w) plots singular values for frequencies specified by the frequencies in w.
If w is a cell array of the form {wmin,wmax}, then sigmaplot plots the singular values at frequencies ranging between wmin and wmax.
If w is a vector of frequencies, then sigmaplot plots the singular values at each specified frequency.
h = sigmaplot(___,type) plots the modified singular value responses based on the type argument. Specify type as:
1 to plot the SV of the frequency response H^(-1), where H is the frequency response of sys.
2 to plot the SV of the frequency response I + H.
3 to plot the SV of the frequency response I + H^(-1).
h = sigmaplot(AX,___) plots the singular values on the Axes object in the current figure with the handle AX.
h = sigmaplot(___,plotoptions) plots the singular values with the options set specified in plotoptions. You can use these options to customize the SV plot appearance using the command line. Settings you specify in plotoptions override the preference settings in the MATLAB® session in which you run sigmaplot. Therefore, this syntax is useful when you want to write a script to generate multiple plots that look the same regardless of the local preferences.
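Outside MATLAB, the singular-value response that sigma/sigmaplot displays can be computed with NumPy. This is a sketch under assumed system matrices, not sigmaplot's implementation:

```python
import numpy as np

# Arbitrary illustrative 2-input, 2-output, 3-state system (A, B, C, D are
# assumptions for this sketch, not the documentation's example system).
rng = np.random.default_rng(0)
A = -np.eye(3) + 0.1 * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 3))
D = np.zeros((2, 2))

def sv_response(w):
    """Singular values of H(jw) = C (jw I - A)^-1 B + D at each frequency in w."""
    out = []
    for wk in w:
        H = C @ np.linalg.solve(1j * wk * np.eye(3) - A, B) + D
        out.append(np.linalg.svd(H, compute_uv=False))
    return np.array(out)  # shape (len(w), 2), largest singular value first

w = np.logspace(-2, 2, 100)  # frequency grid, like the {wmin, wmax} range
sv = sv_response(w)
print(sv.shape)              # (100, 2)
```

Plotting 20*log10(sv) against w on a log-frequency axis reproduces the shape of an SV plot.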
For this example, consider a MIMO state-space model with 3 inputs, 3 outputs and 3 states. Create a sigma plot with linear frequency scale, frequency units in Hz and turn the grid on.
Create a sigma plot with plot handle h and use getoptions for a list of the options available.
h = sigmaplot(sys_mimo);
XLim: {[1.0000e-03 1]}
H\left(s\right)=\left[\begin{array}{cc}0& \frac{3s}{{s}^{2}+s+10}\\ \frac{s+1}{s+5}& \frac{2}{s+6}\end{array}\right].
For tunable control design blocks, the function evaluates the model at its current value to plot the SV.
For uncertain control design blocks, the function plots the SV at the nominal value and random samples of the model.
Frequency-response data models such as frd models. For such models, the function plots the SV at frequencies defined in the model.
Frequencies at which to compute and plot SV of the frequency response, specified as the cell array {wmin,wmax} or as a vector of frequency values.
If w is a cell array of the form {wmin,wmax}, then the function plots the SV at frequencies ranging between wmin and wmax.
If w is a vector of frequencies, then the function plots the SV at each specified frequency. For example, use logspace to generate a row vector with logarithmically spaced frequency values.
Target axes, specified as an Axes object. If you do not specify the axes and if the current axes are Cartesian axes, then sigmaplot plots on the current axes.
plotoptions — Sigma plot options set
SigmaPlotOptions object
Sigma plot options set, specified as a SigmaPlotOptions object. You can use this option set to customize the SV plot appearance. Use sigmaoptions to create the option set. Settings you specify in plotoptions override the preference settings in the MATLAB session in which you run sigmaplot. Therefore, plotoptions is useful when you want to write a script to generate multiple plots that look the same regardless of the local preferences.
For the list of available options, see sigmaoptions.
Plot handle, returned as a handle object. Use the handle h to get and set the properties of the SV plot using getoptions and setoptions. For the list of available options, see the Properties and Values Reference section in Customizing Response Plots from the Command Line.
getoptions | setoptions | sigma | sigmaoptions |