A commutative ring $R$ with $1\neq 0$ is called an integral domain if it has no zero divisors. That is, if $ab=0$ for $a, b \in R$, then either $a=0$ or $b=0$.

Problem. Prove that every finite integral domain $R$ is a field.

Proof.

We give two proofs.
Proof 1.
Let $r \in R$ be a nonzero element. We show that $r$ is a unit.

Consider the map $f: R\to R$ sending $x\in R$ to $f(x)=rx$. We claim that the map $f$ is injective. Suppose that we have $f(x)=f(y)$ for $x, y \in R$. Then we have\[rx=ry,\]or equivalently,\[r(x-y)=0.\]

Since $R$ is an integral domain and $r\neq 0$, we must have $x-y=0$, and thus $x=y$. Hence $f$ is injective. Since $R$ is a finite set, the injective map $f$ is also surjective.

It then follows that there exists $s\in R$ such that $rs=f(s)=1$, and thus $r$ is a unit. Since every nonzero element of $R$ is a unit, $R$ is a field.
Proof 2.
Let $r\in R$ be a nonzero element. We show that the inverse of $r$ exists in $R$ as follows. Consider the powers of $r$:\[r, r^2, r^3,\dots.\]Since $R$ is a finite ring, not all of these powers can be distinct. Thus, there exist positive integers $m > n$ such that\[r^m=r^n.\]

Equivalently, we have\[r^n(r^{m-n}-1)=0.\]Since $R$ is an integral domain, this yields either $r^n=0$ or $r^{m-n}-1=0$. But the former gives $r=0$, a contradiction since $r\neq 0$.

Hence we have $r^{m-n}=1$, and thus\[r\cdot r^{m-n-1}=1.\]Since $m-n-1 \geq 0$, we have $r^{m-n-1}\in R$ and it is the inverse of $r$.

Therefore, every nonzero element of $R$ has an inverse in $R$, hence $R$ is a field.
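Proof 2 is easy to try out on a concrete finite integral domain such as $\Z/7\Z$. The following Python sketch (the function name is ours) locates a repetition $r^m=r^n$ among the powers of $r$ and returns $r^{m-n-1}$:

```python
# Illustrating Proof 2 in Z/7Z, a finite integral domain (hence a field):
# find m > n with r^m = r^n, then r^(m-n-1) is the inverse of r.
def inverse_via_powers(r, p):
    seen = {}            # power value -> exponent at which it first appeared
    value, k = r % p, 1
    while value not in seen:
        seen[value] = k
        value = (value * r) % p
        k += 1
    n, m = seen[value], k           # r^m = r^n with m > n
    return pow(r, m - n - 1, p)     # r * r^(m-n-1) = r^(m-n) = 1

p = 7
for r in range(1, p):
    assert (r * inverse_via_powers(r, p)) % p == 1
```

The same loop works for any prime modulus, mirroring how the proof only uses finiteness and the absence of zero divisors.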
Related Question.
Problem. Let $R$ be a finite commutative ring with identity $1$. Prove that every prime ideal of $R$ is a maximal ideal of $R$.
|
LaTeX supports many worldwide languages by means of special packages. This article explains how to import and use those packages to create documents in Portuguese.
The Portuguese language has accented words. For this reason the preamble of your file must be modified accordingly to support these characters and some other features.
\documentclass{article}
%encoding
%--------------------------------------
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
%--------------------------------------
%Portuguese-specific commands
%--------------------------------------
\usepackage[portuguese]{babel}
%--------------------------------------
%Hyphenation rules
%--------------------------------------
\usepackage{hyphenat}
\hyphenation{mate-mática recu-perar}
%--------------------------------------
\begin{document}

\tableofcontents

\vspace{2cm} %Add a 2cm space

\begin{abstract}
Este é um breve resumo do conteúdo do documento escrito em Português.
\end{abstract}

\section{Seção introdutória}
Esta é a primeira seção, podemos acrescentar alguns elementos adicionais e tudo será escrito corretamente. Além disso, se uma palavra é um caminho muito longo e tem de ser truncado, babel irá tentar truncar corretamente, dependendo do idioma.

\section{Segunda seção}
Esta seção é para ver o que acontece com comandos de texto que definem
\[ \lim x = \theta + 152383.52 \]

\end{document}
There are two packages in this document related to the encoding and the special characters. These packages will be explained in the next sections.
If you are looking for instructions on how to use more than one language in a single document, for instance English and Portuguese, see the International language support article.
Modern computer systems allow you to input letters of national alphabets directly from the keyboard. In order to handle the variety of input encodings used for different groups of languages and/or on different computer platforms, LaTeX employs the inputenc package to set up the input encoding. In this case the package properly displays characters of the Portuguese alphabet. To use this package, add the next line to the preamble of your document:

\usepackage[utf8]{inputenc}

The recommended input encoding is utf-8. You can use other encodings depending on your operating system.
To format LaTeX documents properly you should also choose a font encoding which supports the specific characters of the Portuguese language; this is accomplished by the fontenc package:

\usepackage[T1]{fontenc}

Even though the default encoding works well for Portuguese, using this specific encoding will avoid glitches with some specific characters, e.g., some accented characters might not be directly copyable from the generated PDF because they are constructed from the base character and an overlaid, shifted accent symbol, resulting in two separate symbols when copied. The default LaTeX encoding is OT1.
To extend the default LaTeX capabilities, with proper hyphenation and translated names of the document elements, import the babel package for the Portuguese language:

\usepackage[portuguese]{babel}

As you may see in the example in the introduction, instead of "abstract" and "Contents" the Portuguese words "Resumo" and "Conteúdo" are used.
If you need the Brazilian Portuguese localization, use brazilian instead of portuguese as the parameter when importing babel.
Sometimes, for formatting reasons, some words have to be broken up into syllables separated by a hyphen (-) so that the word can continue on a new line. For example, matemática could become mate-mática. The babel package, whose usage was described in the previous section, usually does a good job of breaking words correctly, but if this is not the case you can use a couple of commands in your preamble.
\usepackage{hyphenat}
\hyphenation{mate-mática recu-perar}
The first command imports the package hyphenat, and the second line is a list of space-separated words with explicit hyphenation rules. On the other hand, if you want a word not to be broken automatically, use the {\nobreak word} command within your document or enclose the word in an \mbox{word}.
|
Abbreviation: BDLat
A bounded distributive lattice is a structure $\mathbf{L}=\langle L,\vee ,0,\wedge ,1\rangle $ such that

$\langle L,\vee ,\wedge \rangle $ is a distributive lattice
$0$ is the least element: $0\leq x$
$1$ is the greatest element: $x\leq 1$
Let $\mathbf{L}$ and $\mathbf{M}$ be bounded distributive lattices. A morphism from $\mathbf{L}$ to $\mathbf{M}$ is a function $h:L\to M$ that is a homomorphism:
$h(x\vee y)=h(x)\vee h(y)$, $h(x\wedge y)=h(x)\wedge h(y)$, $h(0)=0$, $h(1)=1$
Example 1: $\langle \mathcal P(S), \cup, \emptyset, \cap, S\rangle$, the collection of subsets of a set $S$, with union, empty set, intersection, and the whole set $S$.
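For a small $S$, Example 1 can be verified exhaustively. A quick Python sketch (ours; it uses frozensets for the elements and checks the bounds and distributivity directly):

```python
from itertools import combinations

# Exhaustively check Example 1 for S = {1,2,3}: the subsets of S, ordered by
# inclusion, form a bounded distributive lattice with join = union,
# meet = intersection, least element {} and greatest element S.
S = frozenset({1, 2, 3})
subsets = [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

for x in subsets:
    assert frozenset() <= x <= S                      # 0 <= x and x <= 1
    for y in subsets:
        for z in subsets:
            assert x & (y | z) == (x & y) | (x & z)   # distributivity
```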
Classtype: variety
Equational theory: decidable
Quasiequational theory: decidable
First-order theory: undecidable
Congruence distributive: yes
Congruence modular: yes
Congruence n-permutable: no
Congruence regular: no
Congruence uniform: no
Congruence extension property: yes
Definable principal congruences: no
Equationally def. pr. cong.: no
Amalgamation property: yes
Strong amalgamation property: no
Epimorphisms are surjective: no
Locally finite: yes
Residual size: 2
$\begin{array}{lr} f(1)= &1\\ f(2)= &1\\ f(3)= &1\\ f(4)= &2\\ f(5)= &3\\ \end{array}$ $\begin{array}{lr} f(6)= &5\\ f(7)= &8\\ f(8)= &15\\ f(9)= &26\\ f(10)= &47\\ \end{array}$ $\begin{array}{lr} f(11)= &82\\ f(12)= &151\\ f(13)= &269\\ f(14)= &494\\ f(15)= &891\\ \end{array}$ $\begin{array}{lr} f(16)= &1639\\ f(17)= &2978\\ f(18)= &5483\\ f(19)= &10006\\ f(20)= &18428\\ \end{array}$
Values known up to size 49
1. Marcel Erné, Jobst Heitzig, and Jürgen Reinhold, On the number of distributive lattices, Electron. J. Combin. 9 (2002), Research Paper 24, 23 pp. (electronic) |
A quantum gate or quantum logic gate is a rudimentary quantum circuit operating on a small number of qubits. They are the analogues for quantum computers of classical logic gates for conventional digital computers. Quantum logic gates are reversible, unlike many classical logic gates. Some universal classical logic gates, such as the Toffoli gate, provide reversibility and can be directly mapped onto quantum logic gates. Quantum logic gates are represented by unitary matrices.
The most common quantum gates operate on spaces of one or two qubits. This means that, as matrices, quantum gates can be described by $2 \times 2$ or $4 \times 4$ matrices with orthonormal rows (i.e., unitary matrices).
Remark. The investigation of quantum logic gates is unrelated to quantum logic, which is a foundational formalism for quantum mechanics based on a modification of some of the rules of propositional logic.

Examples

Hadamard gate. This gate operates on a single qubit. It is represented by the Hadamard matrix:
$$H = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$$
Since the rows of the matrix are orthonormal, $H$ is indeed a unitary matrix.

Phase shifter gates. Gates in this class operate on a single qubit. They are represented by $2 \times 2$ matrices of the form
$$R(\theta) = \begin{bmatrix} 1 & 0\\ 0 & e^{2 \pi i \theta} \end{bmatrix}$$
where $\theta$ is the phase shift.

Controlled gates. Suppose $U$ is a gate that operates on single qubits, with matrix representation
$$U = \begin{bmatrix} x_{00} & x_{01} \\ x_{10} & x_{11} \end{bmatrix}$$
The controlled-$U$ gate is a gate that operates on two qubits in such a way that the first qubit serves as a control:
$|00\rangle \mapsto |00\rangle$

$|01\rangle \mapsto |01\rangle$

$|10\rangle \mapsto |1\rangle\, U|0\rangle = |1\rangle(x_{00}|0\rangle + x_{01}|1\rangle)$

$|11\rangle \mapsto |1\rangle\, U|1\rangle = |1\rangle(x_{10}|0\rangle + x_{11}|1\rangle)$
Thus the matrix of the controlled-$U$ gate is as follows:
$$\operatorname{C}(U) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & x_{00} & x_{01} \\ 0 & 0 & x_{10} & x_{11} \end{bmatrix}$$
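The block structure of $\operatorname{C}(U)$ is easy to reproduce in code. Here is a small Python sketch (all helper names are ours) that builds $\operatorname{C}(U)$ from a $2\times 2$ gate and checks that the construction preserves unitarity:

```python
# Build C(U) from a 2x2 gate U: identity on the |0x> block, U on the |1x> block.
def controlled(U):
    (a, b), (c, d) = U
    return [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, a, b],
            [0, 0, c, d]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def dagger(A):  # conjugate transpose
    return [[A[j][i].conjugate() for j in range(len(A))] for i in range(len(A))]

X = [[0, 1], [1, 0]]            # the NOT gate; controlled(X) is the CNOT gate
CNOT = controlled(X)

h = 1 / 2 ** 0.5                # C(H) stays unitary: C(H) C(H)^dagger = I
H = [[h, h], [h, -h]]
CH = controlled(H)
prod = matmul(CH, dagger(CH))
assert all(abs(prod[i][j] - (i == j)) < 1e-12 for i in range(4) for j in range(4))
```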
Uncontrolled gate. We note the difference between the controlled-$U$ gate and the uncontrolled two-qubit gate $I \otimes U$ defined as follows:

$|00\rangle \mapsto |0\rangle\, U|0\rangle$

$|01\rangle \mapsto |0\rangle\, U|1\rangle$

$|10\rangle \mapsto |1\rangle\, U|0\rangle$

$|11\rangle \mapsto |1\rangle\, U|1\rangle$
represented by the unitary matrix
$$\begin{bmatrix} x_{00} & x_{01} & 0 & 0 \\ x_{10} & x_{11} & 0 & 0 \\ 0 & 0 & x_{00} & x_{01} \\ 0 & 0 & x_{10} & x_{11} \end{bmatrix}.$$
Since this gate is reducible to more elementary gates it is usually not included in the basic repertoire of quantum gates. It is mentioned here only to contrast it with the previous controlled gate.
Universal Quantum Gates
A set of universal quantum gates is any set of gates to which any operation possible on a quantum computer can be reduced. One simple set of two-qubit universal quantum gates consists of the Hadamard gate ($H$), a phase rotation gate $R(\cos^{-1}\frac{3}{5})$, and the controlled-NOT gate, a special case of controlled-$U$ such that
$$\operatorname{CNOT} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$.
A single-gate set of universal quantum gates can also be formulated using the three-qubit Deutsch gate $D(\theta)$:

$$\operatorname{D}(\theta): |i,j,k\rangle \rightarrow \begin{cases} i \cos(\theta) |i,j,k\rangle + \sin(\theta) |i,j,1-k\rangle & \mbox{for }i=j=1 \\ |i,j,k\rangle & \mbox{otherwise}\end{cases}$$
The universal classical logic gate, the Toffoli gate, is reducible to the Deutsch gate $D\left(\frac{\pi}{2}\right)$, thus showing that all classical logic operations can be performed on a universal quantum computer.
References

M. Nielsen and I. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2000 |
Today, I am presenting Chapter 3 of the book Probability on Graphs of Geoffrey Grimmett during a monthly reading seminar at LIAFA. The title of the chapter is Percolation and self-avoiding walks. I did some computations to improve my intuition on the question. My code is in the following file: bond_percolation.sage. This post is about some of my computations. You might want to test them yourself online using the Sage Cell Server.

Basic Definitions
Let \(\mathbb{L}^d=(\mathbb{Z}^d,\mathbb{E}^d)\) be the hypercubic lattice. Let \(p\in[0,1]\). Each edge \(e\in \mathbb{E}^d\) is designated either open with probability \(p\), or closed otherwise, different edges receiving independent states. For \(x,y\in \mathbb{Z}^d\), we write \(x \leftrightarrow y\) if there exists an open path joining \(x\) and \(y\). For \(x\in \mathbb{Z}^d\), we consider the open cluster \(C_x\) containing \(x\):\[C_x = \{y \in \mathbb{Z}^d : x \leftrightarrow y \}.\]The percolation probability \(\Theta(p)\) is given by\[\Theta(p) = P_p(\vert C_0\vert=\infty).\]Finally, the critical probability is defined as\[p_c = \sup\{p : \Theta(p) = 0 \}.\]The question is to compute \(p_c\). Results in the chapter give a lower bound and an upper bound for \(p_c\). Many problems are still open, like the one claiming that \(\Theta(p_c) = 0\) for all \(d\geq 2\): it is known only for \(d=2\) and \(d\geq 19\) according to a remark in the chapter.

Some samples when p=0.5
A bond percolation sample inside the box \(\Lambda(m)=[-m,m]^d\) when \(p=0.5\) and \(d=2\):
sage: S = BondPercolationSample(p=0.5, d=2)
sage: S.plot(m=40, pointsize=10, thickness=1)
Graphics object consisting of 7993 graphics primitives
sage: _.show()
Another time gives something different:
sage: S = BondPercolationSample(p=0.5, d=2)
sage: S.plot(m=40, pointsize=10, thickness=1)
Graphics object consisting of 10176 graphics primitives
sage: _.show()

Some samples for ranges of values of p
From p=0.1 to p=0.9:
sage: percolation_graphics_array(srange(0.1,1,0.1), d=2, m=5)
From p=0.41 to p=0.49:
sage: percolation_graphics_array(srange(0.41,0.50,0.01), d=2, m=5)
From p=0.51 to p=0.59:

sage: percolation_graphics_array(srange(0.51,0.60,0.01), d=2, m=5)

Upper bound and lower bound for percolation probability \(\Theta(p)\)
In every case, we have the following upper bound for the percolation probability: \[ \Theta(p) = \mathbb{P}_p(\vert C_0\vert=\infty) \leq \mathbb{P}_p(\vert C_0\vert > 1) = 1 - \mathbb{P}_p(\vert C_0\vert = 1) = 1 - (1-p)^{2d}. \] In particular, if \(p\neq 1\), then \(\Theta(p)<1\). In Sage, define the upper bound:
sage: p,n = var('p,n')
sage: d = var('d')
sage: upper_bound = 1 - (1-p)^(2*d)
Also, from Equation (3.8), we have the following lower bound: \[ \Theta(p) \geq 1 - \sum_{n=4}^{\infty} n (4(1-p))^n. \]
In Sage, define the lower bound:
sage: p,n = var('p,n')
sage: lower_bound = 1 - sum(n*(4*(1-p))^n,n,4,oo)
sage: lower_bound.factor()
-(3072*p^5 - 14336*p^4 + 26624*p^3 - 24592*p^2 + 11288*p - 2057)/(4*p - 3)^2
This expression is not defined when \(p=3/4\), but we are interested in values in the interval \((3/4,1]\). In particular, for which value of \(p\) is this lower bound strictly larger than zero:
sage: root = lower_bound.find_root(0.76, 0.99); root 0.8639366490304586
Let's now draw a graph of the lower and upper bound:
sage: U = plot(upper_bound(d=2),(0,1),color='red', thickness=3)
sage: L = plot(lower_bound,(0.86,1),color='green', thickness=3)
sage: G = U + L
sage: G += point((root, 0), color='red', size=20)
sage: lower = r"$1-\sum_{n=4}^{\infty} n4^n(1-p)^n$"
sage: upper = r"$1 -(1-p)^{4}$"
sage: title = r"$1-\sum_{n=4}^{\infty} n4^n(1-p)^n\leq\Theta(p)\leq 1 -(1-p)^{2d}$"
sage: G += text(title, (.5, 1.05), color='black', fontsize=15)
sage: G += text(upper, (.3, 0.5), color='red', fontsize=20)
sage: G += text(lower, (.7, 0.5), color='green', fontsize=20)
sage: G += text("%.5f"%root,(0.88, .03), color='green', horizontal_alignment='left')
sage: G.show()
Thus we conclude that \(\Theta(p) >0\) for \(p>0.8639\) and thus \(p_c \leq 0.8639\).
Percolation probability - dimension 2
The code allows one to define the percolation probability function for a given dimension d. It generates n samples and considers the cluster to be infinite if its cardinality is larger than the given stop value.
Here we use Sage's adaptive recursion algorithm for drawing the plot of the percolation probability, which finds the important intervals in which to ask for more values of the function. See the help section of the plot function for details. Because T might take long to compute, we start with only 4 points.
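For readers without Sage, the estimator can be sketched in plain Python (this is our rough analogue, not the code from bond_percolation.sage): grow the open cluster of the origin by a depth-first search, sampling edge states lazily, and declare the cluster infinite once it reaches stop sites.

```python
import random

def theta_estimate(p, d=2, n=10, stop=100, seed=0):
    """Monte Carlo estimate of Theta(p): fraction of n samples in which the
    open cluster of the origin reaches at least `stop` sites."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        state = {}                          # edge -> open?  (sampled lazily)
        seen, frontier = {(0,) * d}, [(0,) * d]
        while frontier and len(seen) < stop:
            x = frontier.pop()
            for i in range(d):
                for s in (1, -1):
                    y = tuple(x[j] + (s if j == i else 0) for j in range(d))
                    edge = (min(x, y), max(x, y))
                    if edge not in state:
                        state[edge] = rng.random() < p
                    if state[edge] and y not in seen:
                        seen.add(y)
                        frontier.append(y)
        hits += len(seen) >= stop
    return hits / n

# Far below p_c the estimate is near 0; far above, near 1.
assert theta_estimate(0.05, d=2) < 0.5 < theta_estimate(0.95, d=2)
```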
When stop=100:

sage: T = PercolationProbability(d=2, n=10, stop=100)
sage: T.return_plot((0,1),adaptive_recursion=4,plot_points=4).show()

When stop=1000:

sage: T = PercolationProbability(d=2, n=10, stop=1000)
sage: T.return_plot((0,1),adaptive_recursion=4,plot_points=4).show()

When stop=2000:

sage: T = PercolationProbability(d=2, n=10, stop=2000)
sage: T.return_plot((0,1),adaptive_recursion=4,plot_points=4).show()

Percolation probability - dimension 3

When stop=100:

sage: T = PercolationProbability(d=3, n=10, stop=100)
sage: T.return_plot((0,1),adaptive_recursion=4,plot_points=4).show()

When stop=1000:

sage: T = PercolationProbability(d=3, n=10, stop=1000)
sage: T.return_plot((0,1),adaptive_recursion=4,plot_points=4).show()

Percolation probability - dimension 4

When stop=100:

sage: T = PercolationProbability(d=4, n=10, stop=100)
sage: T.return_plot((0,1),adaptive_recursion=4,plot_points=4).show()

Percolation probability - dimension 13

When stop=100:

sage: T = PercolationProbability(d=13, n=10, stop=100)
sage: T.return_plot((0,1),adaptive_recursion=4,plot_points=4).show()

Theorem 3.2
Theorem 3.2 states that \(0 < p_c < 1\), but its proof in fact does much more. Following the computation we just did for Equation (3.8), we get for \(d=2\): \[ 0.3333 < \frac{1}{2d-1} \leq p_c \leq 0.8639 \] and for \(d=3\): \[ 0.2000 < \frac{1}{2d-1} \leq p_c \leq 0.8639 \] This allows one to grasp the improvement brought later by Theorem 3.12.
Connective constant
Using the two following sequences from the On-Line Encyclopedia of Integer Sequences, A001411 (square lattice) and A001412 (cubic lattice), one can evaluate the connective constant \(\kappa(d)\).
By taking the k-th root of the k-th term of A001411, we may give an approximation of \(\kappa(2)\):
sage: L = [1, 4, 12, 36, 100, 284, 780, 2172, 5916, 16268, 44100, 120292, 324932, 881500, 2374444, 6416596, 17245332, 46466676, 124658732, 335116620, 897697164, 2408806028, 6444560484, 17266613812, 46146397316, 123481354908, 329712786220, 881317491628]
sage: for k in range(1, len(L)): print numerical_approx(L[k]^(1/k))
4.00000000000000
3.46410161513775
3.30192724889463
3.16227766016838
3.09502148400370
3.03400133198980
2.99705187539871
2.96144397263395
2.93714926770637
2.91369345857619
2.89627439045790
2.87949308754677
2.86632078916860
2.85362749495679
2.84328447096562
2.83329615650289
2.82493415671599
2.81684125361654
2.80992368218258
2.80321554383456
2.79738645741910
2.79172363211806
2.78673687369245
2.78188437392354
2.77756387722633
2.77335345579129
2.76956977331575
By taking the k-th root of the k-th term of A001412, we may give an approximation of \(\kappa(3)\):
sage: L = [1, 6, 30, 150, 726, 3534, 16926, 81390, 387966, 1853886, 8809878, 41934150, 198842742, 943974510, 4468911678, 21175146054, 100121875974, 473730252102, 2237723684094, 10576033219614, 49917327838734, 235710090502158, 1111781983442406, 5245988215191414, 24730180885580790, 116618841700433358, 549493796867100942, 2589874864863200574, 12198184788179866902, 57466913094951837030, 270569905525454674614]
sage: for k in range(1, len(L)): print numerical_approx(L[k]^(1/k))
6.00000000000000
5.47722557505166
5.31329284591305
5.19079831727404
5.12452137580198
5.06709510955294
5.02933019629493
4.99573287588832
4.97111339009676
4.94876680377358
4.93129192790635
4.91521453865211
4.90209314463520
4.88990167518413
4.87964724632057
4.87004597517131
4.86178722582108
4.85400655861169
4.84719703702142
4.84074902256992
4.83502763526502
4.82958688248615
4.82470487210973
4.82004549244633
4.81582557693112
4.81178552451599
4.80809774735294
4.80455755518719
4.80130435575213
4.79817388859565
Then, \(\kappa(2)\) would be something less than 2.769 and \(\kappa(3)\) would be something less than 4.798.
Theorem 3.12
Thus, we may evaluate the lower bound and upper bound given at Theorem 3.12. For dimension \(d=2\):
sage: k < 2.76956977331575
k < 2.76956977331575
sage: _ / (2.76956977331575 * k)
0.361066909970928 < (1/k)
sage: 1 - 0.361066909970928
0.638933090029072
The critical probability of bond percolation on \(\mathbb{L}^d\) with \(d=2\) satisfies \[ 0.3610 < \frac{1}{\kappa(2)} \leq p_c \leq 1 - \frac{1}{\kappa(2)} < 0.6389. \] If we look at the graph of the percolation probability \(\Theta(p)\) we drew above for \(d=2\), it seems that the lower bound is not far from \(p_c\). The lower bound 0.3610 is a small improvement over the simple one obtained from Theorem 3.2 (0.3333).
Similarly, for dimension \(d=3\):
sage: k < 4.79817388859565
k < 4.79817388859565
sage: _ / (4.79817388859565 * k)
0.208412621805310 < (1/k)
The critical probability of bond percolation on \(\mathbb{L}^d\) with \(d=3\) satisfies \[ 0.2084 < \frac{1}{\kappa(3)} \leq p_c \leq 1 - \frac{1}{\kappa(2)} < 0.6389. \] Again, if we look at the graph of \(\Theta(p)\) we drew above for \(d=3\), it seems that the lower bound 0.2084 is not far from \(p_c\). In this case, the lower bound 0.2084 is a rather small improvement over the lower bound from Theorem 3.2 (0.2000). This might be caused by a poor approximation of \(\kappa(3)\) from the above OEIS sequence of only 30 terms. |
I am trying to find the amplitude and phase plots of the sawtooth waveform pictured. I have already computed the Fourier series of the waveform, but I don't know how to derive the amplitude and phase plots from the sawtooth's Fourier series.
$$ x(t) = \begin{cases} \frac{V}{T}t + V & -T < t < 0 \\[6pt] \frac{V}{T}t & 0 < t < T \end{cases} $$
The Complex Fourier series of this waveform is
$$ x(t) = \sum_{n=-\infty}^{n = \infty} C_n e^{jnw_0t} $$
$$ x(t) = \sum_{n=-\infty}^{\infty}\left[ \frac{jV}{nw_0T} e^{jnw_0t} + \left( \frac{jV}{2nw_0T} - \frac{V}{n^2w_0^2T^2} \right) e^{jnw_0(t + \frac{T}{2})} + \left( \frac{V}{n^2w_0^2T^2} + \frac{jV}{nw_0T} \right) e^{jnw_0(t - \frac{T}{2})} \right] $$
where
$$ w_0 = 2 \pi f_0 = \frac{2 \pi}{T} $$
I know the amplitude plot is $|C_n|$ vs. frequency and the phase plot is $\arg(C_n)$ vs. frequency, but how would I get the amplitudes and phases from a Fourier series such as this one?
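One sanity check, before reading amplitudes off a closed form, is to evaluate $C_n=\frac{1}{T}\int_0^T x(t)e^{-jnw_0t}\,dt$ numerically. The Python sketch below (ours; it assumes $V = T = 1$) approximates the integral with a midpoint rule:

```python
import cmath, math

def C(n, samples=50_000):
    """Midpoint-rule approximation of C_n for the sawtooth with V = T = 1,
    using x(t) = V t / T on one period (0, T)."""
    V = T = 1.0
    w0 = 2 * math.pi / T
    total = 0j
    for k in range(samples):
        t = (k + 0.5) * T / samples
        total += (V * t / T) * cmath.exp(-1j * n * w0 * t)
    return total / samples

assert abs(C(0) - 0.5) < 1e-3                              # C_0 = V/2
for n in (1, 2, 3, 4):
    assert abs(abs(C(n)) - 1 / (2 * math.pi * n)) < 1e-3   # |C_n| = V/(2 pi n)
    assert abs(cmath.phase(C(n)) - math.pi / 2) < 1e-3     # arg C_n = +pi/2
```

With these values the amplitude plot decays like $1/n$ for every $n\neq 0$, and the phase of $C_n$ itself is a constant (the $2\pi nt/T$ part belongs to the exponential $e^{jnw_0t}$, not to $\arg C_n$); this gives a concrete target to compare any hand simplification against.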
Addition:
Simplifying x(t) by replacing $w_0T = 2\pi$ and $e^{j\pi} = -1$ gives me
$$ x(t) = \sum_{n=-\infty}^{\infty} \begin{cases} \frac{V}{2\pi n}\left( \frac{2 + 3(-1)^n}{2} \right) e^{j \left( \frac{2 \pi n t}{T} + \frac{\pi}{2} \right)} & \text{if } n \ne 0 \\[6pt] C_0 = \frac{VT}{2} & \text{if } n = 0 \end{cases} $$
Still not sure how I can interpret this, but it seems for $f = 0$, amplitude = $\frac{VT}{2}$ and phase = 0, and for f = $\frac{n}{T}$, amplitude = $\frac{5V}{4 \pi n}$ ( for even n ) and $\frac{V}{4 \pi n}$ (for odd n), with the phase = $\frac{2 \pi nt}{T} + \frac{\pi}{2}$ for both even and odd cases, is that right? |
Preprints (rote Reihe) des Fachbereich Mathematik
296
We show that the occupation measure on the path of a planar Brownian motion run for an arbitrary finite time interval has an average density of order three with respect to the gauge function t^2 log(1/t). This is a surprising result as it seems to be the first instance where gauge functions other than t^s and average densities of order higher than two appear naturally. We also show that the average density of order two fails to exist and prove that the density distributions, or lacunarity distributions, of order three of the occupation measure of a planar Brownian motion are gamma distributions with parameter 2.
303
We show that the intersection local times \(\mu_p\) on the intersection of \(p\) independent planar Brownian paths have an average density of order three with respect to the gauge function \(r^2\pi\cdot (log(1/r)/\pi)^p\); more precisely, almost surely, \[ \lim\limits_{\varepsilon\downarrow 0} \frac{1}{log |log\ \varepsilon|} \int_\varepsilon^{1/e} \frac{\mu_p(B(x,r))}{r^2\pi\cdot (log(1/r)/\pi)^p} \frac{dr}{r\ log (1/r)} = 2^p \mbox{ at $\mu_p$-almost every $x$.} \] We also show that the lacunarity distributions of \(\mu_p\), at \(\mu_p\)-almost every point, are given as the distribution of the product of \(p\) independent gamma(2)-distributed random variables. The main tools of the proof are a Palm distribution associated with the intersection local time and an approximation theorem of Le Gall. |
I'm having a bit of trouble taking derivatives and collating my results for trig functions of the form $\sin 3x$, for example.

The specific problem I'm stuck on is:
Find the derivative of $2\sin3t\cos4t$
I'll show what I have done, but I'm really looking for some explanation of a general case for this type of problem.
I let $u = 2\sin 3t$ and $v = \cos 4t$.
$du/dt=6\cos3t$
$dv/dt = -4\sin4t$
$d/dt[2\sin3t\cos4t] = u\cdot dv/dt +v\cdot du/dt$
$=-4\sin4t(2\sin3t)+6\cos3t(\cos4t)$
Firstly, are there any glaring mistakes in the above?
and secondly, my book says the answer should be $7\cos(7t) - \cos(t)$. I'm unsure how to simplify my answer down to actually check it against the book.
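One general way to check an answer like this, without simplifying by hand, is to compare the expressions numerically at a few points (a Python sketch; the names are ours):

```python
import math

f    = lambda t: 2 * math.sin(3*t) * math.cos(4*t)          # original function
mine = lambda t: -4*math.sin(4*t)*(2*math.sin(3*t)) + 6*math.cos(3*t)*math.cos(4*t)
book = lambda t: 7*math.cos(7*t) - math.cos(t)

h = 1e-5                                                    # central-difference step
for t in (0.0, 0.3, 1.0, 2.5):
    numeric = (f(t + h) - f(t - h)) / (2 * h)
    assert abs(mine(t) - book(t)) < 1e-9    # the two answers agree
    assert abs(numeric - book(t)) < 1e-6    # and both match the slope of f
```

Here the product-rule answer agrees with the book's $7\cos(7t)-\cos(t)$; algebraically the bridge is the product-to-sum identity $2\sin 3t\cos 4t=\sin 7t-\sin t$.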
I'd appreciate some general help on how I treat functions in this format.
Thank you! |
Consider a Dirichlet character, $\chi(n)$, and the partial sum :
$$S(\chi,x)=\bigg |\sum_{n=1}^{x} \chi(n)\bigg|$$
There are many works bounding this sum when $\chi$ is a primitive character, but what can we say if $\chi$ is not primitive?
More specifically, if we fix a primitive Dirichlet character $\chi$ and then define from it a sequence of non-primitive characters $\chi_N$ defined by:
$$\forall n, \chi_1(n)=\chi(n)$$
$$\forall n,\ \chi_N(n)=\chi_{N-1}(n)\cdot\chi^{P_N}(n)$$
where $\chi^{P_N}(n)$ is the principal Dirichlet character associated to the $N$-th prime number (not counting 2).

(So $\chi^{P_N}(n)$ is the principal character simply defined by $\chi^{P_N}(n) = 0$ if $n$ is a multiple of the $N$-th prime number $P_N$, and $1$ otherwise.)
This sequence of characters is built from the original character by "removing a prime at each step".

Question: how does the maximum of $S(\chi_N,x)$ evolve for these characters?

I would like to show that this sequence contains infinitely many characters with $\max_x S(\chi_N,x)$ below a fixed constant. Is that realistic?
Any reference on bounding partial sums of imprimitive characters? |
In a university statistics course, we were presented with a "proof" of the Weak Law of Large Numbers (as it applies to population samples) based on Chebyshev's inequality. The steps the professor took are nearly identical to those linked in the following Wikipedia article: https://en.wikipedia.org/wiki/Law_of_large_numbers#Proof_using_Chebyshev.27s_inequality
The main difference in the proofs appeared when my professor began with Chebyshev's inequality:
$P(|\bar{X_n}-\mu|< k\sigma)\geq1-\frac{1}{k^2}$
Next, she replaced $\sigma$ with $\frac{\sigma}{\sqrt{n}}$ and $\frac{k\sigma}{\sqrt{n}}$ with $c$, so
$P(\mu-c<\bar{X_n}< \mu+c)\geq1-\frac{\sigma^2}{nc^2}$
This aligns identically with the proof on Wikipedia. We examine this as the sample size $n$ approaches infinity and claim the probability approaches 1 that the sample average ($\bar{X_n}$) approaches the expected value ($\mu$).
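The claimed behavior is easy to simulate; note that in this form of the statement $c$ is held fixed while $n$ grows (so $k=c\sqrt{n}/\sigma\to\infty$). A Python sketch (ours, using uniform(0,1) samples, for which $\mu=1/2$ and $\sigma^2=1/12$):

```python
import random

def prob_within(n, c, trials=500, seed=1):
    """Estimate P(|sample mean - mu| < c) for n uniform(0,1) draws."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        mean = sum(rng.random() for _ in range(n)) / n
        hits += abs(mean - 0.5) < c
    return hits / trials

c = 0.05                      # fixed half-width of the interval around mu
p10, p1000 = prob_within(10, c), prob_within(1000, c)
assert p10 < p1000            # concentration improves with n
assert p1000 > 0.95           # Chebyshev: >= 1 - (1/12)/(1000 c^2) ~ 0.967
```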
Here are my problems:
1: As $n$ approaches infinity, $c$ approaches 0, so $P(\mu-c<\bar{X_n}< \mu+c)=P(\mu<\bar{X_n}< \mu)$. This doesn't make sense, since $\mu$ is not less than $\mu$.
Given this question, does this proof have any merit? Or am I perhaps misunderstanding something algebraically or conceptually? Thanks for your time! |
I am trying to evaluate $$\lim_{x \to \frac{\pi}{4}} \frac{1-\tan x}{x-\frac{\pi}{4}}$$ without using L'hopital's rule. However, I am not sure what to do. The only thing that came to my mind was to change the tan to sin over cos and get a common denominator but I felt that won't get me anywhere. A hint will be greatly appreciated.
If you substitute $t=x-\pi/4$, then $$ \tan x=\tan(t+\pi/4)=\frac{\tan t+1}{1-\tan t} $$ so your limit is $$ \lim_{t\to0}\frac{1}{t}\left(1-\frac{\tan t+1}{1-\tan t}\right)= \lim_{t\to0}\frac{\tan t}{t}\frac{-2}{1-\tan t} $$ Alternatively, recall that $$ \cos x-\sin x=-\sqrt{2}\sin\left(x-\frac{\pi}{4}\right) $$ and therefore $$ 1-\tan x=-\sqrt{2}\sin\left(x-\frac{\pi}{4}\right)\frac{1}{\cos x} $$ Hence the limit can be rewritten as $$ \lim_{x\to\pi/4}\frac{\sin\left(x-\frac{\pi}{4}\right)}{x-\frac{\pi}{4}}\frac{-\sqrt{2}}{\cos x} $$
We're looking for
$$\lim_{x\to{\pi\over 4}}-{\tan{x}-\tan{\pi\over 4}\over x-{\pi\over 4}}$$
And this is $-\tan'{\pi\over 4}=-2$
$$\lim_{x \to \frac{\pi}{4}} \frac{1-\tan x}{x-\frac{\pi}{4}} =\lim_{x \to \frac{\pi}{4}} \frac{-\tan (x-\frac{\pi}{4})}{x-\frac{\pi}{4}}(1+ \tan x)=-1(2)=-2$$
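All of these answers can be cross-checked numerically; a small Python sketch approaching $\pi/4$ from both sides:

```python
import math

# Numerically approaching x = pi/4 suggests the limit -2 (a check, not a proof).
g = lambda x: (1 - math.tan(x)) / (x - math.pi / 4)

for h in (1e-3, 1e-5, 1e-7):
    assert abs(g(math.pi / 4 + h) - (-2)) < 1e-2
    assert abs(g(math.pi / 4 - h) - (-2)) < 1e-2
```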
Hint
This kind of problem is rather simple to address if you know Taylor series. Assuming you do, expanding at $x=a$, you have
$$\tan(x)=\tan (a)+ \left(\tan ^2(a)+1\right)(x-a)+O\left((x-a)^2\right)$$ |
Using Properties of Inverse Matrices, Simplify the Expression
Problem 694
Let $A, B, C$ be $n\times n$ invertible matrices. When you simplify the expression\[C^{-1}(AB^{-1})^{-1}(CA^{-1})^{-1}C^2,\]which matrix do you get?

(a) $A$

(b) $C^{-1}A^{-1}BC^{-1}AC^2$

(c) $B$

(d) $C^2$

(e) $C^{-1}BC$

(f) $C$
In this problem, we use the following facts about inverse matrices: if $P, Q$ are invertible matrices, then we have\[(PQ)^{-1}=Q^{-1}P^{-1} \text{ and } (P^{-1})^{-1}=P.\]
Using these, we simplify the given expression as follows:\begin{align*}&C^{-1}(AB^{-1})^{-1}(CA^{-1})^{-1}C^2\\&=C^{-1}(B^{-1})^{-1}A^{-1}(A^{-1})^{-1}C^{-1}C^2\\&=C^{-1}BA^{-1}AC^{-1}C^2\\&=C^{-1}BIC\\&=C^{-1}BC,\end{align*}where we used the identity $A^{-1}A=I$, the identity matrix, in the third step. Thus, the answer is (e).
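The identity can also be sanity-checked on concrete matrices. Here is a Python sketch using exact rational $2\times 2$ matrices (the helper functions and the particular matrices are ours):

```python
from fractions import Fraction as F

def mul(X, Y):   # 2x2 matrix product
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def inv(X):      # 2x2 matrix inverse (requires det != 0)
    det = X[0][0]*X[1][1] - X[0][1]*X[1][0]
    return [[ X[1][1]/det, -X[0][1]/det],
            [-X[1][0]/det,  X[0][0]/det]]

A = [[F(1), F(2)], [F(3), F(5)]]   # det = -1, invertible
B = [[F(2), F(1)], [F(1), F(1)]]   # det =  1, invertible
C = [[F(1), F(1)], [F(0), F(1)]]   # det =  1, invertible

# C^{-1} (A B^{-1})^{-1} (C A^{-1})^{-1} C^2
lhs = mul(mul(mul(inv(C), inv(mul(A, inv(B)))), inv(mul(C, inv(A)))), mul(C, C))
rhs = mul(mul(inv(C), B), C)       # answer (e): C^{-1} B C
assert lhs == rhs
assert rhs != B                    # and the common wrong answer (c) differs
```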
Common Mistake
This is a midterm exam problem of Linear Algebra at the Ohio State University.
Here are two common mistakes.
The first one is to use the wrong identity $(PQ)^{-1}=P^{-1}Q^{-1}$. This is WRONG. If you use this, then most likely you chose (b).
The second one is to think matrix multiplication is commutative. Even though $PQ\neq QP$ in general for matrices $P, Q$, some students freely cancel terms. In this case, (c) was chosen.
Be careful: if you have $A^{-1}BA$, then in general it is not equal to $B$. You cannot cancel $A$ because $A$ and $A^{-1}$ are not adjacent to each other.
|
Abbreviation:
ComBCK
A commutative BCK-algebra is a structure $\mathbf{A}=\langle A,\cdot ,0\rangle$ of type $\langle 2,0\rangle$ such that
(1): $((x\cdot y)\cdot (x\cdot z))\cdot (z\cdot y) = 0$
(2): $x\cdot 0 = x$
(3): $0\cdot x = 0$
(4): $x\cdot y=y\cdot x= 0 \Longrightarrow x=y$
(5): $x\cdot (x\cdot y) = y\cdot (y\cdot x)$
Remark: Note that the commutativity does not refer to the operation $\cdot$, but rather to the term operation $x\wedge y=x\cdot (x\cdot y)$, which turns out to be a meet with respect to the following partial order:
$x\le y \iff x\cdot y=0$, with $0$ as least element.
A commutative BCK-algebra is a BCK-algebra $\mathbf{A}=\langle A,\cdot ,0\rangle$ such that
$x\cdot (x\cdot y) = y\cdot (y\cdot x)$
Let $\mathbf{A}$ and $\mathbf{B}$ be commutative BCK-algebras. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism:
$h(x\cdot y)=h(x)\cdot h(y) \mbox{ and } h(0)=0$
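As a sanity check (this example is not from the page itself): the natural numbers with truncated subtraction $x\cdot y=\max(x-y,0)$ form a commutative BCK-algebra, which a brute-force test on the carrier $\{0,\dots,4\}$ confirms:

```python
# Verify axioms (1)-(5) of a commutative BCK-algebra by exhaustion.
def is_commutative_bck(A, op):
    ok = all(op(op(op(x, y), op(x, z)), op(z, y)) == 0
             for x in A for y in A for z in A)                  # (1)
    ok = ok and all(op(x, 0) == x for x in A)                   # (2)
    ok = ok and all(op(0, x) == 0 for x in A)                   # (3)
    ok = ok and all(x == y
                    for x in A for y in A
                    if op(x, y) == 0 and op(y, x) == 0)         # (4)
    ok = ok and all(op(x, op(x, y)) == op(y, op(y, x))
                    for x in A for y in A)                      # (5)
    return ok

# carrier {0,...,4} with truncated subtraction
print(is_commutative_bck(range(5), lambda x, y: max(x - y, 0)))  # True
```

Here $x\wedge y=x\cdot(x\cdot y)=\min(x,y)$, matching the remark that commutativity refers to the induced meet.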
Finite members:
$\begin{array}{lr} f(1)= &1\\ f(2)= &1\\ f(3)= &2\\ f(4)= &5\\ f(5)= &11\\ f(6)= &28\\ f(7)= &72\\ f(8)= &192\\ \end{array}$ |
By definition, supersymmetry transformations square to spacetime translations. In a superspace formalism the supersymmetry operator is constructed from the vector field $\partial_\theta$ with respect to the odd coordinates $\theta$. As this operator has to square to the vector field $\partial_x$ with respect to the even coordinates $x$, which is of dimension $1$, the vector field with respect to the odd coordinate has to be of dimension $1/2$, and so the odd coordinate has to be of dimension $-1/2$.
Equivalently, a typical superfield is of the form
$\phi + \theta \psi +...$
where $\phi$ is a scalar and $\psi$ a spinor. In $d$ spacetime dimensions, a scalar is of dimension $(d-2)/2$, a spinor is of dimension $(d-1)/2$ and so $\theta$ has to be of dimension $-1/2$. |
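The bookkeeping can be spelled out as a trivial check (using only the dimension assignments quoted above: homogeneity of $\phi+\theta\psi$ forces $[\theta]=[\phi]-[\psi]$):

```python
# In d spacetime dimensions a scalar has mass dimension (d-2)/2 and a
# spinor (d-1)/2; the odd coordinate theta must make phi + theta*psi
# homogeneous, so [theta] = [phi] - [psi].
from fractions import Fraction

def theta_dimension(d):
    dim_scalar = Fraction(d - 2, 2)
    dim_spinor = Fraction(d - 1, 2)
    return dim_scalar - dim_spinor

print({d: theta_dimension(d) for d in (3, 4, 6, 10, 11)})  # always -1/2
```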
The Area Interaction Point Process Model
Creates an instance of the Area Interaction point process model (Widom-Rowlinson penetrable spheres model) which can then be fitted to point pattern data.
Usage
AreaInter(r)
Arguments r
The radius of the discs in the area interaction process
Details
This function defines the interpoint interaction structure of a point process called the Widom-Rowlinson penetrable sphere model or area-interaction process. It can be used to fit this model to point pattern data.
The function ppm(), which fits point process models to point pattern data, requires an argument of class "interact" describing the interpoint interaction structure of the model to be fitted. The appropriate description of the area interaction structure is yielded by the function AreaInter(). See the examples below.
In standard form, the area-interaction process (Widom and Rowlinson, 1970; Baddeley and Van Lieshout, 1995) with disc radius \(r\), intensity parameter \(\kappa\) and interaction parameter \(\gamma\) is a point process with probability density $$ f(x_1,\ldots,x_n) = \alpha \kappa^{n(x)} \gamma^{-A(x)} $$ for a point pattern \(x\), where \(x_1,\ldots,x_n\) represent the points of the pattern, \(n(x)\) is the number of points in the pattern, and \(A(x)\) is the area of the region formed by the union of discs of radius \(r\) centred at the points \(x_1,\ldots,x_n\). Here \(\alpha\) is a normalising constant.
The interaction parameter \(\gamma\) can be any positive number. If \(\gamma = 1\) then the model reduces to a Poisson process with intensity \(\kappa\). If \(\gamma < 1\) then the process is regular, while if \(\gamma > 1\) the process is clustered. Thus, an area interaction process can be used to model either clustered or regular point patterns. Two points interact if the distance between them is less than \(2r\).
The standard form of the model, shown above, is a little complicated to interpret in practical applications. For example, each isolated point of the pattern \(x\) contributes a factor \(\kappa \gamma^{-\pi r^2}\) to the probability density.
In spatstat, the model is parametrised in a different form, which is easier to interpret. In canonical scale-free form, the probability density is rewritten as $$ f(x_1,\ldots,x_n) = \alpha \beta^{n(x)} \eta^{-C(x)} $$ where \(\beta\) is the new intensity parameter, \(\eta\) is the new interaction parameter, and \(C(x) = B(x) - n(x)\) is the interaction potential. Here $$ B(x) = \frac{A(x)}{\pi r^2} $$ is the normalised area (so that the discs have unit area). In this formulation, each isolated point of the pattern contributes a factor \(\beta\) to the probability density (so the first order trend is \(\beta\)). The quantity \(C(x)\) is a true interaction potential, in the sense that \(C(x) = 0\) if the point pattern \(x\) does not contain any points that lie close together (closer than \(2r\) units apart).
When a new point \(u\) is added to an existing point pattern \(x\), the rescaled potential \(-C(x)\) increases by a value between 0 and 1. The increase is zero if \(u\) is not close to any point of \(x\). The increase is 1 if the disc of radius \(r\) centred at \(u\) is completely contained in the union of discs of radius \(r\) centred at the data points \(x_i\). Thus, the increase in potential is a measure of how close the new point \(u\) is to the existing pattern \(x\). Addition of the point \(u\) contributes a factor \(\beta \eta^\delta\) to the probability density, where \(\delta\) is the increase in potential.
The old parameters \(\kappa,\gamma\) of the standard form are related to the new parameters \(\beta,\eta\) of the canonical scale-free form, by $$ \beta = \kappa \gamma^{-\pi r^2} = \kappa /\eta $$ and $$ \eta = \gamma^{\pi r^2} $$ provided \(\gamma\) and \(\kappa\) are positive and finite.
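The parameter conversion can be sketched in code (an illustrative Python sketch, not part of spatstat, which performs this conversion internally):

```python
# Convert the standard parameters (kappa, gamma) of the area-interaction
# model to the canonical scale-free parameters (beta, eta) for radius r.
import math

def to_canonical(kappa, gamma, r):
    eta = gamma ** (math.pi * r ** 2)
    beta = kappa / eta            # equals kappa * gamma**(-pi * r**2)
    return beta, eta

beta, eta = to_canonical(kappa=2.0, gamma=1.0, r=0.1)
print(beta, eta)  # gamma = 1 is the Poisson case: beta = kappa, eta = 1
```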
In the canonical scale-free form, the parameter \(\eta\) can take any nonnegative value. The value \(\eta = 1\) again corresponds to a Poisson process, with intensity \(\beta\). If \(\eta < 1\) then the process is regular, while if \(\eta > 1\) the process is clustered. The value \(\eta = 0\) corresponds to a hard core process with hard core radius \(r\) (interaction distance \(2r\)).
The nonstationary area interaction process is similar, except that the contribution of each individual point \(x_i\) is a function \(\beta(x_i)\) of location, rather than a constant \(\beta\).
Note that the only argument of AreaInter() is the disc radius r. When r is fixed, the model becomes an exponential family. The canonical parameters \(\log(\beta)\) and \(\log(\eta)\) are estimated by ppm(), not fixed in AreaInter().
Value
An object of class "interact" describing the interpoint interaction structure of the area-interaction process with disc radius \(r\).
Warnings
The interaction distance of this process is equal to 2 * r. Two discs of radius r overlap if their centres are closer than 2 * r units apart.
The estimate of the interaction parameter \(\eta\) is unreliable if the interaction radius r is too small or too large. In these situations the model is approximately Poisson so that \(\eta\) is unidentifiable. As a rule of thumb, one can inspect the empty space function of the data, computed by Fest. The value \(F(r)\) of the empty space function at the interaction radius r should be between 0.2 and 0.8.
References
Baddeley, A.J. and Van Lieshout, M.N.M. (1995). Area-interaction point processes.
Annals of the Institute of Statistical Mathematics 47 (1995) 601--619.
Widom, B. and Rowlinson, J.S. (1970). New model for the study of liquid-vapor phase transitions.
The Journal of Chemical Physics 52 (1970) 1670--1684.
Aliases
AreaInter
Examples
# prints a sensible description of itself
AreaInter(r=0.1)

# Note the reach is twice the radius
reach(AreaInter(r=1))

# Fit the stationary area interaction process to Swedish Pines data
data(swedishpines)
ppm(swedishpines, ~1, AreaInter(r=7))

# Fit the stationary area interaction process to `cells'
data(cells)
ppm(cells, ~1, AreaInter(r=0.06))
# eta=0 indicates hard core process.

# Fit a nonstationary area interaction with log-cubic polynomial trend
ppm(swedishpines, ~polynom(x/10,y/10,3), AreaInter(r=7))
Documentation reproduced from package spatstat, version 1.59-0, License: GPL (>= 2) |
skills to develop
To learn how some events are naturally expressible in terms of other events. To learn how to use special formulas for the probability of an event that is expressed in terms of one or more other events.
Some events can be naturally expressed in terms of other, sometimes simpler, events.
Complements
Definition: complement
The complement of an event \(A\) in a sample space \(S\), denoted \(A^c\), is the collection of all outcomes in \(S\) that are not elements of the set \(A\). It corresponds to negating any description in words of the event \(A\).
Example \(\PageIndex{1}\)
Two events connected with the experiment of rolling a single die are \(E\): “the number rolled is even” and \(T\): “the number rolled is greater than two.” Find the complement of each.
Solution:
In the sample space \(S=\{1,2,3,4,5,6\}\) the corresponding sets of outcomes are \(E=\{2,4,6\}\) and \(T=\{3,4,5,6\}\). The complements are \(E^c=\{1,3,5\}\) and \(T^c=\{1,2\}\).
In words the complements are described by “the number rolled is not even” and “the number rolled is not greater than two.” Of course easier descriptions would be “the number rolled is odd” and “the number rolled is less than three.”
If there is a \(60\%\) chance of rain tomorrow, what is the probability of fair weather? The obvious answer, \(40\%\), is an instance of the following general rule.
Definition: Probability Rule for Complements
The Probability Rule for Complements states that \[P(A^c) = 1 - P(A)\]
This formula is particularly useful when finding the probability of an event directly is difficult.
Example \(\PageIndex{2}\)
Find the probability that at least one heads will appear in five tosses of a fair coin.
Solution:
Identify outcomes by lists of five \(hs\) and \(ts\), such as \(tthtt\) and \(hhttt\). Although it is tedious to list them all, it is not difficult to count them. Think of using a tree diagram to do so. There are two choices for the first toss. For each of these there are two choices for the second toss, hence \(2\times 2 = 4\) outcomes for two tosses. For each of these four outcomes, there are two possibilities for the third toss, hence \(4\times 2 = 8\) outcomes for three tosses. Similarly, there are \(8\times 2 = 16\) outcomes for four tosses and finally \(16\times 2 = 32\) outcomes for five tosses.
Let \(O\) denote the event “at least one heads.” There are many ways to obtain at least one heads, but only one way to fail to do so: all tails. Thus although it is difficult to list all the outcomes that form \(O\), it is easy to write \(O^c = \{ttttt\}\). Since there are \(32\) equally likely outcomes, each has probability \(\frac{1}{32}\), so \(P(O^c)=\frac{1}{32}\), hence \(P(O) = 1-\frac{1}{32}\approx 0.97\) or about a \(97\%\) chance.
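The counting argument can be verified by enumerating all \(32\) outcomes (a Python sketch, not part of the text):

```python
# Enumerate all 2**5 = 32 outcomes of five tosses and verify the
# complement rule for O = "at least one heads".
from itertools import product

outcomes = list(product("ht", repeat=5))
p_O = sum(1 for o in outcomes if "h" in o) / len(outcomes)
print(len(outcomes), p_O, 1 - 1 / 32)  # 32 0.96875 0.96875
```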
Intersection of Events
Definition: intersections
The intersection of events \(A\) and \(B\), denoted \(A\cap B\), is the collection of all outcomes that are elements of both of the sets \(A\) and \(B\). It corresponds to combining descriptions of the two events using the word “and.”
To say that the event \(A\cap B\) occurred means that on a particular trial of the experiment both \(A\) and \(B\) occurred. A visual representation of the intersection of events \(A\) and \(B\) in a sample space \(S\) is given in Figure \(\PageIndex{1}\). The intersection corresponds to the shaded lens-shaped region that lies within both ovals.
Figure \(\PageIndex{1}\) : The Intersection of Events A and B
Example \(\PageIndex{3}\)
In the experiment of rolling a single die, find the intersection \(E\cap T\) of the events \(E\):
“the number rolled is even” and \(T\): “the number rolled is greater than two.” Solution:
The sample space is \(S=\{1,2,3,4,5,6\}\). Since the outcomes that are common to \(E=\{2,4,6\}\) and \(T=\{3,4,5,6\}\) are \(4\) and \(6\), \(E\cap T=\{4,6\}\).
In words the intersection is described by “the number rolled is even and is greater than two.” The only numbers between one and six that are both even and greater than two are four and six, corresponding to \(E\cap T\) given above.
Example \(\PageIndex{4}\)
A single die is rolled.
1. Suppose the die is fair. Find the probability that the number rolled is both even and greater than two.
2. Suppose the die has been “loaded” so that \(P(1)=\frac{1}{12}\), \(P(6)=\frac{3}{12}\), and the remaining four outcomes are equally likely with one another. Now find the probability that the number rolled is both even and greater than two.
Solution:
In both cases the sample space is \(S=\{1,2,3,4,5,6\}\) and the event in question is the intersection \(E\cap T=\{4,6\}\) of the previous example.
1. Since the die is fair, all outcomes are equally likely, so by counting we have \(P(E\cap T)=\frac{2}{6}\).
2. The information on the probabilities of the six outcomes that we have so far is
\[\begin{array}{l|cccccc}Outcome & 1 & 2 & 3 & 4 & 5 & 6 \\ Probability & \frac{1}{12} & p & p & p & p & \frac{3}{12}\end{array}\]
Since \(P(1)+P(6)=\frac{4}{12}=\frac{1}{3}\),
\[P(2) + P(3) + P(4) + P(5) = 1 - \frac{1}{3} = \frac{2}{3}\]
Thus \(4p=\frac{2}{3}\), so \(p=\frac{1}{6}\). In particular \(P(4)=\frac{1}{6}\) therefore:
\[P(E\cap T) = P(4) + P(6) = \frac{1}{6} + \frac{3}{12} = \frac{5}{12}\]
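The loaded-die computation can be checked with exact fractions (an illustrative sketch):

```python
# Loaded die: P(1) = 1/12, P(6) = 3/12, remaining mass split equally.
from fractions import Fraction

p = {1: Fraction(1, 12), 6: Fraction(3, 12)}
each = (1 - p[1] - p[6]) / 4          # shared by outcomes 2, 3, 4, 5
for k in (2, 3, 4, 5):
    p[k] = each

print(each)          # 1/6
print(p[4] + p[6])   # P(E ∩ T) = 1/6 + 3/12 = 5/12
```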
Definition: mutually exclusive
Events \(A\) and \(B\) are mutually exclusive (cannot both occur at once) if they have no elements in common.
For \(A\) and \(B\) to have no outcomes in common means precisely that it is impossible for both \(A\) and \(B\) to occur on a single trial of the random experiment. This gives the following rule:
Definition: Probability Rule for Mutually Exclusive Events
Events \(A\) and \(B\) are mutually exclusive if and only if
\[P(A ∩ B) = 0\]
Any event \(A\) and its complement \(A^c\) are mutually exclusive, but \(A\) and \(B\) can be mutually exclusive without being complements.
Example \(\PageIndex{5}\)
In the experiment of rolling a single die, find three choices for an event \(A\) so that the events \(A\) and \(E\): “the number rolled is even” are mutually exclusive.
Solution:
Since \(E=\{2,4,6\}\) and we want \(A\) to have no elements in common with \(E\), any event that does not contain any even number will do. Three choices are \(\{1,3,5\}\) (the complement \(E^c\), the odds), \(\{1,3\}\), and \(\{5\}\).
Union of Events
Definition: Union of Events
The union of events \(A\) and \(B,\) denoted \(A\cup B\), is the collection of all outcomes that are elements of one or the other of the sets \(A\) and \(B\), or of both of them. It corresponds to combining descriptions of the two events using the word “or.”
To say that the event \(A\cup B\) occurred means that on a particular trial of the experiment either \(A\) or \(B\) occurred (or both did). A visual representation of the union of events \(A\) and \(B\) in a sample space \(S\) is given in Figure \(\PageIndex{2}\). The union corresponds to the shaded region.
Figure \(\PageIndex{2}\): The Union of Events A and B
Example \(\PageIndex{6}\)
In the experiment of rolling a single die, find the union of the events \(E\): “the number rolled is even” and \(T\): “the number rolled is greater than two.”
Solution:
Since the outcomes that are in either \(E=\{2,4,6\}\) or \(T=\{3,4,5,6\}\) (or both) are \(2, 3, 4, 5,\) and \(6\), that means \(E\cup T=\{2,3,4,5,6\}\).
Note that an outcome such as \(4\) that is in both sets is still listed only once (although strictly speaking it is not incorrect to list it twice).
In words the union is described by “the number rolled is even or is greater than two.” Every number between one and six except the number one is either even or is greater than two, corresponding to \(E\cup T\) given above.
Example \(\PageIndex{7}\)
A two-child family is selected at random. Let \(B\) denote the event that at least one child is a boy, let \(D\) denote the event that the genders of the two children differ, and let \(M\) denote the event that the genders of the two children match. Find \(B\cup D\) and \(B\cup M\).
Solution:
A sample space for this experiment is \(S=\{bb,bg,gb,gg\}\), where the first letter denotes the gender of the firstborn child and the second letter denotes the gender of the second child. The events are \(B=\{bb,bg,gb\}\), \(D=\{bg,gb\}\), and \(M=\{bb,gg\}\).
Each outcome in \(D\) is already in \(B\), so the outcomes that are in at least one or the other of the sets \(B\) and \(D\) is just the set \(B\) itself: \(B\cup D=\{bb,bg,gb\}=B\).
Every outcome in the whole sample space \(S\) is in at least one or the other of the sets \(B\) and \(M\), so \(B\cup M=\{bb,bg,gb,gg\}=S\).
Definition: Additive Rule of Probability
A useful property to know is the Additive Rule of Probability, which is
\[P(A\cup B) = P(A) + P(B) − P(A\cap B)\]
The next example, in which we compute the probability of a union both by counting and by using the formula, shows why the last term in the formula is needed.
Example \(\PageIndex{8}\)
Two fair dice are thrown. Find the probabilities of the following events:
1. both dice show a four
2. at least one die shows a four
Solution:
As was the case with tossing two identical coins, actual experience dictates that for the sample space to have equally likely outcomes we should list outcomes as if we could distinguish the two dice. We could imagine that one of them is red and the other is green. Then any outcome can be labeled as a pair of numbers as in the following display, where the first number in the pair is the number of dots on the top face of the green die and the second number in the pair is the number of dots on the top face of the red die.
\[\begin{array}{cccccc}11 & 12 & 13 & 14 & 15 & 16 \\ 21 & 22 & 23 & 24 & 25 & 26 \\ 31 & 32 & 33 & 34 & 35 & 36 \\ 41 & 42 & 43 & 44 & 45 & 46 \\ 51 & 52 & 53 & 54 & 55 & 56 \\ 61 & 62 & 63 & 64 & 65 & 66\end{array}\]
There are \(36\) equally likely outcomes, of which exactly one corresponds to two fours, so the probability of a pair of fours is \(1/36\). From the table we can see that there are \(11\) pairs that correspond to the event in question: the six pairs in the fourth row (the green die shows a four) plus the additional five pairs other than the pair \(44\), already counted, in the fourth column (the red die is four), so the answer is \(11/36\). To see how the formula gives the same number, let \(A_G\) denote the event that the green die is a four and let \(A_R\) denote the event that the red die is a four. Then clearly by counting we get: \(P(A_G) = 6/36\) and \(P(A_R) = 6/36\). This is the computation from part 1, of course. Thus by the Additive Rule of Probability we get:
\[P(A_G\cup A_R ) = P(A_G) + P(A_R) - P(A_G\cap A_R) = \frac{6}{36} + \frac{6}{36} - \frac{1}{36} = \frac{11}{36}\]
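The Additive Rule computation can be confirmed by brute-force enumeration of the \(36\) pairs (a Python sketch):

```python
# Two fair dice: compare direct counting with the Additive Rule.
from fractions import Fraction
from itertools import product

rolls = set(product(range(1, 7), repeat=2))   # 36 equally likely pairs
A_G = {r for r in rolls if r[0] == 4}         # green die shows a four
A_R = {r for r in rolls if r[1] == 4}         # red die shows a four
P = lambda ev: Fraction(len(ev), len(rolls))

print(P(A_G | A_R))                           # 11/36 by direct counting
print(P(A_G) + P(A_R) - P(A_G & A_R))         # 11/36 by the Additive Rule
```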
Example \(\PageIndex{9}\)
A tutoring service specializes in preparing adults for high school equivalence tests. Among all the students seeking help from the service, \(63\%\) need help in mathematics, \(34\%\) need help in English, and \(27\%\) need help in both mathematics and English. What is the percentage of students who need help in either mathematics or English?
Solution:
Imagine selecting a student at random, that is, in such a way that every student has the same chance of being selected. Let \(M\) denote the event “the student needs help in mathematics” and let \(E\) denote the event “the student needs help in English.” The information given is that \(P(M) = 0.63\), \(P(E) = 0.34\) and \(P(M\cap E) = 0.27\). Thus the Additive Rule of Probability gives:
\[P(M\cup E) = P(M) + P(E) - P(M\cap E) = 0.63 + 0.34 - 0.27 = 0.70\]
Note how the naïve reasoning that if \(63\%\) need help in mathematics and \(34\%\) need help in English then \(63\) plus \(34\) or \(97\%\) need help in one or the other gives a number that is too large. The percentage that need help in both subjects must be subtracted off, else the people needing help in both are counted twice, once for needing help in mathematics and once again for needing help in English. The simple sum of the probabilities would work if the events in question were mutually exclusive, for then \(P(A\cap B)\) is zero, and makes no difference.
Example \(\PageIndex{10}\)
Volunteers for a disaster relief effort were classified according to both specialty (\(C\): construction, \(E\): education, \(M\): medicine) and language ability (\(S\): speaks a single language fluently, \(T\): speaks two or more languages fluently). The results are shown in the following two-way classification table:
\[\begin{array}{c|cc} & \multicolumn{2}{c}{\text{Language Ability}} \\ \text{Specialty} & S & T \\ \hline C & 12 & 1 \\ E & 4 & 3 \\ M & 6 & 2 \end{array}\]
The first row of numbers means that \(12\) volunteers whose specialty is construction speak a single language fluently, and \(1\) volunteer whose specialty is construction speaks at least two languages fluently. Similarly for the other two rows.
A volunteer is selected at random, meaning that each one has an equal chance of being chosen. Find the probability that:
1. his specialty is medicine and he speaks two or more languages;
2. either his specialty is medicine or he speaks two or more languages;
3. his specialty is something other than medicine.
Solution:
When information is presented in a two-way classification table it is typically convenient to adjoin to the table the row and column totals, to produce a new table like this:
\[\begin{array}{c|cc|c} \text{Specialty} & S & T & \text{Total} \\ \hline C & 12 & 1 & 13 \\ E & 4 & 3 & 7 \\ M & 6 & 2 & 8 \\ \hline \text{Total} & 22 & 6 & 28 \end{array}\]
1. The probability sought is \(P(M\cap T)\). The table shows that there are \(2\) such people, out of \(28\) in all, hence \(P(M\cap T) = 2/28 \approx 0.07\) or about a \(7\%\) chance.
2. The probability sought is \(P(M\cup T)\). The third row total and the grand total in the sample give \(P(M) = 8/28\). The second column total and the grand total give \(P(T) = 6/28\). Thus using the result from part (1),
\[P(M\cup T) = P(M) + P(T) - P(M\cap T) = \frac{8}{28} + \frac{6}{28} - \frac{2}{28} = \frac{12}{28}\approx 0.43\]
or about a \(43\%\) chance.
This probability can be computed in two ways. Since the event of interest can be viewed as the event \(C\cup E\), and the events \(C\) and \(E\) are mutually exclusive, the answer is, using the first two row totals, \[P(C\cup E) = P(C) + P(E) = \frac{13}{28} + \frac{7}{28} = \frac{20}{28} \approx 0.71\]
On the other hand, the event of interest can be thought of as the complement \(M^c\) of \(M\), hence using the value of \(P(M)\) computed in part (2), \[P(M^c) = 1 - P(M) = 1 - \frac{8}{28} = \frac{20}{28} \approx 0.71\] as before.
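All three answers can be reproduced from the two-way table by counting (a Python sketch of the example's numbers):

```python
# Probabilities from the volunteers' two-way classification table.
from fractions import Fraction

counts = {("C", "S"): 12, ("C", "T"): 1,
          ("E", "S"): 4,  ("E", "T"): 3,
          ("M", "S"): 6,  ("M", "T"): 2}
total = sum(counts.values())        # 28 volunteers in all
P = lambda pred: Fraction(sum(n for k, n in counts.items() if pred(k)), total)

p_MT = P(lambda k: k == ("M", "T"))                    # P(M ∩ T) = 2/28
p_M, p_T = P(lambda k: k[0] == "M"), P(lambda k: k[1] == "T")
print(p_MT, p_M + p_T - p_MT, 1 - p_M)                 # 1/14 3/7 5/7
```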
Key Takeaway
The probability of an event that is a complement or union of events of known probability can be computed using formulas.
Statistics Seminar
Department of Mathematical Sciences
DATE: Thursday, May 05, 2016
TIME: 1:15pm to 2:15pm
LOCATION: WH 100E
SPEAKER: Aleksey Polunchenko, Binghamton University
TITLE: On a Diffusion Process that Arises in Quickest Change-Point Detection
Abstract
We consider the diffusion $(R_t)_{t\ge0}$ generated by the stochastic differential equation $dR_t=dt+\mu R_t dB_t$ with $R_0=0$, where $\mu\neq0$ is given and $(B_t)_{t\ge0}$ is standard Brownian motion. We obtain a closed-form expression for the quasi-stationary distribution of $(R_t)_{t\ge0}$, i.e., the limit $Q_A(x)=\lim_{t\to+\infty}\Pr(R_t\le x|T_A>t)$, $x\in[0,A]$, where $T_A=\inf\{t>0:R_t=A\}$ with $A>0$ fixed. The process $(R_t)_{t\ge0}$, its quasi-stationary distribution $Q_A(x)$, $x\in[0,A]$, and the stopping time $T_A$ are of importance in the theory of quickest change-point detection, especially the case when $A$ is large. We study the asymptotic behavior of $Q_A(x)$ for large $A$'s, and provide an order-three asymptotic approximation.
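A simple Euler–Maruyama simulation of the diffusion and the stopping time $T_A$ can be sketched as follows (all parameter values are illustrative, not from the talk):

```python
# Euler-Maruyama discretisation of dR_t = dt + mu * R_t dB_t, R_0 = 0,
# run until the first time T_A that R_t reaches the level A.
import random

def simulate_T_A(mu=1.0, A=5.0, dt=1e-3, seed=0, max_steps=10**6):
    rng = random.Random(seed)
    R, t = 0.0, 0.0
    for _ in range(max_steps):
        R += dt + mu * R * rng.gauss(0.0, dt ** 0.5)  # one EM step
        t += dt
        if R >= A:
            return t          # first passage time T_A
    return None               # level not reached within the horizon

print(simulate_T_A())
```

Averaging `simulate_T_A` over many seeds gives a crude estimate of $E[T_A]$; the quasi-stationary distribution could likewise be approximated by histogramming paths conditioned on $T_A>t$.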
This question was originally posted on mathoverflow. Having obtained no answers after one month, I've decided to cross-post it here.
Let $H = ( V, E )$ be a $k$-uniform connected hypergraph, with $n = |V|$ vertices and $m = |E|$ hyperedges. Let $O_w$ be the number of distinct edge induced subgraphs of $H$ having $w$ vertices and an odd number of hyperedges. Let $E_w$ be the number of distinct edge induced subgraphs of $H$ having $w$ vertices and an even number of hyperedges. Let $\Delta_w = O_w - E_w$.
Let $b_w$ be the number of bits required to encode $\Delta_w$, i.e. $b_w = \log_2 \Delta_w$.
Let $b = \displaystyle\max_{\substack{w}} b_w$.
I'm interested in how $b$ grows. I would like to determine the best possible upper bound for $b$ which is expressible as a function of only $n$, $m$ and $k$. More precisely, I would like to determine a function $f(n,m,k)$ having both the following properties:
1. $b \leq f( n, m, k )$ for any $k$-uniform hypergraph $H$ having $n$ vertices and $m$ hyperedges.
2. $f(n,m,k)$ grows slower than any other function which satisfies the 1st property.
In general both $O_w$ and $E_w$ are exponential in $m$, therefore I expect that their difference $\Delta_w$ is not exponential in $m$ and thus that $b \in o( m )$.
However for the moment I've no clue on how to try to prove this.
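For experimentation, $\Delta_w$ can at least be computed by brute force on small instances (the edge set below is an arbitrary illustrative example, not from the question):

```python
# Delta_w = O_w - E_w over edge-induced subgraphs of a 3-uniform
# hypergraph, by enumerating all nonempty subsets of hyperedges.
from itertools import combinations

edges = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (0, 3, 4)]

def delta(edges, w):
    d = 0
    for r in range(1, len(edges) + 1):
        for sub in combinations(edges, r):
            if len({v for e in sub for v in e}) == w:
                d += 1 if r % 2 else -1   # odd minus even edge counts
    return d

print([delta(edges, w) for w in range(3, 6)])  # [4, -3, 0]
```

The enumeration is exponential in $m$, so it only serves to test conjectures about how $b=\max_w \log_2 \Delta_w$ behaves on small cases.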
Questions
1. How does $b$ grow with respect to $n$, $m$ and $k$? Are there any relevant results in the literature?
2. Any hint on how to try to prove $b \in o(m)$?
I know it’s taking a while before I use maths to model a mass on a spring, but that will only make sense by fully describing the graphs of sine equations. Hopefully, this development is interesting in its own right.
Now if you were to plot the daylight length at a certain latitude against days, and if you plotted for a full year, you would see a shape that looks amazingly like the sine graph I showed you in my last post. Except at the equator, the length of a day gets longer in the summer and shorter in the winter. Without actually taking a year to collect the data, I’ve plotted the daylight length in Melbourne Australia against days using the equation\[
{L}\hspace{0.33em}{=}\hspace{0.33em}{2}{.}{63}\sin\left({\frac{360t}{365}}\right)\hspace{0.33em}{+}\hspace{0.33em}{12}{.}{165}
\]
where
L is the length of daylight in hours and t is the number of days after 22 September of any year. Why I chose 22 September is an interesting topic which I may eventually discuss, but it has to do with what are called equinoxes. The plot is below and is almost exactly the plot created if I actually measured the day length each day and plotted these for a year:
Now the shape of this curve is a sine wave but you can see several differences from the standard sine wave explored in my last post:
1. The amplitude is 2.63 instead of 1.
2. The wavelength is 365 days instead of 360 degrees.
3. The wave is centered at 12.165 instead of on the x-axis.
4. We are evaluating the sine of time instead of degrees.
Let me explain these differences.
1. Amplitude – The value of sin(x), regardless of what form x is in, only has values from -1 to +1. So if I multiply the sine by any number, say 2.63, so that I now have y = 2.63 sin(x), then this results in values from 2.63 × (-1) = -2.63 to 2.63 × (+1) = 2.63. This makes the amplitude of this new equation 2.63, that is, the number I multiply the sine by. So in general, the amplitude of y = A sin(x) is A.
2. Wavelength – Notice that the wavelength 365 is the denominator in\[
\sin\left({\frac{360t}{365}}\right)
\]
The 360 in the numerator is the wavelength of the standard sine wave. The common symbol to represent wavelength is the Greek letter lambda, 𝝀, so in general, when you are taking the sine of something that looks like\[
\sin\left({\frac{360t}{\mathit{\lambda}}}\right)
\]
the denominator, 𝝀, is the wavelength.
Now for those of you who have had exposure to this before, you may have expected to see 2𝜋t in the numerator instead of 360t. This would be the case if we were taking the sine of numbers expressed as radians. But this series of posts is doing everything with calculators in the degree mode. I will explain radians later in a different post.
3. Wave center – Notice that the center of the sine wave is at 12.165, which is the number added to the sine in the daylight length equation. The effect on a graph of adding a number to an equation is to raise or lower it – it does not change shape. So if you can graph and know the shape of y = something, then y = something + 10 will be the same shape, just shifted up 10 units. So adding 12.165 to the sine doesn’t change its shape, it just changes where it is on the graph.
4. Time – The big change here is that we are no longer finding the sine of an angle. It may appear that we are now taking the sine of numbers in seconds, hours, or days – whatever the units of t are. However, the 360 in the numerator serves the purpose of making the number we are taking the sine of unitless. That is, 360t/365 does not have any units – it is just a pure number.
Mathematicians/scientists long ago discovered that many periodic physical processes, have motions that follow a sine wave. In fact, when equations were formed that represented the forces on objects that were experiencing periodic motion, the sine of numbers involving time appeared when solving these equations.
And so it is with the length of the day throughout the year. The earth is rotating around the sun and this motion repeats, that is, is periodic. It is no surprise then that the graph of the day length is a sine wave.
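The daylight-length equation above can be evaluated directly; the only catch is reproducing the calculator's degree mode (a short sketch):

```python
# Daylight-length model for Melbourne from the post. The "degree mode"
# is reproduced by converting 360*t/365 degrees to radians before sin().
import math

def daylight_hours(t):
    """Hours of daylight, t days after 22 September."""
    return 2.63 * math.sin(math.radians(360 * t / 365)) + 12.165

print(daylight_hours(0))        # 12.165 hours at the equinox
print(daylight_hours(365 / 4))  # peak: 2.63 + 12.165 = 14.795 hours
```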
I think we are now ready to model a mass on a spring. Let’s do that in my next post. |
First, let me introduce my little story:
I started searching for information about the famous Dirichlet Divisor Problem: getting the exact asymptotic behaviour of the sum of the divisor function up to an integer $x$. More specifically, the aim of the problem is to bound $\theta$ in:
$$D(x)=\sum_{n \le x} d(n) = x \log x +x(2 \gamma -1) + O(x^{\theta + \epsilon})$$
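The sum $D(x)$ is cheap to compute via the identity $D(x)=\sum_{k\le x}\lfloor x/k\rfloor$ (the identity behind Dirichlet's hyperbola method), so the asymptotic can be checked numerically:

```python
# Compare D(x) = sum_{n<=x} d(n) with the main term x log x + (2g-1)x.
import math

GAMMA = 0.5772156649015329        # Euler-Mascheroni constant

def D(x):
    return sum(x // k for k in range(1, x + 1))

x = 100
main_term = x * math.log(x) + (2 * GAMMA - 1) * x
print(D(x), round(main_term, 1))  # 482 476.0 -- a small error term
```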
Then, I found that this problem could be generalised to finding the sum of the number of ways an integer $n$ can be written as the product of $k$ natural numbers up to an integer $x$, which is called the Piltz Divisor Problem. Again, it is based on bounding the error term in:
$$D_k(x)=xP_k(\log x) + O(x^{\alpha_k + \epsilon})$$
where $P_k$ is a polynomial of degree $k-1$.
After all that, I found a paper relating these two problems to the Riemann Hypothesis, which seemed something quite interesting for me. It was called
The Generalized Divisor Problem and the Riemann Hypothesis, by Hideki Nakaya. It starts by quoting in (1) a work by A. Selberg (which I have not been able to find) where he extended Piltz's Divisor Problem to all complex $k$, and I have a lot of trouble trying to understand this.
I. How can one extend Piltz's Problem to complex numbers?
II. What would that 'extension' be useful for?
III. Does this 'extension' have any geometrical interpretations such as the hyperbola method of both Dirichlet's and Piltz's Problems?
IV. Is there any direct relationship between the error bound on Piltz's Problem and the error bound for the extended problem?
I know these are a lot of questions, so thank you for your help. |
Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE
(Elsevier, 2017-11)
The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as function of event multiplicity. The interesting relative increase ...
Multiplicity dependence of jet-like two-particle correlations in pp collisions at $\sqrt s$ =7 and 13 TeV with ALICE
(Elsevier, 2017-11)
Two-particle correlations in relative azimuthal angle (Δϕ) and pseudorapidity (Δη) have been used to study heavy-ion collision dynamics, including medium-induced jet modification. Further investigations also showed the ...
The new Inner Tracking System of the ALICE experiment
(Elsevier, 2017-11)
The ALICE experiment will undergo a major upgrade during the next LHC Long Shutdown scheduled in 2019–20 that will enable a detailed study of the properties of the QGP, exploiting the increased Pb-Pb luminosity ...
Azimuthally differential pion femtoscopy relative to the second and third harmonic in Pb-Pb 2.76 TeV collisions from ALICE
(Elsevier, 2017-11)
Azimuthally differential femtoscopic measurements, being sensitive to spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze-out, provide very important information on the ...
Charmonium production in Pb–Pb and p–Pb collisions at forward rapidity measured with ALICE
(Elsevier, 2017-11)
The ALICE collaboration has measured the inclusive charmonium production at forward rapidity in Pb–Pb and p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV and $\sqrt{s_{\rm NN}} = 8.16$ TeV, respectively. In Pb–Pb collisions, the J/$\psi$ and $\psi$(2S) nuclear ...
Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions
(Elsevier, 2017-11)
Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...
Jet-hadron correlations relative to the event plane at the LHC with ALICE
(Elsevier, 2017-11)
In ultra relativistic heavy-ion collisions at the Large Hadron Collider (LHC), conditions are met to produce a hot, dense and strongly interacting medium known as the Quark Gluon Plasma (QGP). Quarks and gluons from incoming ...
Measurements of the nuclear modification factor and elliptic flow of leptons from heavy-flavour hadron decays in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 and 5.02 TeV with ALICE
(Elsevier, 2017-11)
We present the ALICE results on the nuclear modification factor and elliptic flow of electrons and muons from open heavy-flavour hadron decays at mid-rapidity and forward rapidity in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Anyons are particles whose quantum statistics is neither bosonic nor fermionic. They are proved to exist only in two dimensions and can have quantum numbers of fractional values with respect to other elementary particles such as, for example, the electron. An anyon can be charged, and the charge can be a fraction of $e$ (quarks have a charge of value $\frac{1}{3}e$, but they are not anyons because their statistics is not fractional). After exchanging two identical particles, quantum mechanics predicts that the wave function gains a phase factor:
$$\Psi \to e^{i\theta}\Psi$$
For bosons $\theta = 0$, for fermions $\theta = \pi$, and for anyons $\theta = 2\pi\nu^{-1}$.
The structure of the excitation spectrum is efficiently described in terms of a gauge theory and Aharonov-Bohm gauge interactions.
History
These particles were predicted for the first time in 1977 by J. M. Leinaas and J. Myrheim and studied independently and in more detail by F. Wilczek in 1982, who gave them the name "anyons". In 1983 R. B. Laughlin proposed a model where anyons can be found. For many years, however, there was no idea how to observe them directly. A recent investigation by F. E. Camino, Wei Zhou, and V. J. Goldman shows how to design such an experiment using interferometry methods. Nowadays most interest is focused on so-called non-Abelian anyons, which are believed to provide a tool for fault-tolerant topological quantum computing.
Fractionalization of statistics
In order to understand how this can be possible, let us see how the phase of the wave function changes during the evolution of a system. Take a particle moving slowly along a loop $\mathcal{C}$ in parameter space $R(t)$. Its phase changes like:
$$\Psi_{R(t_f)}(t_f)=\exp\left(-\frac{i}{\hbar}\int^{t_f}_{t_i}dt'\, E(R(t'))+i\gamma(\mathcal{C})\right)\Psi_{R(t_i)}(t_i)$$
The first term is just the dynamical phase; the second is independent of $t_i$ and $t_f$ and depends only on the trajectory $\mathcal{C}$:
$$\gamma(\mathcal{C}) = i \oint_{\mathcal{C}}\langle\Psi_{R(t)}|\nabla_{R(t)}\Psi_{R(t)}\rangle\, dR(t).$$
When the particle is an electron moving in a magnetic field, this reduces to the Aharonov-Bohm effect and
$$\gamma(\mathcal{C})=\frac{e\Phi}{\hbar},$$
where $\Phi$ is the flux passing through the contour $\mathcal{C}$. Now if we exchange two electrons, their wave function changes sign. If, however, we could construct two particles each consisting of an electron and a magnetic flux $\Phi$ attached to it, we would have an extra phase of geometric origin. We would be able to tune this term to change the statistics from fermionic to bosonic, or even to one in between. These new particles would have anyonic statistics.
Braiding
The characteristic feature of anyons is that their movements are best described by the braid group. This is because, while braiding their world lines, they can gain a non-trivial phase factor; in the non-Abelian case, the braiding process can even be equivalent to multiplication by a unitary matrix. In the latter case the final state can be a superposition.
Fusion rules
The other process that plays a key role in the life of anyons is fusion. It is well known that the fusion of two fermions gives a boson. The generalization of this phenomenon is the fusion of anyons. In general the fusion rules look like:
$$a\times b =\sum_c \mathcal{N}_{ab}^c\, c$$
and if
$$\sum_c \mathcal{N}^c_{ab}\geq 2,$$
then the particles $a$ and $b$ are non-Abelian.
Abelian anyons
The description of anyons usually uses braid group representations. The braid group is infinite, so it has an infinite number of representations. The one-dimensional representations describe Abelian anyons; their generators are then just multiplication by a phase factor:
$$\sigma_j = e^{i\phi_j}$$
The first, and so far the only, system where the existence of Abelian anyons is convincing is the FQHE (Fractional Quantum Hall Effect) system described by the Laughlin model. Lately several lattice models with anyonic excitations have been proposed. In quantum computation one could use Abelian anyons to store some information, but not to process it; in order to do so, non-Abelian anyons are needed.
Non-Abelian anyons
The phase that a wave function gains after moving an anyon around a loop, with or without other anyons inside, in general can be trajectory dependent. That's why we say that non-Abelian anyons follow the braiding-statistics. Using this concept one can build a universal set of computational gates. The unitary gate operations are carried out by braiding quasi-particles, and then measuring the multi-quasi-particle states. The fault-tolerance arises from the non-local encoding of the states of the quasi-particles.
Literature
J. M. Leinaas and J. Myrheim, Nuovo Cimento B 37 (1977) p. 1.
F. Wilczek, Phys. Rev. Lett. 48, 1144 (1982).
R. B. Laughlin, Phys. Rev. Lett. 50, 1395 (1983).
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, which we can get by taking the derivative, so $v(t) = 3t^2-12t+9$, but I don't know what to do after that. How could I find the intervals?
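A quick numeric check (my own illustration, not part of the original question) confirms the sign analysis: $v(t) = 3t^2-12t+9 = 3(t-1)(t-3)$ is negative exactly on $1 < t < 3$, so that is where the particle moves left.

```python
# Sample v(t) = 3t^2 - 12t + 9 = 3(t - 1)(t - 3) on a grid and collect
# the times where the velocity is negative (particle moving to the left).
def v(t):
    return 3 * t**2 - 12 * t + 9

left = [t / 10 for t in range(0, 51) if v(t / 10) < 0]
print(min(left), max(left))  # sampled "moving left" times sit inside (1, 3)
```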
Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$.I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ...
So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$.
Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$.
Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow.
Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$.
Well, we do know what the eigenvalues are...
The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.
Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker
"a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...
Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.
@TobiasKildetoft Sorry, I meant that they should be equal (accidently sent this before writing my answer. Writing it now)
Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism
@AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$.
Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again
$O(n)$ acts transitively on $S^{n-1}$ with stabilizer at a point $O(n-1)$.
For any transitive $G$-action on a set $X$ with stabilizer $H$, $G/H \cong X$ set-theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism.
Giulio Ciraolo
Publications: 39 · Citations: 238 · Influential Citations: 17
When holes or hard elastic inclusions are closely located, stress, which is the gradient of the solution to the anti-plane elasticity equation, can be arbitrarily large as the distance between two…
The aim of this paper is to give a mathematical justification of cloaking due to anomalous localized resonance (CALR). We consider the dielectric problem with a source term in a structure with a…
The distance of an almost constant mean curvature boundary from a finite family of disjoint tangent balls with equal radii is quantitatively controlled in terms of the oscillation of the scalar mean…
We prove that the boundary of a (not necessarily connected) bounded smooth set with constant nonlocal mean curvature is a sphere. More generally, and in contrast with what happens in the classical…
We study the uniqueness of solutions of the Helmholtz equation for a problem that concerns wave propagation in waveguides. The classical radiation condition does not apply to our problem because the…
In a bounded domain $\varOmega$, we consider a positive solution of the problem $\Delta u+f(u)=0$ in $\varOmega$, $u=0$ on $\partial \varOmega$, where $f:\mathbb…
We consider the solution of the torsion problem $-\Delta u=1$ in $\Omega$ and $u=0$ on $\partial \Omega$. Serrin's celebrated symmetry theorem states that, if the normal derivative $u_\nu$ is…
We consider the solution of the problem $\Delta u = f(u)$ and $u > 0$ in $\Omega$, $u = 0$ on $\partial \Omega$, where $\Omega$ is a bounded domain in $\mathbb{R}^N$ with boundary of class $C^{2,\alpha}$, … where $r_e$ and $r_i$ are the radii of a spherical annulus…
We prove the following quantitative version of the celebrated Soap Bubble Theorem of Alexandrov. Let $S$ be a $C^2$ closed embedded hypersurface of $\mathbb{R}^{n+1}$, $n\geq1$, and denote by…
I realize that the solution I'm giving further below is basically the one you mentioned in your question. So let's begin by addressing this:
The solution in question assumes that M° is linear.. if that was the case you can have:
M°(P) = alpha*M°(A) + beta * M°(B) + gamma*M°(C)
But that is not the case (correct me if I'm wrong).
The solution in question works directly in UV space. I don't see where it would make any assumptions concerning the nature or even existence of a "mapping from 3D to UV space"!? All it does assume is that UV coordinates vary linearly across each triangle (note: each triangle, not the entire mesh)…
$$\def\mvec#1{\begin{pmatrix}#1\end{pmatrix}}\def\ivec#1{\left(\begin{smallmatrix}#1\end{smallmatrix}\right)}\def\mmat#1{\begin{bmatrix}#1\end{bmatrix}}\def\vec#1{\mathrm{\mathbf{#1}}}\def\mat#1{\mathrm{\mathbf{#1}}}$$
In general, the $u$ and $v$ coordinates at a point on a triangle's plane given by the affine coordinates $\lambda_1$ and $\lambda_2$ are\begin{align} u &= u_1 + \lambda_1 (u_2 - u_1) + \lambda_2 (u_3 - u_1) \\ v &= v_1 + \lambda_1 (v_2 - v_1) + \lambda_2 (v_3 - v_1)\end{align}where $u_i$, $v_i$ are the $u$, $v$ coordinates at vertex $i$. We can rewrite this as\begin{equation} \mmat{ u_2 - u_1 & u_3 - u_1 \\ v_2 - v_1 & v_3 - v_1 } \cdot \mvec{ \lambda_1 \\ \lambda_2 } = \mvec{ u - u_1 \\ v - v_1 }.\end{equation}Knowing $u_i$ and $v_i$ at each vertex of a given triangle, we can compute\begin{equation} \mat M = \mmat{ u_2 - u_1 & u_3 - u_1 \\ v_2 - v_1 & v_3 - v_1 }.\end{equation}We can then find the affine coordinates corresponding to the point with whatever $u$, $v$ coordinates we want:\begin{equation} \mvec{ \lambda_1 \\ \lambda_2 } = \mat M^{-1} \cdot \mvec{ u - u_1 \\ v - v_1 }.\end{equation}Note that these affine coordinates always exist as long as $\mat M^{-1}$ exists (which makes sense; there will always be exactly one point somewhere in the plane of the triangle where $u$ and $v$ reach whatever value you may be looking for as long as $u$ and $v$ vary independently). However, they will not necessarily fall inside the triangle.
So one way to find the point you're looking for would be to iterate over all triangles of the mesh and, for each triangle, compute $\mat M^{-1}$, and $\lambda_1$ and $\lambda_2$ for $u = 0$ and $v = 0$, and check whether\begin{align} \lambda_1 &\geq 0 & &\land & \lambda_2 &\geq 0 & &\land & \lambda_1 + \lambda_2 &\leq 1.\end{align}If this is the case, you have found a triangle that contains a point at which $u = 0$ and $v = 0$. You can then compute the coordinates of that point as\begin{equation} \vec p = (1 - \lambda_1 - \lambda_2) \vec v_1 + \lambda_1 \vec v_2 + \lambda_2 \vec v_3\end{equation}where $\vec v_1$, $\vec v_2$, and $\vec v_3$ are the coordinates of the respective triangle vertices.
It may make sense to, for example, use an initial check against a given triangle's bounding rectangle in UV space and/or other methods to quickly rule out triangles that cannot contain the coordinates of interest to speed up the search… |
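The per-triangle procedure above can be sketched in a few lines of Python. The function name and data layout are my own, not from the original answer, and NumPy is assumed:

```python
import numpy as np

# Sketch of the triangle-by-triangle search described above. For each
# triangle: build the 2x2 UV matrix M, solve for the affine coordinates of
# the target (u, v), and test whether they fall inside the triangle.
def locate_uv(vertices, uvs, triangles, target=(0.0, 0.0)):
    tu, tv = target
    for tri in triangles:
        (u1, v1), (u2, v2), (u3, v3) = (uvs[i] for i in tri)
        M = np.array([[u2 - u1, u3 - u1],
                      [v2 - v1, v3 - v1]])
        if abs(np.linalg.det(M)) < 1e-12:          # degenerate UV mapping
            continue
        l1, l2 = np.linalg.solve(M, np.array([tu - u1, tv - v1]))
        if l1 >= 0 and l2 >= 0 and l1 + l2 <= 1:   # point inside this triangle?
            p1, p2, p3 = (np.asarray(vertices[i]) for i in tri)
            return (1 - l1 - l2) * p1 + l1 * p2 + l2 * p3
    return None                                    # no triangle contains (u, v)
```

The bounding-rectangle pre-check suggested above would go just before the solve, skipping triangles whose UV box cannot contain the target.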
Yesterday, in the short course on model theory I am currently teaching, I gave the following nice application of downward Lowenheim-Skolem, which I found in W. Hodges' A Shorter Model Theory:
Thm: Let $G$ be an infinite simple group, and let $\kappa$ be an infinite cardinal with $\kappa \leq |G|$. Then there exists a simple subgroup $H \subset G$ with $|H| = \kappa$.
(The proof, which is short but rather clever, is reproduced on p. 10 of http://www.math.uga.edu/~pete/modeltheory2010Chapter2.pdf.)
This example led both the students and me (and, course mechanics aside, I am certainly still a student of model theory) to ask some questions:
$1$. The theorem is certainly striking, but to guarantee content we need to see an uncountable simple group without, say, an obvious countable simple subgroup. I don't know that many uncountable simple groups. The most familiar examples are linear algebraic groups like $\operatorname{PSL}_n(F)$ for $F$ an uncountable field like $\mathbb{R}$ or $\mathbb{C}$. But this doesn't help, an infinite field has infinite subfields of all infinite cardinalities -- as one does not need Lowenheim-Skolem to see! (I also mentioned the case of a simple Lie group with trivial center, although how different this is from the previous example I'm not sure.) The one good example I know is supplied by the Schreier-Ulam-Baer theorem: let $X$ be an infinite set. Then the quotient of $\operatorname{Sym}(X)$ by the normal subgroup of all permutations moving less than $|X|$ elements is a simple group of cardinality $2^{|X|}$. (Hmm -- at least it is when $X$ is countably infinite. I'm getting a little nervous about the cardinality of the normal subgroup in the general case. Maybe I want an inaccessible cardinal or somesuch, but I'm getting a little out of my depth.) So:
Are there there other nice examples of uncountable simple groups?
$2$. At the beginning of the proof of the theorem, I remarked that straightforward application of Lowenheim-Skolem to produce a subgroup $H$ of cardinality $\kappa$ which is elementarily embedded in $G$ is not enough, because it is not clear whether the class of simple groups, or its negation, is elementary. Afterwards I wrote this on a sideboard as a question:
Is the class of simple groups (or the class of nonsimple groups) an elementary class?
Someone asked me what techniques one could apply to try to answer a problem like this. Good question!
$3$. The way I stated Hodges' result above is the way it is in my lecture notes. But when I wrote it on the board, for no particular reason I decided to write $\kappa < |G|$ instead of $\kappa \leq |G|$. I got asked about this, and was ready with my defense: $G$ itself is a simple subgroup of $G$ of cardinality $|G|$. But then we mutually remarked that in the case of $\kappa = |G|$ we could ask for a proper simple subgroup $H$ of $G$ of cardinality $|G|$. My response was: well, let's see whether the proof gives us this stronger result. It doesn't. Thus:
Let $G$ be an infinite simple group. Must there exist a proper simple subgroup $H$ of $G$ with $|H| = |G|$?
Wait, I just remembered about the existence of Tarski monsters. So the answer is no. But what if we require $G$ to be uncountable?
How does one get the frequency response of a filter, given an input signal and the signal output by the filter?
The plant output data is usually generated using Gaussian white-noise excitation, although more informative input signals can be generated by experiment design, if prior information about the plant is known [3]. The ETFE of the plant $\widehat{G}(k)$ is found as the quotient of the cross power spectral density estimate of the input and the measured output $P_{yu}(k)$, and the power spectral density estimate of the input $P_{uu}(k)$, i.e.,\begin{equation*} \widehat{G}(k) = \frac{P_{yu}(k)}{P_{uu}(k)} .\end{equation*}In Welch's method, the time-series data is divided into windowed segments, with an option to use overlapping segments. Then, a modified periodogram of each segment is computed and the results are averaged. Welch's method for generating an ETFE corresponds to the function tfestimate in MATLAB. One of the advantages of Welch's method is the flexibility in terms of the number of frequency samples and excitation signal used.
Frequency response is simply the ratio of the Fourier transforms of the output and input signals.
$$ H(e^{j\omega}) = \frac{Y(e^{j\omega})}{X(e^{j\omega})} $$
where $Y(e^{j\omega})$ is the Fourier transform of output $y[n]$:
$$ Y(e^{j\omega}) = \sum\limits_{n=-\infty}^{+\infty} y[n] e^{-j\omega n} $$
and $X(e^{j\omega})$ is the Fourier transform of input $x[n]$:
$$ X(e^{j\omega}) = \sum\limits_{n=-\infty}^{+\infty} x[n] e^{-j\omega n} $$
It might be a good idea to choose an input $x[n]$ such that $X(e^{j\omega}) \ne 0$ for all $\omega$ of interest in the frequency response.
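The ratio-of-transforms idea can be demonstrated with NumPy on a made-up FIR filter (the 3-tap kernel below is purely illustrative, not from the original answer). Zero-padding the input to the full convolution length makes the DFT product exact, so the ratio recovers the true response:

```python
import numpy as np

# Hypothetical "unknown" filter: a 3-tap FIR smoothing kernel.
h = np.array([0.25, 0.5, 0.25])

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)                # broadband input, X(e^jw) != 0
y = np.convolve(x, h)                        # filter output, length N + len(h) - 1

M = len(y)                                   # common FFT length (zero-pads x)
H_est = np.fft.rfft(y) / np.fft.rfft(x, M)   # H = Y / X
H_true = np.fft.rfft(h, M)                   # ground truth on the same grid
```

With this padding, linear convolution coincides with circular convolution, so `H_est` matches `H_true` up to floating-point error; with truncated or noisy data the ratio is only an estimate.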
If you apply an impulse as input, the spectrum of the output signal is equal to the frequency response of the filter. This technique is useful when you don't know the filter's specifications.
Since I am assuming you know nothing about the filter's specifications, here is MATLAB code implementing the technique:
fs = 1000;
impulse = [1 zeros(1, 999)];
f = linspace(-fs/2, fs/2, length(impulse));
hpass = fdesign.highpass('Fst,Fp,Ast,Ap', 100, 200, 40, 1, fs);
Hdhp = design(hpass, 'butter');
y = filter(Hdhp, impulse);
figure, plot(f, fftshift(abs(fft(y, fs))));
In the preceding chapter we learned that populations are characterized by descriptive measures called parameters. Inferences about parameters are based on sample statistics. We now want to estimate population parameters and assess the reliability of our estimates based on our knowledge of the sampling distributions of these statistics.
Point Estimates
We start with a point estimate. This is a single value computed from the sample data that is used to estimate the population parameter of interest.
The sample mean (\(\bar {x}\)) is a point estimate of the population mean (\(\mu\)). The sample proportion (\(\hat {p}\)) is the point estimate of the population proportion (p).
We use point estimates to construct confidence intervals for unknown parameters.
A confidence interval is an interval of values instead of a single point estimate. The level of confidence corresponds to the expected proportion of intervals that will contain the parameter if many confidence intervals are constructed of the same sample size from the same population. Our uncertainty is about whether our particular confidence interval is one of those that truly contains the true value of the parameter.
Example \(\PageIndex{1}\): bear weight
We are 95% confident that our interval contains the population mean bear weight.
If we created 100 confidence intervals of the same size from the same population, we would expect 95 of them to contain the true parameter (the population mean weight). We also expect five of the intervals would not contain the parameter.
Figure \(\PageIndex{1}\): Confidence intervals from twenty-five different samples.
In this example, twenty-five samples from the same population gave these 95% confidence intervals. In the long term, 95% of all samples give an interval that contains µ, the true (but unknown) population mean.
Level of confidence is expressed as a percent.
The complement of the level of confidence is α (alpha), the level of significance. The level of confidence is described as \((1- \alpha) \times 100\%\).
What does this really mean?
We want to estimate a population parameter, such as the mean (μ) or proportion (p). We use a point estimate (e.g., the sample mean) to estimate the population mean, and we attach a level of confidence to the interval to describe how certain we are that it actually contains the unknown population parameter.
\[\bar {x}-E < \mu < \bar {x}+E\]
or
\[\hat {p}-E < p <\hat {p}+E\]
where \(E\) is the margin of error.
The confidence is based on area under a normal curve. So the assumption of normality must be met (Chapter 1).
Confidence Intervals about the Mean ( μ) when the Population Standard Deviation ( σ) is Known
A confidence interval takes the form of:
point estimate \(\pm\) margin of error.
The point estimate comes from the sample data. To estimate the population mean (\(\mu\)), use the sample mean (\(\bar{x}\)) as the point estimate.
The margin of error depends on the level of confidence, the sample size, and the population standard deviation. It is computed as \(E=Z_{\frac {\alpha}{2}}\times \frac {\sigma}{\sqrt {n}}\), where \(Z_{\frac {\alpha}{2}}\) is the critical value from the standard normal table associated with \(\alpha\) (the level of significance).
The critical value \(Z_{\frac {\alpha}{2}}\) is a Z-score that bounds the level of confidence. Confidence intervals are ALWAYS two-sided and the Z-scores are the limits of the area associated with the level of confidence.
Figure \(\PageIndex{2}\): The middle 95% area under a standard normal curve.
The level of significance (α) is divided into halves because we are looking at the middle 95% of the area under the curve. Go to your standard normal table and find the area of 0.025 in the body of values. What is the Z-score for that area? The Z-scores of ± 1.96 are the critical Z-scores for a 95% confidence interval.
Table \(\PageIndex{1}\): Common critical values (Z-scores).
Steps
Construction of a confidence interval about \(μ\) when \(σ\) is known:
1) \(Z_{\frac {\alpha}{2}}\) (critical value)
2) \(E=Z_{\frac {\alpha}{2}}\times \frac {\sigma}{\sqrt {n}}\) (margin of error)
3) \(\bar {x} \pm E\) (point estimate ± margin of error)
Example \(\PageIndex{3}\): Construct a confidence interval about the population mean
Researchers have been studying p-loading in Jones Lake for many years. It is known that mean water clarity (using a Secchi disk) is normally distributed with a population standard deviation of σ = 15.4 in. A random sample of 22 measurements was taken at various points on the lake with a sample mean of x̄ = 57.8 in. The researchers want you to construct a 95% confidence interval for μ, the mean water clarity.
A Secchi disk to measure turbidity of water. Image used with permission (CC SA; publiclab.org)
Solution
1) \(Z_{\frac {\alpha}{2}}\) = 1.96
2) \(E=Z_{\frac {\alpha}{2}}\times \frac {\sigma}{\sqrt {n}}\) = \(1.96 \times \frac {15.4}{\sqrt {22}}\) = 6.435
3) \(\bar {x} \pm E\) = 57.8 ± 6.435
95% confidence interval for the mean water clarity is (51.36, 64.24).
We can be 95% confident that this interval contains the population mean water clarity for Jones Lake.
Now construct a 99% confidence interval for μ, the mean water clarity, and interpret.
1) \(Z_{\frac {\alpha}{2}}\)= 2.575
2) \(E=Z_{\frac {\alpha}{2}}\times \frac {\sigma}{\sqrt {n}}\) = \(2.575 \times \frac {15.4}{\sqrt {22}}\) = 8.454
3) \(\bar {x} \pm E\)= 57.8± 8.454
99% confidence interval for the mean water clarity is (49.35, 66.25).
We can be 99% confident that this interval contains the population mean water clarity for Jones Lake.
As the level of confidence increased from 95% to 99%, the width of the interval increased. As the probability (area under the normal curve) increased, the critical value increased resulting in a wider interval.
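Both intervals above can be reproduced with Python's standard library (a re-computation of the example, not part of the original text). Note that `NormalDist` returns the exact critical value 2.5758 rather than the table's rounded 2.575, so the 99% bounds may differ in the last digit:

```python
from math import sqrt
from statistics import NormalDist

x_bar, sigma, n = 57.8, 15.4, 22  # sample mean, known population SD, sample size

for conf in (0.95, 0.99):
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)  # critical value z_{alpha/2}
    E = z * sigma / sqrt(n)                       # margin of error
    print(f"{conf:.0%} CI: ({x_bar - E:.2f}, {x_bar + E:.2f})")
```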
Software Solutions Minitab
You can use Minitab to construct this 95% confidence interval (Excel does not construct confidence intervals about the mean when the population standard deviation is known). Select Basic Statistics>1-sample Z. Enter the known population standard deviation and select the required level of confidence.
Figure 3. Minitab screen shots for constructing a confidence interval.

One-Sample Z: depth

Variable  N   Mean   StDev  SE Mean  95% CI
depth     22  57.80  11.60  3.28     (51.36, 64.24)
Confidence Intervals about the Mean (μ) when the Population Standard Deviation ( σ) is Unknown
Typically, in real life we often don't know the population standard deviation (σ). We can use the sample standard deviation (s) in place of σ. However, because of this change, we can't use the standard normal distribution to find the critical values necessary for constructing a confidence interval.
The Student’s t-distribution was created for situations when σ was unknown. Gosset worked as a quality control engineer for Guinness Brewery in Dublin. He found errors in his testing and he knew it was due to the use of s instead of σ. He created this distribution to deal with the problem of an unknown population standard deviation and small sample sizes. A portion of the t-table is shown below.
Table \(\PageIndex{2}\): Portion of the student’s t-table.
Example \(\PageIndex{4}\)
Find the critical value \(t_{\frac {\alpha}{2}}\) for a 95% confidence interval with a sample size of n=13.
Solution
Degrees of freedom (down the left-hand column) is equal to n-1 = 12.
α = 0.05 and α/2 = 0.025.
Go down the 0.025 column to 12 df.
\(t_{\frac {\alpha}{2}}\) = 2.179
The critical values from the students’ t-distribution approach the critical values from the standard normal distribution as the sample size (n) increases.
Table 3. Critical values from the student’s t-table.
Using the standard normal curve, the critical value for a 95% confidence interval is 1.96. You can see how different sample sizes will change the critical value and thus the confidence interval, especially when the sample size is small.
Construction of a Confidence Interval When σ is Unknown
1) \(t_{\frac {\alpha}{2}}\) critical value with n-1 df
2) \(E = t_{\frac {\alpha}{2}} \times \frac{s}{\sqrt {n}}\)
3) \(\bar {x} \pm E\)
Example \(\PageIndex{5}\):
Researchers studying the effects of acid rain in the Adirondack Mountains collected water samples from 22 lakes. They measured the pH (acidity) of the water and want to construct a 99% confidence interval about the mean lake pH for this region. The sample mean is 6.4438 with a sample standard deviation of 0.7120. They do not know anything about the distribution of the pH of this population, and the sample is small (n<30), so they look at a normal probability plot.
Figure 4. Normal probability plot.
Solution
The data are normally distributed. Now construct the 99% confidence interval about the mean pH.
1) \(t_{\frac {\alpha}{2}}\) = 2.831
2) \(E = t_{\frac {\alpha}{2}} \times \frac{s}{\sqrt {n}}\) = \(2.831 \times \frac {0.7120}{\sqrt {22}}\)= 0.4297
3) \(\bar {x} \pm E\) = 6.443 ± 0.4297
The 99% confidence interval about the mean pH is (6.013, 6.873).
We are 99% confident that this interval contains the mean lake pH for this lake population.
Now construct a 90% confidence interval about the mean pH for these lakes.
1) \(t_{\frac {\alpha}{2}}\) = 1.721
2) \(E = t_{\frac {\alpha}{2}} \times \frac{s}{\sqrt {n}}\) = \(1.721 \times \frac {0.7120}{\sqrt {22}}\) = 0.2612
3) \(\bar {x} \pm E\) = 6.443 ± 0.2612
The 90% confidence interval about the mean pH is (6.182, 6.704).
We are 90% confident that this interval contains the mean lake pH for this lake population.
Notice how the width of the interval decreased as the level of confidence decreased from 99 to 90%.
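The two t-intervals can be re-computed the same way. Python's standard library has no t-distribution, so the critical values below are hard-coded from the t-table quoted above (df = n - 1 = 21):

```python
from math import sqrt

x_bar, s, n = 6.4438, 0.7120, 22  # sample mean, sample SD, sample size

# t-table critical values for df = 21 (hard-coded: no t-distribution in stdlib)
for conf, t_crit in ((0.99, 2.831), (0.90, 1.721)):
    E = t_crit * s / sqrt(n)      # margin of error
    print(f"{conf:.0%} CI: ({x_bar - E:.3f}, {x_bar + E:.3f})")
```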
Construct a 90% confidence interval about the mean lake pH using Excel and Minitab.
Software Solutions Minitab
For Minitab, enter the data in the spreadsheet and select Basic statistics and 1-sample t-test.
One-Sample T: pH
Variable  N   Mean   StDev  SE Mean  90% CI
pH        22  6.443  0.712  0.152    (6.182, 6.704)
Excel
For Excel, enter the data in the spreadsheet and select descriptive statistics. Check Summary Statistics and select the level and confidence.
Mean                     6.442909
Standard Error           0.151801
Median                   6.4925
Mode                     #N/A
Standard Deviation       0.712008
Sample Variance          0.506956
Kurtosis                 -0.5007
Skewness                 -0.60591
Range                    2.338
Minimum                  5.113
Maximum                  7.451
Sum                      141.744
Count                    22
Confidence Level(90.0%)  0.26121
Excel gives you the sample mean in the first line (6.442909) and the margin of error in the last line (0.26121). You must complete the computation yourself to obtain the interval (6.442909±0.26121).
Confidence Intervals about the Population Proportion (p)
Frequently, we are interested in estimating the population proportion (p) instead of the population mean (µ). For example, you may need to estimate the proportion of trees infected with beech bark disease, or the proportion of people who support “green” products. The parameter p can be estimated in the same way as we estimated µ, the population mean.
The Sample Proportion
The sample proportion is the best point estimate for the true population proportion. Sample proportion \(\hat {p} = \frac {x}{n}\), where x is the number of elements in the sample with the characteristic you are interested in, and n is the sample size.
The Assumption of Normality when Estimating Proportions
The assumption of a normally distributed population is still important, even though the parameter has changed. Normality can be verified if: $$ n \times \hat {p} \times (1- \hat {p}) \ge 10$$
Constructing a Confidence Interval about the Population Proportion
Constructing a confidence interval about the proportion follows the same three steps we have used in previous examples.
1) \(Z_{\frac {\alpha}{2}}\) (critical value from the standard normal table)
2) \(E = Z_{\frac {\alpha}{2}} \times \sqrt {\frac{\hat {p}(1-\hat {p})}{n}}\) (margin of error)
3) \(\hat {p} \pm E\) (point estimate ± margin of error)
Example \(\PageIndex{6}\):
A botanist has produced a new variety of hybrid soybean that is better able to withstand drought. She wants to construct a 95% confidence interval about the germination rate (percent germination). She randomly selected 500 seeds and found that 421 have germinated.
Solution
First, compute the point estimate
$$\hat {p} = \frac {x}{n} =\frac {421}{500}=0.842$$
Check normality:
$$n \times \hat {p} \times (1-\hat {p}) = 500 \times 0.842 \times (1-0.842) = 66.5 \ge 10$$
You can assume a normal distribution.
Now construct the confidence interval:
1) \(Z_{\frac {\alpha}{2}}\) = 1.96
2) \(E = Z_{\frac {\alpha}{2}} \times \sqrt {\frac{\hat {p}(1-\hat {p})}{n}}\) =\(1.96 \times \sqrt {\frac {0.842(1-0.842)}{500}}\) = 0.032
3) \(\hat {p} \pm E = 0.842 \pm 0.032\)
The 95% confidence interval for the germination rate is (81.0%, 87.4%).
We can be 95% confident that this interval contains the true germination rate for this population.
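The same three steps can be sketched in Python (a minimal illustration; the function name is ours, and the normality check mirrors the rule stated above):

```python
from math import sqrt

def prop_interval(x, n, z_crit=1.96):
    """CI about a proportion: p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n)."""
    p_hat = x / n
    assert n * p_hat * (1 - p_hat) >= 10, "normality assumption not verified"
    E = z_crit * sqrt(p_hat * (1 - p_hat) / n)  # margin of error
    return p_hat - E, p_hat + E

lo, hi = prop_interval(421, 500)  # germination-rate example
```

This matches the Minitab output below to four decimal places.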
Software Solutions Minitab
You can use Minitab to compute the confidence interval. Select STAT>Basic stats>1-proportion. Select summarized data and enter the number of events (421) and the number of trials (500). Click Options and select the correct confidence level. Check “test and interval based on normal distribution” if the assumption of normality has been verified.
Test and CI for One Proportion
Sample X N Sample p 95% CI 1 421 500 0.842000 (0.810030, 0.873970)
Using the normal approximation.
Excel
Excel does not compute confidence intervals for estimating the population proportion.
Confidence Interval Summary
Which method do I use?
The first question to ask yourself is:
Which parameter are you trying to estimate? If it is the mean (µ), then ask yourself: Is the population standard deviation (σ) known? If yes, then follow these 3 steps:
Confidence Interval about the Population Mean (µ) when σ is Known
1) \(Z_{\frac {\alpha}{2}}\) critical value (from the standard normal table)
2) \(E=Z_{\frac {\alpha}{2}} \times \frac {\sigma}{\sqrt {n}}\)
3) \(\bar {x} \pm E\)
If no, follow these 3 steps:
Confidence Interval about the Population Mean (µ) when σ is Unknown
1) \(t_{\frac {\alpha}{2}}\) critical value with n-1 df from the Student's t-distribution
2) \(E=t_{\frac {\alpha}{2}} \times \frac {s}{\sqrt {n}}\)
3) \(\bar {x} \pm E\)
If you want to construct a confidence interval about the population proportion, follow these 3 steps:
Confidence Interval about the Proportion
1) \(Z_{\frac {\alpha}{2}}\) critical value from the standard normal table
2) \(E = Z_{\frac {\alpha}{2}} \times \sqrt {\frac{\hat {p}(1-\hat {p})}{n}}\)
3) \(\hat {p} \pm E\)
Remember that the assumption of normality must be verified. |
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, and we get it by taking the derivative: $v(t) = 3t^2-12t+9$. But I don't know what to do after that. How can I find the intervals?
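A quick symbolic check of the sign of $v(t)$, sketched with sympy (assuming it is available):

```python
import sympy as sp

t = sp.symbols('t', real=True)
x = t**3 - 6*t**2 + 9*t + 11
v = sp.diff(x, t)  # v(t) = 3t^2 - 12t + 9 = 3(t - 1)(t - 3)
# the particle moves left exactly where the velocity is negative
left = sp.solve_univariate_inequality(v < 0, t, relational=False)
```

Since $v(t)=3(t-1)(t-3)$, the velocity is negative exactly on the open interval $1<t<3$, so that is where the particle moves to the left.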
Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ...
So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$.
Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$.
Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow.
Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$.
Well, we do know what the eigenvalues are...
The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.
Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker
"a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...
Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.
@TobiasKildetoft Sorry, I meant that they should be equal (accidently sent this before writing my answer. Writing it now)
Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism
@AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$.
Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again
$O(n)$ acts transitively on $S^{n-1}$ with stabilizer $O(n-1)$ at a point
For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism |
Let $D_8=D_{2 \cdot 4}$ be the dihedral group on a regular $4$-gon. Suppose that $S$ is a subset of $S_4$ such that $S$ contains the element $( 1 \ 2 \ 3)$. We also know that $D_8$ acts transitively on $S$ via the action: $$ D_8 \times S \to S$$ $$ (\sigma, s) \mapsto \sigma \circ s $$ Task: determine $S$.
My approach:
There are a lot of details, but we have to start somewhere. We can start with the orbit-stabiliser theorem: $$ |D_8|=|\text{Stab}(( 1 \ 2 \ 3))|\cdot |\text{Orb}(( 1 \ 2 \ 3))|$$ Because we know that $D_8$ acts transitively on our set $S$, the orbit is all of $S$, so $$ 8=|\text{Stab}(( 1 \ 2 \ 3))|\cdot |S|$$
We know there is only one element in $D_8$ that fixes/stabilises $(1 \ 2 \ 3)$: the identity symmetry $Id$, and consequently $|\text{Stab}(( 1 \ 2 \ 3))|=1$. This is because the non-identity symmetries of a square are reflections (these leave at most two corners fixed) and rotations (the centre is fixed, but none of the corners); neither has three non-collinear fixed points, and the only symmetry with three non-collinear fixed points is the identity. Therefore we have: $$ |S|=8$$ How do I now determine the $8-1=7$ other elements we are dealing with?
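A quick sanity check of the counting, sketched with sympy's permutation groups (note sympy is 0-indexed, so the corners $1..4$ become $0..3$; sympy's composition convention differs from $\sigma \circ s$ only by order, which does not affect the orbit size since left multiplication is a free action):

```python
from sympy.combinatorics import Permutation, PermutationGroup

# Symmetries of the square with corners 0, 1, 2, 3 in cyclic order
r = Permutation(0, 1, 2, 3)       # rotation by 90 degrees
f = Permutation(0, 2, size=4)     # reflection through a diagonal
D8 = PermutationGroup(r, f)

s = Permutation(0, 1, 2, size=4)  # the 3-cycle (1 2 3), relabelled
orbit = {g * s for g in D8.elements}
# multiplication by distinct group elements gives distinct products,
# so the orbit has |D8| = 8 elements
```

Listing the elements of `orbit` (relabelled back to $1..4$) gives the set $S$ explicitly.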
The argument made by Ben can be supported by computational chemistry. I calculated three conformers of the compound.
Conformation B1 includes an intramolecular hydrogen bond, while in conformation B2 the proton was rotated away from the second hydroxyl group. Conformation C is the staggered one. The displayed structures were optimised in the gas phase.
Computations were performed with Gaussian09 rev. D at the DF-BP86/def2-SVP level of theory. Solvent effects were estimated with the polarised continuum model, taking the dielectric constants as $\epsilon=78.3553$ for water and $\epsilon=2.0165$ for cyclohexane. Energies in $\mathrm{kJ\,mol^{-1}}$. The energies of each row are relative to conformer C.\begin{array}{lrrr}\hline\phantom{\hspace{3cm}} & \hspace{2cm}\mathbf{B1} & \hspace{2cm}\mathbf{B2} & \hspace{2cm}\mathbf{C}\\\hline\text{gas phase} & -7.2 & 10.2 & 0.0 \\\ce{H2O} & -7.0 & 5.3 & 0.0 \\\ce{C6H12} & -7.3 & 8.8 & 0.0 \\\hline\end{array}
As you can see, the intramolecular hydrogen bond accounts for much more than the repulsion between hydroxyl and methyl moieties. While the dihedral angle in B2, $\angle(\ce{OCCO},\mathbf{B2})=-75.2^\circ$, is larger than expected, it is smaller than expected in B1 with $\angle(\ce{OCCO},\mathbf{B1})=-57.0^\circ$.
We can assume that the hydrogen bond supplies about $-12$ to $-16~\mathrm{kJ\,mol^{-1}}$ depending on the solvent, which is in quite good agreement of what is known in the literature.
It is remarkable that the energy difference between B1 and C in the various solvation states is negligible, but in the open form solvation has quite a significant impact, i.e. a polar solvent stabilises the open form more than a non-polar solvent. This is a bit of an oversimplification because the solvent model used is quite crude. To obtain a concise picture one would, of course, have to increase the level of theory and treat the solvent explicitly.
Take home message: If you have two hydroxyl groups in proximity, don't forget that there might be an internal hydrogen bond stabilisation. |
Suppose {$a_n$}$_{n=1}^{\infty}$ {$b_n$}$_{n=1}^{\infty}$ are sequences such that {$a_n$}$_{n=1}^{\infty}$ and {$a_n + b_n$}$_{n=1}^{\infty}$ converge. Prove that {$b_n$}$_{n=1}^{\infty}$ also converges.
I'm confused on a few parts. Taking what can be assumed, we know that for every $\epsilon > 0$, there exists $N_1 \in \Bbb N$ such that for every $n \in \Bbb N$, if $n \ge N_1$, then $|a_n - A| \lt \epsilon$. Also, we know that for every $\epsilon > 0$, there exists $N_2 \in \Bbb N$ such that for every $n \in \Bbb N$, if $n \ge N_2$, then $|a_n + b_n - (A+B)| \lt \epsilon$.
How would you put the two of these together in order to form a proof that {$b_n$}$_{(n=1)}^\infty$ converges too?
Please don't write a proof for the answer. Just give arguments that will help lead me on the path. Thanks! |
Your job is to approximate $\pi$ using the sequence of digits (in order):
1 2 3 4 5 6 7 8 9
with operators inserted between them (permitted operators listed below). You are to find the best approximation to $\pi$ that you can using the allowed operators and the numbers listed in order as they appear above. You must use all nine digits.
Your score is given by $$ \frac{-\ln\left|1-A/\pi\right|}{n_{ops}} = \frac{\ln\left|\frac{\pi}{\pi-A}\right|}{n_{ops}} $$ where $n_{ops}$ is the number of operators you used, and $A$ is your approximation. So if you managed to get $A=22/7$ and required three operations, then we have $\ln\left|1-A/\pi\right|\approx−7.818$, and so your score would be approximately $2.606$. Larger scores are better.
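For reference, the score is easy to compute; a minimal Python sketch, checked against the $A=22/7$, three-operation example above:

```python
import math

def score(A, n_ops):
    """Score = -ln|1 - A/pi| / n_ops; larger is better."""
    return -math.log(abs(1 - A / math.pi)) / n_ops

s = score(22 / 7, 3)  # the worked example: roughly 2.606
```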
You may use parentheses, but only to control order of operations - other uses such as binomials or Jacobi symbols are not valid.
Permitted operations:
$+$ (plus): standard addition of real or complex numbers. $-$ (minus): standard subtraction of real or complex numbers, or unary negation of real or complex numbers. $\times$ (times): standard multiplication of real or complex numbers. Implied multiplication (e.g. $(1+2)3$) counts as an operation. $/$ or $\div$ (divide): standard division of real or complex numbers (allows division by positive or negative infinity to get zero). $\sqrt{ }$ (square root): standard principle square root of real or complex numbers, with second root (negative if number being square rooted is positive) allowed as $\sqrt[-]{}$ as a single operation. $!$ (factorial): standard factorial for non-negative integer values (i.e. natural numbers) only, cannot be applied to non-natural numbers. $|.|$ (absolute value): standard absolute value for real or complex numbers, equal to $\sqrt{a^2+b^2}$ if the number is of the form $a+bi$ with $a$ and $b$ real and $i$ being the imaginary number. $\lfloor.\rfloor$ (floor): standard round-downwards to integer for real numbers only. $\lceil.\rceil$ (ceil): standard round-upwards to integer for real numbers only.
Permitted operations but
counting as three operations: $^\wedge$ (exponentiation): standard exponentiation of real and complex numbers with integer powers only. Note that $0^0$ cannot be used. $\ln(.)$ (natural log): standard natural logarithm of positive real numbers only. Note the restrictions on some operators - This is primarily to ensure that exact values of $\pi$ cannot be obtained, as well as ensuring that the operations are well-defined.
If you feel that a reasonable operation has been left out, mention it in the comments and I may add it.
Note: There must be at least 1 operation in your answer! I will also upvote people who obtain high accuracy using many operations, in addition to those who get a good score, and encourage others to do likewise.
I'm also going to keep track of the best answers for each number of operations, up to 10.
$$\begin{matrix} \underline{\text{Operations}} & \underline{\text{Score}} & \underline{\text{User}}\\ 1 & 0.864669301 & \text{pacoverflow}\\ 2 & 1.272568270 & \text{pacoverflow}\\ 3 & 1.501203245 & \text{Ben Frankel}\\ 4 & 2.254197410 & \text{Ben Frankel}\\ 5 & 2.286460415 & \text{Lynn}\\ 6 & 2.713605107 & \text{Lynn}\\ 7 & 2.151734961 & \text{Lynn}\\ 8 & 2.135316968 & \text{Ben Frankel}\\ 9 & 2.063009554 & \text{Ben Frankel}\\ 10 & 2.186087400 & \text{Glen O} \end{matrix}$$ Let me know if I've missed one. |
The maximum number of isomers for an alkene with molecular formula C4H8
Note by Divyansh Singhal 5 years, 10 months ago
Four. They are methylpropene (or isobutene), cis-2-butene, trans-2-butene and 1-butene (or simply butene).
Can there be any cyclic isomers for this compound?
Yes, but he specifies what kind of isomers he wants (alkenes). A cyclic C4H8 would be a cycloalkane.
@Guilherme Dela Corte – And those would be cyclobutane and methyl cyclopropane.
@Siyu Bu – Yes, but he specifies what kinds of isomers he wants (alkenes).
@Guilherme Dela Corte – Yeah yeah totally agreed with you just was trying to add sth.:-)
@Guilherme Dela Corte – However, a cycloalkane is an isomer of an alkene with the same molecular formula. I think we have to consider all types of isomerism.
Quite fast at chemistry. Good!!!
Isn't this a question of NSEJS???
Yes,how much you are scoring in NSEJS ?
140 and if this answer ( I have done 4) is correct then 143 and what about you... We will have a common cutoff of Rajasthan...
Arent u the one who got state rank 1 in IMO??
C4H8 has 4 isomers
C4H8 has 3 isomers. They are: H2C=CH–CH2–CH3 (1-butene), H3C–CH=CH–CH3 (2-butene), and H2C=C(CH3)–CH3 (2-methylpropene).
What about cis/trans isomers of 2-butene? Also I think you made a mistake when writing methylpropene's formula.
but-1-ene, but-2-ene, cyclobutane, methylcyclopropane, cis-2-butene, trans-2-butene
He clearly specifies: he only wants the alkenes.
Oh sorry, there are 5 isomers then: but-1-ene, but-2-ene, 2-methylpropene, cis-2-butene, trans-2-butene.
which one ..structural or all?
6 including the 2 cyclic isomers
Subspace clustering by $(k, k)$-sparse matrix factorization
Department of Mathematics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
High-dimensional data often lie in low-dimensional subspaces instead of the whole space. Subspace clustering is a problem to analyze data that are from multiple low-dimensional subspaces and cluster them into the corresponding subspaces. In this work, we propose a $(k,k)$-sparse matrix factorization method for subspace clustering. In this method, data itself is considered as the "dictionary", and each data point is represented as a linear combination of the basis of its cluster in the dictionary. Thus, the coefficient matrix is low-rank and sparse. With an appropriate permutation, it is also block-diagonal, with each block corresponding to a cluster. With an assumption that each block is no more than $k$-by-$k$ in matrix recovery, we seek a low-rank and $(k,k)$-sparse coefficient matrix, which will be used for the construction of the affinity matrix in spectral clustering. The advantage of our proposed method is that we recover a coefficient matrix that is simultaneously $(k,k)$-sparse and low-rank, which is a better fit for subspace clustering. Numerical results illustrate its effectiveness: it outperforms SSC and LRR in real-world classification problems such as face clustering and motion segmentation.
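As a rough illustration (not the paper's exact pipeline), the step from a recovered coefficient matrix to the affinity matrix fed into spectral clustering is commonly done by the symmetrisation $|C|+|C|^{\top}$, a convention borrowed from SSC/LRR practice:

```python
import numpy as np

def affinity_from_coefficients(C):
    """Symmetric affinity matrix for spectral clustering, built from a
    (low-rank, sparse) coefficient matrix C in SSC/LRR-style pipelines."""
    A = np.abs(C) + np.abs(C).T
    np.fill_diagonal(A, 0)  # ignore self-representation
    return A

W = affinity_from_coefficients(np.array([[0.0, 1.0], [2.0, 0.0]]))
```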
Mathematics Subject Classification: Primary: 68T04; Secondary: 65F04.
Citation: Haixia Liu, Jian-Feng Cai, Yang Wang. Subspace clustering by $(k, k)$-sparse matrix factorization. Inverse Problems & Imaging, 2017, 11 (3): 539-551. doi: 10.3934/ipi.2017025
References:
[1]
P. K. Agarwal and N. H. Mustafa, k-means projective clustering, in
[2]
A. Argyriou, R. Foygel and N. Srebro, Sparse prediction with the
[3]
J. Bolte, S. Sabach and M. Teboulle,
Proximal alternating linearized minimization for nonconvex and nonsmooth problems,
[4] [5] [6] [7]
H. Derksen, Y. Ma, W. Hong and J. Wright, Segmentation of multivariate mixed data via lossy coding and compression in
[8]
E. Elhamifar and R. Vidal,
Sparse subspace clustering: Algorithm, theory, and applications,
[9]
P. Favaro, R. Vidal and A. Ravichandran, A closed form solution to robust subspace estimation and clustering, in
[10]
J. Feng, Z. Lin, H. Xu and S. Yan, Robust subspace segmentation with block-diagonal prior, in
[11]
A. Goh and R. Vidal, Segmenting motions of different types by unsupervised manifold clustering, in
[12]
L. N. Hutchins, S. M. Murphy, P. Singh and J. H. Graber,
Position-dependent motif characterization using non-negative matrix factorization,
[13]
K. Lee, J. Ho and D. Kriegman,
Acquiring linear subspaces for face recognition under variable lighting,
[14]
G. Liu, Z. Lin and Y. Yu, Robust subspace segmentation by low-rank representation, in
[15]
L. Lu and R. Vidal, Combined central and subspace clustering for computer vision applications, in
[16]
B. Nasihatkon and R. Hartley, Graph connectivity in sparse subspace clustering, in
[17] [18]
S. Oymak, A. Jalali, M. Fazel, Y. C. Eldar and B. Hassibi,
Simultaneously structured models with application to sparse and low-rank matrices,
[19] [20] [21]
S. R. Rao, R. Tron, R. Vidal and Y. Ma, Motion segmentation via robust subspace separationin the presence of outlying, incomplete, or corrupted trajectories, in
[22]
E. Richard, G. R. Obozinski and J. -P. Vert, Tight convex relaxations for sparse matrix factorization, in
[23]
Y. Sugaya and K. Kanatani, Geometric structure of degeneracy for multi-body motion segmentation, in
[24] [25] [26] [27]
Y. -X. Wang, H. Xu and C. Leng, Provable subspace clustering: When LRR meets SSC, in
[28]
J. Yan and M. Pollefeys, A general framework for motion segmentation: Independent, articulated, rigid, non-rigid, degenerate and non-degenerate, in
[29]
W. I. Zangwill,
[30]
T. Zhang, A. Szlam, Y. Wang and G. Lerman,
Hybrid linear modeling via local best-fit flats,
Data: Initialize 1 while not convergence do 2 │ Update 3 │ Update 4 │ Update 5 end
Data: Result: 1 Let 2 Find
$\frac{\tilde{h}_{k-r-1}}{\alpha+1}>\frac{\gamma_{r, l}}{l-k+(\alpha+1)(r+1)}\ge\frac{\tilde{h}_{k-r}}{\alpha+1}, $
$\tilde{u}_l>\frac{\gamma_{r, l}}{l-k+(\alpha+1)(r+1)}\ge \tilde{g}_{l+1}.$
3 Define
$ q_i = \left\{ \begin{array}{ll} \frac{\alpha}{\alpha+1}\tilde{h}_i & \textrm{if}\; i=1, \cdots, k-r-1\\ \tilde{h}_i-\frac{\gamma_{r, l}}{l-k+(\alpha+1)(r+1)}\\ & \textrm{if}\; i=k-r, \cdots, l\\ 0 & \textrm{if} \; i=l+1, \cdots, n \end{array} \right.$
4 Set
# Classes  mean/median  SSC    LRR    (3,3)-SMF      (4,4)-SMF
                                      error    s     error    s
2          mean         15.83  6.37   3.38     18    3.53     18
           median       15.63  6.25   2.34           2.34
3          mean         28.13  9.57   6.19     25    6.06     25
           median       28.65  8.85   5.73           5.73
5          mean         37.90  14.86  11.06    35    10.04    35
           median       38.44  14.38  9.38           9.06
8          mean         44.25  23.27  23.08    50    22.51    50
           median       44.82  21.29  27.54          26.06
10         mean         50.78  29.38  25.36    65    23.91    65
           median       49.06  32.97  27.19          27.34
        SSC   LRR   (3,3)-SMF  (4,4)-SMF
Mean    9.28  8.43  6.61       7.16
Median  0.24  1.54  1.20       1.32
Let $M$ be a finite set with $n$ distinct elements. I want to probabilistically approximate the relative counts $\frac{|P(Q)|}{|M|}$ of $Q \subseteq M$, where $P(Q) = |P \cap M|$.
An upper bound on the number of samples needed to get an (additive) $\varepsilon$-approximation can be derived using the Hoeffding bound.
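For a single fixed $Q$, that Hoeffding bound gives the explicit sample size $m \ge \ln(2/\delta)/(2\varepsilon^2)$; a minimal sketch (the $\varepsilon, \delta$ values below are just illustrative):

```python
from math import ceil, log

def hoeffding_samples(eps, delta):
    """Smallest m with m >= ln(2/delta) / (2 eps^2), so that the empirical
    mean is within eps of its expectation with probability >= 1 - delta."""
    return ceil(log(2 / delta) / (2 * eps ** 2))

m = hoeffding_samples(0.01, 0.05)  # illustrative eps and delta
```

A uniform guarantee over all $Q \subseteq M$ would need a union bound or the Rademacher machinery below.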
I am interested in achieving (empirically) better bounds using empirical Rademacher averages. My idea is to adaptively sample from $M$, to fulfill \begin{equation} \sup_{f \in \mathcal{F}}\,| L_{\mathcal{D}}(f) - L_{S}(f) | \leq 2\cdot\mathcal{R}_{\mathcal{F}}(S) + 3 \sqrt{\frac{ \log (2/\delta)}{2m}} \leq \varepsilon, \end{equation} where \begin{equation*} \mathcal{F} = \{ \mathbf{1}_{Q} \mid Q \subseteq M \}, \end{equation*} and $\mathbf{1}_{Q}$ equals $1$ if the current sample is in $Q$, and otherwise $0$. It follows that \begin{equation*} L_{\mathcal{D}}(\mathbf{1}_{Q}) = \frac{|P(Q)|}{|M|}.\end{equation*}
Can one achieve better bounds using this approaches, e.g., by using Massart's Lemma to approximate $\mathcal{R}_{\mathcal{F}}(S)$? |
This is the first time I have read about polynomial delay algorithms, so I am not 100% sure of my answer, but I think something like the following should work.
Pick some convention for representing paths that has a natural total ordering $<$ defined on it. (One example would be just to list the vertices of the path and order lexicographically.) Pick your favorite in-place data structure $D$ that supports logarithmic search and insert (say a red-black tree). Let $G$ be your graph.
Define an algorithm $F$:
$F(s,t,G, ^*D)$:
(here $^*D$ means a reference to an inplace datastructure $D$)
1. Run your poly-time algorithm for returning a pair of edge-disjoint paths $(P,Q)$ with $P < Q$ from $s$ to $t$ (if no such pair exists, return).
2. If $(P,Q)$ is not in $D$:
2.1. Insert $(P,Q)$ into $D$ (and output if you are suppose to output as the algorithm runs).
2.2. For each edge $uv \in E(P\cup Q)$ run $F(s,t,G - \{uv\}, ^*D)$
Now, to enumerate all your pairs, create an empty $D$ and for each pair $s,t \in V(G)$ with $s < t$ (if the graph is undirected; $s \neq t$ otherwise) run $F(s,t,G,^*D)$. You will output every pair the first time you see it, and you will also have a nice searchable data structure that contains all pairs when you are done. Note that this algorithm also runs in polynomial time in the size of the input + output (just like any polynomial-delay algorithm would).
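A minimal Python sketch of this scheme. Here `find_pair` is a brute-force stand-in for the assumed poly-time subroutine (so only the recursion structure, not the running time, matches the description), and a plain set plays the role of $D$:

```python
from itertools import combinations

def simple_paths(G, s, t, path=None):
    """All simple s-t paths in an undirected graph given as an adjacency dict."""
    path = path or [s]
    if s == t:
        yield tuple(path)
        return
    for v in G[s]:
        if v not in path:
            yield from simple_paths(G, v, t, path + [v])

def path_edges(p):
    return {frozenset(e) for e in zip(p, p[1:])}

def find_pair(G, s, t):
    """Stand-in for the assumed subroutine: the lexicographically smallest
    pair (P, Q), P < Q, of edge-disjoint s-t paths, or None."""
    ps = sorted(simple_paths(G, s, t))
    for P, Q in combinations(ps, 2):
        if not (path_edges(P) & path_edges(Q)):
            return (P, Q)
    return None

def F(G, s, t, seen):
    pair = find_pair(G, s, t)
    if pair is None or pair in seen:
        return
    seen.add(pair)  # "output" on first discovery -> polynomial delay
    P, Q = pair
    for e in path_edges(P) | path_edges(Q):
        u, v = tuple(e)
        H = {x: [y for y in G[x] if {x, y} != {u, v}] for x in G}
        F(H, s, t, seen)

# 4-cycle: exactly one pair of edge-disjoint paths between opposite corners
C4 = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}
seen = set()
F(C4, 1, 3, seen)
```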
I doubt that this is the best way to do this, in particular this approach is not in $PSPACE$ (in size of the input). I think by thinking carefully you could find something that runs in $PSPACE$, although it won't be able to build the data-structure as it goes along. |
Skills to Develop
To learn the concept of the probability distribution of a continuous random variable, and how it is used to compute probabilities. To learn basic facts about the family of normally distributed random variables. The Probability Distribution of a Continuous Random Variable
For a discrete random variable \(X\) the probability that \(X\) assumes one of its possible values on a single trial of the experiment makes good sense. This is not the case for a continuous random variable. For example, suppose \(X\) denotes the length of time a commuter just arriving at a bus stop has to wait for the next bus. If buses run every \(30\) minutes without fail, then the set of possible values of \(X\) is the interval denoted \(\left [ 0,30 \right ]\), the set of all decimal numbers between \(0\) and \(30\). But although the number \(7.211916\) is a possible value of \(X\), there is little or no meaning to the concept of the probability that the commuter will wait precisely \(7.211916\) minutes for the next bus. If anything the probability should be zero, since if we could meaningfully measure the waiting time to the nearest millionth of a minute it is practically inconceivable that we would ever get exactly \(7.211916\) minutes. More meaningful questions are those of the form: What is the probability that the commuter's waiting time is less than \(10\) minutes, or is between \(5\) and \(10\) minutes? In other words, with continuous random variables one is concerned not with the event that the variable assumes a single particular value, but with the event that the random variable assumes a value in a particular interval.
Definition: density function
The probability distribution of a continuous random variable \(X\) is an assignment of probabilities to intervals of decimal numbers using a function \(f(x)\), called a density function, in the following way: the probability that \(X\) assumes a value in the interval \(\left [ a,b\right ]\) is equal to the area of the region that is bounded above by the graph of the equation \(y=f(x)\), bounded below by the x-axis, and bounded on the left and right by the vertical lines through \(a\) and \(b\), as illustrated in Figure \(\PageIndex{1}\).
Figure \(\PageIndex{1}\): Probability Given as Area of a Region under a Curve
This definition can be understood as a natural outgrowth of the discussion in Section 2.1.3. There we saw that if we have in view a population (or a very large sample) and make measurements with greater and greater precision, then as the bars in the relative frequency histogram become exceedingly fine their vertical sides merge and disappear, and what is left is just the curve formed by their tops, as shown in Figure 2.1.5. Moreover the total area under the curve is \(1\), and the proportion of the population with measurements between two numbers \(a\) and \(b\) is the area under the curve and between \(a\) and \(b\), as shown in Figure 2.1.6. If we think of \(X\) as a measurement to infinite precision arising from the selection of any one member of the population at random, then \(P(a<X<b)\)is simply the proportion of the population with measurements between \(a\) and \(b\), the curve in the relative frequency histogram is the density function for \(X\), and we arrive at the definition just above.
Every density function \(f(x)\) must satisfy the following two conditions: For all numbers \(x\), \(f(x)\geq 0\), so that the graph of \(y=f(x)\) never drops below the x-axis. The area of the region under the graph of \(y=f(x)\) and above the \(x\)-axis is \(1\).
Because the area of a line segment is \(0\), the definition of the probability distribution of a continuous random variable implies that for any particular decimal number, say \(a\), the probability that \(X\) assumes the exact value a is \(0\). This property implies that whether or not the endpoints of an interval are included makes no difference concerning the probability of the interval.
For any continuous random variable \(X\): \[P(a\leq X\leq b)=P(a<X\leq b)=P(a\leq X<b)=P(a<X<b)\]
Example \(\PageIndex{1}\)
A random variable \(X\) has the uniform distribution on the interval \(\left [ 0,1\right ]\): the density function is \(f(x)=1\) if \(x\) is between \(0\) and \(1\) and \(f(x)=0\) for all other values of \(x\), as shown in Figure \(\PageIndex{2}\) "Uniform Distribution on \([0,1]\)".
Figure \(\PageIndex{2}\): Uniform Distribution on \([0,1]\)
Find \(P(X > 0.75)\), the probability that \(X\) assumes a value greater than \(0.75\). Find \(P(X \leq 0.2)\), the probability that \(X\) assumes a value less than or equal to \(0.2\). Find \(P(0.4 < X < 0.7)\), the probability that \(X\) assumes a value between \(0.4\) and \(0.7\).
Solution: \(P(X > 0.75)\) is the area of the rectangle of height \(1\) and base length \(1-0.75=0.25\), hence is \(\text{base}\times \text{height}=(0.25)\cdot (1)=0.25\). See Figure \(\PageIndex{3a}\). \(P(X \leq 0.2)\) is the area of the rectangle of height \(1\) and base length \(0.2-0=0.2\), hence is \(\text{base}\times \text{height}=(0.2)\cdot (1)=0.2\). See Figure \(\PageIndex{3b}\). \(P(0.4 < X < 0.7)\) is the area of the rectangle of height \(1\) and base length \(0.7-0.4=0.3\), hence is \(\text{base}\times \text{height}=(0.3)\cdot (1)=0.3\). See Figure \(\PageIndex{3c}\).
Figure \(\PageIndex{3}\): Probabilities from the Uniform Distribution on [0,1]
Example \(\PageIndex{2}\)
A man arrives at a bus stop at a random time (that is, with no regard for the scheduled service) to catch the next bus. Buses run every \(30\) minutes without fail, hence the next bus will come any time during the next \(30\) minutes with evenly distributed probability (a uniform distribution). Find the probability that a bus will come within the next \(10\) minutes.
Solution:
The graph of the density function is a horizontal line above the interval from \(0\) to \(30\) and is the \(x\)-axis everywhere else. Since the total area under the curve must be \(1\), the height of the horizontal line is \(1/30\) (Figure \(\PageIndex{4}\)). The probability sought is \(P(0\leq X\leq 10)\). By definition, this probability is the area of the rectangular region bounded above by the horizontal line \(f(x)=1/30\), bounded below by the \(x\)-axis, bounded on the left by the vertical line at \(0\) (the \(y\)-axis), and bounded on the right by the vertical line at \(10\). This is the shaded region in Figure \(\PageIndex{4}\). Its area is the base of the rectangle times its height, \((10)\cdot (1/30)=1/3\).
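For a uniform distribution every probability is just an interval length divided by the total length, so the worked examples above are easy to verify with a short script (illustrative only; the helper `uniform_prob` is my own, not part of the text):

```python
# Probabilities for a uniform distribution on [0, w] are interval
# lengths divided by w; this helper checks the worked examples.
def uniform_prob(a, b, w):
    """P(a <= X <= b) for X uniform on [0, w]."""
    lo, hi = max(a, 0.0), min(b, w)
    return max(hi - lo, 0.0) / w

# Example 2: buses every 30 minutes, waiting at most 10 minutes.
print(uniform_prob(0, 10, 30))     # → 0.3333333333333333 (= 1/3)
# Example 1 probabilities on [0, 1]:
print(uniform_prob(0.75, 1, 1))    # → 0.25
print(uniform_prob(0, 0.2, 1))     # → 0.2
```

Note that the endpoints do not matter: including or excluding them changes the interval length by zero.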
Figure \(\PageIndex{4}\): Probability of Waiting At Most 10 Minutes for a Bus

Normal Distributions
Most people have heard of the “bell curve.” It is the graph of a specific density function \(f(x)\)
\[f(x)=\frac{1}{\sqrt{2\pi \sigma ^2}}e^{-\frac{1}{2}(x-\mu )^2/\sigma ^2}\]
Each different choice of specific numerical values for the pair \(\mu\) and \(\sigma\) gives a different bell curve. The value of \(\mu\) determines the location of the curve, as shown in Figure \(\PageIndex{5}\). In each case the curve is symmetric about \(\mu\).
Figure \(\PageIndex{5}\): Bell Curves with σ = 0.25 and Different Values of μ
The value of \(\sigma\) determines whether the bell curve is tall and thin or short and squat, subject always to the condition that the total area under the curve be equal to \(1\). This is shown in Figure \(\PageIndex{6}\) , where we have arbitrarily chosen to center the curves at \(\mu=6\).
Figure \(\PageIndex{6}\): Bell Curves with \(\mu =6\) and Different Values of \(\sigma\)
Definition: normal distribution
The probability distribution corresponding to the density function for the bell curve with parameters \(\mu\) and \(\sigma\) is called the normal distribution with mean \(\mu\) and standard deviation \(\sigma\).
Definition: normally distributed random variable
A continuous random variable whose probabilities are described by the normal distribution with mean \(\mu\) and standard deviation \(\sigma\) is called a normally distributed random variable, or a normal random variable for short, with mean \(\mu\) and standard deviation \(\sigma\).
Figure \(\PageIndex{7}\) shows the density function that determines the normal distribution with mean \(\mu\) and standard deviation \(\sigma\). We repeat an important fact about this curve:
The density curve for the normal distribution is symmetric about the mean.
Figure \(\PageIndex{7}\): Density Function for a Normally Distributed Random Variable with Mean \(\mu\) and Standard Deviation \(\sigma\)
Example \(\PageIndex{3}\)
Heights of \(25\)-year-old men in a certain region have mean \(69.75\) inches and standard deviation \(2.59\) inches. These heights are approximately normally distributed. Thus the height \(X\) of a randomly selected \(25\)-year-old man is a normal random variable with mean \(\mu = 69.75\) and standard deviation \(\sigma = 2.59\). Sketch a qualitatively accurate graph of the density function for \(X\). Find the probability that a randomly selected \(25\)-year-old man is more than \(69.75\) inches tall.
Solution:
The distribution of heights looks like the bell curve in Figure \(\PageIndex{8}\). The important point is that it is centered at its mean, \(69.75\), and is symmetric about the mean.
Figure \(\PageIndex{8}\): Density Function for Heights of \(25\)-Year-Old Men
Since the total area under the curve is \(1\), by symmetry the area to the right of \(69.75\) is half the total, or \(0.5\). But this area is precisely the probability \(P(X > 69.75)\), the probability that a randomly selected \(25\)-year-old man is more than \(69.75\) inches tall. We will learn how to compute other probabilities in the next two sections.
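Although the text defers general normal-probability computations to the next sections, the normal CDF can be evaluated with the error function from the standard library. The sketch below is illustrative (the helper name is mine, not the text's notation):

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """P(X <= x) for a normal random variable, via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

mu, sigma = 69.75, 2.59
# By symmetry, half the area lies to the right of the mean:
p_taller = 1.0 - normal_cdf(mu, mu, sigma)
print(p_taller)  # → 0.5
```

The answer 0.5 requires no integration at all: it follows from the symmetry of the bell curve about its mean.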
Key Takeaway
For a continuous random variable \(X\) the only probabilities that are computed are those of \(X\) taking a value in a specified interval.
The probability that \(X\) take a value in a particular interval is the same whether or not the endpoints of the interval are included.
The probability \(P(a<X<b)\), that \(X\) take a value in the interval from \(a\) to \(b\), is the area of the region between the vertical lines through \(a\) and \(b\), above the \(x\)-axis, and below the graph of a function \(f(x)\) called the density function.
A normally distributed random variable is one whose density function is a bell curve.
Every bell curve is symmetric about its mean and lies everywhere above the \(x\)-axis, which it approaches asymptotically (arbitrarily closely without touching).
Finite Element Analysis - May 2012
Mechanical Engineering (Semester 6)
TOTAL MARKS: 80
TOTAL TIME: 3 HOURS
(1) Question 1 is compulsory. (2) Attempt any three from the remaining questions. (3) Assume data if required. (4) Figures to the right indicate full marks.
2(b) Solve the following differential equation using the Rayleigh-Ritz method $$3\dfrac{d^{2}y}{dx^{2}}-\dfrac{dy}{dx}+8=0,\quad 0\leq x\leq 1$$ with boundary conditions y(0) = 1 and y(1) = 2. Assume a cubic polynomial for the trial solution. Find the values y(2,3) and y(0,8). (10 marks)
3(a) Evaluate the following integral using Gauss quadrature. Compare your answer with the exact value. $$I=\int_{-1}^{1} \int_{-1}^{1}(r^{3}-1)(s-1)^{2}\,dr\, ds$$
n = 1: ε = 0.0, w = 2
n = 2: ε = ±0.5774, w = 1
n = 3: ε = 0.0 (w = 0.8889), ε = ±0.7746 (w = 0.5556)
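For question 3(a), the tabulated nodes and weights can be applied directly. The sketch below (mine, not part of the exam paper) evaluates the double integral by tensor-product Gauss-Legendre quadrature; since the integrand has degree 3 in r and degree 2 in s, even the 2-point rule reproduces the exact value, which factors as (∫(r³−1)dr)(∫(s−1)²ds) = (−2)(8/3) = −16/3.

```python
from math import sqrt

def gauss2d(f, nodes, weights):
    """Tensor-product Gauss-Legendre quadrature on [-1,1] x [-1,1]."""
    return sum(wi * wj * f(ri, sj)
               for ri, wi in zip(nodes, weights)
               for sj, wj in zip(nodes, weights))

def f(r, s):
    return (r**3 - 1.0) * (s - 1.0)**2

# 2-point rule (exact for polynomials up to degree 3 in each variable)
n2, w2 = [-1/sqrt(3), 1/sqrt(3)], [1.0, 1.0]
# 3-point rule (exact up to degree 5)
n3, w3 = [-sqrt(0.6), 0.0, sqrt(0.6)], [5/9, 8/9, 5/9]

exact = -16.0 / 3.0
print(gauss2d(f, n2, w2), gauss2d(f, n3, w3))  # both ≈ -5.3333
```

The table's decimal values 0.5774 = 1/√3 and 0.7746 = √0.6 are the exact nodes rounded to four places.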
3(b)Explain the following :
Convergence requirements
Global, local and natural co - ordinate system (8 marks)
4(a)For the bar truss shown in figure, determine the nodal displacement, stresses in each element and reaction at support. Take $$E =2\times 10^{5} \dfrac{N}{mm^{2}}, A= 200mm^{2}$$
(15 marks)
4(b) Explain Band width. (8 marks)
5(a) Using the Direct Stiffness method, determine the nodal displacements of the stepped bar shown in the figure.
(12 marks)
5(b)Derive the shape function for a Quadratic bar element [3 noded 1 dimensional bar] using Lagrangian polynomial in,
Global co - ordinates and
Natural co - ordinates.(8 marks)
6(a) Find the shape functions for a two dimensional nine-noded rectangular element mapped into natural coordinates. (12 marks)
6(b) The nodal co-ordinates of a triangular element are as shown in the figure. The x co-ordinate of interior point P is 3.3 and shape function N1 = 0.3. Determine N2, N3 and the y co-ordinate of point P.
(8 marks)
7(a) Find the natural frequency of axial vibration of a bar of uniform cross section 20 mm² and length 1 m. Take $$E=2\times 10^{5} \dfrac{N}{mm^{2}}$$ and $$\rho =8000\dfrac{kg}{m^{3}}$$
Take 2 linear elements.(10 marks)
7(b) Discuss briefly higher order and iso-parametric elements with suitable sketches. (10 marks)
1(a) Attempt any four of the following:
Briefly explain application of FEM in Various fields.(5 marks)
1(b) Explain the principle of minimum Potential Energy. (5 marks) 1(c) Explain different sources of error in a typical F.E.M. solution. (5 marks) 1(d) Briefly explain the Node Numbering Scheme. (5 marks) 1(e) Explain properties of the Global Matrix. (5 marks)
2(a) Solve the following differential equation using Galerkin's method $$3\dfrac{d ^ {2}u}{dx ^ {2}} - 3u + 4x ^ {2} = 0$$ with boundary conditions u(0) = u(1) = 0. Assume a cubic polynomial for the approximate solution. (10 marks)
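For question 2(a), a hand Galerkin solution can be cross-checked numerically. The sketch below is not part of the paper: it solves the same boundary-value problem 3u'' − 3u + 4x² = 0, u(0) = u(1) = 0 (equivalently u'' = u − (4/3)x²) by finite differences with the Thomas algorithm; the name `solve_bvp` is my own.

```python
def solve_bvp(n=200):
    """Finite-difference solve of u'' = u - (4/3)x^2, u(0)=u(1)=0."""
    h = 1.0 / n
    x = [i * h for i in range(n + 1)]
    # Tridiagonal system: u[i-1] - (2 + h^2) u[i] + u[i+1] = -(4/3) h^2 x[i]^2
    a = [1.0] * (n - 1)              # sub-diagonal
    b = [-(2.0 + h * h)] * (n - 1)   # diagonal
    c = [1.0] * (n - 1)              # super-diagonal
    d = [-(4.0 / 3.0) * h * h * x[i] ** 2 for i in range(1, n)]
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n - 1):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * (n + 1)
    u[n - 1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        u[i + 1] = (d[i] - c[i] * u[i + 2]) / b[i]
    return x, u

x, u = solve_bvp(200)
print(u[100])  # value at x = 0.5, to compare with a trial-function solution
```

The exact solution here is u = −(8/3)cosh x + D sinh x + (4/3)x² + 8/3 with D fixed by u(1) = 0, giving u(0.5) ≈ 0.0439; a cubic Galerkin trial solution should land close to this.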
If you know the rolling speed at a given flight speed, you can calculate the aileron effectiveness and use that to calculate the forces. The final rolling speed is reached when roll damping and the aileron-induced rolling moment reach an equilibrium: $$c_{l\xi} \cdot \frac{\xi_l - \xi_r}{2} = -c_{lp} \cdot \frac{\omega_x \cdot b}{2\cdot v_\infty} = -c_{lp} \cdot p$$Thus, your aileron effectiveness is $$c_{l\xi} = -c_{lp}\cdot\frac{\omega_x \cdot b}{v_\infty\cdot(\xi_l - \xi_r)}$$The roll damping term is for unswept wings $$c_{lp} = -\frac{1}{4} \cdot \frac{\pi \cdot AR}{\sqrt{\frac{AR^2}{4}+4}+2}$$and the moment per aileron now is $$M = c_{l\xi} \cdot \xi \cdot S_{ref} \cdot b \cdot q_\infty$$Calculate the moment for each aileron separately; normally the left and right deflection angles are not exact opposites, which helps to reduce stick forces.
If you only need an approximation, maybe do it like this:
You first need to have all dimensions and the deflection angles. I expect you don't have lift polars of the wing section, so you need to approximate the lift increase due to aileron deflection with general formulas. This is $$c_{l\xi} = c_{l\alpha} \cdot \sqrt{\lambda} \cdot \frac{S_{aileron}}{S_{ref}} \cdot \frac{y_{aileron}}{b}$$and the moment per aileron now is $$M = c_{l\xi} \cdot \xi \cdot S_{ref} \cdot b \cdot q_\infty = c_{l\alpha} \cdot \sqrt{\lambda} \cdot \xi \cdot S_{aileron} \cdot y_{aileron} \cdot q_\infty$$Nomenclature:
$p$: dimensionless rolling speed ($= \omega_x\cdot\frac{b}{2\cdot v_\infty}$); $\omega_x$ is the roll rate in radians per second
$b$: wing span
$c_{l\xi}$: aileron lift increase with deflection angle $\xi$
$\xi_{l,r}$: left and right aileron deflection angles (in radians)
$c_{lp}$: roll damping
$c_{l\alpha}$: the wing's lift coefficient gradient over angle of attack. See this answer on how to calculate it.
$\pi$: 3.14159$\dots$
$AR$: aspect ratio of the wing
$\lambda$: relative aileron chord
$S_{aileron}$: surface area of the aileron-equipped part of the wing
$S_{ref}$: reference area (normally the wing's area)
$y_{aileron}$: spanwise center of the aileron-equipped part of the wing
$v_\infty$: true flight speed
$q_\infty$: dynamic pressure
Depending on the relative chord length of the aileron, this formula is good for maximum deflections of 20° of a 20% chord aileron or 15° deflection of a 30% chord aileron. Remember: This is a rough estimate for straight wings.
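As a rough illustration of the formulas above, here is a small Python sketch. The numeric inputs (aspect ratio, lift slope, aileron geometry, deflection, dynamic pressure) are assumed example values of my own, not data from the answer.

```python
from math import pi, sqrt

def roll_damping(AR):
    """Roll damping coefficient c_lp for an unswept wing (formula above)."""
    return -0.25 * pi * AR / (sqrt(AR**2 / 4 + 4) + 2)

def aileron_moment(cl_alpha, lam, xi, S_aileron, y_aileron, q_inf):
    """Rough rolling moment of one aileron:
    M = cl_alpha * sqrt(lambda) * xi * S_aileron * y_aileron * q_inf."""
    return cl_alpha * sqrt(lam) * xi * S_aileron * y_aileron * q_inf

# Assumed example: AR = 10 wing, 20% chord aileron at 15 deg,
# dynamic pressure at 50 m/s sea level.
xi = 15 * pi / 180
M = aileron_moment(cl_alpha=5.5, lam=0.2, xi=xi,
                   S_aileron=1.2, y_aileron=3.5, q_inf=0.5 * 1.225 * 50**2)
print(roll_damping(10), M)
```

Computing the moment separately per aileron, with the actual (unequal) up and down deflections, matches the answer's advice.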
So let us start with the "simplest" scheme over $Spec(\mathbf{Z})$ namely $X_0=Spec(\mathbf{Z})$. Then the (reciprocal) Weil zeta function of $X_0$ at a prime $p$ is given by $Z_p(T)=1-T$ (a polynomial of degree $1$). So the Hasse-Weil zeta function of $X_0$ is given by $$ L_{X_0}(s):=\prod_{p}Z_p(p^{-s})^{-1}=\zeta(s). $$ Now if one lets $\psi(z)=\sum_{n\in\mathbf{Z}}e^{i\pi n^2z}$ then $\psi(z)$ is a modular form of weight $1/2$ (over a suitable congruence group of $SL_2(\mathbf{Z})$) in the sense that $$ (-iz)^{-1/2}\psi(-1/z)=\psi(z). \;\;\;\;(*) $$ A straightforward computation which uses the definition of the Gamma function implies that $$ \tilde{L}_{X_0}(s):=\int_{0}^{\infty}(\psi(it)-1)t^{s/2}\frac{dt}{t}=2\pi^{-s/2}\Gamma(s/2)\zeta(s). $$ Using the functional equation $(*)$ one may deduce the meromorphic continuation and the functional equation of $\zeta(s)$ (invariance of $\tilde{L}_{X_0}(s)$ under $s\mapsto 1-s$).
Now let us take the scheme $X_1=\mathbf{P}^1$ over $Spec(\mathbf{Z})$. Then the (reciprocal) Weil zeta function of $X_1$ at a prime $p$ is given by $Z_p(T)=(1-T)(1-pT)=1-\sigma_1(p)T+pT^2$ (a polynomial of degree $2$). It thus follows that the Hasse-Weil zeta function of $X_1$ is given by $$ L_{X_1}(s)=\prod_{p}Z_p(p^{-s})^{-1}=\zeta(s)\zeta(s-1). $$ Now let us look at the Eisenstein series of weight $2$ i.e. $$ E_2(z)=(2\pi i)^{-2}\sum_{m,n}'\frac{1}{(mz+n)^2}:=\frac{-B_2}{2}+2\sum_{n\geq 1}\sigma_1(n)q_{z}^n, $$ where $q_{z}=e^{2\pi iz}$. (Note that I don't get any convergence issue here since I take this $q$-expansion as the definition of $E_2(z)$). Note that $E_2(z)$ is "almost" a modular form of weight $2$ (for the full congruence group $SL_2(\mathbf{Z})$) since $$ (-z)^{-2}E_2(-1/z)=E_2(z)-\frac{1}{2\pi iz} \;\;\;\; (**) $$ A straightforward computation similar to the one before implies that $$ \tilde{L}_{X_1}(s):=\int_{0}^{\infty} (E_2(it)+B_2/2)t^{s}\frac{dt}{t}=2 (2\pi)^{-s}\Gamma(s)\zeta(s)\zeta(s-1). $$ As before using $(**)$ one obtains the meromorphic continuation of $L_{X_1}(s)$ and its functional equation ($\tilde{L}_{X_1}(s)=-\tilde{L}_{X_1}(2-s)$, note the appearance of the sign $-1$). Note that this could already be deduced from what we know from $L_{X_0}(s)$.
Now there is no reason to stop here. So let $X_2=\mathbf{P}^2$ over $Spec(\mathbf{Z})$. Then the Weil zeta function of $X_2$ at $p$ is $Z_p(T)=(1-T)(1-pT)(1-p^2T)$ (a polynomial of degree $3$). It thus follows that the Hasse-Weil zeta function of $X_2$ is given by $$ L_{X_2}(s)=\prod_{p}Z_p(p^{-s})^{-1}=\zeta(s)\zeta(s-1)\zeta(s-2)=\sum_{n\geq 1}\frac{a_n}{n^s} $$
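One can machine-check that the Dirichlet coefficients of $\zeta(s)\zeta(s-1)\zeta(s-2)$ are the triple convolution $a_n=\sum_{xyz=n} y\,z^2$, with $a_p=1+p+p^2$ at primes, matching the degree-3 factor $Z_p(T)$. A small Python sketch (the notation is mine, not from the question):

```python
# Dirichlet coefficient of zeta(s) * zeta(s-1) * zeta(s-2):
# the convolution of the sequences 1, n, n^2.
def a(n):
    total = 0
    for x in range(1, n + 1):
        if n % x:
            continue
        for y in range(1, n // x + 1):
            if (n // x) % y:
                continue
            z = n // (x * y)
            total += y * z * z
    return total

for p in (2, 3, 5, 7):
    assert a(p) == 1 + p + p * p   # coefficient of T in 1/Z_p(T)

print([a(n) for n in range(1, 8)])  # → [1, 7, 13, 35, 31, 91, 57]
```

The coefficients are multiplicative (e.g. $a_6=a_2a_3$), as they must be for an Euler product.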
Q1: Is it reasonable to expect the formal $q$-expansion $f(q_z)=\sum_{n\geq 1} a_n q_z^n$ to be related in some direct way to an automorphic form w.r.t. a suitable congruence subgroup of $GL_3(\mathbf{Z})$?
Q2: What about $X_n=\mathbf{P}^n$ in general?
added: Note that in the case of $X_0$ I'm really looking at $\tilde{\zeta}(s):=\zeta(2s)$ which is an $L$-function of weight $1/2$ in the sense that $\tilde{\zeta}(s)$ is related to $\tilde{\zeta}(1/2-s)$ which is in accordance with the fact that $\psi(z)$ has weight $1/2$.
1. Homework Statement
A large 16.0 kg roll of paper with radius R=18.0 cm rests against the wall and is held in place by a bracket attached to a rod through the center of the roll. The rod turns without friction in the bracket, and the moment of inertia of the paper and rod about the axis is 0.260 kg m^2. The other end of the bracket is attached by a frictionless hinge to the wall such that the bracket makes an angle of 30.0° with the wall. The weight of the bracket is negligible. The coefficient of kinetic friction between the paper and the wall is μ= 0.25. A constant vertical force F= 60.0 N is applied to the paper, and the paper unrolls.
1. What is the magnitude of the force that the rod exerts on the paper as it unrolls?
2. What is the magnitude of the angular acceleration of the roll?
2. Homework Equations
[itex]\sum F_{x}=0[/itex]
[itex]\sum F_{y}=0[/itex]
[itex]a=\frac{\tau}{I}[/itex]
3. The Attempt at a Solution
[itex]\sum F_{y}=60N+W-\mu*F=0[/itex] where [itex]W=16kg*9.8m/s^{2}[/itex] and [itex]\sum F_{x}=Tcos(30)=0[/itex]
I'm also not sure in what direction the force vector should be pointing when it says "What is the magnitude of the force that the rod exerts on the paper as it unrolls". The paper is in all directions around the rod so I'm not sure how to look at this problem.
Please help, I'm really lost.
I have the following inequation: $\sqrt{t^{2}-t-12}<7-t$. Can I just set both sides of the inequation to the power of two, or is there any condition under which exponentiating is an equivalent operation?
Thanks
This is not an equation; it's an inequality.
However, you have that $\exp\text{LHS}<\exp\text{RHS}$ because the exponential function is monotonic.
Oh, it seems the operation you want to perform is not exponentiation, but empowerment -- raising to a power. You may always do this if the power you want to raise both sides to is odd or a fraction with odd denominator since such functions are monotonic. However, when they are even, or have even denominator, you have to make sure that both sides are nonnegative. Otherwise, you need to simultaneously reverse the inequality, too.
Thus, you may square both sides without changing the $<$ to $>$ only whenever $7-t\ge 0.$ Of course, $\text{LHS},$ being a square root, is always nonnegative whenever it is real.
You have used many terms wrongly here, so let me correct the last: equivalent operation. Well, it seems like what you mean is order-preserving operation. What we have are
equivalence relations, which are quite different things from operations that respect linear orders. Hints:
It is valid to exponentiate if you preserve the inequality, that is, if $$a<b\iff a^c<b^c$$
We would have a problem if we were to do this as follows: $$-2<1\iff (-2)^2<1^2$$
In your case there is no problem, since the squareroot is defined to be positive, we only have to make sure that it stays that way. We have to look at $t:t^2-t-12\geq 0$, that is, $t\in (-\infty,-3]\cup [4,\infty)$, and also $t< 7$ (otherwise we would have $\sqrt{t^2-t-12}<0$, which is a problem). This forces us to have $t\in (-\infty,-3]\cup [4,7)$.
Given these conditions we know that $$\sqrt{t^2-t-12}<7-t\iff t^2-t-12<(7-t)^2$$ This simplifies to $$t<\frac{61}{13}$$
It must be $$t^2-t-12\geq 0$$ and $$7-t>0,$$ and then raising to the power two we get $$t^2-t-12<49+t^2-14t$$ and so $$t<\frac{61}{13}.$$ Combining this with the two conditions above, we finally get $$t\in(-\infty,-3]\cup\left[4,\tfrac{61}{13}\right).$$
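A quick numeric scan can sanity-check the solution set (illustrative Python; the helper `holds` is my own):

```python
from math import sqrt

def holds(t):
    """Is sqrt(t^2 - t - 12) < 7 - t satisfied (with the LHS real)?"""
    d = t * t - t - 12
    return d >= 0 and 7 - t > 0 and sqrt(d) < 7 - t

# Domain: t <= -3 or t >= 4; squaring then gives t < 61/13 ≈ 4.692.
print(holds(-5), holds(4.5), holds(61 / 13 + 0.01), holds(0))
# → True True False False
```

Points inside $(-\infty,-3]\cup[4,\tfrac{61}{13})$ satisfy the inequality; points just past $61/13$, or where the radicand is negative, do not.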
"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever."
Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum just like a light wave has momentum, but we generally think of a gravitational field as a static object, like an electrostatic field.
"You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment. "
So I've got a small bottle that I filled up with salt. I put it on the scale and its mass is 83 g. I've also got a jug of water that has 500 g of water. I put the bottle in the jug and it sank to the bottom. I have to figure out how much salt to take out of the bottle such that the weight force of the bottle equals the buoyancy force.
For the buoyancy do I: density of water * volume of water displaced * gravity acceleration?
so: mass of bottle * gravity = volume of water displaced * density of water * gravity?
@EmilioPisanty The measurement operators that I suggested in the comments of the post are fine, but I additionally would like to control the width of the Poisson distribution (much like we can do for the normal distribution using variance). Do you know whether this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_C\,dC = 1$$?
As a workaround while this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including:ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat. Commonly used in the Mathematics chat room.An altern...
You're always welcome to ask. One of the reasons I hang around in the chat room is because I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer.
Though as it happens I have to go now - lunch time! :-)
@JohnRennie It's possible to do it using the energy method. Just we need to carefully write down the potential function which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with zero point at the center of the earth.
Anonymous
Also I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P
I see with concern the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic but not increased reviewing or something else, I'm not sure
Not sure about that, but the converse is certainly false :P
Derrida has received a lot of criticism from the experts on the fields he tried to comment on
I personally do not know much about postmodernist philosophy, so I shall not comment on it myself
I do have strong affirmative opinions on textual interpretation, made disjoint from authoritorial intent, however, which is a central part of Deconstruction theory. But I think that dates back to Heidegger.
I can see why a man of that generation would lean towards that idea. I do too.
We are currently in the midst of updating the too-long-unupdated software which underlies Science After Sunclipse. This process has largely gone without incident, though some mathematical formulæ are failing to display properly. Until we get everything sorted, just imagine an early-1990s “This Website Under Construction” icon next to anything which doesn’t look right.
Consider the Lagrangian density
\[ \mathcal{L} (\tilde{\phi},\phi) = \tilde{\phi}\left(\partial_t + D_A(r_A - \nabla^2)\right)\phi - u\tilde{\phi}(\tilde{\phi} - \phi)\phi + \tau \tilde{\phi}^2\phi^2. \]
Particle physicists of the 1970s would recognize this as the Lagrangian for a Reggeon field theory with triple- and quadruple-Pomeron interaction vertices. In the modern literature on theoretical ecology, it encodes the behaviour of a spatially distributed predator-prey system near the predator extinction threshold.
Such is the perplexing unity of mathematical science: formula X appears in widely separated fields A and Z. Sometimes, this is a sign that a common effect is at work in the phenomena of A and those of Z; or, it could just mean that scientists couldn't think of anything new and kept doing whatever worked the first time. Wisdom lies in knowing which is the case on any particular day.
[Reposted from the archives, in the light of John Baez’s recent writings.]
Linear Boltzmann equation and fractional diffusion
1. Laboratoire J.-L. Lions, BP 187, 75252 Paris Cedex 05, France
2. CMLS, École polytechnique, 91128 Palaiseau Cedex, France
3. DPMMS, University of Cambridge, Wilberforce Road, CB3 0WA Cambridge, United Kingdom
Consider the linear Boltzmann equation of radiative transfer in a half-space, with constant scattering coefficient $\sigma$. Assume that, on the boundary of the half-space, the radiation intensity satisfies the Lambert (i.e. diffuse) reflection law with albedo coefficient $\alpha$. Moreover, assume that there is a temperature gradient on the boundary of the half-space, which radiates energy in the half-space according to the Stefan-Boltzmann law. In the asymptotic regime where $\sigma\to+∞$ and $ 1-\alpha \sim C/\sigma$, we prove that the radiation pressure exerted on the boundary of the half-space is governed by a fractional diffusion equation. This result provides an example of fractional diffusion asymptotic limit of a kinetic model which is based on the harmonic extension definition of $\sqrt{-\Delta}$. This fractional diffusion limit therefore differs from most of other such limits for kinetic models reported in the literature, which are based on specific properties of the equilibrium distributions ("heavy tails") or of the scattering coefficient as in [U. Frisch-H. Frisch: Mon. Not. R. Astr. Soc. 181 (1977), 273-280].

Keywords: Linear Boltzmann equation, radiative transfer equation, diffusion approximation, fractional diffusion.
Mathematics Subject Classification: Primary: 45K05, 45M05, 35R11; Secondary: 82C70, 85A25.
Citation: Claude Bardos, François Golse, Ivan Moyano. Linear Boltzmann equation and fractional diffusion. Kinetic & Related Models, 2018, 11 (4) : 1011-1036. doi: 10.3934/krm.2018039
I hope nobody minds that I exhume this question, but I found it interesting that this integral can be obtained by a relatively straightforward contour integration method.
Observe that,following the question opener and using parity, that we can rewrite the integral as
$$\frac{1}{2}\int^{\infty}_{-\infty}\frac{1}{1+t^2}\,\frac{1}{1+\sin^2(t)}\,dt$$
It is now easy to show that the poles are
$$t_{\pm}=\pm i,\qquad t_{n\pm}=\pi n\pm i \operatorname{arcsinh}(1),$$
so we have two isolated poles and the rest lie on two straight lines parallel to the real axis.
Because the integrand, interpreted as a complex function, decays as $|z|\rightarrow\infty$ in the upper half plane, we can close the contour with a semicircle in the upper half plane. We find
$$I=\pi i\sum_{n=-\infty}^{\infty}\text{res}(t_{n+})+\pi i \text{res}(t_{+})$$
Where the residues are given by $$\text{res}(t_{+})=\frac{i}{2}\frac{1}{\sinh^2(1)-1},\qquad \text{res}(t_{n+})=\frac{-i}{2\sqrt{2}}\frac{1}{1+(n \pi+i \operatorname{arcsinh}(1) )^2}$$
Therefore the integral reduces to the following sum
$$I=\frac{\pi}{2\sqrt{2}} \sum_{n=-\infty}^{\infty} \frac{1}{1+(n \pi+i \operatorname{arcsinh}(1))^2} -\frac{\pi}{2}\frac{1}{\sinh^2(1)-1}$$
Using a partial fraction decomposition together with the Mittag-Leffler expansion of $\coth(x)$, this can be rewritten as
$$I=\frac{\pi}{4\sqrt{2}} \sum_{n=-\infty}^{\infty}\left( \frac{-i}{n \pi+ i\left(\operatorname{arcsinh}(1)-1\right)}+ \frac{i}{n \pi+ i\left(\operatorname{arcsinh}(1)+1\right)}\right)-\frac{\pi}{2}\frac{1}{\sinh^2(1)-1}=\\\frac{\sqrt{2} \pi}{8} \left( \coth \left(1-\operatorname{arcsinh}(1)\right)+ \coth \left(1+\operatorname{arcsinh}(1)\right)\right)-\frac{\pi}{2}\frac{1}{\sinh^2(1)-1}$$
Or $$I\approx 1.16353$$
Which matches the claimed result.
One can also compute this explicitly noting that $\operatorname{arcsinh}(1)=\log(1+\sqrt{2})$ (*). But this is rather tedious so I just leave this step to the reader and conclude that $$I=\frac{\pi}{2\sqrt{2}}\cdot\frac{e^2+3-2\sqrt{2}}{e^2-3+2\sqrt{2}}$$
Appendix
Just to give some details of the last part of the calculations:
Using (*) the part stemming from the sum is $$\frac{\pi}{4\sqrt{2}}\left(\frac{ \frac{1+\sqrt{2}}{e}+\frac{e}{1+\sqrt{2}}}{ \frac{e}{1+\sqrt{2}}-\frac{1+\sqrt{2}}{e}}+\frac{e \left(1+\sqrt{2}\right)+\frac{1}{1+\sqrt{2} e} }{\left(1+\sqrt{2}\right) e-\frac{1}{\left(1+\sqrt{2}\right) e}}\right)=\\\frac{\left(e^4-1\right) \pi }{2 \sqrt{2} \left(1-6 e^2+e^4\right)}$$
The part of the single pole gives
$$\frac{\pi }{2 \left(\left(\frac{e}{2}-\frac{1}{2 e}\right)^2-1\right)}=\frac{2 e^2 \pi }{1-6 e^2+e^4}$$
Subtracting the second term from the first and factorizing then yields the desired result.
Torsion Submodule, Integral Domain, and Zero Divisors
Problem 409
Let $R$ be a ring with $1$. An element of the $R$-module $M$ is called a
torsion element if $rm=0$ for some nonzero element $r\in R$. The set of torsion elements is denoted \[\Tor(M)=\{m \in M \mid rm=0 \text{ for some nonzero } r\in R\}.\] (a) Prove that if $R$ is an integral domain, then $\Tor(M)$ is a submodule of $M$. (Remark: an integral domain is a commutative ring by definition.) In this case the submodule $\Tor(M)$ is called the torsion submodule of $M$. (b) Find an example of a ring $R$ and an $R$-module $M$ such that $\Tor(M)$ is not a submodule. (c) If $R$ has nonzero zero divisors, then show that every nonzero $R$-module has a nonzero torsion element.
Proof.
(a) Prove that if $R$ is an integral domain, then $\Tor(M)$ is a submodule of $M$.
To prove $\Tor(M)$ is a submodule of $M$, we check the following submodule criteria:
1. $\Tor(M)$ is not empty. 2. For any $m, n\in \Tor(M)$ and $t\in R$, we have $m+tn\in \Tor(M)$.
It is clear that the zero element $0$ in $M$ is in $\Tor(M)$, hence condition 1 is met.
To prove condition 2, let $m, n \in \Tor(M)$ and $t\in R$.
Since $m, n$ are torsion elements, there exist nonzero elements $r, s\in R$ such that $rm=0, sn=0$.
Since $R$ is an integral domain, the product $rs$ of nonzero elements is nonzero.
We have \begin{align*} rs(m+tn)&=rsm+rstn\\ &=s(rm)+rt(sn) &&\text{($R$ is commutative)}\\ &=s0+rt0=0. \end{align*} This yields that $m+tn$ is a torsion element, hence $m+tn\in \Tor(M)$. Thus condition 2 is met as well, and we conclude that $\Tor(M)$ is a submodule of $M$. (b) Find an example of a ring $R$ and an $R$-module $M$ such that $\Tor(M)$ is not a submodule.
Let us consider $R=\Zmod{6}$ and let $M$ be the $R$-module $R$.
We just simply write $n$ for the element $n+6\Z$ in $R=\Zmod{6}$. Then we have \[3\cdot 2=0 \text{ and } 2\cdot 3=0.\] This implies that $2$ and $3$ are torsion elements of the module $M$.
However, the sum $5=2+3$ is not a torsion element in $M$ since if $r\cdot 5=0$ in $\Zmod{6}$, then $r=0$.
Thus, $\Tor(M)$ is not closed under addition. Hence it is not a submodule of $M$. (c) If $R$ has nonzero zero divisors, then show that every nonzero $R$-module has nonzero torsion element.
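The counterexample is small enough to check exhaustively; a minimal sketch (plain brute force over $\Zmod{6}$, using nothing beyond the standard library):

```python
# R = M = Z/6Z acting on itself: m is torsion iff r*m = 0 mod 6
# for some nonzero r in {1, ..., 5}.
def is_torsion(m, modulus=6):
    return any(r != 0 and (r * m) % modulus == 0 for r in range(modulus))

torsion = {m for m in range(6) if is_torsion(m)}
print(sorted(torsion))  # [0, 2, 3, 4]: 2 and 3 are torsion, but 2 + 3 = 5 is not
```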
Let $r$ be a nonzero zero divisor of $R$. That is, there exists a nonzero element $s\in R$ such that $rs=0$. Let $M$ be a nonzero $R$-module and let $m$ be a nonzero element in $M$.
Put $n=sm$.
If $n=0$, then $sm=0$ with $s\neq 0$, so $m$ itself is a nonzero torsion element of $M$, and we are done.
If $n\neq 0$, then we have \begin{align*} rn=r(sm)=(rs)m=0m=0. \end{align*} This yields that $n$ is a nonzero torsion element of $M$.
Hence, in either case, we obtain a nonzero torsion element of $M$. This completes the proof.
Recent CMS results on exclusive processes
Pre-published on: 2019 June 28
Published on: 2019 October 04
Abstract
Exclusive vector meson photoproduction is studied in ultra-peripheral pPb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The cross sections are measured as a function of the photon-proton centre-of-mass energy, extending the energy range explored by H1 and ZEUS Experiments at HERA. In addition, the differential cross sections ($d\sigma/d|t|$), where $|t|\approx p^{2}_{T}$ is the squared transverse momentum of produced vector mesons, are measured and the slope parameters are obtained. The results are compared to previous measurements and to theoretical predictions.
DOI: https://doi.org/10.22323/1.352.0048
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
A Dutch variation of this blog post is available.
Astrophysicist Mario Livio has joined the community of people who spread the rumor about a bump that will be presented on Tuesday between 3 pm and 5 pm:
Rumor from #LHC @CERN potential detection of excess at \(700\GeV\) [most recently sometimes said to be \(750\GeV\)] decaying into 2 photons (3 sigma in both @ATLASexperiment & @CMSexperiment)

I think that by now, everyone who wanted to know something about the 2015 results has heard or seen this rumor so let me officially declassify it. BTW the schedule for Tuesday December 15th (webcast.cern.ch): 15:00-15:40 CMS, Jim Olsen (Princeton U., USA); 15:40-16:20 ATLAS, Marumi Kado (Lab. de l'Acc. Lin., FR).
All this knowledge could have come from a single source and the source may hypothetically be a prankster. But I have some feelings that the sources are actually numerous and independent at the root. So I think it's more likely than not that an excess of this kind will be reported. There may be other interesting (and maybe more interesting) excesses announced on Tuesday but I won't discuss this possibility in this blog post at all.
Exactly four years ago, we were shown pictures such as this one. When you collide proton beams and look for final states that include two photons (or "diphotons", as physicists like to call it when they want to pretend that they speak Greek), you may find a 3-sigmaish bump around \(125\GeV\) – which is the value of the invariant mass \(\sqrt{(p_1^\mu+p_2^\mu)^2}\) of the two photons – already with the data that was available by the end of 2011.
The Higgs boson had to be somewhere on the mass axis and the remaining values had been largely excluded so it was pretty much unavoidable that this excess did mean that the Higgs boson existed and had the mass around \(125\GeV\) – we were more likely to call the mass \(126\GeV\) at least throughout 2012 but the most accurate values are close to \(125.1\GeV\) now. Additional channels strengthened in 2012. The Higgs was seen to decay to \(ZZ^*\), a pair of massive electroweak gauge bosons (one of them must be virtual because they're too heavy), and some lepton and quark pairs later.
Now, on Tuesday, we are likely to be told that there is a very similar \(\gamma\gamma\) excess near the invariant mass of \(700\GeV\) – which happens to be 4 times the top quark mass, if you haven't noticed. There is a 3-sigma excess in the ATLAS data and a similar 3-sigma excess in the CMS data. By the Pythagorean rule, the "combined" excess has the significance of \[
\sqrt{3^2 + 3^2} = \sqrt{18} \approx 4.24.
\] Even the combination is insufficient to be a discovery but the excess is pretty strong. But if your prior probability that a "similar particle" should exist is low, 4.2 sigma is a pretty weak piece of evidence even though it's formally just a "1 in 100,000" risk that it's a fluke. Numerous 4-sigma signals have gone away.
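For what it's worth, both the in-quadrature combination and the quoted fluke probability are a one-liner to reproduce (a sketch; the naive quadrature rule ignores correlations and any look-elsewhere effect):

```python
from math import erfc, sqrt

# Two independent 3 sigma excesses combined in quadrature,
# plus the one-sided Gaussian tail probability for the result.
combined = sqrt(3**2 + 3**2)               # ~4.24 sigma
p_one_sided = 0.5 * erfc(combined / sqrt(2))
print(combined, p_one_sided)               # the tail is of order 1e-5
```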
In fact, there were apparently even 4-sigma bumps in diphotons that have gone away. Look at this April 2011 blog post about an ATLAS diphoton memo. At that time, it suggested that the Higgs boson could have existed and have the mass of \(115\GeV\). It was the most convincing value of the mass at that moment, I think – but you may also verify that I have never made any statements about being "comparably certain" about the \(115\GeV\) Higgs as I made about the \(125\GeV\) Higgs in December.
More...
The somewhat lighter \(115\GeV\) Higgs would have meant a more direct support for some of the most popular models with supersymmetry "right around the corner". Consequently, I think that it is probably not a coincidence that a Finnish company named Darkglass Electronics introduced a compressor called Supersymmetry \(115\GeV\) a bit later. By a compressor, they probably mean some gadget (a pedal!) for electric guitars but I am really confused what the product actually does LOL. They must have followed particle physics – or know someone who does. Or do you seriously believe that someone could have chosen this bizarre name that so accurately picks the excitement of a light Higgs in Spring 2011?
Too bad. If the Higgs were found at \(115\GeV\), I am sure that the pedal would be a huge bestseller. ;-)
Note that \(115\GeV\) was an intriguing value of the mass also because a decade earlier, the LEP collider saw a very weak hint of a Higgs boson at that mass – which was coincidentally the maximum mass that LEP was able to probe. At any rate, we know that the \(115\GeV\) hints went away (the same was true for hints around \(140\)-\(145\GeV\) from the Tevatron etc.) and a new bump ultimately began to emerge around \(125\GeV\) later in 2011 and it was declared a discovery on July 4th, 2012.
When the ATLAS and CMS see a very similar bump in the same channel, it may turn out to be a fluke – much like the \(115\GeV\) bump etc. In that case, it's not too interesting. Flukes have always taken place and they will never be banned. However, the signals may also be real – after all, 4.2 sigma isn't quite negligible evidence.
Because the particle decays to two photons, it must be a boson. I think that the simplest guess would be that it's a new Higgs boson. Supersymmetric models predict that there exist at least five faces of the God particle. A new Higgs boson would marginally increase the probability that supersymmetry is right – \(700\GeV\) is an allowed mass, of course. But this new particle wouldn't be a superpartner yet so I wouldn't say that "supersymmetry will have been discovered" as soon as this new scalar boson were proven to exist.
However, a new Higgs boson would surely be exciting, anyway. We would enter the epoch "Beyond the Standard Model". It would mean that despite all the talk about the Standard Model's nearly eternal validity and the nicknames The Core Theory and similar ideology, the completed Standard Model (with all parameters basically known) will have only been valid for some modest four years! ;-) This completely real possibility is the right perspective from which you should evaluate the question whether the modest, technical, seemingly temporary name "The Standard Model" is appropriate for the particular quantum field theory. It surely is appropriate because we are aware of no good reasons why such a theory couldn't fall very soon.
There is an extra problem about any new particle of mass \(700\GeV\) that decays to two photons. What is the extra problem? The extra problem is that this is a pretty low mass and at these low masses, the increase of the LHC energy from \(8\TeV\) to \(13\TeV\) doesn't substantially improve the visibility of the new particle. So a new \(700\GeV\) particle visible in the 4/fb of the 2015 data should have been visible in the 20/fb of the 2012 data, too! But no one saw a big (or any) bump near \(700\GeV\) in the 2012 data, I think, although I am simply unable to find good diphoton graphs from the 2012 data that go this high.
This ATLAS diphoton graph only goes to \(600\GeV\), with an over-two-sigma bump around \(530\GeV\). The CMS graph goes up to \(3.5\TeV\) and is very chaotic around \(700\GeV\) or so. For a cleaner CMS paper, see the comments. Update: on Dec 15th in the morning, I found a TRF blog post and a CMS paper in it that discuss \(8\TeV\) excesses of 2.56 and 2.64 sigma between \(700\) and \(800\GeV\) on page 15.
If they are going to announce a bump in the simple \(\gamma\gamma\) channel, it will "probably" look like a pair of flukes due to the tension with the 2012 data. But the channel may be a more complicated one – with other particles produced simultaneously with the diphoton. If that's so, the process that creates the \(700\GeV\) may involve some heavier particles for which the increase from \(8\TeV\) to \(13\TeV\) is much more useful. Some multi-\({\rm TeV}\) particle (which is easy to produce now but was very hard in 2012) may decay to a product that decays further – we have decay chains or cascades etc.
The new particles that exist before the photons may belong e.g. to hidden valleys. What are hidden valleys? Matt Strassler has given this definition:
An unexpected place … of beauty and abundance … discovered only after a long climb …

Some of the TRF readers may say: Could you please be a bit more specific about the steps 1-4, Dr Strassler? ;-) More seriously, the definition's being vague is pretty much the point of the hidden valleys. Hidden valleys are any collections (hidden sectors) of new particles and fields influencing experiments at accessible energies that look pretty useless at the beginning and that only play a role in the middle of some decay chains. Something more important stands at the beginning of the decay chains; but the chains always end with the Standard Model and not new particles.
To some extent, to focus on hidden valleys means to abandon the minimality – or Occam's razor, if you wish. I would say that minimality is overrated but it's still more sensible to spend more time with models that look simpler than with the contrived ones. One simply shouldn't try to construct completely arbitrary Rube Goldberg machines and test whether they're the right theories of Nature.
On Tuesday, they may announce something much more convoluted than just the "diphoton final state". Perhaps the events will suggest that there is a heavier particle such as the \(2\TeV\) \(W'\)-boson – or a superpartner, if you want to be really ambitious – that decays into some decay products including a new \(700\GeV\) Higgs which decays into two photons. I believe that the number of possibilities is too high here and I won't be able to pick a reasonable candidate of the sort.
Even if these two 3-sigma excesses were the only result in tension with the Standard Model on Tuesday, it should be a very good idea to watch the 2-hour talks carefully. |
This isn't a full answer, just a simple observation that spectral theory gives a partial answer of sorts (as you may already know). See here or any introduction to Banach algebra/C*-algebra theory for supporting details.
A character of a unital commutative Banach algebra $A$ is a unital algebra homomorphism $\phi: A \to \mathbb{C}$. The set $\Phi_A$ of all characters of $A$ has a canonical topology (the relative weak-star topology) which makes it a compact Hausdorff space called the spectrum of $A$. In fact, the assignment $A \mapsto \Phi_A$ is a contravariant functor from unital commutative Banach algebras to compact Hausdorff spaces.
If $X$ is a compact Hausdorff space (one could get away with less, but this is convenient) then $C(X)$ is a unital commutative Banach algebra (in fact a C*-algebra). There is a mapping (actually a homeomorphism) $X \to \Phi_{C(X)}$ sending $x$ to evaluation at $x$. This map$$X \to \Phi_{C(X)}$$is universal in the sense that, if $A$ is another unital commutative Banach algebra and there is a given (continuous) map $$x \mapsto \phi_x : X \to \Phi_A,$$then there is a unique homomorphism $a \mapsto f_a: A \to C(X)$ which makes the diagram$$\begin{matrix}X & \to & \Phi_A \\ & \searrow & \uparrow \\& & \Phi_{C(X)}\end{matrix}$$commutative. It is easy enough to see the uniqueness. Commutativity of the diagram entails that, for each $a \in A, x \in X$, we have $f_a(x) = \phi_x(a)$. It is simple to check that $a \mapsto f_a$ is a homomorphism.
I guess my question to you is whether you want a notion of universality which looks something like this in general? Or if you are looking for something wildly different.
Added: Responding to your last comment, yes the universal property of the mapping $X \to \Phi_{C(X)}$ determines $C(X)$ uniquely as a Banach algebra. What follows is just the usual argument for uniqueness of universal objects. Suppose that there are two Banach algebras $A_1$ and $A_2$ and given mappings \begin{align*} X \to \Phi_{A_1} && X \to \Phi_{A_2} \end{align*} which are universal in the sense outlined above. We shall show there are unique inverse isomorphisms between $A_1$ and $A_2$ which make the relevant triangles commute. Using the universality of each map in turn, we see there are unique homomorphisms \begin{align*} f : A_1 \to A_2 && g : A_2 \to A_1\end{align*} which make the diagrams\begin{align*}\begin{matrix}X & \to & \Phi_{A_1} \\ & \searrow & \uparrow \\& & \Phi_{A_2}\end{matrix} &&\begin{matrix}X & \to & \Phi_{A_2} \\ & \searrow & \downarrow \\& & \Phi_{A_1}\end{matrix}\end{align*}commutative (here the vertical maps are induced contravariantly from $f$ and $g$). Lastly, one notes that the identity map $\mathrm{id}_{A_1}$ and $g \circ f$ are both mappings $A_1 \to A_1$ which make the diagram $$ \begin{matrix}X & \to & \Phi_{A_1} \\ & \searrow & \downarrow \\& & \Phi_{A_1}\end{matrix} $$commute so that $g \circ f = \mathrm{id}_{A_1}$ by uniqueness and, similarly, that $f \circ g = \mathrm{id}_{A_2}$. Thus, $f$ and $g$ are inverse isomorphisms between the Banach algebras $A_1$ and $A_2$.
Let $f$ be an entire function (holomorphic over the complex plane). If $f$ has no zeros, then $\text{Log} f$ is also an entire function. How to prove this?
My idea: one branch of $\text{Log}f$ is well-defined on $\mathbb{C}\setminus \gamma$ for any $\gamma:[0,\infty)\longrightarrow \mathbb{C}$ such that $\gamma(t)\to\infty$ as $t\to\infty$. Once it is well-defined, $\text{Log}f$ is holomorphic. By the open mapping theorem, $f(\mathbb{C})$ is open in $\mathbb{C}$. To show that $\text{Log}$ is well-defined on $f(\mathbb{C})$, we only need to show that $f(\mathbb{C})$ is simply-connected. But I do not know how to continue... Is a holomorphic function with no zeros a homeomorphism?
Since my way is inconvenient and problematic, are there any other ways to prove:
Let $f$ be an entire function (holomorphic over the complex plane). If $f$ has no zeros, then $\text{Log} f$ is also an entire function?
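For reference, the standard construction sidesteps the question about the image being simply connected entirely (a sketch only; the details are the usual Cauchy-theorem argument on the simply connected domain $\mathbb{C}$):

```latex
% f has no zeros, so f'/f is entire; fix w_0 with e^{w_0} = f(0) and set
g(z) \;=\; w_0 + \int_0^z \frac{f'(\zeta)}{f(\zeta)}\, d\zeta .
% The integral is path-independent by Cauchy's theorem, since \mathbb{C}
% is simply connected, so g is entire with g' = f'/f.  Then
\bigl(f e^{-g}\bigr)' \;=\; e^{-g}\bigl(f' - f g'\bigr) \;=\; 0,
% hence f e^{-g} \equiv f(0) e^{-w_0} = 1, i.e. f = e^{g}, and g is an
% entire branch of \operatorname{Log} f.
```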
Dynamic viscosity, abbreviated as \(\mu\) (Greek symbol mu), also called absolute viscosity or simple viscosity, is a measure of the force required to move adjacent fluid layers that are parallel to each other at different speeds. The shear rate and shear stress are combined to determine the dynamic viscosity.
Dynamic Viscosity Formula
\(\large{ \mu = \frac {\tau} {\dot {\gamma}} }\)
Where:
\(\large{ \mu }\) (Greek symbol mu) = dynamic viscosity
\(\large{ \dot {\gamma} }\) (Greek symbol gamma) = shear rate
\(\large{ \tau }\) (Greek symbol tau) = shear stress
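The formula translates directly to code; a minimal sketch (the function name and sample numbers are illustrative, not from the source):

```python
def dynamic_viscosity(shear_stress_pa, shear_rate_per_s):
    """mu = tau / gamma_dot; returns Pa*s when tau is in Pa and gamma_dot in 1/s."""
    return shear_stress_pa / shear_rate_per_s

# e.g. a fluid needing 2.0 Pa of shear stress at a shear rate of 200 1/s
print(dynamic_viscosity(2.0, 200.0))  # 0.01 Pa*s
```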
Tags: Equations for Viscosity |
Electronic Journal of Probability, Volume 16 (2011), paper no. 69, 1900-1933.
Upper large deviations for Branching Processes in Random Environment with heavy tails
Abstract
Branching Processes in Random Environment (BPREs) $(Z_n:n\geq0)$ are the generalization of Galton-Watson processes where 'in each generation' the reproduction law is picked randomly in an i.i.d. manner. The associated random walk of the environment has increments distributed like the logarithmic mean of the offspring distributions. This random walk plays a key role in the asymptotic behavior. In this paper, we study the upper large deviations of the BPRE $Z$ when the reproduction law may have heavy tails. More precisely, we obtain an expression for the limit of $-\log \mathbb{P}(Z_n\geq \exp(\theta n))/n$ when $n\rightarrow \infty$. It depends on the rate function of the associated random walk of the environment, the logarithmic cost of survival $\gamma:=-\lim_{n\rightarrow\infty} \log \mathbb{P}(Z_n \gt 0)/n$ and the polynomial rate of decay $\beta$ of the tail distribution of $Z_1$. This rate function can be interpreted as the optimal way to reach a given "large" value. We then compute the rate function when the reproduction law does not have heavy tails. Our results generalize the results of Böinghoff & Kersting (2009) and Bansaye & Berestycki (2008) for upper large deviations. Finally, we derive the upper large deviations for the Galton-Watson processes with heavy tails.
Article information
Source: Electron. J. Probab., Volume 16 (2011), paper no. 69, 1900-1933.
Dates: Accepted 19 October 2011; first available in Project Euclid 1 June 2016.
Permanent link to this document: https://projecteuclid.org/euclid.ejp/1464820238
Digital Object Identifier: doi:10.1214/EJP.v16-933
Mathematical Reviews number (MathSciNet): MR2851050
Zentralblatt MATH identifier: 1245.60081
Subjects: Primary 60J80 (Branching processes); Secondary 60K37 (Processes in random environments), 60J05 (Discrete-time Markov processes on general state spaces), 92D25 (Population dynamics).
Rights: This work is licensed under a Creative Commons Attribution 3.0 License.
Citation
Bansaye, Vincent; Böinghoff, Christian. Upper large deviations for Branching Processes in Random Environment with heavy tails. Electron. J. Probab. 16 (2011), paper no. 69, 1900--1933. doi:10.1214/EJP.v16-933. https://projecteuclid.org/euclid.ejp/1464820238 |
Let $\phi(n)$ be Euler's totient function. For $n=4m$ what is the smallest $n$ for which
$$\phi(n) \ne \phi(k) \textrm{ for any } k<n \textrm{ ?} \quad (1)$$
When $n=4m+1$ and $n>1$ the smallest is $n=5$ and when $n=4m+3$ it's $n=3.$ However, when $n=4m+2$ the above condition can never be satisfied since
$$\phi(4m+2) = (4m+2) \prod_{p | 4m+2} \left( 1 - \frac{1}{p} \right)$$ $$ = (2m+1) \prod_{p | 2m+1} \left( 1 - \frac{1}{p} \right) = \phi(2m+1).$$
In the case $n=4m,$ $n=2^{33}$ is a candidate and $\phi(2^{33})=2^{32}.$ This value satisfies $(1)$ because $\phi(n)$ is a power of $2$ precisely when $n$ is the product of a power of $2$ and any number of distinct Fermat primes:
$$2^1+1,2^2+1,2^4+1,2^8+1 \textrm{ and } 2^{16}+1.$$
Note that $n=2^{32}$ does not satisfy condition $(1)$ because the product of the above Fermat primes is $2^{32}-1$ and so $\phi(2^{32})=2^{31}=\phi(2^{32}-1)$ and $2^{32}-1 < 2^{32}.$
The only solutions to $\phi(n)=2^{32}$ are given by numbers of the form $n=2^a \prod (2^{x_i}+1)$ where $x_i \in \lbrace 1,2,4,8,16 \rbrace $ and $a+ \sum x_i = 33$ (note that the product could be empty), so all these numbers are necessarily $ \ge 2^{33}.$
Why don't many "small" multiples of $4$ satisfy condition $(1)$? Well, note that for $n=2^a(2m+1)$ we have
$$\phi(2^a(2m+1))= 2^a(2m+1) \prod_{p | 2^a(2m+1)} \left( 1 - \frac{1}{p} \right)$$ $$ = 2^{a-1}(2m+1) \prod_{p | 2m+1} \left( 1 - \frac{1}{p} \right) = 2^{a-1}\phi(2m+1),$$
and so, for $a \ge 2,$ if $2^{a-1}\phi(2m+1)+1$ is prime we can take this as our value of $k<n$ and we have $\phi(n)=\phi(k).$ This, together with the existence of the Fermat primes, seems to be why it's difficult to satisfy when $n=4m.$
I have only made hand calculations so far, so I would not be too surprised if the answer is much smaller than my suggestion. The problem is well within the reach of a computer, and possibly further analysis without the aid of a computer. But, anyway, I've decided to ask here as many of you have ready access to good mathematical software and I'm very intrigued to know whether there is a smaller solution than $2^{33}.$
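A brute-force search is easy to automate. The sketch below (the names and the search bound are mine) reproduces the values $5$ and $3$ for the odd residue classes and, kept to a modest bound for speed, finds no qualifying $n\equiv 0 \pmod 4$:

```python
def phi(n):
    # Euler's totient via trial-division factorization
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def first_new_phi_in_class(limit, residue):
    # smallest n with 1 < n <= limit and n % 4 == residue such that
    # phi(n) differs from phi(k) for every k < n; None if there is none
    seen = {1}                      # phi(1) = 1
    for n in range(2, limit + 1):
        v = phi(n)
        if n % 4 == residue and v not in seen:
            return n
        seen.add(v)
    return None

print(first_new_phi_in_class(100, 1), first_new_phi_in_class(100, 3))
```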
Some background information:
This question arose in my search to bound the function $\Phi(n)$ defined as follows.
Let $\Phi(n)$ be the number of distinct values taken on by $\phi(k)$ for $1 \le k \le n.$ For example, $\Phi(13)=6$ since $\phi(k)$ takes on the values $\lbrace 1,2,4,6,10,12 \rbrace$ for $1 \le k \le 13.$
It is clear that $\Phi(n)$ is increasing and increases by $1$ at each prime value of $n,$ except $n=2,$ but it also increases at other values as well. For example, $\Phi(14)=6$ and $\Phi(15)=7.$
Currently, for an upper bound, I'm hoping to do better than $\Phi(n) \le \lfloor (n+1)/2 \rfloor .$
But this is not the issue at the moment, although it may well become a separate question.
This work originates from this stackexchange problem. |
$R$ is taken to be the ring of upper triangular $3 \times 3$ matrices with entries in $\mathbb{R}$.
If I view $R$ as a module over itself, are any of its submodules free?
And how can I prove that its submodules are projective $R$-modules?
Thanks!
None of its proper submodules could be free. Think about it: $R$ is a $6$-dimensional $\Bbb R$-algebra. It could not contain even a single copy of itself properly, considering that a single copy would have dimension at least $6$.
It certainly contains projective submodules, though. For any idempotent $e$, $eR$ is going to be a summand of $R$, and hence a projective module.
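For a concrete feel (a numerical sanity check assuming numpy; the particular idempotent is my choice), take $e = E_{11}$:

```python
import numpy as np

# e = E_11 is an idempotent of R; eR consists of the upper triangular
# matrices supported on the first row -- a direct summand, hence projective.
e = np.zeros((3, 3))
e[0, 0] = 1.0
assert np.allclose(e @ e, e)                        # e is idempotent

r = np.triu(np.arange(1.0, 10.0).reshape(3, 3))     # a generic element of R
er = e @ r                                          # only the first row survives
print(er)
```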
Actually triangular matrix rings over fields are hereditary, meaning that all their left and right ideals are projective.
The only proofs I'm aware of for this would be a little long to write out. I know a proof appears in section 2 of Lectures on Modules and Rings, and in section 25 of First Course in Noncommutative Rings, both books by T. Y. Lam.
The version I like is the one where you show that a ring is right hereditary iff its Jacobson radical is a projective right $R$ module.
The algebra you're studying is the path algebra over the quiver $$ \underset{1}{\bullet}\xrightarrow{\alpha}\underset{2}{\bullet} \xrightarrow{\beta}\underset{3}{\bullet} $$ so it's hereditary as all path algebras. You find a proof of this result in a paper by C. M. Ringel, see section 3, page 3. |
The best model is not always the most complicated. Sometimes including variables that are not evidently important can actually reduce the accuracy of predictions. In this section we discuss model selection strategies, which will help us eliminate from the model variables that are less important. In this section, and in practice, the model that includes all available explanatory variables is often referred to as the full model. Our goal is to assess whether the full model is the best model. If it isn't, we want to identify a smaller model that is preferable.

Identifying Variables in the Model that may not be Helpful
Table 8.6 provides a summary of the regression output for the full model for the auction data. The last column of the table lists p-values that can be used to assess hypotheses of the following form:
H\(_0\): \(\beta _i = 0\) when the other explanatory variables are included in the model.
H\(_A\): \(\beta _i \ne 0\) when the other explanatory variables are included in the model.
              Estimate  Std. Error  t value  Pr(>|t|)
(Intercept)    36.2110      1.5140    23.92    0.0000
cond_new        5.1306      1.0511     4.88    0.0000
stock_photo     1.0803      1.0568     1.02    0.3085
duration       -0.0268      0.1904    -0.14    0.8882
wheels          7.2852      0.5547    13.13    0.0000
Example \(\PageIndex{1}\)
The coefficient of cond new has a t test statistic of T = 4.88 and a p-value for its corresponding hypotheses (\(H_0 : \beta _1 = 0, H_A : \beta _1 \ne 0\)) of about zero. How can this be interpreted?
Solution
If we keep all the other variables in the model and add no others, then there is strong evidence that a game's condition (new or used) has a real relationship with the total auction price.
Example \(\PageIndex{2}\)
Is there strong evidence that using a stock photo is related to the total auction price?
Solution
The t test statistic for stock photo is T = 1.02 and the p-value is about 0.31. After accounting for the other predictors, there is not strong evidence that using a stock photo in an auction is related to the total price of the auction. We might consider removing the stock photo variable from the model.
Exercise \(\PageIndex{1}\)
Identify the p-values for both the duration and wheels variables in the model. Is there strong evidence supporting the connection of these variables with the total price in the model?
Answer
The p-value for the auction duration is 0.8882, which indicates that there is not statistically significant evidence that the duration is related to the total auction price when accounting for the other variables. The p-value for the Wii wheels variable is about zero, indicating that this variable is associated with the total auction price.
There is not statistically significant evidence that either the stock photo or duration variables contribute meaningfully to the model. Next we consider common strategies for pruning such variables from a model.
TIP: Using adjusted \(R^2\) instead of p-values for model selection
The adjusted \(R^2\) may be used as an alternative to p-values for model selection, where a higher adjusted \(R^2\) represents a better model fit. For instance, we could compare two models using their adjusted \(R^2\), and the model with the higher adjusted \(R^2\) would be preferred. This approach tends to include more variables in the final model when compared to the p-value approach.
Two model selection strategies
Two common strategies for adding or removing variables in a multiple regression model are called backward-selection and forward-selection. These techniques are often referred to as stepwise model selection strategies, because they add or delete one variable at a time as they "step" through the candidate predictors. We will discuss these strategies in the context of the p-value approach. Alternatively, we could have employed an \(R^2_{adj}\) approach.
The backward-elimination strategy starts with the model that includes all potential predictor variables. Variables are eliminated one-at-a-time from the model until only variables with statistically significant p-values remain. The strategy within each elimination step is to drop the variable with the largest p-value, refit the model, and reassess the inclusion of all variables.
Example \(\PageIndex{3}\)
Results corresponding to the full model for the mario kart data are shown in Table 8.6. How should we proceed under the backward-elimination strategy?
Solution
There are two variables with coefficients that are not statistically different from zero: stock_photo and duration. We first drop the duration variable since it has the larger corresponding p-value, then we refit the model. A regression summary for the new model is shown in Table 8.7.
In the new model, there is not strong evidence that the coefficient for stock_photo is different from zero, even though the p-value decreased slightly, and the other p-values remain very small. Next, we again eliminate the variable with the largest non-significant p-value, stock_photo, and refit the model. The updated regression summary is shown in Table 8.8.
In the latest model, we see that the two remaining predictors have statistically significant coefficients with p-values of about zero. Since there are no variables remaining that could be eliminated from the model, we stop. The final model includes only the cond_new and wheels variables in predicting the total auction price:
\[ \begin{align} \hat {y} &= b_0 + b_1x_1 + b_4x_4 \\ &= 36.78 + 5.58x_1 + 7.23x_4 \end{align} \]
where \(x_1\) represents cond_new and \(x_4\) represents wheels.
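The fitted equation can be wrapped in a small helper for point predictions (a sketch; `predict_price` is an illustrative name, and the coefficients are the rounded values shown above):

```python
def predict_price(cond_new, wheels):
    """Point estimate of total auction price from the final model.

    cond_new: 1 if the game is new, 0 if used; wheels: number of Wii wheels.
    Coefficients are the rounded values from the fitted model in the text.
    """
    return 36.78 + 5.58 * cond_new + 7.23 * wheels
```

For example, a new game bundled with two wheels would be estimated at 36.78 + 5.58 + 2(7.23) = 56.82 dollars.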
An alternative to using p-values in model selection is to use the adjusted \(R^2\). At each elimination step, we refit the model without each of the variables up for potential elimination. For example, in the first step, we would fit four models, where each would be missing a different predictor. If one of these smaller models has a higher adjusted \(R^2\) than our current model, we pick the smaller model with the largest adjusted \(R^2\). We continue in this way until removing variables does not increase \(R^2_{adj}\) . Had we used the adjusted \(R^2\) criteria, we would have kept the stock photo variable along with the cond new and wheels variables.
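The adjusted-\(R^2\) elimination loop described here can be sketched in a few lines (an illustration on simulated data, not the mario kart dataset; the variable names are borrowed from the auction example):

```python
import numpy as np

def adj_r2(X, y):
    """Adjusted R^2 of an OLS fit of y on the columns of X (plus intercept)."""
    n, k = X.shape
    Xd = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    r2 = 1 - resid @ resid / ((y - y.mean()) ** 2).sum()
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

def backward_eliminate(X, y, names):
    """Repeatedly drop the predictor whose removal most raises adjusted R^2."""
    keep = list(range(X.shape[1]))
    best = adj_r2(X[:, keep], y)
    while len(keep) > 1:
        scores = [(adj_r2(X[:, [c for c in keep if c != j]], y), j) for j in keep]
        top, j = max(scores)
        if top <= best:
            break                      # no single removal improves the criterion
        best = top
        keep.remove(j)
    return [names[c] for c in keep], best

# Synthetic stand-in for the auction data: only two predictors truly matter.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 4))
y = 5.6 * X[:, 0] + 7.2 * X[:, 1] + rng.normal(scale=1.0, size=200)
kept, score = backward_eliminate(X, y, ["cond_new", "wheels", "stock_photo", "duration"])
```

On data like this, the two predictors that actually drive the response always survive elimination, since removing either one sharply lowers the adjusted \(R^2\).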
Notice that the p-value for stock photo changed a little from the full model (0.309) to the model that did not include the duration variable (0.275). It is common for p-values of one variable to change, due to collinearity, after eliminating a different variable. This fluctuation emphasizes the importance of refitting a model after each variable elimination step. The p-values tend to change dramatically when the eliminated variable is highly correlated with another variable in the model.
The forward-selection strategy is the reverse of the backward-elimination technique. Instead of eliminating variables one-at-a-time, we add variables one-at-a-time until we cannot find any variables that present strong evidence of their importance in the model.
Table 8.7: Regression summary for the model with the duration variable removed.

             Estimate  Std. Error  t value  Pr(>|t|)
(Intercept)   36.0483      0.9745    36.99    0.0000
cond_new       5.1763      0.9961     5.20    0.0000
stock_photo    1.1177      1.0192     1.10    0.2747
wheels         7.2984      0.5448    13.40    0.0000
Table 8.8: Regression summary for the final model with only the cond_new and wheels variables.

             Estimate  Std. Error  t value  Pr(>|t|)
(Intercept)   36.7849      0.7066    52.06    0.0000
cond_new       5.5848      0.9245     6.04    0.0000
wheels         7.2328      0.5419    13.35    0.0000
Example \(\PageIndex{4}\): forward selection strategy
Construct a model for the mario kart data set using the forward selection strategy.
Solution
We start with the model that includes no variables. Then we fit each of the possible models with just one variable. That is, we fit the model including just the cond_new predictor, then the model including just the stock_photo variable, then a model with just duration, and a model with just wheels. Each of the four models (yes, we fit four models!) provides a p-value for the coefficient of the predictor variable. Out of these four variables, the wheels variable had the smallest p-value. Since its p-value is less than 0.05 (the p-value was smaller than 2e-16), we add the Wii wheels variable to the model. Once a variable is added in forward-selection, it will be included in all models considered as well as the final model.
Since we successfully found a first variable to add, we consider adding another. We fit three new models: (1) the model including just the cond_new and wheels variables (output in Table 8.8), (2) the model including just the stock photo and wheels variables, and (3) the model including only the duration and wheels variables. Of these models, the first had the lowest p-value for its new variable (the p-value corresponding to cond new was 1.4e-08). Because this p-value is below 0.05, we add the cond_new variable to the model. Now the final model is guaranteed to include both the condition and wheels variables.
We must then repeat the process a third time, fitting two new models: (1) the model including the stock photo, cond_new, and wheels variables (output in Table 8.7) and (2) the model including the duration, cond new, and wheels variables. The p-value corresponding to stock photo in the first model (0.275) was smaller than the p-value corresponding to duration in the second model (0.682). However, since this smaller p-value was not below 0.05, there was not strong evidence that it should be included in the model. Therefore, neither variable is added and we are finished.
The final model is the same as that arrived at using the backward-selection strategy.
Example \(\PageIndex{5}\): forward selection with adjusted \(R^2\)
As before, we could have used the \(R^2_{adj}\) criteria instead of examining p-values in selecting variables for the model. Rather than look for variables with the smallest p-value, we look for the model with the largest \(R^2_{adj}\) . What would the result of forward-selection be using the adjusted \(R^2\) approach?
Solution
Using the forward-selection strategy, we start with the model with no predictors. Next we look at each model with a single predictor. If one of these models has a larger \(R^2_{adj}\) than the model with no variables, we use this new model. We repeat this procedure, adding one variable at a time, until we cannot find a model with a larger \(R^2_{adj}\). If we had done the forward-selection strategy using \(R^2_{adj}\), we would have arrived at the model including cond_new, stock_photo, and wheels, which is a slightly larger model than we arrived at using the p-value approach and the same model we arrived at using the adjusted \(R^2\) with backward-elimination.
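This forward-selection procedure under the adjusted \(R^2\) criterion can be sketched as follows (simulated stand-in data; the variable names are illustrative, not the book's dataset):

```python
import numpy as np

def adj_r2(X, y):
    """Adjusted R^2 of an OLS fit of y on the columns of X (plus intercept)."""
    n, k = X.shape
    Xd = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    r2 = 1 - resid @ resid / ((y - y.mean()) ** 2).sum()
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

def forward_select(X, y, names):
    """Repeatedly add the predictor whose inclusion most raises adjusted R^2."""
    chosen, remaining = [], list(range(X.shape[1]))
    best = 0.0                         # adjusted R^2 of the model with no predictors
    while remaining:
        scores = [(adj_r2(X[:, chosen + [j]], y), j) for j in remaining]
        top, j = max(scores)
        if top <= best:
            break                      # no single addition improves the criterion
        best = top
        chosen.append(j)
        remaining.remove(j)
    return [names[c] for c in chosen], best

# Simulated stand-in data: only two of the four predictors truly matter.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 4))
y = 5.6 * X[:, 0] + 7.2 * X[:, 1] + rng.normal(scale=1.0, size=200)
picked, score = forward_select(X, y, ["cond_new", "wheels", "stock_photo", "duration"])
```

The two predictors that drive the response are always picked up, since each raises the adjusted \(R^2\) substantially when it enters.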
Model selection strategies
The backward-elimination strategy begins with the largest model and eliminates variables one-by-one until we are satisfied that all remaining variables are important to the model. The forward-selection strategy starts with no variables included in the model, then it adds in variables according to their importance until no other important variables are found.
There is no guarantee that the backward-elimination and forward-selection strategies will arrive at the same final model using the p-value or adjusted \(R^2\) methods. If the backward-elimination and forward-selection strategies are both tried and they arrive at different models, choose the model with the larger \(R^2_{adj}\) as a tie-breaker; other tie-break options exist but are beyond the scope of this book.
It is generally acceptable to use just one strategy, usually backward-elimination with either the p-value or adjusted \(R^2\) criteria. However, before reporting the model results, we must verify the model conditions are reasonable.
Contributors
David M Diez (Google/YouTube), Christopher D Barr (Harvard School of Public Health), Mine Çetinkaya-Rundel (Duke University)
ISSN: 1556-1801
eISSN: 1556-181X
Networks & Heterogeneous Media
June 2006, Volume 1, Issue 2
Abstract:
This work is concerned with some aspects of the social life of the amoebae Dictyostelium discoideum (Dd). In particular, we shall focus on the early stages of the starvation-induced aggregation of Dd cells. Under such circumstances, amoebae are known to exchange a chemical messenger (cAMP) which acts as a signal to mediate their individual behaviour. This molecule is released from aggregation centres and advances through aggregation fields, first as circular waves and later on as spiral patterns. We shall recall below some of the basic features of this process, paying attention to the mathematical models that have been derived to account for experimental observations.
Abstract:
Under consideration is the finite-size scaling of effective thermoelastic properties of random microstructures from a Statistical Volume Element (SVE) to a Representative Volume Element (RVE), without invoking any periodic structure assumptions, but only assuming the microstructure's statistics to be spatially homogeneous and ergodic. The SVE is set up on a mesoscale, i.e. any scale finite relative to the microstructural length scale. The Hill condition generalized to thermoelasticity dictates uniform Neumann and Dirichlet boundary conditions, which, with the help of two variational principles, lead to scale dependent hierarchies of mesoscale bounds on effective (RVE level) properties: thermal expansion and stress coefficients, effective stiffness, and specific heats. Due to the presence of a non-quadratic term in the energy formulas, the mesoscale bounds for the thermal expansion are more complicated than those for the stiffness tensor and the heat capacity. To quantitatively assess the scaling trend towards the RVE, the hierarchies are computed for a planar matrix-inclusion composite, with inclusions (of circular disk shape) located at points of a planar, hard-core Poisson point field. Overall, while the RVE is attained exactly on scales infinitely large relative to the microscale, depending on the microstructural parameters, the random fluctuations in the SVE response may become very weak on scales an order of magnitude larger than the microscale, thus already approximating the RVE.
Abstract:
We consider coupling conditions for the "Aw–Rascle" (AR) traffic flow model at an arbitrary road intersection. In contrast with coupling conditions previously introduced in [10] and [7], all the moments of the AR system are conserved and the total flux at the junction is maximized. This nonlinear optimization problem is solved completely. We show how the two simple cases of merging and diverging junctions can be extended to more complex junctions, like roundabouts. Finally, we present some numerical results.
Abstract:
We investigate coupling conditions for gas transport in networks where the governing equations are the isothermal Euler equations. We discuss intersections of pipes by considering solutions to Riemann problems. We introduce additional assumptions to obtain a solution near the intersection and we present numerical results for sample networks.
Abstract:
The aim of this paper is to optimize traffic distribution coefficients in order to maximize the transmission speed of packets over a network. We consider a macroscopic fluid-dynamic model dealing with packets flow proposed in [10], where the dynamics at nodes (routers) is decided by a routing algorithm depending on traffic distribution (and priority) coefficients. We solve the general problem for a node with $m$ incoming and $n$ outgoing lines, and make explicit the optimal parameters for the simple case of two incoming and two outgoing lines.
Abstract:
We consider the initial value problem for the filtration equation in an inhomogeneous medium
The equation is posed in the whole space $\mathbb R^n$ , $n \geq 2$, for $0 < t < \infty$; $p(x)$ is a positive and bounded function with a certain behaviour at infinity. We take initial data $u(x,0) = u_0(x) \geq 0$, and prove that this problem is well-posed in the class of solutions with finite "energy", that is, in the weighted space $L^1_p$, thus completing previous work of several authors on the issue. Indeed, it generates a contraction semigroup.
We also study the asymptotic behaviour of solutions in two space dimensions when $p$ decays like a non-integrable power as $|x| \rightarrow \infty$: $p(x)\,|x|^{\alpha} \sim 1$ with $\alpha \in (0,2)$ (infinite mass medium). We show that the intermediate asymptotics is given by the unique selfsimilar solution $U_2(x, t; E)$ of the singular problem
$$ |x|^{-\alpha} u(x,0) = E\,\delta(x), \qquad E = \|u_0\|_{L^1_p}. $$
I'm trying to solve the following partial differential equations: $$ u_t + \frac{1}{3}{u_x}^3 = 0 \tag{a} $$ $$ u_t + \frac{1}{3}{u_x}^3 = -cu \tag{b} $$ with the initial value problem $$ u(x,0)=h(x)= \left\lbrace \begin{aligned} &e^{x}-1 & &\text{for}\quad x<0\\ &e^{-x}-1 & &\text{for}\quad x>0 \end{aligned} \right. $$ My idea was to set $v(x,t)=u_x(x,t)$, because then I get the transport equation in $v$ which I am able to solve: $v_t + v^2v_x =0$. But when I do this, my solution for $v$ is $$ v(x,t)= \left\lbrace \begin{aligned} &\phantom{-}ae^{x-v^2 t} & &\text{for}\quad x<0\\ &{-a}e^{-x+v^2 t} & &\text{for}\quad x>0 \end{aligned} \right. $$ Can someone help me with this equation? Is $u_x = a e^{x-u_x^2 t}$ the correct answer? Or should I maybe do something very different to solve this equation?
Both equations (a) and (b) are Hamilton-Jacobi equations. Indeed, they are derived from a canonical transformation involving a type-2 generating function $u(x,t)$ which makes the new Hamiltonian $$ K = H(x,u_x) + (\partial_t + c)\, u $$ vanish. Here, $H(q,p)=\frac{1}{3}p^3$ is the original Hamiltonian and $\partial_t + c$ defines the time-differentiation operator. Setting $v=u_x$, we have $$ v_t = u_{tx} = -\left(\tfrac{1}{3}{u_x}^3\right)_x - cu_x = -v^2v_{x} - cv \, . $$ Thus, we consider the first-order quasilinear PDE $v_t + v^2v_{x} = -cv$ with initial data $v(x,0) = h'(x) = \pm e^{\pm x}$ for $\pm x<0$, and we apply the method of characteristics:
$\frac{\text d t}{\text d s} = 1$, letting $t(0)=0$, we know $t=s$. $\frac{\text d v}{\text d s} = -cv$, letting $v(0)=h'(x_0)$, we know $v=h'(x_0)e^{-ct}$. $\frac{\text d x}{\text d s} = v^2$, letting $x(0)=x_0$, we know $x=\frac{1}{2c}h'(x_0)^2(1-e^{-2ct}) + x_0$.
Injecting $h'(x_0) = ve^{ct}$ in the equation of characteristics, one obtains the implicit equation $$ v = h'\!\left(x-v^2\frac{e^{2ct}-1}{2c}\right) e^{-ct}\, . $$ Along the same characteristic curves, we have
$\frac{\text d u}{\text d s} = \tfrac23 v^3 - c u$, letting $u(0) = h(x_0)$, we know $u = h(x_0) e^{-ct} + \frac23\! \int_0^t e^{-c(t-s)} v(s)^3 \text d s$.
Thus, we get$$u = \left(h\!\left(x-v^2\frac{e^{2ct}-1}{2c}\right) + h'\!\left(x-v^2\frac{e^{2ct}-1}{2c}\right)^3 \frac{1-e^{-2ct}}{3c} \right) e^{-ct} \, ,$$where the link between $x_0$ and $v$ along characteristics has been used.For short times, the previous solution is valid. The method of characteristics breaks down when characteristics intersect (
breaking time). We use the fact that $\frac{\text d x}{\text d x_0}$ vanishes at the breaking time$$t_B = \inf_{x_0\in \Bbb R} \frac{-1}{2 c} \ln\left(1 + \frac{c}{h'(x_0)h''(x_0)}\right) .$$However, it seems pointless to look further for full analytical expressions in the general case.
If $c=0$, the characteristics are straight lines $x=x_0+v^2t$ along which $v=h'(x_0)$ is constant, and along which $u = h(x_0) + \frac23 v^3 t$. A sketch of the $x$-$t$ plane is displayed below:
The breaking time becomes $t_B = \inf_{x_0} -(2h'(x_0)h''(x_0))^{-1} = 1/2$. For short times $t<t_B$, the implicit equation for $v$ reads $v = h'(x-v^2t)$, i.e. $v = \pm e^{\pm (x-v^2t)}$ if $\pm(x-v^2t)<0$. Its analytic solution $$ v(x,t) = \pm\exp\! \left(\pm x- \tfrac{1}{2}W(\pm 2t e^{\pm 2x})\right) \quad\text{for}\quad \pm (x-t) < 0 $$ involves the Lambert W function. The expression of $u$ is deduced from $u = h(x-v^2 t) + \frac23 v^3 t$. For larger times $t>t_B$, particular care should be taken when computing weak solutions (shock waves) since the flux $f:v\mapsto \frac{1}{3}v^3$ is nonconvex.
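As a numerical sanity check of the closed form for the $c=0$, $x<0$ branch (assuming SciPy's `lambertw` is available; the sample point is arbitrary):

```python
import numpy as np
from scipy.special import lambertw

# For c = 0 on the x < 0 branch, h'(x) = e^x, and the implicit equation
# v = h'(x - v^2 t) has the closed form v = exp(x - W(2 t e^{2x}) / 2).
x, t = -1.0, 0.3                       # t < t_B = 1/2, and x - v^2 t < 0 here
w = lambertw(2 * t * np.exp(2 * x)).real
v = np.exp(x - w / 2)

# v should satisfy the implicit equation exactly (up to rounding):
residual = abs(v - np.exp(x - v ** 2 * t))
```

Substituting the closed form back gives $x - v^2 t = x - w/2$, so the residual is zero up to floating-point error.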
This question was inspired by the Homotopy Type Theory Book.
Might we define a weak $\omega$-category as described below?
Is any similar approach already considered in the literature?
Let $\def\Ob{{\rm Ob}\,} \Ob\mathcal C$ be a set [class] with partial functions $\def\dom{\rm dom} \def\cod{\rm cod} \dom,\cod$ defined on a subset $\mathcal C'\subseteq\Ob\mathcal C$ (i.e. $\dom,\cod:\mathcal C'\to\Ob\mathcal C$). We write of course $f:A\to B$ for $\dom(f)=A,\ \cod(f)=B$ and elements of $\mathcal C'$ are also called
arrows. We require a composition of consecutive arrows and identity arrows, satisfying associativity and unit constraints with coherence isomorphisms such that the coherence diagrams commute up to equivalence. (Objects $A$ and $B$ are equivalent if there is a binary tree of objects $X_{\bf t}$ and arrows $f_{\bf t}$, ${\bf t}\in\{0,1\}^*$ such that $X_0=A,\ \ X_1=B,\ \ \ f_{{\bf t}0}:X_{{\bf t}0}\to X_{{\bf t}1},\ \ f_{{\bf t}1}:X_{{\bf t}1}\to X_{{\bf t}0}$ where objects $X_{{\bf t}00},\ X_{{\bf t}01},\ X_{{\bf t}10},\ X_{{\bf t}11}$ are defined to be $f_{{\bf t}1}\circ f_{{\bf t}0} $, $\ 1_{X_{{\bf t}0}}$, $f_{{\bf t}0}\circ f_{{\bf t}1} $, $\ 1_{X_{{\bf t}1}}$; so that $A\simeq B \iff \exists f:A\to B,\ g:B\to A\,$ s.t. $\,g\circ f\simeq 1_A,\ f\circ g\simeq 1_B$.) To formulate these, we also need a horizontal composition functor, from the full substructure of $\mathcal C'\times\mathcal C'$ generated by consecutive pairs of arrows as objects, to $\mathcal C'$. (To begin with, globularity is assumed, i.e., $\alpha:f\to g$ with $f,g\in\mathcal C'$ implies $\cod(f)=\cod(g)$ and $\dom(f)=\dom(g)$.)
Objects are $0$-cells, and an arrow $f:A\to B$ is regarded an $n+1$-cell whenever $A$ and $B$ are $n$-cells. [Note that an $n$-cell is automatically also $k$-cell for any $k<n$.]
More details can be found in my note.
Networks & Heterogeneous Media
September 2006, Volume 1, Issue 3
Abstract:
We consider the conductivity problem in an array structure with square closely spaced absolutely conductive inclusions of the high concentration, i.e. the concentration of inclusions is assumed to be close to 1. The problem depends on two small parameters: $\varepsilon$, the ratio of the period of the micro-structure to the characteristic macroscopic size, and $\delta$, the ratio of the thickness of the strips of the array structure and the period of the micro-structure. The complete asymptotic expansion of the solution to problem is constructed and justified as both $\varepsilon$ and $\delta$ tend to zero. This asymptotic expansion is uniform with respect to $\varepsilon$ and $\delta$ in the area $\{\varepsilon=O(\delta^{\alpha}),~\delta =O(\varepsilon^{\beta})\}$ for any positive $\alpha, \beta.$
Abstract:
The paper deals with a fluid dynamic model for supply chains. A mixed continuum-discrete model is proposed and possible choices of solutions at nodes guaranteeing the conservation of fluxes are discussed. Fixing a rule, a Riemann solver is defined and existence of solutions to Cauchy problems is proved.
Abstract:
Solid tumours grow through two distinct phases: the avascular and the vascular phase. During the avascular growth phase, the size of the solid tumour is restricted largely by a diffusion-limited nutrient supply and the solid tumour remains localised and grows to a maximum of a few millimetres in diameter. However, during the vascular growth stage the process of cancer invasion of peritumoral tissue can and does take place. A crucial component of tissue invasion is the over-expression by the cancer cells of proteolytic enzyme activity, such as the urokinase-type plasminogen activator (uPA) and matrix metalloproteinases (MMPs). uPA itself initiates the activation of an enzymatic cascade that primarily involves the activation of plasminogen and subsequently its matrix degrading protein plasmin. Degradation of the matrix then enables the cancer cells to migrate through the tissue and subsequently to spread to secondary sites in the body.
In this paper we consider a relatively simple mathematical model of cancer cell invasion of tissue (extracellular matrix) which focuses on the role of a generic matrix degrading enzyme such as uPA. The model consists of a system of reaction-diffusion-taxis partial differential equations describing the interactions between cancer cells, the matrix degrading enzyme and the host tissue. The results obtained from numerical computations carried out on the model equations produce dynamic, heterogeneous spatio-temporal solutions and demonstrate the ability of a rather simple model to produce complicated dynamics, all of which are associated with tumour heterogeneity and cancer cell progression and invasion.
Abstract:
In urban transportation, combined mode trips are increasing in importance due to current urban transportation policies which encourage the use of transit through the creation of apposite parking lots and improvements in the public transportation system. It is widely recognized that parking policy plays an important role in urban management: parking policy measures not only affect the parking system, but also generate impacts to the transport and socioeconomic system of a city. The present paper attempts to expand on previous research concerning the development of models to capture drivers' parking behavior. It introduces in the modeling structure additional variables to the ones usually employed, with which the drivers' behavior to changes in prices and distances (mainly walking) is better captured. We develop a network model that represents trips as a combination of private and transit modes. A graph representing four different modes (car, bus, metro and pedestrian) is defined and a set of free park and ride facilities is introduced to discourage the use of private cars. An algorithm that evaluates the location and the effects of the parking price variation using multi-modal shortest paths is proposed together with an application to the City of Rome. Computational results are shown.
Abstract:
In this paper we establish a simplified model of general spatially periodic linear electronic analog networks. It has a two-scale structure. At the macro level it is an algebro-differential equation and a circuit equation at the micro level. Its construction is based on the concept of two-scale convergence, introduced by the author in the framework of partial differential equations, adapted to vectors and matrices. Simple illustrative examples are detailed by hand calculation and a numerical simulation is reported.
Abstract:
This work is devoted to the solution to Riemann Problems for the $p$-system at a junction, the main goal being the extension to the case of an ideal junction of the classical results that hold in the standard case.
So how do you convert percentages to fractions and decimals and vice versa? This post will show examples of each.
1. Convert a percentage to a fraction:
This one is easy as, if you remember, a percentage is already a fraction where the numerator is displayed and the denominator is 100. So you just create the fraction and simplify it (see my posts on fractions):\[
40\% = \frac{40}{100} = \frac{20 \times 2}{20 \times 5} = \frac{2}{5}
\]
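In code, Python's standard `fractions` module performs the same reduction automatically (a small sketch of the rule above; the function name is illustrative):

```python
from fractions import Fraction

def percent_to_fraction(p):
    """A percentage is a fraction with denominator 100; Fraction reduces it."""
    return Fraction(p, 100)
```

For example, `percent_to_fraction(40)` gives `Fraction(2, 5)`, i.e. 2/5.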
2. Convert a percentage to a decimal:
This one is just a matter of moving the decimal point two places to the left. Keep in mind that the decimal point will not usually show at the end of an integer percentage, but you can assume it to be at the end of the number:
37% = 37.% = 0.37
18.5% = 0.185
112% = 1.12
0.15% = 0.0015
Any 0’s at the end of the decimal, can be left off:
40% = 0.40 = 0.4
3. Convert a decimal to a percentage:
This is just the opposite of the above: you just move the decimal point two places to the right, then add the % symbol:
0.25 = 25% (if an integer results, you can leave the decimal point off)
0.2786 = 27.86%
0.002 = 0.2%
2.345 = 234.5%
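Both decimal-point rules amount to dividing or multiplying by 100, as in this sketch (the function names are illustrative):

```python
def percent_to_decimal(p):
    """Moving the decimal point two places left is dividing by 100."""
    return p / 100

def decimal_to_percent(d):
    """Moving the decimal point two places right is multiplying by 100."""
    return d * 100
```

So `percent_to_decimal(37)` gives 0.37, and `decimal_to_percent(0.25)` gives 25.0, matching the examples above.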
4. Convert a fraction to a percentage:
Here you multiply by 100/1, simplify, then multiply the numerators together and the denominators together. It is advisable to simplify before multiplying:\[
\frac{3}{5} \times \frac{100}{1} = \frac{3 \times 100}{5} = 3 \times 20 = 60\%
\]
Sometimes, though, not as much cancels and you will need to do some division in the end (long or short – see my post on long division):\[
\frac{8}{9} \times \frac{100}{1} = \frac{800}{9} = 800 \div 9 \approx 88.89\%
\]
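The same multiply-by-100 rule, with the division done exactly by the `fractions` module (a sketch; the function name is illustrative):

```python
from fractions import Fraction

def fraction_to_percent(num, den):
    """Multiply the fraction num/den by 100 and return the percentage as a float."""
    return float(Fraction(num, den) * 100)
```

So `fraction_to_percent(3, 5)` gives 60.0, and `fraction_to_percent(8, 9)` gives 88.888..., which rounds to 88.89%.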
In my next post, I will show how to do some of the more common problems using percentages. |
I want to expand Einstein-Hilbert action for the metric
$$ g_{\mu \nu} = \eta_{\mu \nu} + h_{\mu \nu} $$
up to quadratic order in $h_{\mu \nu}$. For this purpose I need to calculate the Ricci tensor at some stage. There is no problem in linear terms of the metric perturbation but there happens to be a problem when I intend to calculate the quadratic terms using
$$ R_{\mu \nu}^{(2)} = \partial_{\alpha} \Gamma^{\alpha} {} _{\mu \nu}^{(2)} - \partial_{\nu} \Gamma^{\alpha} {} _{\mu \alpha}^{(2)} + \Gamma^{\alpha} {} _{\beta \alpha}^{(1)} \Gamma^{\beta} {} _{\mu \nu}^{(1)} - \Gamma^{\alpha} {} _{\beta \nu}^{(1)} \Gamma^{\beta} {} _{\mu \alpha}^{(1)} $$
When I use
$$ \Gamma^{\alpha} {} _{\mu \nu}^{(1)} = \frac{1}{2} \left( \partial_{\mu} h^{\alpha} {} _{\nu} + \partial_{\nu} h^{\alpha} {} _{\mu} - \partial^{\alpha} h_{\mu \nu} \right) $$ $$ \Gamma^{\alpha} {} _{\mu \nu}^{(2)} = -\frac{1}{2}h^{\alpha \beta} \left( \partial_{\mu} h_{\beta \nu} + \partial_{\nu} h_{\mu \beta} - \partial_{\beta} h_{\mu \nu}\right) $$
in the above expression for $R_{\mu \nu}^{(2)}$ I suppose to find
$$ R_{\mu \nu}^{(2)} = \frac{1}{2}\Bigg[ \frac{1}{2}\partial_{\mu}h_{\alpha \beta}\partial_{\nu}h^{\alpha \beta} + \partial_{\beta}h_{\nu \alpha}\left( \partial^{\beta}h^{\alpha}{}_{\mu} - \partial^{\alpha}h^{\beta}{}_{\mu} \right) + h_{\alpha \beta} \left( \partial_{\mu}\partial_{\nu}h^{\alpha \beta} + \partial^{\alpha}\partial^{\beta}h_{\mu \nu} - \partial^{\beta}\partial_{\nu}h^{\alpha}{}_{\mu} - \partial^{\beta}\partial_{\mu}h^{\alpha}{}_{\nu} \right) - \left( \partial_{\alpha}h^{\alpha \beta} - \frac{1}{2}\partial^{\beta}h\right)\left( \partial_{\mu} h_{\nu \beta} + \partial_{\nu} h_{\mu \beta} - \partial_{\beta} h_{\mu \nu}\right) \Bigg] $$
but I do not. I tried it over and over again but somehow I cannot get the correct result. It seems I am missing something but I don't know what. I will be glad if someone can help.
This post imported from StackExchange Physics at 2015-02-13 11:37 (UTC), posted by SE-user sahin
Networks & Heterogeneous Media
December 2006, Volume 1, Issue 4
Abstract:
The launching meeting of Networks and Heterogeneous Media took place on June 21-23, 2006 in Maiori (Salerno, Italy). The meeting was sponsored by the American Institute of Mathematical Sciences, the Istituto per le Applicazioni del Calcolo of Roma, and the DIIMA of the University of Salerno.
Abstract:
A multiscale model for vascular tumour growth is presented which includes systems of ordinary differential equations for the cell cycle and regulation of apoptosis in individual cells, coupled to partial differential equations for the spatio-temporal dynamics of nutrient and key signalling chemicals. Furthermore, these subcellular and tissue layers are incorporated into a cellular automaton framework for cancerous and normal tissue with an embedded vascular network. The model is the extension of previous work and includes novel features such as cell movement and contact inhibition. We present a detailed simulation study of the effects of these additions on the invasive behaviour of tumour cells and the tumour's response to chemotherapy. In particular, we find that cell movement alone increases the rate of tumour growth and expansion, but that increasing the tumour cell carrying capacity leads to the formation of less invasive dense hypoxic tumours containing fewer tumour cells. However, when an increased carrying capacity is combined with significant tumour cell movement, the tumour grows and spreads more rapidly, accompanied by large spatio-temporal fluctuations in hypoxia, and hence in the number of quiescent cells. Since, in the model, hypoxic/quiescent cells produce VEGF which stimulates vascular adaptation, such fluctuations can dramatically affect drug delivery and the degree of success of chemotherapy.
Abstract:
We study a curvature-dependent motion of plane curves in a two-dimensional cylinder with periodically undulating boundary. The law of motion is given by $V=\kappa + A$, where $V$ is the normal velocity of the curve, $\kappa$ is the curvature, and $A$ is a positive constant. We first establish a necessary and sufficient condition for the existence of periodic traveling waves, then we study how the average speed of the periodic traveling wave depends on the geometry of the domain boundary. More specifically, we consider the homogenization problem as the period of the boundary undulation, denoted by $\epsilon$, tends to zero, and determine the homogenization limit of the average speed of periodic traveling waves. Quite surprisingly, this homogenized speed depends only on the maximum opening angle of the domain boundary and no other geometrical features are relevant. Our analysis also shows that, for any small $\epsilon>0$, the average speed of the traveling wave is smaller than $A$, the speed of the planar front. This implies that boundary undulation always lowers the speed of traveling waves, at least when the bumps are small enough.
Abstract:
The Internet's layered architecture and organizational structure give rise to a number of different topologies, with the lower layers defining more physical and the higher layers more virtual/logical types of connectivity structures. These structures are very different, and successful Internet topology modeling requires annotating the nodes and edges of the corresponding graphs with information that reflects their network-intrinsic meaning. These structures also give rise to different representations of the traffic that traverses the heterogeneous Internet, and a traffic matrix is a compact and succinct description of the traffic exchanges between the nodes in a given connectivity structure. In this paper, we summarize recent advances in Internet research related to (i) inferring and modeling the router-level topologies of individual service providers (i.e., the physical connectivity structure of an ISP, where nodes are routers/switches and links represent physical connections), (ii) estimating the intra-AS traffic matrix when the AS's router-level topology and routing configuration are known, (iii) inferring and modeling the Internet's AS-level topology, and (iv) estimating the inter-AS traffic matrix. We will also discuss recent work on Internet connectivity structures that arise at the higher layers in the TCP/IP protocol stack and are more virtual and dynamic; e.g., overlay networks like the WWW graph, where nodes are web pages and edges represent existing hyperlinks, or P2P networks like Gnutella, where nodes represent peers and two peers are connected if they have an active network connection.
Abstract:
This paper describes some simplifications allowed by the variational theory of traffic flow (VT). It presents general conditions guaranteeing that the solution of a VT problem with bottlenecks exists, is unique and makes physical sense; i.e., that the problem is well-posed. The requirements for well-posedness are mild and met by practical applications. They are consistent with narrower results available for kinematic wave or Hamilton-Jacobi theories. The paper also describes some duality ideas relevant to these theories. Duality and VT are used to establish the equivalence of eight traffic models. Finally, the paper discusses how its ideas can be used to model networks of multi-lane traffic streams.
Abstract:
The reconstitution of a proper and functional vascular network is a major issue in tissue engineering and regeneration. The limited success of current technologies may be related to the difficulty of building a vascular tree with correct geometric ratios for nutrient delivery. The present paper develops a mathematical model suggesting how an anisotropic vascular network can be built in vitro by using exogenous chemoattractants and chemorepellents. The formation of the network is strongly related to the nonlinear characteristics of the model.
Abstract:
We formulate a hierarchy of models relevant for studying coupled well-reservoir flows. The starting point is an integral equation representing unsteady single-phase 3-D porous media flow and the 1-D isothermal Euler equations representing unsteady well flow. This $2 \times 2$ system of conservation laws is coupled to the integral equation through natural coupling conditions accounting for the flow between well and surrounding reservoir. By imposing simplifying assumptions we obtain various hyperbolic-parabolic and hyperbolic-elliptic systems. In particular, by assuming that the fluid is incompressible we obtain a hyperbolic-elliptic system for which we present existence and uniqueness results. Numerical examples demonstrate formation of steep gradients resulting from a balance between a local nonlinear convective term and a non-local diffusive term. This balance is governed by various well, reservoir, and fluid parameters involved in the non-local diffusion term, and reflects the interaction between well and reservoir.
Abstract:
We consider a supply network where the flow of parts can be controlled at the vertices of the network. Based on a coarse grid discretization provided in [6] we derive discrete adjoint equations which are subsequently validated by the continuous adjoint calculus. Moreover, we present numerical results concerning the quality of approximations and computing times of the presented approaches.
Abstract:
In this paper we systematically review the control volume finite element (CVFE) methods for numerical solutions of second-order partial differential equations. Their relationships to the finite difference and standard (Galerkin) finite element methods are considered. Through their relationship to the finite differences, upstream weighted CVFE methods and the conditions on positive transmissibilities (positive flux linkages) are studied. Through their relationship to the standard finite elements, error estimates for the CVFE are obtained. These estimates are comparable to those for the standard finite element methods using piecewise linear elements. Finally, an application to multiphase flows in porous media is presented.
Is there any theoretical or practical limit to the maximum number of passengers - and therefore size - one can build an airplane for?
Yes, there is an upper limit, but that upper limit might change with technological innovation.
An airplane flies because of lift, $L=\frac12\rho v^2 A C_L$, where $v$ is the airspeed (a combination of the speed of the plane and the wind speed), $\rho \approx 1 \, \text{kg m}^{-3}$ is the air density at a theoretical minimum of 5 km altitude (remember that most planes reach 10 km, but I took this a little more extreme to show an upper limit), $A$ is the wing area, and $C_L$ is a lift coefficient with a typical value less than 2, which might change with technological innovation.
So the only factors we can influence are $v$ and $A$. However, if we increase $A$, the mass $m$ increases faster than the area $A$, because more material is needed to keep the plane from breaking under the huge forces. Scaling $A$ up therefore produces a more-than-proportional increase in $m$, and hence in the needed $L$.
If we increase $v$, we need more fuel. The amount of fuel per unit distance increases linearly in $v$, because it increases quadratically in $v$ per unit of time. Meanwhile $L$ increases quadratically in $v$ while $m$ only increases linearly, so we might gain something by increasing $v$. But that means airplanes have to go faster before liftoff, which will require drastically longer runways. Note also that we can't keep increasing $v$, because at some point we lose control.
In summary, the things we can improve are $v$ (the speed), $C_L$ (with technological innovation), and $\rho$ (by flying lower). However, none of these is very practical.
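The lift-equation reasoning above can be sketched numerically. All inputs here are illustrative assumptions (round numbers loosely inspired by a very large airliner), not real aircraft data:

```python
import math

def lift_newtons(rho, v, area, c_l):
    """Lift from the standard lift equation L = 0.5 * rho * v^2 * A * C_L."""
    return 0.5 * rho * v**2 * area * c_l

def speed_needed(weight_n, rho, area, c_l):
    """Invert the lift equation: v = sqrt(2 W / (rho A C_L))."""
    return math.sqrt(2 * weight_n / (rho * area * c_l))

# Illustrative numbers: rho ~ 1 kg/m^3 at altitude, C_L ~ 1.5,
# wing area 845 m^2, cruise speed 250 m/s.
L = lift_newtons(1.0, 250.0, 845.0, 1.5)
mass_supported = L / 9.81  # kg of aircraft this lift can hold up
```

Because lift grows with $v^2$ but weight only linearly with added fuel, doubling the supported weight requires only about a 41% higher speed in this toy model.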
It is no accident that the biggest birds are flightless. The ability to fly goes down with increasing size, so there is also an upper limit for aircraft. The main reason is that as size increases, masses go up with the cube of the size increase while the load-carrying structures like wing spar cross sections only grow with the square of the size increase. This power law is the simplest of the scaling laws.
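The square-cube scaling law described above can be sketched in a few lines of Python (normalized units, purely illustrative):

```python
def scaled(mass, strength, s):
    """Square-cube law: scale every linear dimension by s.
    Mass grows like s^3; load-bearing cross sections (strength) like s^2."""
    return mass * s**3, strength * s**2

# Strength-to-weight ratio of a geometrically scaled-up aircraft falls like 1/s.
ratios = []
for s in (1, 2, 4, 8):
    m, f = scaled(1.0, 1.0, s)
    ratios.append(f / m)
```

After the loop, `ratios` is `[1.0, 0.5, 0.25, 0.125]`: each doubling of size halves the structure's strength-to-weight ratio, which is why simple geometric scaling eventually fails.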
Since the loads on an aircraft's wing depend not only on its size, but also on many more parameters (angle of attack, speed, aspect ratio …), there is no clear boundary, and advances in materials help to shift the size limit up. If one tried to build the biggest airplane in the world, the wingspan could easily be double what today's biggest airplanes measure, but the utility of this airplane would be very limited.
Using this airplane for passenger travel would add more restrictions like the number of emergency exits and the maximum distance to the nearest exit, but this could be overcome by using several smaller fuselages. A double-hull airplane would also spread out the payload weight, so the wing would experience a reduced root bending moment. Going from this (source):
to this (source):
would immediately raise the size limit substantially. However, it would require new, wider runways to take off from. And adding more fuselages to the same wing will run into flutter problems soon.
The next indication could be designs which have been studied and deemed feasible but were eventually not built for economical reasons. Here the biggest are ground effect vehicles: Flying slowly in dense air pushes the size limit up. The Boeing Pelican was planned with 152 m wingspan, and Beriev proposed one with 2500 t take-off mass and 125.5 m wing span.
I would guess that a wingspan of 200 m is still feasible, and when spreading the weight even 500 m should be realistic, but totally impractical. Taking a lesson from history, this would be a multi-hulled flying boat which flies in ground effect, similar to the (mono-hulled) biggest aircraft record holders in the 1920s.
Theoretically, unlimited (well, bigger than is practically necessary)...
TL;DR
Airplanes scale fairly well, and it would be physically possible to build an airplane of just about any size. Granted, some structural issues come into play, but there are surely ways around them. You may have to stray from the traditional single-fuselage, two-wing and empennage design, but nonetheless.
There are lots of realistic factors that will stand in your way before you even need to design such a plane.
- You don't have enough money to build such a plane.
- Boeing does not have enough money to build such a plane.
- Juan Trippe has no interest in such a plane, so there is probably no reason to build it.
- There are no runways that could handle such an aircraft. Planes like the A380 and 747 are already route-limited by runway length/weight capacity. You would need to modify runways to handle anything significantly bigger. That is, of course, assuming it lands in a similar fashion to most airliners (i.e., not VTOL).
- What route is it going to fly? Planes are not the size they are because we can't build them bigger; they are the size they are because the routes dictate such a size. Do you really need to move 1,000 people on a given route at once? How many routes have this much travel density?
Let's look at this hypothetically. Jamiec makes an excellent point that a plane with a capacity in excess of 7 Bn would be kind of useless, so let's take that as a maximum. XKCD's "what if" covers a similar question and estimates that, shoulder to shoulder, all the people on Earth take up roughly the area of Rhode Island. For the sake of argument, let's say you would need seats and toilets and whatnot for this many people, so to fit everyone on Earth in a plane you would need maybe twice the size of Rhode Island or a bit more. Unlike that question, we can build up, so a 4-8 story (or any number of stories) plane would be plausible. The average FAA person weighs about 180 lb (81.65 kg), so you would need to lift
1,260,000,000,000 lb, or 630,000,000 US tons (572,000,000,000 kg, or 572,000,000 metric tons).
To provide a frame of reference the A380 has a maximum structural payload of 330,300 lb (149,822 kg). Keep in mind this is only payload, you also need to lift the weight of the airframe, engines, and fuel (if you actually want to go anywhere). So basically you would need a multi level plane close to the size of Rhode Island that had most of the combined thrust available on the planet and enough liquor to keep everyone calm for the flight.
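The arithmetic above can be checked directly. The 180 lb average and the quoted A380 payload figure come from the answer itself; everything else is unit conversion:

```python
POUND_TO_KG = 0.45359237

people = 7_000_000_000
avg_weight_lb = 180              # FAA-style average quoted in the answer

total_lb = people * avg_weight_lb
total_us_tons = total_lb / 2000  # 1 US (short) ton = 2000 lb
total_kg = total_lb * POUND_TO_KG

a380_payload_lb = 330_300        # maximum structural payload quoted above
a380_equivalents = total_lb / a380_payload_lb
```

This gives 1.26 trillion lb of payload, i.e. roughly 3.8 million A380 payloads, before counting airframe, engines, or fuel.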
The structural issue in some regards boils down to wing loading. One commonly self-imposed limit these days is that most planes are typical low-wing cantilever monoplanes, so we view things in relation to that. In other words, the fuselage may stress about the wing mounting points and the wings are generally long and low, but nothing prevents us from using an alternate design with multiple wings, or a full lifting-body design, to get the structure we need. The idea of big wings can thus be overcome, and historically this is how the problem was solved in early aviation (triplanes, etc.). The weight issue (from a practical standpoint) is something we concern ourselves with for efficiency reasons. If we are just building a giant plane, we can use jets, rockets, and all manner of high-thrust devices for the purpose of science. You could fly a cement plane if it was shaped right and you had enough thrust.
Remember if Thrust > Drag and Lift > Weight you will fly (I learned this the first day in flight school). Doing so in a controlled and organized manner took significantly more time to learn....
-- Edit --
Since the question has been edited to involve the passenger count, there are other issues that crop up.
- You need to realistically load and unload the plane, which takes time. A plane that takes too long to load and unload will not be economical to operate.
- Weight (Americans are getting fat) (see above).
- You need a reason to move that many people that far all at once.
- Depending on flight length, you need to consider food, water, and restrooms to accommodate everyone (waste tanks are not infinitely large).
- Are there even enough people in one place? Planes move people (and often stuff) from one place to another, but take NYC, for example, which has a population of about 8.5 million people: the chance that all of them are going to the same place at once is near 0%. So you don't need an 8.5-million-passenger plane; you can start a bit smaller.
From an airline standpoint (the people actually buying), planes like the 747 are already big enough and expensive enough. The 747 nearly bankrupted Boeing at the time but has had a fairly successful run since. The A380 is fairly new, but it will be interesting to see how it changes the game.
Certainly a plane can't be bigger than Earth. I would even say a 10-mile-long aircraft is quite useless, because you would need some transportation inside it to deliver all passengers to their seats. An airport for handling such planes would also have to be huge. Therefore, the limitations come not from weight/lifting power; they come from the practical uses of such big planes.
As most of the problems seem related to taking off and landing, the best option would probably be to keep such a giant plane constantly airborne, very high where the air is more stable, and compose it of multiple modules that could take off independently and then join the bigger "flying castle", like a space station. Docking seems tricky but should be possible, since aerial refueling is possible.
This castle could be solar-powered or nuclear-powered, for instance.
It is more difficult to think what would be the use of it.
The Convair XC-99, the cargo and passenger version of the B-36 bomber, was rejected by airlines, they say, because airports were not prepared to handle over 200 passengers and their luggage disembarking at the same time. One of the main problems with the giant Airbus may be that its undercarriage width is close to the limit of some airstrips; watching an Airbus A380 take off and land reminds you of an aircraft carrier's deck.
The type of take-off and landing of the Saab Viggen, designed to eventually operate from Swedish highways, is not acceptable for commercial flights. And the wingtip vortices of the Airbus A380 are so powerful that the airport must be closed for some minutes after one of these giants uses the runway, to avoid a smaller airplane being sent down when it enters the turbulence, thus limiting the whale airplane's advantage in passengers per day at that airport.
Regarding huge flying machines, you may have to look at sci-fi and UFO writers, e.g., 'Rendezvous with Rama', or the case of a commercial flight pilot, if I remember well, flying from Barcelona to Pamplona in Spain, who saw a stationary round cloud hovering high over a dam lake. It drew his attention, and he requested permission from the air traffic control tower to turn 360º around the cloud. He concluded the cloud wasn't a cloud but a metallic object one and a half kilometers in size (sorry, I don't know if that was diameter or circumference); this is really much bigger than the Kalinin K-7, a heavy bomber built in 1933 that crashed due to a structural failure, or because of a mid-air collision with a smaller airplane.
Who knows what the future may be in airplane size?
In the end, as in the movie: 'The yellow Rolls-Royce', 'Tomorrow never comes'
(source: h-cdn.co)
This is about as big as you can build a conventional airplane. The weight of the wing requires a downward wing slope, which cannot be sustained for larger sizes.
Assuming away that constraint, we run into other problems. With perfectly laminar airflow, a wing can be extended infinitely. In reality, though, there will be minor irregularities which will strain the wing.
This is a YB-49. It later broke apart in flight due to materials strain from a dive.
(source: check-six.com)
The exact math would be difficult, because of the complexity of fluid dynamics. But I don't think you could build a mile wide airplane without it tearing apart in the first flight.
A helicopter or rocket is a different matter, because it does not require a wing.
The limits you are asking about have been pushed by the Antonov. Anything bigger than that is certainly possible, but not very economical, and may not be safe. While the global economy is like melting ice with a few solid patches here and there, it would be a very hard experiment to justify. Diverting the funds to improving current airplanes may yield better results than an increase in size.
A larger aircraft could exist as basically two aircraft glued loosely together side by side, with a flexible joint and computers to carefully manage engines and controls to keep all the pieces from breaking apart... You could extend this idea by making the entire aircraft a flexible wing that follows the curve of the Earth.
A larger aircraft could be mostly a "lighter than air" airship that is a little heavier than air and using "lifting body" forward motion to get the extra little bit of lift needed to rise.
"Limits" can be overcome but end result becomes less and less practical beyond a certain size. |
Let $X,Y\in\mathbb{R}^{n\times n}$ be symmetric matrices. You may assume that $X$ is positive semidefinite and $Y$ negative semidefinite, if needed, but not that they are invertible.
I would like to find a way to factor the $2n\times 2n$ block matrix $$ \begin{bmatrix} X & I\\ I & Y \end{bmatrix} $$ into some form of the kind $MDM^T$, where:
$D$ should be a "simple" matrix, ideally diagonal or of the form $D=\begin{bmatrix}0 & I\\I & 0\end{bmatrix}$, or something similar; the factorization should take explicit advantage of the identities being there, without treating them as general matrices and thus depending on too many parameters, so the Cholesky factorization is ruled out.
Is there some nice identity that I am missing? |
Abbreviation: Frm
A frame is a structure $\mathbf{A}=\langle A, \bigvee, \wedge, e, 0\rangle$ of type $\langle\infty, 2, 0, 0\rangle$ such that
$\langle A, \bigvee, 0\rangle$ is a complete semilattice with $0=\bigvee\emptyset$,
$\langle A, \wedge, e\rangle$ is a meet semilattice with identity, and
$\wedge$ distributes over $\bigvee$: $x\wedge(\bigvee Y)=\bigvee_{y\in Y}(x\wedge y)$
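As a sanity check on the axioms above, here is a toy finite frame, verified in Python. The example (powerset of a three-element set, with union as arbitrary join and intersection as meet) is an illustrative assumption, not part of the page:

```python
from itertools import combinations

# The powerset of {0, 1, 2}, ordered by inclusion, is a frame:
# join = union (of arbitrary families), meet = intersection,
# e = {0, 1, 2} (top), 0 = the empty set.
base = (0, 1, 2)
A = [frozenset(c) for r in range(len(base) + 1) for c in combinations(base, r)]

def join(xs):
    out = frozenset()
    for x in xs:
        out |= x
    return out

def meet(x, y):
    return x & y

# Check the frame distributivity law: x ∧ (∨Y) = ∨_{y ∈ Y} (x ∧ y),
# including Y = ∅, where both sides equal 0 = ∨∅.
ok = all(meet(x, join(Y)) == join(meet(x, y) for y in Y)
         for x in A
         for Y in (A, A[:4], []))
```

Infinite distributivity is what distinguishes frames from mere bounded distributive lattices; in the finite powerset case it holds automatically.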
Remark: This is a template. If you know something about this class, click on the 'Edit text of this page' link at the bottom and fill out this page.
It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.
Let $\mathbf{A}$ and $\mathbf{B}$ be frames. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(\bigvee X)=\bigvee h[X]$ for all $X\subseteq A$ (hence $h(0)=0$), $h(x \wedge y)=h(x) \wedge h(y)$ and $h(e)=e$.
A
is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that …
$...$ is …: $axiom$
$...$ is …: $axiom$
Example 1:
Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.
$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$
[[...]] subvariety [[...]] expansion [[...]] supervariety [[...]] subreduct |
If $f,g$ are uniformly continuous, then is $\alpha f+\beta g$ uniformly continuous?
So far, I looked here: If $f,g$ are uniformly continuous prove $f+g$ is uniformly continuous, but I didn't understand something. I know from what I've been told: 1. $\forall\epsilon >0$ $\exists\delta_1 >0$, if $|x-y|<\delta_1$ then $|f(x)-f(y)|<\epsilon$ 2. $\forall\epsilon >0$ $\exists\delta_2 >0$, if $|x-y|<\delta_2$ then $|g(x)-g(y)|<\epsilon$
I need to show that for every $\epsilon$ there exists some $\delta$ such that if $|x-y|<\delta$ then $|(\alpha f+ \beta g)(x) - (\alpha f+ \beta g)(y)|<\epsilon$. So I know the method, I need to show that from $|(\alpha f+ \beta g)(x) - (\alpha f+ \beta g)(y)|<\epsilon$ - I need to do some manipulations and get $|x-y|<\delta$ (I need to find that $\delta$). So, why in the answers there it just shows that $|(\alpha f+ \beta g)(x) - (\alpha f+ \beta g)(y)|<\epsilon$?
Thanks! |
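For reference, the argument usually runs in the opposite direction from what the post describes: you fix $\epsilon$ first, use the uniform continuity of $f$ and $g$ to obtain a $\delta$, and then verify the bound. A standard sketch (the denominators $|\alpha|+1$ and $|\beta|+1$ are just one safe choice that avoids dividing by zero when $\alpha$ or $\beta$ vanishes):

```latex
Given $\epsilon>0$, choose $\delta_1,\delta_2>0$ for $f$ and $g$ with
tolerances $\frac{\epsilon}{2(|\alpha|+1)}$ and $\frac{\epsilon}{2(|\beta|+1)}$
respectively, and set $\delta=\min\{\delta_1,\delta_2\}$.
Then for all $x,y$ with $|x-y|<\delta$,
\begin{align*}
|(\alpha f+\beta g)(x)-(\alpha f+\beta g)(y)|
  &\le |\alpha|\,|f(x)-f(y)| + |\beta|\,|g(x)-g(y)| \\
  &< |\alpha|\cdot\frac{\epsilon}{2(|\alpha|+1)}
   + |\beta|\cdot\frac{\epsilon}{2(|\beta|+1)} \\
  &< \frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon.
\end{align*}
```

The key point is that $\delta$ is produced from $\epsilon$, not derived by manipulating the conclusion backwards.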
The Riemann zeta function is $$\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}$$ Now let's say that our input $r \in \mathbb{N}$ and $r > 1$. That way we can create the parametric function $$f_r(x) = \frac{1}{x^r}$$
Now my theory was that $\zeta(s)$ could be calculated by $$\zeta(r) \approx \int_{1}^{\infty}f_r(x)\,dx = \lim_{n \to \infty} F_r(n) - F_r(1)$$ For that I calculated that for $r>1$ $$\int f_r(x)\,dx = \int\frac{1}{x^r}\,dx = \int x^{-r}\,dx = \frac{1}{-r+1} \cdot x^{-r+1}$$ and for $r=1$, $\int f_r(x)\,dx = \int \frac{1}{x}\,dx = \ln(x)$
Using the information I had, I figured that, for instance, $\zeta(2)$ could be calculated approximately using $$\zeta(2) \approx \lim_{n \to \infty} F_2(n) - F_2(1) = \lim_{n \to \infty} \frac{1}{-2+1}\cdot n^{-2+1}-\frac{1}{-2+1}\cdot 1^{-2+1} = \lim_{n \to \infty} \frac{1}{-n}+\frac{1}{1} = 1$$ which is
nothing close to the $\frac{\pi^2}{6}$ Wikipedia told me about.
Now my question is what went wrong. Is the theory just plainly wrong or did I miscalculate something, for example in the anti-derivative?
Thanks in advance. |
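For what it's worth, a quick numerical sketch (standard library only) shows the relationship between the series and the integral: by the integral test, the improper integral only bounds the sum, it does not equal it, since the sum is a step-function over-count of the area under $1/x^r$:

```python
import math

def zeta_partial(r, n):
    """Partial sum of the zeta series: sum of 1/k^r for k = 1..n."""
    return sum(1.0 / k**r for k in range(1, n + 1))

r = 2
integral = 1.0 / (r - 1)            # ∫_1^∞ x^(-r) dx = 1/(r-1); here 1
partial = zeta_partial(r, 100_000)  # ≈ π²/6 ≈ 1.6449

# Integral test bounds: ∫_1^∞ f ≤ ζ(r) ≤ f(1) + ∫_1^∞ f
assert integral <= partial <= integral + 1
```

So the computation of the integral was correct; the wrong step was identifying $\zeta(2)$ with the integral rather than sandwiching it between $1$ and $2$.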
For an integer $n > 0$, let $\mathrm{P}_n$ denote the vector space of polynomials with real coefficients of degree $n$ or less. Define the map $T : \mathrm{P}_2 \rightarrow \mathrm{P}_4$ by\[ T(f)(x) = f(x^2).\]
Determine if $T$ is a linear transformation.
If it is, find the matrix representation for $T$ relative to the basis $\mathcal{B} = \{ 1 , x , x^2 \}$ of $\mathrm{P}_2$ and $\mathcal{C} = \{ 1 , x , x^2 , x^3 , x^4 \}$ of $\mathrm{P}_4$.
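As an illustration (not part of the problem statement), the matrix can be assembled numerically by representing $p \in \mathrm{P}_2$ as a coefficient vector $[a_0, a_1, a_2]$ in $\mathcal{B}$; then $T(p)(x)=p(x^2)=a_0+a_1x^2+a_2x^4$ has coordinates $[a_0,0,a_1,0,a_2]$ in $\mathcal{C}$:

```python
# Apply T to a coefficient vector [a0, a1, a2] (basis {1, x, x^2}):
# p(x^2) = a0 + a1 x^2 + a2 x^4 has coordinates [a0, 0, a1, 0, a2]
# in the basis {1, x, x^2, x^3, x^4}.
def T(coeffs):
    a0, a1, a2 = coeffs
    return [a0, 0, a1, 0, a2]

# The j-th column of the matrix of T is T applied to the j-th basis vector.
basis = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
cols = [T(b) for b in basis]
M = [[cols[j][i] for j in range(3)] for i in range(5)]
# M sends 1 ↦ 1, x ↦ x^2, x^2 ↦ x^4.
```

Linearity of $T$ is visible here too: `T` acts coordinate-wise, so it respects sums and scalar multiples of coefficient vectors.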
Let $V$ be the vector space of $2 \times 2$ matrices with real entries, and $\mathrm{P}_3$ the vector space of real polynomials of degree 3 or less. Define the linear transformation $T : V \rightarrow \mathrm{P}_3$ by\[T \left( \begin{bmatrix} a & b \\ c & d \end{bmatrix} \right) = 2a + (b-d)x - (a+c)x^2 + (a+b-c-d)x^3.\]
Let $T:\R^2 \to \R^2$ be a linear transformation of the $2$-dimensional vector space $\R^2$ (the $x$-$y$-plane) to itself which is the reflection across a line $y=mx$ for some $m\in \R$.
Then find the matrix representation of the linear transformation $T$ with respect to the standard basis $B=\{\mathbf{e}_1, \mathbf{e}_2\}$ of $\R^2$, where\[\mathbf{e}_1=\begin{bmatrix}1 \\0\end{bmatrix}, \mathbf{e}_2=\begin{bmatrix}0 \\1\end{bmatrix}.\]
Let $P_n$ be the vector space of all polynomials with real coefficients of degree $n$ or less. Consider the differentiation linear transformation $T: P_n\to P_n$ defined by\[T\left(\, f(x) \,\right)=\frac{d}{dx}f(x).\]
(a) Consider the case $n=2$. Let $B=\{1, x, x^2\}$ be a basis of $P_2$. Find the matrix representation $A$ of the linear transformation $T$ with respect to the basis $B$.
(b) Compute $A^3$, where $A$ is the matrix obtained in part (a).
(c) If you computed $A^3$ in part (b) directly, then is there any theoretical explanation of your result?
(d) Now we consider the general case. Let $B$ be any basis of the vector space $P_n$ and let $A$ be the matrix representation of the linear transformation $T$ with respect to the basis $B$. Prove, without any calculation, that the matrix $A$ is nilpotent.
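For parts (a)-(c), the $n=2$ case can be checked numerically (an illustration, using plain Python lists as matrices):

```python
# Matrix of d/dx on P_2 in the basis {1, x, x^2}:
# d/dx(1) = 0, d/dx(x) = 1, d/dx(x^2) = 2x, so the columns are as follows.
A = [[0, 1, 0],
     [0, 0, 2],
     [0, 0, 0]]

def matmul(P, Q):
    """Multiply two square matrices given as lists of rows."""
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A2 = matmul(A, A)
A3 = matmul(A2, A)
# A^3 = 0: differentiating a polynomial of degree ≤ 2 three times gives 0,
# which is the theoretical explanation asked for in part (c).
```

The same reasoning gives the general nilpotency in part (d): $T^{n+1}$ annihilates every polynomial of degree at most $n$, so $A^{n+1}=0$ in any basis.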
Let $\mathbf{u}=\begin{bmatrix}1 \\1 \\0\end{bmatrix}$ and $T:\R^3 \to \R^3$ be the linear transformation\[T(\mathbf{x})=\proj_{\mathbf{u}}\mathbf{x}=\left(\, \frac{\mathbf{u}\cdot \mathbf{x}}{\mathbf{u}\cdot \mathbf{u}} \,\right)\mathbf{u}.\]
(a) Calculate the null space $\calN(T)$, a basis for $\calN(T)$ and nullity of $T$.
(b) Only by using part (a) and no other calculations, find $\det(A)$, where $A$ is the matrix representation of $T$ with respect to the standard basis of $\R^3$.
(c) Calculate the range $\calR(T)$, a basis for $\calR(T)$ and the rank of $T$.
(d) Calculate the matrix $A$ representing $T$ with respect to the standard basis for $\R^3$.
(e) Let\[B=\left\{\, \begin{bmatrix}1 \\0 \\0\end{bmatrix}, \begin{bmatrix}-1 \\1 \\0\end{bmatrix}, \begin{bmatrix}0 \\-1 \\1\end{bmatrix} \,\right\}\]be a basis for $\R^3$. Calculate the coordinates of $\begin{bmatrix}x \\y \\z\end{bmatrix}$ with respect to $B$.
(The Ohio State University, Linear Algebra Exam Problem)
Let $\calF[0, 2\pi]$ be the vector space of all real valued functions defined on the interval $[0, 2\pi]$. Define the map $f:\R^2 \to \calF[0, 2\pi]$ by\[\left(\, f\left(\, \begin{bmatrix}\alpha \\\beta\end{bmatrix} \,\right) \,\right)(x):=\alpha \cos x + \beta \sin x.\]We put\[V:=\im f=\{\alpha \cos x + \beta \sin x \in \calF[0, 2\pi] \mid \alpha, \beta \in \R\}.\]
(a) Prove that the map $f$ is a linear transformation.
(b) Prove that the set $\{\cos x, \sin x\}$ is a basis of the vector space $V$.
(c) Prove that the kernel is trivial, that is, $\ker f=\{\mathbf{0}\}$. (This yields an isomorphism of $\R^2$ and $V$.)
(d) Define a map $g:V \to V$ by\[g(\alpha \cos x + \beta \sin x):=\frac{d}{dx}(\alpha \cos x+ \beta \sin x)=\beta \cos x -\alpha \sin x.\]Prove that the map $g$ is a linear transformation.
(e) Find the matrix representation of the linear transformation $g$ with respect to the basis $\{\cos x, \sin x\}$.
Let $P_3$ be the vector space of polynomials of degree $3$ or less with real coefficients.
(a) Prove that the differentiation is a linear transformation. That is, prove that the map $T:P_3 \to P_3$ defined by\[T\left(\, f(x) \,\right)=\frac{d}{dx} f(x)\]for any $f(x)\in P_3$ is a linear transformation.
(b) Let $B=\{1, x, x^2, x^3\}$ be a basis of $P_3$. With respect to the basis $B$, find the matrix representation of the linear transformation $T$ in part (a). |
LaTeX supports many worldwide languages by means of special packages. This article explains how to import and use those packages to create documents in Italian.
Contents
The Italian language has accented words. For this reason the preamble of your document must be modified accordingly to support these characters and some other features.
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[italian]{babel}
\usepackage[T1]{fontenc}
\begin{document}
\tableofcontents
\vspace{2cm} % Add a 2cm space
\begin{abstract}
Questo è un breve riassunto dei contenuti del documento scritto in italiano.
\end{abstract}
\section{Sezione introduttiva}
Questa è la prima sezione, possiamo aggiungere alcuni elementi aggiuntivi e tutto digitato correttamente. Inoltre, se una parola è troppo lunga e deve essere troncato babel cercherà per troncare correttamente a seconda della lingua.
\section{Teoremi Sezione}
Questa sezione è quello di vedere cosa succede con i comandi testo definendo
\[ \lim x = \sin{\theta} + \max \{3.52, 4.22\} \]
\end{document}
There are two packages in this document related to the encoding and the special characters. These packages will be explained in the next sections.
If you are looking for instructions on how to use more than one language in a single document, for instance English and Italian, see the International language support article.
Modern computer systems allow you to input letters of national alphabets directly from the keyboard. In order to handle the variety of input encodings used for different groups of languages and/or on different computer platforms, LaTeX employs the inputenc package to set up the input encoding. In this case the package properly displays characters in the Italian alphabet. To use this package, add the next line to the preamble of your document:

\usepackage[utf8]{inputenc}

The recommended input encoding is utf-8. You can use other encodings depending on your operating system.
For proper LaTeX document generation you must also choose a font encoding that supports the specific characters of the Italian language; this is accomplished by the fontenc package:

\usepackage[T1]{fontenc}

Even though the default encoding works well for Italian, using this specific encoding will avoid glitches with some specific characters. The default LaTeX encoding is OT1.
To extend the default LaTeX capabilities, for proper hyphenation and for translating the names of document elements, import the babel package for the Italian language:

\usepackage[italian]{babel}

As you may see in the example in the introduction, instead of "abstract" and "Contents" the Italian words "Sommario" and "Indice" are used.
Sometimes, for formatting reasons, some words have to be broken up into syllables separated by a - (hyphen) so the word can continue on a new line. For example, matematica could become mate-matica. The babel package, whose usage was described in the previous section, usually does a good job of breaking up words correctly, but if this is not the case you can use a couple of commands in your preamble.
\usepackage{hyphenat} \hyphenation{mate-mati-ca recu-perare}
The first command imports the package hyphenat, and the second line is a list of space-separated words with defined hyphenation rules. On the other hand, if you want a word not to be broken automatically, use the {\nobreak word} command within your document.
For more information see |
Establishing the type of distribution, sample size, and known or unknown standard deviation can help you figure out how to go about a hypothesis test. However, there are several other factors you should consider when working out a hypothesis test.
Rare Events
Suppose you make an assumption about a property of the population (this assumption is the null hypothesis). Then you gather sample data randomly. If the sample has properties that would be very
unlikely to occur if the assumption is true, then you would conclude that your assumption about the population is probably incorrect. (Remember that your assumption is just an assumption—it is not a fact and it may or may not be true. But your sample data are real and the data are showing you a fact that seems to contradict your assumption.)
For example, Didi and Ali are at a birthday party of a very wealthy friend. They hurry to be first in line to grab a prize from a tall basket that they cannot see inside because they will be blindfolded. There are 200 plastic bubbles in the basket and Didi and Ali have been told that there is only one with a $100 bill. Didi is the first person to reach into the basket and pull out a bubble. Her bubble contains a $100 bill. The probability of this happening is \(\frac{1}{200} = 0.005\). Because this is so unlikely, Ali is hoping that what the two of them were told is wrong and there are more $100 bills in the basket. A "rare event" has occurred (Didi getting the $100 bill) so Ali doubts the assumption about only one $100 bill being in the basket.
Using the Sample to Test the Null Hypothesis
Use the sample data to calculate the actual probability of getting the test result, called the \(p\)-value. The \(p\)-value is the probability that, if the null hypothesis is true, the results from another randomly selected sample will be as extreme or more extreme as the results obtained from the given sample.
A large \(p\)-value calculated from the data indicates that we should not reject the null hypothesis. The smaller the \(p\)-value, the more unlikely the outcome, and the stronger the evidence is against the null hypothesis. We would reject the null hypothesis if the evidence is strongly against it.
Draw a graph that shows the \(p\)-value. The hypothesis test is easier to perform if you use a graph because you see the problem more clearly.
Example \(\PageIndex{1}\)
Suppose a baker claims that his bread height is more than 15 cm, on average. Several of his customers do not believe him. To persuade his customers that he is right, the baker decides to do a hypothesis test. He bakes 10 loaves of bread. The mean height of the sample loaves is 17 cm. The baker knows from baking hundreds of loaves of bread that the standard deviation for the height is 0.5 cm. and the distribution of heights is normal.
The null hypothesis could be \(H_{0}: \mu \leq 15\). The alternate hypothesis is \(H_{a}: \mu > 15\).
The words "is more than" translate as "\(>\)", so "\(\mu > 15\)" goes into the alternate hypothesis. The null hypothesis must contradict the alternate hypothesis.
Since \(\sigma\)
is known (\(\sigma = 0.5 cm.\)), the distribution for the population is known to be normal with mean \(μ = 15\) and standard deviation
\[\dfrac{\sigma}{\sqrt{n}} = \frac{0.5}{\sqrt{10}} = 0.16. \nonumber\]
Suppose the null hypothesis is true (the mean height of the loaves is no more than 15 cm). Then is the mean height (17 cm) calculated from the sample unexpectedly large? The hypothesis test works by asking how unlikely the sample mean would be if the null hypothesis were true. The graph shows how far out the sample mean is on the normal curve. The \(p\)-value is the probability that, if we were to take other samples, any other sample mean would fall at least as far out as 17 cm. The \(p\)-value, then, is the probability that a sample mean is the same as or greater than 17 cm when the population mean is, in fact, 15 cm. We can calculate this probability using the normal distribution for means.
Figure \(\PageIndex{1}\)
\(p\text{-value} = P(\bar{x} > 17)\) which is approximately zero.
A \(p\)-value of approximately zero tells us that a sample mean as high as 17 cm would be highly unlikely if the population mean were really 15 cm. That is, almost 0% of all samples would have a mean height of at least 17 cm purely by CHANCE had the population mean height really been 15 cm. Because the outcome of 17 cm is so unlikely (meaning it is not happening by chance alone), we conclude that the evidence is strongly against the null hypothesis (the mean height is at most 15 cm). There is sufficient evidence that the true mean height for the population of the baker's loaves of bread is greater than 15 cm.
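As a quick numerical check (my own sketch, using only Python's standard library and the values stated above, \(\mu = 15\), \(\sigma = 0.5\), \(n = 10\)), the standard error and the right-tailed \(p\)-value can be computed directly:

```python
from math import sqrt, erfc

mu0, sigma, n, xbar = 15.0, 0.5, 10, 17.0

# Standard error of the mean under the null hypothesis
se = sigma / sqrt(n)                 # about 0.158, rounded to 0.16 above

# Right-tailed p-value P(Xbar > 17) = P(Z > z) for a standard normal Z,
# written in terms of the complementary error function
z = (xbar - mu0) / se                # about 12.6 standard errors out
p_value = 0.5 * erfc(z / sqrt(2))

print(p_value < 1e-30)  # True: the p-value is approximately zero
```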
Exercise \(\PageIndex{1}\)
A normal distribution has a standard deviation of 1. We want to verify a claim that the mean is greater than 12. A sample of 36 is taken with a sample mean of 12.5.
\(H_{0}: \mu \leq 12\) \(H_{a}: \mu > 12\)
The \(p\)-value is 0.0013
Draw a graph that shows the \(p\)-value.
Answer:
\(p\text{-value} = 0.0013\)
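The stated \(p\)-value can be reproduced with a short standard-library computation (a sketch: \(z = (12.5 - 12)/(1/\sqrt{36}) = 3\), and \(P(Z > 3) \approx 0.0013\)):

```python
from math import sqrt, erfc

mu0, sigma, n, xbar = 12.0, 1.0, 36, 12.5
se = sigma / sqrt(n)                # 1/6
z = (xbar - mu0) / se               # exactly 3.0

# Right-tailed p-value P(Z > 3) via the complementary error function
p_value = 0.5 * erfc(z / sqrt(2))
print(round(p_value, 4))  # 0.0013
```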
Figure \(\PageIndex{2}\)
Decision and Conclusion
A systematic way to make a decision of whether to reject or not reject the null hypothesis is to compare the \(p\)-value and a preset or preconceived \(\alpha\) (also called a "significance level"). A preset \(\alpha\) is the probability of a Type I error (rejecting the null hypothesis when the null hypothesis is true). It may or may not be given to you at the beginning of the problem.
When you make a decision to reject or not reject \(H_{0}\), do as follows:
If \(\alpha > p\text{-value}\), reject \(H_{0}\). The results of the sample data are significant. There is sufficient evidence to conclude that \(H_{0}\) is an incorrect belief and that the alternative hypothesis, \(H_{a}\), may be correct.
If \(\alpha \leq p\text{-value}\), do not reject \(H_{0}\). The results of the sample data are not significant. There is not sufficient evidence to conclude that the alternative hypothesis, \(H_{a}\), may be correct.
When you "do not reject \(H_{0}\)", it does not mean that you should believe that
H 0 is true. It simply means that the sample data have failed to provide sufficient evidence to cast serious doubt about the truthfulness of \(H_{0}\).
Conclusion: After you make your decision, write a thoughtful conclusion about the hypotheses in terms of the given problem.
Example \(\PageIndex{2}\)
When using the \(p\)-value to evaluate a hypothesis test, it is sometimes useful to use the following memory device:
If the \(p\)-value is low, the null must go. If the \(p\)-value is high, the null must fly.
This memory aid relates a \(p\)-value less than the established alpha (the \(p\) is low) as rejecting the null hypothesis and, likewise, relates a \(p\)-value higher than the established alpha (the \(p\) is high) as not rejecting the null hypothesis.
Fill in the blanks.
Reject the null hypothesis when ______________________________________.
The results of the sample data _____________________________________.
Do not reject the null hypothesis when __________________________________________.
The results of the sample data ____________________________________________.
Answer
Reject the null hypothesis when the \(p\)-value is less than the established alpha value. The results of the sample data support the alternative hypothesis.
Do not reject the null hypothesis when the \(p\)-value is greater than the established alpha value. The results of the sample data do not support the alternative hypothesis.
Exercise \(\PageIndex{2}\)
It’s a Boy Genetics Labs claim their procedures improve the chances of a boy being born. The results for a test of a single population proportion are as follows:
\(H_{0}: p = 0.50, H_{a}: p > 0.50\) \(\alpha = 0.01\) \(p\text{-value} = 0.025\)
Interpret the results and state a conclusion in simple, non-technical terms.
Answer
Since the \(p\)-value is greater than the established alpha value (the \(p\)-value is high), we do not reject the null hypothesis. There is not enough evidence to support It’s a Boy Genetics Labs' stated claim that their procedures improve the chances of a boy being born.
Chapter Review
When the probability of an event occurring is low, and it happens, it is called a rare event. Rare events are important to consider in hypothesis testing because they can inform your willingness not to reject or to reject a null hypothesis. To test a null hypothesis, find the \(p\)-value for the sample data and graph the results. When deciding whether or not to reject the null hypothesis, keep these two decision rules in mind:
If \(\alpha > p\text{-value}\), reject the null hypothesis.
If \(\alpha \leq p\text{-value}\), do not reject the null hypothesis.
Glossary
Level of Significance of the Test: the probability of a Type I error (rejecting the null hypothesis when it is true). Notation: \(\alpha\). In hypothesis testing, the level of significance is called the preconceived \(\alpha\) or the preset \(\alpha\).
\(p\)-value: the probability that an event will happen purely by chance assuming the null hypothesis is true. The smaller the \(p\)-value, the stronger the evidence is against the null hypothesis.
Contributors
Barbara Illowsky and Susan Dean (De Anza College) with many other contributing authors. Content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license. Download for free at http://cnx.org/contents/30189442-699...b91b9de@18.114.
First for some notation $$ l(\lambda) = \text{ number of parts in a partition } \lambda \vdash n$$
$$ f_{\lambda} = \text{number of standard Young tableau of shape } \lambda\vdash n$$
The number $f_{\lambda}$ is given by the hook length formula but need not have a "nice" closed form. Given an integer $k$, consider the problem of computing $$ \tau_{k}(n) = \displaystyle\sum_{\lambda\vdash n \text{, } l(\lambda)\leq k}f_{\lambda}$$
Contrary to expectation, relatively neat closed forms are known for $\tau_{2}(n)$, $\tau_{3}(n)$ and $\tau_{4}(n)$. Gessel proved the following: $$\tau_{2}(n) = \binom{n}{\lfloor \frac{n}{2} \rfloor}$$ $$\tau_{3}(n) = M_{n}$$ $$\tau_{4}(n) = C_{\lfloor \frac{n+1}{2} \rfloor}C_{\lceil \frac{n+1}{2} \rceil}$$ where $M_{n}$ denotes the $n$th Motzkin number and $C_{n}$ denotes the $n$th Catalan number.
(Here both sequences are indexed starting 0)
(Aside: Proving the first two identities bijectively is a cute exercise in my opinion.)
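For anyone who wants to experiment with these sums, $\tau_k(n)$ is easy to compute by brute force from the hook length formula; here is a short sketch (the helper names are mine) that verifies the $\tau_2$ and $\tau_3$ identities for small $n$:

```python
from math import factorial, comb

def num_syt(shape):
    """Number of standard Young tableaux of a given shape (a weakly
    decreasing tuple of positive parts), by the hook length formula:
    n! divided by the product of all hook lengths."""
    n = sum(shape)
    # conj[j] = length of column j, i.e. the conjugate partition
    conj = [sum(1 for row in shape if row > j) for j in range(shape[0])]
    hooks = 1
    for i, row in enumerate(shape):
        for j in range(row):
            # hook length = arm + leg + 1
            hooks *= (row - j - 1) + (conj[j] - i - 1) + 1
    return factorial(n) // hooks

def partitions(n, max_parts, largest=None):
    """All partitions of n into at most max_parts parts."""
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    if max_parts == 0:
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, max_parts - 1, first):
            yield (first,) + rest

def tau(k, n):
    return sum(num_syt(p) for p in partitions(n, k))

motzkin = [1, 1, 2, 4, 9, 21, 51, 127]  # M_0, M_1, ...
for n in range(1, 8):
    assert tau(2, n) == comb(n, n // 2)  # Gessel's tau_2 identity
    assert tau(3, n) == motzkin[n]       # tau_3(n) = M_n
```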
As a by-product of my research I obtained the following identity $$ \displaystyle\sum_{\lambda\vdash k \text{, } l(\lambda)=5,\lambda_{5}=1}f_{\lambda} = \displaystyle\frac {\lfloor \frac{k+1}{2} \rfloor (\lceil \frac{k+1}{2} \rceil +1)}{k+1}C_{\lfloor \frac{k+1}{2} \rfloor}C_{\lceil \frac{k+1}{2} \rceil} - C_{\lfloor \frac{k}{2} \rfloor +1}C_{\lceil \frac{k}{2} \rceil +1}+M_{k}$$ where the sum on the left runs over all partitions $\lambda$ of $k$ with length exactly 5 and minimum part $\lambda_{5}$ equal to 1. Admittedly this is very specific, but my question is: what is known about sums of the above sort
a) where the minimum part is fixed and so is the length of the partition ?
b) where $l(\lambda) \leq k$ and the $k$th part $\lambda_{k} \leq i$ for a fixed non-negative integer $i$?
Gessel, I believe, used some really clever symmetric function manipulation to obtain the identities mentioned earlier. I'd appreciate it if somebody who has seen this material elsewhere (i.e., in a reference other than Gessel / Gouyou-Beauchamps) could direct me to it. Thanks!
Problem 676
Let $V$ be the vector space of $2 \times 2$ matrices with real entries, and $\mathrm{P}_3$ the vector space of real polynomials of degree 3 or less. Define the linear transformation $T : V \rightarrow \mathrm{P}_3$ by
\[T \left( \begin{bmatrix} a & b \\ c & d \end{bmatrix} \right) = 2a + (b-d)x - (a+c)x^2 + (a+b-c-d)x^3.\]
Find the rank and nullity of $T$.
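A numerical cross-check of the answer (my own sketch, not part of the problem): write $T$ in the bases $\{E_{11}, E_{12}, E_{21}, E_{22}\}$ of $V$ and $\{1, x, x^2, x^3\}$ of $\mathrm{P}_3$, then row-reduce. The fourth column is the negative of the second, so the rank is 3 and, by rank-nullity, the nullity is 1.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                factor = m[i][col] / m[r][col]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Columns are T(E11), T(E12), T(E21), T(E22) in the basis {1, x, x^2, x^3}
M = [
    [2,  0,  0,  0],   # constant term: 2a
    [0,  1,  0, -1],   # x term: b - d
    [-1, 0, -1,  0],   # x^2 term: -(a + c)
    [1,  1, -1, -1],   # x^3 term: a + b - c - d
]

r = rank(M)
print("rank =", r, "nullity =", 4 - r)  # rank = 3 nullity = 1
```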
Problem 675
The space $C^{\infty} (\mathbb{R})$ is the vector space of real functions which are infinitely differentiable. Let $T : C^{\infty} (\mathbb{R}) \rightarrow \mathrm{P}_3$ be the map which takes $f \in C^{\infty}(\mathbb{R})$ to its third order Taylor polynomial, specifically defined by
\[ T(f)(x) = f(0) + f'(0) x + \frac{f^{\prime\prime}(0)}{2} x^2 + \frac{f^{\prime \prime \prime}(0)}{6} x^3.\] Here, $f'$, $f^{\prime\prime}$ and $f^{\prime \prime \prime}$ denote the first, second, and third derivatives of $f$, respectively.
Prove that $T$ is a linear transformation.
Problem 674
Let $\mathrm{P}_n$ be the vector space of polynomials of degree at most $n$. The set $B = \{ 1 , x , x^2 , \cdots , x^n \}$ is a basis of $\mathrm{P}_n$, called the standard basis. Let $T : \mathrm{P}_4 \rightarrow \mathrm{P}_{4}$ be the map defined by, for $f \in \mathrm{P}_4$,
\[ T (f) (x) = f(x) - x - 1.\]
Determine if $T$ is a linear transformation. If it is, find the matrix representation of $T$ relative to the standard basis of $\mathrm{P}_4$.
Problem 673
Let $\mathrm{P}_n$ be the vector space of polynomials of degree at most $n$. The set $B = \{ 1 , x , x^2 , \cdots , x^n \}$ is a basis of $\mathrm{P}_n$, called the standard basis.
Let $T : \mathrm{P}_3 \rightarrow \mathrm{P}_{5}$ be the map defined by, for $f \in \mathrm{P}_3$,
\[T (f) (x) = (x^2 - 2) f(x).\]
Determine if $T$ is a linear transformation. If it is, find the matrix representation of $T$ relative to the standard bases of $\mathrm{P}_3$ and $\mathrm{P}_{5}$.
Problem 672
For an integer $n > 0$, let $\mathrm{P}_n$ be the vector space of polynomials of degree at most $n$. The set $B = \{ 1 , x , x^2 , \cdots , x^n \}$ is a basis of $\mathrm{P}_n$, called the standard basis.
Let $T : \mathrm{P}_n \rightarrow \mathrm{P}_{n+1}$ be the map defined by, for $f \in \mathrm{P}_n$,
\[T (f) (x) = x f(x).\]
Prove that $T$ is a linear transformation, and find its range and nullspace.
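In coordinates (a sketch of mine, using the standard bases), multiplication by $x$ simply shifts the coefficient vector, which makes the nullspace and range of $T$ transparent:

```python
def T(coeffs):
    """Coefficient-vector form of T(f)(x) = x f(x): the coefficients of f
    (constant term first) shift up by one degree."""
    return [0] + list(coeffs)

def matrix_of_T(n):
    """Matrix of T : P_n -> P_{n+1} in the standard bases: an
    (n+2) x (n+1) matrix with 1s on the subdiagonal."""
    return [[1 if i == j + 1 else 0 for j in range(n + 1)]
            for i in range(n + 2)]

# f(x) = 3 - x + 2x^2  ->  x f(x) = 3x - x^2 + 2x^3
assert T([3, -1, 2]) == [0, 3, -1, 2]

# T(f) = 0 forces every coefficient of f to vanish, so the nullspace is
# {0}; the range is the set of polynomials with zero constant term.
assert matrix_of_T(1) == [[0, 0], [1, 0], [0, 1]]
```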
Problem 669
(a) Suppose that a $3\times 3$ system of linear equations is inconsistent. Is the coefficient matrix of the system nonsingular?
(b) Suppose that a $3\times 3$ homogeneous system of linear equations has a solution $x_1=0, x_2=-3, x_3=5$. Is the coefficient matrix of the system nonsingular?
(c) Let $A$ be a $4\times 4$ matrix and let \[\mathbf{v}=\begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} \text{ and } \mathbf{w}=\begin{bmatrix} 4 \\ 3 \\ 2 \\ 1 \end{bmatrix}.\] Suppose that we have $A\mathbf{v}=A\mathbf{w}$. Is the matrix $A$ nonsingular?
Problem 668
Consider the system of differential equations
\begin{align*} \frac{\mathrm{d} x_1(t)}{\mathrm{d}t} & = 2 x_1(t) -x_2(t) -x_3(t)\\ \frac{\mathrm{d}x_2(t)}{\mathrm{d}t} & = -x_1(t)+2x_2(t) -x_3(t)\\ \frac{\mathrm{d}x_3(t)}{\mathrm{d}t} & = -x_1(t) -x_2(t) +2x_3(t) \end{align*}
(a) Express the system in the matrix form.
(b) Find the general solution of the system.
(c) Find the solution of the system with the initial value $x_1=0, x_2=1, x_3=5$.
Solve the Linear Dynamical System $\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t} =A\mathbf{x}$ by Diagonalization
Problem 667
(a) Find all solutions of the linear dynamical system \[\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t} =\begin{bmatrix} 1 & 0\\ 0& 3 \end{bmatrix}\mathbf{x},\] where $\mathbf{x}(t)=\mathbf{x}=\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$ is a function of the variable $t$.
(b) Solve the linear dynamical system \[\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t}=\begin{bmatrix} 2 & -1\\ -1& 2 \end{bmatrix}\mathbf{x}\] with the initial value $\mathbf{x}(0)=\begin{bmatrix} 1 \\ 3 \end{bmatrix}$.
Prove that $\{ 1 , 1 + x , (1 + x)^2 \}$ is a Basis for the Vector Space of Polynomials of Degree $2$ or Less
Problem 665
Let $\mathbf{P}_2$ be the vector space of polynomials of degree $2$ or less.
(a) Prove that the set $\{ 1 , 1 + x , (1 + x)^2 \}$ is a basis for $\mathbf{P}_2$.
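Part (a) can be sanity-checked numerically (my own sketch): expanding $1$, $1+x$, and $(1+x)^2 = 1 + 2x + x^2$ in the standard basis gives an upper-triangular change-of-basis matrix with nonzero determinant, so the three polynomials are linearly independent and hence a basis of the 3-dimensional space $\mathbf{P}_2$.

```python
# Columns: 1, 1+x, (1+x)^2 written in the standard basis {1, x, x^2}
B = [
    [1, 1, 1],
    [0, 1, 2],
    [0, 0, 1],
]

# Upper-triangular matrix: the determinant is the product of the diagonal
det = B[0][0] * B[1][1] * B[2][2]
assert det == 1  # nonzero, so {1, 1+x, (1+x)^2} is a basis of P_2
print(det)  # 1
```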
(b) Write the polynomial $f(x) = 2 + 3x - x^2$ as a linear combination of the basis $\{ 1 , 1+x , (1+x)^2 \}$.
Problem 663
Let $\R^2$ be the $x$-$y$-plane. Then $\R^2$ is a vector space. A line $\ell \subset \mathbb{R}^2$ with slope $m$ and $y$-intercept $b$ is defined by
\[ \ell = \{ (x, y) \in \mathbb{R}^2 \mid y = mx + b \} .\]
Prove that $\ell$ is a subspace of $\mathbb{R}^2$ if and only if $b = 0$.
Problem 659
Fix the row vector $\mathbf{b} = \begin{bmatrix} -1 & 3 & -1 \end{bmatrix}$, and let $\R^3$ be the vector space of $3 \times 1$ column vectors. Define
\[W = \{ \mathbf{v} \in \R^3 \mid \mathbf{b} \mathbf{v} = 0 \}.\] Prove that $W$ is a vector subspace of $\R^3$.
Problem 658
Let $V$ be the vector space of $n \times n$ matrices with real coefficients, and define
\[ W = \{ \mathbf{v} \in V \mid \mathbf{v} \mathbf{w} = \mathbf{w} \mathbf{v} \mbox{ for all } \mathbf{w} \in V \}.\] The set $W$ is called the center of $V$.
Prove that $W$ is a subspace of $V$.
We have recently discovered the densest known crystalline packings of congruent ellipsoids, which are periodic packings with two particles in the fundamental cell. The results were published in the letter entitled
Unusually Dense Crystal Packings of Ellipsoids ( Phys. Rev. Lett. 92 , 255506 (2004)) by A. Donev, F. H. Stillinger, P. M. Chaikin and S. Torquato. In this Letter, we report on the densest known packings of congruent ellipsoids. The family of new packings consists of crystal arrangements of spheroids with a wide range of aspect ratios, and with density always surpassing that of the densest Bravais lattice packing, i.e., 0.7405. A remarkable maximum density of 0.7707 is achieved for aspect ratios larger than \(\sqrt{3}\), when each ellipsoid has 14 touching neighbors. Our results are directly relevant to understanding the equilibrium behavior of systems of hard ellipsoids, and, in particular, the solid and glassy phases.
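For context, the Bravais-lattice benchmark density 0.7405 quoted here is the face-centered-cubic sphere packing fraction \(\pi/\sqrt{18}\); a quick check (my own sketch):

```python
from math import pi, sqrt

# FCC conventional cubic cell: side a = sqrt(2)*d for sphere diameter d,
# containing 4 spheres (8 corners / 8 + 6 faces / 2).
d = 1.0
a = sqrt(2) * d
phi_fcc = 4 * (pi / 6) * d**3 / a**3

assert abs(phi_fcc - pi / sqrt(18)) < 1e-12
print(round(phi_fcc, 4))  # 0.7405
```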
We obtained the densest known packings of superdisks and superballs, which are certain Bravais lattice packings with symmetries consistent with those of the particles. See Y. Jiao, F. H. Stillinger and S. Torquato
Optimal Packing of Superdisks and the Role of Symmetry ( Phys. Rev. Lett. 100 , 245505 (2008)) for superdisk packings, where we provide exact constructions for the densest known two-dimensional packings of superdisks whose shapes are defined by \(|x_1|^{2p}+|x_2|^{2p}\le 1\) and thus contain a large family of both convex (\(p \gt 0.5\)) and concave (\(p \lt 0.5\)) particles. Our candidate maximal packing arrangements are achieved by certain families of Bravais lattice packings, and the maximal density is nonanalytic at the circular-disk point \((p=1)\) and increases dramatically as p moves away from unity. Moreover, we show that the broken rotational symmetry of superdisks influences the packing characteristics in a nontrivial way.
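As an aside, the area of a single superdisk has a closed form in terms of Gamma functions, \(A(p) = 4\,\Gamma(1+1/(2p))^2/\Gamma(1+1/p)\), which is handy when converting a lattice cell area into a packing density (a sketch of mine, not taken from the Letter):

```python
from math import gamma, pi

def superdisk_area(p):
    """Area of |x1|^(2p) + |x2|^(2p) <= 1, from the Gamma-function
    formula for the Lame curve |x|^m + |y|^m <= 1 with m = 2p."""
    return 4 * gamma(1 + 1 / (2 * p)) ** 2 / gamma(1 + 1 / p)

# p = 1 is the unit disk; p = 0.5 is the diamond |x| + |y| <= 1 of area 2;
# as p grows, the shape approaches the 2x2 square of area 4.
assert abs(superdisk_area(1.0) - pi) < 1e-12
assert abs(superdisk_area(0.5) - 2.0) < 1e-12
assert 3.9 < superdisk_area(50.0) < 4.0
```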
See Y. Jiao, F. H. Stillinger and S. Torquato
Optimal Packing of Superballs ( Phys. Rev. E 79 , 041309 (2009)) for superball packings, where we provide analytical constructions for the densest known superball packings for all convex and concave cases, defined by \(|x_1|^{2p}+|x_2|^{2p}+|x_3|^{2p}\le 1\). The candidate maximally dense packings are certain families of Bravais lattice packings in which each particle has 12 contacting neighbors possessing the global symmetries that are consistent with certain symmetries of a superball. We also provide strong evidence that our packings for convex superballs (\(p\gt 0.5\)) are most likely the optimal ones. The maximal packing density as a function of p is nonanalytic at the sphere point \(p=1\) and increases dramatically as p moves away from unity. Two more nontrivial nonanalytic behaviors occur at \(p_c=1.1509\) and \(p_o =\ln{3}/\ln{4}=0.7924\) for cubic and octahedral superballs, respectively, where different Bravais lattice packings possess the same densities. — The featured cover story of Aug. 13, 2009 issue of Nature
Abstract:
Dense particle packings have served as useful models of the structures of liquid, glassy and crystalline states of matter, granular media, heterogeneous materials and biological systems. Probing the symmetries and other mathematical properties of the densest packings is a problem of interest in discrete geometry and number theory. Previous work has focused mainly on spherical particles; very little is known about dense polyhedral packings. Torquato and Jiao formulate the generation of dense packings of polyhedra as an optimization problem, using an adaptive fundamental cell subject to periodic boundary conditions (termed the ‘adaptive shrinking cell’ scheme). Using a variety of multi-particle initial configurations, Torquato and Jiao find the densest known packings of the four non-tiling Platonic solids (the tetrahedron, octahedron, dodecahedron and icosahedron) in three-dimensional Euclidean space. The densities are \(0.782…, 0.947…, 0.904…\) and \(0.836…\), respectively. Unlike the densest tetrahedral packing, which cannot be a Bravais lattice packing, the densest packings of the other non-tiling Platonic solids that we obtain are their previously known optimal (Bravais) lattice packings. Combining the simulation results with derived rigorous upper bounds and theoretical arguments leads to the conjecture that the densest packings of the Platonic and Archimedean solids with central symmetry are given by their corresponding densest lattice packings. This is the analogue of Kepler’s sphere conjecture for these solids. See S. Torquato and Y. Jiao, Nature 460, 876 (2009) for details.
The packing details can be found here.
Here are some links to discussions of this work in the popular press:
The determination of the densest packings of regular tetrahedra (one of the five Platonic solids) is attracting great attention as evidenced by the rapid pace at which packing records are being broken and the fascinating packing structures that have emerged. Here we provide the most general analytical formulation to date to construct dense periodic packings of tetrahedra with four particles per fundamental cell. This analysis results in a six-parameter family of dense tetrahedron packings that includes as special cases the recently discovered “dimer” packings of tetrahedra, including the densest known packings with density \(\phi = 4000/4671 = 0.856347…\). This study strongly suggests that the latter set of packings are the densest among all packings with a four-particle basis. Whether they are the densest packings of tetrahedra among all packings is an open question, but we offer remarks about this issue. Moreover, we describe a procedure that provides estimates of upper bounds on the maximal density of tetrahedron packings, which could aid in assessing the packing efficiency of candidate dense packings.
— Featured on the cover of Journal of Chemical Physics
The Platonic and Archimedean polyhedra possess beautiful symmetries and arise in many natural and synthetic structures. Dense polyhedron packings are useful models of a variety of condensed matter systems, including liquids, glasses and crystals, granular media, and heterogeneous materials. Understanding how nonspherical particles pack is a first step toward a better understanding of how biological cells pack. Probing the symmetries and other mathematical properties of the densest packings is a problem of interest in discrete geometry and number theory.
Recently, there has been a large effort devoted to finding dense packings of polyhedra. Although organizing principles for the types of structures associated with the densest polyhedron packings have been put forth, much remains to be done to find the maximally dense packings for specific shapes. Here, we analytically construct the densest known packing of truncated tetrahedra with packing fraction \(207/208=0.995192…\), which is amazingly close to unity and strongly implies the optimality of the packing. This construction is based on a generalized organizing principle for polyhedra that lack central symmetry. Moreover, we find that the holes in this putative optimal packing are small regular tetrahedra, leading to a new tiling of space by regular tetrahedra and truncated tetrahedra. We also numerically study the equilibrium melting properties of what apparently is the densest packing of truncated tetrahedra as the system undergoes decompression. Our simulations reveal two different stable crystal phases, one at high densities and the other at intermediate densities, as well as a first-order liquid-crystal phase transition.
(Left) A dense packing of unlinked ring tori. (Right) A dense packing of linked ring tori.
Dense packings of nonoverlapping bodies in three-dimensional Euclidean space \(\mathbb{R}^3\) are useful models of the structure of a variety of many-particle systems that arise in the physical and biological sciences. Here we investigate the packing behavior of congruent ring tori in \(\mathbb{R}^3\), which are multiply connected nonconvex bodies of genus 1, as well as horn and spindle tori. Specifically, we analytically construct a family of dense periodic packings of unlinked tori guided by the organizing principles originally devised for simply connected solid bodies [Torquato and Jiao, Phys. Rev. E 86, 011102 (2012)]. We find that the horn tori as well as certain spindle and ring tori can achieve a packing density not only higher than that of spheres (i.e., \(\pi/\sqrt{18}=0.7404…\)) but also higher than the densest known ellipsoid packings (i.e., \(0.7707…\)). In addition, we study dense packings of clusters of pair-linked ring tori (i.e., Hopf links), which can possess much higher densities than corresponding packings consisting of unlinked tori.
Invited talk by Professor Torquato entitled “Packing Nonspherical Particles: All Shapes Are Not Created Equal” given at the March American Physical Society Meeting in Boston on February 28, 2012. See also the APS link: here.
See also:
A New Tool to Help Mathematicians Pack
The Densest Local Packings of Identical Spheres in Three Dimensions
We have used a novel algorithm combining nonlinear programming methods with a random search of configuration space to find the densest local packings of spheres in three-dimensional Euclidean space. Our results reveal a wealth of information about packings of spheres, including counterintuitive results concerning the physics of dilute mixtures of spherical solute particles in a solvent composed of same-size spheres and about the presence of unjammed spheres (rattlers) in the densest local structures.
The Densest Local Packings of Identical Disks in Two Dimensions
\(N=15\), point group \(D_{5h}\)
The densest local packings of \(N\) identical nonoverlapping spheres within a radius \(R_{min}(N)\) of a fixed central sphere of the same size are obtained using a nonlinear programming method operating in conjunction with a stochastic search of configuration space. We find and present the putative densest packings and corresponding \(R_{min}(N)\) for selected values of \(N\) up to \(N=348\) and use this knowledge to construct a realizability condition for the pair correlation functions of sphere packings and an upper bound on the maximal density of infinite sphere packings.
The Densest Local Packings of Spheres in Any Dimension and the Golden Ratio
The optimal spherical code problem involves the placement of the centers of \(N\) nonoverlapping spheres of unit diameter onto the surface of a sphere of radius \(R\) such that \(R\) is minimized. We prove that in any dimension, all solutions between unity and the golden ratio to the optimal spherical code problem for \(N\) spheres are also solutions to the corresponding DLP problem.
I am currently working with Earl Hammon Jr's presentation PBR Diffuse Lighting for GGX+Smith Microsurfaces (referenced below as [PBR, p. XYZ]) and have read through (among others) Brent Burley's Physically-Based Shading at Disney (referenced below as [DIS, p. XYZ]) to get a good diffuse BRDF component. I am stuck at combining the two with the Fresnel term.
Short introduction for vectors and angles as I use them:
$\omega_i$ is the light vector
$\omega_o$ is the view vector
$\omega_h$ is the half vector (halfway between $\omega_i$ and $\omega_o$)
$\omega_n$ is the macro geometry normal
$\theta_i$ is the angle between $\omega_i$ and $\omega_n$
$\theta_o$ is the angle between $\omega_o$ and $\omega_n$
$\theta_h$ is the angle between $\omega_n$ and $\omega_h$
$\alpha_{hi}$ is the angle between $\omega_i$ and $\omega_h$
$\alpha_{ho}$ is the angle between $\omega_o$ and $\omega_h$ (this distinction is for clarification)
$\alpha_h$ is any of the angles $\alpha_{hi}$, $\alpha_{ho}$, since they are equal
Now, given that $r_s$ is the BRDF term of the specular component without the Fresnel factor, and $r_d$ accordingly the term of the diffuse component without the Fresnel factor, the Fresnel factor is written as $F(\text{angle})$. [PBR, p. 105] mentions that the diffuse light is transmitted two times, once in and once out. Thus, the Fresnel component has to be multiplied in twice. [PBR, p. 106] goes on to say Fresnel's laws are symmetric, meaning entering and leaving are direction independent (i.e. it does not matter that once we go into the material from air and once we leave into air). Now I would assume (where $F_1$ is the Fresnel term for entering and $F_2$ for leaving the material) the diffuse transmission factor to be
$(1-F_1(\alpha_{hi}))*(1-F_2(\alpha_{ho}))$
$F_1$ and $F_2$ are the same function, and $\alpha_{hi}$ and $\alpha_{ho}$ are the same angle, therefore
$(1-F(\alpha_h))^2$
This would lead to a brdf $f$:
$f = F(\alpha_h) * r_s + (1-F(\alpha_h))^2 * r_d$
But both [PBR, p.113] and [DIS, p.14] use
$f = F(\alpha_h) * r_s + (1-F(\theta_i))*(1-F(\theta_o)) * r_d$
as does the original paper to use this kind of calculation, Shirley et al. 1997. I just don't get this: why do they change from the microfacet angles to the macro angles? The microfacet angles lead to energy conservation
$F \in [0, 1]$ $\Rightarrow(1-F) \in [0, 1]$ $\Rightarrow(1-F)^2 \in [0, 1]$ and $(1-F(\alpha_h)) \geq (1-F(\alpha_h))^2$
and it is reciprocal, as a BRDF should be:
$f(\theta_i, \theta_o) = F(\alpha_{hi}) * r_s + (1-F(\alpha_{hi}))*(1-F(\alpha_{ho})) * r_d = F(\alpha_h) * r_s + (1-F(\alpha_h))^2 * r_d = F(\alpha_{ho}) * r_s + (1-F(\alpha_{ho}))*(1-F(\alpha_{hi})) * r_d = f(\theta_o, \theta_i)$
and thus fulfills the BRDF requirements. The microfacet angle is used for the specular term, so it seems the more sensible choice for interpolating between the specular and diffuse components (ignoring the fact of two transmissions for this argument). Instead, [PBR, p. 113] and [DIS, p. 14] put $\theta_h$ into a roughness calculation and leave that rather unexplained.
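To make the difference concrete, here is a small numerical comparison I put together (assuming Schlick's approximation for $F$ with $F_0 = 0.04$; the helper names are mine). It confirms that the Shirley-style factor is symmetric in the two directions, and shows that it disagrees numerically with $(1-F(\alpha_h))^2$ away from normal incidence:

```python
from math import cos, radians

F0 = 0.04  # typical dielectric reflectance at normal incidence (assumed)

def schlick(cos_theta):
    """Schlick's approximation of the Fresnel factor."""
    return F0 + (1.0 - F0) * (1.0 - cos_theta) ** 5

def weight_micro(alpha_h):
    """(1 - F(alpha_h))^2: the half-vector-angle weighting proposed above."""
    return (1.0 - schlick(cos(alpha_h))) ** 2

def weight_macro(theta_i, theta_o):
    """(1 - F(theta_i)) * (1 - F(theta_o)): the Shirley-style weighting."""
    return (1.0 - schlick(cos(theta_i))) * (1.0 - schlick(cos(theta_o)))

# In-plane directions at 10 and 70 degrees from the normal: the half
# vector sits at 40 degrees, so alpha_h = 30 degrees.
ti, to = radians(10.0), radians(70.0)
alpha_h = radians(30.0)

# The macro-angle factor is reciprocal by construction...
assert abs(weight_macro(ti, to) - weight_macro(to, ti)) < 1e-12

# ...and both weightings stay in [0, 1], but they differ numerically:
assert 0.0 <= weight_micro(alpha_h) <= 1.0
assert 0.0 <= weight_macro(ti, to) <= 1.0
print(weight_micro(alpha_h), weight_macro(ti, to))
```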
Adding to my confusion, in the explanation slides [PBR, p. 187] uses the dot product $\omega_h \cdot \omega_o$ (and therefore the $\alpha_{ho}$ angle) and later on [PBR, p. 191] also the dot product $\omega_h \cdot \omega_i$ ($\Rightarrow\alpha_{hi}$).
The weekend before last, I overcame my reluctance to travel and went to a mathematics conference, the American Mathematical Society’s Spring Central Sectional Meeting. I gave a talk in the “Recent Advances in Packing” session, spreading the word about SICs. My talk followed those by Steve Flammia and Marcus Appleby, who spoke about the main family of known SIC solutions while I covered the rest (the sporadic SICs). The co-organizer of that session, Dustin Mixon, has posted an overall summary and the speakers’ slides over at his blog.
I typed the following notes during Hiroki Sayama‘s presentation on “Phase separation and dynamic pattern formation in heterogeneous self-propelled particle systems.” Unfortunately, I couldn’t get a WiFi signal in the room where Sayama gave his talk, so I’m falling short of the gonzo science ideal, posting about the talk after it was given instead of as it occurs.
Sayama is speaking about particle swarm systems, and the phase-separation and dynamic pattern formation behaviors they exhibit. He adds the novel feature of heterogeneity to the particle system. Research on self-propelled particles goes back to Reynolds (1987), Vicsek et al. (1995), Aldana et al. (2003), Chuang et al. (2006), etc. Reynolds was a computer scientist who created a method for simulating bird flocking, which developed into the simulation which created the bats in the otherwise unremarkable Batman Returns. Vicsek and Aldana were physicists.
These systems show collective behaviors such as random clustering, coherent motions and milling. The same system can exhibit all of these behaviors, depending upon the input parameters. Cranking up the noise can induce phase transitions. Almost all of this work focused on homogeneous particle systems, in which all particles share the same kinetic parameters. What, then, would happen if two or more types of self-propelled particles were mixed together?
Sayama works in a framework he calls Swarm Chemistry, which is implemented as a Java applet that can be run online.
The parts between talks are the best parts of conferences. Sure, it’s great to hear Greg Chaitin deliver his sermon about the ideal realm of pure mathematics being an infinite ocean of complexity, out of which we can only seize finite buckets — but Chaitin writes about that kind of thing, and you can read it for free online. It’s an altogether different experience to discuss during the coffee break Mike Stay and Cristian Calude’s paper, “From Heisenberg to Gödel via Chaitin,” with one of the three men in the title.
Question-and-answer sessions after the presentations can also be quite good. Last night, for example, Barbara Jasny of Science Magazine explained how that publication is adapting to the whizbang modern world. It’s reassuring to hear that at least one person in the publishing community has a common-sense understanding of what cheap, open digital access means: journals can only justify charging prices if those prices reflect the actual value which those journals add. More interesting than that, however, was Jasny’s reaction to the question from Frannie Leautier, former Vice-President of the World Bank and currently head of the World Bank Institute. Leautier asked if Science would publish articles which used cartoons as illustrations (instantly endearing herself to all the Larry Gonick and Sid Harris fans in the audience).
The following is my first attempt to liveblog ICCS 2007. I arrived at the Quincy Marriott shortly before 8:30 this morning, having driven south on I-93 from Boston. Unlike the first time I drove out here, I didn’t get lost in Braintree, since I took the left fork at the “Braintree split,” where I-93 undergoes mitosis. These things are important to know.
The morning’s plenary talks began with Diana Dabby (Franklin W. Olin College of Engineering), who spoke about chaotic transformations one can apply to music in order to generate musical variations, as in “Variations on a Theme of Beethoven.” Her scheme begins by breaking the musical performance into a sequence of pitches, denoted [tex]p_i[/tex], and then mapping each [tex]p_i[/tex] to a section of a dynamical trajectory on a chaotic attractor like the Lorenz owl/butterfly mask.
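As a rough illustration of the flavor of such a scheme (my own sketch, not Dabby’s actual algorithm): integrate the Lorenz system and quantize one coordinate of the trajectory into pitch classes, so that restarting the dynamics from a slightly perturbed point yields a variation on the original pitch sequence.

```python
def lorenz_trajectory(start, steps=2000, dt=0.005,
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with simple Euler steps."""
    x, y, z = start
    points = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        points.append((x, y, z))
    return points

def to_pitches(points, n_pitches=12):
    """Quantize the x-coordinate of each point into a pitch class."""
    xs = [p[0] for p in points]
    lo, hi = min(xs), max(xs)
    return [min(int((x - lo) / (hi - lo) * n_pitches), n_pitches - 1)
            for x in xs]

# A "theme" and a "variation" from two nearby initial conditions:
theme = to_pitches(lorenz_trajectory((1.0, 1.0, 1.0)))
variation = to_pitches(lorenz_trajectory((1.0, 1.0, 1.001)))
```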
Well, in the past two days I’ve linked to an Internet quiz and some anime videos, so in order to retain my street cred in the Faculty Lounge, it’s time to post a homework assignment. Don’t worry: if you haven’t met me in person, there’s no way I can grade you on it (unless our quantum states are somehow entangled). This problem set covers everything in our first two seminar sessions on QM, except for the kaon physics which we did as a lead-up to our next topic, Bell’s Inequality. I’ve chosen six problems, arranged in roughly increasing order of difficulty. The first two are on commutator relations, the third involves position- and momentum-space wavefunctions, the fourth brings on the harmonic oscillator (with some statistical mechanics), the fifth tests your knowledge about the Heisenberg picture, and the sixth gets into the time evolution of two-state systems.
“So, Blake,” I sez to myself. “You’ve been selected for multiple editions of the Skeptic’s Circle. You’ve been linked, twice, from Pharyngula. Clearly, you’re rising to astonishing heights of science-blogebrity. What worlds are left to conquer?”
“Well,” I replied. “There’s going out for a milkshake with Rebecca Watson.”
I shook my head. “Not gonna happen — she’s just too picky counting tentacles. Anything else?”
“Well, you could do what Revere warned you not to do.”
“Ah, yes, write a sixteen-part series on mathematical modeling! But the modeling of antiviral resistance isn’t really my field.”
“True, but didn’t you spend your spring break in Amsterdam a few years ago, writing that paper which was the first article Prof. Rajagopal ever graded with an A-double-plus?”
“Hey, yeah, on supersymmetric quantum mechanics and the Dirac Equation!”
“So,” I suggested to me, “why don’t you break that paper down into several blag posts, interleave it with some Bill Hicks videos so not all your readers wander away, and have yourself a continuing physics series?”
“Could work, I suppose. But that paper was written for third-term quantum mechanics students, so I’d probably have to build up to it, even just a little.”
“Bah,” I said. “At least you’ll have a purpose in life. And you can
start by expounding on the canonical commutation relation for position and momentum. That’ll be your warm-up, after which you can do angular momentum and central potentials —”
“Which I
do have written up somewhere,” I interposed, “since I discovered I could type LaTeX as fast as my professors could lecture.”
“Weirdo,” I said.
Continue reading Friday Quantum Mechanics
One reason I call this site a “blag” and not a “blog” is that I’m always late.
For example, I’m finally typing up the group-theory homework assignment which Ben gave last Monday (and which will be due next Monday). During our seminar over in BU territory, we discussed the relations among the Lie groups
SU(2) and SO(3) and the manifolds [tex]S^3[/tex] and [tex]\mathbb{RP}^3[/tex]. Problems will be given below the fold.
Also, Eric will be discussing statistical physics this afternoon at NECSI.
Continue reading Group Theory Homework
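Since we’re on the subject of SU(2), SO(3) and the 3-sphere, here’s a quick numerical illustration — a Python/NumPy sketch of my own, not part of Ben’s assignment — of the 2-to-1 covering: unit quaternions are points of [tex]S^3[/tex], and antipodal quaternions give the same rotation matrix.

```python
import numpy as np

def quat_to_rot(q):
    # Map a unit quaternion (w, x, y, z) -- a point of S^3, equivalently an
    # element of SU(2) -- to the corresponding SO(3) rotation matrix.
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

q = np.array([0.5, 0.5, 0.5, 0.5])        # a unit quaternion
R1, R2 = quat_to_rot(q), quat_to_rot(-q)  # antipodal points of S^3

print(np.allclose(R1, R2))                # True: the covering map is 2-to-1
print(np.allclose(R1 @ R1.T, np.eye(3)))  # True: R1 is orthogonal
```

Every entry of the matrix is quadratic in the quaternion components, which is exactly why the sign of [tex]q[/tex] drops out.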
We’re reviewing the IEEE papers.
After reviewing the first batch of papers, we came up with some questions to answer in the future, in order of difficulty/open-ness:
1. Given the robustness of a code (e.g. due to a many-to-one codon->AA mapping), can we calculate bounds on the channel capacity of such a code? How does the empirical
R (info transmission rate) of the codon->AA code compare with the empirical C (channel capacity, e.g. from mutation rates)?
2. How does the length/entropy/complexity of code-bits (e.g. parity bits, non-message bits used for error correcting) relate to the complexity of the error-correcting task, and e.g. the entropy and length of the data-bit sections (e.g. the actual message you’re sending) to satisfy
R ≤ C?
– Is the von Neumann entropy,
[tex]H = -{\rm Tr\,} \rho\log\rho = -\sum_i\lambda_i\log\lambda_i [/tex] where {λ} are the eigenvalues of the matrix, useful for discussing network robustness? (There’s a paper where they use Kolmogorov-Sinai/Shannon entropy to do this, which Blake has somewhere…) If so, then can we apply this to a genetic-regulatory network, and tie in the error-correcting or homeostatic abilities of such a network with VNE or other network metrics?
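For what it’s worth, here’s a minimal Python sketch of that entropy computation — the 2×2 density matrix is made up for illustration, not taken from any of the papers above:

```python
import numpy as np

# Hypothetical 2x2 density matrix: symmetric, positive semidefinite, unit trace.
rho = np.array([[0.7, 0.2],
                [0.2, 0.3]])

lam = np.linalg.eigvalsh(rho)      # eigenvalues of rho
lam = lam[lam > 1e-12]             # convention: 0 log 0 = 0
H = -np.sum(lam * np.log(lam))     # von Neumann entropy, in nats
print(round(H, 3))                 # about 0.523 for this rho
```

For a network application one would feed in a suitably normalized matrix built from the graph (e.g. a rescaled Laplacian) in place of `rho`.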
Next week:
We meet Monday at BU for group theory. Ben will be discussing
SU(2) and SO(3) from Artin. Friday, Blake will present on the symmetry-breaking-in-genetics paper, and possibly on the information-theoretic considerations for BLAST. BLAKE SEZ: I believe the paper which caught my eye was L. Demetrius and T. Manke (15 February 2005) “Robustness and network evolution — an entropic principle” Physica A 346 (3–4): 682–96.
Friday 4/6/07 we reviewed Rao et al. (2004)
IEEE Trans Info Theor, V 50 (6) “Cumulative Residual Information […]”, here.
We decided that while the motivation for the paper was valid, the proposed measure was undesirable for a number of reasons — mostly that the CRE of many well-behaved distributions (notably power laws) diverges. We’re all currently working on better generalizations.
Yesterday evening, we had our first seminar session on the group theory track, led by Ben Allen. We covered the definition of groups, semigroups and monoids, and we developed several examples by transforming a pentagon. After a brief interlude on discrete topology and — no snickers, please — pointless topology, Ben introduced the concept of
generators and posed several homework questions intended to lead us into the study of Lie groups and Lie algebras.
Notes are available in PDF format, or as a gzipped tarball for those who wish to play with the original LaTeX source. Likewise, the current notes for the entropy and information-theory seminar track (the Friday sessions) are available in both PDF and tarball flavors.
Our next session will be Friday afternoon at NECSI, where we will continue discussing Claude Shannon’s classic paper,
A/The Mathematical Theory of Communication (1948). The following Monday, Eric will treat us to the grand canonical ensemble. |
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
If we are given
$u=\cot^{-1}\sqrt{\cos 2m} - \arctan \sqrt{\cos2m}$
Then we have to prove $\sin u = \tan^2m$
I tried and got $\tan u = \frac{1-\cos 2m}{2\sqrt{\cos 2m}}$
but got stuck after that.
$$u =\cot^{-1} \sqrt{\cos 2m} -\arctan\sqrt{\cos 2m}$$ $$\Rightarrow u = \frac{\pi}{2}-2\arctan\sqrt{\cos 2m}$$ $$\Rightarrow \tan(\frac{\frac{\pi}{2}-u}{2}) = \sqrt{\cos 2m}$$ $$\Rightarrow \tan^{2} (\frac{\frac{\pi}{2}-u}{2}) = \cos 2m =\frac{1-\tan^{2}m}{1+\tan^{2}m}$$ Using componendo and dividendo, we can easily get $$\cos(\frac{\pi}{2}-u) = \tan^{2}m \Rightarrow \sin u=\tan^{2}m $$ Hope it helps.
As $\sqrt{\cos2m}\ge0$
$u=$arccot$\sqrt{\cos2m}-\arctan\sqrt{\cos2m}=\arctan\dfrac1{\sqrt{\cos2m}}-\arctan\sqrt{\cos2m}$ $=\arctan\dfrac{1-\cos2m}{2\sqrt{\cos2m}}$
$\tan u=\dfrac{1-\cos2m}{2\sqrt{\cos2m}}$
As $\dfrac{1-\cos2m}{2\sqrt{\cos2m}}\ge0,0\le u\le\dfrac\pi2;$
$\cos u=+\dfrac1{\sqrt{1+\tan^2u}}=?$
$\sin u=\tan u\cdot\cos u=?$ |
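As a quick numerical sanity check of the identity (a Python sketch of my own; `arccot` is expressed via `atan2` since the `math` module has no arccotangent):

```python
import math

m = 0.3                                  # any m with cos(2m) > 0
t = math.sqrt(math.cos(2 * m))
u = math.atan2(1.0, t) - math.atan(t)    # arccot(t) = atan2(1, t) for t > 0

print(abs(math.sin(u) - math.tan(m) ** 2) < 1e-12)  # True
```

This matches the algebra above: $u = \pi/2 - 2\arctan t$, so $\sin u = \cos(2\arctan t) = \frac{1-t^2}{1+t^2} = \frac{1-\cos 2m}{1+\cos 2m} = \tan^2 m$.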
Imhotep-SMT: A novel SMT-based solver for secure state estimation

Tutorial: Securing an Unmanned Ground Vehicle
In this tutorial, we are going to demonstrate how to use Imhotep-SMT to secure the control of an unmanned ground vehicle (UGV) against attacks to its sensors. The UGV has two state variables: the position $x(t)$ and the velocity $v(t)$. A model for the time evolution of these variables, as derived using energy conservation laws, can be expressed as follows: \begin{equation*} \begin{bmatrix} \dot{x} \\ \dot{v} \end{bmatrix} = \begin{bmatrix}0 & 1\\0 & \frac{-B}{M} \end{bmatrix} \begin{bmatrix} x(t) \\ v(t) \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{1}{M}\end{bmatrix} F(t), \end{equation*} where $M$ and $B$ denote, respectively, the mechanical mass and the translational friction coefficient, while $F(t)$ is the input force to the UGV at time $t$. The UGV is equipped with a GPS sensor, which measures its position, and two motor encoders, which measure the translational velocity. The resulting output equation is: \begin{equation*} \begin{bmatrix} y_{\text{GPS}}(t) \\ y_{\text{right encoder}}(t) \\ y_{\text{left encoder}}(t) \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x(t) \\ v(t) \end{bmatrix} + \begin{bmatrix}\psi_{\text{GPS}}(t) \\ \psi_{\text{right encoder}}(t) \\ \psi_{\text{left encoder}}(t)\end{bmatrix}, \end{equation*} where $\psi$ is the measurement noise at each sensor. These sensor measurements are used to estimate the states of the UGV. However, we assume that an adversarial attacker is manipulating one of the UGV sensors in order to deceive the state estimator. In the remainder of this tutorial, we will show how Imhotep-SMT can be used to detect the existence of such an attacker, and estimate the actual state of the UGV.
The Matlab files for this tutorial can be downloaded from here.
Step 0: Download Imhotep-SMT
Download the latest version of Imhotep-SMT from github as a zip file. Extract the contents of the zip file to "folder_path/ImhotepSMT". Open Matlab, add "folder_path/ImhotepSMT/" to your Matlab path (e.g. by selecting "File->Set path->Add folder" from the main Menu in Matlab).

Step 1: Initialize the Solver
From the Matlab terminal, create a new instance of the Imhotep-SMT solver as follows:
smt = ImhotepSMT();

Step 2: Offline Configuration of the Solver
Imhotep-SMT must be configured offline by specifying:
A discrete-time dynamical model of the system in the state space, i.e. how the state evolves over time as well as how the sensor measurements are related to the system state.
The maximum number of sensors that can be under attack for the estimation to be theoretically feasible.
A set of sensors which is known to be attack-free a priori (this is an optional parameter).
The upper bound on the magnitude (norm) of the measurement noise.

1- Discrete-time dynamic model.
Since our solver only accepts discrete-time models, we start by discretizing the UGV model.
For $M = 0.8$ and $B = 1$ and a sampling time $T_s = 10\ ms$, we obtain:
\begin{equation*}
\begin{bmatrix} x(t+1) \\ v(t+1) \end{bmatrix}
= \begin{bmatrix}1.0000 & 0.0099\\0 & 0.9876 \end{bmatrix}
\begin{bmatrix} x(t) \\ v(t) \end{bmatrix} +
\begin{bmatrix} 0.0001 \\ 0.0124\end{bmatrix} F(t),
\end{equation*}
which can be specified in Matlab as:
A = [1 0.0099; 0 0.9876]; B = [0.0001; 0.0124]; C = [1 0; 0 1; 0 1]; ugv = ss(A,B,C,0,0.01);

2- Maximum number of sensors that can be under attack.
As discussed above, the attacker can attack only one sensor.
This can be specified as follows:
max_sensors_under_attack = int8(1);

3- Set of safe sensors.
To show how Imhotep-SMT analyzes the system specification, we initially
assume that the attacker can attack any of the three sensors. We will then
show how Imhotep-SMT can actually identify which sensors must be secured for
the UGV to operate safely.
safe_sensors = [];

4- Upper bound on measurement noise.
In general, each sensor measurement suffers from noise. The bound on this
noise is an important parameter which affects the capability of Imhotep-SMT to distinguish between
noise and attack. In this example, we assume that all sensors are ideal and are not affected by
noise, as follows:
noise_bound = [0; 0; 0];
We can now initialize the SMT solver using the init() method as:
smt.init(ugv, max_sensors_under_attack, safe_sensors, noise_bound);
Imhotep-SMT will check the
security index for the system, and make sure
it matches the specified maximum number of sensors under attack.
In this case, the following report is generated:
The maximum number of attacks specified by the user is: 1
Theoretical upper bound on maximum number of attacks is: 0
ERROR: System structure does not match the maximum number of attacked sensors specified by the user
Sensors that can improve system security are: Sensor#1
The generated report warns the user that the specified system cannot tolerate any attacks
on its sensors. The structure of the system is such that a
corruption to the sensor measurements can preclude
any
security algorithm from detecting the attack. In other words, if all the sensors are allowed to be
attacked, it is theoretically not possible to detect the attack.
To avoid this situation, Imhotep-SMT suggests that the
security level of the system can be potentially increased if the GPS signal (sensor 1)
is not attacked.
Therefore, we re-define the set of safe sensors as:
safe_sensors = [1];
and initialize the solver again as follows:
smt.init(ugv, max_sensors_under_attack, safe_sensors, noise_bound);

The maximum number of attacks specified by the user is: 1
Theoretical upper bound on maximum number of attacks is: 2
Disclaimer: Correctness of the solver outputs is guaranteed if and only if the system passes the s-sparse observability test. To run this combinatorial test, call the checkObservabilityCondition() method.
The final step to complete the offline configuration phase consists in checking the sparse observability condition. This is the necessary and sufficient condition to guarantee the correctness of the results. Depending on the structure of the system, the test may take some time. This step can be skipped if the user knows that the sparse observability condition holds for the specified system.
smt.checkObservabilityCondition();

INFO: The sparse observability test ensures that the system is observable after removing every combination of 2 sensors. This requires checking the observability of 1 combinations ... This may take some time
Iteration number 1 out of 1 combinations ... PASS!
Sparse observability condition passed! Solver is ready to run!

Step 3: Online State Estimation
To detect the attack, we need to provide the solver with the input and output signals
of the system to be controlled, as shown in the figure below.
Since the solver is implemented within Matlab, we need to use the capabilities embedded in the "Real-time toolbox" of Matlab to deploy it on a real application while guaranteeing real-time operation. However, for the purpose of this tutorial, we simply use Matlab to simulate the dynamical system as follows.

x = randn(2,1); % unknown initial condition
attacked_sensor_index = 2; % the attacker chooses to attack the second sensor
for t = 1 : 1000 % simulate the system for 1000 time steps
    y = C*x; % the system measurements
    % the attacker corrupts the second sensor with random data
    y(attacked_sensor_index) = y(attacked_sensor_index) + 50*randn(1);
    F = 1; % the force supplied to the UGV
    x = A*x + B*F; % simulate the system
    [xhat, sensor_under_attack] = smt.addInputsOutputs(F, y);
    error = norm(xhat - x)
    sensor_under_attack
end
Except for the first iteration, the code above results in the following output at each iteration of the simulation:
error = 8.8818e-16
sensor_under_attack = 2
In general, for the first $n-1$ iterations (where $n$ is the number of state variables of the system), the output of the solver may be incorrect since the information stored is not enough to correctly compute the state and determine which sensors are under attack. |
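As a cross-check on the discretization in Step 2, the discrete-time matrices can be reproduced outside Matlab with a standard zero-order-hold computation. The following Python/SciPy sketch (not part of Imhotep-SMT) assumes a sampling time of 10 ms, which is the value that actually reproduces the matrices printed above:

```python
import numpy as np
from scipy.linalg import expm

M, Bf, Ts = 0.8, 1.0, 0.01                     # mass, friction, sampling time (10 ms assumed)
Ac = np.array([[0.0, 1.0], [0.0, -Bf / M]])    # continuous-time dynamics matrix
Bc = np.array([[0.0], [1.0 / M]])              # continuous-time input matrix

# Zero-order hold: expm([[Ac, Bc], [0, 0]] * Ts) = [[Ad, Bd], [0, I]]
n, m = 2, 1
aug = np.zeros((n + m, n + m))
aug[:n, :n], aug[:n, n:] = Ac, Bc
Phi = expm(aug * Ts)
Ad, Bd = Phi[:n, :n], Phi[:n, n:]

print(np.round(Ad, 4))   # matches [1 0.0099; 0 0.9876] from the tutorial
print(np.round(Bd, 4))   # matches [0.0001; 0.0124]
```

The same result can be obtained with Matlab's `c2d` on the continuous-time model; the key point is that the sample time passed to `ss`/`c2d` must agree with the one used to derive the matrices.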
Wave Optics: Polarisation

Polarisation is the property of restricting the vibrations of a wave to a particular plane.
Polarisation establishes the transverse nature of light waves.
If the electric vector vibrates in all directions in a plane perpendicular to the direction of propagation, the light is called unpolarised.
If the electric field vector is confined to a plane passing through the direction of propagation, the light is called plane polarised.
A polariser is the crystal used to polarise unpolarised light; an analyser is the crystal used to analyse polarised light.
Malus' law: $I = I_0\cos^2\theta$; for amplitudes, $a = a_0\cos\theta$.
When three polaroids are placed with consecutive angles $\theta_1$ and $\theta_2$ between their axes, the transmitted intensity is $I' = \frac{I}{2}\cos^{2}\theta_{1}\cos^{2}\theta_{2}$.
The polarising angle (Brewster's angle) is the angle of incidence at which the reflected light is polarised.
Brewster's law states that the tangent of the polarising angle equals the refractive index: $\mu = \tan\theta_p$; hence $\sin\theta_p = \frac{\mu}{\sqrt{\mu^{2}+1}}$ and $\cos\theta_p = \frac{1}{\sqrt{\mu^{2}+1}}$.
The ordinary ray in double refraction is the ray which obeys the laws of refraction; the extraordinary ray does not. For the ordinary ray, $\mu_{0} = \frac{\sin i}{\sin r}$.
The optic axis is the axis along which the ordinary and extraordinary rays have the same speed.
Dichroism is the property of unequal absorption of the ordinary and extraordinary rays by some crystals. Man-made polarising materials are called polaroids.
Polarisation is used to study the helical structure of nucleic acids and in polaroid sun goggles. The optical activity of a substance is measured with a polarimeter.
A medium in which the speed of a light wave is the same in all directions is called isotropic; otherwise it is anisotropic.
A diffraction pattern is due to interference of light from secondary wavelets of the same wavefront. Sound waves cannot be polarised.
Intensity of Fraunhofer diffraction at a single slit: $I = I_{0}\left(\frac{\sin\alpha}{\alpha}\right)^{2}$, where $\alpha = \frac{\pi a}{\lambda}\sin\theta$.

Young's double slit experiment
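As a quick worked example of the three-polaroid formula above (a Python sketch; the angles are chosen arbitrarily for illustration):

```python
import math

I0 = 1.0                                   # incident unpolarised intensity (arbitrary units)
theta1, theta2 = math.radians(30), math.radians(45)

# The first polaroid halves unpolarised light; each subsequent polaroid
# applies Malus' law with the angle between consecutive transmission axes.
I_out = 0.5 * I0 * math.cos(theta1) ** 2 * math.cos(theta2) ** 2
print(round(I_out, 4))   # 0.1875
```

With $\theta_1 = 30°$ and $\theta_2 = 45°$ the factors are $\frac{1}{2}\cdot\frac{3}{4}\cdot\frac{1}{2} = \frac{3}{16}$ of the incident intensity.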
View the Topic in this video From 00:14 To 3:54
Image Dimensions

Describing the fields of the Canvas Properties Dialog
The user accesses the image dimensions in the Canvas Properties Dialog.
The Other tab
Here some properties can simply be locked (such that they can't be changed) and linked (so that changes in one entry simultaneously change other entries as well).
The Image tab
Obviously here the image dimensions can be set. There seem to be basically three groups of fields to edit:

The on-screen size(?): The fields Width and Height tell synfigstudio how many pixels the image shall cover at a zoom level of 100%.
The physical size: The physical width and height should tell how big the image is on some physical media. That could be when printing out images on paper, or maybe even on transparencies or film. Not all file formats can save this on exporting/rendering images.
The mysterious Image Area: Given as two points (upper-left and lower-right corner) which also define the image span (Pythagoras: $\text{span}=\sqrt{\Delta x^2 + \Delta y^2}$). The unit seems to be not pixels but units, which are at 60 pixels each. If the ratio of the image size and image area dimensions are off, for example circles will appear as ellipses (see image). These settings seem to influence how large one Image Size pixel is being rendered. This might be useful when one has to deal with non-square output pixels.

Effects of the Image Area
Somehow the image area setting seems to be saved when copy&pasting between images, see also bug #2116947.
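For concreteness, the span formula mentioned above works out as follows (a Python sketch; the corner coordinates are made up, and the 60 px/unit figure is the one quoted in the text):

```python
import math

PX_PER_UNIT = 60            # one unit renders as 60 pixels, per the text above

# Hypothetical image-area corners, in units
x1, y1 = -4.0, 2.25         # upper-left
x2, y2 = 4.0, -2.25         # lower-right

span_units = math.hypot(x2 - x1, y2 - y1)   # sqrt(dx^2 + dy^2)
print(round(span_units, 4))                 # span in units
print(round(span_units * PX_PER_UNIT))      # same span in pixels
```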
Possible intended effects of out-of-ratio image areas
As mentioned above, different ratios might be needed when the output needs to be specified in pixels, but those pixels are not squares. That might happen for several kinds of media, such as videos encoded in some PAL formats or for DVDs. For further reading, look at Wikipedia.
Still, it is probably consensus that the image, as shown on screen while editing, should look as closely as possible like when viewed by the final audience. So, while specifying a different output resolution at rendering time may well be wanted, synfigstudio should (for the majority of monitors) show square pixels, i.e. circles should stay circles.

Feature wishlist to simplify working across documents

See also
Explanation by dooglus on the synfig-dev mailing list. |
Sorry for the long post, but I prefer to do it that way because "the devil is in the details." :)
I am writing a path tracer from scratch and it is working nicely for perfectly diffuse (Lambertian) surfaces (
i.e. the furnace test indicates - at least visually - that it is energy conserving, and rendered images match those generated with Mitsuba renderer for the same parameters). Now I am implementing the support for the specular term of the original Cook-Torrance microfacet model, in order to render some metallic surfaces. However, it seems that this BRDF is reflecting more energy than that received. See the example images below:
Above image:
Mitsuba reference (assumed to be correct) image: Path tracing with direct light sampling, importance hemisphere sampling, max path length = 5, 32 stratified spp, box filter, surface roughness = 0.2, RGB.
Above image:
Actual rendered image: Brute-force naïve path tracing, uniform hemisphere sampling, max path length = 5, 4096 stratified spp, box filter, surface roughness = 0.2, RGB. Despite some differences with respect to the rendering settings, it is clear that the rendered image will not converge to the reference shown before.
I tend to think that it is not an implementation problem, but an issue regarding the proper use of the Cook-Torrance model within the rendering equation framework. Below I explain how I am evaluating the specular BRDF and I would like to know if I am doing it properly and, if not, why.
Before going into the nitty-gritty details, notice that the renderer is quite simple: 1) implements only the brute force naïve path tracing algorithm - no direct light sampling, no bi-directional path tracing, no MLT; 2) all sampling is uniform on the hemisphere above the intersection point - no importance sampling at all, neither for diffuse surfaces; 3) the ray path has a fixed maximum length of 5 - no russian roulette; 4) radiance/reflectance is informed through RGB tuples - no spectral rendering.
Cook Torrance microfacet model
Now I will try to construct the path I've followed to implement the specular BRDF evaluation expression. Everything starts with the rendering equation $$ L_o(\textbf{p}, \mathbf{w_o}) = L_e + \int_{\Omega} L_i(\textbf{p}, \mathbf{w_i}) fr(\mathbf{w_o}, \mathbf{w_i}) \cos \theta d\omega $$ where $\textbf{p}$ is the intersection point at the surface, $\mathbf{w_o}$ is the viewing vector, $\mathbf{w_i}$ is the light vector, $L_o$ is the outgoing radiance along $\mathbf{w_o}$, $L_i$ is the radiance incident upon $\textbf{p}$ along $\mathbf{w_i}$ and $\cos \theta = \mathbf{n} \cdot \mathbf{w_i}$.
The above integral (
i.e. the reflection term of the rendering equation) can be approximated with the following Monte Carlo estimator$$\frac{1}{N} \sum_{k=1}^{N} \frac{ L_i(\textbf{p}, \mathbf{w_k}) fr(\mathbf{w_k}, w_o) \cos \theta }{p(\mathbf{w_k})}$$where $p$ is the probability density function (PDF) that describes the distribution of the sampling vectors $\mathbf{w_k}$.
For actual rendering, the BRDF and PDF must be specified. In the case of the specular term of the Cook-Torrance model, I am using the following BRDF $$ fr(\mathbf{w_i}, \mathbf{w_o}) = \frac{DFG}{\pi (\mathbf{n} \cdot \mathbf{w_i})(\mathbf{n} \cdot \mathbf{w_o})} $$ where $$ D = \frac{1}{m^2 (\mathbf{n} \cdot \mathbf{h})^4} \exp \left( {\frac{(\mathbf{n} \cdot \mathbf{h})^2 - 1}{m^2 (\mathbf{n} \cdot \mathbf{h})^2}} \right) $$ $$ F = c_{spec} + (1 - c_{spec}) (1 - \mathbf{w_i} \cdot \mathbf{h})^5 $$ $$ G = \min \left( 1, \frac{2(\mathbf{n} \cdot \mathbf{h})(\mathbf{n} \cdot \mathbf{w_o})}{\mathbf{w_o} \cdot \mathbf{h}}, \frac{2(\mathbf{n} \cdot \mathbf{h})(\mathbf{n} \cdot \mathbf{w_i})}{\mathbf{w_o} \cdot \mathbf{h}} \right) $$ In the above equations, $\mathbf{h} = \frac{\mathbf{w_o} + \mathbf{w_i}}{|\mathbf{w_o} + \mathbf{w_i}|}$ and $c_{spec}$ is the specular color. All equations, with the exception of $F$, were extracted from the original paper. $F$, also known as the Schlick's approximation, is an efficient and less accurate approximation to the actual Fresnel term.
It would be mandatory to use importance sampling in the case of rendering smooth specular surfaces. However, I am modeling only reasonably rough surfaces ($m \approx 0.2$), thus, I've decided to keep with uniform sampling for a while (at the cost of longer rendering times). In this case, the PDF is $$ p(\mathbf{w_k}) = \frac{1}{2 \pi} $$ By substituting the uniform PDF and Cook-Torrance BRDF into the Monte Carlo estimator (notice that $\mathbf{w_i}$ is substituted by $\mathbf{w_k}$, the random variable), I get $$ \frac{1}{N} \sum_{k=1}^{N} \frac{ L_i(\textbf{p}, \mathbf{w_k})\left( \frac{DFG}{\pi (\mathbf{n} \cdot \mathbf{w_k})(\mathbf{n} \cdot \mathbf{w_o})} \right) \cos \theta }{\left( \frac{1}{2\pi} \right)} $$ Now we can cancel the $\pi$'s and remove the summation because we shoot only one random ray from the intersection point. We end up with $$ 2 L_i(\textbf{p}, \mathbf{w_k})\left( \frac{DFG}{(\mathbf{n} \cdot \mathbf{w_k})(\mathbf{n} \cdot \mathbf{w_o})} \right) \cos \theta $$ Since $\cos \theta = \mathbf{n} \cdot \mathbf{w_k}$, we can further simplify it $$ 2 L_i(\textbf{p}, \mathbf{w_k})\left( \frac{DFG}{\mathbf{n} \cdot \mathbf{w_o}} \right) $$
So, that's the expression I am evaluating when a ray hits an specular surface whose reflectance is described by the Cook-Torrance BRDF. That is the expression that seems to be reflecting more energy than that received. I am almost sure that there is something wrong with it (or in the derivation process), but I just can't spot it.
Interestingly enough, if I multiply the above expression by $\frac{1}{\pi}$, I get results that look correct. However, I've refused to do that because I can't mathematicaly justify it.
Any help is very welcome! Thank you!
UPDATE
As @wolle pointed out below, this paper presents a new formulation better suited for path tracing, where the normal distribution function (NDF) $D$ includes the $\frac{1}{\pi}$ factor and the BRDF $fr$ includes the $\frac{1}{4}$ factor. Thus $$ D_{new} = \frac{1}{\pi m^2 (\mathbf{n} \cdot \mathbf{h})^4} \exp \left( {\frac{(\mathbf{n} \cdot \mathbf{h})^2 - 1}{m^2 (\mathbf{n} \cdot \mathbf{h})^2}} \right) $$ and $$ fr_{new}(\mathbf{w_i}, \mathbf{w_o}) = \frac{DFG}{4 (\mathbf{n} \cdot \mathbf{w_i})(\mathbf{n} \cdot \mathbf{w_o})} $$ Afer the inclusion of the above equations into the rendering equation, I ended up with $$ \frac{\pi}{2} L_i(\textbf{p}, \mathbf{w_k})\left( \frac{D_{new}FG}{\mathbf{n} \cdot \mathbf{w_o}} \right) $$ which worked nicely! PS: The issue now is to better understand how the new formulation for $D$ and $fr$ help in maintaining the energy conservation... but this is another topic.
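One way to probe that last question numerically is a quick Monte Carlo check of the corrected formulation — a standalone Python sketch (not renderer code) that estimates two integrals with uniform hemisphere sampling: the NDF normalization $\int D_{new}(\mathbf{h})(\mathbf{n}\cdot\mathbf{h})\,d\omega$, which should be 1, and the directional albedo $\int fr_{new}\cos\theta\,d\omega$ at normal incidence with $F=1$, which should not exceed 1 for an energy-conserving BRDF:

```python
import math, random

def dot(a, b): return sum(x * y for x, y in zip(a, b))

def normalize(v):
    L = math.sqrt(dot(v, v))
    return [x / L for x in v]

def D_new(nh, m):
    # Beckmann NDF with the 1/pi factor, normalized so that the
    # integral of D(h) (n.h) over the hemisphere equals 1.
    nh2 = nh * nh
    return math.exp((nh2 - 1.0) / (m * m * nh2)) / (math.pi * m * m * nh2 * nh2)

def fr_new(wi, wo, n, m=0.2, f0=1.0):
    # Specular Cook-Torrance with the 1/4 normalization from the update.
    h = normalize([a + b for a, b in zip(wi, wo)])
    nh, ni, no, oh = dot(n, h), dot(n, wi), dot(n, wo), dot(wo, h)
    F = f0 + (1.0 - f0) * (1.0 - dot(wi, h)) ** 5          # Schlick
    G = min(1.0, 2 * nh * no / oh, 2 * nh * ni / oh)       # Torrance-Sparrow
    return D_new(nh, m) * F * G / (4.0 * ni * no)

random.seed(7)
n = [0.0, 0.0, 1.0]
wo = [0.0, 0.0, 1.0]                        # normal incidence, for simplicity
N, ndf_acc, alb_acc = 200_000, 0.0, 0.0
for _ in range(N):
    z = max(random.random(), 1e-9)          # cos(theta); uniform z <=> uniform hemisphere
    phi = 2.0 * math.pi * random.random()
    s = math.sqrt(1.0 - z * z)
    wi = [s * math.cos(phi), s * math.sin(phi), z]
    ndf_acc += 2.0 * math.pi * D_new(z, 0.2) * z      # NDF normalization estimate
    alb_acc += 2.0 * math.pi * fr_new(wi, wo, n) * z  # directional albedo estimate
print(round(ndf_acc / N, 2))   # close to 1.0: D_new is correctly normalized
print(round(alb_acc / N, 2))   # at most ~1: no energy gain with fr_new
```

Running the same estimator with the original $\frac{1}{\pi(\mathbf{n}\cdot\mathbf{w_i})(\mathbf{n}\cdot\mathbf{w_o})}$ normalization (and the un-normalized $D$) makes the albedo exceed 1, which is consistent with the brightening seen in the rendered image.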
UPDATE 2
As pointed out by PeteUK, the authorship of the Fresnel formulation presented in the original text of my question was wrongly attributed to Cook and Torrance. The Fresnel formulation used above is actually known as the Schlick's approximation and is named after Christophe Schlick. The original text of the question was modified accordingly. |
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with a large ion collider experiment at the large hadron collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ... |
During Sage Days 31, I worked on the ticket #11459 to implement
rst to
sws conversion. This post intends to give some documentation about this new feature. Here is an example of a reStructuredText file
calculus.rst:
********
Calculus
********

Let's do some calculus using Sage.

Differentiation
===============

The derivative of $\\sin(x)$ is::

    sage: diff(sin(x), x)
    cos(x)

The derivative of $\\log(x) + x^2$ is::

    sage: diff(log(x) + x^2, x)
    2*x + 1/x

Integration
===========

Sage can integrate $\\int x^3 dx$::

    sage: f = x^3
    sage: f.integral(x)
    1/4*x^4

Let's compute $\\int x \\sin(x^2) dx$::

    sage: f = x*sin(x^2)
    sage: integral(f,x)
    -1/2*cos(x^2)
Both the files calculus.rst and calculus.html, the latter generated from the file calculus.rst using Docutils and the command rst2html.py calculus.rst calculus.html, can be uploaded into the Sage Notebook (copy their URL):
This will create a new worksheet:
I also implemented two command line scripts to generate the worksheet text file:
sage -rst2txt file.rst file.txt
and to generate the Sage worksheet (.sws) directly:
sage -rst2sws file.rst file.sws
I also added the possibility to automatically escape every backslash that is not already escaped, so that they don't get lost in the translation process. For more info, consult the documentation:
sage -rst2txt -h
sage -rst2sws -h
ISSN:
1935-9179
eISSN:
1935-9179
Electronic Research Announcements
2011, Volume 18
Abstract:
We present a novel class of functions that can describe the stable and unstable manifolds of the Hénon map. We propose an algorithm to construct these functions by using the Borel-Laplace transform. Neither linearization nor perturbation is applied in the construction, and the obtained functions are exact solutions of the Hénon map. We also show that it is possible to depict the chaotic attractor of the map by using one of these functions without explicitly using the properties of the attractor.
Abstract:
We describe the structure of the automorphism groups of algebras Morita equivalent to the first Weyl algebra $ A_1(k) $. In particular, we give a geometric presentation for these groups in terms of amalgamated products, using the Bass-Serre theory of groups acting on graphs. A key rôle in our approach is played by a transitive action of the automorphism group of the free algebra $ k\langle x, y\rangle $ on the Calogero-Moser varieties $ \mathcal{C}_n $ defined in [5]. In the end, we propose a natural extension of the Dixmier Conjecture for $ A_1(k) $ to the class of Morita equivalent algebras.
Abstract:
Based on the classic multiplicative ergodic theorem and the semi-uniform subadditive ergodic theorem, we show that there always exists at least one ergodic Borel probability measure such that the joint spectral radius of a finite set of square matrices of the same size can be realized almost everywhere with respect to this Borel probability measure. The existence of at least one ergodic Borel probability measure, in the context of the joint spectral radius problem, is obtained in a general setting.
Abstract:
An element of a free Leibniz algebra is called Jordan if it belongs to a free Leibniz-Jordan subalgebra. Elements of the Jordan commutant of a free Leibniz algebra are called weak Jordan. We prove that an element of a free Leibniz algebra over a field of characteristic 0 is weak Jordan if and only if it is left-central. We show that a free Leibniz algebra is an extension of a free Lie algebra by its left-center. We find the dimensions of the homogeneous components of the Jordan commutant and the base of its multilinear part. We find a criterion for an element of a free Leibniz algebra to be Jordan.
Abstract:
In this note we announce the computation of the triangular spectrum (as defined by P. Balmer) of two classes of tensor triangulated categories which are quite common in algebraic geometry. One of them is the derived category of $G$-equivariant sheaves on a smooth quasi projective scheme $X$ for a finite group $G$ which acts on $X$. The other class is the derived category of split super-schemes.
Abstract:
Let $T:C^1(\mathbb{R})\to C(\mathbb{R})$ be an operator satisfying the derivation equation
$T(f\cdot g)=(Tf)\cdot g + f \cdot (Tg),$
where $f,g\in C^1(\mathbb{R})$, and some weak additional assumption. Then $T$ must be of the form
$(Tf)(x) = c(x) \, f'(x) + d(x) \, f(x) \, \ln |f(x)|$
for $f \in C^1(\mathbb{R}), x \in \mathbb{R}$, where $c, d \in C(\mathbb{R})$ are suitable continuous functions, with the convention $0 \ln 0 = 0$. If the domain of $T$ is assumed to be $C(\mathbb{R})$, then $c=0$ and $T$ is essentially given by the entropy function $f \ln |f|$. We can also determine the solutions of the generalized derivation equation
$T(f\cdot g)=(Tf)\cdot (A_1g) + (A_2f) \cdot (Tg),$
where $f,g\in C^1(\mathbb{R})$, for operators $T:C^1(\mathbb{R})\to C(\mathbb{R})$ and $A_1, A_2:C(\mathbb{R})\to C(\mathbb{R})$ fulfilling some weak additional properties.
Abstract:
Zapolsky's inequality gives a lower bound for the $L_1$ norm of the Poisson bracket of a pair of $C^1$ functions on the two-dimensional sphere by means of quasi-states. Here we show that this lower bound is sharp.
Abstract:
This is an announcement of the proof of the inverse conjecture for the Gowers $U^{s+1}[N]$-norm for all $s \geq 3$; this is new for $s \geq 4$, the cases $s = 1,2,3$ having been previously established. More precisely, we outline a proof that if $f : [N] \rightarrow [-1,1]$ is a function with $\|f\|_{U^{s+1}[N]} \geq \delta$ then there is a bounded-complexity $s$-step nilsequence $F(g(n)\Gamma)$ which correlates with $f$, where the bounds on the complexity and correlation depend only on $s$ and $\delta$. From previous results, this conjecture implies the Hardy-Littlewood prime tuples conjecture for any linear system of finite complexity. In particular, one obtains an asymptotic formula for the number of $k$-term arithmetic progressions $p_1 < p_2 < \cdots < p_k \leq N$ of primes, for every $k \geq 3$.
Abstract:
Let $X \rightarrow S$ be a smooth projective surjective morphism, where $X$ and $S$ are integral schemes over $\mathbb C$. Let $L_0, L_1, \ldots, L_{n-1}, L_{n}$ be line bundles over $X$. There is a natural isomorphism of the Deligne pairing $\langle L_0, \ldots, L_{n}\rangle$ with the determinant line bundle $\mathrm{Det}(\otimes_{i=0}^{n} (L_i- \mathcal O_{X}))$.
Abstract:
The purpose of this note is to announce complete answers to the following questions. (1) For an essential simple loop on a 2-bridge sphere in a 2-bridge link complement, when is it null-homotopic in the link complement? (2) For two distinct essential simple loops on a 2-bridge sphere in a 2-bridge link complement, when are they homotopic in the link complement? We also announce applications of these results to character varieties and McShane's identity.
Abstract:
We characterize order preserving transforms on the class of lower-semi-continuous convex functions that are defined on a convex subset of $\mathbb{R}^n$ (a "window") and some of its variants. To this end, we investigate convexity preserving maps on subsets of $\mathbb{R}^n$. We prove that, in general, an order isomorphism is induced by a special convexity preserving point map on the epi-graph of the function. In the case of non-negative convex functions on $K$, where $0\in K$ and $f(0) = 0$, one may naturally partition the set of order isomorphisms into two classes; we explain the main ideas behind these results.
Abstract:
The degree zero part of the quantum cohomology algebra of a smooth Fano toric symplectic manifold is determined by the superpotential function, $W$, of its moment polytope. In particular, this algebra is semisimple, i.e. splits as a product of fields, if and only if all the critical points of $W$ are non-degenerate. In this paper, we prove that this non-degeneracy holds for all smooth Fano toric varieties with facet-symmetric duals to moment polytopes.
I just began the new Coursera MOOC on probability, given by UPenn's Dr. Santosh S. Venkatesh, and at the very first lecture my interest was piqued by an intriguing idea called the Chevalier de Méré's Paradox. If I ask you which is more likely:
1. Getting a "6" four times in four throws of a die
2. Getting a "double 6" twenty-four times in twenty-four throws of a pair of dice
You probably won't come up with actual numbers, but your intuition will align nicely with reality in telling you that while the probability associated with (1) is pretty small, the probability associated with (2) is astronomically so:
$P(all\ 6s) = P(6)^4 = \left(\frac{1}{6}\right)^4 \approx 0.00077$
$P(all\ \left\{6,6\right\}s) = P(\left\{6,6\right\})^{24} = \left(\frac{1}{36}\right)^{24} \approx 4.45 \times 10^{-38}$
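These two numbers are easy to verify directly (a quick Python check):

```python
# probability of a 6 on every one of four throws,
# and of a double 6 on every one of twenty-four throws of a pair of dice
p1 = (1 / 6) ** 4
p2 = (1 / 36) ** 24
print(p1)  # about 0.00077
print(p2)  # about 4.45e-38
```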
However if I change the wording of the question slightly:
1. Getting a "6" at least once in four throws of a die
2. Getting a "double 6" at least once in twenty-four throws of a pair of dice
You might then be tempted to answer that they are now both equally likely, by reasoning which could be similar to this: there are a certain number of independent events and I'm asked about the probability of their
union, therefore I must consider the sum of their probabilities, i.e.:
$P(at\ least\ one\ 6) = P(6) \cdot 4 = \frac{1}{6} \cdot 4 = \frac{2}{3}$
$P(at\ least\ one\ \left\{6,6\right\}) = P(\left\{6,6\right\}) \cdot 24 = \frac{1}{36} \cdot 24 = \frac{2}{3}$
But in that case of course your intuition would be completely wrong: just consider what would happen to the probability of getting a "6" in six throws of a die under this reasoning (it would be $6 \cdot \frac{1}{6} = 1$, a certainty). In the case of (1), the equally likely outcomes that we should be counting are not the individual results, from 1 to 6, but rather the different configurations in which the four throws could end up, which is quite different. Let's examine a simpler case with only two throws instead of four. The 36 different configurations are:
import pandas as pd

pd.DataFrame([(t1, t2, 'Yes' if 6 in {t1, t2} else '')
              for t1 in range(1, 7) for t2 in range(1, 7)],
             columns=['Throw 1', 'Throw 2', 'Has 6'])
    Throw 1  Throw 2 Has 6
0         1        1
1         1        2
2         1        3
3         1        4
4         1        5
5         1        6   Yes
6         2        1
7         2        2
8         2        3
9         2        4
10        2        5
11        2        6   Yes
12        3        1
13        3        2
14        3        3
15        3        4
16        3        5
17        3        6   Yes
18        4        1
19        4        2
20        4        3
21        4        4
22        4        5
23        4        6   Yes
24        5        1
25        5        2
26        5        3
27        5        4
28        5        5
29        5        6   Yes
30        6        1   Yes
31        6        2   Yes
32        6        3   Yes
33        6        4   Yes
34        6        5   Yes
35        6        6   Yes
The 'Has 6' column marks the configurations where a "6" occurs, and you'll notice that there are 11 of them (and not 12, as intuition would lead you to falsely believe), which entails that:
$ P(at\ least\ a\ 6\ in\ two\ throws) = \frac{11}{36} = 0.30\overline{5} $
We notice that this is a bit less than $\frac{1}{3}$ of course. By a similar reasoning, and by noticing that it's actually easier to count the configurations which we must reject (instead of those that are of interest), we arrive at the conclusion that in four throws of a die and twenty-four throws of a pair of dice:
$P(at\ least\ one\ 6) = 1 - \frac{5^4}{6^4} \approx 0.518$
$P(at\ least\ one\ \left\{6,6\right\}) = 1 - \frac{35^{24}}{36^{24}} \approx 0.491$
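The same complement computation in code, for comparison with the simulation that follows:

```python
# complement rule: P(at least one success) = 1 - P(no success at all)
p_single = 1 - (5 / 6) ** 4      # at least one 6 in four throws
p_double = 1 - (35 / 36) ** 24   # at least one double 6 in twenty-four throws
print(round(p_single, 4))  # 0.5177
print(round(p_double, 4))  # 0.4914
```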
Finally, if we are still a bit skeptical, nothing beats a little simulation to bring intuition, math and reality together:
from random import randint

n = 100000
print(sum([1 for s in [{randint(1, 6) for _ in range(4)} for _ in range(n)] if 6 in s]) / n)
print(sum([1 for s in [{(randint(1, 6), randint(1, 6)) for _ in range(24)} for _ in range(n)] if (6, 6) in s]) / n)
0.51723
0.49148
Let's assume the basic nouns of our language to describe the physical world are the members of Lie groups. Okay, this is a pompous-sounding statement and somewhat arbitrary, but my justification is that these objects describe all the
continuous symmetries there can be, and almost every clarification of physics using mathematics is done either (1) by viewing a mathematical object from a different standpoint (unification of hitherto seemingly unrelated concepts) or (2) by exploiting symmetries to reduce or get rid of the redundant complexity in a statement. In our continuous manifold descriptions of the physical World, these symmetries are all continuous. So, somewhere in that list of symmetries, we meet $U(1)$, $SU(2)$, $SO(3)$, $U(N)$ and so forth. So we would needfully be doing calculations and simplifications with these objects when we exploit symmetries of a problem. Whether or not we choose to single out an object like:
$$\left(\begin{array}{cc}0&-1\\1&0\end{array}\right)\in U(1), SU(2), SO(3), U(N) \cdots$$
and give it a special symbol $i$ where $i^2=-1$ is a "matter of taste", so in this sense the use of complex numbers is not essential. Nonetheless, we would needfully still meet this object and ones like it and would have to handle statements involving such objects when describing physics in a continuous manifold - there's no way around this as it belongs to any full description of symmetries of the World. So in this sense, complex numbers, quaternions, octonions and so forth are all there and essential in such description. Notice that complex numbers and their algebra are wonted to almost everyone in physics, quaternions to somewhat fewer physicists and octonions not really to that many. This is simply related to how often the relevant symmetry calculations come up: almost any interesting continuous symmetry involves Lie group objects for which $i^2=-1$ and so we single these out and commit all the rules of their algebra to stop ourselves going outright spare and committed to lunatic asylums writing out their full Lie theoretical representations all the time. Singling out quaternions and doing the same saves some work, but not so much, because quaternions come up in fewer symmetries. By the time we get to octonions, the symmetries wherein they come up are quite seldom, so not that many of us are very adept with their special algebra (me included): we can do the full matrix / Lie calculations without too much pain because we don't do them that often, so we don't notice their octonionhood so readily.
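As a small numerical illustration of the "matter of taste" point (my own sketch, not part of the original discussion): the matrix above squares to $-I$, and $e^{\theta J} = \cos\theta\, I + \sin\theta\, J$ is an ordinary rotation, the $2\times 2$ real counterpart of $e^{i\theta}$.

```python
from math import cos, sin, pi

def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

J = [[0.0, -1.0],
     [1.0,  0.0]]        # the matrix singled out and named "i"

JJ = matmul(J, J)        # equals -I, just as i^2 = -1
print(JJ)

# exp(theta*J) = cos(theta)*I + sin(theta)*J is the rotation by theta
theta = pi / 3
R = [[cos(theta), -sin(theta)],
     [sin(theta),  cos(theta)]]
Rt = [[R[0][0], R[1][0]], [R[0][1], R[1][1]]]  # transpose of R
print(matmul(Rt, R))     # identity up to rounding: R is orthogonal
```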
Footnote: One can take "Lie Group members" and "Continuous Symmetries" to be the same by dint of:
1. The solution to Hilbert's fifth problem by Montgomery, Gleason and Zippin: we don't need the concept of manifold nor the concept of analyticity ($C^\omega$); these "build themselves" from the basic idea of a continuous topological group.
2. The classification of all Lie algebras by Wilhelm Killing (who saw that he could do it, but botched the proof a little) and the great Elie Cartan, so we know what all continuous symmetries look like. Once we have classified all Lie algebras, we can find all possible Lie groups: every Lie group has a Lie algebra; every Lie algebra can be exponentiated into a Lie group (e.g. through the matrix exponential, since every Lie algebra can be represented as a matrix Lie algebra by Ado's theorem); and the (global-topological) relationship between Lie groups that have the same Lie algebra is also known.
In the Green-Schwarz formalism, for F1 strings, we have the action
$$S=S_1+S_2$$
Where
$${S_1} = - T\int_{}^{} {\sqrt { - \det \left( {{\Pi _{\alpha \mu }}\Pi _\beta ^\mu } \right)} {{\text{d}}^2}\sigma } $$
and $S_2$ is the additional action term that arises through Kappa Symmetry.
The transformations of kappa symmetry for F1 strings are intuitively related to those for D0-branes, etc. They are given by:
$$\delta {X^\mu } = {{\bar \Theta }^A}{\gamma ^\mu }\,\delta {\Theta ^A} = - \delta {{\bar \Theta }^A}{\gamma ^\mu }{\Theta ^A}$$
So that:
$$\delta \Pi _\alpha ^\mu = - 2\delta {\bar \Theta ^A}{\gamma ^\mu }{\partial _\alpha }{\Theta ^A}$$
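Filling in the intermediate step (a sketch, assuming $\Pi_\alpha^\mu = \partial_\alpha X^\mu - \bar\Theta^A\gamma^\mu\partial_\alpha\Theta^A$, the kappa variation $\delta X^\mu = \bar\Theta^A\gamma^\mu\delta\Theta^A$ as I read the transformation above, and the Majorana flip identity $\bar\chi\gamma^\mu\psi = -\bar\psi\gamma^\mu\chi$):

```latex
\delta \Pi_\alpha^\mu
  = \partial_\alpha\!\left(\bar\Theta^A \gamma^\mu \delta\Theta^A\right)
    - \delta\bar\Theta^A \gamma^\mu \partial_\alpha\Theta^A
    - \bar\Theta^A \gamma^\mu \partial_\alpha\delta\Theta^A
  = \partial_\alpha\bar\Theta^A\, \gamma^\mu \delta\Theta^A
    - \delta\bar\Theta^A \gamma^\mu \partial_\alpha\Theta^A
  = -2\,\delta\bar\Theta^A \gamma^\mu \partial_\alpha\Theta^A .
```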
From this transformation, the variation of $S_1$ under kappa symmetry follows almost trivially (well, not that trivial, though...):
$$\delta {S_1} = \frac{2}{\pi }\int_{}^{} {\sqrt { - \lambda } {\lambda ^{\alpha \beta }}\Pi _\alpha ^\mu \delta {{\bar \Theta }^A}{\gamma _\mu }{\partial _\beta }{\Theta ^A}{{\text{d}}^2}\sigma } $$
Here,
$$\begin{gathered} \lambda = \det {\lambda ^{\alpha \beta }} \\ {\lambda ^{\alpha \beta }} = {\Pi _{\alpha \mu }}\Pi _\beta ^\mu \\ \end{gathered} $$
To determine $S_2$, however, is not all that trivial, since the simple procedure I've mentioned here, for example, is just not practical by any means.
We therefore use a 2-form $\Omega_2$, such that:
$${S_2} = \int_{}^{} {{\Omega _2}} = \int {{\epsilon ^{\alpha \beta }}{\Omega _{\alpha \beta }}{{\text{d}}^2}\sigma } $$
We may also (formally) define a 3-form $\Omega_3=\mbox{d}\Omega_2,$ so that by Stokes' theorem, we have it that:
$$ \int_M {{\Omega _2}} = \int_D^{} {{\Omega _3}} $$
$M$ would be the worldsheet whereas $D$ would be its interior; $M=\partial D$.
There are 3 supersymmetric 1-forms prominent to us, namely ${\text{d}}{\Theta ^1},{\text{d}}{\Theta ^2},{\Pi ^\mu }$. If ${\Omega _3}$ is to be supersymmetric, which it had better be, then we should have it that ${\Omega _3}$ involves just these 3 1-forms.
A very sensible choice of 3-form for $\Omega_3$ is:
$${\Omega _3} = A\left( {{\text{d}}{{\bar \Theta }^1}{\gamma _\mu }{\text{d}}{\Theta ^1} -{\kern 1pt} {\kern 1pt} {\kern 1pt} {\text{d}}{{\bar \Theta }^2}{\gamma _\mu }{\text{d}}{\Theta ^2}} \right){\Pi ^\mu }$$
Now, when I first learnt this, I was pretty confused by the "$-$" sign (probably because of the fact that I was too confused to bother reading the next line of BBS :) ...).
As I guess many others may also be confused at this, let me ask and answer the question:
Is this minus sign, by any chance, related to the fact that Type IIA string theory is non-chiral (its two $\Theta$'s having opposite chirality)? Does this mean that the above expression only holds for Type IIA, and not for Type IIB?
Skills to Develop
- To learn what the observed significance of a test is.
- To learn how to compute the observed significance of a test.
- To learn how to apply the \(p\)-value approach to hypothesis testing.

The Observed Significance
The conceptual basis of our testing procedure is that we reject \(H_0\) only if the data that we obtained would constitute a rare event if \(H_0\) were actually true. The level of significance \(\alpha\) specifies what is meant by "rare." The observed significance of the test is a measure of how rare the value of the test statistic that we have just observed would be if the null hypothesis were true. That is, the observed significance of the test just performed is the probability that, if the test were repeated with a new sample, the result of the new test would be at least as contrary to \(H_0\) and in support of \(H_a\) as what was observed in the original test.
Definition: observed significance
The
observed significance or \(p\)-value of a specific test of hypotheses is the probability, on the supposition that \(H_0\) is true, of obtaining a result at least as contrary to \(H_0\) and in favor of \(H_a\) as the result actually observed in the sample data.
Think back to "Example 8.2.1", Section 8.2 concerning the effectiveness of a new pain reliever. This was a left-tailed test in which the value of the test statistic was \(-1.886\). To be as contrary to \(H_0\) and in support of \(H_a\) as the result \(Z=-1.886\) actually observed means to obtain a value of the test statistic in the interval \((-\infty ,-1.886]\). Rounding \(-1.886\) to \(-1.89\), we can read directly from Figure 7.1.5 that \(P(Z\leq -1.89)=0.0294\). Thus the \(p\)-value or observed significance of the test in "Example 8.2.1", Section 8.2 is \(0.0294\) or about \(3\%\). Under repeated sampling from this population, if \(H_0\) were true then only about \(3\%\) of all samples of size \(50\) would give a result as contrary to \(H_0\) and in favor of \(H_a\) as the sample we observed. Note that the probability \(0.0294\) is the area of the left tail cut off by the test statistic in this left-tailed test.
Analogous reasoning applies to a right-tailed or a two-tailed test, except that in the case of a two-tailed test being as far from \(0\) as the observed value of the test statistic but on the opposite side of \(0\) is just as contrary to \(H_0\) as being the same distance away and on the same side of \(0\), hence the corresponding tail area is doubled.
Computational Definition of the Observed Significance of a Test of Hypotheses
The
observed significance of a test of hypotheses is the area of the tail of the distribution cut off by the test statistic (times two in the case of a two-tailed test).
Example \(\PageIndex{1}\)
Compute the observed significance of the test performed in "Example 8.2.2", Section 8.2.
Solution:
The value of the test statistic was \(z=2.490\), which by Figure 7.1.5 cuts off a tail of area \(0.0064\), as shown in Figure \(\PageIndex{1}\). Since the test was two-tailed, the observed significance is \(2\times 0.0064=0.0128\).
Figure \(\PageIndex{1}\): Area of the Tail for Example \(\PageIndex{1}\).

The p-value Approach to Hypothesis Testing
In "Example 8.2.1", Section 8.2 the test was performed at the \(5\%\) level of significance: the definition of “rare” event was probability \(\alpha =0.05\) or less. We saw above that the observed significance of the test was \(p=0.0294\) or about \(3\%\). Since \(p=0.0294<0.05=\alpha\) (or \(3\%\) is less than \(5\%\)), the decision turned out to be to reject: what was observed was sufficiently unlikely to qualify as an event so rare as to be regarded as (practically) incompatible with \(H_0\).
In "Example 8.2.2", Section 8.2 the test was performed at the \(1\%\) level of significance: the definition of “rare” event was probability \(\alpha =0.01\) or less. The observed significance of the test was computed in "Example \(\PageIndex{1}\)" as \(p=0.0128\) or about \(1.3\%\). Since \(p=0.0128>0.01=\alpha\) (or \(1.3\%\) is greater than \(1\%\)), the decision turned out to be not to reject. The event observed was unlikely, but not sufficiently unlikely to lead to rejection of the null hypothesis.
The reasoning just presented is the basis for a slightly different but equivalent formulation of the hypothesis testing process. The first three steps are the same as before, but instead of using \(\alpha\) to compute critical values and construct a rejection region, one computes the \(p\)-value \(p\) of the test and compares it to \(\alpha\), rejecting \(H_0\) if \(p\leq \alpha\) and not rejecting if \(p>\alpha\).
Systematic Hypothesis Testing Procedure: p-Value Approach

1. Identify the null and alternative hypotheses.
2. Identify the relevant test statistic and its distribution.
3. Compute from the data the value of the test statistic.
4. Compute the \(p\)-value of the test.
5. Compare the value computed in Step 4 to the significance level \(\alpha\) and make a decision: reject \(H_0\) if \(p\leq \alpha\) and do not reject \(H_0\) if \(p>\alpha\). Formulate the decision in the context of the problem, if applicable.
Example \(\PageIndex{2}\)
The total score in a professional basketball game is the sum of the scores of the two teams. An expert commentator claims that the average total score for NBA games is \(202.5\). A fan suspects that this is an overstatement and that the actual average is less than \(202.5\). He selects a random sample of \(85\) games and obtains a mean total score of \(199.2\) with standard deviation \(19.63\). Determine, at the \(5\%\) level of significance, whether there is sufficient evidence in the sample to reject the expert commentator’s claim.
Solution:

Step 1. Let \(\mu\) be the true average total game score of all NBA games. The relevant test is \[H_0: \mu =202.5\\ \text{vs}\\ H_a: \mu <202.5\; @\; \alpha =0.05\]

Step 2. The sample is large and the population standard deviation is unknown. Thus the test statistic is \[Z=\frac{\bar{x}-\mu _0}{s/\sqrt{n}}\] and has the standard normal distribution.

Step 3. Inserting the data into the formula for the test statistic gives \[Z=\frac{\bar{x}-\mu _0}{s/\sqrt{n}}=\frac{199.2-202.5}{19.63/\sqrt{85}}=-1.55\]

Step 4. The area of the left tail cut off by \(z=-1.55\) is, by Figure 7.1.5, \(0.0606\), as illustrated in Figure \(\PageIndex{2}\). Since the test is left-tailed, the \(p\)-value is just this number, \(p=0.0606\).

Step 5. Since \(p=0.0606>0.05=\alpha\), the decision is not to reject \(H_0\). In the context of the problem our conclusion is:
The data do not provide sufficient evidence, at the \(5\%\) level of significance, to conclude that the average total score of NBA games is less than \(202.5\).
Figure \(\PageIndex{2}\): Test Statistic for Example \(\PageIndex{2}\)
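The tail-area arithmetic in Step 4 can be reproduced with the standard normal CDF; a small sketch using `math.erf` instead of the printed table:

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal cumulative distribution function Phi(z)."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# data from the NBA example: n = 85, xbar = 199.2, s = 19.63, mu0 = 202.5
z = (199.2 - 202.5) / (19.63 / sqrt(85))
p = normal_cdf(z)  # left-tailed test: the p-value is the lower-tail area
print(round(z, 2))  # -1.55
print(round(p, 4))  # close to the table value 0.0606
```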
Example \(\PageIndex{3}\)
Mr. Prospero has been teaching Algebra II from a particular textbook at Remote Isle High School for many years. Over the years students in his Algebra II classes have consistently scored an average of \(67\) on the end of course exam (EOC). This year Mr. Prospero used a new textbook in the hope that the average score on the EOC test would be higher. The average EOC test score of the \(64\) students who took Algebra II from Mr. Prospero this year had mean \(69.4\) and sample standard deviation \(6.1\). Determine whether these data provide sufficient evidence, at the \(1\%\) level of significance, to conclude that the average EOC test score is higher with the new textbook.
Solution:

Step 1. Let \(\mu\) be the true average score on the EOC exam of all Mr. Prospero's students who take the Algebra II course with the new textbook. The natural statement that would be assumed true unless there were strong evidence to the contrary is that the new book is about the same as the old one. The alternative, which it takes evidence to establish, is that the new book is better, which corresponds to a higher value of \(\mu\). Thus the relevant test is \[H_0: \mu =67\\ \text{vs}\\ H_a: \mu >67\; @\; \alpha =0.01\]

Step 2. The sample is large and the population standard deviation is unknown. Thus the test statistic is \[Z=\frac{\bar{x}-\mu _0}{s/\sqrt{n}}\] and has the standard normal distribution.

Step 3. Inserting the data into the formula for the test statistic gives \[Z=\frac{\bar{x}-\mu _0}{s/\sqrt{n}}=\frac{69.4-67}{6.1/\sqrt{64}}=3.15\]

Step 4. The area of the right tail cut off by \(z=3.15\) is, by Figure 7.1.5, \(1-0.9992=0.0008\), as shown in Figure \(\PageIndex{3}\). Since the test is right-tailed, the \(p\)-value is just this number, \(p=0.0008\).

Step 5. Since \(p=0.0008<0.01=\alpha\), the decision is to reject \(H_0\). In the context of the problem our conclusion is:
The data provide sufficient evidence, at the \(1\%\) level of significance, to conclude that the average EOC exam score of students taking the Algebra II course from Mr. Prospero using the new book is higher than the average score of those taking the course from him but using the old book.
Figure \(\PageIndex{3}\): Test Statistic for Example \(\PageIndex{3}\)
Example \(\PageIndex{4}\)
For the surface water in a particular lake, local environmental scientists would like to maintain an average pH level at \(7.4\). Water samples are routinely collected to monitor the average pH level. If there is evidence of a shift in pH value, in either direction, then remedial action will be taken. On a particular day \(30\) water samples are taken and yield average pH reading of \(7.7\) with sample standard deviation \(0.5\). Determine, at the \(1\%\) level of significance, whether there is sufficient evidence in the sample to indicate that remedial action should be taken.
Solution:

Step 1. Let \(\mu\) be the true average pH level at the time the samples were taken. The relevant test is \[H_0: \mu =7.4\\ \text{vs}\\ H_a: \mu \neq 7.4\; @\; \alpha =0.01\]

Step 2. The sample is large and the population standard deviation is unknown. Thus the test statistic is \[Z=\frac{\bar{x}-\mu _0}{s/\sqrt{n}}\] and has the standard normal distribution.

Step 3. Inserting the data into the formula for the test statistic gives \[Z=\frac{\bar{x}-\mu _0}{s/\sqrt{n}}=\frac{7.7-7.4}{0.5/\sqrt{30}}=3.29\]

Step 4. The area of the right tail cut off by \(z=3.29\) is, by Figure 7.1.5, \(1-0.9995=0.0005\), as illustrated in Figure \(\PageIndex{4}\). Since the test is two-tailed, the \(p\)-value is double this number, \(p=2\times 0.0005=0.0010\).

Step 5. Since \(p=0.0010<0.01=\alpha\), the decision is to reject \(H_0\). In the context of the problem our conclusion is:
The data provide sufficient evidence, at the \(1\%\) level of significance, to conclude that the average pH of surface water in the lake is different from \(7.4\). That is, remedial action is indicated.
Figure \(\PageIndex{4}\): Test Statistic for Example \(\PageIndex{4}\)
Key Takeaway
- The observed significance or \(p\)-value of a test is a measure of how inconsistent the sample result is with \(H_0\) and in favor of \(H_a\).
- The \(p\)-value approach to hypothesis testing means that one merely compares the \(p\)-value to \(\alpha\) instead of constructing a rejection region.
- There is a systematic five-step procedure for the \(p\)-value approach to hypothesis testing.

Contributor
Anonymous
Class "Gammad"
The Gammad distribution with parameters shape \(=\alpha\) (by default \(=1\)) and scale \(=\sigma\) (by default \(=1\)) has density $$ d(x)= \frac{1}{{\sigma}^{\alpha}\Gamma(\alpha)} {x}^{\alpha-1} e^{-x/\sigma} $$ for \(x > 0\), \(\alpha > 0\) and \(\sigma > 0\). The mean and variance are \(E(X) = \alpha\sigma\) and \(Var(X) = \alpha\sigma^2\). Cf. rgamma.
Keywords: distribution

Objects from the Class
Objects can be created by calls of the form
Gammad(scale, shape). This object is a gamma distribution.
Slots
img
Object of class
"Reals": The space of the image of this distribution has got dimension 1 and the name "Real Space".
param
Object of class
"GammaParameter": the parameter of this distribution (scale and shape), declared at its instantiation
r
Object of class
"function": generates random numbers (calls function rgamma)
d
Object of class
"function": density function (calls function dgamma)
p
Object of class
"function": cumulative function (calls function pgamma)
q
Object of class
"function": inverse of the cumulative function (calls function qgamma)
.withArith
logical: used internally to issue warnings as to interpretation of arithmetics
.withSim
logical: used internally to issue warnings as to accuracy
.logExact
logical: used internally to flag the case where there are explicit formulae for the log version of density, cdf, and quantile function
.lowerExact
logical: used internally to flag the case where there are explicit formulae for the lower tail version of cdf and quantile function
Symmetry
object of class
"DistributionSymmetry"; used internally to avoid unnecessary calculations.
Extends
Class
"ExpOrGammaOrChisq", directly.Class
"AbscontDistribution", by class
"ExpOrGammaOrChisq".Class
"UnivariateDistribution", by class
"AbscontDistribution".Class
"Distribution", by class
"UnivariateDistribution".
Methods initialize
signature(.Object = "Gammad"): initialize method
scale
signature(object = "Gammad"): returns the slot
scale of the parameter of the distribution
scale<-
signature(object = "Gammad"): modifies the slot
scale of the parameter of the distribution
shape
signature(object = "Gammad"): returns the slot
shape of the parameter of the distribution
shape<-
signature(object = "Gammad"): modifies the slot
shape of the parameter of the distribution
+
signature(e1 = "Gammad", e2 = "Gammad"): For the Gamma distribution we use its closedness under convolutions.
*
signature(e1 = "Gammad", e2 = "numeric"): For the Gamma distribution we use its closedness under positive scaling transformations.
See Also

Aliases
Gammad-class Gammad initialize,Gammad-method

Examples
# NOT RUN {
G <- Gammad(scale=1, shape=1)  # G is a gamma distribution with scale=1 and shape=1.
r(G)(1)   # one random number generated from this distribution, e.g. 0.1304441
d(G)(1)   # Density of this distribution is 0.3678794 for x=1.
p(G)(1)   # Probability that x<1 is 0.6321206.
q(G)(.1)  # Probability that x<0.1053605 is 0.1.
## in RStudio or Jupyter IRKernel, use q.l(.)(.) instead of q(.)(.)
scale(G)  # scale of this distribution is 1.
scale(G) <- 2  # scale of this distribution is now 2.
# }
Documentation reproduced from package distr, version 2.8.0, License: LGPL-3
I have been asked to solve this exercise.
In a room of volume $V=1500$ $\mathrm{m}^3$, a reverberant-field sound pressure level of $70$ $\mathrm{dB}$ is observed when an acoustic source of $80$ $\mathrm{dB}$ is activated. Compute the total absorption (in $\mathrm{m}^2$) required to obtain a reverberation time of $4$ seconds.
Let me show you my idea: I have to use Sabine's formula $$ T_{60} = \beta \frac{V}{A} \text{ with } \beta=0.16$$
The surface $A$ should be computed as
$$A = \sum_i a_i S_i $$
where $a_i$ is the absorption coefficient of the $i$-th surface $S_i$.
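Note that with the given numbers, Sabine's formula already fixes the required total absorption $A$ before any decomposition into individual surfaces; a quick check:

```python
V = 1500.0    # room volume in m^3
T60 = 4.0     # target reverberation time in s
beta = 0.16
A = beta * V / T60  # required total absorption in m^2
print(round(A, 6))  # 60.0
```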
Can someone help me to find the missing surfaces and coefficients because I can't understand in which way I must find them?
Thanks in advance. |
I'm looking for some general insight on preconditioning. In particular, relevant references/resources/comments would be greatly appreciated. Note, I have been through some of the literature, but am having trouble finding a reference for a problem similar to the one I am solving.
The system I'm solving takes the form
$Q_{ij} \ddot{y}_j + S_{ijk} \dot{y}_j\dot{y}_k +V_i =0$
where Einstein summation is assumed, and the arrays $Q_{ij}, S_{ijk},V_i$ are all low-order polynomials in the dependent variables $\{ y_1, \dots ,y_N\}$. Here, $N$ is the 'resolution' of the model, and $i,j,k\in\{1,\dots,N\}$.
Now, because $Q_{ij}$ is nontrivial to invert, I've been solving this using an implicit differential-algebraic equation solver, namely the IDA solver that is part of the SUNDIALS suite, via the FORTRAN interface. This solver uses a "variable-order, variable-coefficient" backward differentiation formula. I also change variables to make the system first order, i.e. $\dot{y}_k = v_k$.
The solver, in general, works as follows. Our governing equation can be written as
$F(t,\vec{y},\dot{\vec{y}}) = 0; \quad \vec{y}(t_o) = \vec{y}_o; \quad \dot{\vec{y}}(t_o)=\dot{\vec{y}}_o$,
where $\vec{y} = (y_1,\dots,y_N)$. Now, the backward differentiation formula of $q$th order implies
$\sum_{i=0}^q \alpha(n,i) \vec{y}(n-i) =h(n) \dot{\vec{y}}(n)$,
where $h(n)=t(n)-t(n-1)$ is the step size, and the $\alpha(n,i)$ are functions of $q$ and the history of the time steps. Substituting this back into the governing equation, we have equations of the form
$G(\vec{y}(n)) = F\left(t(n), \vec{y}(n),h(n)^{-1}\sum_{i=0}^q \alpha(n,i) \vec{y}(n-i)\right) = 0$.
The solver works by finding the solution to this equation via some form of Newton iteration. That is, it looks at
$J[\vec{y}(n)^{m+1}-\vec{y}(n)^m] = -G(\vec{y}(n)^m)$,
where the superscript $(m,m+1)$ refers to the $m$th, and $m+1$th approximation to the variable $\vec{y}$ and the Jacobian $J$ is given by
$J= \frac{\partial G}{\partial y} = \frac{\partial F}{\partial y}+ \alpha \frac{\partial F}{\partial \dot{y}}; \quad \alpha =\frac{\alpha(n,0)}{h(n)}$
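As an illustration of the scheme described above (not the actual IDA internals), here is a minimal sketch of one $q=1$ BDF step (backward Euler) with a Newton iteration on the residual $G$; the toy residual $F$ and the finite-difference Jacobian are stand-ins:

```python
import numpy as np

# One q=1 BDF (backward Euler) step for an implicit residual F(t, y, ydot) = 0:
#   G(y_n) = F(t_n, y_n, (y_n - y_{n-1}) / h) = 0,
# solved by Newton iteration with J = dF/dy + (1/h) dF/dydot.

def F(t, y, ydot):
    # toy residual: ydot + y = 0, exact solution y(t) = y(0) * exp(-t)
    return ydot + y

def bdf1_step(F, t, y_prev, h, newton_iters=10):
    y = y_prev.copy()
    for _ in range(newton_iters):
        ydot = (y - y_prev) / h
        G = F(t, y, ydot)
        # Jacobian by forward finite differences (a real solver would reuse
        # and lag this, as IDA does)
        eps = 1e-8
        J = np.empty((len(y), len(y)))
        for j in range(len(y)):
            dy = np.zeros_like(y)
            dy[j] = eps
            J[:, j] = (F(t, y + dy, (y + dy - y_prev) / h) - G) / eps
        y = y + np.linalg.solve(J, -G)
    return y

y = np.array([1.0])
h = 0.01
for n in range(100):
    y = bdf1_step(F, (n + 1) * h, y, h)
# after t = 1, y should be close to exp(-1); backward Euler is O(h) accurate
```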
Now, as the system approaches the phenomena I wish to study, the equations take much longer to integrate. I have optimized the 'low hanging fruit', e.g. the linear algebra is done using OpenBLAS, OpenMP is implemented etc...
I suspect that I need to use an iterative linear solver for these larger problems, and this calls for a preconditioner. My understanding is that a preconditioner $P$ is a matrix such that $P^{-1}J$ gives the same solutions as the case $P=I$, but is potentially easier to solve.
Now, I know that there exists a permanent progressive solution to my governing equation, such that $y_n = a_n e^{inct}$ where $a_n,c$ are numerically computed constants. The solutions I'm interested in modeling are going to be perturbations from these. So at least for short times, the calculated solutions will be small deviations from the permanent progressive solutions.
So, I am interested in using information from these permanent progressive solutions to see if I can improve the efficiency of solving the more general equations.
In a sentence, I am unclear of how to do this, and have struggled to find relevant resources where specific examples are carried out in detail.
Naively, I would like to output the Jacobian of my permanent progressive solution so that I could get a better idea of its structure, and from here I could pick a preconditioner. I cannot seem to figure out how to do this in SUNDIALS.
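In case it helps, here is a hedged sketch of the basic idea — in SciPy rather than SUNDIALS, with made-up matrices `J0` and `J` standing in for the Jacobians: factor the Jacobian at the permanent progressive solution once, then reuse that factorization as the preconditioner for GMRES on the nearby systems arising during the Newton iterations.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(0)
N = 200
# "reference" Jacobian, e.g. evaluated at the permanent progressive solution
J0 = np.eye(N) + 0.1 * rng.standard_normal((N, N)) / np.sqrt(N)
# nearby Jacobian from a perturbed state
J = J0 + 1e-3 * rng.standard_normal((N, N))
b = rng.standard_normal(N)

lu = lu_factor(J0)  # factor once, reuse for the whole (short-time) integration
M = LinearOperator((N, N), matvec=lambda v: lu_solve(lu, v))

x, info = gmres(J, b, M=M, atol=0.0)
# since M ~ J^{-1}, GMRES should converge in a handful of iterations
```

Whether this pays off in practice depends on how quickly the true Jacobian drifts away from the reference one; refactoring `J0` occasionally (as IDA does with its lagged Jacobian) is the usual compromise.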
Any information would be greatly appreciated,
Nick |
My reference is L. Simon's
Lectures on Geometric Measure Theory. He defines a measure on a set $X$ as a countably subadditive function $\mu:2^X\to[0,\infty]$ with $\mu(\emptyset)=0.$
When $X$ is a locally compact and separable topological space, he defines a measure $\mu$ on $X$ to be Radon if it is Borel regular (i.e. every set is contained in a Borel set with the same measure, and all Borel sets are measurable) and finite on compact sets.
Why make the assumption that $X$ is locally compact and separable? They seem extraneous to me. |
Prove the following statement with inference rules (without a truth table): $$ (\neg C \wedge B \wedge (A \rightarrow C) \wedge (B \rightarrow D ) )\implies (\neg A \wedge D ) $$ Attempt at a proof:
1. $B$ (premise)
2. $B \rightarrow D$ (premise)
3. $D$ (modus ponens, 1, 2)
4. $A \rightarrow C$ (premise)
...
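For completeness, one way the derivation might be finished (the rule names may differ by textbook):

```latex
% continuing the numbered attempt above
5.\ \neg C              \qquad\text{(premise)}
6.\ \neg A              \qquad\text{(modus tollens, 4, 5)}
7.\ \neg A \wedge D     \qquad\text{(conjunction introduction, 3, 6)}
```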
Gallian says: in a PID, any strictly increasing chain of ideals $I_1\subset I_2\subset\cdots$ must be finite in length. I don't see why we need "strictly increasing"; the proof does not seem to use strictness anywhere.
Just to confirm, if $\omega\in \Lambda^k(V)$ for $V$ an $n$-dimensional vector space, $$\omega = \sum_{1\leq i_1<\dots<i_k\leq n}a_{i_1,\dots,i_k}dx_{i_1}\wedge\dots\wedge dx_{i_k}$$
where $$\omega(v_1,\dots,v_k)=\sum_{1\leq i_1<\dots<i_k\leq n}a_{i_1,\dots,i_k}\begin{vmatrix}dx_{i_1}(v_1)&\dots& dx_{i_1}(v_k)\\ \vdots& &\vdots\\ dx_{i_k}(v_1)&\dots&dx_{i_k}(v_k)\end{vmatrix}$$
That's how we actually apply this form to a collection of $k$ vectors of $n$-dimensional $V$ right?
Where $dx_{i_m}$ is the dual vector to $e_{i_m}$, so in particular the above is: $$\omega(v_1,\dots,v_k)=\sum_{1\leq i_1<\dots<i_k\leq n}a_{i_1,\dots,i_k}\begin{vmatrix}v_1^{i_1}&\dots& v_k^{i_1}\\ \vdots& &\vdots\\ v_1^{i_k}&\dots&v_k^{i_k}\end{vmatrix}$$
where $v_i^j$ means the $j^{th}$ component of the vector $v_i$
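Yes — and the recipe is mechanical enough to code up. A small numpy sketch of exactly this evaluation (the dict-of-coefficients representation and 0-based indices are just for illustration):

```python
import numpy as np
from itertools import combinations

# omega(v_1,...,v_k) = sum over 1 <= i_1 < ... < i_k <= n of
#   a_{i_1..i_k} * det of the (i_1,...,i_k) rows of the matrix [v_1 ... v_k].

def eval_form(a, V):
    """a: dict mapping 0-based index tuples (i_1,...,i_k) to coefficients;
       V: (n, k) array whose columns are the vectors v_1,...,v_k."""
    n, k = V.shape
    total = 0.0
    for idx in combinations(range(n), k):
        coeff = a.get(idx, 0.0)
        if coeff:
            total += coeff * np.linalg.det(V[list(idx), :])
    return total

# Example: omega = dx_1 ^ dx_2 on R^3 applied to (e_1, e_2) gives 1.
a = {(0, 1): 1.0}
V = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
print(eval_form(a, V))  # 1.0
```

Swapping the two input vectors flips the sign of each determinant, so the antisymmetry of the form falls out of the construction for free.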
In mathematics, the Higman group, introduced by Graham Higman (1951), was the first example of an infinite finitely presented group with no non-trivial finite quotients. The quotient by the maximal proper normal subgroup is a finitely generated infinite simple group. Higman (1974) later found some finitely presented infinite groups Gn,r that are simple if n is even and have a simple subgroup of index 2 if n is odd, one of which is one of the Thompson groups. Higman's group is generated by 4 elements a, b, c, d with the relations a−...
Let $f: D \to \mathbb{R}$ be a continuous increasing function. To prove surjectivity:
Suppose that $\exists y_j: \; \forall x_i\; f(x_i) \neq y_j$. $f$ is injective $\implies \; [\forall x_1,x_2 \in D:\; f(x_1) < f(x_2) \implies x_1 < x_2]$. Since $f$ is continuous, $y_j$ is outside the range of $f$.
The partial sums of the Möbius inverse of the Harmonic numbers fit within this linear programming problem: https://pastebin.com/WJT75cSE Setting the upper bound on the variables to zero gives us a square root bound, but then it is no longer an arithmetic sequence.
Why are you enumerating these elements? is $D$ countable? Is $y_j$ supposed to be an arbitrary element of $R$? (If yes, you will run into problems with enumeration) Clearly there exists a $y$ for every $x\in D$ such that $f(x)\neq y$, because there are more than one real number (did you mean to put the quantifiers in a different order there?),
That statement is obviously true though. For each $x$, $f(x)$ is just a real number, so you can just pick $y$ to be any other real number (and this is clearly possible because there are more than one real number).
@Thorgott well, the Minkowski bound depends on the discriminant of your number field, so if you take it small enough the Minkowski bound won't exceed $2$, in which case you have no prime ideals to generate the ideal class group
(but obviously these are not the only cases of rings of integers of number fields that are UFDs because the bound also depends on the degree of the extension and the number of complex embeddings)
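As a concrete sketch of that remark, the Minkowski bound $M_K=\left(\frac{4}{\pi}\right)^{s}\frac{n!}{n^n}\sqrt{|d_K|}$ evaluated for $K=\mathbb{Q}(\sqrt{-3})$:

```python
import math

# Minkowski bound M_K = (4/pi)^s * (n!/n^n) * sqrt(|d_K|)
# for K = Q(sqrt(-3)): degree n = 2, one pair of complex embeddings (s = 1),
# discriminant d_K = -3.
n, s, d = 2, 1, -3
M = (4 / math.pi) ** s * (math.factorial(n) / n ** n) * math.sqrt(abs(d))
print(M)  # ≈ 1.10 < 2, so the class group is trivial and O_K is a UFD
```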
I need help in the following question:I have a field: $$F=\left(y^3-3y+xy^2,3x-x^3+x^2y\right)$$bounded in region $D$ defined by $x^2+y^2\leq 2.$I need to find a path $C$ that goes from $(1,1)$ to $(-1,-1)$ inside $D$ such that the value of the line integral/work of $F$ on the path $C$ is max...
I see. I usually ask the question for a closed curve passing through a given point. You want to make a closed curve out of this question (so, close it up by following the curve and then the line segment from $(-1,-1)$ to $(1,1)$). Now use Green's Theorem and look at what double integral you're getting.
I think it's useful for young people who have an interest in pursuing an academic career to think of the eduation system less as a hierarchy of degree attainment than a continuous progression from layperson to novice to apprentice and that a chain of degrees is more a progression through your "training" for your eventual job
at least it sounds like a perspective I wish I'd had as a younger person
Let $(M^m,g)$ be a compact Riemannian manifold with smooth nonempty boundary, and $N^n\subseteq \Bbb R^d$ a boundaryless isometrically embedded Riemannian manifold. For $1\le p<\infty$ we define as usual$$W^{1,p}(M,N):=\{u\in W^{1,p}(M,\Bbb R^d):u(x)\in N\text{ a.e.}\}.$$Using the Euclidean tr...
Well, what you've written makes absolutely no sense to me. If $R(f,x,y)$ is the redefinition of a function at $x$ only, you need its value at $w$ for what you've written to make sense, so you would need $R(f,x,y)(w)$, etc. That difficulty propagates in what you've written.
If I have a linear functional defined a proper subspace of a Banach space, how does one use Hahn-Banach to extend it to the entire vector space? Rudin simply says this can be done, but he doesn't show how it is done.
Consider rational functions in two positive integer variables $a,b$ with integer coëfficiënts : $f_i(a,b)$.Some of them have special properties.For instance$$ \frac{a^2 + b^2}{1 + ab} = f_2(a,b) = c $$It is obvious that $c$ is a positive fraction.But the special things here are :Prop... |
While reading up on quadratic reciprocity, I learned that if $p = 4k+1$ then $-1$ has a square root in $\mathbb{Z} / p \mathbb{Z}$.
Let $r_p$ be an integer with $0\leq r_p < p$ and $r_p^2 \equiv -1 \mod p$. How then is $\frac{r_p}{p} \in \mathbb{Q}$ distributed in $[0,1]$? Naively I would guess it is uniformly distributed. How can we prove that?
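A quick brute-force sketch for experimenting with this distribution (trial division and the quadratic-nonresidue trick are only for illustration; any square-root-mod-p routine would do):

```python
# For primes p ≡ 1 (mod 4), collect both square roots r of -1 mod p
# and the ratios r/p in [0,1] whose distribution is being asked about.

def is_prime(m):
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

ratios = []
for p in range(5, 2000):
    if p % 4 == 1 and is_prime(p):
        # if a is a quadratic nonresidue, r = a^((p-1)/4) satisfies r^2 ≡ -1
        for a in range(2, p):
            if pow(a, (p - 1) // 2, p) == p - 1:
                r = pow(a, (p - 1) // 4, p)
                ratios.extend([r / p, (p - r) / p])
                break

# note the sample is symmetric about 1/2 by construction (r pairs with p - r),
# so the mean is trivially 1/2; a histogram is the more informative check
print(sum(ratios) / len(ratios))
```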
Edit I noticed in the comments, it might be simpler to ask about the equidistribution of $$\{ \tfrac{1}{\sqrt{p}}(a,b): a^2 + b^2 = p\} \subset S^1$$
still in the case $p = 4k+1$. |
I have a question regarding the treatment of Robin boundary conditions in a multigrid solver. I am solving the Poisson equation in $\Omega=(0,1)^2$ with Robin boundary conditions on the boundary, $$- \Delta u = f \text{ in } \Omega$$ and $$\frac{\partial u}{\partial n} - au= g \text{ on } \partial \Omega.$$ I use a finite difference discretization with central differences (a standard five point stencil) for the interior points, and eliminate the boundary conditions so that, for example, at a point $x_C$ on the left boundary with east, south and north neighbors $x_E, x_S, x_N$ the resulting equation is
$$\frac{1}{h^2}\left( 4u(x_C)-u(x_S)-u(x_N)-2u(x_E)\right) - 2\frac{a}{h}u(x_C) = \frac{2}{h}g(x_C)+f(x_C).$$
In chapter 5 of the book 'Multigrid' by Trottenberg et al., it is explained that if this approach is followed for a Neumann problem ($a=0$) and the full-weighting restriction operator $R$ is modified on the boundary points, e.g., on a vertical boundary the modified stencil is $$ \frac{1}{16}\left[\begin{array}{cc} 2 &2 \\ 4&4 \\2&2 \end{array}\right]_{h}^{2h} $$ then a relaxation method can be applied directly on the equations for the boundary points with the eliminated boundary condition (no need to transfer the equation and boundary condition separately). I am using the interpolation operator $I=4R^T$ and Galerkin coarse grid systems.
My question is: Is this multigrid approach correct (modified FW operator on the boundaries + relaxation on the equation with eliminated boundary conditions and Galerkin coarse systems) also for more general Robin boundary conditions, provided a good relaxation method is used?
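Not an answer, but a 1-D sanity check of the Galerkin construction $A_{2h}=R A_h I$ used above (with $I=2R^T$ in 1-D instead of $4R^T$): for the interior Poisson stencil, the Galerkin coarse operator reproduces the $2h$ discretization exactly.

```python
import numpy as np

nf = 7                    # fine interior points, h = 1/8 (Dirichlet ends)
h = 1.0 / (nf + 1)
A = (np.diag(2 * np.ones(nf)) - np.diag(np.ones(nf - 1), 1)
     - np.diag(np.ones(nf - 1), -1)) / h**2

nc = 3                    # coarse interior points, H = 2h = 1/4
R = np.zeros((nc, nf))
for i in range(nc):
    R[i, 2 * i:2 * i + 3] = [0.25, 0.5, 0.25]   # 1-D full-weighting stencil
P = 2 * R.T                                      # linear interpolation

A2h = R @ A @ P           # Galerkin coarse-grid operator
T = (np.diag(2 * np.ones(nc)) - np.diag(np.ones(nc - 1), 1)
     - np.diag(np.ones(nc - 1), -1)) / (2 * h)**2
print(np.allclose(A2h, T))  # True
```

The interesting part of your question is precisely what happens to the rows touched by the modified boundary stencil and the Robin term, which this interior check does not settle.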
Thanks in advance. |
"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever."
Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum just like a light wave has momentum, but we generally think of a gravitational field as a static object, like an electrostatic field.
"You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment. "
So I've got a small bottle that I filled up with salt. I put it on the scale and its mass is 83 g. I've also got a jug containing 500 g of water. I put the bottle in the jug and it sank to the bottom. I have to figure out how much salt to take out of the bottle so that the weight force of the bottle equals the buoyancy force.
For the buoyancy do I: density of water * volume of water displaced * gravity acceleration?
so: mass of bottle * gravity = volume of water displaced * density of water * gravity?
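Yes, that is the right balance (and $g$ cancels from both sides). A numeric sketch — note the displaced volume of the bottle is not given in the post, so the 60 mL used here is a made-up placeholder:

```python
rho_water = 1000.0   # kg/m^3
m_bottle = 0.083     # kg, bottle + salt as weighed

V_displaced = 60e-6  # m^3 -- ASSUMED bottle volume (not given in the post)

# Equilibrium: m_target * g = rho_water * V_displaced * g
m_target = rho_water * V_displaced      # kg that the bottle may weigh
salt_to_remove = m_bottle - m_target    # kg of salt to take out
print(round(salt_to_remove * 1000, 3))  # 23.0 grams, under the assumed volume
```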
@EmilioPisanty The measurement operators that I suggested in the comments of the post are fine, but I additionally would like to control the width of the Poisson distribution (much like we can do for the normal distribution using the variance). Do you know whether this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_CdC = 1$$?
As a workaround while this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including: ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat, commonly used in the Mathematics chat room. An altern...
You're always welcome to ask. One of the reasons I hang around in the chat room is because I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer.
Though as it happens I have to go now - lunch time! :-)
@JohnRennie It's possible to do it using the energy method. We just need to carefully write down the potential function, which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with zero point at the center of the earth.
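For reference, a quick sketch of the period that potential gives (idealized uniform-density Earth assumed, per the caveat below about the conditions for SHM):

```python
import math

# U(r) = (1/2)(mg/R) r^2 is a harmonic potential with omega^2 = g/R,
# so the period of oscillation through the idealized Earth is:
g = 9.81        # m/s^2
R = 6.371e6     # m, mean Earth radius
T = 2 * math.pi * math.sqrt(R / g)
print(T / 60)   # ≈ 84.4 minutes, the classic "gravity train" period
```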
Also I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P
I see with concern that the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic without increased reviewing, or something else; I'm not sure
Not sure about that, but the converse is certainly false :P
Derrida has received a lot of criticism from the experts on the fields he tried to comment on
I personally do not know much about postmodernist philosophy, so I shall not comment on it myself
I do have strong affirmative opinions on textual interpretation, made disjoint from authorial intent, however, which is a central part of Deconstruction theory. But I think that dates back to Heidegger.
I can see why a man of that generation would lean towards that idea. I do too. |