Kernel Spectral Clustering for Community Detection in Complex Networks

This paper proposes a kernel spectral clustering approach for community detection in unweighted networks. The authors employ a primal-dual framework and make use of an out-of-sample extension. They also propose a method to extract from a network a subgraph representative of the overall community structure. The commonly used Modularity statistic serves as the model selection criterion. The effectiveness of the model is demonstrated on synthetic networks and benchmark real network data.

Background

Network
A network (graph) consists of a set of vertices or nodes and a collection of edges that connect pairs of nodes. A way to represent a network with
N nodes is to use a similarity matrix S, which is an [math]N\times N[/math] matrix. This paper deals only with unweighted networks, where the components of S are 1 or 0. The matrix S is called the adjacency matrix: [math]S_{ij}=1[/math] if there is an edge connecting nodes i and j; otherwise [math]S_{ij}=0[/math]. The degree matrix D is defined as a diagonal matrix with diagonal entries [math]d_i=\sum_{j}S_{ij}[/math] indicating the degree of node i, i.e. the number of edges connected to node i.

Community Detection
The structure of networks sometimes reveals a high degree of organization. Nodes with similar properties or behaviors are more likely to be linked together and tend to form modules. Discovering such modules is called community detection, which is a hot topic in network data analysis. Moreover, once communities have been detected, the roles of the nodes in each community can be further investigated. The community structure of a graph can also be used to give a compact visualization of a large network, with each community treated as one big node. This kind of representation is called a supernetwork.
There are other studies which use probabilistic generative models to model and detect communities (such as <ref> Gopalan, Prem, Sean Gerrish, Michael Freedman, David M. Blei, and David M. Mimno. "Scalable inference of overlapping communities." In Advances in Neural Information Processing Systems, pp. 2258-2266. 2012.</ref> and <ref> Airoldi, Edoardo M., David M. Blei, Stephen E. Fienberg, and Eric P. Xing. "Mixed membership stochastic blockmodels." The Journal of Machine Learning Research 9 (2008): 1981-2014.</ref>).
Introduction
Community detection is essentially a clustering problem. Spectral clustering is a standard clustering technique based on the eigendecomposition of a Laplacian matrix. A spectral clustering framework formulated as weighted kernel PCA with primal and dual representations was proposed by <ref>C. Alzate and J. A. K. Suykens, Multiway spectral clustering with out-of-sample extensions through weighted kernel PCA. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 335-347, 2010</ref>. Its main advantage is the extension of the clustering model to out-of-sample nodes: the model can be trained on a small subset of the network and then applied to the rest of the graph. In other words, clustering is based on a previously learned predictive model and can be used for online clustering of huge growing networks. This is new to the field of community detection on networks. Other contributions of the paper include the following two points. First, a criterion based on Modularity <ref>M. E. J. Newman, Modularity and community structure in networks, Proc. Natl. Acad. Sci. USA, vol. 103, no. 23, pp. 8577-8582, 2006</ref> is used for parameter and model selection. Second, regarding the selection of a small representative subgraph as training set, a method based on the Expansion factor <ref>A. S. Maiya and T. Y. Berger-Wolf, Sampling community structure, in Proc. 19th ACM Intl. Conference on the World Wide Web, 2010.</ref> is proposed.

Kernel Spectral Clustering Model

General picture
Classical spectral clustering studies the eigenspectrum of graph Laplacian matrices, i.e. L = D - S. For the kernel spectral clustering model, a graph over the network needs to be built in order to describe the similarity among the nodes in the kernel-based framework.

Primal-dual formulation
The kernel spectral clustering model is described by a primal-dual formulation. Suppose we have a network with
N nodes and an adjacency matrix S. Let [math]x_i[/math] denote the i-th row/column of S. Given [math]N_{tr}[/math] training nodes [math]\{x_1, x_2, \ldots, x_{N_{tr}}\}\subset\mathbb{R}^N [/math] and the number of communities k, the primal problem of spectral clustering via weighted kernel PCA is formulated as follows:
[math] \underset{w^{(l)},e^{(l)},b_l}{\operatorname{min}}\frac{1}{2}\sum_{l=1}^{k-1}w^{(l)^T}w^{(l)}-\frac{1}{2N}\sum_{l=1}^{k-1}r_le^{(l)^T}D_{\Omega}^{-1}e^{(l)} [/math]
where
[math]e^{(l)}=\Phi w^{(l)}+b_l 1_{N_{tr}}[/math] is the projection vector of length [math]N_{tr}[/math]; the index [math]l=1, 2,\ldots, k-1[/math] runs over the score variables needed to encode the k clusters;
[math]\Phi=[\phi(x_1),...,\phi(x_{N_{tr}})][/math] is the [math]N_{tr}\times d_h[/math] feature matrix and [math]\phi:\mathbb{R}^N\rightarrow\mathbb{R}^{d_h}[/math] is a mapping to a high-dimensional feature space;
[math]\{b_l\}[/math] are bias terms;
[math]D_{\Omega}^{-1}\in \mathbb{R}^{N_{tr}\times N_{tr}}[/math] is the inverse of the degree matrix associated to the kernel matrix [math]\Omega[/math] (explained later);
[math]r_l \in \mathbb{R}^+[/math] are regularization constants.
The dual problem related to this primal formulation is:
[math] D_{\Omega}^{-1}M_D \Omega \alpha^{(l)}=\lambda_l \alpha^{(l)} [/math]
where
[math]\Omega[/math] is the kernel matrix with
ij-th entry [math]\Omega_{ij}=K(x_i,x_j)=\phi(x_i)^T\phi(x_j)[/math];
[math]D_{\Omega}[/math] is the diagonal matrix with diagonal entries [math]d_i^{\Omega}=\sum\limits_{j}\Omega_{ij}[/math];
[math]M_D[/math] is a centering matrix defined as [math]M_D=I_{N_{tr}}-(1/1_{N_{tr}}^T D_{\Omega}^{-1}1_{N_{tr}})(1_{N_{tr}}1_{N_{tr}}^T D_{\Omega}^{-1})[/math];
[math]\{\alpha^{(l)}\}[/math] are dual variables.
The kernel function [math]K:\mathbb{R}^N \times \mathbb{R}^N \rightarrow \mathbb{R}[/math] plays the role of the similarity function of the network. The community kernel <ref>Y. Kang and S. Choi, Kernel PCA for community detection, in Business Intelligence Conference, 2009</ref> is used to build up the similarity matrix of the graph: the kernel value [math]K(x_i, x_j)[/math] is defined as the number of edges connecting the common neighbors of nodes i and j.

Encoding/decoding scheme
In the ideal case of k well-separated clusters and properly chosen kernel parameters, the matrix [math]D_{\Omega}^{-1} M_D \Omega[/math] has k-1 piecewise constant eigenvectors with eigenvalue 1. The codebook [math]\mathcal{CB}=\{c_p\}_{p=1}^k[/math] is obtained in the training process from the rows of the binarized projection matrix for the training data, [math][sign(e^{(1)}),\ldots,sign(e^{(k-1)})][/math]. The cluster indicators for out-of-sample points are obtained by [math]sign(e_{test}^{(l)})=sign(\Omega_{test}\alpha^{(l)}+b_l 1_{N_{test}})[/math].

Model Selection Criterion
Modularity is a quality function for evaluating the community structure of a network. The idea is to compare the actual density of edges with the expected density in a random graph. Modularity can be either positive or negative, with high positive values indicating the possible presence of a strong community structure. The modularity is defined as
[math] Q=\frac{1}{2m}\sum_{ij}(S_{ij}-\frac{d_id_j}{2m})\delta_{ij} [/math]
where
[math]d_i[/math] is the degree of node i; m is the total number of edges of the network; and the Kronecker delta [math]\delta_{ij}[/math] indicates whether or not nodes i and j belong to the same community. The model with the highest Modularity value is selected.
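As a concrete illustration of the definition above (the two-triangle graph here is a made-up example, not from the paper), Modularity can be computed directly from the adjacency matrix and a community labeling:

```python
import numpy as np

def modularity(S, labels):
    """Q = (1/2m) * sum_ij (S_ij - d_i d_j / 2m) * delta(c_i, c_j)."""
    d = S.sum(axis=1)                       # node degrees
    two_m = d.sum()                         # 2m = total degree
    delta = np.equal.outer(labels, labels)  # delta_ij: same community?
    return ((S - np.outer(d, d) / two_m) * delta).sum() / two_m

# Two triangles joined by a single edge; each triangle is one community.
S = np.zeros((6, 6), dtype=int)
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    S[i, j] = S[j, i] = 1

Q = modularity(S, np.array([0, 0, 0, 1, 1, 1]))
print(Q)  # 5/14 ≈ 0.357, a strongly modular partition
```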
It can be shown that finding the highest Modularity value is equivalent to the following optimization problem:
[math] \underset{X}{\max}\left\{ tr(X^{T}MX) \right\}, s.t. X^{T}X=D^{M}[/math]
where
[math]M=S-\frac{1}{2m}dd^{T}[/math] is the Modularity matrix or Q-Laplacian;
[math]d=[d_{1},\cdots,d_{N}]^{T}[/math] is the vector of node degrees;
[math]D^{M} \in \mathbb{R}^{k \times k}[/math] is a diagonal matrix with the i-th diagonal entry being the number of nodes in the i-th cluster.
[math]X[/math] represents the cluster indicator matrix
Selecting a Representative Subgraph
The expansion factor (EF) is used to select the subgraph.
A greedy strategy for optimizing EF is provided in the paper:
Algorithm EF

Input:
network of [math]N[/math] nodes [math]\mathcal{V}=\left\{ n_{i} \right\}^{N}_{i=1}[/math] (represented as an [math]N \times N[/math] adjacency matrix [math]A[/math])
size of subgraph [math]m[/math]

Output: active set of [math]m[/math] selected nodes

1). randomly select an initial subgraph [math]\mathcal{G}=\left\{ n_{j} \right\}^{m}_{j=1} \subset \mathcal{V}[/math]
2). compute [math]EF(\mathcal{G})[/math]
3). randomly pick two nodes [math]n_{*}\in \mathcal{G}[/math] and [math]n_{+} \in \mathcal{V}-\mathcal{G}[/math]
4). let [math]\mathcal{W}=(\mathcal{G}-\left\{ n_{*} \right\})\cup \left\{ n_{+} \right\}[/math]
5). if [math]EF(\mathcal{W})\gt EF(\mathcal{G})[/math], swap([math]\left\{ n_{*} \right\},\left\{ n_{+} \right\}[/math])
6). repeat the above procedure until the change in the EF value is too small (compared to a threshold specified by the user)
7). return the final result [math]\mathcal{G}[/math]
The time for selection depends on the size of the entire network [math]N[/math] and its density, the chosen size [math]m[/math], and the threshold [math]\epsilon[/math].
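The greedy procedure can be sketched in Python. Note that the text does not spell out the expansion-factor formula; the definition used below, EF(G) = |N(G) \ G| / |G| (neighbors reached outside G, relative to the subgraph size), is an assumption based on expansion sampling, and the paper's exact formula may differ:

```python
import random

def expansion_factor(adj, G):
    # Assumed definition: neighbors reachable outside G, relative to |G|.
    neighbours = set()
    for v in G:
        neighbours |= adj[v]
    return len(neighbours - G) / len(G)

def select_subgraph(adj, m, eps=1e-6, max_iter=500, seed=0):
    rng = random.Random(seed)
    nodes = set(adj)
    G = set(rng.sample(sorted(nodes), m))        # 1) random initial subgraph
    ef = expansion_factor(adj, G)                # 2)
    for _ in range(max_iter):
        n_star = rng.choice(sorted(G))           # 3) one node inside G ...
        n_plus = rng.choice(sorted(nodes - G))   #    ... and one outside
        W = (G - {n_star}) | {n_plus}            # 4) candidate swap
        ef_w = expansion_factor(adj, W)
        if ef_w > ef:                            # 5) keep improving swaps
            improvement = ef_w - ef
            G, ef = W, ef_w
            if improvement < eps:                # 6) stop when gains vanish
                break
    return G                                     # 7)

# Hypothetical example: a 6-node path graph as a dict of neighbor sets.
adj = {i: set() for i in range(6)}
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
    adj[i].add(j)
    adj[j].add(i)

G = select_subgraph(adj, m=3)
print(sorted(G))
```

Since only improving swaps are accepted, the EF value of the returned subgraph is never below that of the random initial one.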
Comments
In this paper, the authors used 'kernel' spectral clustering. However, this is in fact spectral clustering with a different similarity measure; there is no implicit mapping from the input space to an RKHS.
References
<references/>
Oscillations and Waves

Transverse and longitudinal waves

Transverse wave: A wave in which the particles of the medium vibrate at right angles to the direction of propagation of the wave is called a transverse wave. This wave travels in the form of crests and troughs.

Longitudinal wave: A wave in which the particles of the medium vibrate in the same direction in which the wave is propagating is called a longitudinal wave. This wave travels in the form of compressions and rarefactions.
1. The equation of displacement relation in a progressive wave is given by
y( x, t) = a sin ( kx − ω t + Φ)
2. The speed of transverse wave on a stretched string is given by
\tt v = \sqrt{\frac{T}{\mu}}
3. The general formula for speed of longitudinal waves in a medium is
\tt v = \sqrt{\frac{\beta}{\rho}}
4. The speed of longitudinal waves in a solid bar is
\tt v = \sqrt{\frac{Y}{\rho}}
5. The speed of a longitudinal wave in an ideal gas is given by
\tt v = \sqrt{\frac{p}{\rho}}
6. Laplace's correction: Laplace pointed out that the pressure variations in the propagation of sound are adiabatic, not isothermal. Thus,
\tt v = \sqrt{\frac{\gamma p}{\rho}}
In principle, the Hamiltonian represents the energy of a system. Whether or not you want to model your system to have kinetic energy is up to you and what you need. For example, consider an atom with an electron that can be approximated as a two-level system (i.e. it has only a ground state and one excited state).
The ground state $|g\rangle$ has some energy $E_0$, and the excited state $|e\rangle$ has some energy $E_0+\Delta E$. You are always free to choose the ground state energy as you like, so choose $E_0=-\frac{\Delta E}{2}$, and you have the Hamiltonian of your system
$$H=\frac{\Delta E}{2}\left(|e\rangle\langle e|-|g\rangle\langle g|\right)=\frac{\Delta E}{2}\begin{pmatrix}-1&0\\0&1\end{pmatrix}=-\frac{\Delta E}{2}\sigma_z $$
with $\sigma_z$ the Pauli matrix. This is a perfectly valid (albeit a bit boring) Hamiltonian that has no kinetic term: you have decided that you don't really care about the kinetic energy of the atom, you're interested in the state of the electron. More interesting Hamiltonians that don't model kinetic degrees of freedom are given in Paradoxy's answer.
And as a note, a Hamiltonian is just a Hermitian operator bounded below. There is, in principle, no further requirement. You can take any Hermitian operator bounded below and start solving the Schrödinger equation. Of course, whether or not this has any physical meaning is another story.
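As a quick numerical sanity check of the matrix above (a sketch with an arbitrary value of $\Delta E$, and basis ordering $(|g\rangle, |e\rangle)$ as in the answer):

```python
import numpy as np

dE = 1.0  # energy gap ΔE (arbitrary units, an assumed value)

# Basis ordering (|g>, |e>)
g = np.array([1.0, 0.0])
e = np.array([0.0, 1.0])

# H = ΔE/2 (|e><e| - |g><g|) = -ΔE/2 σ_z
H = dE / 2 * (np.outer(e, e) - np.outer(g, g))

print(g @ H @ g, e @ H @ e)   # energies -ΔE/2 and +ΔE/2
print(np.linalg.eigvalsh(H))  # spectrum {-ΔE/2, +ΔE/2}
```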
Tagged: symmetric matrix

Problem 572
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017.
There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.

Problem 7. Let $A=\begin{bmatrix} -3 & -4\\ 8& 9 \end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix} -1 \\ 2 \end{bmatrix}$.
(a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$.
(b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$.

Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.

Problem 9. Determine whether each of the following sentences is true or false.
(a) There is a $3\times 3$ homogeneous system that has exactly three solutions.
(b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric.
(c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$.
(d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
(e) The vectors \[\mathbf{v}_1=\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \mathbf{v}_2=\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \mathbf{v}_3=\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\] are linearly independent.

Problem 564
Let $A$ and $B$ be $n\times n$ skew-symmetric matrices. Namely $A^{\trans}=-A$ and $B^{\trans}=-B$.
(a) Prove that $A+B$ is skew-symmetric.
(b) Prove that $cA$ is skew-symmetric for any scalar $c$.
(c) Let $P$ be an $m\times n$ matrix. Prove that $P^{\trans}AP$ is skew-symmetric.
(d) Suppose that $A$ is real skew-symmetric. Prove that $iA$ is an Hermitian matrix.
(e) Prove that if $AB=-BA$, then $AB$ is a skew-symmetric matrix.
(f) Let $\mathbf{v}$ be an $n$-dimensional column vector. Prove that $\mathbf{v}^{\trans}A\mathbf{v}=0$.
(g) Suppose that $A$ is a real skew-symmetric matrix and $A^2\mathbf{v}=\mathbf{0}$ for some vector $\mathbf{v}\in \R^n$. Then prove that $A\mathbf{v}=\mathbf{0}$.

Problem 556
Let $\mathbf{v}$ be a nonzero vector in $\R^n$.
Then the dot product $\mathbf{v}\cdot \mathbf{v}=\mathbf{v}^{\trans}\mathbf{v}\neq 0$. Set $a:=\frac{2}{\mathbf{v}^{\trans}\mathbf{v}}$ and define the $n\times n$ matrix $A$ by \[A=I-a\mathbf{v}\mathbf{v}^{\trans},\] where $I$ is the $n\times n$ identity matrix.
Prove that $A$ is a symmetric matrix and $AA=I$.
Conclude that the inverse matrix is $A^{-1}=A$.

Problem 538

(a) Suppose that $A$ is an $n\times n$ real symmetric positive definite matrix. Prove that \[\langle \mathbf{x}, \mathbf{y}\rangle:=\mathbf{x}^{\trans}A\mathbf{y}\] defines an inner product on the vector space $\R^n$.
(b) Let $A$ be an $n\times n$ real matrix. Suppose that \[\langle \mathbf{x}, \mathbf{y}\rangle:=\mathbf{x}^{\trans}A\mathbf{y}\] defines an inner product on the vector space $\R^n$.
Prove that $A$ is symmetric and positive definite.
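A quick numerical sanity check (not a proof) of Problem 556's claim that $A=I-a\mathbf{v}\mathbf{v}^{\trans}$ with $a=2/(\mathbf{v}^{\trans}\mathbf{v})$ is symmetric and its own inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
v = rng.standard_normal(n)           # any nonzero vector
a = 2.0 / (v @ v)
A = np.eye(n) - a * np.outer(v, v)   # A = I - (2 / v^T v) v v^T

# A is symmetric and an involution, hence A^{-1} = A
print(np.allclose(A, A.T), np.allclose(A @ A, np.eye(n)))  # True True
```

Expanding $(I-a\mathbf{v}\mathbf{v}^{\trans})^2 = I - 2a\mathbf{v}\mathbf{v}^{\trans} + a^2(\mathbf{v}^{\trans}\mathbf{v})\mathbf{v}\mathbf{v}^{\trans}$ and noting $a^2(\mathbf{v}^{\trans}\mathbf{v})=2a$ gives the identity, which the check above confirms numerically.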
Problem 457
Let $A$ be a real symmetric $n\times n$ matrix with $0$ as a simple eigenvalue (that is, the algebraic multiplicity of the eigenvalue $0$ is $1$), and let us fix a vector $\mathbf{v}\in \R^n$.
(a) Prove that for sufficiently small positive real $\epsilon$, the equation \[A\mathbf{x}+\epsilon\mathbf{x}=\mathbf{v}\] has a unique solution $\mathbf{x}=\mathbf{x}(\epsilon) \in \R^n$.
(b) Evaluate \[\lim_{\epsilon \to 0^+} \epsilon \mathbf{x}(\epsilon)\] in terms of $\mathbf{v}$, the eigenvectors of $A$, and the inner product $\langle\, ,\,\rangle$ on $\R^n$.
(University of California, Berkeley, Linear Algebra Qualifying Exam)

Problem 396
A real symmetric $n \times n$ matrix $A$ is called
positive definite if \[\mathbf{x}^{\trans}A\mathbf{x}>0\] for all nonzero vectors $\mathbf{x}$ in $\R^n$.
(a) Prove that the eigenvalues of a real symmetric positive-definite matrix $A$ are all positive.
(b) Prove that if the eigenvalues of a real symmetric matrix $A$ are all positive, then $A$ is positive-definite.

Problem 385
Let
\[A=\begin{bmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{bmatrix}.\] Determine whether the matrix $A$ is diagonalizable. If it is diagonalizable, then diagonalize $A$. That is, find a nonsingular matrix $S$ and a diagonal matrix $D$ such that $S^{-1}AS=D$.
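A numerical cross-check of Problem 385 (a sketch using NumPy, not a substitute for the by-hand diagonalization). Since $A$ is real symmetric it is diagonalizable with an orthogonal eigenvector matrix:

```python
import numpy as np

A = np.array([[ 2, -1, -1],
              [-1,  2, -1],
              [-1, -1,  2]], dtype=float)

# A is real symmetric, so eigh applies and S is orthogonal (S^{-1} = S^T)
w, S = np.linalg.eigh(A)
D = np.diag(w)

print(np.round(w, 6))              # eigenvalues 0, 3, 3 in ascending order
print(np.allclose(S @ D @ S.T, A)) # True: A = S D S^{-1}
```

The zero eigenvalue is visible by inspection: every row of $A$ sums to 0, so $(1,1,1)^{\trans}$ is an eigenvector with eigenvalue 0, and the trace forces the remaining eigenvalues to sum to 6.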
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Highlights of experimental results from ALICE
(Elsevier, 2017-11)
Highlights of recent results from the ALICE collaboration are presented. The collision systems investigated are Pb–Pb, p–Pb, and pp, and results from studies of bulk particle production, azimuthal correlations, open and ...
Animation Nodes Version
Animation Nodes
v2.1 includes very fast and efficient noise functions, so I will be using this version in my answer. However, I also provide an alternative using older versions of Animation Nodes, though it is not as efficient, as I stated above.
The Theory
The image you posted above is a
trace visualization of what is known as a divergence-free vector field. To demonstrate what that means, consider particles that move along the lines you see above (the field represents their velocity): if the field is indeed divergence free, those particles will never collide. That's what gives these visualizations their beauty; the lines never intersect.
Mathematically, a vector field $F$ is divergence free if $\nabla \cdot F = 0$. In the paper
"Curl-Noise for Procedural Fluid Flow", Robert Bridson described a method to generate such divergence-free vector fields from simple Perlin noise. He proposed computing the curl of a vector field composed of Perlin noise; it is a known identity that the curl of any field is automatically divergence free. In particular, he proposed an equation to compute the "2D curl" of a simple Perlin scalar field, which is exactly what we want for the visualization above. The equation he proposed is:
$$\vec{v}(x, y) = \left( \frac{\partial \Psi}{\partial y}, -\frac{\partial \Psi}{\partial x} \right)$$
where $\Psi$ is the Perlin noise field. Don't worry if you don't understand the equation; I will walk you through it. Note that the computed vector is just the gradient rotated by 90 degrees.
Now that we know how to compute the the vector field, we can generate the splines by tracing the vector field using what is known as
Euler integration; if you don't understand this method, don't worry, you will get it through our implementation.

Implementation
To compute the partial derivatives in the equation above, we are going to use what is known as the
Central finite difference method. I will leave that for you to study and practice; here I will just do the implementation directly:
We just take the input vector, move it a bit to the right and a bit to the left, evaluate the noise at the new location then take the difference. Same for the $y$ axis. When we make the amplitude the reciprocal of the epsilon (the bit we moved the vector by), we won't have to divide by it.
Next we will make a loop that computes the points of the splines using Euler's method:
We start at some vector and compute the curl using the equation above (via the node group we made earlier). Then we move in the direction of that curl and reassign the initial location to be the new location.
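The two steps above (central-difference curl, then Euler tracing) can also be sketched outside Animation Nodes in plain Python. The `psi` field here is a smooth analytic stand-in for the Perlin noise node (an assumption for illustration; any smooth scalar field demonstrates the construction):

```python
import math

def psi(x, y):
    # Stand-in scalar field for Perlin noise (assumed for illustration)
    return math.sin(1.3 * x) * math.cos(0.9 * y) + 0.5 * math.sin(0.7 * x + 2.1 * y)

def curl2d(x, y, eps=1e-4):
    """2D 'curl' v = (d(psi)/dy, -d(psi)/dx) via central finite differences."""
    dpdx = (psi(x + eps, y) - psi(x - eps, y)) / (2 * eps)
    dpdy = (psi(x, y + eps) - psi(x, y - eps)) / (2 * eps)
    return dpdy, -dpdx

def trace(x, y, steps=100, h=0.05):
    """Euler integration: repeatedly step along the field direction."""
    pts = [(x, y)]
    for _ in range(steps):
        vx, vy = curl2d(x, y)
        x, y = x + h * vx, y + h * vy  # move, then reassign the location
        pts.append((x, y))
    return pts

spline = trace(0.2, 0.3)
print(len(spline))  # 101 points along one traced spline
```

Because the field is the rotated gradient, its divergence vanishes (up to finite-difference error), which is exactly the property that keeps the traced splines from crossing.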
Next we will make a loop that generates the splines for multiple points:
And by viewing the output splines:
We get something similar to what you want to achieve. Now by adding the value of the perlin noise as the z location of the spline points, we get exactly the result you are looking for:
It should be noted that there is a much more efficient way to do this, but this is the easiest one; if you want to know the other way, let me know.
This should be it for splines. I think particles are not hard to create and you should create them as a practice. Let me know if you need elaboration or help on any part.
Edit 1 Using Older Versions Of Animation Nodes:
The only node that is not available in older version is the
Vector Noise node. The
mathutils Python module provides an alternative, but it is much slower and doesn't give a lot of control, which is why I didn't want to use it.
All you have to do is replace the
Vector Noise node with an expression node that takes a vector list and returns a float list with an expression:
[mathutils.noise.noise(x) for x in vectors]
Making sure you have imported mathutils and named the vector list
vectors:
There are also other types of noise that you can experiment with, you can find the API here.
Deform It Along A Grid
Notice that in the Splines loop, we set the z location to be the value of the noise. Instead, you can just set it to anything else. In your case, if you want to deform it along a grid, you can use a BVH tree:
And yes, this works with lattice deformed grids as you may see in my example above.
Edit 2 What To Put At A
The hidden nodes are
Get List Element nodes with indices 0, 1, 2, 3 from top to bottom.

How To Add A Generator
In the
Loop Input node at the far left, there is a plus button called new Generator Output, search for vectors after pressing it. File for v2.0 to study
Edit 3
To assign different colors to each spline, you should separate splines to different objects (This is your best option), this can be done using such node tree:
Then assign the material to all the generated splines by adding it to a single one, selecting all of them, and then Ctrl+L >> Material. For this material, we will use such a node tree:
Which will assign a random color to each spline.
I found an interesting infinite sequence recently in the form of a 'two storey continued fraction' with natural number entries:
$$\frac{e^2-3}{e^2+1}=\cfrac{2-\cfrac{3-\cfrac{4-\cdots}{4+\cdots}}{3+\cfrac{4-\cdots}{4+\cdots}}}{2+\cfrac{3-\cfrac{4-\cdots}{4+\cdots}}{3+\cfrac{4-\cdots}{4+\cdots}}}$$
The numerical computation was done 'backwards', starting from some $x_n=1$ we compute:
$$x_{n-1}=\frac{a_n-x_n}{a_n+x_n}$$
And so on, until we get to $x_0$. The sequence converges for $n \to \infty$ if $a_n>1$ (or so it seems).
For constant $a_n$ we seem to have quadratic irrationals, for example:
$$\frac{\sqrt{17}-3}{2}=\cfrac{2-\cfrac{2-\cfrac{2-\cdots}{2+\cdots}}{2+\cfrac{2-\cdots}{2+\cdots}}}{2+\cfrac{2-\cfrac{2-\cdots}{2+\cdots}}{2+\cfrac{2-\cdots}{2+\cdots}}}$$
For $a_n=2^n$ we seem to have:
$$\frac{1}{2}=\cfrac{2-\cfrac{4-\cfrac{8-\cdots}{8+\cdots}}{4+\cfrac{8-\cdots}{8+\cdots}}}{2+\cfrac{4-\cfrac{8-\cdots}{8+\cdots}}{4+\cfrac{8-\cdots}{8+\cdots}}}$$
I found no other closed forms so far, and I don't know how to prove the formulas above. How can we prove them? What is known about such continued fractions?
There is another curious thing. If we try to expand some number in this kind of fraction, we can do it the following way:
$$x_0=x$$
$$a_0=\left[\frac{1}{x_0} \right]$$
$$x_1=\frac{1-a_0x_0}{1+a_0x_0}$$
$$a_1=\left[\frac{1}{x_1} \right]$$
However, this kind of expansion will not give us the above sequences. We will get faster growing entries. Moreover, the fraction will be finite for any rational number. For example, in the list notation:
$$\frac{3}{29}=[9,28]$$
You can easily check this expansion for any rational number.
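A short exact-arithmetic check of this claim, implementing the expansion steps above with Python's `Fraction` type (for $0 < x < 1$, `int(1/x)` is exactly the floor in the recursion):

```python
from fractions import Fraction

def expand(x, max_terms=30):
    """Expansion a_n = floor(1/x_n), x_{n+1} = (1 - a_n x_n) / (1 + a_n x_n)."""
    terms = []
    while x != 0 and len(terms) < max_terms:
        a = int(Fraction(1, 1) / x)   # floor(1/x) for positive rational x
        terms.append(a)
        x = (1 - a * x) / (1 + a * x)
    return terms

print(expand(Fraction(3, 29)))  # [9, 28]
```

Here $x_1=(1-9\cdot\frac{3}{29})/(1+9\cdot\frac{3}{29})=\frac{1}{28}$, and the next step gives $x_2=0$ exactly, so the expansion terminates at $[9,28]$ as stated.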
As for the constant above we get:
$$\frac{e^2-3}{e^2+1}=[1,3,31,74,315,750,14286,\dots]$$
Not the same as $[1,2,3,4,5,6,7,\dots]$ above!
We have similar sequences growing exponentially for any irrational number I checked.
$$e-2=[1,6,121,284,1260,3404,25678,\dots]$$
$$\pi-3=[7,224,471,2195,10493,46032,119223,\dots]$$
By the way, if we try CF convergents, we get almost the same expansion, but finite:
$$\frac{355}{113}-3=[7,225]$$
$$\frac{4272943}{1360120}-3=[7,224,471,2195,18596,227459,\dots]$$
So, the convergents of this sequence are not the same as for the simple continued fraction, but similar.
Comparing the expansion by the method above and the closed forms at the top of the post, we can see that, unlike for simple continued fractions, this expansion is not unique. Can we explain why?
Here is the Mathematica code to compute the limit of the first fraction:
Nm = 50;
Cf = Table[j, {j, 1, Nm}];
b0 = (Cf[[Nm]] - 1)/(Cf[[Nm]] + 1);
Do[b1 = N[(Cf[[Nm - j]] - b0)/(Cf[[Nm - j]] + b0), 7500]; b0 = b1, {j, 1, Nm - 2}]
N[b0/Cf[[1]], 50]
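The same backward recurrence in Python. Double precision suffices here because each map $x \mapsto (a-x)/(a+x)$ contracts perturbations by roughly a factor $2/a$, so the starting value is quickly forgotten; this is a numerical check of the claimed closed form, not a proof:

```python
import math

Nm = 50
x = 1.0
# Backward recurrence x_{n-1} = (a_n - x_n) / (a_n + x_n), a_n = 2, 3, 4, ...
for a in range(Nm, 1, -1):
    x = (a - x) / (a + x)

target = (math.e**2 - 3) / (math.e**2 + 1)
print(x, target)  # both ≈ 0.52323
```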
And here is the code to obtain the expansion in the usual way:
x = (E^2 - 3)/(E^2 + 1);
x0 = x;
Nm = 27;
Cf = Table[1, {j, 1, Nm}];
Do[If[x0 != 0, a = Floor[1/x0]; x1 = N[(1 - x0 a)/(x0 a + 1), 19500]; Print[j, " ", a, " ", N[x1, 16]]; Cf[[j]] = a; x0 = x1], {j, 1, Nm}]
b0 = (1 - 1/Cf[[Nm]])/(1 + 1/Cf[[Nm]]);
Do[b1 = N[(1 - b0/Cf[[Nm - j]])/(1 + b0/Cf[[Nm - j]]), 7500]; b0 = b1, {j, 1, Nm - 2}]
N[x - b0/Cf[[1]], 20]
Update
I have derived the forward recurrence relations for numerator and denominator:
$$p_{n+1}=(a_n-1)p_n+2a_{n-1}p_{n-1}$$ $$q_{n+1}=(a_n-1)q_n+2a_{n-1}q_{n-1}$$
They have the same form as for generalized continued fractions (a special case). Now I understand why the expansions are not unique.
Let $U_n=\sum_{i=1}^n X_i,V_n=\sum_{i=1}^n Y_i$, $n\geq 1$, be a two-dimensional random walk with i.i.d. increments $(X_n, Y_n)$, where $X_n, Y_n$ are discrete random variables with joint pmf $P_{X,Y}$. $X_n,Y_n$ have the following properties: \begin{align} 0 < \mathbb{E}[X_n]=\mu_X, \quad 0 < \mathbb{E}[Y_n]=\mu_Y,\quad |X_n/\mu_X| \leq K_X \text{ and } |Y_n/\mu_Y| \leq K_Y \end{align} for finite $K_X,K_Y$. The stopping time $\tau(t)$ is given by \begin{align} \tau(t) = \min(n \geq 0: U_n/\mu_X \geq t, V_n/\mu_Y \geq t) \end{align}
I am looking for an upper bound for $E[\tau(t)]$ that captures the asymptotic behavior as $t\rightarrow \infty$. I hope for an upper bound similar to \begin{align} E[\tau(t)] \leq t + \frac{1}{\sqrt{2\pi}} \sqrt{\text{Var}\left(\frac{X_1}{\mu_X} - \frac{Y_1}{\mu_Y}\right)}\sqrt{t} + \mathcal{O}(1), \end{align} as simulations suggest. However, a higher constant in front of $\sqrt{\text{Var}\left(\frac{X_1}{\mu_X} - \frac{Y_1}{\mu_Y}\right)}\sqrt{t}$ or faster growing remainder terms are also sufficient, i.e. $\mathcal{O}(t^{1/4})$ instead of $\mathcal{O}(1)$ is fine.
In the one-dimensional cases, with stopping times $ \tau_1(t)=\min(n\geq 0: U_n/\mu_X \geq t)$, $\tau_2(t)=\min(n\geq 0: V_n/\mu_Y \geq t)$ and $\tau_{12}(t)=\min(n \geq0 : \frac{1}{2}(U_n/\mu_X + V_n/\mu_Y) \geq t)$, the following bounds hold \begin{align} \mathbb{E}[\tau_1(t)] &= \mathbb{E}[U_{\tau_1(t)}/\mu_X] \leq \mu_X t + K_X\\ \mathbb{E}[\tau_2(t)] &= \mathbb{E}[V_{\tau_2(t)}/\mu_Y] \leq \mu_Y t + K_Y,\\ \mathbb{E}[\tau_{12}(t)] &= \frac{1}{2}\mathbb{E}[U_{\tau_{12}}/\mu_X+V_{\tau_{12}(t)}/\mu_Y] \leq t + \max(K_X,K_Y), \end{align} for $t > 0$, where the equalities follow from Wald's equality and the inequalities follow since $X_n$ and $Y_n$ are bounded.
Partial Solution
My main idea is to write the stopping time $\tau(t)$ as a sum of two terms: the time until $\frac{1}{2}(U_n/\mu_X + V_n/\mu_Y)$ hits the boundary $t$, plus the time until $U_n$ hits the boundary $\mu_X t$ starting from $U_{\tau_{12}(t)}$ or $V_n$ hits the boundary $\mu_Y t$ starting from $V_{\tau_{12}(t)}$: \begin{align} \mathbb{E}[\tau(t)] &\stackrel{(a)}{\leq} \mathbb{E}[\tau_{12}(t) + \tau_1(t-U_{\tau_{12}(t)}/\mu_X)+\tau_2(t-V_{\tau_{12}(t)}/\mu_Y)]+\mathcal{O}(1)\\ &\leq t +\mathbb{E}\left[1\{U_{\tau_{12}(t)}\leq \mu_X t\}\left(t- U_{\tau_{12}(t)}/\mu_X\right)\right]\nonumber\\ &\quad+\mathbb{E}\left[1\{V_{\tau_{12}(t)}\leq \mu_Y t\}\left(t- V_{\tau_{12}(t)}/\mu_Y\right)\right]+\mathcal{O}(1) \end{align} However, I have not been able to come up with an argument for whether/under what conditions (a) is true, since the random variables $X_n$ and $Y_n$ are allowed to take negative values. My main concern is that $U_n$ may have decreased below $\mu_X t$ when $V_n$ hits the boundary $\mu_Y t$, or vice versa.
Assuming that (a) is correct, I was able to obtain the bound \begin{align} \mathbb{E}[\tau(t)] \leq t+ \frac{1}{2}\sqrt{\text{Var}\left(\frac{X_1}{\mu_X} - \frac{Y_1}{\mu_Y}\right)}\sqrt{t}+\mathcal{O}(t^{1/4}), \end{align} which is sufficient for my application.
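Before trusting (a), the conjectured $t + c\sqrt{t}$ behavior can be probed by simulation. Below is a minimal Monte Carlo sketch; the step distributions, the means, and the definition $\tau(t)=\min\{n : U_n/\mu_X \geq t \text{ and } V_n/\mu_Y \geq t\}$ are illustrative assumptions, since the exact setup is fixed earlier in the question.

```python
import random

# Assumed setup (illustrative): tau(t) = min{n : U_n/mu_x >= t and V_n/mu_y >= t},
# where U_n, V_n are random walks with bounded i.i.d. steps of mean mu_x, mu_y.
def sample_tau(t, mu_x=1.0, mu_y=2.0, rng=random):
    u = v = 0.0
    n = 0
    while u / mu_x < t or v / mu_y < t:
        u += rng.uniform(0.0, 2.0 * mu_x)  # bounded step with mean mu_x
        v += rng.uniform(0.0, 2.0 * mu_y)  # bounded step with mean mu_y
        n += 1
    return n

def mean_tau(t, reps=500, seed=0):
    rng = random.Random(seed)
    return sum(sample_tau(t, rng=rng) for _ in range(reps)) / reps

# If E[tau(t)] <= t + c*sqrt(t) + o(sqrt(t)), then (mean - t)/sqrt(t) stays bounded.
for t in (100, 400):
    m = mean_tau(t)
    print(t, m, (m - t) / t ** 0.5)
```

For these uniform steps the printed ratio stays near a constant as $t$ grows, which is consistent with a $\sqrt{t}$ second-order term.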
For more details, see link.
Any suggestions or ideas are appreciated. |
Problem 270
Let
\[A=\begin{bmatrix} 4 & 1\\ 3& 2 \end{bmatrix}\] and consider the following subset $V$ of the 2-dimensional vector space $\R^2$. \[V=\{\mathbf{x}\in \R^2 \mid A\mathbf{x}=5\mathbf{x}\}.\] (a) Prove that the subset $V$ is a subspace of $\R^2$.
(b) Find a basis for $V$ and determine the dimension of $V$.

Problem 260
Let \[A=\begin{bmatrix}
1 & 1 & 2 \\ 2 &2 &4 \\ 2 & 3 & 5 \end{bmatrix}.\] (a) Find a matrix $B$ in reduced row echelon form such that $B$ is row equivalent to the matrix $A$. (b) Find a basis for the null space of $A$. (c) Find a basis for the range of $A$ that consists of columns of $A$. For each column $A_j$ of $A$ that does not appear in the basis, express $A_j$ as a linear combination of the basis vectors.
(d) Exhibit a basis for the row space of $A$.

Problem 252
Let $W$ be the subset of $\R^3$ defined by
\[W=\left \{ \mathbf{x}=\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\in \R^3 \quad \middle| \quad 5x_1-2x_2+x_3=0 \right \}.\] Exhibit a $1\times 3$ matrix $A$ such that $W=\calN(A)$, the null space of $A$. Conclude that the subset $W$ is a subspace of $\R^3$.

Problem 222
Suppose that $n\times n$ matrices $A$ and $B$ are similar.
Then show that the nullity of $A$ is equal to the nullity of $B$.
In other words, the dimension of the null space (kernel) $\calN(A)$ of $A$ is the same as the dimension of the null space $\calN(B)$ of $B$.

Problem 211
In this post, we explain how to diagonalize a matrix if it is diagonalizable.
As an example, we solve the following problem.
Diagonalize the matrix
\[A=\begin{bmatrix} 4 & -3 & -3 \\ 3 &-2 &-3 \\ -1 & 1 & 2 \end{bmatrix}\] by finding a nonsingular matrix $S$ and a diagonal matrix $D$ such that $S^{-1}AS=D$.
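As a quick numerical cross-check of such a diagonalization, the sketch below verifies candidate eigenpairs for this particular $A$ in plain Python. The eigenvectors are hand-computed values assumed for illustration (not taken from the post's solution); verifying $A\mathbf{v}=\lambda\mathbf{v}$ for three independent vectors confirms $S^{-1}AS=D$ with $D=\mathrm{diag}(1,1,2)$.

```python
# Candidate eigenpairs for A (hand-computed; verifying them numerically
# confirms S^{-1} A S = D with D = diag(1, 1, 2)).
A = [[4, -3, -3],
     [3, -2, -3],
     [-1, 1, 2]]

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

eigenpairs = [
    (1, [1, 1, 0]),   # lambda = 1, a solution of (A - I)x = 0
    (1, [1, 0, 1]),   # second independent vector for lambda = 1
    (2, [3, 3, -1]),  # lambda = 2
]

for lam, v in eigenpairs:
    assert matvec(A, v) == [lam * x for x in v]
print("S = matrix with these eigenvectors as columns, D = diag(1, 1, 2)")
```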
(Update 10/15/2017. A new example problem was added.)
Problem 154
Define the map $T:\R^2 \to \R^3$ by $T \left ( \begin{bmatrix}
x_1 \\ x_2 \end{bmatrix}\right )=\begin{bmatrix} x_1-x_2 \\ x_1+x_2 \\ x_2 \end{bmatrix}$. (a) Show that $T$ is a linear transformation. (b) Find a matrix $A$ such that $T(\mathbf{x})=A\mathbf{x}$ for each $\mathbf{x} \in \R^2$.
(c) Describe the null space (kernel) and the range of $T$ and give the rank and the nullity of $T$.

Problem 121
Let $A$ be an $m \times n$ real matrix. Then the
null space $\calN(A)$ of $A$ is defined by \[ \calN(A)=\{ \mathbf{x}\in \R^n \mid A\mathbf{x}=\mathbf{0}_m\}.\] That is, the null space is the set of solutions to the homogeneous system $A\mathbf{x}=\mathbf{0}_m$.
Prove that the null space $\calN(A)$ is a subspace of the vector space $\R^n$.
(Note that the null space is also called the kernel of $A$.)

Problem 38
Let $A$ be an $m \times n$ real matrix.
Then the kernel of $A$ is defined as $\ker(A)=\{ x\in \R^n \mid Ax=0 \}$.
The kernel is also called the null space of $A$.
Suppose that $A$ is an $m \times n$ real matrix such that $\ker(A)=0$. Prove that $A^{\trans}A$ is invertible.
(Stanford University Linear Algebra Exam) |
(a) If $AB=B$, then $B$ is the identity matrix. (b) If the coefficient matrix $A$ of the system $A\mathbf{x}=\mathbf{b}$ is invertible, then the system has infinitely many solutions. (c) If $A$ is invertible, then $ABA^{-1}=B$. (d) If $A$ is an idempotent nonsingular matrix, then $A$ must be the identity matrix. (e) If $x_1=0, x_2=0, x_3=1$ is a solution to a homogeneous system of linear equation, then the system has infinitely many solutions.
Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$. If \[A\begin{bmatrix}1 \\3 \\5\end{bmatrix}=B\begin{bmatrix}2 \\6 \\10\end{bmatrix},\] then is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not.
(a) Suppose that a $3\times 3$ system of linear equations is inconsistent. Is the coefficient matrix of the system nonsingular?
(b) Suppose that a $3\times 3$ homogeneous system of linear equations has a solution $x_1=0, x_2=-3, x_3=5$. Is the coefficient matrix of the system nonsingular?
(c) Let $A$ be a $4\times 4$ matrix and let \[\mathbf{v}=\begin{bmatrix}1 \\2 \\3 \\4\end{bmatrix} \text{ and } \mathbf{w}=\begin{bmatrix}4 \\3 \\2 \\1\end{bmatrix}.\] Suppose that we have $A\mathbf{v}=A\mathbf{w}$. Is the matrix $A$ nonsingular?
Suppose that $B=\{\mathbf{v}_1, \mathbf{v}_2\}$ is a basis for $\R^2$. Let $S:=[\mathbf{v}_1, \mathbf{v}_2]$. Note that as the column vectors of $S$ are linearly independent, the matrix $S$ is invertible.
Prove that for each vector $\mathbf{v} \in \R^2$, the vector $S^{-1}\mathbf{v}$ is the coordinate vector of $\mathbf{v}$ with respect to the basis $B$.
Prove that the matrix \[A=\begin{bmatrix}0 & 1\\-1& 0\end{bmatrix}\] is diagonalizable. Prove, however, that $A$ cannot be diagonalized by a real nonsingular matrix. That is, there is no real nonsingular matrix $S$ such that $S^{-1}AS$ is a diagonal matrix.
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.
This post is Part 3 and contains Problems 7, 8, and 9. Check out Part 1 and Part 2 for the rest of the exam problems.
Problem 7. Let $A=\begin{bmatrix}-3 & -4\\8& 9\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}-1 \\2\end{bmatrix}$.
(a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$.
(b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$.
Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
Problem 9. Determine whether each of the following sentences is true or false.
(a) There is a $3\times 3$ homogeneous system that has exactly three solutions.
(b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric.
(c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$.
(d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
An $n\times n$ matrix $A$ is called nonsingular if the only vector $\mathbf{x}\in \R^n$ satisfying the equation $A\mathbf{x}=\mathbf{0}$ is $\mathbf{x}=\mathbf{0}$. Using the definition of a nonsingular matrix, prove the following statements.
(a) If $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
(b) Let $A$ and $B$ be $n\times n$ matrices and suppose that the product $AB$ is nonsingular. Then:
The matrix $B$ is nonsingular.
The matrix $A$ is nonsingular. (You may use the fact that a nonsingular matrix is invertible.)
Let $A$ be an $n\times (n-1)$ matrix and let $\mathbf{b}$ be an $(n-1)$-dimensional vector. Then the product $A\mathbf{b}$ is an $n$-dimensional vector. Set the $n\times n$ matrix $B=[A_1, A_2, \dots, A_{n-1}, A\mathbf{b}]$, where $A_i$ is the $i$-th column vector of $A$.
Prove that $B$ is a singular matrix for any choice of $\mathbf{b}$.
For each of the following $3\times 3$ matrices $A$, determine whether $A$ is invertible and find the inverse $A^{-1}$ if exists by computing the augmented matrix $[A|I]$, where $I$ is the $3\times 3$ identity matrix. |
Effective January 2014, Psychological Science recommends the use of the "new statistics" - effect sizes, confidence intervals, and meta-analysis - to avoid problems associated with null-hypothesis significance testing (NHST).
Confidence intervals provide an alternative to NHST, which some have argued conveys more information than NHST. A confidence interval (CI) is a type of interval estimate, instead of a point estimate, of a population parameter.
Let $\theta$ denote a population parameter (unknown) and $X$ denote a random variable (e.g., GPA) from which data can be observed. Assume the observed outcome for $X$ is $x$. We can calculate an interval $[l(x),u(x)]$ based on the observed data. More generally, we can define
\[\Pr(l(X) \leq \theta \leq u(X))=1-\alpha=C\]
Then $[l(X),u(X)]$ is a confidence interval with confidence level $1-\alpha=C$, or $100(1-\alpha)\%$. $l(X)$ and $u(X)$ are called confidence limits (bounds), lower limit and upper limit, respectively.
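As a minimal illustration of these definitions, here is a self-contained Python stdlib sketch (the article's own examples use R; this mirrors the `qnorm(c(.025, .975), xbar, s.e.)` idiom that appears below):

```python
from statistics import NormalDist

def normal_ci(xbar, se, level=0.95):
    """Equal-tail CI when the point estimate has sampling distribution N(xbar, se)."""
    alpha = 1 - level
    dist = NormalDist(mu=xbar, sigma=se)
    return dist.inv_cdf(alpha / 2), dist.inv_cdf(1 - alpha / 2)

# With xbar = 3.5 and standard error 0.2/sqrt(100) = 0.02 (the GPA example below):
lo, hi = normal_ci(3.5, 0.02)
print(lo, hi)  # approximately 3.4608 and 3.5392
```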
The basic idea to get a CI is straightforward in theory but can be very difficult in practice. It involves three steps:
Suppose we want to estimate and obtain the confidence interval estimate of the average GPA ($\mu$) of all undergraduate students at the University of Notre Dame. GPA typically follows a normal distribution \(X\sim N(\mu,\sigma)\). Instead of going out to collect data from students, we will simulate (generate) some data for our example. To simulate data, we need to know the population mean and standard deviation of GPA. Here we assume the mean is $\mu=3.5$ and the standard deviation is $\sigma=.2$. Furthermore, we generate a sample of data with sample size 100.
In R, to generate random numbers from a normal distribution, the function
rnorm() can be used. Specifically for this example, the code
x<-rnorm(100,3.5,0.2) generates 100 values from a normal distribution with mean 3.5 and standard deviation 0.2. In the function, the first argument is the number of values to generate, the second is the mean, and the third is the standard deviation. The code below generates the values, prints them in the output, and displays the histogram of the generated data. Note that the histogram shows a bell shape.
With the simulated data for 100 students, an estimate of the average GPA ($\theta$) is \[\bar{x}=\frac{1}{100}\sum_{i=1}^{100}x_{i}.\] Based on the central limit theorem, if the population standard deviation $\sigma$ is known, regardless of the shape of the population distribution, $\bar{x}$ is at least approximately normally distributed with mean $\mu$ and standard deviation (standard error of the mean) \[s.e.(\bar{x})=\sqrt{\frac{1}{n}\sigma^{2}}=.1\sigma.\] Now the point estimate is $\hat{\theta}=\bar{x}$ and its sampling distribution is a normal distribution. Then a 95% equal-tail confidence interval can be constructed using the 2.5% and 97.5% percentiles of this normal distribution, $[\Phi^{-1}(0.025), \Phi^{-1}(0.975)]$, where $\Phi$ is the distribution function of $N(\bar{x}, s.e.(\bar{x}))$. The whole procedure to calculate a CI for a set of simulated data is shown below.

> x<-rnorm(100,3.5,0.2)
> x ## show x
[1] 3.334000 3.586837 3.359713 3.524029 3.447670 3.368659 3.375683 3.549234
[9] 3.654781 3.667078 3.382636 3.231061 3.265321 3.543612 3.508240 3.719976
[17] 3.810934 3.401276 3.540178 3.435721 3.836820 3.527963 3.367449 3.282790
[25] 3.684809 3.746624 3.676275 3.691510 3.359611 3.174088 3.503263 3.724812
[33] 3.709836 4.136255 3.554183 3.435994 3.512146 3.391283 3.320681 3.693763
[41] 3.363223 3.816180 3.536341 3.287929 3.468621 3.684756 3.681145 3.409627
[49] 3.695873 3.313115 3.409239 3.306808 3.765370 3.280114 3.655706 3.718136
[57] 3.706299 3.558405 3.718321 3.880794 3.568745 3.520628 3.653579 3.055296
[65] 3.217441 3.271952 3.799409 3.400029 3.600566 3.234875 3.749574 3.624902
[73] 3.422975 3.673681 3.451874 3.809673 3.442798 3.434386 3.699813 3.486470
[81] 3.187778 3.432287 3.253338 3.600950 2.868837 2.980158 3.548014 3.453090
[89] 2.961468 3.741704 3.530058 3.793508 3.540110 3.834930 3.107434 3.745801
[97] 3.363361 3.483301 3.348338 3.601043
> hist(x) ## histogram
>
> x <- rnorm(100,3.5,0.2)
> xbar <- mean(x)
> s.e. <- 0.2/10
> qnorm(c(.025, .975), xbar, s.e.)
[1] 3.468670 3.547069 >
Now, try running the code above one more time. Do you get the same confidence interval?
A CI changes from study to study. If we repeat the same study again and again, $100(1-\alpha)\%$ of the time the obtained confidence intervals would cover the true population parameter value. This can be shown through a simulation study or experiment. Using the GPA example, we can conduct an experiment using the following steps:
The R code below carries out the experiment. The output shows that among the 1000 sets of CIs calculated from the 1000 sets of simulated data, 949 of them cover the population value 3.5.
> count<-0
>
> for (i in 1:1000){
+ x<-rnorm(100, 3.5, .2)
+ xbar<-mean(x)
+ s.e.<-.2/10
+ l<-qnorm(.025, xbar, s.e.)
+ u<-qnorm(.975, xbar, s.e.)
+ if (l<3.5 & u>3.5){
+ count<-count+1
+ }
+ }
> count
[1] 949
>
For a given CI, it either covers the population value or not. This can be best demonstrated by plotting the CIs. The R code and output are given below. In the code, we generate 100 CIs, among which 97 cover the population value and 3 do not.
> count<-0
> all.l<-all.u<-NULL
> for (i in 1:100){
+ x<-rnorm(100, 3.5, .2)
+ xbar<-mean(x)
+ s.e.<-.2/10
+ l<-qnorm(.025, xbar, s.e.)
+ u<-qnorm(.975, xbar, s.e.)
+ if (l<3.5 & u>3.5){
+ count<-count+1
+ }
+ all.l<-c(all.l, l)
+ all.u<-c(all.u, u)
+ }
> count
[1] 97
>
> ## generate a plot
> plot(c(1,1), c(all.l[1], all.u[1]), type='l',
+ ylim=c(min(all.l)-.01, max(all.u)+.01),
+ xlim=c(1,100), xlab='replications',
+ ylab='CI')
> abline(h=3.5)
> for (i in 2:100){
+ if (all.l[i]<3.5 & all.u[i]>3.5){
+ lines(c(i,i), c(all.l[i], all.u[i]))
+ }else{
+ lines(c(i,i),c(all.l[i], all.u[i]),col='red')
+ }
+ }
>
Confidence intervals do not require a priori hypotheses, nor do they test trivial hypotheses. A confidence interval provides information on both the effect and its precision. A smaller interval usually suggests the estimate is more precise. For example, [3.3, 3.7] is more precise than [3,4].
A confidence interval can be used for hypothesis testing. For example, given the null hypothesis \[\theta=\theta_{0}\] for any value of $\theta_{0}$. If a confidence interval with confidence level $C=1-\alpha$ contains $\theta_{0}$, we fail to reject the corresponding null hypothesis at the significance level $\alpha$. Otherwise, we reject the null hypothesis at the significance level $\alpha$.
For example, suppose we are interested in testing whether a training intervention method is effective or not. Based on a pre- and post-test design, we find the confidence interval for the change after training is [0.7, 1.5] with the confidence level 0.95. Since this CI does not include 0, we would reject the null hypothesis that the change is 0 at the alpha level 0.05.
Using CI for hypothesis testing does not provide the exact p-value. However, a CI can be used to test multiple hypotheses. For example, for any null hypothesis that the change score is less than 0.7, one would reject it.
A CI focuses on the alternative hypothesis, that is, the effect of interest. It provides a range of plausible values to estimate the effect of interest.
Reichardt and Gollob (1997) discussed conditions that NHST and CI can be useful. NHST is shown generally to be more informative than confidence intervals when assessing (1) the probability that a parameter equals a pre-specified value; (2) the direction of a parameter relative to a pre-specified value (e.g., 0); and (3) the probability that a parameter lies within a pre-specified range.
On the other hand, confidence intervals are shown generally to be more informative than NHST when assessing the size of a parameter (1) without reference to a pre-specified value or range of values; (2) with reference to many pre-specified values or ranges of values. Hagen (1997) pointed out: "We cannot escape the logic of NHST [null hypothesis statistical testing] by turning to point estimates and confidence intervals" (p. 22). In addition, Schmidt and Hunter (1997) suggested: "The assumption underlying this objection is that because confidence intervals can be interpreted as significance tests, they must be so interpreted. But this is a false assumption" (p. 50). |
When the switch is closed in the RLC circuit of Figure \(\PageIndex{1}\)(a), the capacitor begins to discharge and electromagnetic energy is dissipated by the resistor at a rate \(i^2 R\). With \(U\) given by [link], we have
\[\frac{dU}{dt} = \frac{q}{C} \frac{dq}{dt} + Li \frac{di}{dt} = -i^2 R\]
where \(i\) and \(q\) are time-dependent functions. This reduces to
\[L\frac{d^2q}{dt^2} + R\frac{dq}{dt} + \frac{1}{C}q = 0.\]
Figure \(\PageIndex{1}\): (a) An RLC circuit. Electromagnetic oscillations begin when the switch is closed. The capacitor is fully charged initially. (b) Damped oscillations of the capacitor charge are shown in this curve of charge versus time, or q versus t. The capacitor contains a charge \(q_0\) before the switch is closed.
This equation is analogous to
\[m\frac{d^2x}{dt^2} + b\frac{dx}{dt} + kx = 0,\]
which is the equation of motion for a damped mass-spring system (you first encountered this equation in Oscillations). As we saw in that chapter, it can be shown that the solution to this differential equation takes three forms, depending on whether the angular frequency of the undamped spring is greater than, equal to, or less than \(b/2m\). Therefore, the result can be underdamped \((\sqrt{k/m} > b/2m)\), critically damped \((\sqrt{k/m} = b/2m)\), or overdamped \((\sqrt{k/m} < b/2m)\). By analogy, the solution \(q(t)\) to the RLC differential equation has the same feature. Here we look only at the case of under-damping. By replacing \(m\) by \(L\), \(b\) by \(R\), \(k\) by \(1/C\), and \(x\) by \(q\) in Equation, and assuming \(\sqrt{1/LC} > R/2L\), we obtain
Note
\[q(t) = q_0 e^{-Rt/2L} \cos (\omega't + \phi)\]
where the angular frequency of the oscillations is given by
Note
\[\omega' = \sqrt{\frac{1}{LC} - \left(\frac{R}{2L}\right)^2}\]
This underdamped solution is shown in Figure \(\PageIndex{1}\)(b). Notice that the amplitude of the oscillations decreases as energy is dissipated in the resistor. Equation can be confirmed experimentally by measuring the voltage across the capacitor as a function of time. This voltage, multiplied by the capacitance of the capacitor, then gives \(q(t)\).
Note
Try an interactive circuit construction kit that allows you to graph current and voltage as a function of time. You can add inductors and capacitors to work with any combination of R, L, and C circuits with both dc and ac sources.
Note
Try out a circuit-based java applet website that has many problems with both dc and ac sources that will help you practice circuit problems.
Note
Check Your Understanding
In an RLC circuit, \(L = 5.0 \, mH\), \(C = 6.0 \, \mu F\), and \(R = 200 \, \Omega\). (a) Is the circuit underdamped, critically damped, or overdamped? (b) If the circuit starts oscillating with a charge of \(3.0 \times 10^{-3}\,C\) on the capacitor, how much energy has been dissipated in the resistor by the time the oscillations cease?
a. overdamped; b. 0.75 J
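Both parts can be re-derived numerically. The sketch below just re-evaluates the comparison \(\sqrt{1/LC}\) versus \(R/2L\) and the initially stored energy \(q_0^2/2C\), all of which is dissipated in the resistor once the oscillations cease:

```python
import math

# Component values from the Check Your Understanding above.
L, C, R = 5.0e-3, 6.0e-6, 200.0

omega0 = math.sqrt(1.0 / (L * C))  # undamped angular frequency sqrt(1/LC), rad/s
damping = R / (2.0 * L)            # damping rate R/2L, 1/s

if omega0 > damping:
    regime = "underdamped"
elif omega0 == damping:
    regime = "critically damped"
else:
    regime = "overdamped"
print(regime)  # omega0 ~ 5.77e3 is less than R/2L = 2.0e4, hence overdamped

# (b) All of the initially stored energy q0^2/(2C) ends up in the resistor.
q0 = 3.0e-3
energy = q0 ** 2 / (2.0 * C)
print(energy)  # ~0.75 J
```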
Contributors
Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0). |
I have some trouble understanding properly the concept of abelianization in the case of the symmetric group $S_n$. More specifically, it is known that the commutator subgroup of $S_n$ is $A_n$, the group of all even permutations in $S_n$. Now, assume that we are given two elements $[\rho_1], [\rho_2] \in S_n / A_n$ such that $[\rho_1], [\rho_2] \neq e$ in $S_n / A_n$ ($e$ is the neutral element). This means that $\rho_1$ and $\rho_2$ are odd permutations. At the same time, however, we get $[\rho_1] \cdot [\rho_2] = [\rho_1 \circ \rho_2] = e$ because the composition of two odd permutations results in an even permutation (which is an element of $A_n$). Since the inverse of a group element must be unique, we thus infer that $S_n / A_n$ is not a group, which does not make much sense.
What am I missing?
Another question that I have is: is it true that $|S_n / A_n| \leq \binom{n}{2} + 1$? (The argument would go like this: since any $[\rho] \in S_n / A_n$ with $[\rho] \neq e$ must be such that $\rho$ is odd, we can write $\rho$ as a product of transpositions $\tau_1 \circ \tau_2 \circ \ldots \circ \tau_k$ with $k$ odd. Noting that $\tau_2 \circ \ldots \circ \tau_k$ is in $A_n$, we infer that $[\tau_1] = [\rho]$. The estimate follows by observing that there are $\binom{n}{2}$ transpositions.)
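A brute-force check for small $n$ can make the objects involved concrete: all odd permutations lie in a single coset (so $S_n/A_n$ has exactly two elements, far fewer than $\binom{n}{2}+1$), and $[\rho_1]\cdot[\rho_2]=e$ causes no uniqueness-of-inverse problem because in a two-element group the non-identity element is its own inverse. A sketch for $n=4$:

```python
from itertools import permutations

def sign(p):
    """Sign of a permutation of 0..n-1, via the number of inversions."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

n = 4
perms = list(permutations(range(n)))
even = [p for p in perms if sign(p) == 1]
odd = [p for p in perms if sign(p) == -1]

# |A_n| = |S_n|/2, so the quotient S_n/A_n has exactly two cosets.
assert len(even) == len(odd) == len(perms) // 2

def compose(p, q):
    """(p o q)(i) = p[q[i]]"""
    return tuple(p[q[i]] for i in range(len(q)))

# The product of two odd permutations is even, i.e. [rho1][rho2] = [e]:
# the single non-identity coset is its own inverse.
assert all(sign(compose(p, q)) == 1 for p in odd[:5] for q in odd[:5])
print("order of S_4 / A_4:", len(perms) // len(even))  # 2
```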
Thanks a lot. |
I'm working through a homework sheet for a Fluid Mechanics module. The question is given:
Consider the flow described by the complex potential $$w=4z+\frac{8}{z}.$$
Determine $\psi$, $\phi$, $u$ and $v$ in plane polar coordinates $(r,\theta)$. Determine the location of the stagnation points. Show that this complex potential describes an inviscid flow around a solid object, What is the shape of the object? Sketch the streamlines for the flow outside the object.
My working out so far for the question is:
Let $z=re^{i\theta}$, and therefore \begin{align} w&=4re^{i\theta}+\frac{8}{r}e^{-i\theta} \\ &=4r(\cos(\theta)+i\sin(\theta))+\frac{8}{r}(\cos(\theta)-i\sin(\theta)) \\ &=\left(4r+\frac{8}{r}\right)\cos(\theta)+\left(4r-\frac{8}{r}\right)i\sin(\theta). \end{align} Writing $w=\phi+i\psi$, we identify $\phi=(4r+\frac{8}{r})\cos(\theta)$ and $\psi=(4r-\frac{8}{r})\sin(\theta)$. Also, we have that $$u=\frac{\partial\phi}{\partial r}\implies u=(4-8r^{-2})\cos(\theta)$$ and $$v=\frac{1}{r}\frac{\partial\phi}{\partial\theta} \implies v=-(4+8r^{-2})\sin(\theta).$$ Stagnation points are given by $u=0$ and $v=0$. So, from $u=0$, we have that $r^2=2$ or $\cos(\theta)=0$. Similarly, from $v=0$, we have that $r^2=-2$ or $\sin(\theta)=0$. Since $r\in\mathbb{R}$ (as it is a distance), we have that $r^2=2$ (from $u=0$) and $\sin(\theta)=0$ (from $v=0$). Therefore, the stagnation points occur at $(r,\theta)=(\sqrt{2},0),(\sqrt{2},\pi)$.
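Parts 1 and 2 can be sanity-checked numerically. With $v=\frac{1}{r}\frac{\partial\phi}{\partial\theta}=-(4+8r^{-2})\sin(\theta)$, the stagnation points are equivalently the zeros of $\frac{dw}{dz}=4-\frac{8}{z^2}$, i.e. $z=\pm\sqrt{2}$. A short sketch:

```python
import cmath
import math

# dw/dz = 4 - 8/z^2 vanishes where z^2 = 2, matching the polar-coordinate
# stagnation points (r, theta) = (sqrt(2), 0) and (sqrt(2), pi).
def dw(z):
    return 4 - 8 / z ** 2

for z in (cmath.sqrt(2), -cmath.sqrt(2)):
    assert abs(dw(z)) < 1e-12

# Velocity components in polar coordinates, from the working above.
def velocity(r, theta):
    u = (4 - 8 / r ** 2) * math.cos(theta)
    v = -(4 + 8 / r ** 2) * math.sin(theta)
    return u, v

for theta in (0.0, math.pi):
    u, v = velocity(math.sqrt(2), theta)
    assert abs(u) < 1e-12 and abs(v) < 1e-12
print("stagnation points verified at r = sqrt(2), theta = 0 and pi")
```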
From here (i.e. part 3 onwards), I fall down. I think that I should use the condition $\textbf{u}\cdot\textbf{n}=0$ on the surface of the object, but I'm not too sure how to use this information. Should I be using Bernoulli's theorem for the pressure? Is there some assumption I am missing?
Any help would be much appreciated! |
Suppose $F$ is a field such that $\left|F\right|=q$. Take $p$ to be some prime. How many monic irreducible polynomials of degree $p$ exist over $F$?
Thanks!
The number of such polynomials is exactly $\displaystyle \frac{q^{p}-q}{p}$ and this is the proof:
The two main facts which we use (and which I will not prove here) are that $\mathbb{F}_{q^{p}}$ is the splitting field of the polynomial $g\left(x\right)=x^{q^{p}}-x$,
and that every monic irreducible polynomial of degree $p$ divides $g$.
Now, $[\mathbb{F}_{q^{p}}:\mathbb{F}_{q}]=p$, and since $p$ is prime there are no proper intermediate extensions. Therefore, every irreducible polynomial that divides $g$ must be of degree $p$ or $1$. Each linear polynomial over $\mathbb{F}_{q}$ divides $g$ (since for each $a\in \mathbb{F}_{q}$, $g(a)=0$), and since $g$ has distinct roots, there are exactly $q$ different linear polynomials that divide $g$.
Multiplying all the irreducible monic polynomials that divide $g$ will give us $g$, and therefore summing up their degrees will give us $q^{p}$.
So, if we denote the number of monic irreducible polynomials of degree $p$ by $k$ (which is the number we want), we get that $kp+q=q^{p}$, i.e $\displaystyle k=\frac{q^{p}-q}{p}$.
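The formula is easy to check by brute force for $p=3$: a cubic over $\mathbb{F}_q$ is reducible exactly when it has a linear factor, i.e. a root in $\mathbb{F}_q$. The sketch below assumes $q$ is prime, so that arithmetic mod $q$ realizes $\mathbb{F}_q$:

```python
from itertools import product

def count_irreducible_cubics(q):
    """Count monic irreducible cubics x^3 + a x^2 + b x + c over F_q (q prime).
    A cubic factors nontrivially iff it has a root in F_q."""
    count = 0
    for a, b, c in product(range(q), repeat=3):
        if all((x**3 + a * x**2 + b * x + c) % q != 0 for x in range(q)):
            count += 1
    return count

# Compare against (q^p - q)/p with p = 3.
for q in (2, 3, 5, 7):
    assert count_irreducible_cubics(q) == (q**3 - q) // 3
print("matches (q^p - q)/p for p = 3")
```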
More generally, the number of monic irreducible polynomials of degree $n$ over the finite field $\mathbb{F}_{q}$ is given by Gauss's formula $$\frac{1}{n}\sum\limits_{d \mid n} \ \mu(n/d) \cdot q^{d}.$$
For a complete proof, see Dummit and Foote, Abstract Algebra, Chapter 14 (Galois Theory), pages $567$-$568$.
You might also want to see this paper, which presents a new idea for counting irreducible polynomials using the Inclusion-Exclusion Principle. Link: http://arxiv.org/pdf/1001.0409 |
The operation of differentiation or finding the derivative of a function has the fundamental property of linearity. This property makes taking the derivative easier for functions constructed from the basic elementary functions using the operations of addition and multiplication by a constant number. The basic differentiation rules allow us to compute the derivatives of such functions without using the formal definition of the derivative. Consider these rules in more detail.
Derivative of a Constant
If \(f\left( x \right) = C,\) then
\[f'\left( x \right) = C' = 0.\]
The proof of this rule is considered on the Definition of the Derivative page.
Constant Multiple Rule
Let \(k\) be a constant. If \(f\left( x \right)\) is differentiable, then \(kf\left( x \right)\) is also differentiable and
\[{\left( {kf\left( x \right)} \right)^\prime } = kf'\left( x \right).\]
Sum Rule
Let \(f\left( x \right)\) and \(g\left( x \right)\) be differentiable functions. Then the sum of two functions is also differentiable and
\[{\left( {f\left( x \right) + g\left( x \right)} \right)^\prime } = f'\left( x \right) + g'\left( x \right).\]
Let \(n\) functions \({f_1}\left( x \right)\), \({f_2}\left( x \right)\), \(\ldots\), \({f_n}\left( x \right)\) be differentiable. Then their sum is also differentiable and
\[{\left[ {{f_1}\left( x \right) + {f_2}\left( x \right) + \ldots + {f_n}\left( x \right)} \right]^\prime } = {f_1}^\prime \left( x \right) + {f_2}^\prime \left( x \right) + \ldots + {f_n}^\prime \left( x \right).\]
Combining both rules, we see that the derivative of the difference of two functions is equal to the difference of the derivatives of these functions, assuming both functions are differentiable:
\[{\left( {f\left( x \right) - g\left( x \right)} \right)^\prime } = f'\left( x \right) - g'\left( x \right).\]
We can write the general rule:
Linear Combination Rule
Suppose \(f\left( x \right)\) and \(g\left( x \right)\) are differentiable functions and \(a,\) \(b\) are real numbers. Then the function \(h\left( x \right) = af\left( x \right) + bg\left( x \right)\) is also differentiable and
\[h'\left( x \right) = af'\left( x \right) + bg'\left( x \right).\]
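The linearity rules above can be checked numerically with central differences; the particular functions and constants below are arbitrary choices for illustration:

```python
import math

def deriv(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# h(x) = a f(x) + b g(x) with f = sin, g = exp, a = 3, b = -2.
a, b = 3.0, -2.0
h_func = lambda x: a * math.sin(x) + b * math.exp(x)

x0 = 0.7
lhs = deriv(h_func, x0)                              # (af + bg)'(x0)
rhs = a * deriv(math.sin, x0) + b * deriv(math.exp, x0)  # a f'(x0) + b g'(x0)
assert abs(lhs - rhs) < 1e-8                         # linearity of the derivative
# Also matches the exact derivative a*cos(x0) + b*exp(x0):
assert abs(lhs - (a * math.cos(x0) + b * math.exp(x0))) < 1e-6
print(lhs)
```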
We add to this list one more simple rule:
Derivative of the Function \(y = x\)
If \(f\left( x \right) = x,\) then
\[f'\left( x \right) = {\left( x \right)^\prime } = 1.\]
This formula is derived on the Definition of the derivative page.
Solved Problems
Click a problem to see the solution.
Example 1. Find the derivative of the function \(y = {x^2} - 5x.\)
Example 2. Find the derivative of the function \(y = {\large\frac{{ax + b}}{{a + b}}\normalsize}\), where \(a\) and \(b\) are constants.
Example 3. Find the derivative of the function \(y = 2\sqrt x - 3\sin x.\)
Example 4. Calculate the derivative of the function \(y = 3\sin x + 2\cos x.\)
Example 5. Let \(y = x + \left| {{x^2} - 8} \right|.\) Find the derivative of the function at \(x = 3.\)
Example 6. Find the derivative of the function \(y = {\large\frac{2}{{3x}}\normalsize} + 3{x^4}.\)
Example 7. Differentiate the function \(y = \large{\frac{{\sin x}}{2}}\normalsize + 3{x^3}.\)
Example 8. Find the derivative of the function \(y = \sqrt[3]{x} + 8x.\)
Example 9. Calculate the derivative of the function \(y = x\left( {2 + 3x} \right)\) without using the product rule.
Example 10. Find the derivative of the function \(y = 3{x^2} - 2\sqrt x + 1.\)
Example 11. Calculate the derivative of the function \(y = \left( {2 - {\large\frac{x}{3}\normalsize}} \right)\left( {{\large\frac{1}{3}\normalsize} + {x^2}} \right).\)
Example 12. Calculate the derivative of the function \(y = \left( {x - 1} \right){\left( {x - 2} \right)^2}\) without using the product rule for the derivative.
Example 13. Calculate the derivative of the function \(y = \large{\frac{{{x^2} - x - 1}}{x}}\normalsize\) without using the quotient rule.
Example 14. Find the derivative of the function \(y = {\large\frac{{{x^2} + 3x + 1}}{x}\normalsize}\) without using the quotient rule for the derivative.
Example 15. Find the derivative of the irrational function \(y = 2\sqrt x - 3\sqrt[3]{x}.\)
Example 16. Find the derivative of the function \(y = {\left( {x + 4} \right)^3}\) without using the power rule.
Example 17. Determine the derivative of \(y = {\left( {x - 1} \right)^4}\) without using the power rule.

Example 1. Find the derivative of the function \(y = {x^2} - 5x.\)
Solution.
Using the linear differentiation rules, we have
\[
y'\left( x \right) = {\left( {{x^2} - 5x} \right)^\prime } = {\left( {{x^2}} \right)^\prime } - {\left( {5x} \right)^\prime } = {\left( {{x^2}} \right)^\prime } - 5{\left( x \right)^\prime } = 2x - 5 \cdot 1 = 2x - 5.
\]

Example 2. Find the derivative of the function \(y = {\large\frac{{ax + b}}{{a + b}}\normalsize}\), where \(a\) and \(b\) are constants.
Solution.
\[
y'\left( x \right) = {\left( {\frac{{ax + b}}{{a + b}}} \right)^\prime } = \frac{1}{{a + b}} \cdot {\left( {ax + b} \right)^\prime } = \frac{a}{{a + b}}.
\]

Example 3. Find the derivative of the function \(y = 2\sqrt x - 3\sin x.\)
Solution.
Using the basic differentiation rules, we obtain:
\[
y'\left( x \right) = {\left( {2\sqrt x - 3\sin x} \right)^\prime } = {\left( {2\sqrt x } \right)^\prime } - {\left( {3\sin x} \right)^\prime } = 2{\left( {\sqrt x } \right)^\prime } - 3{\left( {\sin x} \right)^\prime } = 2 \cdot \frac{1}{{2\sqrt x }} - 3\cos x = \frac{1}{{\sqrt x }} - 3\cos x.
\]

Example 4. Calculate the derivative of the function \(y = 3\sin x + 2\cos x.\)
Solution.
This expression is a linear combination of two trigonometric functions. The derivative has the following form:
\[ y'\left( x \right) = {\left( {3\sin x + 2\cos x} \right)^\prime } = {\left( {3\sin x} \right)^\prime } + {\left( {2\cos x} \right)^\prime } = 3{\left( {\sin x} \right)^\prime } + 2{\left( {\cos x} \right)^\prime } = 3 \cdot \cos x + 2 \cdot \left( { - \sin x} \right) = 3\cos x - 2\sin x. \]
Example 5. Let \(y = x + \left| {{x^2} - 8} \right|.\) Find the derivative of the function at \(x = 3.\)
Solution.
As \({3^2} - 8 = 1 \gt 0,\) the function at the point \(x = 3\) is equivalent to
\[y\left( x \right) = x + {x^2} - 8.\]
So, by the sum rule and the power rule (see "Derivatives of Power Functions"),
\[ y'\left( x \right) = {\left( {x + {x^2} - 8} \right)^\prime } = x' + {\left( {{x^2}} \right)^\prime } - 8' = 1 + 2x + 0 = 2x + 1. \]
At the point \(x = 3\) the value of the derivative is
\[y'\left( 3 \right) = 2 \cdot 3 + 1 = 7.\] |
Can someone give me suggestions on how to construct a 2-tape Turing machine that simulates a PDA?
Use the first tape for the input and the second for storing symbols (as a stack). The first tape should be read-only so that the TM reads symbols one by one from left to right.
Every time your PDA writes a symbol on the stack, your TM moves the second head right and writes a symbol on the rightmost empty cell. When the PDA removes a symbol from the stack the TM replaces the rightmost symbol with the blank symbol (erases) and moves the second head left. On each state transition the first head should always move right.
In addition, your TM shouldn't be allowed to arbitrarily move heads right and left violating PDA rules.
Consider the following formal definition of moves for PDA (assume deterministic for simplicity):
UPDATE Definition of PDA moves
$\delta(q, a, Z) = (p, \alpha)$: the PDA in state $q$ with input $a$ and $Z$ on the top of the stack. The PDA enters the state $p$ and replaces the top symbol $Z$ with the symbols of the string $\alpha$. So, $\delta(q, a, Z) = (p, \epsilon)$ means remove the top stack symbol (pop). Advance the input head one symbol.
$\delta(q,\epsilon, Z) = (p, \alpha)$: the PDA in state $q$ with $Z$ on the top of the stack. Independent of the input symbol, the PDA enters the state $p$ and replaces the top symbol $Z$ with the symbols of the string $\alpha$. The input head does not advance in this case.
Examples of translating PDA moves into TM moves
Example 1. PDA move $\delta(q_1, 0, A) = (q_2, B)$ is translated as: TM in state $q_1$, head 1 (input head on tape 1) reads $0$, and head 2 (stack head on tape 2) reads $A$. Replace $A$ with $B$ (head 2 writes the symbol $B$ while it is on $A$). Advance head 1 one symbol. Enter state $q_2$.
Example 2. $\delta(q_1, 0, A) = (q_2, BC)$ is translated as: TM in state $q_1$, head 1 reads $0$, and head 2 reads $A$. Head 2 writes $B$ (replacing $A$), advances one symbol, and the TM enters state $q_{21}$. Then, while the TM is in state $q_{21}$, head 1 reads $0$, and head 2 reads the blank symbol: head 2 writes the symbol $C$ and the TM enters state $q_2$. Advance head 1 one symbol.
The basic idea: in order to write $BC$ we introduced a new state $q_{21}$.
Analogously, $\delta(q, a, Z) = (p, \epsilon)$ (a pop) means head 2 just writes the blank symbol and moves left.
$\delta(q,\epsilon, Z) = (p, \alpha)$ means that you define the same transition
for every input symbol.
This is how a multitape Turing machine is defined.
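As an illustration of the construction (my own sketch, not part of the original answer), here is a deterministic PDA for $\{0^n1^n \mid n \ge 1\}$ run in exactly this two-tape style: tape 1 is read-only and scanned left to right, tape 2 plays the stack, with head 2 moving right on a push and left on a pop:

```python
def run(input_string):
    tape1 = list(input_string) + ['_']       # '_' marks the end of the input
    tape2 = ['_'] * (len(input_string) + 2)  # blank second tape (the "stack")
    h2 = 0                                   # head 2 starts on the leftmost cell
    state = 'push'
    for symbol in tape1:                     # head 1 always moves right
        if state == 'push' and symbol == '0':
            tape2[h2] = 'A'; h2 += 1         # push: write, move head 2 right
        elif state in ('push', 'pop') and symbol == '1' and h2 > 0 and tape2[h2 - 1] == 'A':
            h2 -= 1; tape2[h2] = '_'         # pop: move head 2 left, erase
            state = 'pop'
        elif symbol == '_':
            return state == 'pop' and h2 == 0  # accept iff the stack is empty
        else:
            return False                     # no move defined: reject
    return False

print(run('000111'), run('001'))  # True False
```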
Given this definition of weak bisimilarity:
A configuration relation $\mathcal{R}$ is a weak bisimulation provided that whenever $P\ \mathcal{R}\ Q$ and $\alpha$ is $\mu$ or $\tau$ action then:
if $P \to^\alpha P'$ then $Q \Rightarrow^\widehat{\alpha} Q'$ for some $Q'$ s.t. $P'\ \mathcal{R}\ Q'$
if $Q \to^\alpha Q'$ then $P \Rightarrow^\widehat{\alpha} P'$ for some $P'$ s.t. $P'\ \mathcal{R}\ Q'$
P and Q are weakly bisimilar, written $P \ \approx \ Q$, if $P \ \mathcal{R} \ Q$ for some configuration relation $\mathcal{R}$.
My questions are:
For the relation $\mathcal{R} = \{(\tau.a, 0)\}$, $\mathcal{R} \nsubseteq \approx$ but $\mathcal{R} \subseteq \, \approx \! \mathcal{R} \approx$. Why is $\mathcal{R} \subseteq \, \approx \! \mathcal{R} \approx$?
According to Sangiorgi's book, P uses $\tau$ and ends up with $a \approx \tau.a$ and Q has $0$ and ignores the move. I am wondering, how is it that you can ignore the move? I thought that one is forced to do a move...
Could you please provide a detailed explanation of why weak bisimilarity up-to $\approx$ is unsound? (You can use the example above, but I would appreciate a detailed explanation.)
Thanks.
Theorem. $\int_0^\infty \sin x \phantom. dx/x = \pi/2$.
Poof. For $x>0$ write $1/x = \int_0^\infty e^{-xt} \phantom. dt$, and deduce that $\int_0^\infty \sin x \phantom. dx/x$ is $$\int_0^\infty \sin x \int_0^\infty e^{-xt} \phantom. dt \phantom. dx = \int_0^\infty \left( \int_0^\infty e^{-tx} \sin x \phantom. dx \right) \phantom. dt = \int_0^\infty \frac{dt}{t^2+1},$$ which is the arctangent integral for $\pi/2$, QED.
The theorem is correct, and usually obtained as an application of contour integration, or of Fourier inversion ($\sin x / x$ is a multiple of the Fourier transform of the characteristic function of an interval). The poof, which is the first one I saw (given in a footnote in an introductory textbook on quantum physics), is not correct, because the integral does not converge absolutely. One can rescue it by writing $\int_0^M \sin x \phantom. dx/x$ as a double integral in the same way, obtaining $$\int_0^M \sin x \frac{dx}{x} = \int_0^\infty \frac{dt}{t^2+1} - \int_0^\infty e^{-Mt} (\cos M + t \cdot \sin M) \frac{dt}{t^2+1}$$ and showing that the second integral approaches $0$ as $M \rightarrow \infty$; but this detour makes for a much less appealing alternative to the usual proof by complex or Fourier analysis.
Still the double-integral trick can be used legitimately to evaluate $\int_0^\infty \sin^m x \phantom. dx/x^n$ for integers $m,n$ such that the integral converges absolutely (that is, with $2 \leq n \leq m$; NB unlike the contour or Fourier approach this technique applies also when $m \not\equiv n \bmod 2$). Write $(n-1)!/x^n = \int_0^\infty t^{n-1} e^{-xt} \phantom. dt$ to obtain $$\int_0^\infty \sin^m x \frac{dx}{x^n} = \frac1{(n-1)!} \int_0^\infty t^{n-1} \left( \int_0^\infty e^{-tx} \sin^m x \phantom. dx \right) \phantom. dt,$$ in which the inner integral is a rational function of $t$, and then the integral with respect to $t$ is elementary. For example, when $m=n=2$ we find $$\int_0^\infty \sin^2 x \frac{dx}{x^2} = \int_0^\infty t \frac2{t^3+4t} dt = 2 \int_0^\infty \frac{dt}{t^2+4} = \frac\pi2.$$ As a bonus, we recover a correct proof of our starting theorem by integration by parts:
$$\frac\pi2 = \int_0^\infty \sin^2 x \frac{dx}{x^2} = \int_0^\infty \sin^2 x \phantom. d(-1/x) = \int_0^\infty \frac1x d(\sin^2 x) = \int_0^\infty 2 \sin x \cos x \frac{dx}{x};$$ since $2 \sin x \cos x = \sin 2x$, the desired $\int_0^\infty \sin x \phantom. dx/x = \pi/2$ follows by a linear change of variable.
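The $m=n=2$ evaluation, and the original theorem, can also be checked symbolically; a quick SymPy sketch:

```python
import sympy as sp

t, x = sp.symbols('t x', positive=True)

# Outer integral from the m = n = 2 case:
# int_0^oo t * 2/(t^3 + 4t) dt = 2 int_0^oo dt/(t^2 + 4) = pi/2.
outer = sp.integrate(t * 2 / (t**3 + 4 * t), (t, 0, sp.oo))
print(outer)  # pi/2

# The theorem itself, int_0^oo (sin x)/x dx = pi/2:
dirichlet = sp.integrate(sp.sin(x) / x, (x, 0, sp.oo))
print(dirichlet)  # pi/2
```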
Exercise. Use this technique to prove that $\int_0^\infty \sin^3 x \phantom. dx/x^2 = \frac34 \log 3$, and more generally $$\int_0^\infty \sin^3 x \frac{dx}{x^\nu} = \frac{3-3^{\nu-1}}{4} \cos \frac{\nu\pi}{2} \Gamma(1-\nu)$$ when the integral converges. [Both are in Gradshteyn and Ryzhik, page 449, formula 3.827; the $\nu=2$ case is 3.827#3, credited to D. Bierens de Haan, Nouvelles tables d'intégrales définies, Amsterdam 1867; the general case is 3.827#1, from Gröbner and Hofreiter's Integraltafel II, Springer: Vienna and Innsbruck 1958.]
6.11.1
I think the first thing that I would do would be to convert the coordinates to degrees and decimals (or maybe even radians and decimals, though I do it below in degrees and decimals):
Antares: \(α = 247.375 \quad δ = -26.433\)
Deneb \(α = 309.400 \quad δ = +45.283\)
We already did a similar problem in Chapter 3, Section 3.5, Example 2, so I shan’t do it again. I make the answer:
One pole: \(α = 11^\text{h} 47^\text{m} .3 \quad δ = + 56^\circ 11^\prime\)
The other pole: \(α = 23^\text{h} 47^\text{m} .3 \quad δ = - 56^\circ 11^\prime\)
6.11.2
I have drawn the North Celestial Pole \(\text{N}\), and the colures from \(\text{N}\) to Antares (\(\text{A}\)) and to Deneb (\(\text{D}\)), together with their north polar distances in degrees. I have also marked the difference between their right ascensions, in degrees. We can immediately calculate, from the cosine rule for spherical triangles, equation 3.5.2, the angular distance \(ω\) between the two stars in the sky. I make it \(ω = 91^\circ .190 \ 79\).
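The value of \(ω\) can be checked directly from the spherical cosine rule, \(\cos ω = \sin δ_1 \sin δ_2 + \cos δ_1 \cos δ_2 \cos(α_2 - α_1)\), with the degree coordinates found above:

```python
import math

# Degree coordinates of Antares and Deneb from above
a1, d1 = math.radians(247.375), math.radians(-26.433)  # Antares
a2, d2 = math.radians(309.400), math.radians(45.283)   # Deneb

# Spherical cosine rule for the angular separation
cos_w = math.sin(d1) * math.sin(d2) + math.cos(d1) * math.cos(d2) * math.cos(a2 - a1)
omega = math.degrees(math.acos(cos_w))
print(round(omega, 2))  # 91.19, matching omega = 91°.190 79
```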
Now that we know the angle between the stars, we can use a plane triangle to calculate the distance between them:
I have marked Antares (\(\text{A}\)), Deneb (\(\text{D}\)) and us (\(\text{O}\)), and the distances from us to the two stars, in parsecs. (That's the reciprocal of their parallaxes in arcsec.) I have also marked the angle, in degrees, between Antares and Deneb. We can now use the cosine rule for plane triangles, equation 3.2.2, to find the distance \(\text{AD}\). I make it 1011 parsecs.
A parsec is the distance at which an astronomical unit (approximately the radius of Earth’s orbit) would subtend an angle of one arcsecond. This also means, if you come to think of it, that the number of astronomical units in a parsec is equal to the number of arcseconds in a radian, which is \(360 \times 3600 \div (2π) = 2.062648 \times 10^5\) . The distance between the stars is therefore \(1011 \times 2.062648 \times 10^5\) astronomical units. Multiply this by \(1.495 \ 98 \times 10^8\) , to get the distance in km. I make the distance \(3.120 \times 10^{16} \ \text{km}\).
This would take light \(1.040596 \times 10^{11}\) seconds to travel, or 3298 years, so the distance between the stars is 3298 light-years.
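The chain of conversions can be reproduced in a few lines (standard constants for the speed of light and the Julian year):

```python
import math

AU_PER_PARSEC = 360 * 3600 / (2 * math.pi)  # arcseconds in a radian, about 2.0626e5
KM_PER_AU = 1.49598e8                       # km in an astronomical unit
C_KM_PER_S = 2.99792458e5                   # speed of light, km/s
S_PER_YEAR = 3.1557e7                       # seconds in a Julian year

d_km = 1011 * AU_PER_PARSEC * KM_PER_AU     # distance in km, about 3.12e16
t_s = d_km / C_KM_PER_S                     # light-travel time in seconds
print(f"{d_km:.3e} km, {t_s / S_PER_YEAR:.0f} light-years")
```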
6.11.3
Let's see if we can develop a formula for a general case. We'll have the first meteor start at \((α_{11}, \ δ_{11})\) and finish at \((α_{12}, \ δ_{12})\). The second meteor starts at \((α_{21}, \ δ_{21})\) and finishes at \((α_{22}, \ δ_{22})\). We have to find the coordinates \((α, \ δ)\) of the point from which the two meteors diverge.
This is not a particularly easy problem – but is one that is obviously useful for meteor observers. I’ll just outline some suggestions here, and leave the reader to work out the details. I’ll draw below one of the meteors, and the radiant, and the North Celestial Pole:
Use the cotangent rule (equation 3.5.5) on each of the two triangles to get two expressions for \(\cot θ\); for the righthand triangle:
\[\sin δ_{11} \cos (α_{12} - α_{11}) = \cos δ_{11} \tan δ_{12} + \sin (α_{12} - α_{11}) \cot θ.\]
Equate these two expressions for \(\cot θ\) (i.e. eliminate \(θ\) between the two equations). This will give you a single equation containing the two unknowns, \(α\) and \(δ\), everything else in the equation being a known quantity. (This will be obvious if you are actually doing a numerical example.)
Now do the same thing for the second meteor, and you will get a second equation in α and δ. In principle you are now home free, though there may be a bit of heavy algebra and trigonometry to go through before you finally get there.
I make the answer as follows:
\[\tan α = \frac{\cos α_{22} \tan δ_{22} - \cos α_{12} \tan δ_{12} + a_1 \sin α_{12} - a_2 \sin α_{22}}{\sin α_{12} \tan δ_{12} - \sin α_{22} \tan δ_{22} + a_1 \cos α_{12} - a_2 \cos α_{22}},\]
where \[a_1 = \frac{\tan δ_{11}}{\sin(α_{11} - α_{12})} - \frac{\tan δ_{12}}{\tan(α_{11} - α_{12})}\]
and \[a_2 = \frac{\tan δ_{21}}{\sin(α_{21} - α_{22})} - \frac{\tan δ_{22}}{\tan(α_{21} - α_{22})}\]
Then \[\tan δ = \cos (α - α_{12}) \tan δ_{12} + \sin (α - α_{12} ) [\csc (α_{11} - α_{12}) \tan δ_{11} - \cot (α_{11} - α_{12}) \tan δ_{12}]\]
or \[\tan δ = \cos (α - α_{22}) \tan δ_{22} + \sin (α - α_{22} ) [\csc (α_{21} - α_{22}) \tan δ_{21} - \cot (α_{21} - α_{22}) \tan δ_{22}].\]
Either of these two equations for \(\tan δ\) should give the same result. In the computer program I use for this calculation, I get it to calculate \(\tan δ\) from
both equations, just as a check for mistakes.
This may look complicated, but all terms are just calculable numbers for any particular case. If the equinoctial colure gets in the way (as it did, deliberately, in the numerical example I gave), I suggest just adding 24 hours to all right ascensions.
For the numerical example I gave, I make the coordinates of the radiant to be:
\[α = 22^\text{h} 01^\text{m}.3 \quad δ = - 00^\circ 37^\prime .\]
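The heavy algebra can also be cross-checked with vectors: each meteor trail lies on a great circle, the normal of that circle's plane is the cross product of the two direction vectors, and the radiant lies along the line of intersection of the two planes. A sketch with made-up coordinates (not the numerical example from the text):

```python
import math

def unit(alpha_deg, delta_deg):
    """Unit vector for a point at right ascension alpha, declination delta."""
    a, d = math.radians(alpha_deg), math.radians(delta_deg)
    return (math.cos(d) * math.cos(a), math.cos(d) * math.sin(a), math.sin(d))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def radiant(p11, p12, p21, p22):
    """Intersection of the two meteors' great circles (one of two antipodes)."""
    n1 = cross(unit(*p11), unit(*p12))  # normal of meteor 1's plane
    n2 = cross(unit(*p21), unit(*p22))  # normal of meteor 2's plane
    r = cross(n1, n2)                   # along the line of intersection
    norm = math.sqrt(sum(c * c for c in r))
    r = [c / norm for c in r]
    alpha = math.degrees(math.atan2(r[1], r[0])) % 360
    delta = math.degrees(math.asin(r[2]))
    return alpha, delta

# Made-up trails that both radiate from (alpha, delta) = (30, 20):
a, d = radiant((30, 20), (100, 0), (30, 20), (300, 50))
print(round(a, 3), round(d, 3))  # (30, 20) or its antipode (210, -20)
```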
Mathematics - Functional Analysis and Mathematics - Metric Geometry
Abstract
The following strengthening of the Elton-Odell theorem on the existence of $(1+\epsilon)$-separated sequences in the unit sphere $S_X$ of an infinite dimensional Banach space $X$ is proved: there exists an infinite subset $S\subseteq S_X$ and a constant $d>1$ satisfying the property that for every $x,y\in S$ with $x\neq y$ there exists $f\in B_{X^*}$ such that $d\leq f(x)-f(y)$ and $f(y)\leq f(z)\leq f(x)$ for all $z\in S$. Comment: 15 pages, to appear in Bulletin of the Hellenic Mathematical Society
Given a finite dimensional Banach space $X$ with $\dim X = n$ and an Auerbach basis of $X$, it is proved that there exists a set $D$ of $n + 1$ linear combinations (with coefficients $0, -1, +1$) of the members of the basis, so that each pair of distinct elements of $D$ is at distance greater than one. Comment: 15 pages. To appear in MATHEMATIKA
stat946w18/Implicit Causal Models for Genome-wide Association Studies
Revision as of 23:47, 20 April 2018
Contents
1 Introduction and Motivation
2 Implicit Causal Models
3 Implicit Causal Models with Latent Confounders
4 Likelihood-free Variational Inference
5 Empirical Study
6 Conclusion
7 Critique
8 References
9 Implicit causal model in Edward
Introduction and Motivation
There is currently much progress in probabilistic models which could lead to the development of rich generative models. The models have been applied with neural networks, implicit densities, and with scalable algorithms to very large data for their Bayesian inference. However, most of the models are focused on capturing statistical relationships rather than causal relationships. Causal relationships are relationships where one event is a result of another event, i.e. a cause and effect. Causal models give us a sense of how manipulating the generative process could change the final results.
Genome-wide association studies (GWAS) are examples of causal relationships. The genome is the complete set of DNA in an organism and contains information about the organism's attributes. Specifically, GWAS is about figuring out how genetic factors cause disease among humans. Here the genetic factors we are referring to are single nucleotide polymorphisms (SNPs), and getting a particular disease is treated as a trait, i.e., the outcome. In order to understand why a disease develops and to cure it, the causation between SNPs and diseases is investigated: first, predict which one or more SNPs cause the disease; second, target the selected SNPs to cure the disease.
The figure below depicts an example Manhattan plot for a GWAS. Each dot represents an SNP. The x-axis is the chromosome location, and the y-axis is the negative log of the association p-value between the SNP and the disease, so points with the largest values represent strongly associated risk loci.
This paper focuses on two challenges to combining modern probabilistic models and causality. The first is how to build rich causal models that meet the specific needs of GWAS. In general, probabilistic causal models involve a function [math]f[/math] and noise [math]n[/math]. For simplicity, we usually assume [math]f[/math] is a linear model with Gaussian noise. However, problems like GWAS require models with nonlinear, learnable interactions among the inputs and the noise.
The second challenge is how to address latent population-based confounders. Latent confounders are an issue when we apply causal models, since we can neither observe them nor know their underlying structure. For example, in GWAS, both latent population structure, i.e., subgroups in the population with ancestry differences, and relatedness among sample individuals produce spurious correlations between SNPs and the trait of interest. The existing methods cannot easily accommodate the complex latent structure.
For the first challenge, the authors develop implicit causal models, a class of causal models that leverages neural architectures with an implicit density. With GWAS, implicit causal models generalize previous methods to capture important nonlinearities, such as gene-gene and gene-population interaction. Building on this, for the second challenge, they describe an implicit causal model that adjusts for population-confounders by sharing strength across examples (genes).
There has been an increasing number of works on causal models which focus on causal discovery and typically have strong assumptions such as Gaussian processes on noise variable or nonlinearities for the main function.
Implicit Causal Models
Implicit causal models are an extension of probabilistic causal models. Probabilistic causal models will be introduced first.
Probabilistic Causal Models
Probabilistic causal models are built from deterministic functions of noise and other variables. Consider background noise [math]\epsilon[/math], representing unknown background quantities which are jointly independent, and a global variable [math]\beta[/math], some function of this noise, where
Each [math]\beta[/math] and [math]x[/math] is a function of noise; [math]y[/math] is a function of noise and [math]x[/math],
The target is the causal mechanism [math]f_y[/math], so that the causal effect [math]p(y|do(X=x),\beta)[/math] can be calculated. [math]do(X=x)[/math] means that we set [math]X[/math] to the value [math]x[/math] under the fixed structure [math]\beta[/math]. Following prior work, it is assumed that [math]p(y|do(x),\beta) = p(y|x, \beta)[/math].
An example of a probabilistic causal model is the additive noise model.
[math]f(.)[/math] is usually a linear function or spline functions for nonlinearities. [math]\epsilon[/math] is assumed to be standard normal, as well as [math]y[/math]. Thus the posterior [math]p(\theta | x, y, \beta)[/math] can be represented as
where [math]p(\theta)[/math] is the prior which is known. Then, variational inference or MCMC can be applied to calculate the posterior distribution.
Implicit Causal Models
The difference between implicit causal models and probabilistic causal models is the noise variable. Instead of using an additive noise term, implicit causal models directly take noise [math]\epsilon[/math] as input and outputs [math]x[/math] given parameter [math]\theta[/math].
[math] x = g(\epsilon | \theta), \quad \epsilon \sim s(\cdot) [/math]
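A minimal NumPy sketch of this sampling process (the architecture and sizes here are illustrative, not the authors' exact network): the generator consumes standard-normal noise and outputs [math]x[/math]; the density of [math]x[/math] is never written down, only defined implicitly by the pushforward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer generator g(epsilon | theta): noise in, sample out.
theta = {
    "W1": rng.normal(size=(8, 4)), "b1": np.zeros(4),
    "W2": rng.normal(size=(4, 1)), "b2": np.zeros(1),
}

def g(eps, theta):
    h = np.maximum(eps @ theta["W1"] + theta["b1"], 0.0)  # ReLU hidden layer
    return h @ theta["W2"] + theta["b2"]

eps = rng.normal(size=(1000, 8))  # epsilon ~ s(.) = N(0, I)
x = g(eps, theta)                 # samples from the implicit density
print(x.shape)                    # (1000, 1)
```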
The causal diagram has changed to:
They used a fully connected neural network with a fair number of hidden units to approximate each causal mechanism. Below is the formal description.
Implicit Causal Models with Latent Confounders
Previously, they assumed the global structure is observed. Next, the unobserved scenario is considered.
Causal Inference with a Latent Confounder
Similar to before, the interest is the causal effect [math]p(y|do(x_m), x_{-m})[/math]. Here, the SNPs other than [math]x_m[/math] are also under consideration. However, the effect is confounded by the unobserved confounder [math]z_n[/math]. As a result, the standard inference method cannot be used in this case.
The paper proposed a new method which include the latent confounders. For each subject [math]n=1,…,N[/math] and each SNP [math]m=1,…,M[/math],
The mechanism for latent confounder [math]z_n[/math] is assumed to be known. SNPs depend on the confounders and the trait depends on all the SNPs and the confounders as well.
The posterior of [math]\theta[/math] needs to be calculated in order to estimate the mechanism [math]g_y[/math] as well as the causal effect [math]p(y|do(x_m), x_{-m})[/math], so that it can be explained how changes to each SNP [math]X_m[/math] cause changes to the trait [math]Y[/math].
Note that the latent structure [math]p(z|x, y)[/math] is assumed known.
In general, causal inference with latent confounders can be dangerous: it uses the data twice, and thus it may bias the estimates of each arrow [math]X_m → Y[/math]. Why is this justified? This is answered below:
Proposition 1. Assume the causal graph of Figure 2 (left) is correct and that the true distribution resides in some configuration of the parameters of the causal model (Figure 2 (right)). Then the posterior [math]p(θ | x, y)[/math] provides a consistent estimator of the causal mechanism [math]f_y[/math].
Proposition 1 rigorizes previous methods in the framework of probabilistic causal models. The intuition is that as more SNPs arrive (“M → ∞, N fixed”), the posterior concentrates at the true confounders [math]z_n[/math], and thus we can estimate the causal mechanism given each data point’s confounder [math]z_n[/math]. As more data points arrive (“N → ∞, M fixed”), we can estimate the causal mechanism given any confounder [math]z_n[/math] as there is an infinity of them.
Implicit Causal Model with a Latent Confounder
This section is the algorithm and functions to implementing an implicit causal model for GWAS.
Generative Process of Confounders [math]z_n[/math].
The distribution of confounders is set as standard normal, [math]z_n \in R^K[/math], where [math]K[/math] is the dimension of [math]z_n[/math]; [math]K[/math] should be chosen so that the latent space is as close as possible to the true population structure.
Generative Process of SNPs [math]x_{nm}[/math].
Each SNP [math]x_{nm}[/math] is coded as an allele count of 0, 1, or 2.
The authors defined a [math]Binomial(2,\pi_{nm})[/math] distribution on [math]x_{nm}[/math] and used logistic factor analysis to design the SNP matrix.
A SNP matrix looks like this:
Since logistic factor analysis makes strong assumptions, this paper suggests using a neural network to relax these assumptions,
This renders the outputs to be a full [math]N \times M[/math] matrix due to the variables [math]w_m[/math], which act like principal components in PCA. Here, [math]\phi[/math] has a standard normal prior distribution. The weights [math]w[/math] and biases [math]\phi[/math] are shared over the [math]m[/math] SNPs and [math]n[/math] individuals, which makes it possible to learn nonlinear interactions between [math]z_n[/math] and [math]w_m[/math].
Generative Process of Traits [math]y_n[/math].
Previously, each trait was modeled by a linear regression,
This also has very strong assumptions on SNPs, interactions, and additive noise. It can also be replaced by a neural network which only outputs a scalar,
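The SNP and trait generative processes can be sketched together in NumPy (layer sizes, priors on the weights, and the ReLU nonlinearity are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 5, 10, 3  # individuals, SNPs, confounder dimension

def mlp(inp, w1, w2):
    return np.maximum(inp @ w1, 0.0) @ w2  # two-layer net, ReLU hidden layer

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

z = rng.normal(size=(N, K))  # confounders z_n ~ N(0, I)
w = rng.normal(size=(M, K))  # per-SNP variables w_m

# SNPs: pi_nm = sigmoid(NN([z_n, w_m])), x_nm ~ Binomial(2, pi_nm)
w1s, w2s = rng.normal(size=(2 * K, 16)), rng.normal(size=(16, 1))
pairs = np.concatenate([np.repeat(z, M, 0), np.tile(w, (N, 1))], axis=1)
pi = sigmoid(mlp(pairs, w1s, w2s)).reshape(N, M)
x = rng.binomial(2, pi)      # each genotype is 0, 1, or 2

# Trait: scalar output of a net on the SNPs, the confounder, and noise.
w1t, w2t = rng.normal(size=(M + K + 1, 16)), rng.normal(size=(16, 1))
eps = rng.normal(size=(N, 1))
y = mlp(np.concatenate([x, z, eps], axis=1), w1t, w2t)
print(x.shape, y.shape)      # (5, 10) (5, 1)
```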
Likelihood-free Variational Inference
Calculating the posterior of [math]\theta[/math] is the key of applying the implicit causal model with latent confounders.
can be reduced to
However, with implicit models, integrating over a nonlinear function is intractable. The authors therefore applied likelihood-free variational inference (LFVI). LFVI proposes a family of distributions over the latent variables. Here the variables [math]w_m[/math] and [math]z_n[/math] are all assumed to be Normal,
For LFVI applied to GWAS, an algorithm similar to the EM algorithm is used:
Empirical Study
The authors performed simulations on 100,000 SNPs, 940 to 5,000 individuals, and across 100 replications of 11 settings. Four methods were compared:
implicit causal model (ICM); PCA with linear regression (PCA); a linear mixed model (LMM); logistic factor analysis with inverse regression (GCAT).
The feedforward neural networks for traits and SNPs are fully connected with two hidden layers using ReLU activation function, and batch normalization.
Simulation Study
Based on real genomic data, a true model is applied to generate the SNPs and traits for each configuration. There are four datasets used in this simulation study:
HapMap [Balding-Nichols model]
1000 Genomes Project (TGP) [PCA]
Human Genome Diversity Project (HGDP) [PCA]
HGDP [Pritchard-Stephens-Donnelly model]
A latent spatial position of individuals for population structure [spatial]
The table shows the prediction accuracy. The accuracy is calculated as the number of true positives divided by the number of true positives plus false positives. True positives measure the proportion of positives that are correctly identified as such (e.g. the percentage of SNPs which are correctly identified as having a causal relation with the trait). In contrast, false positives state that SNPs have a causal relation with the trait when they don't. The closer the rate is to 1, the better the model, since false positives are wrong predictions.
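The accuracy figure described here is precision; as a formula (the counts are illustrative, not from the paper):

```python
def precision(true_positives, false_positives):
    # proportion of SNPs reported as causal that are truly causal
    return true_positives / (true_positives + false_positives)

# e.g. 45 correctly identified causal SNPs and 5 spurious ones:
print(precision(45, 5))  # 0.9
```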
The results presented above show that the implicit causal model has the best performance among these four models in every situation. In particular, the other models tend to do poorly on PSD and Spatial when [math]a[/math] is small, but the ICM still achieves a significantly higher rate. The only method comparable to ICM is GCAT, on the simpler configurations.
Real-data Analysis
They also applied ICM to GWAS of the Northern Finland Birth Cohort, which measures 10 metabolic traits and contains 324,160 SNPs and 5,027 individuals. The data came from the database of Genotypes and Phenotypes (dbGaP), with the same preprocessing as in Song et al. Ten implicit causal models were fitted, one for each trait to be modeled. For each of the 10 implicit causal models the dimension of the confounders was set to six, the same as used in the paper by Song et al. The SNP network used 512 hidden units in both layers and the trait network used 32 and 256; see Table 2 for the comparable models.
The numbers in the above table are the number of significant loci for each of the 10 traits. The number for other methods, such as GCAT, LMM, PCA, and "uncorrected" (association tests without accounting for hidden relatedness of study samples) are obtained from other papers. By comparison, the ICM reached the level of the best previous model for each trait.
Conclusion
This paper introduced implicit causal models in order to account for nonlinear, complex causal relationships, and applied the method to GWAS. The model can not only capture important interactions between genes within an individual and at the population level, but can also adjust for latent confounders by incorporating latent variables into the model.
In the simulation study, the authors showed that the implicit causal model could beat other methods by 15-45.3% on a variety of datasets with variations on parameters.
The authors also believed this GWAS application is only the start of the usage of implicit causal models. The authors suggest that it might also be successfully used in the design of dynamic theories in high-energy physics or for modeling discrete choices in economics.
Critique
This paper is an interesting and novel work. The main contribution of this paper is to connect the statistical genetics and the machine learning methodology. The method is technically sound and does indeed generalize techniques currently used in statistical genetics.
The neural network used in this paper is a very simple feed-forward 2 hidden-layer neural network, but the idea of where to use the neural network is crucial and might be significant in GWAS.
It has limitations as well. The empirical example in this paper is too simple, and far from a realistic situation. Despite the simulation study showing some competitive results, the Northern Finland Birth Cohort application did not demonstrate an advantage of the implicit causal model over previous methods, such as GCAT or LMM.
Another limitation concerns linkage disequilibrium, as the authors stated as well. SNPs are not completely independent of each other; alleles at nearby loci are usually correlated. The authors did not consider this complex case; rather, they only considered the simplest case, where all the SNPs are assumed independent.
Furthermore, a single SNP may not have enough power to explain the causal relationship. Recent papers indicate that causation of a trait may involve multiple SNPs. This could be future work as well.
References
Tran D, Blei D M. Implicit Causal Models for Genome-wide Association Studies[J]. arXiv preprint arXiv:1710.10742, 2017.
Patrik O Hoyer, Dominik Janzing, Joris M Mooij, Jonas Peters, and Prof Bernhard Schölkopf. Non- linear causal discovery with additive noise models. In Neural Information Processing Systems, 2009.
Alkes L Price, Nick J Patterson, Robert M Plenge, Michael E Weinblatt, Nancy A Shadick, and David Reich. Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics, 38(8):904–909, 2006.
Minsun Song, Wei Hao, and John D Storey. Testing for genetic associations in arbitrarily structured populations. Nature, 47(5):550–554, 2015.
Dustin Tran, Rajesh Ranganath, and David M Blei. Hierarchical implicit models and likelihood-free variational inference. In Neural Information Processing Systems, 2017.
Sound, like all waves, travels at a certain speed and has the properties of frequency and wavelength. You can observe direct evidence of the speed of sound while watching a fireworks display. The flash of an explosion is seen well before its sound is heard, implying both that sound travels at a finite speed and that it is much slower than light. You can also directly sense the frequency of a sound. Perception of frequency is called pitch. The wavelength of sound is not directly sensed, but indirect evidence is found in the correlation of the size of musical instruments with their pitch. Small instruments, such as a piccolo, typically make high-pitch sounds, while large instruments, such as a tuba, typically make low-pitch sounds. High pitch means small wavelength, and the size of a musical instrument is directly related to the wavelengths of sound it produces. So a small instrument creates short-wavelength sounds. Similar arguments hold that a large instrument creates long-wavelength sounds.
Figure \(\PageIndex{1}\): When a firework explodes, the light energy is perceived before the sound energy. Sound travels more slowly than light does. (credit: Dominic Alves, Flickr)
The relationship of the speed of sound, its frequency, and wavelength is the same as for all waves:
\[v_w = f\lambda,\]
where \(v_w\) is the speed of sound, \(f\) is its frequency, and \(\lambda\) is its wavelength. The wavelength of a sound is the distance between adjacent identical parts of a wave—for example, between adjacent compressions as illustrated in Figure \(\PageIndex{2}\). The frequency is the same as that of the source and is the number of waves that pass a point per unit time.
Figure \(\PageIndex{2}\): A sound wave emanates from a source vibrating at a frequency \(f\), propagates at \(v_w\), and has a wavelength \(\lambda\).
Table \(\PageIndex{1}\) makes it apparent that the speed of sound varies greatly in different media. The speed of sound in a medium is determined by a combination of the medium’s rigidity (or compressibility in gases) and its density. The more rigid (or less compressible) the medium, the faster the speed of sound. This observation is analogous to the fact that the frequency of a simple harmonic motion is directly proportional to the stiffness of the oscillating object. The greater the density of a medium, the slower the speed of sound. This observation is analogous to the fact that the frequency of a simple harmonic motion is inversely proportional to the mass of the oscillating object. The speed of sound in air is low, because air is compressible. Because liquids and solids are relatively rigid and very difficult to compress, the speed of sound in such media is generally greater than in gases.
Table \(\PageIndex{1}\): Speed of sound in various media. Medium — \(v_w\) (m/s)
Gases at \(0^oC\): Air 331; Carbon dioxide 259; Oxygen 316; Helium 965; Hydrogen 1290
Liquids at \(20^oC\): Ethanol 1160; Mercury 1450; Water, fresh 1480; Sea water 1540; Human tissue 1540
Solids (longitudinal or bulk): Vulcanized rubber 54; Polyethylene 920; Marble 3810; Glass, Pyrex 5640; Lead 1960; Aluminum 5120; Steel 5960
Earthquakes, essentially sound waves in Earth’s crust, are an interesting example of how the speed of sound depends on the rigidity of the medium. Earthquakes have both longitudinal and transverse components, and these travel at different speeds. The bulk modulus of granite is greater than its shear modulus. For that reason, the speed of longitudinal or pressure waves (P-waves) in earthquakes in granite is significantly higher than the speed of transverse or shear waves (S-waves). Both components of earthquakes travel slower in less rigid material, such as sediments. P-waves have speeds of 4 to 7 km/s, and S-waves correspondingly range in speed from 2 to 5 km/s, both being faster in more rigid material. The P-wave gets progressively farther ahead of the S-wave as they travel through Earth’s crust. The time between the P- and S-waves is routinely used to determine the distance to their source, the epicenter of the earthquake.
The speed of sound is affected by temperature in a given medium. For air at sea level, the speed of sound is given by
\[v_w = (331 \, m/s)\sqrt{\dfrac{T}{273 \, K}},\]
where the temperature (denoted as \(T\)) is in units of kelvin. The speed of sound in gases is related to the average speed of particles in the gas, \(v_{rms}\), given by
\[v_{rms} = \sqrt{\dfrac{3 \, kT}{m}},\]
where \(k\) is the Boltzmann constant \((1.38 \times 10^{-23} \, J/K)\) and \(m\) is the mass of each (identical) particle in the gas. So, it is reasonable that the speed of sound in air and other gases should depend on the square root of temperature. While not negligible, this is not a strong dependence. At \(0^oC\), the speed of sound is 331 m/s, whereas at \(20^oC\) it is 343 m/s, less than a 4% increase. Figure \(\PageIndex{3}\) shows a use of the speed of sound by a bat to sense distances. Echoes are also used in medical imaging.
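The temperature dependence quoted here is easy to verify numerically:

```python
import math

def speed_of_sound_air(T_kelvin):
    """Speed of sound in air at sea level, v_w = (331 m/s) * sqrt(T / 273 K)."""
    return 331.0 * math.sqrt(T_kelvin / 273.0)

print(round(speed_of_sound_air(273.0), 1))  # 331.0 m/s at 0 °C
print(round(speed_of_sound_air(293.0), 1))  # about 343 m/s at 20 °C, under a 4% increase
```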
Figure \(\PageIndex{3}\): A bat uses sound echoes to find its way about and to catch prey. The time for the echo to return is directly proportional to the distance.
One of the more important properties of sound is that its speed is nearly independent of frequency. This independence is certainly true in open air for sounds in the audible range of 20 to 20,000 Hz. If this independence were not true, you would certainly notice it for music played by a marching band in a football stadium, for example. Suppose that high-frequency sounds traveled faster—then the farther you were from the band, the more the sound from the low-pitch instruments would lag that from the high-pitch ones. But the music from all instruments arrives in cadence independent of distance, and so all frequencies must travel at nearly the same speed. Recall that
\[v_w = f\lambda.\]
In a given medium under fixed conditions, \(v_w\) is constant, so that there is a relationship between \(f\) and \(\lambda\); the higher the frequency, the smaller the wavelength. See Figure \(\PageIndex{4}\) and consider the following example.
Figure \(\PageIndex{4}\): Because they travel at the same speed in a given medium, low-frequency sounds must have a greater wavelength than high-frequency sounds. Here, the lower-frequency sounds are emitted by the large speaker, called a woofer, while the higher-frequency sounds are emitted by the small speaker, called a tweeter.
Example \(\PageIndex{1}\): Calculating Wavelengths: What Are the Wavelengths of Audible Sounds?
Calculate the wavelengths of sounds at the extremes of the audible range, 20 and 20,000 Hz, in \(30.0^oC\) air. (Assume that the frequency values are accurate to two significant figures.)
Strategy
To find wavelength from frequency, we can use \(v_w = f\lambda\).
Solution
1. Identify knowns. The value for \(v_w\) is given by \[v_w = (331 \, m/s)\sqrt{\dfrac{T}{273 \, K}}. \nonumber\]
2. Convert the temperature into kelvin and then enter the temperature into the equation: \[v_w = (331 \, m/s)\sqrt{\dfrac{303 \, K}{273 \, K}} = 348.7 \, m/s. \nonumber\]
3. Solve the relationship between speed and wavelength for \(\lambda\): \[\lambda = \dfrac{v_w}{f}. \nonumber \]
4. Enter the speed and the minimum frequency to give the maximum wavelength: \[\lambda_{max} = \dfrac{348.7 \, m/s}{20 \, Hz} = 17 \, m. \nonumber\]
5. Enter the speed and the maximum frequency to give the minimum wavelength: \[\lambda_{min} = \dfrac{348.7 \, m/s}{20,000 \, Hz} = 0.017 \, m = 1.7 \, cm. \nonumber\]
Discussion
Because the product of \(f\) multiplied by \(\lambda\) equals a constant, the smaller \(f\) is, the larger \(\lambda\) must be, and vice versa.
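The arithmetic in the Solution above can be reproduced in a few lines (a sketch following the same steps):

```python
import math

v_w = 331.0 * math.sqrt(303.0 / 273.0)   # 30.0 degrees C = 303 K -> 348.7 m/s

lam_max = v_w / 20.0       # lowest audible frequency gives the longest wavelength
lam_min = v_w / 20000.0    # highest audible frequency gives the shortest wavelength

# To two significant figures these are 17 m and 1.7 cm, as in the text
print(round(v_w, 1), round(lam_max, 1), round(lam_min * 100, 1))
```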
The speed of sound can change when sound travels from one medium to another. However, the frequency usually remains the same because it is like a driven oscillation and has the frequency of the original source. If \(v_w\) changes and \(f\) remains the same, then the wavelength \(\lambda\) must change. That is, because \(v_w = f\lambda\), the higher the speed of a sound, the greater its wavelength for a given frequency.
MAKING CONNECTIONS: TAKE-HOME INVESTIGATION - VOICE AS A SOUND WAVE
Suspend a sheet of paper so that the top edge of the paper is fixed and the bottom edge is free to move. You could tape the top edge of the paper to the edge of a table. Gently blow near the edge of the bottom of the sheet and note how the sheet moves. Speak softly and then louder such that the sounds hit the edge of the bottom of the paper, and note how the sheet moves. Explain the effects.
Exercise \(\PageIndex{1A}\)
Imagine you observe two fireworks explode. You hear the explosion of one as soon as you see it. However, you see the other firework for several milliseconds before you hear the explosion. Explain why this is so.
Answer
Sound and light both travel at definite speeds. The speed of sound is slower than the speed of light. The first firework is probably very close by, so the speed difference is not noticeable. The second firework is farther away, so the light arrives at your eyes noticeably sooner than the sound wave arrives at your ears.
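To make the answer concrete, here is a small sketch with hypothetical numbers (343 m/s is the speed of sound in 20 °C air; the 1.5 s delay is invented for illustration):

```python
v_sound = 343.0    # m/s, speed of sound in 20 degree C air
c_light = 3.0e8    # m/s, speed of light

delay = 1.5        # s, hypothetical time between seeing the flash and hearing it

# Light covers the distance almost instantly, so the delay is essentially
# the sound's travel time.
distance = v_sound * delay
light_time = distance / c_light
print(distance, light_time < 1e-5)  # about 514 m away; light took ~2 microseconds
```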
Exercise \(\PageIndex{1B}\)
You observe two musical instruments that you cannot identify. One plays high-pitch sounds and the other plays low-pitch sounds. How could you determine which is which without hearing either of them play?
Answer
Compare their sizes. High-pitch instruments are generally smaller than low-pitch instruments because they generate a smaller wavelength.
Summary
- The relationship of the speed of sound \(v_w\), its frequency \(f\), and its wavelength \(\lambda\) is given by \(v_w = f\lambda,\) which is the same relationship given for all waves.
- In air, the speed of sound is related to air temperature \(T\) by \(v_w = (331 \, m/s) \sqrt{\dfrac{T}{273 \, K}}.\)
- \(v_w\) is the same for all frequencies and wavelengths.
Glossary
pitch: the perception of the frequency of a sound
Contributors
Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0). |
stat946w18/Implicit Causal Models for Genome-wide Association Studies
Revision as of 23:48, 20 April 2018

Contents
1 Introduction and Motivation
2 Implicit Causal Models
3 Implicit Causal Models with Latent Confounders
4 Likelihood-free Variational Inference
5 Empirical Study
6 Conclusion
7 Critique
8 References
9 Implicit causal model in Edward

Introduction and Motivation
There is currently much progress in probabilistic models, which could lead to the development of rich generative models. These models have been combined with neural networks and implicit densities, and scaled to very large data with scalable Bayesian inference algorithms. However, most of these models focus on capturing statistical relationships rather than causal relationships. Causal relationships are relationships where one event is a result of another event, i.e., a cause and effect. Causal models give us a sense of how manipulating the generative process could change the final results.
Genome-wide association studies (GWAS) are examples of causal relationships. A genome is the complete set of DNA in an organism and contains information about the organism's attributes. Specifically, GWAS is about figuring out how genetic factors cause disease among humans. Here the genetic factors we are referring to are single nucleotide polymorphisms (SNPs), and getting a particular disease is treated as a trait, i.e., the outcome. To understand why a disease develops and how to cure it, the causation between SNPs and diseases is investigated: first, predict which one or more SNPs cause the disease; second, target the selected SNPs to cure the disease.
The figure below depicts an example Manhattan plot for a GWAS. Each dot represents an SNP. The x-axis is the chromosome location, and the y-axis is the negative log of the association p-value between the SNP and the disease, so points with the largest values represent strongly associated risk loci.
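The y-axis values of such a plot are easy to reproduce. The sketch below uses made-up p-values (not real GWAS data) together with the conventional 5×10⁻⁸ genome-wide significance threshold:

```python
import math, random

random.seed(0)
# Hypothetical association p-values for six SNPs; the last one is a planted hit
p_values = [random.uniform(1e-4, 1.0) for _ in range(5)] + [5e-9]

neg_log10 = [-math.log10(p) for p in p_values]
threshold = -math.log10(5e-8)   # conventional genome-wide significance line

hits = [i for i, v in enumerate(neg_log10) if v > threshold]
print(hits)  # only the planted SNP clears the line
```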
This paper focuses on two challenges to combining modern probabilistic models and causality. The first one is how to build rich causal models with specific needs by GWAS. In general, probabilistic causal models involve a function [math]f[/math] and a noise [math]n[/math]. For working simplicity, we usually assume [math]f[/math] as a linear model with Gaussian noise. However problems like GWAS require models with nonlinear, learnable interactions among the inputs and the noise.
The second challenge is how to address latent population-based confounders. Latent confounders are issues when we apply the causal models since we cannot observe them nor know the underlying structure. For example, in GWAS, both latent population structure, i.e., subgroups in the population with ancestry differences, and relatedness among sample individuals produce spurious correlations among SNPs to the trait of interest. The existing methods cannot easily accommodate the complex latent structure.
For the first challenge, the authors develop implicit causal models, a class of causal models that leverages neural architectures with an implicit density. With GWAS, implicit causal models generalize previous methods to capture important nonlinearities, such as gene-gene and gene-population interaction. Building on this, for the second challenge, they describe an implicit causal model that adjusts for population-confounders by sharing strength across examples (genes).
There has been an increasing number of works on causal models which focus on causal discovery and typically have strong assumptions such as Gaussian processes on noise variable or nonlinearities for the main function.
Implicit Causal Models
Implicit causal models are an extension of probabilistic causal models. Probabilistic causal models will be introduced first.
Probabilistic Causal Models
Probabilistic causal models have two parts: noise variables, and deterministic functions of the noise and of other variables. Consider background noise [math]\epsilon[/math], representing unknown background quantities which are jointly independent, and a global variable [math]\beta[/math], some function of this noise, where
Each [math]\beta[/math] and [math]x[/math] is a function of noise; [math]y[/math] is a function of noise and [math]x[/math],
The target is the causal mechanism [math]f_y[/math] so that the causal effect [math]p(y|do(X=x),\beta)[/math] can be calculated. [math]do(X=x)[/math] means that we specify a value of [math]X[/math] under the fixed structure [math]\beta[/math]. By other paper’s work, it is assumed that [math]p(y|do(x),\beta) = p(y|x, \beta)[/math].
An example of a probabilistic causal model is the additive noise model.
[math]f(.)[/math] is usually a linear function or spline functions for nonlinearities. [math]\epsilon[/math] is assumed to be standard normal, as well as [math]y[/math]. Thus the posterior [math]p(\theta | x, y, \beta)[/math] can be represented as
where [math]p(\theta)[/math] is the prior which is known. Then, variational inference or MCMC can be applied to calculate the posterior distribution.
Implicit Causal Models
The difference between implicit causal models and probabilistic causal models is the noise variable. Instead of using an additive noise term, implicit causal models directly take noise [math]\epsilon[/math] as input and outputs [math]x[/math] given parameter [math]\theta[/math].
[math]x = g(\epsilon \mid \theta), \quad \epsilon \sim s(\cdot)[/math]
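A minimal sketch of this idea (the network sizes and weights here are invented; only the structure, i.e. sample noise and push it through a learnable nonlinear map, follows the model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters theta: one hidden layer of 8 ReLU units
W1 = rng.normal(size=(3, 8))
W2 = rng.normal(size=(8, 1))

def g(eps):
    """x = g(eps | theta): the density of x is implicit; we can only sample it."""
    h = np.maximum(eps @ W1, 0.0)   # nonlinear interaction with the noise
    return h @ W2

eps = rng.normal(size=(100, 3))     # eps ~ s(.), here a standard normal
x = g(eps)
print(x.shape)   # (100, 1)
```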
The causal diagram has changed to:
They used a fully connected neural network with a fair number of hidden units to approximate each causal mechanism. Below is the formal description:

Implicit Causal Models with Latent Confounders
Previously, they assumed the global structure is observed. Next, the unobserved scenario is being considered.
Causal Inference with a Latent Confounder
Similar to before, the quantity of interest is the causal effect [math]p(y|do(x_m), x_{-m})[/math]. Here, the SNPs other than [math]x_m[/math] are also under consideration. However, the effect is confounded by the unobserved confounder [math]z_n[/math]. As a result, the standard inference method cannot be used in this case.
The paper proposed a new method which include the latent confounders. For each subject [math]n=1,…,N[/math] and each SNP [math]m=1,…,M[/math],
The mechanism for latent confounder [math]z_n[/math] is assumed to be known. SNPs depend on the confounders and the trait depends on all the SNPs and the confounders as well.
The posterior of [math]\theta[/math] needs to be calculated in order to estimate the mechanism [math]g_y[/math] as well as the causal effect [math]p(y|do(x_m), x_{-m})[/math], so that we can explain how changes to each SNP [math]X_m[/math] cause changes to the trait [math]Y[/math].
Note that the latent structure [math]p(z|x, y)[/math] is assumed known.
In general, causal inference with latent confounders can be dangerous: it uses the data twice, and thus it may bias the estimates of each arrow [math]X_m → Y[/math]. Why is this justified? This is answered below:
Proposition 1. Assume the causal graph of Figure 2 (left) is correct and that the true distribution resides in some configuration of the parameters of the causal model (Figure 2 (right)). Then the posterior [math]p(θ | x, y)[/math] provides a consistent estimator of the causal mechanism [math]f_y[/math].
Proposition 1 rigorizes previous methods in the framework of probabilistic causal models. The intuition is that as more SNPs arrive (“M → ∞, N fixed”), the posterior concentrates at the true confounders [math]z_n[/math], and thus we can estimate the causal mechanism given each data point’s confounder [math]z_n[/math]. As more data points arrive (“N → ∞, M fixed”), we can estimate the causal mechanism given any confounder [math]z_n[/math] as there is an infinity of them.
Implicit Causal Model with a Latent Confounder
This section is the algorithm and functions to implementing an implicit causal model for GWAS.
Generative Process of Confounders [math]z_n[/math].
The distribution of the confounders is set as standard normal, [math]z_n \in R^K[/math], where [math]K[/math] is the dimension of [math]z_n[/math]; [math]K[/math] should be chosen so that the latent space matches the true population structure as closely as possible.
Generative Process of SNPs [math]x_{nm}[/math].
Each SNP [math]x_{nm}[/math] is coded as a count in {0, 1, 2}, the number of copies of the minor allele that individual [math]n[/math] carries at locus [math]m[/math]. Accordingly, the authors defined a [math]Binomial(2,\pi_{nm})[/math] distribution on [math]x_{nm}[/math], and used logistic factor analysis to design the SNP matrix.
A SNP matrix looks like this:
Since logistic factor analysis makes strong assumptions, this paper suggests using a neural network to relax these assumptions,
This renders the output a full [math]N \times M[/math] matrix due to the variables [math]w_m[/math], which act like principal components in PCA. Here, [math]\phi[/math] has a standard normal prior distribution. The weights [math]w[/math] and biases [math]\phi[/math] are shared over the [math]m[/math] SNPs and [math]n[/math] individuals, which makes it possible to learn nonlinear interactions between [math]z_n[/math] and [math]w_m[/math].
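A sketch of the generative process for the SNP matrix (all dimensions are hypothetical, and a simple logistic-factor-analysis-style inner product stands in for the neural network the paper actually uses):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 50, 200, 3           # individuals, SNPs, latent dimension (hypothetical)

z = rng.normal(size=(N, K))    # confounders z_n, standard normal
w = rng.normal(size=(M, K))    # per-SNP variables w_m

pi = 1.0 / (1.0 + np.exp(-(z @ w.T)))   # success probabilities pi_nm
x = rng.binomial(2, pi)                 # x_nm ~ Binomial(2, pi_nm)

print(x.shape)   # (50, 200), with entries in {0, 1, 2}
```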
Generative Process of Traits [math]y_n[/math].
Previously, each trait is modeled by a linear regression,
This also has very strong assumptions on SNPs, interactions, and additive noise. It can also be replaced by a neural network which only outputs a scalar,
Likelihood-free Variational Inference
Calculating the posterior of [math]\theta[/math] is the key of applying the implicit causal model with latent confounders.
can be reduced to
However, with implicit models, integrating over a nonlinear function is intractable. The authors applied likelihood-free variational inference (LFVI). LFVI proposes a family of distributions over the latent variables. Here the variables [math]w_m[/math] and [math]z_n[/math] are all assumed to be Normal,
For LFVI applied to GWAS, an algorithm similar to the EM algorithm is used:
Empirical Study
The authors performed simulations on 100,000 SNPs, 940 to 5,000 individuals, and across 100 replications of 11 settings. Four methods were compared:
implicit causal model (ICM); PCA with linear regression (PCA); a linear mixed model (LMM); logistic factor analysis with inverse regression (GCAT).
The feedforward neural networks for traits and SNPs are fully connected with two hidden layers using ReLU activation function, and batch normalization.
Simulation Study
Based on real genomic data, a true model is applied to generate the SNPs and traits for each configuration. There are four datasets used in this simulation study:
HapMap [Balding-Nichols model]
1000 Genomes Project (TGP) [PCA]
Human Genome Diversity Project (HGDP) [PCA]
HGDP [Pritchard-Stephens-Donnelly model]
A latent spatial position of individuals for population structure [spatial]

The table shows the prediction accuracy, calculated as the number of true positives divided by the number of true positives plus false positives. True positives are the SNPs correctly identified as having a causal relation with the trait; false positives are SNPs reported as causal when they are not. The closer the rate is to 1, the better the model, since false positives count as wrong predictions.
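The metric described above is precision; with hypothetical counts:

```python
def precision(true_pos, false_pos):
    """True positives divided by all SNPs flagged as causal."""
    return true_pos / (true_pos + false_pos)

# Hypothetical: 45 correctly flagged causal SNPs and 5 false alarms
print(precision(45, 5))   # 0.9
```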
The results above show that the implicit causal model has the best performance among the four models in every situation. In particular, the other models tend to do poorly on PSD and Spatial when [math]a[/math] is small, whereas the ICM still achieves a significantly higher rate. The only method comparable to the ICM is GCAT, on the simpler configurations.
Real-data Analysis
They also applied the ICM to a GWAS of the Northern Finland Birth Cohort, which measures 10 metabolic traits and contains 324,160 SNPs and 5,027 individuals. The data came from the database of Genotypes and Phenotypes (dbGaP), with the same preprocessing as Song et al. Ten implicit causal models were fitted, one for each trait. For each of the 10 models, the dimension of the confounders was set to six, the same as in Song et al. The SNP network used 512 hidden units in both layers, and the trait network used 32 and 256. Table 2 compares against the other models.
The numbers in the above table are the number of significant loci for each of the 10 traits. The number for other methods, such as GCAT, LMM, PCA, and "uncorrected" (association tests without accounting for hidden relatedness of study samples) are obtained from other papers. By comparison, the ICM reached the level of the best previous model for each trait.
Conclusion
This paper introduced implicit causal models in order to account for nonlinear complex causal relationships, and applied the method to GWAS. It can not only capture important interactions between genes within an individual and among population level, but also can adjust for latent confounders by taking account of the latent variables into the model.
In the simulation study, the authors showed that the implicit causal model outperforms the other methods by 15-45.3% on a variety of datasets with variations on parameters.
The authors also believed this GWAS application is only the start of the usage of implicit causal models. The authors suggest that it might also be successfully used in the design of dynamic theories in high-energy physics or for modeling discrete choices in economics.
Critique
This paper is an interesting and novel work. The main contribution of this paper is to connect the statistical genetics and the machine learning methodology. The method is technically sound and does indeed generalize techniques currently used in statistical genetics.
The neural network used in this paper is a very simple feed-forward 2 hidden-layer neural network, but the idea of where to use the neural network is crucial and might be significant in GWAS.
It has limitations as well. The empirical example in this paper is quite simple and far from realistic situations. Although the simulation study shows some competitive results, the Northern Finland Birth Cohort application did not demonstrate an advantage of the implicit causal model over previous methods such as GCAT or LMM.
Another limitation concerns linkage disequilibrium, as the authors state as well. SNPs are not completely independent of each other; alleles at nearby loci are usually correlated. The paper does not consider this complex case; it only considers the simplest case, in which all SNPs are assumed independent.
Furthermore, a single SNP may not have enough power to explain the causal relationship. Recent papers indicate that causation of a trait may involve multiple SNPs. This could be future work as well.
References
Tran D, Blei D M. Implicit Causal Models for Genome-wide Association Studies[J]. arXiv preprint arXiv:1710.10742, 2017.
Patrik O Hoyer, Dominik Janzing, Joris M Mooij, Jonas Peters, and Prof Bernhard Schölkopf. Non- linear causal discovery with additive noise models. In Neural Information Processing Systems, 2009.
Alkes L Price, Nick J Patterson, Robert M Plenge, Michael E Weinblatt, Nancy A Shadick, and David Reich. Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics, 38(8):904–909, 2006.
Minsun Song, Wei Hao, and John D Storey. Testing for genetic associations in arbitrarily structured populations. Nature Genetics, 47(5):550–554, 2015.
Dustin Tran, Rajesh Ranganath, and David M Blei. Hierarchical implicit models and likelihood-free variational inference. In Neural Information Processing Systems, 2017. |
The function $f(x)$ is continuous and differentiable in $[0,1]$ if $f'(x)\le 10$ for all $x\in[0,1]$ and $f(0)=0$,
What is the maximum possible value of $f(x)$ for $x\in [0,1]$ ?
Any help would be greatly appreciated, thanks.
$\forall x\in [0,1]$ we have
$\frac{f(x)-f(0)}{x-0}\le 10$, because by Lagrange's mean value theorem there is a $c\in (0,x)$ such that $\frac{f(x)-f(0)}{x-0}=f'(c)\le 10$.
$\therefore f(x)\le 10x$ on $[0,1]$, so the maximum possible value of $f(x)$ is $10$, attained at $x=1$ (for example by $f(x)=10x$).
Hope it helps:) |
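A quick numeric sanity check of the bound (illustrative only): build an $f$ by accumulating a derivative that never exceeds $10$, and confirm $f(x)\le 10x$ throughout, with the maximum value $10$ reached by the extremal choice $f(x)=10x$.

```python
import math

n = 1000
h = 1.0 / n
f = 0.0
bound_holds = True
for i in range(1, n + 1):
    x = i * h
    # f'(x) = 10 cos(x) <= 10, so f(x) = 10 sin(x) should satisfy f(x) <= 10x
    f += 10.0 * math.cos(x - h / 2.0) * h   # midpoint-rule accumulation
    if f > 10.0 * x + 1e-9:
        bound_holds = False

print(bound_holds, 10.0 * 1.0)   # the bound holds; f(x) = 10x gives max 10
```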
We imagine a slippery (no friction) bar of soap slithering around in a conical basin. An isolated bar of soap in intergalactic space would require three coordinates to specify its position at any time, but, if it is subject to the holonomic constraint that it is to be in contact at all times with a conical basin, its position at any time can be specified with just two coordinates. I shall, first of all, analyse the problem with a newtonian approach, and then, for comparison, I shall analyse it using lagrangian methods. Either way, we start with a large diagram. In the newtonian approach we mark in the forces in red and the accelerations in green. See Figure XIII.2. The semi vertical angle of the cone is \( \alpha\).
The two coordinates that we need are \( r\), the distance from the vertex, and the azimuthal angle \( \phi\), which I’ll ask you to imagine, measured around the vertical axis from some arbitrary origin. The two forces are the weight \( mg\) and the normal reaction \( R\) of the basin on the soap. The accelerations are \( \ddot{r}\) and the centripetal acceleration as the soap moves at angular speed \( \dot{\phi}\) in a circle of radius \( r \sin \alpha\) is \( r \sin \alpha \dot{\phi}^{2}\).
We can write the newtonian equation of motion in various directions:
Horizontal: \( R\cos\alpha=m(r\sin\alpha\dot{\phi}^{2}-\ddot{r}\sin\alpha)\)
i.e.
\[ R=m\tan\alpha(r\dot{\phi}^{2}-\ddot{r}). \label{13.6.1}\]
Vertical:
\[ R\sin\alpha-mg=m\ddot{r}\cos\alpha. \label{13.6.2}\]
Perpendicular to surface:
\[ R-mg\sin\alpha=mr\sin\alpha\cos\alpha\dot{\phi}^{2}. \label{13.6.3}\]
Parallel to surface:
\[ g\cos\alpha=r\sin^{2}\alpha\dot{\phi}^{2}-\ddot{r}. \label{13.6.4}\]
Only two of these are independent, and we can choose to use whichever two we want to at our convenience. There are, however, three quantities that we may wish to determine, namely the two coordinates \( r\) and \( \phi\), and the normal reaction \( R\). Thus we need another equation. We note that, since there are no azimuthal forces, the angular momentum per unit mass, which is \( r^{2}\sin^{2}\alpha \dot{\phi}\), is conserved, and therefore \( r^{2}\dot{\phi}\) is constant and equal to its initial value, which I’ll call \( l^{2}\Omega\). That is, we start off at a distance \( l\) from the vertex with an initial angular speed \( \Omega\). Thus we have as our third independent equation
\[ r^{2}\dot{\phi}=l^{2}\Omega \label{13.6.5}\]
This last equation shows that \( \dot{\phi}\rightarrow\infty\) as \( r\rightarrow 0\).
One possible type of motion is circular motion at constant height (put \( \ddot{r}=0\)). From Equations \( \ref{13.6.1}\) and \( \ref{13.6.2}\) it is easily found that the condition for this is that
\[ r\dot{\phi}^{2}=\frac{g}{\sin\alpha\tan\alpha}. \label{13.6.6}\]
In other words, if the particle is projected initially horizontally (\( \dot{r}=0\)) at \( r=l\) and \( \dot{\phi}=\Omega\), it will describe a horizontal circle (for ever) if
\[ \Omega=(\frac{g}{l\sin\alpha\tan\alpha})^{\frac{1}{2}}=\Omega_{C}, \quad say. \label{13.6.7}\]
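With made-up numbers for \( g\), \( l\) and \( \alpha\), Equation \( \ref{13.6.7}\) can be checked against the circular-orbit condition of Equation \( \ref{13.6.6}\):

```python
import math

g = 9.8                      # m/s^2
l = 0.5                      # m, hypothetical initial distance from the vertex
alpha = math.radians(30.0)   # hypothetical semi-vertical angle

# Equation 13.6.7
Omega_c = math.sqrt(g / (l * math.sin(alpha) * math.tan(alpha)))

# Equation 13.6.6: r * phidot^2 = g / (sin(alpha) * tan(alpha)) at r = l
lhs = l * Omega_c ** 2
rhs = g / (math.sin(alpha) * math.tan(alpha))
print(abs(lhs - rhs) < 1e-9)   # True
```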
If the initial speed is less than this, the particle will describe an elliptical orbit with a minimum \( r < l\); if the initial speed is greater than this, the particle will describe an elliptical orbit with a maximum \( r > l\).
Now let’s do the same problem in a lagrangian formulation. This time we draw the same diagram, but we mark in the velocity components in blue. See Figure XIII.3. We are dealing with conservative forces, so we are going to use Equation 13.4.13, the most useful form of Lagrange’s equation.
We need not spend time wondering what to do next. The first and second things we always have to do are to find the kinetic energy \( T\) and the potential energy \( V\), in order that we can use Equation 13.4.13.
\[ T=\frac{1}{2}m(\dot{r}^{2}+r^{2}\sin^{2}\alpha\dot{\phi}^{2}) \label{13.6.8}\]
and
\[ V=mgr\cos\alpha+constant. \label{13.6.9}\]
Now go to Equation 13.4.13, with \( q_{i}=r\), and work out all the derivatives, and you should get, when you apply the lagrangian equation to the coordinate \( r\):
\[ \ddot{r}-r\sin^{2}\alpha\dot{\phi}^{2}=-g\cos\alpha. \label{13.6.10}\]
Now do the same thing with the coordinate \( \phi\). You see immediately that \( \frac{\partial T}{\partial\phi}\) and \( \frac{\partial V}{\partial\phi}\) are both zero. Therefore \( \frac{d}{dt}\frac{\partial T}{\partial\dot{\phi}}\) is zero and therefore \( \frac{\partial T}{\partial\dot{\phi}}\) is constant. That is, \( mr^{2}\sin^{2}\alpha\dot{\phi}\) is constant and so \( r^{2}\dot{\phi}\) is constant and equal to its initial value \( l^{2}\Omega\). Thus the second lagrangian equation is
\[ r^{2}\dot{\phi}=l^{2}\Omega. \label{13.6.11}\]
Since the lagrangian is independent of \( \phi\), \( \phi\) is called, in this connection, an “ignorable coordinate” – and the momentum associated with it, namely \( mr^{2}\dot{\phi}\) is constant.
Now it is true that we arrived at both of these equations also by the newtonian method, and you may not feel we have gained much. But this is a simple, introductory example, and we shall soon appreciate the power of the lagrangian method.
Having got these two equations, whether by newtonian or lagrangian methods, let’s explore them further. For example, let’s eliminate \( \dot{\phi}\) between them and hence get a single equation in \( r\):
\[ \ddot{r}-\frac{l^{4}\Omega^{2}\sin^{2}\alpha}{r^{3}}=-g\cos\alpha. \label{13.6.12}\]
We know enough by now (see Chapter 6) to write \( \ddot{r}\) as \( v\frac{dv}{dr}\), where \( v=\dot{r}\) and if we let the constants \( l^{4}\Omega^{2}\sin^{2}\alpha\) and \( g\cos\alpha\) equal \( A\) and \( B\) respectively, Equation \( \ref{13.6.12}\) becomes
\[ \nu\frac{d\nu}{dr}=\frac{A}{r^{3}}-B. \label{13.6.13}\]
(It may just be useful to note that the dimensions of \( A\) and \( B\) are \( \text{L}^{4}\text{T}^{-2}\) and \( \text{L}\text{T}^{-2}\) respectively. This will enable us to keep track of dimensional analysis as we go.)
If we start the soap moving horizontally (\(v=0\)) when \( r=l\), this integrates, with these initial conditions, to
\[ \nu^{2}=A(\frac{1}{l^{2}}-\frac{1}{r^{2}})+2B(l-r). \label{13.6.14}\]
Again, so that we can see what we are doing, let \( \frac{A}{l^{2}}+2Bl=C\) (note that \( [C]=\text{L}^{2}\text{T}^{-2}\)), and Equation \( \ref{13.6.14}\) becomes
\[ \nu^{2}=C-\frac{A}{r^{2}}-2Br. \label{13.6.15}\]
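As a consistency check (with hypothetical numbers), Equation \( \ref{13.6.15}\) should give \( \nu=0\) at the starting radius \( r=l\), since \( C\) was defined from exactly that initial condition:

```python
import math

g, l, alpha, Omega = 9.8, 0.5, math.radians(30.0), 4.0   # hypothetical values

A = l ** 4 * Omega ** 2 * math.sin(alpha) ** 2   # A = l^4 Omega^2 sin^2(alpha)
B = g * math.cos(alpha)                          # B = g cos(alpha)
C = A / l ** 2 + 2.0 * B * l                     # C = A/l^2 + 2Bl

v_squared_at_l = C - A / l ** 2 - 2.0 * B * l    # Equation 13.6.15 at r = l
print(abs(v_squared_at_l) < 1e-12)               # True: the soap starts with rdot = 0
```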
This gives \( \nu(=\dot{r})\) as a function of \( r\). The particle reaches its maximum or minimum height when \( \nu=0\); that is, where
\[ 2Br^{3}-Cr^{2}+A=0. \label{13.6.16}\]
One solution of this is obviously \( r=l\). Of the other two solutions, one is positive (which we want) and the other is negative (which we don’t want).
If we go back to the original meanings of \( A\), \( B\) and \( C\), and write \( x=\frac{r}{l}\), Equation \( \ref{13.6.16}\) becomes, after a little tidying up,
\[ x^{3}-(\frac{l\Omega^{2}\sin\alpha\tan\alpha}{2g}+1)x^{2}+\frac{l\Omega^{2}\sin\alpha\tan\alpha}{2g}=0. \label{13.6.17}\]
Recall from Equation \( \ref{13.6.7}\) that \( \Omega_{c}=(\frac{g}{l\sin\alpha\tan\alpha})^{\frac{1}{2}}\), and the equation becomes
\[ x^{3}-(\frac{\Omega^{2}}{2\Omega_{c}^{2}}+1)x^{2}+\frac{\Omega^{2}}{2\Omega_{c}^{2}}=0, \label{13.6.18}\]
or, with \( a=\frac{\Omega^{2}}{2\Omega_{c}^{2}}\),
\[ x^{3}-(a+1)x^{2}+a=0. \label{13.6.19}\]
This factorizes to
\[ (x-1)(x^{2}-ax-a)=0. \label{13.6.20}\]
The solution we are interested in is
\[ x=\frac{1}{2}(a+\sqrt{a(a+4)}). \label{13.6.21}\] |
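Both roots can be verified numerically for a few hypothetical values of \( a\):

```python
import math

def cubic(x, a):
    """Left-hand side of Equation 13.6.19: x^3 - (a + 1) x^2 + a."""
    return x ** 3 - (a + 1.0) * x ** 2 + a

for a in (0.1, 1.0, 3.7):                        # a = Omega^2 / (2 Omega_c^2)
    x = 0.5 * (a + math.sqrt(a * (a + 4.0)))     # Equation 13.6.21
    assert abs(cubic(x, a)) < 1e-9               # the positive root we want
    assert abs(cubic(1.0, a)) < 1e-12            # x = 1, i.e. r = l, is also a root
print("ok")
```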
Since $I_1+I_2=R$, there exists $a \in I_1$ and $b \in I_2$ such that\[a+b=1.\]Then we have\begin{align*}1&=1^{m+n-1}=(a+b)^{m+n-1}\\[6pt]&=\sum_{k=0}^{m+n-1}\begin{pmatrix}m+n-1 \\k\end{pmatrix}a^k b^{m+n-1-k}\\[6pt]&=\sum_{k=0}^{m-1}\begin{pmatrix}m+n-1 \\k\end{pmatrix}a^k b^{m+n-1-k}+\sum_{k=m}^{m+n-1}\begin{pmatrix}m+n-1 \\k\end{pmatrix}a^k b^{m+n-1-k}.\end{align*}In the third equality, we used the binomial expansion.
Note that every term of the first sum has $m+n-1-k \geq n$, so the first sum is in $I_2^n$ since each term is divisible by $b^n\in I_2^n$. Every term of the second sum has $k \geq m$, so the second sum is in $I_1^m$ since each term is divisible by $a^m\in I_1^m$.
Thus the sum is in $I_1^m+I_2^n$, and hence we have $1 \in I_1^m+I_2^n$, which implies that $I_1^m+I_2^n=R$.
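A concrete instance in $R=\mathbb{Z}$ (illustrative): $I_1=2\mathbb{Z}$ and $I_2=3\mathbb{Z}$ are comaximal because $(-2)+3=1$, and the result says $I_1^m+I_2^n=\mathbb{Z}$, i.e. $\gcd(2^m,3^n)=1$ for all $m,n$.

```python
from math import gcd

# I1 = 2Z and I2 = 3Z satisfy I1 + I2 = Z since (-2) + 3 = 1.
# Comaximality of the powers means gcd(2**m, 3**n) == 1 for all m, n.
for m in range(1, 6):
    for n in range(1, 6):
        assert gcd(2 ** m, 3 ** n) == 1
print("ok")
```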
A few times I have been told to not to turn the titles to Latex only as that stops them from being indexed by search engines.
I see two problems with that :
1. What seems like natural human behaviour has to be modified in order to accommodate machines. The LaTeX of an integral is far more descriptive; e.g. compare $\int \ln \sin x \, dx$ with "integral involving trigonometric function and logarithm function".
2. It does not address the fundamental problem to be fixed, which is making LaTeX available as a search means. There seems to be no equivalent of an indexed/ordered dictionary for mathematics, for example to search for material on $\int \ln \sin x \, dx$, $\int \sin \ln x \, dx$, $\frac{d}{dx} \sin \ln x$ etc.
If not having pure LaTeX in titles is important, then there should be a built-in feature to prevent it (similar to the limit of 200 characters in a title), or progressive means should be provided, rather than prohibiting it through implicit understanding and convention. The title within the page does not need to be what is indexed by the search engines; there are other ways of providing meta tags etc.
Structure of Atom: Quantum Mechanical Model of Atom and Concept of Atomic Orbitals

Calculation of no. of waves in an orbit:
$$\text{no. of waves} = \frac{\text{circumference}}{\text{wavelength}} = \frac{3.33\left(\frac{n^{2}}{Z}\right)\,\text{Å}}{3.33\left(\frac{n}{Z}\right)\,\text{Å}} = n$$
where n = orbit number.

Total no. of revolutions per second:
$$= \frac{\text{velocity}}{\text{circumference}} = \frac{2.18 \times 10^{6} \times Z \times Z}{3.3 \times 10^{-10} \times n \times n^{2}} = 6.66 \times 10^{15}\left(\frac{Z^{2}}{n^{3}}\right)$$

Schrodinger's wave equation:
$$\frac{\partial^{2}\psi}{\partial x^{2}} + \frac{\partial^{2}\psi}{\partial y^{2}} + \frac{\partial^{2}\psi}{\partial z^{2}} + \frac{8\pi^{2}m}{h^{2}}\left(E - V\right)\psi = 0$$
where m = mass of the electron, h = Planck's constant, V = potential energy, E = total energy.

$$\frac{\partial^{2}}{\partial x^{2}} + \frac{\partial^{2}}{\partial y^{2}} + \frac{\partial^{2}}{\partial z^{2}} = \nabla^{2} \quad (\text{Laplacian operator})$$
$$\nabla^{2}\psi = \frac{-8\pi^{2}m}{h^{2}}\left(E - V\right)\psi$$

No. of radial nodes: n − l − 1 (for s: n − 1; for p: n − 2; for d: n − 3; for f: n − 4)
No. of angular nodes: l
Total nodes: n − 1
1. (n - l - 1) = radial nodes
2. (l) = angular nodes
3. (n - 1) = total nodes
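These formulas are simple enough to check mechanically; a tiny Python helper (illustrative only, the function name and the 3p example are mine):

```python
# Node counts for a hydrogen-like orbital with quantum numbers n and l,
# following the formulas above: radial = n - l - 1, angular = l, total = n - 1.
def nodes(n, l):
    return {"radial": n - l - 1, "angular": l, "total": n - 1}

print(nodes(3, 1))  # 3p orbital -> {'radial': 1, 'angular': 1, 'total': 2}
```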
My question is about quantum algorithms for QED (quantum electrodynamics) computations related to the fine structure constant. Such computations (as explained to me) amount to computing Taylor-like series $$\sum c_k\alpha^k,$$ where $\alpha$ is the fine structure constant (around 1/137) and $c_k$ is the contribution of Feynman diagrams with $k$ loops.
This question was motivated by Peter Shor's comment (about QED and the fine structure constant) in a discussion regarding quantum computers on my blog. For some background, here is a relevant Wikipedia article.
It is known that
a) The first few terms of this computation give very accurate estimates for relations between experimental outcomes, which are in excellent agreement with experiments.
b) The computations are very heavy, and computing more terms is beyond our computational powers.
c) At some point the computation will explode; in other words, the radius of convergence of this power series is zero.
My question is very simple: can these computations be carried out efficiently on a quantum computer?
Question 1
1) Can we actually efficiently compute (or well-approximate) the coefficients $c_k$ with a quantum computer?
2) (Weaker) Is it at least feasible to compute the estimates given by the QED computation in the regime before these coefficients explode?
3) (Even weaker) Is it at least feasible to compute the estimates given by these QED computations as long as they are relevant? (Namely, for those terms in the series that give a good approximation to the physics.)
A similar question applies to QCD computations for computing properties of the proton or neutron. (Aram Harrow made a related comment on my blog on QCD computations, and the comments by Alexander Vlasov are also relevant.) I would be happy to learn the situation for QCD computations as well.
Following Peter Shor's comment:
Question 2 Can quantum computation give the answer more accurately than is possible classically because the coefficients explode?
In other words
Will quantum computers allow us to model the situation and to give, efficiently, approximate answers for the actual physical quantities?
Another way to ask it:
Can we compute, using quantum computers, more and more digits of the fine structure constant, just like we can compute more and more digits of $e$ and $\pi$ with a digital computer?
(Ohh, I wish I was a believer :) )
More background
The hope that computations in quantum field theory can be carried out efficiently with quantum computers was (perhaps) one of Feynman’s motivations for QC. Important progress towards quantum algorithms for computations in quantum field theories was achieved in this paper: Stephen Jordan, Keith Lee, and John Preskill, Quantum Algorithms for Quantum Field Theories. I don't know if the work by Jordan, Lee, and Preskill (or some subsequent work) implies an affirmative answer to my question (at least in its weaker forms).
A related question on the physics side
I am also curious whether there are estimates for how many terms in the expansion we can take before we witness the explosion. (To put it on more formal ground: are there estimates for the minimum $k$ for which $\alpha c_{k+1}/c_k > 1/5$ (say)?) And what quality of approximation can we expect when we use these terms? In other words, how much better results can we expect from these QED computations with unlimited computational power?
Here are two related questions on the physics sister site: QED and QCD with unlimited computational power - how precise are they going to be?; The fine structure constant - can it genuinely be a random variable?
In one dimension, if I have a Riemann-integrable derivative $f'$ of a function $f$ which I don't know, I can (almost) recover $f$ from integrating $f'$.
A simple example would be $f'(x)=2x$, then by the Fundamental Theorem of Calculus, I get that $f(x)=x^2 + const,$ where the constant does not depend on $x$. I said 'almost recover' because to determine the constant, we need a point on the graph of $f$.
My question is now: Is it possible to recover a scalar field $f:\mathbb{R}^n \rightarrow \mathbb{R}$ when I only know the gradient $\left( \partial_1 f(x), \ldots, \partial_n f(x) \right)$.
My thoughts: I could apply the fundamental theorem of calculus in the first component by integrating over $x_1$, but then, I would get \begin{align} \int \partial_1 f(x)d x_1=f(x)+C, \end{align} where $C$ is constant only in $x_1$, but may well vary with $x_2,\ldots,x_n$. I write $C=C_{-1}$ to indicate this. I could now proceed and do the same calculation for all the partial derivatives, i.e. \begin{align} \int \partial_i f(x)d x_i=f(x)+C_{-i}. \end{align} I cannot just add them together and divide by $n$ to recover $f$ plus a constant (in all variables). In fact, adding any two distinct $C_{-i}$ and $C_{-j}$ together would give me a function depending on all $x_i$'s.
I thought about considering $f(x)+C_{-1}=f(x)+C_{-2}$, where $f$ would cancel. Because the left-hand side does not depend on $x_1$ and the right-hand side does not depend on $x_2$, neither side depends on either $x_1$ or $x_2$. It appears to me that this shows that the $C_{-i}$ are all constants in all the $x_i$'s, but that cannot be correct, as the following example shows: take $f(x_1,x_2)=x_1+x_2$. Then \begin{align}\left( \partial_1 f,\partial_2 f \right)=(1,1).\end{align} If $C_{-1}$ were indeed a constant, I would have \begin{align}f(x)=\int \partial_1 f(x)dx_1=x_1+c,\end{align} which is wrong. I've looked around and found the Gradient Theorem, which appears to be the right statement, but I don't see how to use it to find $f$, probably because I don't know much about line integrals.
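To make the Gradient Theorem concrete: integrating the gradient along any path from a base point recovers $f$ up to the constant $f(0)$. A minimal numerical sketch in Python (the example field is mine, chosen so the answer is known):

```python
# Recover f (up to f(0)) from its gradient via the gradient theorem:
#   f(x) - f(0) = integral_0^1  grad_f(t*x) . x  dt
# along the straight path from the origin to x.
def recover(grad, x, steps=20000):
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) / steps                      # midpoint quadrature
        g = grad(*(t * xi for xi in x))
        total += sum(gi * xi for gi, xi in zip(g, x)) / steps
    return total

# Example: grad f = (2*x1*x2, x1**2) comes from f(x1, x2) = x1**2 * x2.
grad = lambda x1, x2: (2 * x1 * x2, x1 ** 2)
print(recover(grad, (1.0, 2.0)))  # close to 2.0 = f(1, 2) - f(0, 0)
```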
$$\large \int_0^\infty\frac{\ln\frac{1+x^{11}}{1+x^3}}{(1+x^2)\ln x}\, \mathrm dx = \, ?$$
The integral above has a closed form. Evaluate this integral and give your answer to three decimal places.
This is a very common fallacy, yes!
So when we say that "fluid flows faster as the pipe gets narrower" we mean within the same pipe. We do not mean across all circumstances. The cause of the increased fluid flow is that water is a highly incompressible fluid. Because of this, any mass that flows into a box must also flow out of that box, and any volume that flows into a box must also flow out of that box: otherwise we'd be compressing the fluid within that box.
It's worth taking a second to understand that point better, so let me recommend this experiment: in lots of places you can find thin plastic drink bottles in a 1.5L or 2L size, and generally they have screw-on caps which form a nice airtight seal. Take an empty one (containing only air) and put the top on it, and then squeeze it. Try to estimate how much you're actually compressing this volume (10%? 20%?) at different amounts of pressure from your fingertips. The point is that there is a curve: the more pressure you apply, the more you compress this thing. Now if you fill it with water, you will notice that whatever this curve is, it is very steep: it takes immense amounts of pressure to compress water even the barest of degrees. So this is not an absolute rule and in fact it will be violated at the sub-microscopic scale in non-steady-states all the time, but the point is that it works well to first approximation because any significant volume change would require much more pressure than you're typically dealing with.
Now when you compare two different pipes, say with the same pressure across them, usually the fluid flows slower as the pipe gets narrower. One can get a rough idea of what this should be with a scaling argument and dimensional analysis: it should go proportional to pressure and inversely with viscosity; those together form $[[p/\mu]] = \text s^{-1}$ and one immediately recognizes that whatever we multiply this by must be a length having units of $\text m$ so that we get $\text m\cdot\text s^{-1}$ and we have a velocity. There are two length scales $D, L$ at play though: the diameter of the flow and the length. But we expect that this sort of friction would cause twice the pressure drop over twice the $L$ at constant $v$, introducing a unit of $\text m^{-1}.$ So the full equation would be $v \propto p D^2/(\mu L).$ So at constant pressure with twice the diameter we can expect four times faster flow.

How this argument fails
This argument is not fully rigorous, and I am pulling a fast one on you by requiring $p\propto v$, which only applies in the laminar-flow regime. A fully rigorous dimensional-analysis argument would focus on the pressure loss $p$; a fully correct expression would be something more like $$p = \frac{\mu v L}{D^2}~f\left({\rho~v~D\over\mu},~{L\over D},~\frac\epsilon D\right),$$ where an arbitrary unknown function $f$ is taken of all of the dimensionless coefficients of the flow, and $\epsilon$ is the typical length scale of surface imperfections on the sides. The argument that $L$ should not enter into this function $f$ works for both turbulent and laminar flow.
Now this claim that $f$ is just a constant is true only for small velocities where we expect $p \propto v$: but at larger velocities this ceases to be true and instead, in some laminar-to-turbulent flow regime, the flow velocity begins to depend very strongly on the exact pressure between these two, almost discontinuously jumping higher as one puts more pressure and more vortices can form in the fluid. Finally when turbulence has fully taken hold, the pressure instead hits a limit where $p\propto v^2.$ The above expression still holds but it would be more appropriate to write it as $(\rho~v^2~L/D)~\alpha(\epsilon/D)$ for some undetermined function $\alpha$. The sudden lack of importance of the viscosity is a huge clue about what's going on in this expression; momentum is now being lost by fluid molecules smacking into these surface imperfections $\epsilon$ and getting scattered in a random direction in this boundary layer of the flow.
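For concreteness, the laminar-regime scaling $v \propto p D^2/(\mu L)$ above is realized by the Hagen–Poiseuille formula for the mean velocity, $v = \Delta p\, D^2/(32 \mu L)$. A minimal Python sketch (the numbers are arbitrary, water-like values):

```python
# Hagen-Poiseuille mean velocity for laminar pipe flow, one concrete
# realization of the scaling v ~ p * D^2 / (mu * L) discussed above:
#   v = dp * D**2 / (32 * mu * L)
def mean_velocity(dp, D, mu, L):
    return dp * D ** 2 / (32.0 * mu * L)

v1 = mean_velocity(dp=100.0, D=0.01, mu=1e-3, L=1.0)   # water-like viscosity
v2 = mean_velocity(dp=100.0, D=0.02, mu=1e-3, L=1.0)   # doubled diameter
print(v2 / v1)  # ~4: four times faster flow at the same pressure
```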
This question already has an answer here:
Dummies instead of the Chow test 1 answer
I am sitting on a pile of data concerning wages at a local company and other information, such as the gender, whether the person in question belongs to a minority group etc. What I would like to investigate is whether an additional year of education gives the same relative increase in wages for lower and higher education. For this purpose, I have divided the original data into two subcategories; one group with at most 12 years of education and the other with 13 or more years of education. What I would like to perform is a Chow test of structural change of the form
\begin{equation} F=\frac{\frac{RSS-(RSS_{\text{lower}}+RSS_{\text{higher}})}{2}}{\frac{RSS_{\text{lower}}+RSS_{\text{higher}}}{n-2k}}=\frac{(n-2k)\left(RSS-(RSS_{\text{lower}}+RSS_{\text{higher}})\right)}{2(RSS_{\text{lower}}+RSS_{\text{higher}})} \sim F(k,n-2k) \end{equation} where $n$ is the total number of observations and $k$ is the number of explanatory variables.
Clearly, I could simply calculate the $RSS$ terms directly and then construct $F$ explicitly:
total <- lm(SALARY ~ EDUC + GENDER + MINORITY)
lower <- lm(SALARY1 ~ EDUC1 + GENDER1 + MINORITY1)
higher <- lm(SALARY2 ~ EDUC2 + GENDER2 + MINORITY2)
RSStot <- sum(residuals(total)^2)
RSSlow <- sum(residuals(lower)^2)
RSShig <- sum(residuals(higher)^2)
((576 - 6) * (RSStot - (RSSlow + RSShig))) / (2 * (RSSlow + RSShig))
Nevertheless, I am sure there must be a way to do this directly in R. How exactly would that be? I would truly appreciate any enlightenment from any kindhearted spirit. Cheers to all!
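For reference, the Chow F-statistic itself is easy to assemble in any language once the three residual sums of squares are known. Below is a small Python sketch (the function name and the toy numbers are mine; note it uses $k$ numerator degrees of freedom, which is what the $F(k, n-2k)$ reference distribution implies):

```python
# Chow F-statistic assembled from the pooled and per-group residual sums of
# squares. k counts the estimated coefficients per regression; the statistic
# is compared against an F(k, n - 2k) distribution.
def chow_F(rss_pooled, rss1, rss2, n, k):
    num = (rss_pooled - (rss1 + rss2)) / k
    den = (rss1 + rss2) / (n - 2 * k)
    return num / den

print(chow_F(rss_pooled=100.0, rss1=40.0, rss2=40.0, n=100, k=2))  # ~12
```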
Calculating the Heat Transfer Coefficient for Flat and Corrugated Plates
In many engineering applications involving conjugate heat transfer, such as designing heat exchangers and heat sinks, it’s important to calculate the heat transfer coefficient. Often determined with the aid of correlations and empirical relations, the heat transfer coefficient provides information about heat transfer between solids and fluids. In this blog post, we discuss and demonstrate how the COMSOL Multiphysics® software can be used to evaluate the heat transfer coefficient for plate geometries.
What Is the Heat Transfer Coefficient?
Let us consider a heated wall or surface over which a fluid is flowing. Heat transfer in the fluid is predominantly governed by convection. Similarly, convection is the primary mode of heat transport in the case of two fluids (through a solid surface), such as with heat exchangers. The rate at which heat transfer occurs in both cases is governed by a temperature difference and a proportionality coefficient called the heat transfer coefficient, which is indicative of the effectiveness of heat transport between the surface and the fluid.
Mathematically, h is the ratio of the heat flux at the wall to the temperature difference between the wall and the fluid; i.e.,
(1)

h = \frac{q^{\prime \prime}}{T_w - T_\infty}
where q^{\prime \prime} is the heat flux, T_w is the wall temperature, and T_\infty is the characteristic fluid temperature.
The characteristic fluid temperature can also be the external temperature far from the wall or the bulk temperature in tubes.
When the object is surrounded by an infinitely large volume of air, we assume that the air temperature far away from the object is a constant, known value. The heat transfer coefficient evaluated in this case is referred to as the external heat transfer coefficient.
With the above assumption, if we look closely at the wall (if the thickness of the wall is defined across the y direction, and y = 0 represents the surface/plane of the wall), it’s clear that the No Slip condition at the wall results in the formation of a stagnant, thin film of fluid. Therefore, heat transfer through the fluid immediately adjacent to the wall happens purely due to conduction.
This can be written mathematically (Ref. 1) as:
(2)

q^{\prime \prime} = -k \left. \frac{\partial T}{\partial y} \right|_{y=0}
Here, k is the thermal conductivity of the fluid, with the T derivative being evaluated in the fluid. Combining the two expressions gives:
(3)

h = \frac{-k \left. \frac{\partial T}{\partial y} \right|_{y=0}}{T_w - T_\infty}
Calculating the Heat Transfer Coefficient in COMSOL Multiphysics®
Practically speaking, it is difficult to measure the temperature gradient at the wall. Additionally, it becomes essential to analyze a smart and computationally inexpensive approach for understanding the heat transfer at the wall. Therefore, nonanalytical ways of calculating the heat transfer coefficient are usually preferred.
One common approach is using convective correlations defined by the dimensionless Nusselt number. These correlations are available for various cases, including natural and forced convection as well as internal and external flows, and give fast results. However, this approach can only be used for regular geometric shapes, such as horizontal and vertical walls, cylinders, and spheres.
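As a rough illustration of the correlation-based route (outside of any particular software), the classic laminar flat-plate correlation Nu_L = 0.664 Re_L^{1/2} Pr^{1/3} can be evaluated in a few lines of Python. The air properties below are assumed round-number values near 283 K, not data taken from this post:

```python
# Average heat transfer coefficient over a flat plate from the classic
# laminar forced-convection correlation Nu_L = 0.664 * Re_L**0.5 * Pr**(1/3).
# nu, k, Pr are assumed air properties near 283 K (illustrative values only).
def h_flat_plate(u, L, nu=1.4e-5, k=0.025, Pr=0.71):
    Re = u * L / nu                          # Reynolds number on plate length
    Nu = 0.664 * Re ** 0.5 * Pr ** (1.0 / 3.0)
    return Nu * k / L                        # average h in W/(m^2 K)

print(h_flat_plate(u=0.5, L=5.0))  # on the order of 1 W/(m^2 K)
```

With the blog's plate (L = 5 m, u = 0.5 m/s) this lands near 1 W/(m^2 K), the same order of magnitude as the simulated curves discussed below; the correlation is only valid while the flow stays laminar (Re below roughly 5e5).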
When complex shapes are involved, the heat transfer coefficient can instead be calculated by simulating the conjugate heat transfer phenomenon.
Let’s now discuss two different cases and approaches:
1. Calculating the heat transfer coefficient in regular geometries (like a horizontal plate) using:
   - Conjugate heat transfer analysis
   - Convective correlations; i.e., without considering flow
2. Calculating the heat transfer coefficient in irregular/complex geometries (like a corrugated plate)
Note that the flow regime is an important consideration because the heat transfer coefficient is dependent on the velocity. In both cases, a pragmatic condition, such as a fast flow in a blower system or an electronic chip cooling device, needs to be considered. This indicates that it is necessary to model the cases as a turbulent flow coupled with heat transport.
Example 1: Forced Convection and Flow Past a Horizontal Plate
Let’s consider the situation of modeling the flow past a horizontal flat plate with a length of 5 m, which is subjected to a constant and homogeneous heat flux of 10 W/m^2. The plate is placed in an airflow with an average velocity of 0.5 m/s and temperature of 283 K. The figure below shows the schematic of the problem definition, including the velocity and temperature profiles for a laminar flow inside the momentum (say, \delta) and thermal boundary layer (\delta_T), respectively.
Schematics of laminar flow (top) and turbulent flow (bottom) past a horizontal plate.

Conjugate Heat Transfer Analysis
The numerical solution is obtained in COMSOL Multiphysics by using the Conjugate Heat Transfer interface, which couples the fluid flow and heat transfer phenomena. The velocity field and pressure are computed in the air domain, while the temperature is computed in the plate and in the air domain.
The temperature distribution inside the plate and fluid is shown in the figure below. The thermal and momentum boundary layers formed inside the fluid domain can be seen in the region that goes from the wall to 2 cm above the plate.
Temperature distribution (surface plot), isotherm at 11°C (red line), and velocity field (arrows) illustrating the thermal and momentum boundary layers next to the plate surface (anisotropic axis scale).
From the simulation results, it is possible to evaluate the heat flux using the corresponding predefined postprocessing variable. Dividing it by the temperature difference (T_w-T_\infty) gives the heat transfer coefficient (Eq. 3). The heat transfer coefficient along the plate obtained using the conjugate heat transfer analysis is plotted on a graph in a following section.
Heat Transfer Coefficient Based on Nusselt’s Number Correlations
The Nusselt number correlation for forced convection past a flat plate is available in literature (Ref. 1, for example).
In this second approach, the same model is solved without solving for the flow; that is, using heat transfer correlations. The computational domain is limited to the solid (plate). The heat loss from the hot plate to the cold fluid is defined using a Heat Flux boundary condition. This boundary condition contains an option to define the heat transfer coefficient using predefined Nusselt number correlations, as shown below. Note that this correlation is predefined in COMSOL Multiphysics.

Settings for the Heat Flux boundary condition.
Using this approach, only the temperature distribution in the plate is computed. From the heat transfer coefficient defined in the Heat Flux boundary condition, it is possible to evaluate the heat flux at the plate surface, q=h\cdot(T_\infty-T).

Evaluating the Heat Transfer Coefficient
For both approaches described above, it is possible to evaluate the heat transfer coefficient along the plate. The figure below compares the heat flux estimated using the two approaches.
Comparison of the heat transfer coefficient along the flat plate estimated using a conjugate heat transfer simulation (blue line) and a Nusselt correlation (green line).
We can see that the value obtained from the Nusselt number correlation is in close agreement with the value obtained from the full conjugate heat transfer simulation.
A quantity of interest is the heat rates over the plate that are obtained in the two cases:
- Nusselt number correlation: 50 W/m
- Conjugate heat transfer: 49.884 W/m
For certain calculations, the approach based on Nusselt number correlations is able to predict the heat flux with good enough accuracy. Next, we examine a case with an uncommon shape, where the Nusselt number correlations are not easily available, and the only possible approach is to run a conjugate heat transfer simulation.
Example 2: Flow Past a Corrugated Horizontal Plate
Let’s consider a similar configuration as in the first case, except that the plate has a corrugated top surface. The figure below shows a schematic of the problem definition. In this model, the corrugations of the top plate are considered in one section of the geometry. The rest of the plate is flat.
Schematic of the flow past a horizontal plate.
Here, the flow field close to the wall has recirculation zones that enhance the heat transfer rate. In the image below, we can see the temperature distribution and velocity streamlines.
Temperature distribution in degrees Celsius (surface) and the velocity field (streamlines).
The left plot below shows the heat transfer coefficient along the length of the corrugated plate. With a geometry such as that of the wavy plate, the heat transfer coefficient is dependent on the temperature fields; velocity fields; and geometric parameters of the corrugations, such as the height. Hence, we can observe the enhanced heat transfer coefficient as compared to the flat plate (right image below).
Heat transfer coefficient along the corrugated plate (left) and along the flat plate (right).
While considering complex geometries containing corrugated surfaces, the conjugate heat transfer approach may be computationally expensive, and alternative approaches are desirable. A good approximation would be to reduce the geometric complexity by representing the surfaces as noncorrugated and extrapolating the heat transfer coefficient obtained from the corrugated-plate geometry, taking into account geometric parameters such as the corrugation height, the flow velocity field, and the temperature variation on the surface. It is worth noting that even if the surface is not truly isothermal or does not carry a constant heat flux, the heat transfer coefficient remains useful within a given range for some geometries, as long as the configuration stays close to the one it was derived for.
To check, we can consider a simple case wherein the heat transfer coefficients are calculated across velocity fields in the corrugated plate geometry. The data can be used to obtain an average heat transfer coefficient and can be extrapolated to the flat plate geometry model. The total heat loss from the surface, or the heat transfer coefficient obtained from flow simulations, can be investigated to understand the validity of the approximations.
Concluding Thoughts
In this blog post, we discussed how to calculate the heat transfer coefficient using two methods. With the conjugate heat transfer solution, you can use the built-in heat flux variables available in COMSOL Multiphysics. Using the Heat Flux boundary condition with Nusselt number correlations, you can simulate problems involving simple shapes. We also discussed how to reduce geometric complexities to obtain the heat transfer coefficient for complex geometries.

Next Steps
Learn more about the specialized features for modeling heat transfer in the COMSOL® software by clicking the button below.
Try the approaches discussed here in the following tutorials:
- Natural Convection Cooling of a Vacuum Flask
- Nonisothermal Turbulent Flow Over a Flat Plate
- Nonisothermal Laminar Flow in a Circular Tube

Reference
A. Bejan et al., Heat Transfer Handbook, John Wiley & Sons, 2003.
Now showing items 1-2 of 2
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt {s_{NN}}$ = 5.02 TeV with ALICE at the LHC
(Elsevier, 2017-11)
ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} =5.02$ TeV. The increased ...
ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV
(Elsevier, 2017-11)
ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{\rm T}$). At central rapidity ($|y|<0.8$), ALICE can reconstruct J/$\psi$ via their decay into two electrons down to zero ...
Now showing items 1-10 of 27
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
Now showing items 1-1 of 1
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Possible Duplicate: Series converges implies $\lim{n a_n} = 0$
Can someone help me? If $(a_n)$ is a decreasing sequence and $\sum a_n$ converges, then $\lim\, (n\, a_n) = 0$.
I don't have idea how to solve this.
By the Cauchy condensation test
$$\sum 2^m a_{2^m} < \infty$$
thus
$$\lim_m 2^m a_{2^m} =0$$
Now, for each $n$ choose some $m$ so that $2^m \leq n < 2^{m+1}$ and use
$$a_{2^m} \geq a_n \geq a_{2^{m+1}},$$ so that $n\, a_n \leq 2^{m+1} a_{2^m} = 2 \cdot 2^m a_{2^m} \to 0$.
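A quick numerical illustration of the claim (my own toy example, $a_n = 1/n^2$, not part of the original question):

```python
# For the decreasing sequence a_n = 1/n**2, whose series converges,
# the products n * a_n = 1/n indeed tend to 0.
a = lambda n: 1.0 / n ** 2
vals = [n * a(n) for n in (10, 100, 1000, 10000)]
print(vals)  # values shrink toward 0
```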
As the title mentions, I don't know Fourier series very well, and while reading a digital communication textbook I wondered about the following derivation involving the Fourier series of $$\alpha(f)=\frac1T\sum^{\infty}_{m=-\infty}\delta\left(f-\frac mT\right),$$ which is periodic with period $\frac1T$, where $\delta$ is the Dirac delta function. From Fourier series, we have $$\alpha(f)=\frac1{1/T}\sum^\infty_{n=-\infty}c_n e^{-i2\pi nfT}=\sum^\infty_{n=-\infty}e^{-i2\pi nfT},$$ where $$c_n=\int^{\frac1{2T}}_{-\frac1{2T}}\alpha(f)e^{i2\pi nfT}\,df=\int^{\frac1{2T}}_{-\frac1{2T}}\frac1T\delta(f)e^{i2\pi nfT}\,df=\frac1T.$$ I read about Fourier series on Wikipedia, but I don't see how this equation follows so simply. Please help me understand the detailed procedure. Thank you.
What you need from the theory of Fourier series is the following statement:
Let $f$ be a function that is periodic with period $1/T$, i.e. $f(t)=f(t+1/T)$; then this function can be expanded in the following form $$ f(t)=\sum_{n=-\infty}^{\infty} c_n \mathrm{e}^{2 \pi \mathrm{i}nTt}, $$ where $$ c_n = T\int_{-1/(2T)}^{1/(2T)} f(t) \mathrm{e}^{-2 \pi \mathrm{i}nTt}\, dt. $$
I will not go into detail on the convergence results, i.e. under which circumstances the series converges to the function $f$. However, it is instructive to note that this tells us that it is possible to expand a periodic function into building blocks of the form $\mathrm{e}^{2 \pi \mathrm{i}nTt}$.
The function that is of interest to you, i.e. $\frac{1}{T} \sum_{m=-\infty}^\infty \delta(f-m/T)$ indeed has the period $1/T$, which is easy to see. Therefore you can use the statement that I quoted above to compute the coefficients of the Fourier series to write the function as a superposition of complex exponential functions. This is what happens in the text you quoted.
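A quick numerical way (my own sketch, taking $T=1$) to see the delta comb emerge: the symmetric partial sums of the series grow without bound exactly at the comb locations $f=m/T$ and stay bounded in between.

```python
import cmath

# Partial sums S_N(f) = sum_{n=-N}^{N} exp(-2*pi*i*n*f*T) of the series above.
# At a comb location f = m/T every term equals 1, so |S_N| = 2N+1, which
# diverges as N grows; between the teeth the terms cancel.
def S(N, f, T=1.0):
    return sum(cmath.exp(-2j * cmath.pi * n * f * T) for n in range(-N, N + 1))

N = 50
print(abs(S(N, 0.0)))   # at a comb tooth: exactly 2N+1 = 101
print(abs(S(N, 3.0)))   # f = 3/T is also a tooth: ~101
print(abs(S(N, 0.5)))   # halfway between teeth: stays ~1
```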
Should you require more information on Fourier series, I would suggest you read the section on the subject in Stéphane Mallat's book
A wavelet tour of signal processing, page 50. It contains an excellent and compact explanation of Fourier series and shows the connections to the standard Fourier transform. |
The action of $GL_6$ on $P(\wedge^3 \mathbb{C}^6)=P^{19}$ has 4 orbits (of dim 19, 18, 14, 9). Can you describe how the Springer resolution applies to each of these orbits? It should have positive-dimensional fibers over the 14- and 9-dimensional orbits (probably some flag variety?).
A representative of the quartic orbit is $e_{156} + e_{246} + e_{345}$ (I write $e_{ijk}$ for $e_i\wedge e_j\wedge e_k$). A resolution of this orbit is given by the projective closure of the tangent bundle to the Grassmannian. Indeed, if $V = \mathbb{C}^6$ and $U$ is the tautological bundle on $X = Gr(3,V)$ then $$ T_X \cong U^*\otimes V/U \cong \Lambda^2U\otimes V/U\otimes O(1), $$ so $T_X(-1) \cong \Lambda^2U \otimes V/U$. Note that the tautological filtration $U\subset V$ induces a filtration of $\Lambda^3 V$ with factors $\Lambda^3U$, $\Lambda^2U\otimes V/U$, $U\otimes\Lambda^2(V/U)$, and $\Lambda^3(V/U)$. Consequently, there is a canonical exact sequence $$ 0 \to T^+ \to \Lambda^3 V\otimes O_X \to T^- \to 0, $$ where $T^+$ is the extension of $\Lambda^2U\otimes V/U$ by $\Lambda^3U$ and $T^-$ is the extension of $\Lambda^3(V/U)$ by $U\otimes\Lambda^2(V/U)$. In particular, $T^+$ fits into exact sequence $$ 0 \to O_X(-1) \to T^+ \to T_X(-1) \to 0. $$ The embedding $T^+\to \Lambda^3 V\otimes O_X$ induces a map $f:P_X(T^+) \to P(\Lambda^3 V)$. Its image is the invariant quartic hypersurface.
EDIT: Concerning the fibers of the map $f$. It is easy to show that the fiber of $f$ over a point $\lambda \in P(\Lambda^3V)$ is the subvariety of $Gr(3,V)$ consisting of those $U \subset V$ such that $$ \lambda \wedge \Lambda^2U = 0. $$ Using this it is easy to describe the fibers over representatives of the orbits: $$ f^{-1}(e_{156}+e_{246}+e_{345}) = \langle e_4,e_5,e_6 \rangle \in Gr(3,V), $$ $$ f^{-1}(e_{123}+e_{145}) = \{ U\ |\ e_1 \in U \subset \langle e_1,e_2,e_3,e_4,e_5 \rangle,\ (e_{23}+e_{45})\wedge \Lambda^2(U/e_1) = 0 \}, $$ so this is isomorphic to $Q^3 \subset Gr(2,4) \subset Gr(3,V)$, and $$ f^{-1}(e_{123})=\{ U\ |\ \dim (U \cap \langle e_1,e_2,e_3 \rangle) \ge 2 \}, $$ which is isomorphic to a $P^3$ bundle over $P^2$ with a section contracted to a point.
The dimension of the nilpotent orbits of $\mathfrak{gl}_{6}$ can be described using the corresponding partition $\pi: d_{1}+d_{2}+\ldots + d_{k} =6$ associated to a nilpotent orbit - so $\pi$ is the partition corresponding to the Jordan representative of the nilpotent orbit - as follows:
consider the 'dual partition of $\pi$', $\pi': e_{1}+\ldots +e_{l}=6$. If we consider the Young diagram of $\pi$, then $\pi'$ is the partition of 6 corresponding to the transpose Young diagram. Then the dimension of a nilpotent orbit is $6^{2} - \sum_{i=1}^{l}e_{i}^{2}$. A quick check of the 11 partitions of 6 shows that there can't exist nilpotent orbits of dimension 9, 14 or 19 (furthermore, nilpotent orbits are always even-dimensional). Perhaps you could give more information as to why the orbits given in the question are expected to be nilpotent orbits?
There are two orbits of dimension 18 (corresponding to the partitions $3+1+1+1$ and $2+2+2$, with dual partitions $4+1+1$ and $3+3$, respectively). In this case, more information is required.
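The "quick check of the 11 partitions of 6" can be scripted; here is a small self-contained check (mine, not from the answer) of the dimension formula $\dim\mathcal{O}_\pi = 36 - \sum_i e_i^2$:

```python
# Enumerate the partitions of 6, compute the dual (transpose) partition,
# and apply dim = 6^2 - sum(e_i^2).  No orbit has dimension 9, 14 or 19,
# all dimensions are even, and exactly two partitions give dimension 18.
def partitions(n, largest=None):
    if largest is None:
        largest = n
    if n == 0:
        yield []
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

def dual(p):
    return [sum(1 for part in p if part > i) for i in range(p[0])]

dims = {tuple(p): 36 - sum(e * e for e in dual(p)) for p in partitions(6)}
assert len(dims) == 11                          # the 11 partitions of 6
assert all(d % 2 == 0 for d in dims.values())   # orbits are even-dimensional
assert not {9, 14, 19} & set(dims.values())
assert sorted(p for p, d in dims.items() if d == 18) == [(2, 2, 2), (3, 1, 1, 1)]
print(sorted(set(dims.values())))               # [0, 10, 16, 18, 22, 24, 26, 28, 30]
```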
In section 9.2 on oscillator strengths, we first defined what we meant by absorption oscillator strength \(f_{12}\). We then showed that the equivalent width of a line is proportional to \(ϖ_1f_{12}\). We followed this by defining an emission oscillator strength \(f_{21}\) by the equation \(ϖ_2f_{21} = ϖ_1f_{12}\). Thereafter we defined a weighted oscillator strength \(ϖf\) to be used more or less as a single symbol equal to either \(ϖ_2f_{21}\text{ or }ϖ_1f_{12}\). Can we do a similar sort of thing with the Einstein coefficients? That is, we have defined \(A_{21}\), the Einstein coefficient for spontaneous emission (i.e. downward transition) without any difficulty, and we have shown that the intensity or radiance of an emission line is proportional to \(ϖ_2A_{21}\). Can we somehow define an Einstein absorption coefficient \(A_{12}\)? But this would hardly make any sense, because atoms do not make spontaneous upward transitions! An upward transition requires either absorption of a photon or collision with another atom.
For absorption lines (upwards transitions) we can define an
Einstein \(B\) coefficient such that the rate of upward transitions from level 1 to level 2 is proportional to the product of two things, namely the number of atoms \(N_1\) currently in the initial (lower) level and the amount of radiation that is available to excite these upward transitions. The proportionality constant is the Einstein coefficient for the transition, \(B_{12}\). There is a real difficulty in that by “amount of radiation” different authors mean different things. It could mean, for example, any of the following four things:
\(u_\lambda\) the energy density per unit wavelength interval at the wavelength of the line, expressed in \(\text{J m}^{-3} \ \text{m}^{-1}\);
\(u_\nu\) the energy density per unit frequency interval at the frequency of the line, expressed in \(\text{J m}^{-3} \ \text{Hz}^{-1}\);
\(L_\lambda\) radiance (unorthodoxly called “specific intensity” or even merely “intensity” and given the symbol \(I\) by many astronomers) per unit wavelength interval at the wavelength of the line, expressed in \(\text{W m}^{-2} \ \text{sr}^{-1} \ \text{m}^{-1}\);
\(L_\nu\) radiance per unit frequency interval at the frequency of the line, expressed in \(\text{W m}^{-2} \ \text{sr}^{-1} \ \text{Hz}^{-1}\).
Thus there are at least four possible definitions of the Einstein \(B\) coefficient and it is rarely clear which definition is intended by a given author. It is essential in all one’s writings to make this clear and always, in numerical work, to state the units. If we use the symbols \(B_{12}^a, \ B_{12}^b, \ B_{12}^c, \ B_{12}^d\) for these four possible definitions of the Einstein \(B\) coefficient, the SI units and dimensions for each are
\begin{array}{c c l}
B_{12}^a : & \text{s}^{-1} (\text{J m}^{-3} \text{ m}^{-1})^{-1} & \text{M}^{-1} \text{L}^2 \text{T} \\ B_{12}^b : & \text{s}^{-1} (\text{J m}^{-3} \text{ Hz}^{-1})^{-1} & \text{M}^{-1} \text{L} \\ B_{12}^c : & \text{s}^{-1} (\text{W m}^{-2} \text{ sr}^{-1} \text{ m}^{-1})^{-1} & \text{M}^{-1} \text{L} \text{T}^2 \\ B_{12}^d : & \text{s}^{-1} (\text{W m}^{-2} \text{ sr}^{-1} \text{ Hz}^{-1})^{-1} & \text{M}^{-1} \text{L} \\ \nonumber \end{array}
You can, of course, find equivalent ways of expressing these units (for example, you could express \(B_{12}^b\) in metres per kilogram if you thought that that was helpful!), but the ones given make crystal clear the meanings of the coefficients.
The relations between them are (omitting the subscripts 12):
\[B^a = \frac{\lambda^2}{c}B^b = \frac{c}{4 \pi} B^c = \frac{\lambda^2}{4 \pi} B^d ; \label{9.4.1}\]
\[B^b = \frac{\nu^2}{4 \pi}B^c = \frac{c}{4 \pi} B^d = \frac{\nu^2}{c} B^a ; \label{9.4.2}\]
\[B^c = \frac{\lambda^2}{c}B^d = \frac{4\pi}{c}B^a = \frac{4\pi \lambda^2}{c^2} B^b ; \label{9.4.3}\]
\[B^d = \frac{4 \pi \nu^2}{c^2}B^a = \frac{4\pi}{c}B^b = \frac{\nu^2}{c}B^c ; \label{9.4.4}\]
For the derivation of these, you will need to refer to equations 1.3.1, 1.15.3 and 1.17.12.
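As a sanity check on equations \(\ref{9.4.1}\)–\(\ref{9.4.4}\), here is a small conversion helper (my own sketch, with an arbitrary test value for \(B^a\)); it rests on \(u_\nu = (\lambda^2/c)u_\lambda\) and \(u = (4\pi/c)L\) for isotropic radiation:

```python
import math

# Convert B^a (rate per unit u_lambda) to the other three conventions.
c = 2.99792458e8   # m s^-1

def b_coefficients(b_a, lam):
    nu = c / lam
    b_b = (c / lam**2) * b_a            # 9.4.1: B^a = (lambda^2/c) B^b
    b_c = (4 * math.pi / c) * b_a       # 9.4.1: B^a = (c/(4 pi)) B^c
    b_d = (4 * math.pi / lam**2) * b_a  # 9.4.1: B^a = (lambda^2/(4 pi)) B^d
    # cross-checks against 9.4.2 and 9.4.3:
    assert math.isclose(b_b, nu**2 / (4 * math.pi) * b_c)
    assert math.isclose(b_c, lam**2 / c * b_d)
    return b_b, b_c, b_d

print(b_coefficients(1.0, 500e-9))
```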
From this point henceforth, unless stated otherwise, I shall use the first definition without a superscript, so that the Einstein coefficient, when written \(B_{12}\), will be understood to mean \(B_{12}^a\). Thus the rate of radiation-induced upward transitions from level 1 to level 2 will be taken to be \(B_{12}\) times \(N_1\) times \(u_\lambda\).
Induced downward transitions.
The Einstein \(B_{12}\) coefficient and the oscillator strength \(f_{12}\) (which are closely related to each other in a manner that will be shown later in this section) are concerned with the forced upward transition of an atom from a level 1 to a higher level 2 by radiation of a wavelength that corresponds to the energy difference between the two levels. The Einstein \(A_{21}\) coefficient is concerned with the spontaneous downward decay of an atom from a level 2 to a lower level 1.
There is another process. Light of the wavelength that corresponds to the energy difference between levels 2 and 1 may
induce a downward transition from an atom, initially in level 2, to the lower level 1. When it does so, the light is not absorbed; rather, the atom emits another photon of that wavelength. Of course the light that is irradiating the atoms induces upward transitions from level 1 to level 2, as well as inducing downward transitions from level 2 to level 1, and since, for any finite positive temperature, there are more atoms in level 1 than in level 2, there is a net absorption of light. (The astute reader will note that there may be more atoms in level 2 than in level 1 if it has a larger statistical weight, and that the previous statement should refer to states rather than levels.) If, however, the atoms are not in thermodynamic equilibrium and there are more atoms in the higher levels than in the lower (the atom is “top heavy”, corresponding to a negative excitation temperature), there will be Light Amplification by Stimulated Emission of Radiation (LASER). In this section, however, we shall assume a Boltzmann distribution of atoms among their energy levels and a finite positive excitation temperature. The number of induced downward transitions per unit time from level 2 to level 1 is given by \(B_{21}N_2 u_\lambda\). Here \(B_{21}\) is the Einstein coefficient for induced downward transition.
Let \(m\) denote a particular atomic level. Let \(n\) denote any level lower than \(m\) and let \(n^\prime\) denote any level higher than \(m\). Let \(N_m\) be the number of atoms in level \(m\) at some time. The rate at which \(N_m\) decreases with time as a result of these processes is
\[-\dot N_m = N_m \sum_n A_{mn} + N_m \sum_n B_{mn} u_{\lambda_{mn}} + N_m \sum_{n^\prime} B_{mn^\prime} u_{\lambda_{mn^\prime}} . \label{9.4.5}\]
This equation describes only the rate at which \(N_m\) is depleted by the three radiative processes. It does not describe the rate of replenishment of level \(m\) by transitions from other levels, nor its depletion or replenishment by collisional processes. Equation \(\ref{9.4.5}\), when integrated, results in
\[N_m(t) = N_m (0) e^{-\Gamma_m t} . \label{9.4.6}\]
Here \[\Gamma_m = \sum_n A_{mn} + \sum_n B_{mn} u_{\lambda_{mn}} + \sum_{n^\prime} B_{mn^\prime} u_{\lambda_{mn^\prime}} \label{9.4.7}\]
(Compare equation 9.3.3, which dealt with a two-level atom in the absence of stimulating radiation.)
The reciprocal of \(\Gamma_m\) is the mean lifetime of the atom in level \(m\).
Consider now just two levels – a level 2 and a level 1 below it. In a steady state, the rate of spontaneous and induced downward transitions from level 2 to level 1 is equal to the rate of induced upward transitions from level 1 to level 2:
\[A_{21}N_2 + B_{21} N_2 u_\lambda = B_{12} N_1 u_\lambda . \label{9.4.8}\]
I have omitted the subscript 21 from \(\lambda\), since there is only one wavelength involved, namely the wavelength corresponding to the energy difference between levels 2 and 1. Let us assume that the gas and the radiation field are in thermodynamic equilibrium. In that case the level populations are governed by Boltzmann’s equation (equation 8.4.19), so that equation \(\ref{9.4.8}\) becomes
\[(A_{21} + B_{21} u_\lambda ) N_0 \frac{ϖ_2}{ϖ_0} e^{-E_2 / (kT)} = B_{12} u_\lambda N_0 \frac{ϖ_1}{ϖ_0} e^{-E_1 / (kT)} , \label{9.4.9}\]
from which \[u_\lambda = \frac{A_{21} ϖ_2}{B_{12} ϖ_1 e^{hc/(\lambda kT)}-B_{21} ϖ_2} , \label{9.4.10}\]
where I have made use of \[E_2 - E_1 = hc/ \lambda . \label{9.4.11}\]
Now, still assuming that the gas and photons are in thermodynamic equilibrium, the radiation distribution is governed by Planck’s equation (equations 2.6.4, 2.6.5, 2.6.9; see also equation 2.4.1):
\[u_\lambda = \frac{8 \pi hc}{\lambda^5 \left( e^{hc/(\lambda kT)}-1\right)} . \label{9.4.12}\]
On comparing equations \(\ref{9.4.10}\) and \(\ref{9.4.12}\), we obtain
\[ϖ_1 B_{12} = ϖ_2 B_{21} = \frac{\lambda^5}{8\pi hc} ϖ_2 A_{21} . \label{9.4.13}\]
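A numerical sanity check (my own, with arbitrary test values) that the relations \(\ref{9.4.13}\) are exactly what make the steady-state \(u_\lambda\) of equation \(\ref{9.4.10}\) reproduce the Planck function \(\ref{9.4.12}\):

```python
import math

# Arbitrary test values: wavelength, temperature, statistical weights, A21.
h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23
lam, T = 500e-9, 6000.0
w1, w2, A21 = 2.0, 4.0, 1e8

B21 = lam**5 / (8 * math.pi * h * c) * A21      # equation 9.4.13
B12 = (w2 / w1) * B21                            # from w1*B12 = w2*B21

x = h * c / (lam * k * T)
u_balance = A21 * w2 / (B12 * w1 * math.exp(x) - B21 * w2)     # eq 9.4.10
u_planck = 8 * math.pi * h * c / (lam**5 * (math.exp(x) - 1))  # eq 9.4.12
assert math.isclose(u_balance, u_planck, rel_tol=1e-12)
print(u_planck)
```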
A reminder here may be appropriate that the \(B\) here is \(B^a\) as defined near the beginning of this section. Also, in principle there would be no objection to defining an \(ϖB\) such that \(ϖB = ϖ_1 B_{12} = ϖ_2 B_{21}\), just as was done for oscillator strength, although I have never seen this done.
Einstein \(B_{12}\) coefficient and Equivalent width.
Imagine a continuous radiant source of radiance per unit wavelength interval \(L_\lambda\), and in front of it an optically thin layer of gas containing \(\mathcal{N}_1\) atoms per unit area in the line of sight in level 1. The number of upward transitions per unit area per unit time to level 2 is \(B_{12}^c \mathcal{N}_1 L_{\lambda_{12}}\), and each of these absorbs an amount \(hc/\lambda_{12}\) of energy. The rate of absorption of energy per unit area per unit solid angle is therefore \(\frac{1}{4 \pi} \times B_{12}^c \mathcal{N}_1 L_{\lambda_{12}} \times \frac{hc}{\lambda_{12}} .\) This, by definition of equivalent width (in wavelength units), is equal to \(WL_{\lambda_{12}}\).
Therefore \[W = \frac{B_{12}^c \mathcal{N}_1 hc}{4 \pi \lambda_{12}} = \frac{h}{\lambda_{12}} \mathcal{N}_1 B_{12}^a .\label{9.4.14}\]
If we compare this with equation 9.2.4 we obtain the following relation between a \(B_{12}\) and \(f_{12}\):
\[B_{12}^a = \frac{e^2 \lambda^3}{4 ε_0 hmc^2}f_{12} . \label{9.4.15}\]
It also follows from equations \(\ref{9.4.13}\) and \(\ref{9.4.15}\) that
\[ϖ_2 A_{21} = \frac{2\pi e^2}{ε_0 mc \lambda^2} ϖ_1 f_{12} . \label{9.4.16}\]
I shall summarize the various relations between oscillator strength, Einstein coefficient and line strength in section 9.9. |
I can only offer a partial answer here. I don't really know how resurgence is used in QFT, so I am going to talk about the general principles.
First things first. The definition of an asymptotic series does not need complex analysis. However, an asymptotic series in a real setting is much less useful. Let's say that a function $f$ admits $x$ as an asymptotic series at $+\infty$. Now define the function $g$ to be $1$ on $[n,n+1)$ if $n$ is even and $0$ if $n$ is odd. Then the function $f\cdot g$ also admits $x$ as an asymptotic series. So in the real setting we do not have any form of uniqueness and we cannot extract any information (well, maybe the word "any" is too strong here).
So when we talk about resurgent functions we are talking necessarily about complex analytic functions. Of course in the complex setting, having one derivative is equivalent to having all of them and a bit more. I will come back to endless continuability later.
So now let's look at a somewhat trivial example: the ODE $-x''(t)+x(t)=1/t$.
If we assume that the solution is an analytic function in a neighbourhood of infinity, we can substitute $\tilde x(t)=\sum_{n\ge0}\frac{c_n}{t^n}$ in the ODE and collect terms to find eventually that$$\tilde x(t)=\sum_{n\ge0}\frac{(2n)!}{t^{2n+1}}.$$And we realise that our assumption was wrong because this series has $0$ radius of convergence. So let's resum it. Let us use $s$ as the dual variable of $t$. The Borel transform of this is$$\hat x(s)=\sum_{n\ge0}s^{2n}.$$This series converges in the unit disc. Then, because we were "lucky", we can easily extend it beyond the unit disc:$$ \hat x(s) = \frac{1}{1-s^2}. $$Then the Laplace transform of this is a solution of the ODE. But which Laplace transform?
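One can verify the formal series exactly (my own check, not part of the answer): truncating at order $N$ and plugging into $-x''+x=1/t$ leaves only the single remainder term $-(2N+2)!/t^{2N+3}$, since each term differentiates in closed form.

```python
from fractions import Fraction
from math import factorial

# Truncated formal series x_N(t) = sum_{n=0}^{N} (2n)!/t^(2n+1) and its second
# derivative, computed termwise; Fractions make the cancellation check exact.
def residual(N, t):
    x = sum(Fraction(factorial(2 * n), t**(2 * n + 1)) for n in range(N + 1))
    xpp = sum(Fraction(factorial(2 * n) * (2 * n + 1) * (2 * n + 2),
                       t**(2 * n + 3)) for n in range(N + 1))
    return -xpp + x - Fraction(1, t)   # -x'' + x - 1/t

N, t = 4, 7
assert residual(N, t) == -Fraction(factorial(2 * N + 2), t**(2 * N + 3))
print(residual(N, t))
```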
We can define 2 here. One by integrating on the positive real axis and the other by integrating on the negative real axis. In practice we can move the contour of integration freely as long as we don't hit a singularity of the function. The singularities are of course $\pm i$, this is the reason that we have 2 Laplace transforms.
So the solutions of the ODE are$$ x^\pm(t) = \int_0^{\pm\infty}\frac{e^{-ts}}{1-s^2}ds. $$These are two particular solutions, i.e. $x^\pm(t)$ tends to $0$ as $t$ tends to $\pm\infty$.
OK, let's talk about the resurgence here. Unfortunately, the term resurgent function is used liberally to describe 3 different objects. In the above example we can call a resurgent function $\tilde x$, $\hat x$ or $x^\pm$. I think that it is more appropriate to call $\hat x$ a resurgent function and nothing else, so let's go with that.
So ok, what do we get from what we did in the example? We know that we have 2 solutions and we know that these 2 solutions are not the same. But how different are they? First of all, what is the domain of definition of these solutions? I will not go into details here, but for $x^+$ it is $\mathbb C$ minus the negative real axis with the origin. If it is not clear why, you should look into the literature on the Laplace transform. The domain of definition for $x^-$ is the reflection with respect to the imaginary axis.
Then we can take the difference $x^+(t)-x^-(t)$ for non-real $t$ and this is well defined. In order to calculate this difference we notice that since$$ x^\pm(t) = \int_0^{\pm\infty}\frac{e^{-ts}}{1-s^2}ds, $$we have$$ x^+(t)-x^-(t) = \int_{-\infty}^\infty\frac{e^{-ts}}{1-s^2}ds. $$However, as it is written this integral does not converge; we have to deform the contour of integration. So let's choose $t$ with negative imaginary part. This means that we have to tilt the contour of integration "upwards on both sides". Then, because the singularities of the function are just poles, we get$$ x^+(t)-x^-(t) = \oint_i\frac{e^{-ts}}{1-s^2}ds, $$which means the closed integral around the complex number $i$. So not only do we know that the 2 solutions are different, but we can also explicitly compute their difference.
OK, so what is the big deal with resurgence? In general the situation is as follows. We want an analytic solution, but the solution fails to be analytic and we get an asymptotic series. Then we want to construct a solution and we need to use resummation. The problem we find there is that typically the solution we construct is different in different sections of the complex plane and the way it is different depends on the singularities of the Borel transform, $\hat x$ in our case. The theory of resurgent functions gives a toolset that allows us to analyse the singularities of such functions. Before this theory this was largely science fiction.
So let's go back to the endless continuability business. Initially we have $\hat x$ as a convergent series, i.e. it is a function defined in a disc, but we need to take its Laplace transform. The first thing we do is to extend it to (hopefully) its maximal domain, which means that it needs to extend to infinity in a sector. If moreover the function is of exponential growth in this sector, then we can define the Laplace transform. No other conditions are needed.
However, if we want to use resurgence's toolkit we need to have functions with isolated singularities. For meromorphic functions, what isolated singularities are is straightforward. If the function is defined on a cover of $\mathbb C$, in other words "it has branching singularities", this becomes tricky. Let's say that this means that if we project the singularities of the function on $\mathbb C$, they are isolated points. This condition is stricter than what the theory needs, but I hope your case is covered by it.
So what do you need in order to rigorously prove your statement? All of the above. You need to have a well-defined asymptotic series. You need to prove that its Borel transform converges. You need to prove that this function is endlessly continuable. And finally you need to have exponential bounds for the growth of the function in "most" directions.
I hope this helps. Unfortunately I have only touched the surface of this amazing theory; in reality the rabbit hole is very deep. Ask if something is unclear.
ADDED LATER:
I forgot to comment on this transseries business. So, going back to the example, we had $$ x^+(t)-x^-(t) = \oint_i\frac{e^{-ts}}{1-s^2}ds. $$Let's move this to the origin:$$ x^+(t)-x^-(t) = \oint_0\frac{e^{-t(s+i)}}{1-(s+i)^2}ds = e^{-it}\oint_0\frac{e^{-ts}}{1-(s+i)^2}ds. $$So you see that we get an exponential in front of a "series" (in this case it is not really a series).
In the general case we get a function that has many singularities that are ramification points, let's look at another simple example: the finite difference equation $x(t+1)-x(t)=1/t^2$.
The Borel transform turns this equation into $\hat x(s) - e^{-s} \hat x(s) = s$, so we get $$ \hat x(s) = \frac{s}{1-e^{-s}}. $$This function has a pole at $2\pi i n$ for every nonzero integer $n$.
We can do basically the same as above, i.e. define 2 solutions (that are of course not general) and take their difference. But now the difference of the solutions is:$$ x^+(t)-x^-(t) = \sum_{n=1}^\infty\oint_{2\pi i n}\frac{s\,e^{-ts}}{1-e^{-s}}ds $$and if we move all the integrals to the origin we get$$ x^+(t)-x^-(t) = \sum_{n=1}^\infty e^{-2\pi i n t} \oint_{0}\frac{(s+2\pi i n)\,e^{-ts}}{1-e^{-s-2\pi i n}}ds. $$Of course these integrals do not give us series because the function is simple. However in the general case, if we for example add a non-linear term to the equation, we will get ramification points instead of just a pole, and then we will get an actual transseries.
There is one last thing that I was wondering if I should write. I decided to do so, but it may be confusing at least in the beginning. OK here we go:
Now that you have read all of the previous, I have to tell you that I lied. A lot. The classical Borel transform is defined as the formal inverse of the Laplace transform, and it divides by a factorial. This is the reason that we can get a convergent series. However, in resurgence we need a generalization of this.
I will just give you some references because this is too big to be written here.
So the proper way to define a resurgent function is through hyperfunctions, which are a generalisation of distributions. I think the easiest textbook on the subject is "Introduction to Hyperfunctions and Their Integral Transforms" by Urs Graf.
Resurgent functions can be thought of as a very special class of hyperfunctions. There is a paper that gives an introduction, but I think it is not so easy to follow if you don't already know the material. I also found a PhD thesis that explains it a bit; it is rather dense, but potentially easier. Read Chapter 2.
Also, I don't know if you know them already, but Sternin and Shatalov are probably more relevant to you. They have a book titled "Borel-Laplace Transform and Asymptotic Theory: Introduction to Resurgent Analysis", which for some reason is very expensive and not all libraries have it. Take a look also at their papers.
A word of caution about the book though. Their definition of resurgent functions is not the same as Écalle's. They define a smaller class with some extra properties, which means that they are probably the only ones using this definition of resurgence. Nevertheless it can help you understand the theory.
Consider the language $L_{k-distinct}$ consisting of all $k$-letter strings over $\Sigma$ such that no two letters are equal:
$$ L_{k-distinct} :=\{w = \sigma_1\sigma_2...\sigma_k \mid \forall i\in[k]: \sigma_i\in\Sigma ~\text{ and }~ \forall j\ne i: \sigma_j\ne\sigma_i \}$$
This language is finite and therefore regular. Specifically, if $\left|\Sigma\right|=n$, then $\left|L_{k-distinct}\right| = \binom{n}{k} k!$.
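A brute-force check (my own script, not part of the question) of this count for small alphabets, identifying $\Sigma$ with $\{0,\dots,n-1\}$:

```python
from itertools import product
from math import comb, factorial

# Count k-letter strings over an n-letter alphabet with all letters distinct
# and compare with C(n, k) * k!.
def count_distinct_strings(n, k):
    return sum(1 for w in product(range(n), repeat=k) if len(set(w)) == k)

for n in range(1, 7):
    for k in range(1, n + 1):
        assert count_distinct_strings(n, k) == comb(n, k) * factorial(k)
print("ok")
```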
What is the smallest non-deterministic finite automaton that accepts this language?
I currently have the following loose upper and lower bounds:
The smallest NFA I can construct has $4^{k(1+o(1))}\cdot polylog(n)$ states.
The following lemma implies a lower bound of $2^k$ states:
Let $L ⊆ Σ^*$ be a regular language. Suppose there are $n$ pairs $P = \{ (x_i, w_i) \mid 1 ≤ i ≤ n \}$ such that $x_i\cdot w_j \in L$ if and only if $i=j$. Then any NFA accepting $L$ has at least $n$ states.
Another (trivial) lower bound is $\log\binom{n}{k}$, which is the log of the size of the smallest DFA for the language.
I am also interested in NFAs that accept only a fixed fraction ($0<\epsilon<1$) of $L_{k-distinct}$, if the size of the automaton is smaller than $\epsilon\cdot 4^{k(1+o(1))}\cdot polylog (n)$.
Edit: I've just started a bounty, but the bounty text had a mistake:
I meant that we may assume $k=polylog(n)$, while I wrote $k=O(\log(n))$.
Edit2:
The bounty is going to end soon, so if anyone is interested in what is perhaps an easier way to earn it, consider the following language:
$L_{(r,k)-distinct} :=\{w : w$ contains $k$ distinct symbols and no symbol appears more than $r$ times$\}$.
(i.e. $L_{(1,k)-distinct} = L_{k-distinct}$).
A similar construction to the one in the comments gives an $O(e^k\cdot 2^{k\cdot \log(1+r)}\cdot poly(n))$-sized automaton for $L_{(r,k)-distinct}$.
Can this be improved? What's the best lower bound we can show for this language? |
''Diamond Paradox'' by Diamond (1971)
This is a "less-known paradox," usually put as a counter to famous Bertrand paradox. It is a starting point in the literature on informational frictions in consumer markets, and the scientists in the field agree on its significance.
Its idea is diametrically opposite to that of Bertrand. Consider the following simple example. There are $2$ firms which produce homogeneous goods at zero marginal cost and compete in prices, $p$. They simultaneously set prices. There is also a single consumer whose demand is given by $1-p$. Importantly, the consumer does not observe the prices set by the firms and, therefore, needs to search for them sequentially, where search is costly. Suppose that the cost of visiting a firm is $0 < c \leq \frac{1}{2}$. Then the unique equilibrium of the market is that both firms charge the monopoly price $$p^M= \frac{1}{2}.$$
This is a diametrically opposite result to that of Bertrand.
The reasoning behind the result is as follows. Suppose both firms charge $p=0$. Then the consumer randomly visits one of the firms, say firm $i$, and buys. However, firm $i$ could have charged $c$ and made positive profits, as the consumer would have bought anyway: she would have incurred the cost $c$ had she left firm $i$ to buy from the rival firm. By the same argument, one can see that $p=c$ cannot be an equilibrium either, as firm $i$ can now charge $2c$ and improve its profit. Continuing this way, it is easy to arrive at an equilibrium where both firms charge $p^M$. A firm does not want to charge $p^M+c$ simply because its profit is maximized at $p^M$.
Formal Analysis of the Example
Timing: First, the firms simultaneously set prices. Second, the consumer, without knowing the prices, engages in sequential search. The first search is free and the consumer visits each firm with equal probability. The consumer can return to a previously searched firm for free. The consumer has to observe a firm's price in order to buy goods from that firm.
Beliefs: In equilibrium, the consumer holds correct beliefs about the strategies of the firms. If, upon visiting a firm, she observes a price different from the equilibrium one, the consumer assumes that the rival firm has deviated to the same price too. Thus, the consumer has symmetric out-of-equilibrium beliefs. Note: the result of the game does not change if the consumer has passive beliefs.
Strategies: The firms' strategies are prices. As mixing is allowed, let $F(p)$ denote the probability that a firm charges a price no greater than $p$. The consumer's strategy is whether to search for the second price upon observing the first one. This strategy is given by a reservation price $r$, such that upon observing a price lower than $r$ she buys outright, upon observing a price greater than $r$ she searches further, and upon observing a price equal to $r$ she is indifferent between buying immediately and searching further.
Equilibrium Notion: The concept of Perfect Bayesian Equilibrium (PBE) is employed. A PBE is characterized by a price distribution $F(p)$ for each firm and the consumer's reservation-price strategy $r$ such that $(i)$ each firm chooses $F(p)$ to maximize its profit, given the equilibrium strategy of the other firm and the consumer's optimal search strategy, and $(ii)$ the consumer searches according to the reservation-price rule $r$, given correct beliefs concerning the equilibrium strategies of the firms.
Theorem: For any $c>0$, there exists a PBE characterized by triple $(p^M, p^M, r)$, where $p^M$'s are charged with probability $1$ and $$r=1.$$
Proof: First, I prove that $r=1$, i.e. that the consumer buys outright when she observes any price lower than $1$. Clearly, if she observes a price greater than $1$ she does not buy from that firm, as this yields her a negative payoff. Now suppose she observes a price $p'<r$. Then she expects the rival firm to charge $p'$ too. Thus, if she buys outright her payoff is $\int_{p'}^{1}(1-p)dp$, and if she searches she expects a payoff of $\int_{p'}^{1}(1-p)dp - c$. As the former is greater than the latter, she is better off buying immediately. This proves that $r=1$.
Next, I prove that both firms charge $p^M$. Clearly, a firm never charges above $1$, as it would never sell. The expected profit of a firm is then $\frac{1}{2}(1-p)p$, because the consumer visits each firm half of the time. It is easy to see that this profit is maximized at $p^M$.
QED. |
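A trivial numerical check (mine, not part of the proof) of the last step:

```python
# The per-visit profit (1/2)(1-p)p is maximized at the monopoly price
# p^M = 1/2, and charging p^M + c only lowers profit for any c > 0.
def profit(p):
    return 0.5 * (1 - p) * p

grid = [i / 1000 for i in range(1001)]
best = max(grid, key=profit)
assert abs(best - 0.5) < 1e-9
for c in (0.05, 0.2, 0.5):
    assert profit(0.5 + c) < profit(0.5)
print(best, profit(best))
```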
The second order equation
$\frac{d^2\vec{x}}{dt^2} = A\vec{x} + \vec{g}(t)$
models an earthquake's effect on a 7-story building. Let $x_j(t)$ be the displacement of the $j$th floor with respect to its equilibrium position. The ground moves with displacement $g(t)$.
Here
$\vec{x} = \begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_7 \end{pmatrix}$
$\vec{g}(t) = \begin{pmatrix} g(t)\\ 0\\ \vdots\\ 0 \end{pmatrix}$ .
A
second order $7\times7$ system in $x_j(t)$ is given by $x_1'' = 10(x_2- 2x_1) + g(t)$ $x_2'' = 10(x_3- 2x_2+ x_1)$ $x_3'' = 10(x_4- 2x_3+ x_2)$ $x_4'' = 10(x_5- 2x_4+ x_3)$ $x_5'' = 10(x_6- 2x_5+ x_4)$ $x_6'' = 10(x_7- 2x_6+ x_5)$ $x_7'' = 10(x_6- x_7)$.
Write the above second order system as a
first order $14\times14$ system using the additional equations $v_j = x'_j$. |
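A sketch (mine) of the resulting first-order system, assuming the first-floor equation is $x_1'' = 10(x_2-2x_1)+g(t)$, built with plain Python lists. With $y=(x_1,\dots,x_7,v_1,\dots,v_7)$ the system is $y' = My + (0,\dots,0,g(t),0,\dots,0)^T$, where $g(t)$ sits in slot 8:

```python
# Build A (7x7) from the equations above, then M = [[0, I], [A, 0]] (14x14).
# The forcing g(t) enters only the v_1' row (row index 7, 0-based).
n = 7
A = [[0.0] * n for _ in range(n)]
for j in range(n):
    A[j][j] = -20.0 if j < n - 1 else -10.0   # top floor: x7'' = 10(x6 - x7)
    if j > 0:
        A[j][j - 1] = 10.0
    if j < n - 1:
        A[j][j + 1] = 10.0

M = [[0.0] * (2 * n) for _ in range(2 * n)]
for i in range(n):
    M[i][n + i] = 1.0          # x_j' = v_j
    for j in range(n):
        M[n + i][j] = A[i][j]  # v_j' = (A x)_j  (+ g(t) when j = 1)

assert len(M) == 14 and all(len(row) == 14 for row in M)
assert M[0][7] == 1.0 and M[7][0] == -20.0 and M[13][6] == -10.0
```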
In the Subset Sum problem, can some of the given numbers $a_1,a_2,a_3,\dots,a_n$ be equal? For example, we might have $[1,1,1,2,3,4]$ with target $5$: can I take $\{2,3\}$ as a solution, and does $\{1,1,1,2\}$ count as a solution or not?
One question we could ask is "Can we reduce this back to the subset sum problem?" In this case, the answer is
yes: for each duplicate $z$ we replace it with two numbers $x$ and $y$ such that $x+y=z$ (here one copy of $-1$ is replaced by the pair $6$ and $-7$): $$[-1,-1,2,3]\to [-7,-1,2,3,6]$$
However, we need to be careful that we don't introduce additional solutions (those using just $x$ without $y$), which we can do by making $x>\lvert \sum_{a_i<0} a_i\rvert$ and $y<-\sum_{a_i>0} a_i$. Specifically, this precludes the use of $x$ without $y$ (and vice versa) by making the sum of $x$ and all the negative numbers strictly above zero (and thus unable to satisfy the traditional subset sum target).
In a complex methods course I am taking, we were given an equation for a particular driven harmonic oscillator where the driving force is trigonometric. I have worked out the math and obtained an equation that tells me that the driving frequency at resonance is the natural frequency multiplied by i. My tutor tells me that this is a 90 degree phase shift, but I don't really understand why. Isn't a phase shift obtained by adding or subtracting 90 degrees? And how can a frequency, which is a measurable physical value, take on imaginary values? I would understand if we were talking about velocity. Because velocity has a direction, addition or scalar multiplication by a real value would not describe a 90 degree rotation of the vector. But frequency is a scalar quantity. What does it mean to have an imaginary frequency?
If your oscillating function is of the form
$e^{i\omega t}$,
a phase shift looks like
$e^{i(\omega t+\phi)}$,
which can be rewritten as
$e^{i\omega t}e^{i\phi}$.
Now, recall that
$e^{i\phi}=\cos\phi + i\sin\phi$.
A 90 degree phase shift corresponds to $\phi=\frac{\pi}{2}$.
Thus,
$e^{i\frac{\pi}{2}}=\cos\frac{\pi}{2} + i\sin\frac{\pi}{2} = 0 + i = i$.
So finally we have,
$e^{i(\omega t+\frac{\pi}{2})}=e^{i\omega t}e^{i\frac{\pi}{2}}=ie^{i\omega t}$.
So we see that a phase shift of 90 degrees corresponds to multiplication by $i$.
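The identity is easy to check numerically (arbitrary sample values for $\omega$ and $t$):

```python
import cmath
import math

w, t = 2.0, 0.37                             # arbitrary frequency and time
lhs = 1j * cmath.exp(1j * w * t)             # multiply the oscillation by i
rhs = cmath.exp(1j * (w * t + math.pi / 2))  # shift the phase by 90 degrees
assert abs(lhs - rhs) < 1e-12
```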
There is an article here: (the optimal driving force is shown to be 90° out of phase with the motion)
Also, any vector like $4+3i$ can be expressed in phasor form as $5\angle 36.9°$ or in complex form $4+3i$. Adding 90° just gives a vector of the same amplitude at 90° to the original.
Indeed, multiplication by $i$ is a phase shift by 90 degrees.
Note that $i=e^{i\pi/2}$. Writing whatever driving signal in complex form, since it is sinusoidally driven, it will have an $e^{i\omega t}$ in it, multiplying by $i$ multiplies by $e^{i\pi/2}$, and when you multiply the exponentials you add the exponents to get $e^{i(\omega t+\pi/2)}$.
Taking the real part to get an answer that actually makes sense physically, you would have a $\cos(\omega t+\pi/2)$ dependency in your driving.
I think this is what you are asking, hope this helps. |
This question combines two aspects:
what do we mean by the relativity of simultaneity, and does it always hold?
what is a good way to understand the constantly accelerating reference frame (in flat spacetime)?
1. Relativity of simultaneity
In special relativity, the relativity of simultaneity is the fact that if in one inertial frame two events are simultaneous, then there exist other inertial frames in which they are not simultaneous. In general relativity, the relativity of simultaneity is the fact that if two events share the same value of a temporal coordinate $t$ in some given set of coordinates used to chart a region of spacetime, then there can be other sets of coordinates in which those events do not share the same value of some other temporal coordinate $t'$. Here, by a 'temporal coordinate' I mean a coordinate such that small intervals where only this coordinate changes are timelike.
The relativity of simultaneity is an existence claim: it is the claim that
there exist coordinate-charts or inertial frames which differ concerning simultaneity. Therefore no single counter-example can be called a 'violation'; the only way to 'violate' the claim would be to show it is never true---one would have to show that there are no pairs of frames which differ about simultaneity. But this will not be possible, because it is easy to find examples which do differ about simultaneity.
The question being asked here is, therefore, really the question:
2. what is a good way to understand the constantly accelerating reference frame (in flat spacetime)
The constantly accelerating frame in flat spacetime, also called Rindler frame, is a very good platform for learning various lessons in both special and general relativity. One could write whole books about it; Wikipedia provides a useful introduction. The basic idea is to chart a large region of flat spacetime using two different coordinate systems: either ordinary Minkowski coordinates $T,X,Y,Z$, or Rindler coordinates $t,x,y,z$, related to the former by$$T = x \sinh(\alpha t),\quad X=x\cosh(\alpha t),\quad Y=y,\quad Z=z$$where we have set $c=1$. In terms of the quantities quoted in the question, we have $\alpha = g$ and the unprimed coordinates in the question are equal to the $T,X,Y,Z$ coordinates adopted here.
The spacetime interval between two events separated by $dT,dX,dY,dZ$ is$$ds^2 = - dT^2 + dX^2 + dY^2 + dZ^2$$(the Minkowski metric). The spacetime interval between two events separated by $dt,dx,dy,dz$ is$$ds^2 = -(\alpha x)^2 dt^2 + dx^2 + dy^2 + dz^2$$(the Rindler metric).
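The Rindler form of the interval follows from the Minkowski one by differentiating the coordinate transformation; the cross terms cancel and $\cosh^2-\sinh^2=1$ does the rest:

```latex
\begin{aligned}
dT &= \sinh(\alpha t)\,dx + \alpha x \cosh(\alpha t)\,dt,\\
dX &= \cosh(\alpha t)\,dx + \alpha x \sinh(\alpha t)\,dt,\\
-dT^2 + dX^2 &= (\cosh^2-\sinh^2)\,dx^2 - (\alpha x)^2(\cosh^2-\sinh^2)\,dt^2
  = -(\alpha x)^2\,dt^2 + dx^2 .
\end{aligned}
```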
The events along any given straight line through the origin in the $T,X$ plane (with slope less than 45$^\circ$) are simultaneous in the Rindler coordinates: they all have the same $t$. But they are not simultaneous in the Minkowski coordinates, so far from avoiding the relativity of simultaneity, this case illustrates that aspect of relativity perfectly well.
The following diagram shows the lines of constant $t$ (straight lines through origin) and the lines of constant $x$ (hyperbolae) in the $T,X$ plane.
The equation which is quoted in the question, namely$$``\,t = \frac{c}{g} \sinh (g t'/c)\,"$$is, in my notation,$$T = \frac{1}{g} \sinh (g t) .$$This is the equation for
one of the hyperbolae: it is the one with $x = 1/g$. So no wonder it makes no mention of $x$! But perhaps the question has arisen from another aspect of this case. Each hyperbola crosses the $T$ axis at some given $X$ (in fact at $X = x$), and the proper acceleration of a particle whose worldline is that particular hyperbola is itself proportional to $1/x$. So the equation$$T = x \sinh(\alpha t)$$can also be written$$T = \frac{c}{a_0} \sinh(\alpha t)$$where $a_0 = c/x$ is the proper acceleration for the given worldline. This hides the fact that $T$ depends on $x$, and perhaps this is the reason for the confusion that gave rise to the question. |
The Lie algebras of $ \mathfrak{so}(3) $ and $ \mathfrak{su}(2) $ are, respectively,
$$ [L_i,L_j] = i\epsilon_{ij}^{\;\;k}L_k $$ $$ [\frac{\sigma_i}{2},\frac{\sigma_j}{2}] = i\epsilon_{ij}^{\;\;k}\frac{\sigma_k}{2} $$
And of course, there is an isomorphism between these two algebras, $$ \Lambda : \mathfrak{su(2)} \rightarrow \mathfrak{so(3)} $$ such that $ \Lambda(\sigma_i/2) =L_i $
Now is it possible,
using $\Lambda$, to construct a group homomorphism between $SU(2)$ and $SO(3)$?
I was checking up on Lie group homomorphism, and in Wikipedia, there is a beautiful image
In this image's language, how are $\phi$ and $\phi_*$ related to each other (just like the algebra and group elements are)?
Note : I know there is a two-to-one homomorphism between these two groups which can be found directly using the group elements. I am not looking for this. EDIT 1 : In $ SL(2,\mathbb{R}) $ the generators, say $X_1,X_2,X_3$, obey the following commutation rules :
$$ [X_1,X_2] = 2X_2 $$ $$ [X_1,X_3] = -2X_3 $$ $$ [X_2,X_3] = X_1 $$
And in the case of $ SO(3) $ with a different basis, $ L_{\pm} = L_1 \pm i L_2 $ and $ L_z = L_3 $ with the commutators being,
$$ [L_z,L_{\pm}]= \pm L_{\pm} $$ $$ [L_+,L_-]= 2 L_z $$
This algebra is very similar to the previous one, so why is it that we can't define a map?
EDIT 2:
Can the group homomorphism between these two groups be written like this (something like what I expected)? $$ R = \exp\left(\sum_k i t_k L_k\right) = \exp\left(\sum_k i t_k \frac{\sigma_k}{2}\right) = \exp\left(\sum_k i t_k \frac{1}{2}\ln(U_k)\right) $$
Now this seems like the map $\phi$,
$$ R = \phi(U) = \exp\bigg(\sum_k i t_k \frac{1}{2}\ln(U_k)\bigg) $$ |
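For reference (this is the standard two-to-one map mentioned in the Note, not the new construction the question asks for), the homomorphism can be realised concretely as the adjoint action $R_{ij} = \tfrac{1}{2}\operatorname{tr}(\sigma_i U \sigma_j U^\dagger)$. A NumPy sketch checking that it lands in $SO(3)$ and identifies $U$ with $-U$:

```python
import numpy as np

# Pauli matrices
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def su2_element(theta, n):
    """U = cos(theta/2) I - i sin(theta/2) (n . sigma) for a unit axis n."""
    ndots = sum(nk * sk for nk, sk in zip(n, sig))
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * ndots

def adjoint(U):
    """R_ij = (1/2) tr(sigma_i U sigma_j U^dagger): the SU(2) -> SO(3) map."""
    return np.array([[0.5 * np.trace(si @ U @ sj @ U.conj().T).real
                      for sj in sig] for si in sig])

U = su2_element(0.8, [0.0, 0.6, 0.8])
R = adjoint(U)
assert np.allclose(R @ R.T, np.eye(3))       # R is orthogonal
assert abs(np.linalg.det(R) - 1.0) < 1e-9    # with determinant +1
assert np.allclose(adjoint(-U), R)           # U and -U give the same rotation
```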
I would like to argue that the result follows if one demands that energy is differentiable in k space. To be precise, one would need $E = E\,(k_x, k_y)$ such that $\nabla_{\mathbf{k}}E$ always exists (I have reduced the dimensionality for ease of visualisation).
This is perhaps essential on physical grounds, since the group velocity of electron wavepackets is $\frac{1}{\hbar}\nabla_{\mathbf{k}}E$. Imagine forcing an electron along in k space using some external force. Suppose that $\nabla_{\mathbf{k}}E$ took different values on two directions of approach to the same point. As the electron's path crosses this point, its group velocity would have experienced a discontinuous jump, which is arguably unphysical since only an infinitesimal impulse was applied in getting it from one side of the discontinuity to the other. (This is only a plausibility argument that will hopefully be convincing enough...)
Getting back to the main argument, the Fermi surface in two dimensions is a curve of constant energy (extending it in the third dimension makes a sheet). This implies that any vector $\mathbf{\delta k}$ that is tangent to the curve must satisfy $\mathbf{\delta k}\cdot\nabla_{\mathbf{k}}E = 0$. In other words, $\nabla_{\mathbf{k}}E$ is normal to the curve at any point. If the Fermi curve were to intersect a Brillouin zone boundary at anything other than a right angle, reflecting the curve across the boundary (by lattice periodicity) would produce a kink at that boundary. One would then find that the Fermi curve has two
different normal vectors when approaching the boundary from different zones. Hence $\nabla_{\mathbf{k}}E$ does not exist at the zone boundary, which violates our initial requirement.
There is one important exception when $\nabla_{\mathbf{k}}E$ is exactly zero at the point where the Fermi curve intersects the zone boundary. Then $\mathbf{\delta k}\cdot\nabla_{\mathbf{k}}E = 0$ is trivially satisfied for arbitrary $\mathbf{\delta k}$, and the Fermi curve is allowed to approach this point from any angle. This of course corresponds to the case where there is a stationary point at the zone boundary. |
The Annals of Mathematical Statistics Ann. Math. Statist. Volume 42, Number 5 (1971), 1671-1680. Limit Theorems for Some Occupancy and Sequential Occupancy Problems Abstract
Consider a situation in which balls are falling into $N$ cells with arbitrary probabilities. A limiting distribution for the number of occupied cells after $n$ falls is obtained, when $n$ and $N \rightarrow \infty$, so that $n^2/N \rightarrow \infty$ and $n/N \rightarrow 0$. This result completes some theorems given by Chistyakov (1964), (1967). Limiting distributions of the number of falls to achieve $a_N + 1$ occupied cells are obtained when $\lim \sup a_N/N < 1$. These theorems generalize theorems given by Baum and Billingsley (1965), and David and Barton (1962), when the balls fall into cells with the same probability for every cell.
Article information
Source: Ann. Math. Statist., Volume 42, Number 5 (1971), 1671-1680.
Dates: First available in Project Euclid: 27 April 2007
Permanent link to this document: https://projecteuclid.org/euclid.aoms/1177693165
Digital Object Identifier: doi:10.1214/aoms/1177693165
Mathematical Reviews number (MathSciNet): MR343347
Zentralblatt MATH identifier: 0231.60022
JSTOR: links.jstor.org
Citation
Holst, Lars. Limit Theorems for Some Occupancy and Sequential Occupancy Problems. Ann. Math. Statist. 42 (1971), no. 5, 1671--1680. doi:10.1214/aoms/1177693165. https://projecteuclid.org/euclid.aoms/1177693165 |
I want to show that $(L^\infty,\|\cdot\|_\infty)$ is a normed vector space. I understand that there are two things to show; firstly, that $L^\infty(X,\mu)$ is a linear space and secondly that $\|\cdot\|_\infty$ defines a norm on this space.
I don't have solutions to check whether what I have done here is correct or not - could somebody verify for me and point out anything I have done wrong, or that could be done better? In particular:
In part (iii) of (b), how is it that taking the factor of $|c|$ out of the $\inf$ doesn't affect the value of $M\in\mathbb R^+$ which bounds $|cf(x)|$?
Similarly, in part (iv) of (b), should there be a "$\le$" sign on the jump from line two to three when taking the $\inf$ of the sum?
(a) $L^\infty(X,\mu)$ is a linear space:
Take $f,g\in L^\infty(X,\mu)$; then $\exists\,M,N\ge 0:|f(x)|\le M\,,\forall x\in X\setminus A$ and $|g(x)|\le N\,,\forall x\in X\setminus B$, where $A,B$ are measurable sets of measure zero.
Consider then $|(f+g)(x)|\le|f(x)|+|g(x)|\le M+N,\,\forall x\in X\setminus(A\cup B)$. So, defining $P:=M+N\ge0$ we have shown that $|(f+g)(x)|\le P$. Thus, $f+g\in L^\infty(X,\mu)$.
Take then $f\in L^\infty(X,\mu)$ and some $c\in\mathbb R$. Then consider $|cf(x)|=|c||f(x)|\le|c|M=:Q$ where, clearly, $Q\ge0$. And so, it follows that $cf\in L^\infty(X,\mu)$.
Hence, $L^\infty(X,\mu)$ is closed under the addition and scalar multiplication of its elements, and so is a linear space.
(b) $(L^\infty,\|\cdot\|_\infty)$ is a normed vector space:
We recall that $\|f\|_\infty=\inf\{M:|f(x)|\le M,\,\forall x\in X\setminus A\}$, where $f\in L^\infty(X,\mu)$.
(i) Suppose $f\in L^\infty(X,\mu)$ and consider $\|f\|_\infty=\inf\{M:|f(x)|\le M,\,\forall x\in X\setminus A\}$. By definition, such an $M$ is greater than or equal to zero, hence $\|f\|_\infty\ge0$, showing that our norm is nonnegative.
(ii) Consider: $\|f\|_\infty=0\iff\inf\{M:|f(x)|\le M,\,\forall x\in X\setminus A\}=0$
$\iff\forall x\in X\setminus A,\,|f(x)|\le0\iff\forall x\in X\setminus A,\,0\le f(x)\le0$ which means that $f\equiv0$ on $X\setminus A$. This establishes nondegeneracy of the norm.
(iii) For $c\in\mathbb R$ consider $\|cf\|_\infty=\inf\{M:|cf(x)|\le M,\,\forall x\in X\setminus A\}=\inf\{M:|c||f(x)|\le M,\,\forall x\in X\setminus A\}$
$=|c|\inf\{M:|f(x)|\le M,\,\forall x\in X\setminus A\}=|c|\|f\|_\infty$, showing the (absolute) homogeneity of our norm.
(iv)Lastly, for $f,g\in L^\infty(X,\mu)$ consider,
$$\|f+g\|_\infty=\inf\{P:|f(x)+g(x)|\le P,\,\forall x\in X\setminus (A\cup B)\}$$
$$\le\inf\{M+N:|f(x)|+|g(x)|\le M+N,\,\forall x\in X\setminus (A\cup B)\}$$
$$=\inf\{M:|f(x)|\le M,\,\forall x\in X\setminus A\}+\inf\{N:|g(x)|\le N,\,\forall x\in X\setminus B\}$$
$$=\|f\|_\infty +\|g\|_\infty$$
So showing the triangle inequality. |
Thrust is the wrong measurement to use for this comparison, as is thrust to weight. What matters is the Specific Impulse $I_{\text{sp}}$, which is a measure of the ability to change momentum per unit of propellant.
The RL10C has a specific impulse of 450 s, while the Dawn engine's is over 3,000 s; in other words, the Dawn engine can do over 6 times more work per unit of propellant, though its lower thrust means it will take longer to do it. But unless you are trying to escape a gravity well, there is no hurry. One source of the difference is the fact that a chemical motor includes its power source in the mass of its propellant through oxidation, while for an ion motor the power comes from a fission reactor, solar panels, etc. Now the weight of the engine itself starts to make a big difference. To use Jack's example, the 200 kg RL10C with 799 kg of fuel and 1 kg of payload would produce a $\Delta v$ of:
$\displaystyle \Delta v=v_{\text{e}}\ln {\frac {m_{0}}{m_{f}}}$
Where
${\displaystyle v_{\text{e}}=I_{\text{sp}}\cdot g_{0}}$
$\displaystyle \Delta v=450 \times 9.8 \times \ln {\frac {(200+799+1)}{(200+1)}} = 7075\ \mathrm{m\,s^{-1}}$
The 8.3Kg Dawn engine and 2.5Kg of propellant with the same payload would get you
$\displaystyle \Delta v=3000 \times 9.8 \times \ln {\frac {(8.3+2.5+1)}{(8.3+1)}} = 6999\ \mathrm{m\,s^{-1}}$
but it would be much cheaper to get 11.8 kg to LEO so that you can accelerate a 1 kg payload to 7,000 m/s than to get 1,000 kg to LEO to accelerate the same payload to the same speed.
Adding more engines does not change the Specific Impulse; it just increases the fuel flow (i.e. it increases the thrust). But since you are now moving the weight of the additional engines, it reduces your final $\Delta v$; as rocket scientists love to say, you get there faster but not as fast (you reach a lower velocity but you reach it sooner). Engines which produce thrust in excess of their own weight can be combined to use that excess thrust to accelerate out of a gravity well; however, ion engines do not have any excess thrust, so combining them has limited benefits.
Doing this with two chemical engines would give: $\displaystyle \Delta v=450 \times 9.8 \times \ln {\frac {(200+200+799+1)}{(200+200+1)}} = 4833\ \mathrm{m\,s^{-1}}$
Two ion engines would give you:
$\displaystyle \Delta v=3000 \times 9.8 \times \ln {\frac {(8.3+8.3+2.5+1)}{(8.3+8.3+1)}} = 3904\ \mathrm{m\,s^{-1}}$ |
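The figures above can be reproduced in a few lines from the rocket equation (a sketch; the function name is mine and $g_0 = 9.8$ matches the numbers used in this answer):

```python
import math

def delta_v(isp, m_engine, m_propellant, m_payload, n_engines=1, g0=9.8):
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf)."""
    m_dry = n_engines * m_engine + m_payload   # mass left when fuel is spent
    return isp * g0 * math.log((m_dry + m_propellant) / m_dry)

dv_rl10c = delta_v(450, 200, 799, 1)                   # about 7075 m/s
dv_dawn = delta_v(3000, 8.3, 2.5, 1)                   # about 7000 m/s
dv_two_rl10c = delta_v(450, 200, 799, 1, n_engines=2)  # about 4834 m/s
```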
Tagged: abelian group
Abelian Group Problems and Solutions.
Problem 616
Suppose that $p$ is a prime number greater than $3$.
Consider the multiplicative group $G=(\Zmod{p})^*$ of order $p-1$.
(a) Prove that the set of squares $S=\{x^2\mid x\in G\}$ is a subgroup of the multiplicative group $G$.
(b) Determine the index $[G : S]$.
(c) Assume that $-1\notin S$. Then prove that for each $a\in G$ we have either $a\in S$ or $-a\in S$.
If a Half of a Group are Elements of Order 2, then the Rest form an Abelian Normal Subgroup of Odd Order
Problem 575
Let $G$ be a finite group of order $2n$.
Suppose that exactly a half of $G$ consists of elements of order $2$ and the rest forms a subgroup. Namely, suppose that $G=S\sqcup H$, where $S$ is the set of all elements of order $2$ in $G$, and $H$ is a subgroup of $G$. The cardinalities of $S$ and $H$ are both $n$.
Then prove that $H$ is an abelian normal subgroup of odd order.
Problem 497
Let $G$ be an abelian group.
Let $a$ and $b$ be elements in $G$ of order $m$ and $n$, respectively. Prove that there exists an element $c$ in $G$ such that the order of $c$ is the least common multiple of $m$ and $n$.
Also determine whether the statement is true if $G$ is a non-abelian group.
Problem 434
Let $R$ be a ring with $1$.
A nonzero $R$-module $M$ is called irreducible if $0$ and $M$ are the only submodules of $M$. (It is also called a simple module.)
(a) Prove that a nonzero $R$-module $M$ is irreducible if and only if $M$ is a cyclic module with any nonzero element as its generator.
(b) Determine all the irreducible $\Z$-modules.
Problem 420
In this post, we study the
Fundamental Theorem of Finitely Generated Abelian Groups, and as an application we solve the following problem.
Problem. Let $G$ be a finite abelian group of order $n$. If $n$ is the product of distinct prime numbers, then prove that $G$ is isomorphic to the cyclic group $Z_n=\Zmod{n}$ of order $n$.
Problem 343
Let $G$ be a finite group and let $N$ be a normal abelian subgroup of $G$.
Let $\Aut(N)$ be the group of automorphisms of $N$.
Suppose that the orders of groups $G/N$ and $\Aut(N)$ are relatively prime.
Then prove that $N$ is contained in the center of $G$. |
Tagged: subspace
Problem 709
Let $S=\{\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3},\mathbf{v}_{4},\mathbf{v}_{5}\}$ where
\[ \mathbf{v}_{1}= \begin{bmatrix} 1 \\ 2 \\ 2 \\ -1 \end{bmatrix} ,\;\mathbf{v}_{2}= \begin{bmatrix} 1 \\ 3 \\ 1 \\ 1 \end{bmatrix} ,\;\mathbf{v}_{3}= \begin{bmatrix} 1 \\ 5 \\ -1 \\ 5 \end{bmatrix} ,\;\mathbf{v}_{4}= \begin{bmatrix} 1 \\ 1 \\ 4 \\ -1 \end{bmatrix} ,\;\mathbf{v}_{5}= \begin{bmatrix} 2 \\ 7 \\ 0 \\ 2 \end{bmatrix} .\] Find a basis for the span $\Span(S)$.
Problem 706
Suppose that a set of vectors $S_1=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is a spanning set of a subspace $V$ in $\R^5$. If $\mathbf{v}_4$ is another vector in $V$, then is the set
\[S_2=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4\}\] still a spanning set for $V$? If so, prove it. Otherwise, give a counterexample.
Problem 663
Let $\R^2$ be the $x$-$y$-plane. Then $\R^2$ is a vector space. A line $\ell \subset \mathbb{R}^2$ with slope $m$ and $y$-intercept $b$ is defined by
\[ \ell = \{ (x, y) \in \mathbb{R}^2 \mid y = mx + b \} .\]
Prove that $\ell$ is a subspace of $\mathbb{R}^2$ if and only if $b = 0$.
Problem 659
Fix the row vector $\mathbf{b} = \begin{bmatrix} -1 & 3 & -1 \end{bmatrix}$, and let $\R^3$ be the vector space of $3 \times 1$ column vectors. Define
\[W = \{ \mathbf{v} \in \R^3 \mid \mathbf{b} \mathbf{v} = 0 \}.\] Prove that $W$ is a vector subspace of $\R^3$.
Problem 658
Let $V$ be the vector space of $n \times n$ matrices with real coefficients, and define
\[ W = \{ \mathbf{v} \in V \mid \mathbf{v} \mathbf{w} = \mathbf{w} \mathbf{v} \mbox{ for all } \mathbf{w} \in V \}.\] The set $W$ is called the center of $V$.
Prove that $W$ is a subspace of $V$.
Problem 612
Let $C[-2\pi, 2\pi]$ be the vector space of all real-valued continuous functions defined on the interval $[-2\pi, 2\pi]$.
Consider the subspace $W=\Span\{\sin^2(x), \cos^2(x)\}$ spanned by the functions $\sin^2(x)$ and $\cos^2(x)$.
(a) Prove that the set $B=\{\sin^2(x), \cos^2(x)\}$ is a basis for $W$.
(b) Prove that the set $\{\sin^2(x)-\cos^2(x), 1\}$ is a basis for $W$.
Problem 611
An $n\times n$ matrix $A$ is called
orthogonal if $A^{\trans}A=I$. Let $V$ be the vector space of all real $2\times 2$ matrices.
Consider the subset
\[W:=\{A\in V \mid \text{$A$ is an orthogonal matrix}\}.\] Prove or disprove that $W$ is a subspace of $V$.
Problem 604
Let
\[A=\begin{bmatrix} 1 & -1 & 0 & 0 \\ 0 &1 & 1 & 1 \\ 1 & -1 & 0 & 0 \\ 0 & 2 & 2 & 2\\ 0 & 0 & 0 & 0 \end{bmatrix}.\] (a) Find a basis for the null space $\calN(A)$. (b) Find a basis of the range $\calR(A)$. (c) Find a basis of the row space for $A$.
(The Ohio State University, Linear Algebra Midterm)
Problem 601
Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers.
Let \[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix} a & b\\ c& -a \end{bmatrix} \text{ for any } a, b, c\in \R \,\right\}.\] (a) Show that $W$ is a subspace of $V$. (b) Find a basis of $W$. (c) Find the dimension of $W$.
(The Ohio State University, Linear Algebra Midterm) |
Let $C=\Omega \times (0,\infty)$. We want to find a solution $v \in H^1(C)$ such that, given $u \in H^{\frac 12}(\Omega)$, $$\int_0^\infty\int_\Omega \nabla v \cdot \nabla \eta + v_y\eta_y = 0\quad\forall \eta \in H^1(C), \quad\eta(x,0) \equiv 0$$ $$v(x,0) = u(x)$$ where e.g. $\eta(x,0) = 0$ means in the sense of trace, so $T\eta = 0$ for the trace operator $T:H^1(C) \to H^{\frac 12}(\Omega)$. This is a weak solution of the problem: $-\Delta_C v = 0$, $Tv=u$, $\partial_\nu v = 0$ on $\partial\Omega \times (0,\infty)$ (zero Neumann BC).
Now, if $u \equiv 1$, then $v \equiv 1$ would be solution however $1 \notin L^2(C)$. If instead we ask for $v \in X(C)$, where $X$ is a space that involves only the derivatives and the trace onto $\Omega$ at $y=0$, then $1$ is a solution in that space, and is unique. The standard way to get solutions in $H^1(C)$ is to impose a mean value zero condition on the data $u$.
My problem is I seem to have a proof that there is a solution $v \in H^1(C)$ with $Tv=u$ for $u\equiv 1$. Here is my "proof":
Let $U \in H^1(C)$ be such that $TU = 1$. Then the difference $d=v-U$ solves $$\int_0^\infty\int_\Omega \nabla d \cdot \nabla \eta + d_y\eta_y = -\int_0^\infty\int_\Omega \nabla U \cdot \nabla \eta + U_y\eta_y \quad\forall \eta \in H^1(C), \quad\eta(x,0) \equiv 0\tag{1}$$ $$d(x,0) = 0.$$ There is a unique $d \in H^1(C)$ solving this because we can define a functional $J:\{d \in H^1(C) \mid Td = 0 \text{ and } \int_\Omega d(x,y)\;\mathrm{d}x = 0 \text{ a.e. $y$}\} \to \mathbb{R}$ such that $$J(d) = \frac{1}{2}\int\int|\nabla d|^2 + d_y^2 + \int\int \nabla U \cdot \nabla d + U_y d_y.$$ This is convex, coercive (by Poincaré, due to the mean-value-zero part of the domain), proper, etc. It is then easy to show that (1) holds (the minimiser solves the related variational problem for all test functions in the domain of $J$, and then we can remove the mean-value condition required on the test function). Now it remains to set $v = d + U \in H^1(C)$, and this solves the original problem.
So where is the fault in my argument??
Note that the domain of $J$ does not require a $d$ with $\int_\Omega d(x,0)\;\mathrm{d}x = 0$, which would rule out the initial data. |
If we are connected to the internet via one single ISP, it is most likely that we will have a default route set up to one of their access routers. However, if we want to multi-home, either for reliability or load-balancing reasons, the most straightforward way is to set up BGP peering with our upstream providers. For example, if we connect to three upstream ISPs – A, B and C, there is no point in sending packets destined to one of B’s (or its clients’) IP addresses, to A or C. In order to make informed decisions on where to send packets, we will usually need to receive a full BGP feed so as to see the “whole internet” in terms of routing tables.
In the ideal world, we would see all the IP prefixes that are advertised in the internet, via all of our peers, leaving us to choose the best path, based on the AS path length and local preference. This is not always the case, though. Some routes will be missing in BGP feeds received from our upstream providers. That is – we will have some routes in our routing tables that are advertised only by some of our peers. These can be our other peers’ internal networks (no-advertise/no-export), in which case we don’t really care. If these routes, however, represent actual parts of the internet that should be reachable, this can be a cause for concern.
For example, if we only have two upstream providers and only one of them advertises a route to a certain network and the connection to that ISP is temporarily lost, that network will be unreachable to us.
The problem
If there are routes missing that should not be, there is little that can be done other than escalating the issue to the problematic ISP. In order to do so, we must first identify the routes that are missing. At the time of writing this article, the global IPv4 routing table has more than 400,000 entries, and the number is expected to grow in the future. Therefore, we need a way to work with large collections of routes efficiently. The other problem is that due to different route summarization rules of our peers, routes may be split into different parts by different peers, so we cannot run a simple diff through all of our routing tables.
We would like to have a tool to perform a binary difference operation of two sets of IP address ranges (subnets) in an efficient manner. Ideally, the tool would also be able to perform other set operations, such as union or intersection, so we can merge several ISP’s advertised routes prior to comparing it with a single ISP’s route table and so on.
Analysis of the problem
While making a tool like this, we would like to avoid pitfalls, such as using expensive binary trees, even though these may seem at first to be the most versatile and offer most flexibility while working with datasets. Since the IP address space is essentially a binary tree in itself, we can be tempted to model our internal representation of the route table by it.
A quick calculation tells us that even if we limit ourselves to networks with the maximum prefix length of /24, we would need $$\sum _{n=0}^{24} 2^n = 2^{25}-1 = 33554431$$ nodes or 384 MiB with 12 bytes per node (2×32-bit pointers, 1×32-bit address). This is workable with IPv4, but completely infeasible when dealing with IPv6 routes. We would like to allow for at least the networks with a maximum prefix of /48, yielding: $$\sum _{n=0}^{48} 2^n = 2^{49}-1 = 562949953421311$$ nodes or 16 PiB of data with 32 bytes per node (2×64-bit pointers, 1×128-bit address). Clearly impossible to hold in RAM by today's standards.
Fortunately, constructing a large binary tree can be completely avoided when performing this task and can in fact be done in \(O(n)\) when the input routing tables are already sorted and usually \(O(n \log n)\) when they are not, depending on the sorting algorithm.
Algorithm
The algorithm used for comparison is very simple and as stated above runs in linear time. Suppose we compare two routing tables, A and B.
1. Iterate through all the entries in both routing tables and mark the start and the end of each subnet (the all-zeros and all-ones addresses of the subnet). Insert both of them into a single array, while noting their type (startA, startB, endA, endB).
2. Sort the above array, ordered only by IP number.
3. Iterate through the array and keep two counters, say \(c_A\) and \(c_B\). Each time a startX marker is encountered, increase \(c_X\); each time endX is encountered, decrease it. If we are in a region where \(c_A > 0 \wedge c_B = 0\), we have found an IP range that is in A but not in B.
The algorithm works well even with overlapping subnets, since the counter is simply going to be larger than one in that case. But even though the algorithm is simple, there are a few things to consider:
If we want to aggregate the missing subnets for clarity, we should perform the action (print out the subnet, …) when the counter drops back to zero. If we do this, we should be careful with neighbouring subnets, since the counter may drop to zero on the boundary but rise again immediately afterwards. We should also be careful when dealing with single-address subnets (/32 or /128), since the start and the end point are going to be the same.
The algorithm works for other binary set operations as well. If we want to produce a union of two sets (\(A \cup B\)), we should mark the regions where \(c_A > 0 \vee c_B > 0\) and for intersections (\(A \cap B\)), we use \(c_A > 0 \wedge c_B > 0\) comparison kernel.
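A compact sketch of the sweep in Python (the function name and representation are mine: ranges are inclusive pairs of integer IP numbers, converted internally to half-open markers so that single-address and adjacent subnets are handled correctly). The kernel below computes the difference $A \setminus B$; swapping the $c_A > 0 \wedge c_B = 0$ test for the union or intersection kernels above changes the operation.

```python
def subnet_diff(a_ranges, b_ranges):
    """Sweep-line set difference (A minus B) over inclusive integer IP ranges."""
    events = []
    for which, ranges in (("A", a_ranges), ("B", b_ranges)):
        for lo, hi in ranges:
            events.append((lo, 0, which, +1))      # start marker
            events.append((hi + 1, 1, which, -1))  # end marker (half-open)
    events.sort()  # starts sort before ends at the same IP, merging neighbours
    ca = cb = 0
    out, start = [], None
    for ip, _, which, delta in events:
        inside_before = ca > 0 and cb == 0
        if which == "A":
            ca += delta
        else:
            cb += delta
        inside_after = ca > 0 and cb == 0
        if not inside_before and inside_after:
            start = ip
        elif inside_before and not inside_after:
            if ip > start:                 # skip zero-width regions
                out.append((start, ip - 1))
            start = None
    return out
```

For example, `subnet_diff([(0, 10)], [(3, 5)])` returns the two leftover pieces `[(0, 2), (6, 10)]`.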
Summing-up
I implemented the above algorithm, based on this Stack Overflow answer, in a tool that can be used to analyse routing tables, called BgpCompare. It reads IP subnets from two input files and then outputs the result of a set operation to standard output. It uses regular expressions to capture IP addresses and prefix lengths, so it can work with a wide variety of different routing platforms’ routing table dumps, just by providing a new regular expression.
BgpCompare is free and open-source software, licensed under LGPL licence. It is written in C++11 and comes with a handy header-only library for IPv4/IPv6 address manipulation (parsing, textual representation, arithmetics, …).
The latest version, with full source code along with Windows and Linux binaries, is available here
(1.02 MiB). |
Update: see below for an update on the incorrectness of this join operation
Here is a very rough sketch of a possible solution:
I think I may have a solution to this problem using a type of randomly-balanced B+-tree. Like treaps, these trees have a unique representation. Unlike treaps, they store some keys multiple times. (It might be possible to fix that using a trick from Bent et al.'s "Biased Search Trees" of storing each key only in the highest (that is, closest-to-the-root) level in which it appears.)
A tree for an ordered set of unique values is created by first associating each value with a stream of bits, similar to the way each value in a treap is associated with a priority. Each node in the tree contains both a key and a bit stream. Non-leaf nodes contain, in addition, a natural number indicating the height of the tree rooted at that node. Internal nodes may have any non-zero number of children. Like B+-trees, every non-self-intersecting path from the root to a leaf is the same length.
Every internal node $v$ contains (like in B+-trees) the largest key $k$ of its descendant leaves. Each one also contains a natural number $i$ indicating the height of the tree rooted at $v$, and the stream of bits associated with $k$ from the $i+1$th bit onward. If every key in the tree rooted at $v$ has the same first bit in its bit stream, every child of $v$ is a leaf and $i$ is $1$. Otherwise, the children of $v$ are internal nodes all of which have the same $i$th bit in the bit stream associated with their key.
To make a tree from a sorted list of keys with associated bit streams, first collect the keys into contiguous groups based on the first bit in their streams. For each of these groups, create a parent with the key and bit stream of the largest key in the group, but eliding the first bit of the stream. Now do the same grouping procedure on the new parents to create grandparents. Continue until only one node remains; this is the root of the tree.
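A sketch of this bottom-up construction (the helper names are mine; nodes are (key, bits) pairs sorted by key, with the stream as a '0'/'1' string):

```python
from itertools import groupby

def build_parents(nodes):
    """One grouping pass: cluster adjacent (key, bits) nodes by the first bit
    of their stream; each parent takes the largest key of its group and that
    key's stream with the first bit elided."""
    parents = []
    for _bit, group in groupby(nodes, key=lambda node: node[1][0]):
        key, bits = list(group)[-1]    # keys are sorted, so the last is largest
        parents.append((key, bits[1:]))
    return parents

def tree_height(leaves):
    """Number of grouping passes until a single root remains."""
    level, height = leaves, 0
    while len(level) > 1:
        level, height = build_parents(level), height + 1
    return height
```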
The following list of keys and (beginnings of) bit streams is represented by the tree below it. In the bit stream prefixes, a '.' means any bit. That is, any bit stream for the key A with a 0 in the first place will produce the same tree as any other, assuming no other key's bit stream is different.
A 0...
B 00..
C 10..
D 0...
E 0011
F 1...
G 110.
H 0001
____H____
/ \
E H
| / \
__E__ G H
/ | \ | |
B C E G H
/ \ | / \ / \ |
A B C D E F G H
Every child of a particular internal node has the same bit in the first place of its bit stream. This is called the "color" of the parent: 0 is red, 1 is green. The child has a "flavor" depending on the first bit of its bit stream: 0 is cherry, 1 is mint. Leaves have flavors, but no color. By definition, a cherry node can't have a green parent, and a mint node can't have a red parent.
Assuming the bits in the bit streams are IID uniform, the probability that $n$ nodes produce exactly $i$ parents is $2^{1-n}\binom{n-1}{i-1}$, and the expected number of parents is $(n+1)/2$. For all $n \geq 2$, this is $\leq \frac{3}{4}n$, so the expected tree height is $O(\lg n)$.
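This claim can be checked exactly by enumerating all first-bit assignments; each maximal run of equal bits becomes one parent. (A small verification sketch of mine, not from the post.)

```python
from itertools import product
from math import comb

def parent_count_pmf(n):
    """Exact distribution of the number of parents of n nodes: enumerate
    all 2^n first-bit assignments and count runs of equal bits (each run
    becomes one parent)."""
    counts = {}
    for bits in product((0, 1), repeat=n):
        runs = 1 + sum(bits[i] != bits[i - 1] for i in range(1, n))
        counts[runs] = counts.get(runs, 0) + 1
    return {i: c / 2 ** n for i, c in counts.items()}
```

For any small $n$ the enumerated probabilities match $2^{1-n}\binom{n-1}{i-1}$ and the mean matches $(n+1)/2$.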
To join two trees of equal height, first check to see if their roots are the same color. If so, sever from the left root its right-most child and from the right root its left-most child, then recursively join these two trees. The result will be a tree of the same height or one taller since the trees have the same flavor (see below). If the result of recursively joining the two trees has the same height as the two severed children, make it the middle child of a root with the remaining children of the left root before it and the remaining children of the right root after it. If it is taller by 1, make its children the middle children of a root with the remaining children of the left root before it and the remaining children of the right root after it. If the roots have different colors, check to see if they have the same flavor. If they do, give them a new parent with the key and bit stream of the right root, eliding its first bit. If they do not, give each root a new parent with the key and bit stream of the old root (eliding each first bit), then recursively join those trees.
There are two recursive calls in this algorithm. The first is when the roots have the same color, the second is when the roots have different colors and different flavors. The roots have the same color with probability $1/2$. The recursive call in this case always sees roots with the same flavor, so the second type of recursion never occurs after the first. However, the first can occur repeatedly, but each time with probability $1/2$, so the expected running time is still $O(1)$. The second recursive call happens with probability $1/4$, and subsequent recursive calls are always on trees with different colors, so the same analysis applies.
To join two trees of unequal height, first trace down the left spine of the right tree, assuming the right tree is taller. (The other case is symmetric.) When two trees of equal height are reached, perform the join operation for two trees of equal height, modified as follows: if the result has the same height, replace the tree that was a child with the result of the join. If the result is taller, join the parent of the tree on the right to the root of the other tree, after it has been made taller by one by adding a parent for the root. The tree will be the same height with probability $1/2$, so this terminates in $O(1)$ expected time.
Update: Thanks to QuickCheck, I discovered that the above join method does not produce the same trees as the uniquely represented trees above. The problem is that parent choices near the leaves may change depending on the available siblings. To fix up those changes, join would have to traverse all the way to the leaves, which is not $O(1)$. Here is the example QuickCheck found:
a 01110
b 110..
c 10...
d 00000
The tree made by [a,b] has height 2, the tree made by [c,d] has height 2, and joinEqual (tree [a,b]) (tree [c,d]) has height 3. However, the tree made by [a,b,c,d] has height 5.
Here is the code I used to find this error.
Preprints (rote Reihe) des Fachbereich Mathematik. Year of publication: 1996 (22 items, all with fulltext).
284
A polynomial function \(f : L \to L\) of a lattice \(\mathcal{L} = (L; \land, \lor)\) is generated by the identity function \(id(x)=x\) and the constant functions \(c_a(x) = a\) (for every \(x \in L\)), \(a \in L\), by applying the operations \(\land, \lor\) finitely often. Every polynomial function in one or several variables is a monotone function of \(\mathcal{L}\). If every monotone function of \(\mathcal{L}\) is a polynomial function, then \(\mathcal{L}\) is called order-polynomially complete. In this paper we give a new characterization of finite order-polynomially complete lattices. We consider doubly irreducible monotone functions and point out their relation to tolerances, especially to central relations. We introduce chain-compatible lattices and show that they have a non-trivial congruence if they contain a finite interval and an infinite chain. The consequences are two new results. A modular lattice \(\mathcal{L}\) with a finite interval is order-polynomially complete if and only if \(\mathcal{L}\) is a finite projective geometry. If \(\mathcal{L}\) is a simple modular lattice of infinite length, then every nontrivial interval is of infinite length and has the same cardinality as any other nontrivial interval of \(\mathcal{L}\). In the last sections we show the descriptive power of polynomial functions of lattices and present several applications in geometry.
285
On derived varieties (1996)
Derived varieties play an essential role in the theory of hyperidentities. In [11] we have shown that derivation diagrams are a useful tool in the analysis of derived algebras and varieties. In this paper this tool is developed further in order to use it for algebraic constructions of derived algebras. Especially the operators \(S\) of subalgebras, \(H\) of homomorphic images and \(P\) of direct products are studied. Derived groupoids from the groupoid \(Nor(x,y) = x'\wedge y'\) and from abelian groups are considered. The latter class serves as an example for fluid algebras and varieties. A fluid variety \(V\) has no derived variety as a subvariety and is introduced as a counterpart to solid varieties. Finally we use a property of the commutator of derived algebras in order to show that solvability and nilpotency are preserved under derivation.
279
It is shown that Tikhonov regularization for the ill-posed operator equation \(Kx = y\) using a possibly unbounded regularizing operator \(L\) yields an order-optimal algorithm with respect to a certain stability set when the regularization parameter is chosen according to Morozov's discrepancy principle. A more realistic error estimate is derived when the operators \(K\) and \(L\) are related to a Hilbert scale in a suitable manner. The result includes known error estimates for ordinary Tikhonov regularization and also the estimates available under the Hilbert scale approach.
293
Tangent measure distributions were introduced by Bandt and Graf as a means to describe the local geometry of self-similar sets generated by iteration of contractive similitudes. In this paper we study the tangent measure distributions of hyperbolic Cantor sets generated by contractive mappings, which are not similitudes. We show that the tangent measure distributions of these sets equipped with either Hausdorff or Gibbs measure are unique almost everywhere and give an explicit formula describing them as probability distributions on the set of limit models of Bedford and Fisher.
276
Let \(a_1,\dots,a_n\) be independent random points in \(\mathbb{R}^d\), spherically symmetrically but not necessarily identically distributed. Let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_n\), and for any \(k\)-dimensional subspace \(L\subseteq \mathbb{R}^d\) let \(Vol_L(X) :=\lambda_k(L\cap X)\) be the volume of \(X\cap L\) with respect to the \(k\)-dimensional Lebesgue measure \(\lambda_k\), \(k=1,\dots,d\). Furthermore, let \(F^{(i)}(t):= \mathbf{Pr}(\Vert a_i \Vert_2\leq t)\), \(t \in \mathbb{R}^+_0\), be the radial distribution function of \(a_i\). We prove that the expectation functional \(\Phi_L(F^{(1)}, F^{(2)},\dots, F^{(n)}) := E(Vol_L(X))\) is strictly decreasing in each argument, i.e. if \(F^{(i)}(t) \le G^{(i)}(t)\), \(t \in \mathbb{R}^+_0\), but \(F^{(i)} \not\equiv G^{(i)}\), we show \(\Phi(\dots, F^{(i)}, \dots) > \Phi(\dots,G^{(i)},\dots)\). The proof is done in the more general framework of continuous and \(f\)-additive polytope functionals.
282
Let \(a_1,\dots,a_m\) be independent random points in \(\mathbb{R}^n\) that are identically and spherically symmetrically distributed. Moreover, let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_m\) and let \(L_k\) be an arbitrary \(k\)-dimensional subspace of \(\mathbb{R}^n\) with \(2\le k\le n-1\). Let \(X_k\) be the orthogonal projection image of \(X\) in \(L_k\). We call those vertices of \(X\) whose projection images in \(L_k\) are vertices of \(X_k\) as well shadow vertices of \(X\) with respect to the subspace \(L_k\). We derive a distribution-independent sharp upper bound for the expected number of shadow vertices of \(X\) in \(L_k\).
275
277
A convergence rate is established for nonstationary iterated Tikhonov regularization, applied to ill-posed problems involving closed, densely defined linear operators, under general conditions on the iteration parameters. It is also shown that an order-optimal accuracy is attained when a certain a posteriori stopping rule is used to determine the iteration number.
274
This paper investigates the convergence of the Lanczos method for computing the smallest eigenpair of a selfadjoint elliptic differential operator via inverse iteration (without shifts). Superlinear convergence rates are established, and their sharpness is investigated for a simple model problem. These results are illustrated numerically for a more difficult problem.
For a language $L$ over the finite alphabet $\Sigma$, let $L_n$ denote the set of words in $L$ of length $n$. The word $u$ is a
subword of $w$ if $u$ can be obtained from $w$ by deleting letters. The language $L$ is subword-closed if whenever $w\in L$ and $u$ is a subword of $w$ then $u\in L$. It can be shown (see below) that for all subword-closed languages $L$, $$\lim_{n\to\infty} \sqrt[n]{|L_n|}$$ exists and is an integer. Does anyone know of a reference for this fact? (I have stated it with proof in one of my papers, but I am trying to find the "correct" reference for it now.)
Here is the proof I know, thanks to Michael Albert. First, if $L$ is subword-closed then there are only finitely many minimal (in the subword ordering) words not in $L$ by Higman's Theorem (words over a finite alphabet are well-quasi-ordered by the subword order). This fact implies that all subword-closed languages are regular.
Next we claim that every subword-closed language $L\subseteq\Sigma^\ast$ can be expressed as a finite union of regular expressions of the form $\ell_1\Sigma_1^\ast\cdots\ell_k\Sigma_k^\ast\ell_{k+1}$ for letters $\ell_i\in\Sigma$ and subsets $\Sigma_i\subseteq\Sigma$. This follows by induction on the regular expression defining $L$. The base cases where $L$ is empty or a single letter are trivial. If the regular expression defining $L$ is a union or a concatenation then the claim follows inductively. The only other case is when this regular expression is a star, $L=E^\ast$. In this case though, because $L$ is subword-closed, we see that $L=\Pi^\ast$ where $\Pi\subseteq\Sigma$ is the set of all letters occurring in $E$.
With this claim established, it follows that $\lim\sqrt[n]{|L_n|}$ is equal to the size of the largest set $\Sigma_i$ occurring in such an expression for $L$.
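To see the integrality concretely, here is a toy example of my own (not from the question): over $\Sigma=\{a,b,c\}$, the language of words avoiding $abc$ as a subword is subword-closed; it contains $\{a,b\}^\ast$ but no block of such an expression can use the full alphabet, so the limit is $2$.

```python
from itertools import product

def has_subword(w, u):
    """True if u can be obtained from w by deleting letters."""
    it = iter(w)
    return all(ch in it for ch in u)

def count_avoiding(n, alphabet="abc", forbidden="abc"):
    """|L_n| for L = words over `alphabet` avoiding `forbidden` as a subword
    (brute force; fine for small n)."""
    return sum(1 for w in product(alphabet, repeat=n)
               if not has_subword(w, forbidden))
```

The counts grow like a polynomial times $2^n$, so the $n$-th root tends to $2$ even though the ratios start well above it.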
And yet another question to discuss the assumptions in PRIIPs. It is remarkable that in these legal documents a Cornish-Fisher expansion including skewness and kurtosis is used.
Looking at the very recent version of the document we find on page 27 the following formula for the moderate scenario (Which is, if I read it correctly, supposed to be the 50% quantile):
$$ \exp(M_1 \cdot N - \sigma \mu_1/6 - 0.5 \sigma^2 N ), $$ where $N$ is the number of days (more details are not necessary here), $M_1$ is the first moment of the log returns observed, $\sigma$ is the standard deviation and $\mu_1$ is the skewness measured.
I have one question: I see that $- \sigma \mu_1/6$ enters if we put in $0$ for the "z-value". Thus there is something that remains from skewness.
But is it ok to have the average return $M_1$ if we model in a risk-neutral world?
If $M_1$ is the average of log-returns then we have $M_1 = \tilde{\mu} + \sigma^2/2$ where $\tilde{\mu}$ is the "true" mean and $\sigma^2/2$ is the convexity that we have in the log-normal case. This is corrected in the last part of the formula by the term $- 0.5 \sigma^2 N$. This formula is different from the others where there is usually just an expected return of $-\sigma^2/2 N$ which makes the expected growth zero (see e.g. page 28 point 11).
In short: is it really consistent to have the $M_1$ term above? Any comments are really appreciated!
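For concreteness, the quoted moderate-scenario formula can be evaluated directly (my own sketch with illustrative inputs, not from the regulation): note that with zero skewness and $M_1 = \sigma^2/2$ per day, the drift and convexity terms cancel exactly and the factor is 1, which is the behaviour the question is probing.

```python
import math

def moderate_scenario(m1, sigma, mu1, n_days):
    """PRIIPs-style moderate (50%) scenario factor:
        exp(M1*N - sigma*mu1/6 - 0.5*sigma^2*N)
    where m1 is the mean daily log return, sigma the daily standard
    deviation, and mu1 the measured skewness of the log returns."""
    return math.exp(m1 * n_days - sigma * mu1 / 6 - 0.5 * sigma ** 2 * n_days)
```

Positive skewness lowers the factor through the $-\sigma\mu_1/6$ term, independently of the horizon $N$.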
stat946w18/Implicit Causal Models for Genome-wide Association Studies
Introduction and Motivation
There is currently much progress in probabilistic models which could lead to the development of rich generative models. The models have been applied with neural networks, implicit densities, and with scalable algorithms to very large data for their Bayesian inference. However, most of the models are focused on capturing statistical relationships rather than causal relationships. Causal relationships are relationships where one event is a result of another event, i.e. a cause and effect. Causal models give us a sense of how manipulating the generative process could change the final results.
Genome-wide association studies (GWAS) are examples of causal relationships. A genome is the complete set of DNA in an organism and contains information about the organism's attributes. Specifically, GWAS is about figuring out how genetic factors cause disease among humans. Here the genetic factors we are referring to are single nucleotide polymorphisms (SNPs), and getting a particular disease is treated as a trait, i.e., the outcome. In order to understand the reason for developing a disease and to cure it, the causation between SNPs and diseases is investigated: first, predict which one or more SNPs cause the disease; second, target the selected SNPs to cure the disease.
The figure below depicts an example Manhattan plot for a GWAS. Each dot represents an SNP. The x-axis is the chromosome location, and the y-axis is the negative log of the association p-value between the SNP and the disease, so points with the largest values represent strongly associated risk loci.
This paper focuses on two challenges to combining modern probabilistic models and causality. The first is how to build rich causal models that meet the specific needs of GWAS. In general, probabilistic causal models involve a function [math]f[/math] and a noise [math]n[/math]. For working simplicity, we usually assume [math]f[/math] is a linear model with Gaussian noise. However, problems like GWAS require models with nonlinear, learnable interactions among the inputs and the noise.
The second challenge is how to address latent population-based confounders. Latent confounders are problematic when we apply causal models, since we can neither observe them nor know their underlying structure. For example, in GWAS, both latent population structure, i.e., subgroups in the population with ancestry differences, and relatedness among sample individuals produce spurious correlations of SNPs with the trait of interest. Existing methods cannot easily accommodate such complex latent structure.
For the first challenge, the authors develop implicit causal models, a class of causal models that leverages neural architectures with an implicit density. With GWAS, implicit causal models generalize previous methods to capture important nonlinearities, such as gene-gene and gene-population interaction. Building on this, for the second challenge, they describe an implicit causal model that adjusts for population-confounders by sharing strength across examples (genes).
There has been an increasing number of works on causal models which focus on causal discovery and typically have strong assumptions such as Gaussian processes on noise variable or nonlinearities for the main function.
Implicit Causal Models
Implicit causal models are an extension of probabilistic causal models. Probabilistic causal models will be introduced first.
Probabilistic Causal Models
Probabilistic causal models have two parts: deterministic functions of noise and other variables. Consider background noise [math]\epsilon[/math], representing unknown background quantities which are jointly independent and global variable [math]\beta[/math], some function of this noise, where
Each [math]\beta[/math] and [math]x[/math] is a function of noise; [math]y[/math] is a function of noise and [math]x[/math],
The target is the causal mechanism [math]f_y[/math] so that the causal effect [math]p(y|do(X=x),\beta)[/math] can be calculated. [math]do(X=x)[/math] means that we specify a value of [math]X[/math] under the fixed structure [math]\beta[/math]. Following prior work, it is assumed that [math]p(y|do(x),\beta) = p(y|x, \beta)[/math].
An example of probabilistic causal models is additive noise model.
[math]f(.)[/math] is usually a linear function or spline functions for nonlinearities. [math]\epsilon[/math] is assumed to be standard normal, as well as [math]y[/math]. Thus the posterior [math]p(\theta | x, y, \beta)[/math] can be represented as
where [math]p(\theta)[/math] is the prior which is known. Then, variational inference or MCMC can be applied to calculate the posterior distribution.
Implicit Causal Models
The difference between implicit causal models and probabilistic causal models is the noise variable. Instead of using an additive noise term, implicit causal models directly take noise [math]\epsilon[/math] as input and outputs [math]x[/math] given parameter [math]\theta[/math].
[math] x=g(\epsilon \mid \theta), \quad \epsilon \sim s(\cdot) [/math]
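A minimal numpy sketch of such an implicit generator (the architecture, dimensions, and parameter names are my illustrative assumptions, not the paper's): noise is drawn from a simple source distribution and pushed through a network, and no density for [math]x[/math] is ever written down.

```python
import numpy as np

def implicit_generator(eps, theta):
    """x = g(eps | theta): push noise through a small two-layer network.
    Samples of x are produced directly; the density of x stays implicit."""
    w1, b1, w2, b2 = theta
    h = np.maximum(0.0, eps @ w1 + b1)   # ReLU hidden layer
    return h @ w2 + b2

rng = np.random.default_rng(0)
d_noise, d_hidden, d_out = 4, 16, 1
theta = (rng.standard_normal((d_noise, d_hidden)), np.zeros(d_hidden),
         rng.standard_normal((d_hidden, d_out)), np.zeros(d_out))
eps = rng.standard_normal((100, d_noise))   # eps ~ s(.) = N(0, I)
x = implicit_generator(eps, theta)           # 100 samples of x
```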
The causal diagram has changed to:
They used a fully connected neural network with a fair number of hidden units to approximate each causal mechanism. Below is the formal description:
Implicit Causal Models with Latent Confounders
Previously, they assumed the global structure is observed. Next, the unobserved scenario is being considered.
Causal Inference with a Latent Confounder
Similar to before, the interest is the causal effect [math]p(y|do(x_m), x_{-m})[/math]. Here, the SNPs other than [math]x_m[/math] are also under consideration. However, it is confounded by the unobserved confounder [math]z_n[/math]. As a result, the standard inference method cannot be used in this case.
The paper proposes a new method which includes the latent confounders. For each subject [math]n=1,…,N[/math] and each SNP [math]m=1,…,M[/math],
The mechanism for latent confounder [math]z_n[/math] is assumed to be known. SNPs depend on the confounders and the trait depends on all the SNPs and the confounders as well.
The posterior of [math]\theta[/math] needs to be calculated in order to estimate the mechanism [math]g_y[/math] as well as the causal effect [math]p(y|do(x_m), x_{-m})[/math], so that it can be explained how changes to each SNP [math]X_m[/math] cause changes to the trait [math]Y[/math].
Note that the latent structure [math]p(z|x, y)[/math] is assumed known.
In general, causal inference with latent confounders can be dangerous: it uses the data twice, and thus it may bias the estimates of each arrow [math]X_m → Y[/math]. Why is this justified? This is answered below:
Proposition 1. Assume the causal graph of Figure 2 (left) is correct and that the true distribution resides in some configuration of the parameters of the causal model (Figure 2 (right)). Then the posterior [math]p(θ | x, y)[/math] provides a consistent estimator of the causal mechanism [math]f_y[/math].
Proposition 1 rigorizes previous methods in the framework of probabilistic causal models. The intuition is that as more SNPs arrive (“M → ∞, N fixed”), the posterior concentrates at the true confounders [math]z_n[/math], and thus we can estimate the causal mechanism given each data point’s confounder [math]z_n[/math]. As more data points arrive (“N → ∞, M fixed”), we can estimate the causal mechanism given any confounder [math]z_n[/math] as there is an infinity of them.
Implicit Causal Model with a Latent Confounder
This section is the algorithm and functions to implementing an implicit causal model for GWAS.
Generative Process of Confounders [math]z_n[/math].
The distribution of confounders is set as standard normal, [math]z_n \in R^K[/math], where [math]K[/math] is the dimension of [math]z_n[/math]; [math]K[/math] should make the latent space as close as possible to the true population structure.
Generative Process of SNPs [math]x_{nm}[/math].
Each SNP is coded as the number of minor alleles, taking a value in {0, 1, 2}.
The authors defined a [math]Binomial(2,\pi_{nm})[/math] distribution on [math]x_{nm}[/math]. And used logistic factor analysis to design the SNP matrix.
A SNP matrix looks like this:
Since logistic factor analysis makes strong assumptions, this paper suggests using a neural network to relax these assumptions,
This renders the output a full [math]N \times M[/math] matrix due to the variables [math]w_m[/math], which act as principal components in PCA. Here, [math]\phi[/math] has a standard normal prior distribution. The weights [math]w[/math] and biases [math]\phi[/math] are shared over the [math]m[/math] SNPs and [math]n[/math] individuals, which makes it possible to learn nonlinear interactions between [math]z_n[/math] and [math]w_m[/math].
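A hedged sketch of this generative step (the single hidden layer, the dimensions, and all names are my assumptions for illustration): every pair [math](z_n, w_m)[/math] is fed through one shared network to produce [math]\pi_{nm}[/math], and each SNP count is drawn as Binomial(2, [math]\pi_{nm}[/math]).

```python
import numpy as np

def snp_probs(z, w, weights):
    """pi[n, m] = sigmoid(NN([z_n, w_m])), with network parameters shared
    across all individuals n and SNPs m."""
    W1, b1, W2, b2 = weights
    N, M = z.shape[0], w.shape[0]
    # Concatenate every (z_n, w_m) pair: shape (N, M, K_z + K_w)
    pairs = np.concatenate(
        [np.repeat(z[:, None, :], M, axis=1),
         np.repeat(w[None, :, :], N, axis=0)], axis=-1)
    h = np.maximum(0.0, pairs @ W1 + b1)     # ReLU hidden layer
    logits = (h @ W2 + b2)[..., 0]
    return 1.0 / (1.0 + np.exp(-logits))

rng = np.random.default_rng(1)
N, M, K = 5, 7, 3
z = rng.standard_normal((N, K))              # confounders z_n
w = rng.standard_normal((M, K))              # per-SNP variables w_m
weights = (rng.standard_normal((2 * K, 8)), np.zeros(8),
           rng.standard_normal((8, 1)), np.zeros(1))
pi = snp_probs(z, w, weights)
x = rng.binomial(2, pi)                      # SNP counts x_nm in {0, 1, 2}
```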
Generative Process of Traits [math]y_n[/math].
Previously, each trait is modeled by a linear regression,
This also has very strong assumptions on SNPs, interactions, and additive noise. It can also be replaced by a neural network which only outputs a scalar,
Likelihood-free Variational Inference
Calculating the posterior of [math]\theta[/math] is the key to applying the implicit causal model with latent confounders.
can be reduced to
However, with implicit models, integrating over a nonlinear function is intractable. The authors applied likelihood-free variational inference (LFVI). LFVI proposes a family of distributions over the latent variables. Here the variables [math]w_m[/math] and [math]z_n[/math] are all assumed to be Normal,
For LFVI applied to GWAS, an algorithm similar to the EM algorithm has been used:
Empirical Study
The authors performed simulation on 100,000 SNPs, 940 to 5,000 individuals, and across 100 replications of 11 settings. Four methods were compared:
implicit causal model (ICM); PCA with linear regression (PCA); a linear mixed model (LMM); logistic factor analysis with inverse regression (GCAT).
The feedforward neural networks for traits and SNPs are fully connected with two hidden layers using ReLU activation function, and batch normalization.
Simulation Study
Based on real genomic data, a true model is applied to generate the SNPs and traits for each configuration. There are four datasets used in this simulation study:
HapMap [Balding-Nichols model]
1000 Genomes Project (TGP) [PCA]
Human Genome Diversity Project (HGDP) [PCA]
HGDP [Pritchard-Stephens-Donnelly model]
A latent spatial position of individuals for population structure [spatial]
The table shows the prediction accuracy, calculated as the number of true positives divided by the number of true positives plus false positives. True positives are the proportion of positives correctly identified as such (e.g. the percentage of SNPs correctly identified as having a causal relation with the trait). In contrast, a false positive states that a SNP has a causal relation with the trait when it does not. The closer the rate is to 1, the better the model, since false positives are wrong predictions.
The results presented above show that the implicit causal model has the best performance among the four models in every situation. In particular, other models tend to do poorly on PSD and Spatial when [math]a[/math] is small, but the ICM still achieves a significantly higher rate. The only method comparable to ICM is GCAT, when applied to simpler configurations.
Real-data Analysis
They also applied ICM to GWAS of the Northern Finland Birth Cohort, which measures 10 metabolic traits and contains 324,160 SNPs and 5,027 individuals. The data came from the database of Genotypes and Phenotypes (dbGaP) and used the same preprocessing as Song et al. Ten implicit causal models were fitted, one for each trait. For each of the 10 models the dimension of the confounders was set to six, the same as used in the paper by Song et al. The SNP network used 512 hidden units in both layers and the trait network used 32 and 256; see Song et al. for comparable models in Table 2.
The numbers in the above table are the number of significant loci for each of the 10 traits. The number for other methods, such as GCAT, LMM, PCA, and "uncorrected" (association tests without accounting for hidden relatedness of study samples) are obtained from other papers. By comparison, the ICM reached the level of the best previous model for each trait.
Conclusion
This paper introduced implicit causal models in order to account for nonlinear complex causal relationships, and applied the method to GWAS. It can not only capture important interactions between genes within an individual and among population level, but also can adjust for latent confounders by taking account of the latent variables into the model.
In the simulation study, the authors showed that the implicit causal model can beat other methods by 15-45.3% on a variety of datasets with varied parameters.
The authors also believed this GWAS application is only the start of the usage of implicit causal models. The authors suggest that it might also be successfully used in the design of dynamic theories in high-energy physics or for modeling discrete choices in economics.
Critique
This paper is an interesting and novel work. Its main contribution is to connect statistical genetics with machine learning methodology. The method is technically sound and does indeed generalize techniques currently used in statistical genetics. While the authors focus on GWAS applications in the paper, they also believe implicit causal models have significant potential in other sciences: for example, to design new dynamical theories in high-energy physics, and to accurately model structural equations of discrete choices in economics.
The neural network used in this paper is a very simple feed-forward 2 hidden-layer neural network, but the idea of where to use the neural network is crucial and might be significant in GWAS.
It has limitations as well. The empirical example in this paper is too easy, and far away from the realistic situation. Despite the simulation study showing some competing results, the Northern Finland Birth Cohort Data application did not demonstrate the advantage of using implicit causal model over the previous methods, such as GCAT or LMM.
Another limitation concerns linkage disequilibrium, as the authors state as well. SNPs are not completely independent of each other; alleles at nearby loci are usually correlated. The authors did not consider this complex case; rather, they only considered the simplest case, where all the SNPs are assumed to be independent.
Furthermore, a single SNP may not have enough power to explain the causal relationship. Recent papers indicate that causation of a trait may involve multiple SNPs. This could be future work as well.
References
Dustin Tran and David M. Blei. Implicit causal models for genome-wide association studies. arXiv preprint arXiv:1710.10742, 2017.
Patrik O Hoyer, Dominik Janzing, Joris M Mooij, Jonas Peters, and Bernhard Schölkopf. Nonlinear causal discovery with additive noise models. In Neural Information Processing Systems, 2009.
Alkes L Price, Nick J Patterson, Robert M Plenge, Michael E Weinblatt, Nancy A Shadick, and David Reich. Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics, 38(8):904–909, 2006.
Minsun Song, Wei Hao, and John D Storey. Testing for genetic associations in arbitrarily structured populations. Nature, 47(5):550–554, 2015.
Dustin Tran, Rajesh Ranganath, and David M Blei. Hierarchical implicit models and likelihood-free variational inference. In Neural Information Processing Systems, 2017.
Naime Ekici
Articles written in Proceedings – Mathematical Sciences
Volume 121 Issue 3 August 2011 pp 291-300
Let 𝐹 be a free Lie algebra of rank $n\geq 2$ and 𝐴 a free abelian Lie algebra of rank $m\geq 2$. We prove that the test rank of the abelian product $F\times A$ is 𝑚. Moreover we compute the test rank of the algebra $F/\gamma_k(F)'$.
Volume 121 Issue 4 November 2011 pp 405-416
Let 𝐿 be a free metabelian Lie algebra of finite
The general purpose of multiple regression (the term was first used by Pearson, 1908), as a generalization of simple linear regression, is to learn about how several independent variables or predictors (IVs) together predict a dependent variable (DV). Multiple regression analysis often focuses on understanding (1) how much variance in a DV a set of IVs explain and (2) the relative predictive importance of IVs in predicting a DV.
In the social and natural sciences, multiple regression analysis is very widely used in research. Multiple regression allows a researcher to ask (and hopefully answer) the general question "what is the best predictor of ...". For example, educational researchers might want to learn what the best predictors of success in college are. Psychologists may want to determine which personality dimensions best predict social adjustment.
A general multiple linear regression model at the population level can be written as
\[y_{i}=\beta_{0}+\beta_{1}x_{1i}+\beta_{2}x_{2i}+\ldots+\beta_{k}x_{ki}+\varepsilon_{i} \]
The least squares method used for the simple linear regression analysis can also be used to estimate the parameters in a multiple regression model. The basic idea is to minimize the sum of squared residuals or errors. Let $b_{0},b_{1},\ldots,b_{k}$ represent the estimated regression coefficients. The individual $i$'s residual $e_{i}$ is the difference between the observed $y_{i}$ and the predicted value $\hat{y}_{i}$:
\[ e_{i}=y_{i}-\hat{y}_{i}=y_{i}-b_{0}-b_{1}x_{1i}-\ldots-b_{k}x_{ki}.\]
The sum of squared residuals is
\[ SSE=\sum_{i=1}^{n}e_{i}^{2}=\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2}. \]
By minimizing $SSE$, the regression coefficient estimates can be obtained as
\[ \boldsymbol{b}=(\boldsymbol{X}'\boldsymbol{X})^{-1}\boldsymbol{X}'\boldsymbol{y}=(\sum\boldsymbol{x}_{i}\boldsymbol{x}_{i}')^{-1}(\sum\boldsymbol{x}_{i}\boldsymbol{y}_{i}). \]
How well the multiple regression model fits the data can be assessed using the $R^{2}$. Its calculation is the same as for the simple regression
\[\begin{align*} R^{2} & = & 1-\frac{\sum e_{i}^{2}}{\sum_{i=1}^{n}(y_{i}-\bar{y})^{2}}\\& = & \frac{\text{Variation explained by IVs}}{\text{Total variation}} \end{align*}. \]
In multiple regression, $R^{2}$ is the total proportion of variation in $y$ explained by the multiple predictors.
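The least-squares estimate and $R^{2}$ above can be sketched with numpy (the data below are illustrative):

```python
import numpy as np

def ols(X, y):
    """b = (X'X)^{-1} X'y and R^2, for a design matrix X whose first
    column is the intercept."""
    b = np.linalg.solve(X.T @ X, X.T @ y)
    e = y - X @ b                                   # residuals
    r2 = 1.0 - (e @ e) / np.sum((y - y.mean()) ** 2)
    return b, r2

# Exact linear data: y = 1 + 2*x1 + 3*x2, so b is recovered and R^2 = 1.
x1 = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.0, 0.0, 2.0, 1.0, 3.0])
X = np.column_stack([np.ones(5), x1, x2])
y = 1 + 2 * x1 + 3 * x2
b, r2 = ols(X, y)
```

With noisy data the same function returns $R^{2} < 1$, the proportion of variation the predictors explain.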
The $R^{2}$ increases, or at least stays the same, with the inclusion of more predictors. However, with more predictors, the model becomes more complex and potentially more difficult to interpret. To take model complexity into account, the adjusted $R^{2}$ has been defined, which is calculated as
\[aR^{2}=1-(1-R^{2})\frac{n-1}{n-k-1}.\]
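As a quick numeric illustration of the adjustment (my own numbers, not from the text):

```python
def adjusted_r2(r2, n, k):
    """aR^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1), for n observations
    and k predictors."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
```

For example, $R^{2}=0.5$ with $n=101$ and $k=10$ gives $aR^{2}=4/9\approx 0.444$: the penalty pulls the fit measure down whenever the fit is imperfect.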
With the estimates of regression coefficients and their standard errors estimates, we can conduct hypothesis testing for one, a subset, or all regression coefficients.
At first, we can test the significance of the coefficient for a single predictor. In this situation, the null and alternative hypotheses are
\[ H_{0}:\beta_{j}=0\text{ vs }H_{1}:\beta_{j}\neq0 \]
with $\beta_{j}$ denoting the regression coefficient of $x_{j}$ at the population level.
As in the simple regression, we use a test statistic
\[ t_{j}=\frac{b_{j}-\beta_{j}}{s.e.(b_{j})}\]
where $b_{j}$ is the estimated regression coefficient of $x_{j}$ using data from a sample. If the null hypothesis is true and $\beta_j = 0$, the test statistic follows a t-distribution with degrees of freedom \(n-k-1\) where \(k\) is the number of predictors.
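This calculation can be sketched in Python with SciPy (the estimate and standard error for h.gpa are taken from the R output shown later in this chapter):

```python
from scipy import stats

# h.gpa estimate and standard error from the chapter's GPA example.
b_j, se_j = 0.3763511, 0.1142615
n, k = 100, 3

t_j = b_j / se_j                                 # test statistic under H0: beta_j = 0
p = 2 * (1 - stats.t.cdf(abs(t_j), df=n - k - 1))  # two-sided p-value
print(round(t_j, 3), round(p, 4))
```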
One can also test the significance of \(\beta_j\) by constructing a confidence interval for it. Based on the t distribution, the \(100(1-\alpha)\%\) confidence interval is
\[ [b_{j}+t_{n-k-1}(\alpha/2)*s.e.(b_{j}),\;b_{j}+t_{n-k-1}(1-\alpha/2)*s.e.(b_{j})]\]
where $t_{n-k-1}(\alpha/2)$ is the $\alpha/2$ percentile of the t distribution. As previously discussed, if the confidence interval includes 0, the regression coefficient is not statistically significant at the significance level $\alpha$.
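As a sketch of the interval computation in Python (again using the h.gpa estimate and standard error from the chapter's R output; `scipy.stats` is assumed available):

```python
from scipy import stats

# h.gpa estimate and standard error from the chapter's GPA example.
b_j, se_j = 0.3763511, 0.1142615
df = 100 - 3 - 1
alpha = 0.05

# t.ppf gives the percentile of the t distribution; the lower bound uses
# the alpha/2 percentile (negative), the upper the 1 - alpha/2 percentile.
t_crit = stats.t.ppf(1 - alpha / 2, df)
ci = (b_j - t_crit * se_j, b_j + t_crit * se_j)
print([round(x, 3) for x in ci])
```

Since the interval does not include 0, the coefficient is significant at the 0.05 level, in agreement with the t test.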
Given the multiple predictors, we can also test whether all of the regression coefficients are 0 at the same time. This is equivalent to testing whether all predictors combined can explain a significant portion of the variance of the outcome variable. Since $R^2$ is a measure of the variance explained, this test is naturally related to it.
For this hypothesis testing, the null and alternative hypothesis are
\[H_{0}:\beta_{1}=\beta_{2}=\ldots=\beta_{k}=0\]
vs.
\[H_{1}:\text{ at least one of the regression coefficients is different from 0}.\]
In this kind of test, an F test is used. The F-statistic is defined as
\[F=\frac{n-k-1}{k}\frac{R^{2}}{1-R^{2}}.\]
It follows an F-distribution with degrees of freedom $k$ and $n-k-1$ when the null hypothesis is true. Given an F statistic, its corresponding p-value can be calculated from the F distribution. Note that we only look at one side of the distribution because the extreme values should be on the large value side.
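The F statistic and its p-value can be sketched in Python as follows (the helper function name is mine; the numbers come from the GPA example in this chapter):

```python
from scipy import stats

def overall_f_test(r2, n, k):
    """F statistic and p-value for H0: all k slopes are zero."""
    f = (n - k - 1) / k * r2 / (1 - r2)
    p = 1 - stats.f.cdf(f, k, n - k - 1)   # upper tail only
    return f, p

# R^2 = 0.3997, n = 100, k = 3 from the GPA example.
f, p = overall_f_test(0.3997, 100, 3)
print(round(f, 2), p)
```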
We can also test whether a subset of $p$ regression coefficients, where $p$ can range from 1 to the total number of coefficients $k$, are equal to zero. For convenience, we can rearrange all the $p$ regression coefficients to be the first $p$ coefficients. Therefore, the null hypothesis is
\[H_{0}:\beta_{1}=\beta_{2}=\ldots=\beta_{p}=0\]
and the alternative hypothesis is that at least one of them is not equal to 0.
As for testing the overall model fit, an F test can be used here. In this situation, the F statistic can be calculated as
\[F=\frac{n-k-1}{p}\frac{R^{2}-R_{0}^{2}}{1-R^{2}},\]
which follows an F-distribution with degrees of freedom $p$ and $n-k-1$. $R^2$ is for the regression model with all the predictors and $R_0^2$ is from the regression model without the first $p$ predictors $x_{1},x_{2},\ldots,x_{p}$ but with the rest predictors $x_{p+1},x_{p+2},\ldots,x_{k}$.
Intuitively, this test determines whether the variance explained by the first \(p\) predictors, above and beyond the other $k-p$ predictors, is significant. That is also the increase in R-squared.
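A Python sketch of the partial F test (the function name is mine; the $R^2$ values are those reported in this chapter's GPA example):

```python
from scipy import stats

def partial_f_test(r2_full, r2_reduced, n, k, p_test):
    """F test for dropping p_test predictors from a model with k predictors."""
    f = (n - k - 1) / p_test * (r2_full - r2_reduced) / (1 - r2_full)
    pval = 1 - stats.f.cdf(f, p_test, n - k - 1)
    return f, pval

# GPA example: full model R^2 = 0.3997, reduced model R^2 = 0.1226.
f, pval = partial_f_test(0.3997, 0.1226, n=100, k=3, p_test=2)
print(round(f, 2), pval)
```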
As an example, suppose that we wanted to predict student success in college. Why might we want to do this? There's an ongoing debate in college and university admission offices (and in the courts) regarding what factors should be considered important in deciding which applicants to admit. Should admissions officers pay most attention to more easily quantifiable measures such as high school GPA and SAT scores? Or should they give more weight to more subjective measures such as the quality of letters of recommendation? What are the pros and cons of the approaches? Of course, how we define college success is also an open question. For the sake of this example, let's measure college success using college GPA.
In this example, we use a set of simulated data (generated by us). The data are saved in the file gpa.csv. As shown below, the sample size is 100 and there are 4 variables: college GPA (c.gpa), high school GPA (h.gpa), SAT, and quality of recommendation letters (recommd).
> usedata('gpa')
> dim(gpa)
[1] 100   4
> head(gpa)
  c.gpa h.gpa  SAT recommd
1  2.04  2.01 1070       5
2  2.56  3.40 1254       6
3  3.75  3.68 1466       6
4  1.10  1.54  706       4
5  3.00  3.32 1160       5
6  0.05  0.33  756       3
>
Before fitting a regression model, we should check the relationship between college GPA and each predictor through a scatterplot. A scatterplot can tell us the form of relationship, e.g., linear, nonlinear, or no relationship, the direction of relationship, e.g., positive or negative, and the strength of relationship, e.g., strong, moderate, or weak. It can also identify potential outliers.
The scatterplots between college GPA and the three potential predictors are given below. From the plots, we can roughly see all three predictors are positively related to the college GPA. The relationship is close to linear and the relationship seems to be stronger for high school GPA and SAT than for the quality of recommendation letters.
> usedata('gpa')
> attach(gpa)
>
> par(mfrow=c(2,2))
> plot(h.gpa, c.gpa)
> plot(SAT, c.gpa)
> plot(recommd, c.gpa)
>
Next, we can calculate some summary statistics to explore our data further. For each variable, we calculate 6 numbers: minimum, 1st quartile, median, mean, 3rd quartile, and maximum. Those numbers can be obtained using the summary() function. To look at the relationship among the variables, we can calculate the correlation matrix using the correlation function cor().
Based on the correlation matrix, the correlation between college GPA and high school GPA is about 0.545, which is larger than that (0.523) between college GPA and SAT, which in turn is larger than that (0.350) between college GPA and the quality of recommendation letters.
> usedata('gpa')
> summary(gpa)
     c.gpa           h.gpa            SAT          recommd
 Min.   :0.050   Min.   :0.330   Min.   : 400   Min.   : 2.00
 1st Qu.:1.562   1st Qu.:1.640   1st Qu.: 852   1st Qu.: 4.00
 Median :1.985   Median :1.930   Median :1036   Median : 5.00
 Mean   :1.980   Mean   :2.049   Mean   :1015   Mean   : 5.19
 3rd Qu.:2.410   3rd Qu.:2.535   3rd Qu.:1168   3rd Qu.: 6.00
 Max.   :4.010   Max.   :4.250   Max.   :1500   Max.   :10.00
> cor(gpa)
            c.gpa     h.gpa       SAT   recommd
c.gpa   1.0000000 0.5451980 0.5227546 0.3500768
h.gpa   0.5451980 1.0000000 0.4326248 0.6265836
SAT     0.5227546 0.4326248 1.0000000 0.2175928
recommd 0.3500768 0.6265836 0.2175928 1.0000000
>
As for simple linear regression, the multiple regression analysis can be carried out using the lm() function in R. From the output, we can write out the regression model as
\[ c.gpa = -0.153+ 0.376 \times h.gpa + 0.00122 \times SAT + 0.023 \times recommd \]
> usedata('gpa')
> gpa.model<-lm(c.gpa~h.gpa+SAT+recommd, data=gpa)
> summary(gpa.model)

Call:
lm(formula = c.gpa ~ h.gpa + SAT + recommd, data = gpa)

Residuals:
    Min      1Q  Median      3Q     Max
-1.0979 -0.4407 -0.0094  0.3859  1.7606

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.1532639  0.3229381  -0.475 0.636156
h.gpa        0.3763511  0.1142615   3.294 0.001385 **
SAT          0.0012269  0.0003032   4.046 0.000105 ***
recommd      0.0226843  0.0509817   0.445 0.657358
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.5895 on 96 degrees of freedom
Multiple R-squared:  0.3997,    Adjusted R-squared:  0.381
F-statistic: 21.31 on 3 and 96 DF,  p-value: 1.16e-10
>
From the output, we see the intercept is -0.153. Its immediate meaning is that when all predictors' values are 0, the predicted college GPA is -0.15. This clearly does not make much sense because one would never get a negative GPA, which results from the unrealistic presumption that the predictors can take the value of 0.
The regression coefficient for the predictor high school GPA (h.gpa) is 0.376. This can be interpreted as: keeping SAT and recommd scores constant, the predicted college GPA would increase by 0.376 with a unit increase in high school GPA. This again might be problematic because it might be impossible to increase high school GPA while keeping the other two predictors unchanged. The other two regression coefficients can be interpreted in the same way.
From the output, we can also see that the multiple R-squared ($R^2$) is 0.3997. Therefore, about 40% of the variation in college GPA can be explained by the multiple linear regression with h.gpa, SAT, and recommd as the predictors. The adjusted $R^2$ is slightly smaller because it takes the number of predictors into account. In fact,
\[\begin{aligned} aR^{2} &= 1-(1-R^{2})\frac{n-1}{n-k-1}\\ &= 1-(1-.3997)\frac{100-1}{100-3-1}\\ &= .3809 \end{aligned}\]
A t test can be conducted for the regression coefficient of each of the three predictors (and also the intercept). For example, for high school GPA, the estimated coefficient is 0.376 with the standard error 0.114. Therefore, the corresponding t statistic is \(t = 0.376/0.114 = 3.294\). Since the statistic follows a t distribution with degrees of freedom \(df = n - k - 1 = 100 - 3 - 1 = 96\), we can obtain the p-value as \(p = 2*(1-pt(3.294, 96)) = 0.0013\). Since the p-value is less than 0.05, we conclude the coefficient is statistically significant. Note the t value and p-value are directly provided in the output.
> t <- 0.376/0.114
> t
[1] 3.298246
> 2*(1-pt(t, 96))
[1] 0.001365401
>
To test all coefficients together or the overall model fit, we use the F test. Given the $R^2$, the F statistic is
\[\begin{aligned} F &= \frac{n-k-1}{k}\frac{R^{2}}{1-R^{2}}\\ &= \left(\frac{100-3-1}{3}\right)\times \left(\frac{0.3997}{1-.3997}\right)=21.307 \end{aligned}\]
which follows the F distribution with degrees of freedom $df1=k=3$ and $df2=n-k-1=96$. The corresponding p-value is 1.16e-10. Note that this information is directly shown in the output as "F-statistic: 21.31 on 3 and 96 DF, p-value: 1.16e-10".
Therefore, at least one of the regression coefficients is statistically significantly different from 0. Overall, the three predictors explained a significant portion of the variance in college GPA. The regression model with the 3 predictors is significantly better than the regression model with intercept only (i.e., predict c.gpa by the mean of c.gpa).
> F <- (100 - 3 -1)/3*0.3997/(1-0.3997)
> F
[1] 21.30668
> 1 - pf(F, 3, 96)
[1] 1.162797e-10
>
Suppose we are interested in testing whether the regression coefficients of high school GPA and SAT together are significant or not. Alternatively, we want to see whether, above and beyond the quality of recommendation letters, the two predictors can explain a significant portion of the variance in college GPA. To conduct the test, we need to fit two models: the full model with all three predictors and a reduced model with only the quality of recommendation letters as the predictor.
From the full model, we can get the $R^2 = 0.3997$ with all three predictors and from the reduced model, we can get the $R_0^2 = 0.1226$ with only quality of recommendation letters. Then the F statistic is constructed as
\[F=\frac{n-k-1}{p}\frac{R^{2}-R_{0}^{2}}{1-R^{2}}=\left(\frac{100-3-1}{2}\right )\times\frac{.3997-.1226}{1-.3997}=22.157.\]
Using the F distribution with the degrees of freedom $p=2$ (the number of coefficients to be tested) and $n-k-1 = 96$, we can get the p-value close to 0 ($p=1.22e-08$).
> usedata('gpa')
> model.reduce <- lm(c.gpa~recommd, data=gpa)
> summary(model.reduce)

Call:
lm(formula = c.gpa ~ recommd, data = gpa)

Residuals:
     Min       1Q   Median       3Q      Max
-1.90257 -0.33372  0.01973  0.43457  1.71204

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  1.07020    0.25596   4.181 6.31e-05 ***
recommd      0.17539    0.04741   3.700 0.000356 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.7054 on 98 degrees of freedom
Multiple R-squared:  0.1226,    Adjusted R-squared:  0.1136
F-statistic: 13.69 on 1 and 98 DF,  p-value: 0.0003564

> F <- (100 - 3 -1)/2*(0.3997-0.1226)/(1-0.3997)
> F
[1] 22.15692
> 1 - pf(F, 2, 96)
[1] 1.225164e-08
>
Note that the test conducted here is based on the comparison of two models. In R, two models can be compared conveniently using the anova() function. As shown below, we obtain the same F statistic and p-value.
> usedata('gpa')
> model.full <- lm(c.gpa~h.gpa+SAT+recommd, data=gpa)
> model.reduce <- lm(c.gpa~recommd, data=gpa)
> anova(model.reduce, model.full)
Analysis of Variance Table

Model 1: c.gpa ~ recommd
Model 2: c.gpa ~ h.gpa + SAT + recommd
  Res.Df    RSS Df Sum of Sq      F    Pr(>F)
1     98 48.762
2     96 33.358  2    15.404 22.165 1.219e-08 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>
Not related to this old question of mine, but it takes the question from a different perspective.
Let $\mathcal V$ be a monoidal model category (following the def of Hovey, for example). Then there is a bicategory $\text{Prof}(\mathcal V)$ of $\cal V$-valued profunctors, which has the following interesting property:
every hom-category has a model structure[1].
This is, I think, the paradigmatic example of a "locally model bicategory". There can be others: I'm interested in examples of this notion and in related results.
I'm trying to compile a list of properties to impose on the general notion (a "locally model 2-category", i.e. a 2-category "enriched" over model categories; this is not a true definition, as there is no sensible monoidal structure on model categories).
Let's for example consider the following explicit question:
Let $\varphi\colon {\bf A}\looparrowright{\bf B}$ be a profunctor, and $\bf X$ be a category; then precomposition by $\varphi$ gives $$ \text{Prof}({\bf B},{\bf X}) \overset{-\diamond \varphi}\to \text{Prof}({\bf A},{\bf X}) $$ which has left and right adjoints $\text{Lan}_\varphi$, $\text{Ran}_\varphi$ defined by the co/ends $$ \text{Ran}_\varphi\psi(b,x) \cong \int_a \hom(\varphi(a,b), \psi(a,x)) $$ (Lan is similar). Does $-\diamond \varphi \dashv \text{Ran}_\varphi$ form a Quillen adjunction?
===
[1] in fact, many! Let's take $\mathcal V = \bf sSet$ and declare that I want to study the injective model structure on $\text{Prof}(\mathbf{sSet})(\mathbf A,\mathbf B)=[\mathbf A^\text{op}\times \mathbf B,\mathbf{sSet}]$.
The key here is the antimagnetic strip, quite aside from whether or not such a device can be built.
When you insert the anti-magnetic strip, you must change the shape of the magnetic field. You must force the magnetic field to "leave" the high permeability ball. The same magnetic induction $|\vec{B}|$ in a high permeability $\mu$ material represents a lower energy of genesis of the magnetic field $\frac{1}{2\,\mu} |\vec{B}|^2$ than it does in a lower permeability material (magnetic induction is continuous across the magnet's face whatever is outside). So to exclude the magnetic field, you have to do work pushing the antimagnetic strip in. The ball then falls, and you pull the strip out. However now there is no high permeability ball in contact with the strip, so there is little change in the magnetic field configuration when the strip is pulled out, and any work done on the strip as it is withdrawn will be very much less than what you put in to push the strip in. When the ball crashes at the bottom it dissipates the energy that ultimately came from your pushing the antimagnetic strip in as heat, and then the cycle repeats: you have to do the amount of work lost by the ball on crashing inelastically with the ground on the antimagnetic strip at each cycle.
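To put rough numbers on the energy argument: for the same $|\vec{B}|$, the stored energy density $\frac{1}{2\mu}|\vec{B}|^2$ is far lower inside a high-permeability material. A small Python sketch (the relative permeability of 1000 is an illustrative value, not a property of any particular ball):

```python
import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability (SI units)
B = 1.0                    # magnetic flux density in tesla (illustrative)

def energy_density(B, mu_r):
    """u = B^2 / (2 mu) for a material of relative permeability mu_r."""
    return B**2 / (2 * mu_r * mu0)

u_air = energy_density(B, 1.0)       # roughly 4e5 J/m^3
u_ball = energy_density(B, 1000.0)   # a thousand times smaller
print(u_air / u_ball)                # ratio is just mu_r, about 1000
```

Expelling the field from the high-permeability region therefore costs work, which is the work you do pushing the strip in.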
You can, in principle, recover some of the ball's kinetic energy as it hits the ground to help do some of this nett work on the antimagnetic strip by the schematic "treadle" device you show linked to the magnet. However, this is the point at which rob's answer comes in: you might make a lovely art piece that runs for a while, but, if there is any energy loss, anywhere, the machine will stop eventually.
Turning your question around: the ultimate "reason" is that we have never experimentally observed a nonconservation of energy. By experimental induction, therefore, we postulate a principle of conservation of energy and thus the machine cannot work. However, if so, then a theorist can work out what kinds of properties of an antimagnetic strip are physically plausible: only those properties which would demand a net input of energy in the way I describe in my answer are plausible, i.e. consistent with overwhelming experimental evidence.
It is often explained that renormalization arises in QFT because QFT is a low-energy effective theory that needs to be replaced by a more fundamental theory at higher energies/smaller distances. While we don't have a more fundamental theory that's accepted by everyone, candidates do exist. Can string theory for example handle the same calculations as in QFT but without renormalization? If I have a Feynman diagram that diverges in QFT and I replace the point particles by strings, will I be able to now calculate the diagram without issue?
You seem to be confusing regularization with renormalization.
Regularization is the process of removing (or, more properly, parameterizing) infinities in loop integrals. Often in elementary texts a "cutoff" representing an energy scale above which the theory is assumed to be invalid is discussed, and counterterms are added to the Lagrangian in order to make loop integrals finite.
This introduces ambiguities in the definition of the theory parameters, like masses or coupling constants. Enter renormalization, which is the process of carefully defining what it means to measure a parameter so that we can properly define a Lagrangian that gives the correct results for measured quantities.
While often discussed at the same time, renormalization and regularization are completely separate and distinct procedures. Consider a loop integral for a scalar field in 2 dimensions. Once Wick-rotated to Euclidean space it would look something like this:
$$I(p) = \int \frac{d^2k}{(2\pi)^2} \frac{1}{k^2 + m^2}\frac{1}{(p+k)^2 + m^2}$$
For large $k$, the integrand goes like $\frac{1}{k^4}$ so there's no divergence. However, one loop effects such as this would still screen charges or modify masses and you would need to perform renormalization in order to connect theory and experiment. Renormalization really is an established part of physics, with observable consequences. No future developments will ever get rid of it. The couplings really do run, the masses really do get radiative corrections, and so on. String theory is no different.
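The finiteness claim is easy to check numerically. At $p=0$ the angular integral gives $2\pi$, leaving a one-dimensional radial integral with the closed form $I(0)=\frac{1}{4\pi m^2}$ (a standard 2D Euclidean one-loop result). A quick check with SciPy:

```python
import math
from scipy.integrate import quad

m = 1.0
# I(0) = ∫ d^2k/(2π)^2 1/(k^2+m^2)^2; after the angular integral the
# remaining radial integrand is k / (2π (k^2+m^2)^2).
integrand = lambda k: k / (2 * math.pi * (k**2 + m**2) ** 2)
val, err = quad(integrand, 0, math.inf)
print(val, 1 / (4 * math.pi * m**2))  # both ≈ 0.0795775
```

The numerical value agrees with the closed form, confirming the integral is finite, yet (as the text says) the finite loop still renormalizes masses and couplings.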
The requirement for a fundamental physical theory is not that it doesn't require renormalization -- an empirical and logical impossibility -- but that it be UV complete. This is the requirement that the theory is well defined up to arbitrarily high energy scales and that it is predictive (typically taken to mean "renormalizable" but fashions change). String theory is known to be UV complete. Asymptotically free field theories are also known to be UV complete.
What I think one needs to internalize conceptually is that the program of renormalization is always favourable (and almost always required) in physical theories, be they fundamental or effective phenomenological ones (including condensed matter field theories), be there infinities or not.
I think the last point is by far the most important. Yes, renormalization did formally arise as a method of handling divergent loop integrals by declaring that only physically measurable observables be finite and well-behaved. This is well justified for a physical theory, as we can never actually measure the action or Lagrangian itself, just the scattering amplitudes, which in turn provide us with the Green's functions or correlation functions of the theory.
For a moment switching to condensed matter systems, which do have a finite UV cutoff, there are no divergences in the perturbative formulation of the field theory, as we always have some lattice (or equivalent discrete structure) at the shortest length scales. Even with the absence of infinities, we do renormalize such theories, the essential reason being that renormalization is a procedure by which one can decouple the low-energy physics (the IR behaviour) from the high-energy UV one (that takes place on the scale of, say, the lattice spacing). It is this removal of sensitive dependence on the microscopic details that underlies the idea of renormalization.
So, even if a theory is UV complete, which is something one would require of a fundamental field theory, we would want to renormalize the couplings and calculate their flow so that your everyday coffee may not spill because a particle collider discovered new interactions at the Planck scale!
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2014-06)
The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2014-01)
In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2014-01)
The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2014-03)
A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider
(American Physical Society, 2014-02-26)
Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV
(American Physical Society, 2014-12-05)
We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
Let’s start by recalling some background about modules.
Suppose that \(R\) is a ring and \(1_R\) is its multiplicative identity. A left \(R\)-module \(M\) consists of an abelian group \((M, +)\) and an operation \(R \times M \rightarrow M\) such that for all \(r, s \in R\) and \(x, y \in M\), we have:
\(r \cdot (x+y)= r \cdot x + r \cdot y\) (\(\cdot\) is left-distributive over \(+\))
\((r +s) \cdot x= r \cdot x + s \cdot x\) (\(\cdot\) is right-distributive over \(+\))
\((rs) \cdot x= r \cdot (s \cdot x)\)
\(1_R \cdot x= x\)
\(+\) is the symbol for addition in both \(R\) and \(M\).
If \(K\) is a field, then \(M\) is a \(K\)-vector space. It is well known that every vector space \(V\) has a basis, i.e. a subset of linearly independent vectors that spans \(V\). Unlike a vector space, a module doesn’t always have a basis.
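A standard example of a module without a basis (an assumption on my part that this is the kind of example the full post develops) is \(\mathbb{Z}/2\mathbb{Z}\) viewed as a \(\mathbb{Z}\)-module:

```latex
% Z/2Z as a Z-module has no basis.
% The empty set spans only {0}, and the only other candidate subset,
% {1bar}, is Z-linearly dependent, since a nonzero scalar kills it:
\[
2 \cdot \bar{1} = \bar{0}, \qquad \text{although } 2 \neq 0 \text{ in } \mathbb{Z}.
\]
% Hence no subset of Z/2Z is simultaneously spanning and Z-linearly
% independent, so this module has no basis.
```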
Published February 2011.
The most ancient device found in all early civilisations is a "shadow stick". The shadow cast from a shadow stick was used to observe the motion of the Sun and thus to tell time. Today we call this instrument a Gnomon. The name gnomon comes from the Greek and refers to any L-shaped instrument, originally used to draw a right angle.
In Euclid Book II, where Euclid deals with the transformation of areas, the gnomon takes the form of an "L-shaped" area touching two adjacent sides of a parallelogram. Today, a gnomon is the vertical rod or similar device that makes the shadow on a sundial.
For more about sundials go to Leo's article - Brief History of Time Measurement.
At midday the shadow of a stick is shortest, and the civilisations of Mesopotamia, Egypt, and China took the North - South direction from this alignment. In contrast, the Hindus used the East - West direction, the rising and setting of the sun, to orient their "fire-altars" for religious practices. To do this they constructed the "gnomon circle" whose radius was the square root of the sum of the square of the height of the gnomon and its shadow [See Note 2 below].
The Merkhet is one of the oldest known astronomical instruments. It was developed around 600 BCE and uses a plumb line to obtain a true vertical, as in the picture. The other object is the rib of a palm leaf, split at one end to make a thin slit for a sight. Babylonian and Egyptian astronomers were able to measure the altitude and lateral displacement of heavenly objects from a particular direction by using a Merkhet, thus giving the earliest ideas of turning, or angle.
Observations of celestial bodies by the Babylonians from about 1,800 BCE gave rise to the eventual division of the circle into 360 degrees, and by about 500 BCE, the division of the heavens into twelve regions of 30 degrees each, often referred to as the 12 houses of the zodiac. The Babylonians recorded the events of the lunar month, the daily movement of the sun across the sky over the year, and the rising and setting of the major planets. So, by 750 BCE astronomers had a reasonably accurate means of measuring the elevation (latitude) and lateral direction (longitude) of all objects in the heavens. They built up an extensive collection of data, and made tables of the positions of objects in the sky at any given time through a year (these tables are called ephemerides).
These observations continued over many centuries, slowly becoming more accurate, so that ancient people were able to make star maps, and detect the regular events in the heavens.
Many seasonal phenomena like the flooding of the Nile, or special events like religious ceremonies were linked to astronomical phenomena. The ability to predict some of these major astronomical events gave rise to astrology, where people believed that there was a link between heavenly and earthly events, and that the stars had some control over their lives. See this BBC news item about a prehistoric star map.
The Babylonians and Chinese both believed that the earth and the moon were spherical, that the earth and the moon rotated on an axis, and that the sun and the planets moved in circles round the earth. This enabled them to be able to explain the phases of the moon, and predict eclipses of the moon and the sun by believing that the earth cast a shadow on the moon, and the moon cast a shadow over the sun. They were able to predict paths of other objects across the sun, for example the transit of Venus, a description and explanation of which can be found here on Wikipedia.
The Babylonian astronomers recorded astronomical data systematically and by the Seleucid period (330-125BCE) there were a great many astronomical tablets showing ephemerides for the moon and the major planets. Many of the tablets contain "procedures" or instructions for how to calculate intervals between astronomical events using the properties of simple arithmetic progressions. These procedural processes were the earliest steps of a mathematical astronomy, and both the procedures and the data were used by those who came later. The Babylonians wrote down lists of numbers, in what we would call an arithmetic progression and recognised that numbers repeated themselves over periods of time.
Neugebauer published the sexagesimal values for twelve measurements of the position of the Moon taken from a clay tablet dated 133/132 BCE.
In the table above, the top line shows the end of the year 133 BCE with the last month Aires, so the start of the Babylonian year was at the vernal equinox, and the bottom line represents the end of year 132 BCE. The height of the lines on the zig-zag graph below approximately represent the sequence of the numerical values in the table. There are two groups of numbers, one starting with 28, followed by another starting 29. The results for Gemini and Cancer differ only in the third place of sexagesimals and the minimum on the graph is interpolated from the results in the table. Similarly the results for Sagitarius and Capricorn indicate the maximum value for the longitude.
Looking at the first three sets of sexagesimal numbers: 28, 55, 57, 58; 28, 37, 57, 58 and 28, 19, 57, 58 we can notice that the significant differences in the second place between 55, 37 and 19 are all giving a constant 18, which is the difference in height of the vertical lines on the zig-zag graph (except at the minimum and maximum). The graph was drawn to illustrate the periodicity of the data. It is important to realise that the Babylonians recognised the events repeated themselves after some time, but they did not see these results as a 'graph' as we can [see Note 3 below].
The use of graphs as a way of recording the data comes from Neugebauer's book The Exact Sciences in Antiquity.
The Babylonian astronomers recognised the events were periodic but they did not have a theory of planetary motion.
The Sulbasutras are the only early sources of Hindu mathematical knowledge and originally come from the Vedic period (during the second millennium BCE). The earliest written texts we have from this oral tradition date from about 800 BCE. The Sulbasutras are the instructions for constructing various geometrical shapes to make 'fire-altars' using the "Peg and Cord" technique. Each 'fire-altar' was a different shape and associated with unique gifts from the Gods.
For more information on Peg and Cord geometry see: The Development of Algebra Part 1: Section 4 "Early Indian Mathematics" an article by Leo already published on NRICH.
The Vedic people knew how to find the cardinal directions (NSEW). The Sulbasutras gave procedures for the construction of the altars by starting with a line marking the E-W direction (sun rises in east and sinks in the west), thus the E-W direction had special religious significance.
At the end of the fourth century BCE the Indian part of Alexander the Great's empire broke up into small kingdoms run by Indian Greeks. Around this time there was a collection of mathematical knowledge called jyotsia, a mixture of astronomy, calendar calculations and astrology. The rulers still maintained trading links between western India and the Hellenistic culture of the Roman Empire. At this time, Indian horoscope astrology became popular needing precise calendar and astronomical calculations.
The Panca-siddhantica is a collection of five astronomical works composed in the sixth century CE by Varahamihira. These works contain earlier mathematical knowledge, and here we find an approximation for $\pi$ as $\sqrt{10}$, because the relationship between the circumference of a circle $C$ and its diameter $D$ was taken as $D=\sqrt{\frac{C^2}{10}}$. Sines were calculated at intervals of $\frac{30^\circ}{8}$ or $3^\circ45'$, giving a series of values for the Sines of angles in the first quadrant. The use of the same Sanskrit terms as the Babylonians for the radius of a circle, together with similar calculation methods, suggests that this is the earliest surviving Indian sine table. [See Note 4 below]
The method of calculation and the values used by Varahamihira are very similar to those of a Greek Chord table for arcs up to $120^\circ$, with the quadrant divided into sixths, namely arcs of $15^\circ$. This suggests that the Indian invention of the trigonometry of Sines was inspired by replacing the Greek Chord geometry of right triangles in a semicircle with the simpler Sine geometry of right triangles in a quadrant [See Note 5 below]. This discovery is much earlier than the account usually given of the sine table derived from chords by Aryabhata the Elder (476-550 CE), who used the word jiya for sine. Brahmagupta reproduced the same table in 628 CE, and Bhaskara gave a detailed method for constructing a table of sines for any angle in 1150 CE.
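The replacement of chords by sines described above rests on the identity Crd(θ) = 2R sin(θ/2); a small sketch (the radius R = 60 is an illustrative assumption, not a value from the text):

```python
import math

def chord(theta_deg, R=60):
    """Greek chord of an arc theta: Crd(theta) = 2 R sin(theta / 2)."""
    return 2 * R * math.sin(math.radians(theta_deg) / 2)

# A chord table at 15-degree steps up to 120 degrees carries the same
# information as a sine table at 7.5-degree steps in the quadrant:
for arc in range(15, 121, 15):
    sine = chord(arc) / (2 * 60)      # recover sin(arc / 2) from the chord
    assert abs(sine - math.sin(math.radians(arc / 2))) < 1e-12
```

In particular Crd(60°) with R = 60 equals the radius itself.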
The Chinese were the most accurate observers of celestial phenomena before the Arabs. "Oracle Bones" with star names engraved on them dating back to the Chinese Bronze Age (about 2,000 BCE) have been found, and very old star maps have been found on pottery, engraved on stones, and painted on the walls of caves.
Surviving records of astronomical observations made by two astronomers Shi Shen and Gan De date from the 4th century BCE.
Shi Shen wrote a book on astronomy, and made a star map and a star catalogue. In 364 BCE Gan De made the first recorded observations of sunspots and of the moons of Jupiter, and they both made accurate observations of the five major planets. Their observations were based on the principle of the stars rotating about the pole (equivalent to the earth rotating on its axis).
A famous map due to Su Song (1020-1101) and drawn on paper in 1092 represents the whole sky with the positions of some 1,350 stars.
The equator is represented by the horizontal straight line running through the star chart, while the ecliptic curves above it.
The oldest star map found so far is from Dunhuang. Earlier thought to date from about 940 CE, it is now believed to have been made with precise mathematical methods by the astronomer and mathematician Li Chunfeng (602-670); it shows 1339 stars in 257 Chinese star groups with a precision of between 1.5 and 4 degrees of arc. In all there are 12 charts, each covering a 30-degree section, displaying the full sky visible from the Northern hemisphere. Up to now it is the oldest complete preserved star atlas discovered from any civilisation. It has been on display this year in the British Library to celebrate 2009 as the International Year of Astronomy. [see Note 6 below]
Some elements of Indian astronomy reached China with the expansion of Buddhism (25-220 CE). Later, during the Tang period (618-907 CE), a number of Indian astronomers came to live in China, and Islamic astronomers collaborated closely with their Chinese counterparts, particularly during the Yuan period (1271-1368).
Very little of the knowledge of the Indians and the Chinese was known in Europe before the Portuguese navigators of the fifteenth century and the Jesuit scientist Matteo Ricci in the sixteenth.
Babylonian astronomy contributed direct empirical data as a foundation for Greek theory, and exactly the same data that produced the "zig-zag" results of Babylonian theory were used by Hipparchus to calculate the mean motions of the sun and moon.
Pedagogical notes to support this article can be found in the Teachers' Notes accompanying this resource.
Explanations for some of the astronomical terms used in this article can be found here.
Part 2 of the History of Trigonometry will take you from Eudoxus to Ptolemy.
Katz, V. (1998) A History of Mathematics. New York. Addison Wesley. Recommended as the best general history of mathematics currently available. There is good coverage of aspects of astronomy in antiquity, and the discussion on 'functions' (p. 156) is worth reading. Trigonometry is dealt with in the sections on Ancient Civilisations, Mediaeval Europe and Renaissance Europe.
Katz, V. (Ed.) (2007) The Mathematics of Egypt, Mesopotamia, China, India, and Islam. Princeton. Princeton University Press. This book contains a wealth of up-to-date information on mathematics and some aspects of astronomy in these ancient civilisations.
Linton, C. M. (2004) From Eudoxus to Einstein: A History of Mathematical Astronomy. Cambridge University Press. The first chapter deals with ancient people and early Greek astronomy.
Needham, J. (1959) Science and Civilisation in China. Vol. 3. Mathematics and the Sciences of the Heavens and the Earth. Cambridge University Press.
Neugebauer, O. (1983) (original 1955) Astronomical Cuneiform Texts. Vol. 1: The Moon. Heidelberg. Springer-Verlag. These two books are the big classics on China and Mesopotamia, but much work has been done in these areas since the 1950s.
Neugebauer, O. (1969) (original 1952) The Exact Sciences in Antiquity. New York. Dover Books. Still available, this is a more popular book and contains much information on Egypt, Babylon and Greek Science.
Plofker, K. (2009) Mathematics in India. Princeton. Princeton University Press. This is the most recent book on the history of Mathematics in India by a renowned expert.
Wikipedia is quite good for first-level information on early astronomy, and should lead you to more reliable sources. However, more recent work, as found in Katz (2007), is the best generally available today.
The MacTutor site has a topic list and there you can find material on Trigonometry and Greek astronomy, but look also at Geography. In the biography list, you can find Ptolemy, Eudoxus, Menelaos, Brahmagupta and others.
Note 6 below has a link to the oldest Chinese Star Map.
Here is Gary Thompson's huge collection of data on Egyptian, Babylonian, Chinese and other Ancient Astronomy: http://members.westnet.com.au/gary-david-thompson/index1.html
This site shows some of the oldest star diagrams from prehistoric times: http://www.spacetoday.org/SolSys/Earth/OldStarCharts.html
This is a site on Egyptian Astronomy http://www.egyptologyonline.com/astronomy.htm
Here you can find the 'Decan' chart http://www.moses-egypt.net/star-map/senmut1-mapdate_en.asp
Search for new resonances in $W\gamma$ and $Z\gamma$ Final States in $pp$ Collisions at $\sqrt{s}=8\,\mathrm{TeV}$ with the ATLAS Detector
(Elsevier, 2014-11-10)
This letter presents a search for new resonances decaying to final states with a vector boson produced in association with a high transverse momentum photon, $V\gamma$, with $V= W(\rightarrow \ell \nu)$ or $Z(\rightarrow ...
Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $\boldsymbol{pp}$ collisions at $\boldsymbol{\sqrt{s}}$ = 8 TeV with the ATLAS detector
(Elsevier, 2014-11-10)
Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ...
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2016-09)
The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
Motion in a Straight Line: General Kinematics of a Moving Body. A particle is a body of negligible dimensions, whereas a body proper has measurable dimensions. Distance is the actual path length covered by the body, whereas displacement is the change from the initial to the final position, with direction; distance is a scalar and displacement is a vector. Speed is defined as the rate at which distance is travelled by the body; velocity is the rate of displacement. Speed and velocity have the same units and dimensions. If a particle travels the first half of a distance with speed V_1 and the rest with speed V_2, then average speed = \tt \frac{2V_{1}V_{2}}{V_{1}+V_{2}}. If a particle travels the first half of the time with speed V_1 and the rest with speed V_2, then average speed = \tt \frac{V_{1}+V_{2}}{2}. For a complete round trip, average velocity = 0 while average speed > 0. In uniform motion, average velocity and instantaneous velocity are equal. If a body moving with uniform acceleration crosses two points with velocities \tt \overline{u} \ and \ \overline{v}, then the velocity at the midpoint is \tt \sqrt{\frac{u^{2}+v^{2}}{2}}. If a particle travels along a circular path of radius R subtending an angle θ, then displacement = \tt 2R\sin\frac{\theta}{2}. View the topic in this video from 0:50 to 57:18.
1. Speed (v) =\tt \frac{Distance\ travelled(s)}{Time\ taken(t)}
2. Average speed = \tt \frac{Total\ distance\ travelled}{Total\ time \ taken}
3. If a particle travels distances \tt s_{1}, s_{2}, s_{3}, ... with speeds \tt v_{1}, v_{2}, v_{3}, ..., then Average speed = \tt \frac{S_{1}+S_{2}+S_{3}+...}{\left(\frac{S_{1}}{v_{1}}+\frac{S_{2}}{v_{2}}+\frac{S_{3}}{v_{3}}+...\right)}
4. If a particle travels equal distances (\tt s_{1} = s_{2} = s) with velocities \tt v_{1} and \tt v_{2}, then Average speed = \tt \frac{2v_{1}v_{2}}{\left(v_{1}+v_{2}\right)}
5. If a particle travels with speeds \tt v_{1}, v_{2}, v_{3}, ... during time intervals \tt t_{1}, t_{2}, t_{3}, ..., then Average speed = \tt \frac{v_{1}t_{1}+v_{2}t_{2}+v_{3}t_{3}+...}{t_{1}+t_{2}+t_{3}+...}
6. If a particle travels with speeds \tt v_{1} and \tt v_{2} for equal time intervals, i.e. \tt t_{1} = t_{2} = t, then Average speed = \tt \frac{v_{1}+v_{2}}{2}
7. When a body travels equal distances with speeds \tt V_{1} and \tt V_{2}, the average speed \tt v is the harmonic mean of the two speeds: \tt \frac{2}{v}=\frac{1}{v_{1}}+\frac{1}{v_{2}}
8. Instantaneous speed = \tt \lim_{\Delta t \rightarrow 0}\ \frac{\Delta s}{\Delta t}=\frac{ds}{dt}
9. \tt velocity=\frac{Displacement}{Time\ taken}
10. \tt Average\ velocity=\frac{Total\ Displacement}{Total\ Time\ taken}
11. Instantaneous velocity: The velocity of a body at a given instant of time during motion is known as instantaneous velocity i.e.,
Instantaneous velocity =\tt \lim_{\Delta t \rightarrow 0}\frac{\Delta\overrightarrow{r}}{\Delta t}=\frac{d\overrightarrow{r}}{dt}
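Formulas 4 and 6 above are easy to mix up; the following sketch contrasts the two averages with illustrative speeds v1 = 30 and v2 = 60 (the numbers are assumptions chosen for the example):

```python
v1, v2 = 30.0, 60.0  # illustrative speeds

# Equal distances with each speed: harmonic mean (formulas 4 and 7)
avg_equal_distance = 2 * v1 * v2 / (v1 + v2)

# Equal times at each speed: arithmetic mean (formula 6)
avg_equal_time = (v1 + v2) / 2

# Check directly from the definition, total distance / total time (s = 1):
assert abs(avg_equal_distance - 2 / (1 / v1 + 1 / v2)) < 1e-12
print(avg_equal_distance, avg_equal_time)  # 40.0 45.0
```

The harmonic mean is always the smaller of the two, because more time is spent at the lower speed.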
I am looking for a list of classifying spaces $BG$ of groups $G$ (discrete and/or topological) along with associated covers $EG$; there does not seem to be such cataloging on the web. Or if not a list, just some further fundamental examples. For instance, here are the ones I have off the top of my head:
$B\mathbb{Z}_n=L_n^\infty$ with cover $S^\infty$ ($B\mathbb{Z}_2=\mathbb{R}P^\infty$)
$B\mathbb{Z}=S^1$ with cover $\mathbb{R}$
$BS^1=\mathbb{C}P^\infty$ with cover $S^\infty$
$B(F_2)=S^1\vee S^1$ with cover $\mathcal{T}$ (infinite fractal tree)
$BO(n)=BGL_n(\mathbb{R})=G_n(\mathbb{R}^\infty)$ with cover $V_n(\mathbb{R}^\infty)$
$B\mathbb{R}=\lbrace pt.\rbrace$ with cover $\mathbb{R}$
$B\langle a_1,b_1,\ldots,a_g,b_g\;|\;\prod_{i=1}^g[a_i,b_i]\rangle=M_g$ with cover $\mathcal{H}$ (hyperbolic plane tiled by $4g$-sided polygon)
And of course, $B(G_1\times G_2)=BG_1\times BG_2$, so I do not care that much about ''decomposable'' groups.
**The "associated cover" is the [weakly] contractible total space.
[Edit] I should make the comment that $BG$ will be different from $BG_\delta$, where $G_\delta$ denotes the topological group with discrete topology. For instance, the homology of $B\mathbb{R}_\delta$ has uncountable rank in all degrees (learned from a comment of Thurston).
I was trying to understand why the equation $y_i = \left( \frac{n}{w} \right) + (i \pmod w) $ describes the step property in a balancing network?
First, recall $x_i$ to be the number of tokens a network gets as input and similarly $y_i$ to be the number of output tokens. Recall that a balancing network is just a network that distributes tokens to its output.
I was reading the art of multicore programming and in page 272 it says:
If the number of tokens n is a multiple of four (the network width), then the same number of tokens emerges from each wire. If there is one excess token, it emerges on output wire 0, if there are two, they emerge on output wires 0 and 1, and so on. In general,
$$ n = \sum x_i $$
then
$$y_i = \left( \frac{n}{w} \right) + (i \pmod w) $$
we call this property the step property.
It also defines equivalent ways to see the step property as:
For any $i<j$, $0 \leq y_i - y_j \leq 1$
i.e. as we go up the output wires, the wire can only increase one step at a time or not increase (so top values are always larger or equal). An example:
However, the formula $y_i = \left( \frac{n}{w} \right) + (i \pmod w) $ doesn't make sense to me also, specifically the following doesn't make sense:
if there are two, they emerge on output wires 0 and 1, and so on...
I tried plugging in the numbers, say $n = 6$, but the results don't quite make sense.
For example according to the formula above we get:
$$ y_0 = \left( \frac{6}{4} \right) + (0 \pmod 4) = 1 + 0 = 1 $$ $$ y_1 = \left( \frac{6}{4} \right) + (1 \pmod 4) = 1 + 1 = 2 $$ $$ y_2 = \left( \frac{6}{4} \right) + (2 \pmod 4) = 1 + 2 = 3 $$ $$ y_3 = \left( \frac{6}{4} \right) + (3 \pmod 4) = 1 + 3 = 4 $$
which doesn't agree with what the picture of the diagram would be, because it seems to be backwards. I am not sure if I made a mistake or misunderstood the formula, but it should give something like the figure/diagram/picture from the book. Also, according to the formula, is it ever possible for outputs to be equal? It seems to always increase, and in the wrong direction.
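For what it's worth, the equivalent characterization ($0 \le y_i - y_j \le 1$ for $i < j$) together with $\sum y_i = n$ determines the outputs uniquely, and the closed form $y_i = \lceil (n-i)/w \rceil$ reproduces the book's verbal description (excess tokens on the top wires). That closed form is my reading of the intended formula, an assumption, so here is a quick check:

```python
import math

def step_outputs(n, w):
    """Token counts on wires 0..w-1 satisfying the step property."""
    return [math.ceil((n - i) / w) for i in range(w)]

y = step_outputs(6, 4)
print(y)  # [2, 2, 1, 1]: the two excess tokens emerge on wires 0 and 1

# Equivalent characterization: 0 <= y_i - y_j <= 1 for all i < j
w = len(y)
assert all(0 <= y[i] - y[j] <= 1 for i in range(w) for j in range(i + 1, w))
assert sum(y) == 6
```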
1 Easy
Proposition Let $f:X\to Y$ be a continuous map of topological spaces, $\mathscr F$ a sheaf of abelian groups on $X$ such that $R^jf_*\mathscr F=0$ for $j>0$. Then for all $i\geq 0$ there exists a natural isomorphism $$H^i(Y, f_*\mathscr F)\simeq H^i(X,\mathscr F)$$
Proof. Apply the composition rule for the derived functors of $G=\Gamma(Y, \_ )$ and $F=f_*({\_})$. By definition, $G\circ F = \Gamma(X, \_ )$. Then $$R\Gamma(Y, f_*\mathscr F) \simeq R\Gamma(Y, Rf_*\mathscr F) \simeq R\Gamma (X, \mathscr F).$$ Taking cohomology shows the result. $\square$
(edit to please anon, see the comments below) This is usually exhibited as an example of how to use the Leray spectral sequence. Doing it that way is not much harder than the above, but perhaps a bit less "automatic".
Furthermore, this proof shows more: Not only the cohomologies of these sheaves are isomorphic, but they come from the same complex! That's a much stronger statement. It is easy to give examples when the cohomologies of two complexes are isomorphic, but the complexes are not. I suppose one may argue that the word "natural" in the statement means exactly this, but then I'd say that proving naturality with the Leray spectral sequence is certainly possible, but it definitely needs more care.
This last point is actually an important one regarding the derived category language. You get a higher level notion. The fact that you can work with the complex whose cohomologies give the derived functors of your original functor is very very useful.
2 Less Easy
In case you were not convinced by the above example, here is one that should do the trick:
A special case of Grothendieck duality says that if $f:X\to Y$ is a proper morphism between not too horrible schemes, let's say finite type over a field $k$ (let me not try to make a precise statement; this is in Residues and Duality that you mentioned), and $\mathscr F$ is a coherent sheaf on $X$, then $$ Rf_*R\mathscr Hom_X(\mathscr F, \omega_X^{\bullet})\simeq R\mathscr Hom_Y(Rf_*\mathscr F, \omega_Y^{\bullet}).$$ Here $\omega_{Z}^{\bullet}=\varepsilon^!k$ is "the" dualizing complex, where $\varepsilon: Z\to \mathrm{Spec}\ k$ is the structure map of $Z$.
Now try to imagine how one could state this using spectral sequences. Both sides actually correspond to spectral sequences, so the statement would be something like "there is a natural map between this an this spectral sequences, such that they converge to the same thing".
I would argue that already the statement of this theorem would be tiring in the language of spectral sequences, but using it would be pure pain.
3 Even Less Easy
Here is an application of Grothendieck duality where one can see how the derived category formalism makes life easier and arguments that seemed complicated are reduced to a one liner.
Theorem (a.k.a. Kempf's Criterion). Let $Y$ be a normal variety over $\mathbb C$ with a resolution of singularities $f:X\to Y$. Then $Y$ has rational singularities (i.e., $R^if_*\mathscr O_X=0$ for $i>0$) if and only if $Y$ is Cohen-Macaulay and $f_*\omega_X\simeq \omega_Y$.
Proof. Let $n=\dim Y=\dim X$ and suppose $Y$ has rational singularities. Then $$\omega_Y^{\bullet}\simeq R\mathscr Hom_Y(\mathscr O_Y, \omega_Y^{\bullet})\simeq R\mathscr Hom_Y(Rf_*\mathscr O_X, \omega_Y^{\bullet})\simeq Rf_*R\mathscr Hom_X(\mathscr O_X, \omega_X^{\bullet})\simeq Rf_*\omega_X[n]\simeq f_*\omega_X[n].$$ (The isomorphisms follow by the assumptions and Grothendieck duality; the last one is the Grauert-Riemenschneider vanishing theorem.) This implies that $\omega_Y=h^{-n}(\omega_Y^{\bullet})\simeq f_*\omega_X$, which is the second condition to prove, and also that $h^i(\omega_Y^{\bullet})=0$ for $i\neq -n$, which is equivalent to $Y$ being Cohen-Macaulay.
The other direction goes essentially in the same fashion. $\square$
Now try to do this with spectral sequences.
To answer your second question, I think you are right. In order to get the spectral sequences you do not need to go through the derived category formalism. However, if you are indeed "recovering" the spectral sequence, then you start with the derived category formalism. In other words, if you've never heard of derived categories, why (or perhaps more importantly how) would you want to recover anything from a derived category statement? (Since written word lacks intonation, let me add that I'm not trying to be confrontational, but I feel that this question is somehow off target.)
\(LS\)-coupling is in practice a good approximation in light atoms, but there are appreciable departures from it in the heavier atoms. Generally the several lines in a multiplet are fairly close together in wavelength for \(LS\)-coupling, but, as departures from \(LS\)-coupling become more pronounced, the lines in a multiplet may become more widely separated and may appear in quite different parts of the spectrum.
In \(LS\)-coupling, multiplets always connect terms with the same value of \(S\). Thus, while \(^3 \text{D} − \ ^3 \text{P}\) would be "allowed" for \(LS\)-coupling, \(^3 \text{D} − \ ^1 \text{P}\) would not. \(\Delta S = 0\) is a necessary condition for \(LS\)-coupling, but is not a sufficient condition. Thus while a multiplet with \(\Delta S \neq 0\) certainly indicates departure from \(LS\)-coupling, \(\Delta S = 0\) by no means guarantees that you have \(LS\)-coupling. In spectroscopy, the term "forbidden" generally refers to transitions that are forbidden to electric dipole radiation. Transitions that are forbidden merely to \(LS\)-coupling are usually referred to as "semi-forbidden", or as "intersystem" or "intercombination" transitions. We shall have more on selection rules in section 7.24.
The energies, or term values, of the levels (each defined by \(LSJ\)) within a term are given, for \(LS\)-coupling, by a simple formula:
\[T = \frac{1}{2} a [ J(J+1) - L(L+1) - S(S+1)]. \label{7.17.1} \tag{7.17.1}\]
Here \(a\) is the spin-orbit coupling coefficient, whose value depends on the electron configuration. What is the separation in term values between two adjacent levels, say between level \(J\) and \(J −1\)? Evidently (if you apply equation \(\ref{7.17.1}\)) it is just \(aJ\). Hence Landé's Interval Rule, which is a good test for \(LS\)-coupling:
The separation between two adjacent levels within a term is proportional to the larger of the two J-values involved. For example, in the \(KL3s (^2 S) 3 p^3 P^{\text{o}}\) term of \(\text{Mg} \ _\text{I}\) (the first excited term above the ground term), the separation between the \(J = 2\) and \(J = 1\) levels is \(4.07 \ \text{mm}^{-1}\), while the separation between \(J = 1\) and \(J = 0\) is \(2.01 \ \text{mm}^{-1}\). Landé's rule is approximately satisfied, showing that the term conforms closely, but not exactly, to \(LS\)-coupling. It is true that for doublet terms (and all the terms in \(\text{Na} \ _\text{I}\) and \(\text{K} \ _\text{I}\) for example, are doublets) this is not of much help, since there is only one interval. There are, however, other indications. For example, the value of the spin-orbit coupling coefficient can be calculated from \(LS\)-theory, though I do not do that here. Further, the relative intensities of the several lines within a multiplet (or indeed of multiplets within a polyad) can be predicted from \(LS\)-theory and compared with what is actually observed. We discuss intensities in a later chapter.
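As a quick numerical illustration of Equation \(\ref{7.17.1}\) and the interval rule, here is a sketch for a \(^3\text{P}\) term (\(L = S = 1\)); the coupling coefficient \(a = 1\) is an arbitrary illustrative choice:

```python
def term_value(J, L, S, a=1.0):
    """T = (a/2) [J(J+1) - L(L+1) - S(S+1)]  (Equation 7.17.1)."""
    return 0.5 * a * (J * (J + 1) - L * (L + 1) - S * (S + 1))

# 3P term: L = 1, S = 1 gives levels J = 0, 1, 2.
intervals = {J: term_value(J, 1, 1) - term_value(J - 1, 1, 1) for J in (1, 2)}
print(intervals)  # {1: 1.0, 2: 2.0} -- the interval between J and J-1 is aJ

# Lande's rule: intervals in the ratio 2 : 1, to be compared with the
# measured Mg I splittings 4.07 and 2.01 mm^-1 quoted in the text.
assert intervals[2] / intervals[1] == 2.0
```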
The spin-orbit coupling coefficient \(a\) can be positive or negative. If it is positive, the level within a term with the largest \(J\) lies highest; such a term is called a normal term, though terms with negative \(a\) are in fact just as common as "normal" terms. If \(a\) is negative, the level with largest \(J\) lies lowest, and the term is called an inverted term. Within a shell (such as the \(L\)-shell) all the \(s\) electrons may be referred to as a subshell, and all the \(p\) electrons as another subshell. The subshell of \(s\) electrons can hold at most two electrons; the subshell of \(p\) electrons can hold at most six. If the outermost subshell (i.e. the electrons responsible for the optical spectrum) is less than half full, \(a\) is positive and the terms are normal. If it is more than half full, \(a\) is negative and the terms are inverted. If the subshell is exactly half full, \(a\) is small, and the term is compact and may be either normal or inverted. For example, in \(\text{Al} \ _\text{I}\) the term \(3 p^2 \ ^4\text{P}\) (which has three levels - write down their \(J\)-values) is normal: there are only two \(p\) electrons out of the six allowed in that subshell, so the subshell is less than half full. The term \(2s 2 p^4 \ ^4\text{P}\) of \(\text{O} \ _\text{II}\) has four \(p\) electrons, so the subshell is more than half full, and the term is inverted. The term \(2s^2 2 p^3 \ ^2\text{P}^{\text{o}}\) of the same atom has a subshell that is exactly half full. The term happens to be normal, but the two levels are separated by only \(0.15 \ \text{mm}^{-1}\), which is relatively tiny.
I should say that you have 3 related questions, namely 1) To what extent can we trust the approximations based on HP and Jw transformations, 2) The nature of the low excitation spectrum and 3) The relation with Goldstone modes.
We shall look first at the Holstein-Primakoff method. The spin ladder operators at a site $j$ are given by
$S^-_j = \sqrt{2S}b_j^\dagger\sqrt{1-\frac{n_j}{2S}}$
and its adjoint, where $S$ is the spin of your model; in this case we have $S=1/2$. You're making the approximation $S_j^-=\sqrt{2S}b_j^\dagger$, or in other words expanding the square root and discarding non-linear terms, which should be good as long as $\langle n_j \rangle \ll S=1/2$. Spin one-half is not really the best case for HP, because it is the one with the greatest error in the linear approximation. Nevertheless, let's continue. To study the low energy spectrum we introduce excitations (called magnons) with a thermal distribution according to BE statistics, $\langle n_k\rangle =(e^{\beta\omega_k}-1)^{-1}$, and look at the correction to the magnetization $\Delta S(T)=S-\langle S_j\rangle$ at each site. By translational invariance we have $\langle n_j\rangle =\frac{1}{N}\sum_j \langle n_j\rangle$. Passing to the momentum representation as usual we get
$\Delta S(T)=\int \frac{dk}{2\pi}\frac{1}{e^{\beta\omega_k}-1}$
It is easy to see that the integral diverges at low momenta as $\Delta S \propto \int_\epsilon \frac{dk}{k^2}\propto\frac{1}{\epsilon}$. This is just an instance of the Mermin-Wagner theorem, which says that in 1 and 2 dimensions there is no spontaneous symmetry breaking, because the corresponding massless Goldstone bosons have infrared divergences. You can check that in 3D the correction goes as $\Delta S\propto T^{3/2}$. I see you're interested in the zero temperature limit. For fermion theories the Luttinger-Ward theorem gives the conditions under which the finite temperature results hold in the zero temperature limit. For bosons it is somewhat harder, because you have to deal with Bose condensation. For the simple case of the Heisenberg model in 1D the classic result of Coleman can be extended without much trouble, as he himself notes, namely a prohibition of spontaneous symmetry breaking in 1D and consequently the absence of Goldstone modes.
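The divergence can also be seen numerically; a crude sketch with dispersion $\omega_k = k^2$ and $\beta = 1$ (illustrative units, my assumption), integrating the Bose factor down to a lower cutoff $\epsilon$:

```python
import math

def occupation_integral(eps, kmax=1.0, npts=4000):
    """Trapezoid-rule integral of 1/(e^{k^2} - 1) from eps to kmax,
    on a log-spaced grid so the small-k region is well resolved."""
    ks = [eps * (kmax / eps) ** (i / (npts - 1)) for i in range(npts)]
    f = [1.0 / (math.exp(k * k) - 1.0) for k in ks]
    return sum(0.5 * (f[i] + f[i + 1]) * (ks[i + 1] - ks[i])
               for i in range(npts - 1))

i3, i4 = occupation_integral(1e-3), occupation_integral(1e-4)
print(i3, i4)            # grows roughly tenfold per decade of cutoff
assert 8 < i4 / i3 < 12  # i.e. Delta S ~ 1/eps, the infrared divergence
```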
So this answers question 3) regarding the Goldstone modes (they do not exist), and shows that, although Holstein-Primakoff seems reasonable, it gives results which are difficult to interpret as soon as one talks about excitations.
What about the JW transformation? It works very well in 1D. In fact I think it is instructive to work out all the terms. There is a sign convention in the transforms, but I get for the full Hamiltonian (in lattice space, disregarding terms that depend only on $n_j$ and ignoring the boundary, because I'm concerned with the $N\rightarrow \infty$ limit):
$H_f=\sum_j -J\frac{1}{2}(f_j^\dagger f_{j+1}+f_{j+1}^\dagger f_j) -Jn_{j+1}n_j$
with $J>0$. In momentum space the first term is the kinetic one you wrote. The second one, it is easy to see, corresponds to an attractive interaction. Therefore as soon as you put in excitations you need to worry about the fermions forming bound states.
In fact, the one-dimensional Heisenberg model is exactly solvable by the Bethe Ansatz, and one can show that the low energy spectrum is made of gapped bosons, which from the JW point of view are bound states. If you want to understand the finite $N$ model the Bethe Ansatz is even better, since you can construct the exact energies and corresponding eigenstates.
In summary, HP is not really trustworthy in this case; it is better to look at JW. But in low dimensions basically every interaction is strong no matter how weak the coupling, so it pays to look beyond the first terms in perturbation theory. And there is no Goldstone mode, boson or fermion, because of the infrared divergence.
Nevertheless, it is well known that in one-dimensional systems we do not have a spin-statistics theorem, because there is no consistent definition of spin. Therefore there is a mapping from bosons to fermions. This article discusses the equivalence between fermions and bosons. In case you want further discussion I would recommend the great book by Giamarchi, "Quantum Physics in One Dimension". You'll find a lot about Luttinger liquids and bosonization, and there is a short introduction to the Bethe Ansatz, complete with low energy excitations.
For even further discussions of Heisenberg model in 1D I really like "The theory of magnetism made simple", by Daniel Mattis. Not really made that simple though.
For a relation between the bosons and fermions in the context of the Heisenberg model, check this paper by Luscher, where he discusses the antiferromagnet as a lattice regularization of the Thirring model, which Coleman had shown to be equivalent to the sine-Gordon model. It may be possible that the ferromagnet case you're interested in also possesses a similar relation.
Consider a family of convex sets $\{K_n\}$ such that $K_n \subset \mathbb{R}^n$ for each $n$. The kinds of sets one might be considering could be, for instance,
$K_n$ is the cube of side $2A$, i.e., $K_n = [-A, A]^n$.
$K_n$ is the $n$ dimensional ball of radius $\sqrt{\lambda n}$.
Suppose we are interested in the rate of growth with respect to $n$ of the volumes of sets in the family $\{K_n\}$. We can define the parameter$$v = \lim_{n \to \infty} \frac{1}{n}\log \text{Vol}(K_n),$$which captures the exponential growth rate of volume. For the family of cubes, the parameter is $\log 2A$, and for the family of spheres it is $\frac{1}{2} \log 2 \pi e \lambda$.
Now instead of just volumes, suppose we are interested in the rate of exponential growth of intrinsic volumes of the family $\{K_n\}$. It seems intuitive to define the function $v:[0,1] \to \mathbb{R}$ as $$v(\theta) = \lim_{n \to \infty} \frac{1}{n} \log \mu_{n\theta}(K_n),$$ where $\mu_{n\theta}(K_n)$ is the $n\theta$-th intrinsic volume of $K_n$. Naturally $n\theta$ need not be an integer, but while taking the intrinsic volume we can round it off to the nearest integer. In case of the two examples considered,
$v(\theta)$ for the family of cubes is: $H(\theta) + \theta \log 2A$,
$v(\theta)$ for the family of spheres is: $H(\theta)+ \frac{\theta}{2} \log 2 \pi e \lambda + \frac{1- \theta}{2} \log (1 - \theta)$,
where $H(\theta) = -\theta \log \theta - (1 - \theta) \log (1 - \theta)$, is the binary entropy function. We can check that $v(1)$, which is the growth rate of volume, matches the $v$ defined earlier.
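For the cube family the limit can be sanity-checked numerically from the closed form $\mu_j([-A,A]^n) = \binom{n}{j}(2A)^j$ (the standard intrinsic-volume formula for a box), e.g. at $\theta = 0.3$:

```python
import math

def log_mu_cube(n, j, A):
    """log of the j-th intrinsic volume of [-A, A]^n: log C(n,j) + j log 2A."""
    log_binom = math.lgamma(n + 1) - math.lgamma(j + 1) - math.lgamma(n - j + 1)
    return log_binom + j * math.log(2 * A)

theta, A, n = 0.3, 1.0, 2000
approx = log_mu_cube(n, round(n * theta), A) / n

H = -theta * math.log(theta) - (1 - theta) * math.log(1 - theta)
exact = H + theta * math.log(2 * A)       # the limit claimed above

print(approx, exact)                      # agree up to an O(log(n)/n) error
assert abs(approx - exact) < 0.01
```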
My questions are:
What is this function $v(\theta)$? It seems to be a characteristic of the family $\{K_n\}$, as it tries to capture how this family is growing. Has this been encountered in the literature before?
For what families of convex sets can one always guarantee the existence of $v(\theta)$?
Is there any intuitive way to see why this limit function $v(\theta)$ should even exist for the two families considered above? (I explicitly used the formulae of intrinsic volumes for the above families to arrive at the corresponding $v(\theta)$.)
For a given finite alphabet $\Sigma$, my goal is to write an algorithm that receives as input a sequence $V=V_{1}V_{2}\dots V_{n}$ of subsets ($V_{i}\subseteq\Sigma$), and returns a weighted deterministic finite-state automaton with the following property: for every input string $s$, the automaton penalizes each substring $s_{1}s_{2}\dots s_{n}$ of $s$ if $s_{i}\in V_{i}$ for every $1\le i\le n$.
For example, if $\Sigma=\{a,b\}$, $V=V_{1}V_{2}=\{b\}\{b\}$, the returned automaton would penalize the string $s=bbbb$ $3$ times, i.e. when accepting the string and summing over the weighted arcs, the result would be $3$.
The algorithm should return the weighted DFA as a 4-tuple $<Q,\delta ,q_0 , F>$:
$Q=\{q_{0},q_{1}\}$
$\delta=\{(q_{0},a,0,q_{0}),(q_{0},b,0,q_{1}),(q_{1},b,1,q_{1}),(q_{1},a,0,q_{0})\}\subseteq Q\times\Sigma\times\{0,1\}\times Q$
$q_{0}=q_{0}$
$F=\{q_{0},q_{1}\}$
The algorithm must return the DFA as a 4-tuple as described. My main concern is the size of the resulting DFA. Therefore, the structure of $V$ should be taken advantage of. Capitalizing on subset relations, it is possible to eliminate states from the resulting automaton. For example, when all subsets are pairwise disjoint, the automaton would have $n$ states.
The problem arises in the general case, where the inclusion relation between subsets is unknown.
For example, if $\Sigma=\{a,b,c,d,e\}$ and $V=\{a,b\}\{b,c,d\}\{d,e\}$, the string $abde$ should be penalized twice (both $abd$ and $bde$ are penalized), whereas $acde$ should be penalized only once ($acd$ is penalized, but not $cde$). This happens because there is a chain of non-empty intersections between subsets ($V_{1}\cap V_{2}\neq\emptyset$ and $V_{2}\cap V_{3}\neq\emptyset$), so having accepted a penalized substring, some of the paths taken must be remembered in order to know how to move on. Paths in the required automaton would have to be split accordingly. In this case the resulting automaton could be minimally built with 5 states.
When the sequence is longer, there is a chance for more complex inclusion relations between its subsets.
I'm interested in a general algorithmic solution for this problem, currently not in any implementation or performance considerations. As stated, my main concern is the size of the DFA.
Any insight, reference or suggestion on how to tackle this problem would be appreciated.
More specifically, I'm now trying to figure out how to capitalize on inclusion relations between subsets in order to eliminate states (in advance) when building the DFA.
EDIT: Added a paragraph and a final comment about using the structure of $V$ to minimize the DFA, emphasizing my need for as small a DFA as possible. |
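As a starting point (not an answer to the minimization question), here is a hypothetical Python sketch of the baseline construction: a subset construction over the obvious $(n+1)$-state counting NFA, where a DFA state records which prefix lengths of the pattern are currently matched. All names are mine, and this baseline does not yet exploit inclusion relations between the $V_i$.

```python
def build_weighted_dfa(sigma, V):
    """Subset construction over the (n+1)-state counting NFA.

    A state is a frozenset of active prefix lengths (0..n-1); reading a
    symbol advances every active prefix whose next subset contains it,
    and each completed length-n match adds weight 1 to the arc.
    """
    n = len(V)
    start = frozenset({0})          # the empty prefix is always active
    states, delta = {start}, {}
    stack = [start]
    while stack:
        q = stack.pop()
        for a in sigma:
            nxt = {0}               # a new match can start at any position
            weight = 0
            for k in q:
                if a in V[k]:
                    if k + 1 == n:
                        weight += 1  # completed a full match: penalize
                    else:
                        nxt.add(k + 1)
            t = frozenset(nxt)
            delta[(q, a)] = (t, weight)
            if t not in states:
                states.add(t)
                stack.append(t)
    return states, delta, start

def run(delta, start, s):
    """Total penalty accumulated while reading the string s."""
    q, total = start, 0
    for a in s:
        q, w = delta[(q, a)]
        total += w
    return total
```

On the examples from the question, this reproduces the expected penalties: $V=\{b\}\{b\}$ penalizes $bbbb$ three times, and $V=\{a,b\}\{b,c,d\}\{d,e\}$ penalizes $abde$ twice and $acde$ once.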
I apologize if this question is not a good fit for CSTheory. I'm a PhD student who has just started out and I'm working on a game-theory problem in one of my classes. Although my professor hasn't explicitly required it, I'd like to prove the NP-hardness or completeness of the problem since it's something I'd like to learn to do. I'm pretty new at theoretical CS since I've been working as a software-engineer for the past 8 or so years.
My problem is as follows. The defender has a set of targets $T = \{a, b, c, d, e, f, g\}$ and $k$ resources, where $k < |T|$. The defender can place at most one resource on one target to cover it.
There is an attacker who can attack any subset of targets in $T$. So the attacker's pure strategy $A \subseteq T$ and $A \in \mathcal{P}(T)$.
The attacker succeeds if
all the targets he attacks are not covered by the defender. If at least one of the targets that he attacks is covered by the defender, he fails.
The defender does not know which targets the attacker will attack, and he also does not know the number of targets either. The defender's job is to come up with an optimal mixed strategy (a vector with a probability assigned to each pure strategy) to guard against the attacker. The probability assigned to a pure-strategy is the probability that the defender will play that pure strategy. This information is also known to the attacker.
I know that the strategy space for both players is large. The total number of pure strategies for the defender is $\binom{|T|}{k}$, and for the attacker it is $2^{|T|}$. There is a standard algorithm that generates a set of linear programs to solve for the defender's mixed strategy. However, we will end up with $2^{|T|}$ linear programs, each with $\binom{|T|}{k}$ variables. So the complexity is $O\big(2^{|T|} \times \binom{|T|}{k}^{3.5}\big)$ (Karmarkar's algorithm).
The algorithm won't perform well at all, because there is a combinatorial explosion in the strategy space for both the defender and the attacker. I'm trying to see if there is an NP-complete problem I can reduce this to, but I'm not sure where to go from here. I have looked at set cover and exact set cover, but I'm still not sure how I would reduce it to that, or even if those are applicable here.
I read this paper where they deal with a somewhat similar problem (urban network graph, where an attacker can attack any target from a starting point by using any of the paths available; the defender has $k$ resources that he can place on edges to block the attacker). Here, they use a double oracle approach and then prove the NP hardness of the defender and attacker oracle by reducing to set-cover and 3-SAT respectively. The defender oracle tries to find a pure strategy for a given attacker mixed-strategy, and the attacker tries to find a pure strategy for a given defender mixed-strategy.
I'm trying to do something similar, but I'm not sure where to start, or even if it would be applicable in my case. For example, if we have an attacker with the following mixed strategy $\{\{b, c\}, \{a, b, c\}, \{a, b, d\}, \{b, c\}\}$ (where each one has an associated probability), the defender needs to find the optimal pure strategy that covers the most targets and gives him the most payoff. If we assume that the defender has 3 resources, the optimal pure strategy would be one of $\{\{a, b, c\}, \{a, b, d\}, \{a, c, d\}, \{b, c, d\}\}$. Here, it doesn't look like finding a cover would help, since the defender only has 3 resources. So I'm not sure where to go from here.
EDIT: Each target $t \in T$ has an associated payoff of $\tau_t$. |
If $y(t) = x(t)*h(t)$, then what is the expression for $y(t+a)$?
Is it $x(t+a)*h(t+a)$ or $x(t+a)*h(t)$?
Judging from your confusion between $x(t+a) \star h(t+a)$ and $h(t) \star x(t+a)$, a little help with argument manipulations on functions and convolutions may be appropriate here, working through simple examples:
First let us express the usual simplistic case. Consider the relation: $$ y(t) = h(t) \cdot x(t) + g(t) \tag{1}$$
then a manipulation of the argument $t$ is applied to all functions on both sides:
$$ y(t+a) = h(t+a) \cdot x(t+a) + g(t+a)$$
or an arbitrary transform on $t$ would similarly be: $$ y(\phi(t)) = h(\phi(t)) \cdot x(\phi(t)) + g(\phi(t))$$
Now consider the case where two functions convolved to produce the third: $$y(t) = \int_{-\infty}^{\infty} h(\tau) x(t-\tau) d\tau $$
which is abbreviated as $$ y(t) = h(t) \star x(t) \tag{2} $$
Now be careful interpreting case 2. The variable $t$ appears as an argument in all functions, but you may not apply the transform on $t$ as you did in case 1. So assume you have an arbitrary transform on $t$, say $\phi(t)$; then
$$ y(\phi(t)) \neq h(\phi(t)) \star x(\phi(t)) \tag{3} $$
For example, as in your case, if $\phi(t) = t+a$ then you get
$$ y(t+a) \neq h(t+a) \star x(t+a)$$ but $$ y(t+a) = h(t) \star x(t+a) = h(t+a) \star x(t) $$
The justification of this can (only) be seen when you consider the
integral definition of the convolution operator:
$$ \begin{align} y(t) &= h(t) \star x(t) \\ & = \int_{-\infty}^{\infty} h(\tau) x(t-\tau)d\tau \\ y(t+a) & = \int_{-\infty}^{\infty} h(\tau) x((t+a)-\tau)d\tau \\ & = h(t) \star x(t+a) \\ \end{align} $$
Note that since the live variable $t$ appears inside the integral in only one function ($x(t-\tau)$ in this case), a change in $t$ will only affect that one, and you get:
$$ y(t+a) = h(t) \star x(t+a) $$ or from commutativity of convolution you get $$ y(t+a) = h(t+a) \star x(t) $$
So this provides the answer you were looking for. However, it's not over, because the following case represents an exception:
$$ y(-t) \neq h(t) \star x(-t) $$ but $$ y(-t) = h(-t) \star x(-t) \tag{4} $$
So how do we see case 4? Again, using the integral definition:
Assuming that $y(t) = h(t) \star x(t)$, then compute the convolution between two new signals $g(t)=h(-t)$ and $z(t)=x(-t)$ as: $$ \begin{align} w(t) &= g(t) \star z(t) \\ & = \int_{-\infty}^{\infty} g(\tau) z(t-\tau)d\tau &g(\tau)=h(-\tau),z(t-\tau)=x(-(t-\tau)) \\ & = \int_{-\infty}^{\infty} h(-\tau) x(-(t-\tau))d\tau &\text{ let } \tau'=-\tau \\ & = -\int_{\infty}^{-\infty} h(\tau') x(-(t+\tau'))d\tau' &\text{ replace } \tau' \text{ with } \tau \\ & = \int_{-\infty}^{\infty} h(\tau) x(-t-\tau) d\tau \\ & = y(-t) \\ \end{align} $$
hence we conclude that $h(-t) \star x(-t) = w(t) = y(-t)$. As stated before, you must always consult the (explicit) integral definition to decide on the correct functions used in the convolution operator. |
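The shift rule above is easy to confirm numerically in discrete time with NumPy's `np.convolve`, where finite sequences stand in for the continuous signals and delaying a sequence by $a$ samples means prepending $a$ zeros (a sketch; the particular sequences are arbitrary):

```python
import numpy as np

h = np.array([1.0, -2.0, 0.5])
x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
a = 2

y = np.convolve(h, x)                         # y = h * x (full convolution)
x_delayed = np.concatenate([np.zeros(a), x])  # x delayed by a samples
y_delayed = np.convolve(h, x_delayed)         # h * (delayed x)

# delaying x alone delays y by the same a samples:
assert np.allclose(y_delayed, np.concatenate([np.zeros(a), y]))

# delaying BOTH h and x delays y by 2a samples, not a:
h_delayed = np.concatenate([np.zeros(a), h])
both = np.convolve(h_delayed, x_delayed)
assert np.allclose(both, np.concatenate([np.zeros(2 * a), y]))
```

The second assertion is the discrete analogue of why $y(t+a) \neq h(t+a) \star x(t+a)$: shifting both operands shifts the output twice.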
What is a proof by contradiction? This is actually quite difficult to answer in a satisfactory way, but usually what people mean is something like this: given a statement $\phi$, a proof of $\phi$ by contradiction is a derivation of a contradiction from the assumption $\lnot \phi$. In order to analyse this, it is very important to distinguish between the statement $\phi$ and the statement $\lnot \lnot \phi$; the two statements are formally distinct (as obvious from the fact that their written forms are different!) even though they always have the same truth value in classical logic.
Let $\bot$ denote contradiction. When we show a contradiction assuming $\lnot \phi$, what we have is a
conditional proof of $\bot$ from $\lnot \phi$. This can then be transformed into a proof of the statement $\lnot \phi \to \bot$, which is the long form of $\lnot \lnot \phi$ – in other words, we have a proof that "it is not the case that $\lnot \phi$". This, strictly speaking, is not a complete proof of $\phi$: we must still write down the last step deducing $\phi$ from $\lnot \lnot \phi$. This is the point of contention between constructivists and non-constructivists: in the constructive interpretation of logic, $\lnot \lnot \phi$ is not only formally distinct from $\phi$ but also semantically distinct; in particular, constructivists reject the principle that $\phi$ can be deduced from $\lnot \lnot \phi$ (though they may accept some limited instances of this rule).
There is one case where proof by contradiction is always acceptable to constructivists (or at least intuitionists): this is when the statement $\phi$ to be proven is itself of the form $\lnot \psi$. This is because it is a theorem of intuitionistic logic that $\lnot \lnot \lnot \psi$ holds if and only if $\lnot \psi$. On the other hand, it is also in principle possible to give a "direct" proof of $\lnot \psi$ in the following sense: we simply have to derive a contradiction by assuming $\psi$. Any proof of $\lnot \psi$ by contradiction can thus be transformed into a "direct" proof because one can always derive $\lnot \lnot \psi$ from $\psi$; so if we can obtain a contradiction by assuming $\lnot \lnot \psi$, we can certainly derive a contradiction by assuming $\psi$.
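The intuitionistic equivalence $\lnot\lnot\lnot\psi \leftrightarrow \lnot\psi$ used above can be checked in Lean 4 without invoking any classical axioms (a minimal sketch; the theorem name is my own):

```lean
-- Proved purely constructively: no excluded middle, no double-negation
-- elimination. Forward: from ¬¬¬ψ and ψ, derive ¬¬ψ and contradict.
-- Backward: ¬ψ refutes any ¬¬ψ directly.
theorem triple_neg_iff (ψ : Prop) : ¬¬¬ψ ↔ ¬ψ :=
  ⟨fun hnnn hψ => hnnn (fun hn => hn hψ),
   fun hn hnn => hnn hn⟩
```

That the proof term uses only function application and lambda abstraction is precisely what makes it acceptable to an intuitionist.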
Ultimately, both of the above methods involve making a counterfactual assumption and deriving a contradiction. However, it is sometimes possible to "push" the negation inward and even eliminate it. For example, if $\phi$ is the statement "there exists an $x$ such that $\theta (x)$ holds", then $\lnot \phi$ can be deduced from the statement "$\theta (x)$ does
not hold for any $x$". In particular, if $\theta (x)$ is itself a negative statement, say $\lnot \sigma (x)$, then $\lnot \phi$ can be deduced from the statement "$\sigma (x)$ holds for all $x$". Thus, proving "there does not exist an $x$ such that $\sigma (x)$ does not hold" by showing "$\sigma (x)$ holds for all $x$" might be considered a more "direct" proof than either of the two previously-mentioned approaches.
Can all proofs by contradiction be transformed into direct proofs? In some sense the answer has to be no: intuitionistic logic is known to be weaker than classical logic, i.e. there are statements that have proofs in classical logic but not in intuitionistic logic. The only difference between classical logic and intuitionistic logic is the principle that $\phi$ is deducible from $\lnot \lnot \phi$, so this (in some sense) implies that there are theorems that can only be proven by contradiction.
So what are the advantages of proof by contradiction? Well, it makes proofs easier. So much so that one algorithm for automatically proving theorems in propositional logic is based on it. But it also has its disadvantages: a proof by contradiction can be more confusing (because it has counterfactual assumptions floating around!), and in a precise technical sense it is less satisfactory because it generally cannot be (re)used in constructive contexts. But most mathematicians don't worry about the latter problem. |
The probability
<math>P</math> of some event
<math>E</math> (denoted <math>P(E)</math>) is defined with respect to a "universe" or sample space
<math>S</math> of all possible elementary events
in such a way that <math>P</math> must satisfy the Kolmogorov axioms.
Alternatively, a probability can be interpreted as a measure on a sigma-algebra of subsets of the sample space, those subsets being the events, such that the measure of the whole set equals 1. This property is important, since it gives rise to the natural concept of conditional probability. Every set <math>A</math> with non-zero probability defines another probability on the space:
<math>P(B \vert A) = {P(B \cap A) \over P(A)}.</math>
This is usually read as "probability of <math>B</math> given <math>A</math>". <math>B</math> and <math>A</math> are said to be independent
if the conditional probability of <math>B</math> given <math>A</math> is the same as the probability of <math>B</math>.
In the case that the sample space is finite or countably infinite, a probability function can also be defined by its values on the elementary events <math>\{e_1\}, \{e_2\}, \ldots</math> where <math>S = \{e_1, e_2, \ldots\}</math>.
Kolmogorov axioms
For any set <math>E</math>:
<math>0 \leq P(E) \leq 1.</math>
That is, the probability of an event set is represented by a real number between 0 and 1.
<math>P(S) = 1</math>.
That is, the probability that some elementary event in the entire sample set will occur is 1, or certainty. More specifically, there are no elementary events outside the sample set. This is often overlooked in some mistaken probability calculations; if you cannot precisely define the whole sample set, then the probability of any subset cannot be defined either.
Any sequence of mutually disjoint events <math>E_1, E_2, ...</math> satisfies
<math>P(E_1 \cup E_2 \cup \cdots) = \sum P(E_i)</math>.
That is, the probability of an event set which is the union of other disjoint subsets is the sum of the probabilities of those subsets. This is called σ-additivity. If there is any overlap among the subsets this relation does not hold.
These axioms are known as the
Kolmogorov axioms, after Andrey Kolmogorov who developed them.
Lemmas in probability
From these axioms one can deduce other useful rules for calculating probabilities. For example:
<math>P(A \cup B) = P(A) + P(B) - P(A \cap B)</math>
That is, the probability that A
or B will happen is the sum of the
probabilities that A will happen and that B will happen, minus the
probability that A and B will happen.
<math>P(S - E) = 1 - P(E)</math>
That is, the probability that any event will
not happen is 1 minus the probability that it will.
Using conditional probability as defined above, it also follows immediately that:
<math>P(A \cap B) = P(A) \cdot P(B \vert A)</math>
That is, the probability that A
and B will happen is the probability
that A will happen, times the probability that B will happen given
that A happened. It then follows that A and B are independent if and only if
<math>P(A \cap B) = P(A) \cdot P(B)</math>.
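These rules are easy to verify mechanically on a toy finite sample space, for instance a fair die (a hedged sketch, not part of the original article; exact arithmetic via `fractions` avoids floating-point noise):

```python
from fractions import Fraction

# A fair six-sided die: finite sample space S with uniform probability.
S = frozenset(range(1, 7))
def P(E):
    return Fraction(len(E & S), len(S))

A = frozenset({2, 4, 6})   # "the roll is even"
B = frozenset({4, 5, 6})   # "the roll is at least 4"

# inclusion-exclusion: P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
assert P(A | B) == P(A) + P(B) - P(A & B)

# complement rule: P(S − E) = 1 − P(E)
assert P(S - A) == 1 - P(A)

# multiplication rule: P(A ∩ B) = P(A) · P(B | A)
assert P(A & B) == P(A) * (P(A & B) / P(A))
```

Here $A$ and $B$ are not independent: $P(A \cap B) = 1/3$ while $P(A) \cdot P(B) = 1/4$.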
See also
frequency probability -- personal probability -- eclectic probability -- statistical regularity
All Wikipedia text is available under the terms of the GNU Free Documentation License |
To show two sets are equivalent, you should show that $A\subseteq B$ and $B\subseteq A$. This implies that $A=B$. If $A=\varnothing$ and $B=\varnothing$, then try an element-chasing proof to show that $A=B$.
($\to$): If $x\in A$, then $x\in B$. Thus, $A\subseteq B$. $\qquad$[ Vacuously true]
($\leftarrow$): If $x\in B$, then $x\in A$. Thus, $B\subseteq A$.$\qquad$ [ Vacuously true]
Thus, by mutual subset inclusion, we have that $A=B$.
This conclusion is pretty lame though, as it is an example of a so-called
vacuous truth. The implication $p\to q$ is only false when $p$ is true and $q$ is false. Thus, assuming anything to be in an empty set will give you all sorts of bizarre conclusions.
Addendum: Some of the confusion seems to be rooted in what it means to be a subset as opposed to an element. Thus, I am going to list several claims where the goal is to figure out whether or not the claim is true or false (hopefully this may help the OP and some other users). Answers will be provided on the side of each claim.
Claims:
(a) $0\in\varnothing\qquad\qquad$ [ False]
(b) $\varnothing\in\{0\}\qquad\qquad$ [ False]
(c) $\{0\}\subset\varnothing\qquad\qquad$ [ False]
(d) $\varnothing\subset\{0\}\qquad\qquad$ [ True]
(e) $\{0\}\in\{0\}\qquad\qquad$ [ False]
(f) $\{0\}\subset\{0\}\qquad\qquad$ [ False]
(g) $\{\varnothing\}\subseteq\{\varnothing\}\qquad\qquad$ [ True]
(h) $\varnothing\in\{\varnothing\}\qquad\qquad$ [ True]
(i) $\varnothing\in\{\varnothing,\{\varnothing\}\}\qquad\qquad$ [ True]
(j) $\{\varnothing\}\in\{\varnothing\}\qquad\qquad$ [ False]
(k) $\{\varnothing\}\in\{\{\varnothing\}\}\qquad\qquad$ [ True]
(l) $\{\varnothing\}\subset\{\varnothing,\{\varnothing\}\}\qquad\qquad$ [ True]
(m) $\{\{\varnothing\}\}\subset\{\varnothing,\{\varnothing\}\}\qquad\qquad$ [ True]
(n) $\{\{\varnothing\}\}\subset\{\{\varnothing\},\{\varnothing\}\}\qquad\qquad$ [ False]
Note: Below, $x$ is meant simply to denote a letter, not a set (which is often indicated by writing a capital letter, as was done in the initial explanation). For (t), if $x$ did denote a set, then $x=\varnothing$ would make (t) true as opposed to false.
(o) $x\in\{x\}\qquad\qquad$ [ True]
(p) $\{x\}\subseteq\{x\}\qquad\qquad$ [ True]
(q) $\{x\}\in\{x\}\qquad\qquad$ [ False]
(r) $\{x\}\in\{\{x\}\}\qquad\qquad$ [ True]
(s) $\varnothing\subseteq\{x\}\qquad\qquad$ [ True]
(t) $\varnothing\in\{x\}\qquad\qquad$ [ False]
(u) $\varnothing\in\varnothing\qquad\qquad$ [ False]
(v) $\varnothing\subseteq\varnothing\qquad\qquad$ [ True] |
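Python's `frozenset` makes a handy mechanical checker for claims like these, since `in` tests membership ($\in$), `<=` tests inclusion ($\subseteq$), and `<` tests proper inclusion ($\subset$). A sketch (variable names are mine):

```python
# Model ∅, {∅}, and {{∅}} as nested frozensets.
empty = frozenset()
s_empty = frozenset({empty})        # {∅}
s_s_empty = frozenset({s_empty})    # {{∅}}

assert empty not in empty           # (u) ∅ ∈ ∅ is False
assert empty <= empty               # (v) ∅ ⊆ ∅ is True
assert empty in s_empty             # (h) ∅ ∈ {∅} is True
assert s_empty in s_s_empty         # (k) {∅} ∈ {{∅}} is True
assert s_empty < frozenset({empty, s_empty})          # (l) {∅} ⊂ {∅, {∅}}
assert not s_s_empty < frozenset({s_empty, s_empty})  # (n): {{∅}, {∅}} = {{∅}}
```

The last line shows why (n) is false: listing an element twice does not change a set, so the claimed proper inclusion degenerates to a set being a proper subset of itself.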
Topology Seminar: Soren Galatius (Stanford) Date: 10/13/2010
University of British Columbia
Homology of moduli spaces of manifolds
We study the space M_g of isometry classes (or conformal equivalence classes) of smooth manifolds, diffeomorphic to #^g(S^d \times S^d), the connected sum of g copies of S^d \times S^d. For 2d=2, this is essentially the moduli space of Riemann surfaces. There is a variant M_{g,1} where we consider moduli of manifolds with an embedded D^{2d}; connected sum with S^d \times S^d gives a map M_{g,1} \to M_{g+1,1}, and we can form the direct limit M_{\infty,1}. The work of Madsen and Weiss on Mumford's conjecture determines the homology of M_{\infty,1} in the case 2d=2. We give a similar description of the homology of M_{\infty,1} in higher dimensions (2d \geq 6). This is joint work with Oscar Randal-Williams.
3:00pm - 4:00pm, WMAX 110 |
I have encountered different notions of isotropy of radiation and I would like to know if they are the same and what the exact definition of isotropy is, if one exists.
Let's take black body radiation inside a cavity at thermal equilibrium for an example. It's a fact, that in this case the radiation is isotropic, but what does that mean exactly? Let's consider a point in the cavity and two small surfaces $d\sigma_1$ and $d\sigma_2$ located at said point, but with different orientations in space. Let $L_1(\theta_1,\phi_1,\nu)$ be the spectral radiance of $d\sigma_1$ where $\theta_1$ and $\phi_1$ are the angles of the solid angle $d\Omega_1$ and $\nu$ is a certain frequency and $L_2(\theta_2,\phi_2,\nu)$ be the corresponding spectral radiance of $d\sigma_2$. The angles $\theta_i$, $\phi_i$ are measured with respect to the normal to the surface $d\sigma_i$.
The first notion of isotropy that I encountered is: $L_1(\theta_1,\phi_1,\nu)$ = $L_1(\tilde{\theta}_1,\tilde\phi_1,\nu)$ for all possible angles $\theta_1$, $\phi_1$, $\tilde\theta_1$, $\tilde\phi_1$. That means, if the surface is chosen, it does not matter in which direction you look.
The second notion is: Let $\theta := \theta_1=\theta_2$ and $\phi := \phi_1 = \phi_2$. Then $L_1(\theta,\phi,\nu) = L_2(\theta,\phi,\nu)$ (Planck used this in his book "Vorlesungen über die Theorie der Wärmestrahlung" in the derivation of equation (21)). That means that the orientation does not matter.
So which one is the definition of isotropy of radiation? Maybe none or both? |
I have a normal random variable $X$ with mean $\mu$ and variance $\sigma^2$. Any advice on how to compute the conditional expectation $E[\frac{1}{X}|X \leq T]$ where $T$ is a positive constant?
Comment: Simulation for $T = 10,$ which avoids taking reciprocals of values
anywhere near $0.$ Then $E(\frac 1 X\, |\, X > 10) \approx 0.042.$ (As @Wolfies comments, this is a different problem.)
set.seed(2019)
x = rnorm(10^6, 25, 5)
xc = x[x > 10]
length(xc)
[1] 998638
a = mean(1/xc); a
[1] 0.04174508
hist(1/xc, prob=T, col="skyblue2")
abline(v=a, col="red")
Tangentially related application: Here |
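For readers without R, here is an equivalent NumPy sketch of the same simulation (the exact counts differ from the R output because the random streams differ, but the conditional mean comes out close to 0.0417):

```python
import numpy as np

rng = np.random.default_rng(2019)
x = rng.normal(25, 5, 10**6)   # X ~ N(25, 5^2)
xc = x[x > 10]                 # condition on X > 10; keeps 1/x away from 0
a = (1 / xc).mean()            # Monte Carlo estimate of E(1/X | X > 10)
print(len(xc), round(a, 4))
```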
Laws of Motion — Third Law of Motion

For every action there is an equal and opposite reaction. Action and reaction never act on the same body.

Velocity of a rocket at any time: $V = v_0 + u \log_{e}\left(\frac{m_{0}}{m}\right)$. When the initial velocity is zero: $V = u \log_{e}\left(\frac{m_{0}}{m}\right)$. Thrust acting on the rocket: $F = -u\left(\frac{dm}{dt}\right)$. For variable velocity: $\int_{v_0}^{v} dv = -\int_{m_0}^{m} u\,\frac{dm}{m}$.

Constraint equations: $x_1 + x_2 = R$, $v_1 + v_2 = 0$, $a_1 + a_2 = 0$; and $x_1 + x_3 = l_1$, $(x_1 - x_3) + (x_4 - x_3) = l_2$, $(x_1 - x_4) + (x_2 - x_4) = l_3$, giving $a_1 + a_3 = 0$, $a_1 + a_4 - 2a_3 = 0$, $a_1 + a_2 - 2a_4 = 0$.

In a spring the restoring force is directly proportional to the elongation: $F \propto x \Rightarrow K = F/x$ ($K$ = spring constant). Springs connected in series: $K_{eq} = \frac{K_1 K_2}{K_1 + K_2}$, with $K_1 x_1 = K_2 x_2$ since both springs carry the same force. Springs connected in parallel: $K_{eq} = K_1 + K_2$.
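A quick numeric check of the series/parallel spring-constant formulas, with assumed illustrative values:

```python
# Assumed spring constants, in N/m (illustrative only).
k1, k2 = 100.0, 300.0

k_series = (k1 * k2) / (k1 + k2)   # series: 1/k_eq = 1/k1 + 1/k2
k_parallel = k1 + k2               # parallel: k_eq = k1 + k2

# a series combination is softer than either spring; parallel is stiffer
assert k_series < min(k1, k2) and k_parallel > max(k1, k2)
```

For these values the series combination gives 75 N/m and the parallel combination 400 N/m, consistent with the physical intuition that series springs stretch more for the same force.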
Disclaimer: Compete.etutor.co may from time to time provide links to third party Internet sites under their respective fair use policy and it may from time to time provide materials from such third parties on this website. These third party sites and any third party materials are provided for viewers convenience and for non-commercial educational purpose only. Compete does not operate or control in any respect any information, products or services available on these third party sites. Compete.etutor.co makes no representations whatsoever concerning the content of these sites and the fact that compete.etutor.co has provided a link to such sites is NOT an endorsement, authorization, sponsorship, or affiliation by compete.etutor.co with respect to such sites, its services, the products displayed, its owners, or its providers.
For every action there is an equal and opposite reaction, and the two act on two different bodies. Mathematically: $\vec{F}_{12} = -\vec{F}_{21}$ |
Electronic Devices — Classification of Metals, Conductors and Semiconductors

The band formed by a series of energy levels containing the valence electrons is known as the VALENCE BAND. The valence band may be partially or completely filled with electrons, depending on the nature of the crystal; it is the highest filled energy band.

The band formed by a series of energy levels containing the conduction electrons is known as the CONDUCTION BAND. The conduction band may be empty or partially filled with electrons; it is the lowest unfilled energy band.

A part of the energy band which is not occupied by any electron is known as the FORBIDDEN BAND. The energy gap between the valence band and the conduction band is called the FORBIDDEN ENERGY GAP. No electron can exist with an energy level in the forbidden gap.

Substances with high electrical conductivity are known as CONDUCTORS; their valence and conduction bands overlap. Ex: metals.

Substances with poor electrical conductivity are INSULATORS; their valence and conduction bands are separated by a large forbidden gap (≈ 5–9 eV). Ex: diamond, whose forbidden energy gap is 6 eV, so $E_g \gg kT$.

Substances whose electrical conductivity lies between that of conductors and insulators are called SEMICONDUCTORS. The forbidden energy gap between the conduction and valence bands is small (≈ 1 eV), so thermal excitation across it is possible. Ex: Si and Ge; for silicon the forbidden energy gap is 1.1 eV and for germanium 0.72 eV.

At 0 K, a pure semiconductor behaves as a perfect insulator. On adding suitable impurities to the semiconductor, its conductivity increases by large amounts. The charge carriers responsible for conduction in semiconductors are free electrons and holes.
For an extrinsic (doped) semiconductor: $n_{e} \cdot n_{h} = n_{i}^{2} \Rightarrow n_{e} = \frac{n_{i}^{2}}{n_{h}}$ |
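A one-line application of the mass-action law, with assumed illustrative values (a commonly quoted textbook $n_i$ for silicon; the doping level is made up):

```python
# Mass-action law n_e * n_h = n_i**2 for a doped semiconductor.
n_i = 1.5e16   # assumed intrinsic carrier density for Si, per m^3
n_h = 4.5e22   # assumed hole density after p-type doping, per m^3

n_e = n_i**2 / n_h   # electron density is suppressed by the doping
```

With these numbers the minority-carrier (electron) density drops to $5 \times 10^{9}\ \text{m}^{-3}$, many orders of magnitude below the hole density.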
Edited:
My question is related to a tutorial I was reading.
The covariance matrix is a block matrix where $C_{xx}$ and $C_{yy}$ are within-set covariance matrices and $C_{xy} = C_{yx}^T$ are between-sets covariance matrices.
$$ \left[\begin{array}{r r} C_{xx} & C_{xy}\\ C_{yx} & C_{yy} \end{array}\right] $$
The tutorial says that the canonical correlations between $x$ and $y$ can be found by solving the eigenvalue equations
$$ C_{xx}^{-1}C_{xy}C_{yy}^{-1}C_{yx} \hat w_x = \rho^2 \hat w_x \\ C_{yy}^{-1}C_{yx}C_{xx}^{-1}C_{xy} \hat w_y = \rho^2 \hat w_y $$
where the eigenvalues are the squared canonical correlations and the eigenvectors $\hat w_x$ and $\hat w_y$ are the normalized canonical correlation basis vectors.
What I do not understand is how the eigenvalue equations are found by using the covariance matrix? Can someone please explain how we get those sets of equations?
Thanks. |
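While not a derivation, a numerical sanity check of the first eigenvalue equation is easy to run (my own sketch, not from the tutorial): build the covariance blocks from synthetic correlated data, solve the eigenproblem, and confirm that the top eigenvalue is the squared correlation of the projected variables, using the standard pairing $\hat w_y \propto C_{yy}^{-1} C_{yx} \hat w_x$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                                  # shared latent signal
X = np.column_stack([z + rng.normal(size=n), rng.normal(size=n)])
Y = np.column_stack([z + rng.normal(size=n), rng.normal(size=n)])
X -= X.mean(axis=0); Y -= Y.mean(axis=0)

# block entries of the joint covariance matrix
Cxx = X.T @ X / n; Cyy = Y.T @ Y / n
Cxy = X.T @ Y / n; Cyx = Cxy.T

# first eigenvalue equation: eigenvalues are squared canonical correlations
M = np.linalg.inv(Cxx) @ Cxy @ np.linalg.inv(Cyy) @ Cyx
rho2, Wx = np.linalg.eig(M)
rho = np.sqrt(np.clip(np.sort(rho2.real)[::-1], 0, None))

# the paired y-direction is w_y ∝ Cyy^{-1} Cyx w_x, so the correlation of
# the projections X w_x and Y w_y must equal the top canonical correlation
wx = Wx[:, np.argmax(rho2.real)].real
wy = np.linalg.inv(Cyy) @ Cyx @ wx
r = np.corrcoef(X @ wx, Y @ wy)[0, 1]
assert abs(abs(r) - rho[0]) < 1e-6
```

The final assertion is the identity the eigenvalue equations encode: maximizing the correlation of the projections leads exactly to that generalized eigenproblem.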
Two questions (more details below):
Let $G$ be a compact Lie group and $X$ a $G$-space such that all stabilizer subgroups are conjugate to a fixed $H \leq G$. Denote by $\pi: X \to X/G$ the quotient map. Under which conditions on $X$ is $\pi$ a Serre fibration?
Let $G$ be as above and $F$ a
free $G$-space. Under which conditions is the canonical map $F_{hG} \to F/G$ from the homotopy quotient to the quotient a weak equivalence?
All spaces are assumed to be CGWH spaces.
Relation between 1. and 2.:
I am most interested in 2. However, if $F$ is a free $G$-space, $EG$ denotes the bar construction, so that $(F \times EG)/G$ models $F_{hG}$, and both $F$ and $F \times EG$ are as in 1. (i.e., the respective quotient maps are Serre fibrations), then it is not hard to see that $F_{hG} \to F/G$ is a weak equivalence, using the long exact sequence of homotopy groups.
So, any condition for 1. that holds for $EG$ and is stable under products provides a condition for 2.
Results known to me:
A sufficient condition for 1. is being a completely regular Hausdorff space (also known as a Tychonoff space) by a result in Bredon's "Introduction to Compact Transformation groups." He shows that the quotient map is a fiber bundle in this case. However, the Tychonoff property does not seem to be preserved by CGWH products, so it will not hold for many constructions which one would like to have results as in 1. or 2. for.
A sufficient condition for 2. should be that $F$ is a (retract of a) free $G$-CW-complex, by using model category arguments. However, this is also a rather severe condition that can be hard or impossible to check in practice.
Thus, I would like to know if there any other known results, preferably with a reference, that improve the sufficient conditions outlined above. Any results for more arbitrary topological groups will also be appreciated. |
The previous part brought forth the different tools for reasoning, proofing and problem solving. In this part, we will study the discrete structures that form the basis of formulating many a real-life problem.
The two discrete structures that we will cover are graphs and trees. A graph is a set of points, called nodes or vertices, which are interconnected by a set of lines called edges. The study of graphs, or graph theory, is an important part of a number of disciplines in the fields of mathematics, engineering and computer science.

Definition − A graph (denoted as $G = (V, E)$) consists of a non-empty set of vertices or nodes $V$ and a set of edges $E$.

Example − Consider the graph $G = (V, E)$ where $V = \lbrace a, b, c, d \rbrace$ and $E = \lbrace \lbrace a, b \rbrace, \lbrace a, c \rbrace, \lbrace b, c \rbrace, \lbrace c, d \rbrace \rbrace$.

Degree of a Vertex − The degree of a vertex $V$ of a graph $G$ (denoted by deg($V$)) is the number of edges incident with the vertex $V$.
Vertex | Degree | Even / Odd
a | 2 | even
b | 2 | even
c | 3 | odd
d | 1 | odd

Even and Odd Vertex − If the degree of a vertex is even, the vertex is called an even vertex; if the degree of a vertex is odd, the vertex is called an odd vertex.

Degree of a Graph − The degree of a graph is the largest vertex degree of that graph. For the above graph, the degree of the graph is 3.

The Handshaking Lemma − In a graph, the sum of the degrees of all the vertices is equal to twice the number of edges.
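The degree table and the Handshaking Lemma can be verified in a few lines (a sketch using the example edge set above):

```python
# The example graph from the Definition above.
E = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]
deg = {v: 0 for v in "abcd"}
for u, v in E:
    deg[u] += 1   # each edge contributes one to the degree of
    deg[v] += 1   # each of its two endpoints

assert deg == {"a": 2, "b": 2, "c": 3, "d": 1}   # matches the table
assert sum(deg.values()) == 2 * len(E)           # Handshaking Lemma
```

The lemma holds because each edge is counted exactly twice, once at each endpoint.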
There are different types of graphs, which we will learn in the following section.
A null graph has no edges. The null graph of $n$ vertices is denoted by $N_n$
A graph is called simple graph/strict graph if the graph is undirected and does not contain any loops or multiple edges.
If in a graph multiple edges between the same set of vertices are allowed, it is called Multigraph. In other words, it is a graph having at least one loop or multiple edges.
A graph $G = (V, E)$ is called a directed graph if the edge set is made of ordered vertex pair and a graph is called undirected if the edge set is made of unordered vertex pair.
A graph is connected if any two vertices of the graph are connected by a path; while a graph is disconnected if at least two vertices of the graph are not connected by a path. If a graph G is disconnected, then every maximal connected subgraph of $G$ is called a connected component of the graph $G$.
A graph is regular if all the vertices of the graph have the same degree. In a regular graph G of degree $r$, the degree of each vertex of $G$ is r.
A graph is called complete graph if every two vertices pair are joined by exactly one edge. The complete graph with n vertices is denoted by $K_n$
If a graph consists of a single cycle, it is called cycle graph. The cycle graph with n vertices is denoted by $C_n$
If the vertex-set of a graph G can be split into two disjoint sets, $V_1$ and $V_2$, in such a way that each edge in the graph joins a vertex in $V_1$ to a vertex in $V_2$, and there are no edges in G that connect two vertices in $V_1$ or two vertices in $V_2$, then the graph $G$ is called a bipartite graph.
A complete bipartite graph is a bipartite graph in which each vertex in the first set is joined to every single vertex in the second set. The complete bipartite graph is denoted by $K_{x,y}$ where the graph $G$ contains $x$ vertices in the first set and $y$ vertices in the second set.
There are mainly two ways to represent a graph −
An adjacency matrix $A[V][V]$ is a 2D array of size $V \times V$, where $V$ is the number of vertices. For an undirected graph, if there is an edge between $V_x$ and $V_y$, then $A[V_x][V_y]=1$ and $A[V_y][V_x]=1$; otherwise the value is zero. For a directed graph, if there is an edge from $V_x$ to $V_y$, then $A[V_x][V_y]=1$; otherwise the value is zero.
Adjacency Matrix of an Undirected Graph
Let us consider the following undirected graph and construct the adjacency matrix −
Adjacency matrix of the above undirected graph will be −
\[A = \begin{bmatrix} 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}\]
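Building such a matrix from an edge list is straightforward. A minimal Python sketch, with the edges read off the undirected matrix shown above (vertices renumbered 0..3):

```python
# Build the adjacency matrix of the undirected example graph from its edge
# list. Vertices are numbered 0..3; the edges are read off the matrix shown.
V = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

A = [[0] * V for _ in range(V)]
for x, y in edges:
    A[x][y] = 1  # undirected: set both
    A[y][x] = 1  # symmetric entries

for row in A:
    print(row)
# [0, 1, 1, 0]
# [1, 0, 1, 0]
# [1, 1, 0, 1]
# [0, 0, 1, 0]

# An undirected adjacency matrix is always symmetric.
assert all(A[i][j] == A[j][i] for i in range(V) for j in range(V))
```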
Adjacency Matrix of a Directed Graph
Let us consider the following directed graph and construct its adjacency matrix −
Adjacency matrix of the above directed graph will be −
\[A = \begin{bmatrix} 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}\]
In an adjacency list, an array $(A[V])$ of linked lists is used to represent the graph $G$ with $V$ vertices. An entry $A[V_x]$ represents the linked list of vertices adjacent to the $V_x$-th vertex. The adjacency list of the undirected graph is as shown in the figure below −
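A minimal Python sketch of the adjacency-list representation, using lists of lists rather than linked lists (the idiomatic choice in Python); the edge list is read off the undirected adjacency-matrix example above:

```python
# Adjacency list for the 4-vertex undirected example graph.
V = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

adj = [[] for _ in range(V)]
for x, y in edges:
    adj[x].append(y)
    adj[y].append(x)  # undirected: add both directions

for v, neighbours in enumerate(adj):
    print(v, "->", neighbours)
# 0 -> [1, 2]
# 1 -> [0, 2]
# 2 -> [0, 1, 3]
# 3 -> [2]
```

For sparse graphs this representation uses far less memory than the $V \times V$ matrix.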
Planar graph − A graph $G$ is called a planar graph if it can be drawn in a plane without any edges crossed. If we draw graph in the plane without edge crossing, it is called embedding the graph in the plane. Non-planar graph − A graph is non-planar if it cannot be drawn in a plane without graph edges crossing.
If two graphs G and H contain the same number of vertices connected in the same way, they are called isomorphic graphs (denoted by $G \cong H$).
It is easier to check non-isomorphism than isomorphism. If any of the following conditions holds, the two graphs are non-isomorphic −
The following graphs are isomorphic −
A homomorphism from a graph $G$ to a graph $H$ is a mapping $h: G \rightarrow H$ (not necessarily bijective) such that $(x, y) \in E(G) \rightarrow (h(x), h(y)) \in E(H)$. It maps adjacent vertices of graph $G$ to adjacent vertices of the graph $H$.
A homomorphism is an isomorphism if it is a bijective mapping.
Homomorphism always preserves edges and connectedness of a graph.
The compositions of homomorphisms are also homomorphisms.
Determining whether there exists a homomorphism from one graph to another is an NP-complete problem.
A connected graph $G$ is called an Euler graph, if there is a closed trail which includes every edge of the graph $G$. An Euler path is a path that uses every edge of a graph exactly once. An Euler path starts and ends at different vertices.
An Euler circuit is a circuit that uses every edge of a graph exactly once. An Euler circuit always starts and ends at the same vertex. A connected graph $G$ is an Euler graph if and only if all vertices of $G$ are of even degree, and a connected graph $G$ is Eulerian if and only if its edge set can be decomposed into cycles.
The above graph is an Euler graph as $“a\: 1\: b\: 2\: c\: 3\: d\: 4\: e\: 5\: c\: 6\: f\: 7\: g”$ covers all the edges of the graph.
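The even-degree criterion is easy to test programmatically. A sketch (it assumes the graph is already known to be connected, which the full check would also require):

```python
# Check the even-degree criterion for an Euler circuit (sketch; assumes the
# graph is already known to be connected).
def has_euler_circuit(edges):
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    return all(d % 2 == 0 for d in degree.values())

# A triangle: every vertex has degree 2, so an Euler circuit exists.
print(has_euler_circuit([(1, 2), (2, 3), (3, 1)]))  # True
# A path on three vertices: the endpoints have odd degree 1.
print(has_euler_circuit([(1, 2), (2, 3)]))          # False
```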
A connected graph $G$ is called Hamiltonian graph if there is a cycle which includes every vertex of $G$ and the cycle is called Hamiltonian cycle. Hamiltonian walk in graph $G$ is a walk that passes through each vertex exactly once.
If $G$ is a simple graph with $n$ vertices, where $n \geq 3$, and $deg(v) \geq \frac{n}{2}$ for each vertex $v$, then the graph $G$ is a Hamiltonian graph. This is called Dirac's theorem.
If $G$ is a simple graph with $n$ vertices, where $n \geq 3$, and $deg(x) + deg(y) \geq n$ for each pair of non-adjacent vertices $x$ and $y$, then the graph $G$ is a Hamiltonian graph. This is called Ore's theorem.
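Dirac's condition is simple enough to check directly. A sketch, with the graph given as an adjacency set per vertex; note the condition is sufficient but not necessary, as the 5-cycle shows:

```python
# Sketch: test Dirac's sufficient condition (deg(v) >= n/2 for all v, n >= 3)
# on a simple graph given as an adjacency set per vertex.
def satisfies_dirac(adj):
    n = len(adj)
    return n >= 3 and all(len(neigh) >= n / 2 for neigh in adj.values())

# K4: every vertex has degree 3 >= 4/2, so Dirac guarantees a Hamiltonian cycle.
k4 = {v: {u for u in range(4) if u != v} for v in range(4)}
print(satisfies_dirac(k4))  # True

# C5 (a 5-cycle) is Hamiltonian but each degree is 2 < 5/2: the condition is
# sufficient, not necessary.
c5 = {v: {(v - 1) % 5, (v + 1) % 5} for v in range(5)}
print(satisfies_dirac(c5))  # False
```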
Naturally, a rotating object has kinetic energy - its parts are moving after all (even if they’re just rotating around a fixed axis). The total kinetic energy of rotation is simply the sum of the kinetic energies of all rotating parts, just like the total translational kinetic energy was the sum of the individual kinetic energies of the constituent particles in Section 4.5. Using that \(v = \omega r\), we can write for a discrete collection of particles:
\[K_{\mathrm{rot}}=\sum_{\alpha} \frac{1}{2} m_{\alpha} v_{\alpha}^{2}=\sum_{\alpha} \frac{1}{2} m_{\alpha} r_{\alpha}^{2} \omega^{2}=\frac{1}{2} I \omega^{2}\]
by the definition 5.4.2 of the moment of inertia I. Analogously we find for a continuous object, using 5.4.3:
\[K_{\mathrm{rot}}=\int_{V} \frac{1}{2} v^{2} \rho \mathrm{d} V=\int_{V} \frac{1}{2} \omega^{2} r^{2} \rho \mathrm{d} V=\frac{1}{2} I \omega^{2}\]
so we arrive at the general rule:
\[K_{\mathrm{rot}}=\frac{1}{2} I \omega^{2}\]
Naturally, the work-energy theorem (Equation 3.2.3) still holds, so we can use it to calculate the work necessary to effect a change in rotational velocity, which by Equation 5.4.1 can also be expressed in terms of the torque (in 2D):
\[W=\Delta K_{\mathrm{rot}}=\frac{1}{2} I\left(\omega_{\mathrm{final}}^{2}-\omega_{\mathrm{initial}}^{2}\right) = \int_{\theta_{\mathrm{initial}}}^{\theta_{\mathrm{final}}} \tau \mathrm{d} \theta\]
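As a quick numerical illustration (the flywheel parameters below are made-up example values, not from the text), the two boxed results combine to give the work needed to spin an object up:

```python
# Sketch: rotational kinetic energy of a uniform solid disk and the work
# needed to spin it up, using K = (1/2) I omega^2. Example numbers only.
import math

m = 10.0             # mass in kg
R = 0.3              # radius in m
I = 0.5 * m * R**2   # moment of inertia of a uniform solid disk

omega_i = 0.0                # initial angular velocity (rad/s)
omega_f = 2 * math.pi * 50   # final: 50 revolutions per second

K_i = 0.5 * I * omega_i**2
K_f = 0.5 * I * omega_f**2

# Work-energy theorem: the work done by the torque equals the change in K_rot.
W = K_f - K_i
print(f"I = {I:.3f} kg m^2, W = {W:.1f} J")
```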
A sequence of numbers \(\left\{ {{a_n}} \right\}\) is called a geometric sequence if the quotient of successive terms is a constant, called the common ratio. Thus \({\large\frac{{{a_{n + 1}}}}{{{a_n}}}\normalsize} = q\) or \({a_{n + 1}} = q{a_n}\) for all terms of the sequence. We assume that \(q \ne 0\) and \(q \ne 1.\)
For any geometric sequence:
\[{a_n} = {a_1}{q^{n - 1}}.\]
A geometric series is the indicated sum of the terms of a geometric sequence. For a geometric series with \(q \ne 1,\)
\[{S_n} = {a_1} + {a_2} + \ldots + {a_n} = {a_1}\frac{{1 - {q^n}}}{{1 - q}},\quad q \ne 1.\]
We say that the geometric series converges if the limit \(\lim\limits_{n \to \infty } {S_n}\) exists and is finite. Otherwise the series is said to diverge.
Let \(S = \sum\limits_{n = 0}^\infty {{a_n}} \) \(= {a_1}\sum\limits_{n = 0}^\infty {{q^n}} \) be a geometric series. Then the series converges to \(\large\frac{{{a_1}}}{{1 - q}}\normalsize\) if \(\left| q \right| \lt 1,\) and the series diverges if \(\left| q \right| \ge 1.\)
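A short numerical illustration of this convergence statement (a sketch; the values of \(a_1\) and \(q\) are arbitrary):

```python
# Partial sums S_n = a1 (1 - q^n) / (1 - q) approach a1 / (1 - q) when |q| < 1.
a1, q = 1.0, -0.37

def partial_sum(n):
    return a1 * (1 - q**n) / (1 - q)

limit = a1 / (1 - q)
for n in [5, 10, 20, 40]:
    print(n, partial_sum(n))
print("limit:", limit)  # 1 / 1.37 = 100/137 ≈ 0.7299...
```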
Solved Problems
Click a problem to see the solution.
Example 1. Find the sum of the first \(8\) terms of the geometric sequence \(3, 6, 12, \ldots\)
Example 2. Find the sum of the series \(1 - 0.37 + 0.37^2 - 0.37^3 + \ldots\)
Example 3. Find the sum of the series …
Example 4. Express the repeating decimal \(0.131313\ldots\) as a rational number.
Example 5. Show that …
Example 6. Solve the equation …
Example 7. The second term of an infinite geometric progression (\(\left| q \right| \lt 1\)) is \(21\) and the sum of the progression is \(112.\) Determine the first term and ratio of the progression.

Example 1. Find the sum of the first \(8\) terms of the geometric sequence \(3, 6, 12, \ldots\)
Solution.
Here \({a_1} = 3\) and \(q = 2.\) For \(n = 8\) we have
\[{S_8} = {a_1}\frac{{1 - {q^8}}}{{1 - q}} = 3 \cdot \frac{{1 - {2^8}}}{{1 - 2}} = 3 \cdot \frac{{1 - 256}}{{ - 1}} = 765.\]

Example 2. Find the sum of the series \(1 - 0.37 + 0.37^2 - 0.37^3 + \ldots\)
Solution.
This is an infinite geometric series with ratio \(q = -0.37.\) Hence, the series converges to
\[S = \sum\limits_{n = 0}^\infty {{q^n}} = \frac{1}{{1 - \left( { - 0.37} \right)}} = \frac{1}{{1 + 0.37}} = \frac{1}{{1.37}} = \frac{{100}}{{137}}.\]
Let $E$ be the event that a smartphone of this model is defective. Let $F_A$ be the event that a smartphone is manufactured by factory A. Similarly for $F_B$ and $F_C$.
Then the overall fraction of defective smartphones of this model can be found as follows.\begin{align*}P(E) &= P(F_A \cap E) + P(F_B \cap E) + P(F_C \cap E)\\&= P(F_A)\cdot P(E \mid F_A) + P(F_B)\cdot P(E \mid F_B) + P(F_C)\cdot P(E \mid F_C)\\&= (0.6)(0.05) + (0.25)(0.02) + (0.15)(0.07)\\&= 0.0455.\end{align*}Thus, the overall defective rate is $4.55\%$.
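The same computation, expressed as a one-line application of the law of total probability:

```python
# Reproduce the computation: P(E) = sum over factories of P(F) * P(E | F).
factories = {
    "A": (0.60, 0.05),  # (market share, defect rate)
    "B": (0.25, 0.02),
    "C": (0.15, 0.07),
}

p_defective = sum(share * rate for share, rate in factories.values())
print(round(p_defective, 4))  # 0.0455
```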
(a) If $AB=B$, then $B$ is the identity matrix. (b) If the coefficient matrix $A$ of the system $A\mathbf{x}=\mathbf{b}$ is invertible, then the system has infinitely many solutions. (c) If $A$ is invertible, then $ABA^{-1}=B$. (d) If $A$ is an idempotent nonsingular matrix, then $A$ must be the identity matrix. (e) If $x_1=0, x_2=0, x_3=1$ is a solution to a homogeneous system of linear equations, then the system has infinitely many solutions.
Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$.If\[A\begin{bmatrix}1 \\3 \\5\end{bmatrix}=B\begin{bmatrix}2 \\6 \\10\end{bmatrix},\]then is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not.
(a) Suppose that a $3\times 3$ system of linear equations is inconsistent. Is the coefficient matrix of the system nonsingular?
(b) Suppose that a $3\times 3$ homogeneous system of linear equations has a solution $x_1=0, x_2=-3, x_3=5$. Is the coefficient matrix of the system nonsingular?
(c) Let $A$ be a $4\times 4$ matrix and let\[\mathbf{v}=\begin{bmatrix}1 \\2 \\3 \\4\end{bmatrix} \text{ and } \mathbf{w}=\begin{bmatrix}4 \\3 \\2 \\1\end{bmatrix}.\]Suppose that we have $A\mathbf{v}=A\mathbf{w}$. Is the matrix $A$ nonsingular?
Suppose that $B=\{\mathbf{v}_1, \mathbf{v}_2\}$ is a basis for $\R^2$. Let $S:=[\mathbf{v}_1, \mathbf{v}_2]$.Note that as the column vectors of $S$ are linearly independent, the matrix $S$ is invertible.
Prove that for each vector $\mathbf{v} \in \R^2$, the vector $S^{-1}\mathbf{v}$ is the coordinate vector of $\mathbf{v}$ with respect to the basis $B$.
Prove that the matrix\[A=\begin{bmatrix}0 & 1\\-1& 0\end{bmatrix}\]is diagonalizable.Prove, however, that $A$ cannot be diagonalized by a real nonsingular matrix.That is, there is no real nonsingular matrix $S$ such that $S^{-1}AS$ is a diagonal matrix.
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017.There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold).The time limit was 55 minutes.
This post is Part 3 and contains Problem 7, 8, and 9.Check out Part 1 and Part 2 for the rest of the exam problems.
Problem 7. Let $A=\begin{bmatrix}-3 & -4\\8& 9\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}-1 \\2\end{bmatrix}$.
(a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$.
(b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$.
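A sketch of the idea behind Problem 7 (a numerical check, not part of the exam): since $A\mathbf{v}=\lambda\mathbf{v}$, repeated application gives $A^3\mathbf{v}=\lambda^3\mathbf{v}$ without ever forming $A^3$.

```python
# Plain-Python 2x2 matrix-vector product for Problem 7.
A = [[-3, -4],
     [8, 9]]
v = [-1, 2]

Av = [A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1]]
print(Av)            # [-5, 10], i.e. 5 * v, so lambda = 5

lam = Av[0] / v[0]   # 5.0
A3v = [lam**3 * x for x in v]  # A^3 v = lambda^3 v
print(A3v)           # [-125.0, 250.0]
```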
Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
Problem 9.Determine whether each of the following sentences is true or false.
(a) There is a $3\times 3$ homogeneous system that has exactly three solutions.
(b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric.
(c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$.
(d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
An $n\times n$ matrix $A$ is called nonsingular if the only vector $\mathbf{x}\in \R^n$ satisfying the equation $A\mathbf{x}=\mathbf{0}$ is $\mathbf{x}=\mathbf{0}$.Using the definition of a nonsingular matrix, prove the following statements.
(a) If $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
(b) Let $A$ and $B$ be $n\times n$ matrices and suppose that the product $AB$ is nonsingular. Then:
The matrix $B$ is nonsingular.
The matrix $A$ is nonsingular. (You may use the fact that a nonsingular matrix is invertible.)
Let $A$ be an $n\times (n-1)$ matrix and let $\mathbf{b}$ be an $(n-1)$-dimensional vector.Then the product $A\mathbf{b}$ is an $n$-dimensional vector.Set the $n\times n$ matrix $B=[A_1, A_2, \dots, A_{n-1}, A\mathbf{b}]$, where $A_i$ is the $i$-th column vector of $A$.
Prove that $B$ is a singular matrix for any choice of $\mathbf{b}$.
For each of the following $3\times 3$ matrices $A$, determine whether $A$ is invertible and find the inverse $A^{-1}$ if exists by computing the augmented matrix $[A|I]$, where $I$ is the $3\times 3$ identity matrix.
Warning: the following may not be considered as a proper answer in that it does not provide a closed form solution to the question, esp. when compared with the previous answers. I however found the approach sufficiently interesting to work out the conditional distribution.
Consider the preliminary question of getting a sequence of $N$ heads out of $k$ throws, with probability $1-p(N,k)$. This is given by the recurrence formula $$ p(N,k) = \begin{cases} 1 &\text{if } k<N\\ \sum_{m=1}^{N} \frac{1}{2^m}p(N,k-m) &\text{else}\\ \end{cases} $$ Indeed, my reasoning is that no consecutive $N$ heads out of $k$ draws can be decomposed according to the first occurrence of a tail out of the first $N$ throws. Conditioning on whether this first tail occurs at the first, second, ..., $N$th draw leads to this recurrence relation.
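As a sanity check (not part of the original answer), the recurrence can be implemented with memoization and compared against brute-force enumeration of all $2^k$ sequences:

```python
# p(N, k): probability of seeing NO run of N consecutive heads in k fair-coin
# throws, computed by the recurrence, then cross-checked by brute force.
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def p(N, k):
    if k < N:
        return 1.0
    return sum(p(N, k - m) / 2**m for m in range(1, N + 1))

def p_brute(N, k):
    ok = 0
    for seq in product("HT", repeat=k):
        if "H" * N not in "".join(seq):
            ok += 1
    return ok / 2**k

for N, k in [(2, 3), (2, 8), (3, 10)]:
    print(N, k, p(N, k), p_brute(N, k))
    assert abs(p(N, k) - p_brute(N, k)) < 1e-12
```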
Next, the probability of getting the first consecutive $N$ heads in $m\ge N$ throws is $$q(N,m) =\begin{cases} \dfrac{1}{2^N} &\text{if }m=N\\ p(N,m-N-1)\, \dfrac{1}{2^{N+1}} &\text{if } m>N \end{cases}$$ The first case is self-explanatory. The second case corresponds to a tail occurring at the $(m-N)$th draw, followed by $N$ heads, with the factor $p(N,m-N-1)$ prohibiting $N$ consecutive heads among the first $m-N-1$ draws.
Now, the probability to get $M$ heads first and the first consecutive $N$ heads in exactly $m\ge N$ throws (and no less) is $$r(M,N,m) = \begin{cases} 1/2^N &\text{if }m=N\\ 0 &\text{if } N<m\le N+M\\ \dfrac{1}{2^{M}}\sum_{r=M+1}^{N}\dfrac{1}{2^{r-M}}q(N,m-r)&\text{if } N+M<m \end{cases}$$ Hence the conditional probability of waiting $m$ steps to get $N$ consecutive heads given the first $M$ consecutive heads is $$s(M,N,m) = \begin{cases} 1/2^{N-M} &\text{if }m=N\\ 0 &\text{if } N<m\le N+M\\ \sum_{r=M+1}^{N}\dfrac{q(N,m-r)}{2^{r-M}}&\text{if } N+M<m \end{cases}$$ The expected number of draws can then be derived by $$\mathfrak{E}(M,N)= \sum_{m=N}^\infty m\, s(M,N,m)$$ or $\mathfrak{E}(M,N)-M$ for the number of additional steps.
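A quick numerical evaluation of these formulas (a sketch, truncating the infinite sum at a large cutoff, which is safe because the tail decays geometrically). For $M=0$ the expectation should reduce to the classical value $2^{N+1}-2$ for the expected number of throws until $N$ consecutive heads:

```python
# Evaluate E(M, N) = sum_m m * s(M, N, m) by truncating at a large cutoff.
from functools import lru_cache

@lru_cache(maxsize=None)
def p(N, k):  # probability of no N-run of heads in k throws
    if k < N:
        return 1.0
    return sum(p(N, k - m) / 2**m for m in range(1, N + 1))

def q(N, m):  # first N-run of heads ends exactly at throw m
    if m < N:
        return 0.0
    if m == N:
        return 1 / 2**N
    return p(N, m - N - 1) / 2**(N + 1)

def s(M, N, m):  # conditional on the first M throws being heads
    if m == N:
        return 1 / 2**(N - M)
    if m <= N + M:
        return 0.0
    return sum(q(N, m - r) / 2**(r - M) for r in range(M + 1, N + 1))

def expectation(M, N, cutoff=400):
    return sum(m * s(M, N, m) for m in range(N, cutoff))

print(expectation(0, 2))  # ~6.0  (= 2^3 - 2)
print(expectation(1, 2))  # ~5.0  (one head is already "banked")
```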
Newton’s second law of motion tells us what a force does: it causes a change in momentum of any particle it acts upon. It does not tell us where the force comes from, nor does it care - which is a very useful feature, as it means that the law applies to all forces. However, we do of course need to know what to put down for the force, so we need some rule to determine it independently. This is where the force laws come in.
2.2.1. Springs: Hooke's Law
One very familiar example of a force is the spring force: you need to exert a force on something to compress it, and (in accordance with Newton’s third law), if you press on something you’ll feel it push back on you. The simplest possible object that you can compress is an ideal spring, for which the force necessary to compress it scales linearly with the compression itself. This relation is known as
Hooke’s law:
\[\boldsymbol{F}_{s}=-k \boldsymbol{x}\]
where x is now the displacement (from rest) and k is the spring constant, measured in newtons per meter. The value of k depends on the spring in question - stiffer springs having higher spring constants.
Hooke’s law gives us another way to
measure forces. We have already defined the unit of force using Newton’s second law of motion, and we can use that to calibrate a spring, i.e., determine its spring constant, by determining the displacement due to a known force. Once we have k, we can simply measure forces by measuring displacements - this is exactly what a spring scale does.
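The calibrate-then-measure procedure can be sketched numerically (the masses and displacements below are illustrative numbers, not from the text):

```python
# Sketch of calibrating a spring scale with Hooke's law F = -k x.
g = 9.81  # m/s^2

# Calibration: a known 0.5 kg mass stretches the spring by 7 cm.
m_cal, x_cal = 0.5, 0.07
k = m_cal * g / x_cal          # spring constant in N/m
print(f"k = {k:.1f} N/m")      # ~70.1 N/m

# Measurement: an unknown weight stretches the same spring by 12 cm.
x = 0.12
F = k * x
print(f"F = {F:.2f} N -> mass ~ {F / g:.3f} kg")
```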
Robert Hooke
Robert Hooke (1635-1703) was a British all-round natural scientist and architect. He discovered the force law named after him in 1660, which he published first as an anagram: ‘ceiiinosssttuv’, so he could claim the discovery without actually revealing it (a fairly common practice at the time); he only provided the solution in 1678: ‘ut tensio, sic vis’ (‘as the extension, so the force’). Hooke made many contributions to the development of microscopes, using them to reveal the structure of plants, coining the word cell for their basic units. Hooke was the curator of experiments of England’s Royal Society for over 40 years, combining this position with a professorship in geometry and the job of surveyor of the city of London after the great fire of 1666. In the latter position he got a strong reputation for hard work and great honesty. At the same time, he was frequently at odds with his contemporaries Isaac Newton and Christiaan Huygens; it is not unlikely that they independently developed similar notions on, among other things, the inverse-square law of gravity.
2.2.2. Gravity: Newton's Law of Gravity
A second and probably even more familiar example is the force due to gravity, at the local scale, i.e., around you, in the approximation that the Earth is flat. Anything that has mass attracts everything else that has mass, and since the Earth is very massive, it attracts all objects in the space around you, including yourself. Since the force of gravity is weak, you won’t feel the pull of your book, but since the Earth is so massive, you do feel its pull. Therefore if you let go of something, it will be accelerated towards the Earth due to its attracting gravitational force. As demonstrated by Galilei (and some guys in spacesuits on a rock we call the moon²), the acceleration of any object due to the force of gravity is the same, and thus the force exerted by the Earth on any object equals the mass of that object times this acceleration, which we call g:
\[\boldsymbol{F}_{g}=m \boldsymbol{g} \label{fg}\]
Because the Earth’s mass is not exactly uniformly distributed, the magnitude of g varies slightly from place to place, but to good approximation equals \(9.81 {m \over s^2}\). It always points down.
Although Equation \ref{fg} for local gravity is handy, its range of application is limited to everyday objects at everyday altitudes - say up to a couple of thousand kilograms and a couple of kilometers above the surface of the Earth, which is tiny compared to Earth’s mass and radius. For larger distances and bodies with larger mass - say the Earth-Moon, or Earth-Sun systems - we need something else, namely Newton’s law of gravitation between two bodies with masses \(m_1\) and \(m_2\), a distance \(r\) apart:
\[\boldsymbol{F}_{\mathrm{G}}=-G \frac{m_{1} m_{2}}{r^{2}} \hat{\boldsymbol{r}} \label{lawg}\]
where \(\hat r\) is the unit vector pointing along the line connecting the two masses, and the proportionality constant \(G={6.67 \cdot 10^{-11} {N \cdot m^2 \over kg^2}}\) is known as the gravitational constant (or Newton’s constant). The minus sign indicates that the force is attractive. Equation \ref{lawg} allows you to actually calculate the gravitational pull that your book exerts on you, and understand why you don’t feel it. It also lets you calculate the value of g - simply fill in the mass and radius of the Earth. If you wish to know the value of g on any other celestial body, you can put in its particulars, and compare with Earth. You’ll find you’d ‘weigh’ 3 times less on Mars and 6 times less on the Moon. Most of the time we can safely assume the Earth is flat and use \ref{fg}, but in particular for celestial mechanics and when considering satellites we’ll need to use \ref{lawg}.
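Filling in the particulars is a one-liner per body. A sketch using standard reference values for the masses and radii (the exact ratios come out slightly rougher than the rounded "3 times" and "6 times" quoted above):

```python
# Compute g = G M / R^2 from Newton's law of gravitation for several bodies.
G = 6.67e-11  # N m^2 / kg^2

bodies = {
    "Earth": (5.97e24, 6.371e6),  # (mass in kg, radius in m)
    "Moon":  (7.35e22, 1.737e6),
    "Mars":  (6.42e23, 3.390e6),
}

g = {name: G * M / R**2 for name, (M, R) in bodies.items()}
for name, value in g.items():
    print(f"{name}: g = {value:.2f} m/s^2")

print("Earth/Moon ratio:", round(g["Earth"] / g["Moon"], 1))  # ~6.0
print("Earth/Mars ratio:", round(g["Earth"] / g["Mars"], 1))  # ~2.6
```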
Galileo Galilei
Galileo Galilei (1564-1642) was an Italian physicist and astronomer, who is widely regarded as one of the founding figures of modern science. Unlike classical philosophers, Galilei championed the use of experiments and observations to validate (or disprove) scientific theories, a practice that is the cornerstone of the scientific method. He pioneered the use of the telescope (newly invented at the time) for astronomical observations, leading to his discovery of mountains on the moon and the four largest moons of Jupiter (now known as the Galilean moons in his honor). On the theoretical side, Galilei argued that Aristotle’s argument that heavy objects fall faster than light ones is incorrect, and that the acceleration due to gravity is equal for all objects (\ref{fg}). Galilei also strongly advocated the heliocentric worldview introduced by Copernicus in 1543, as opposed to the widely-held geocentric view. Unfortunately, the Inquisition thought otherwise, leading to his conviction for heresy with a sentence of life-long house arrest in 1633, a position that was only recanted by the church in 1995.

2.2.3. Electrostatics: Coulomb's Law
Like two masses interact due to the gravitational force, two charged objects interact via Coulomb’s force. Because charge has two possible signs, Coulomb’s force can be both attractive (between opposite charges) and repulsive (between identical charges). Its mathematical form strongly resembles that of Newton’s law of gravity:
\[\boldsymbol{F}_{\mathrm{C}}=k_{e} \frac{q_{1} q_{2}}{r^{2}} \hat{\boldsymbol{r}} \label{coulomb}\]
where \(q_{1}\) and \(q_{2}\) are the signed magnitudes of the charges, r is again the distance between them, and \(k_{e}=8.99 \cdot 10^9 {{N \cdot m^2} \over C^2}\) is Coulomb’s constant. For everyday length and force scales, Coulomb’s force is much larger than the force of gravity.
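To see just how much larger, here is a sketch comparing Equations \ref{coulomb} and \ref{lawg} for two protons (a standard illustration; constants are standard reference values):

```python
# Compare Coulomb repulsion and gravitational attraction between two protons.
k_e = 8.99e9      # N m^2 / C^2
G = 6.67e-11      # N m^2 / kg^2
q_p = 1.602e-19   # proton charge in C
m_p = 1.673e-27   # proton mass in kg
r = 1e-10         # separation in m (about one atomic diameter)

F_coulomb = k_e * q_p**2 / r**2
F_gravity = G * m_p**2 / r**2

print(f"Coulomb: {F_coulomb:.3e} N")
print(f"Gravity: {F_gravity:.3e} N")
print(f"ratio:   {F_coulomb / F_gravity:.3e}")  # ~1e36
```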
Charles-augustin de Coulomb
Charles-Augustin de Coulomb (1736-1806) was a French physicist and military engineer. For most of his working life, Coulomb served in the French army, for which he supervised many construction projects. As part of this job, Coulomb did research, first in mechanics (leading to his law of kinetic friction, Equation \ref{friction}), and later in electricity and magnetism,for which he discovered that the force between charges (and those between magnetic poles) drops off quadratically with their distance (Equation \ref{coulomb}). Near the end of his life, Coulomb participated in setting up the SI system of units.
2.2.4. Friction and Drag
Why did it take the genius of Galilei and Newton to uncover Newton’s first law of motion? Because everyday experience seems to contradict it: if you don’t exert a force, you won’t keep moving, but gradually slow down. You know of course why this is: there’s drag and friction acting on a moving body, which is why it’s much easier (though not necessarily handier) for a car to keep moving on ice than on a regular tarmac road (less friction on ice), and why walking through water is so much harder than walking through air (more drag in water). The medium in which you move can exert a drag force on you, and the surface over which you move exerts friction forces. These of course are the forces responsible for slowing you down when you stop exerting force yourself, so the first law doesn’t apply, as there are forces acting.
For low speeds, the drag force typically scales linearly with the velocity of the moving object. Drag forces for objects moving through a (fluid) medium moreover depend on the properties of the medium (its viscosity \(\eta\)) and the cross-sectional area of the moving object. For a sphere of radius R moving at velocity v, the drag force is given by
Stokes’ law:
\[\boldsymbol{F}_{\mathrm{d}}=-6 \pi \eta R \boldsymbol{v} \label{stokes}\]
The more general version for an object of arbitrary shape is \(F_{d}=\zeta v\), where \(\zeta\) is a proportionality constant. Stokes’ law breaks down at high velocities, for which the drag force scales quadratically with the speed:
\[F_{\mathrm{d}}=\frac{1}{2} \rho c_{\mathrm{d}} A v^{2} \label{drag}\]
where \(\rho\) is the density of the fluid, A the cross-sectional area of the object, v its speed, and \(c_{\mathrm{d}}\) its dimensionless drag coefficient, which depends on the object’s shape and surface properties. Typical values for the drag coefficient are 1.0 for a cyclist, 1.2 for a running person, 0.48 for a Volkswagen Beetle, and 0.19 for a modern aerodynamic car. The direction of the drag force is still opposite that of the motion.
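A classic application of Equation \ref{drag} is the terminal speed of a falling object, where drag balances gravity: \(mg = \frac{1}{2}\rho c_d A v^2\), so \(v = \sqrt{2mg/(\rho c_d A)}\). A sketch with illustrative skydiver-like numbers (not taken from the text):

```python
# Terminal speed under quadratic drag: m g = (1/2) rho c_d A v^2.
import math

def terminal_speed(m, c_d, A, rho=1.2, g=9.81):
    """Speed at which quadratic drag balances gravity (rho: air density)."""
    return math.sqrt(2 * m * g / (rho * c_d * A))

# Illustrative: an 80 kg person falling flat, c_d ~ 1.0, frontal area ~ 0.7 m^2.
v = terminal_speed(m=80.0, c_d=1.0, A=0.7)
print(f"terminal speed ≈ {v:.1f} m/s ({v * 3.6:.0f} km/h)")
```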
Frictional forces are due to two surfaces sliding past each other. It should come as no surprise that the direction of the frictional force is opposite that of the motion, and its magnitude depends on the properties of the surfaces. Moreover, the magnitude of the frictional force also depends on how strongly the two surfaces are pushed against each other - i.e., on the forces they exert on each other, perpendicular to the surface. These forces are of course equal (by Newton’s third law) and are called
normal forces, because they are normal (that is, perpendicular) to the surface. If you stand on a box, gravity exerts a force on you pulling you down, which you ‘transfer’ to a force you exert on the top of the box, and causes an equal but opposite normal force exerted by the top of the box on your feet. If the box is tilted, the normal force is still perpendicular to the surface (it remains normal), but is no longer equal in magnitude to the force exerted on you by gravity. Instead, it will be equal to the component of the gravitational force along the direction perpendicular to the surface (see figure 2.6). We denote normal forces as \(F_n\). Now according to the Coulomb friction law (not to be confused with the Coulomb force between two charged particles), the magnitude of the frictional force between two surfaces satisfies
\[F_{f} \leq \mu F_{n} \label{friction}\]
Here \(\mu\) is the coefficient of friction, which of course depends on the two surfaces, but also on the question whether the two surfaces are moving with respect to each other or not. If they are not moving, i.e., the configuration is static, the appropriate coefficient is called the coefficient of static friction and denoted by \(\mu_s\). The actual magnitude of the friction force will be such that it balances the other forces (more on that in section 2.4). Equation \ref{friction} tells us that this is only possible if the required magnitude of the friction force is less than \(\mu_s F_n\). When things start moving, the static friction coefficient is replaced by the coefficient of kinetic friction \(\mu_k\), which is usually smaller than \(\mu_s\); also in that case the inequality in Equation \ref{friction} gets replaced by an equals sign, and we have
\[F_{f}=\mu_{k} F_{n}.\]
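Combining the normal-force decomposition with Equation \ref{friction} gives a handy rule of thumb for a block on an incline: the required friction is \(mg\sin\theta\) while the maximum available static friction is \(\mu_s mg\cos\theta\), so the block stays put only if \(\tan\theta \le \mu_s\). A sketch (the angle and coefficient values are illustrative):

```python
# Does a block on an incline of angle theta slide? It slides once the needed
# friction m g sin(theta) exceeds the maximum static friction mu_s m g cos(theta),
# i.e. once tan(theta) > mu_s (note the mass drops out).
import math

def block_slides(theta_deg, mu_s):
    return math.tan(math.radians(theta_deg)) > mu_s

print(block_slides(20, 0.5))  # False: tan(20°) ≈ 0.36 <= 0.5, block stays
print(block_slides(35, 0.5))  # True:  tan(35°) ≈ 0.70 > 0.5, block slides
```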
2 To be precise, astronaut David Scott of the Apollo 15 mission in 1971, who dropped both a hammer and a feather and saw them fall at exactly the same rate, as shown in this NASA movie.