Consider the following 2D wave equation:
$$ \left(\frac{d^2}{dx^2}-k_y^2+\omega^2\ V(x)\right)\psi(x)=0 $$ where $V(x+L)=V(x)>0$ is a positive periodic potential, $k_y$ is the wave vector along $y$-direction, $\omega$ is the frequency, and the eigenfunction satisfies the periodic boundary condition $\psi(x+L)=\psi(x)$, $\frac{d}{dx}\psi(x+L)=\frac{d}{dx}\psi(x)$.
From the numerical solutions for several different periodic potentials $V(x)$, I find that all bands approach a linear asymptotic dispersion $\omega_n(k_y)$ as $k_y\rightarrow\infty$. Moreover, the asymptotic group velocities of the different bands seem to be identical and determined only by the maximum of the potential $V(x)$, namely
$$\lim_{k_y\rightarrow\infty}\frac{d\omega_n}{dk_y}=1/\sqrt{V_{\mathrm{max}}}.$$
The following figures show two examples (the first 9 bands in each case). However, I cannot prove this conjecture. Can someone help me prove or disprove it?
The insets in the upper subfigures are the profiles of the potential function $V(x)$ in one period.
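A minimal numerical check of the conjecture (my own sketch; the grid size, example potential and $k_y$ values are assumptions, not from the question). Rewriting the equation as the generalized eigenvalue problem $(-d^2/dx^2 + k_y^2)\psi = \omega^2 V\psi$ with a periodic finite-difference Laplacian:

```python
# Sketch: finite-difference check of the conjectured asymptotic group velocity.
import numpy as np
from scipy.linalg import eigh

N = 400                                   # grid points in one period [0, 1)
x = np.linspace(0.0, 1.0, N, endpoint=False)
h = x[1] - x[0]
V = 1.0 + 0.5 * np.cos(2 * np.pi * x)     # example potential, V_max = 1.5

# Periodic second-derivative matrix (three-point stencil with wrap-around).
D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1))
D2[0, -1] = D2[-1, 0] = 1.0
D2 /= h * h

def omega(ky, band=0):
    # (-d^2/dx^2 + ky^2) psi = omega^2 V psi  ->  generalized eigenproblem A psi = omega^2 B psi
    A = -D2 + ky**2 * np.eye(N)
    w2 = eigh(A, np.diag(V), eigvals_only=True)
    return np.sqrt(w2[band])

ky0, dky = 150.0, 10.0
slope = (omega(ky0 + dky) - omega(ky0)) / dky   # finite-difference group velocity
v_pred = 1.0 / np.sqrt(V.max())                 # conjectured limit 1/sqrt(V_max)
print(slope, v_pred)
```

For this potential the computed slope of the lowest band already agrees with $1/\sqrt{V_{\mathrm{max}}}$ to a few decimal places at $k_y \approx 150$, consistent with the conjecture (the eigenfunctions localize near the maximum of $V$).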
Perloff 3.1-4. WB 3.
You do not need to remember which Greek letter is associated with which elasticity. In questions, elasticities will always be referred to by name, not symbol.
The general formula
Suppose X increases by 1%; the “X elasticity of Y” is the resulting percent change in Y.
Price elasticity of Demand
The (own-) price elasticity of Demand is given by any of the following formulas: $$\varepsilon=\frac{\%\Delta Q}{\%\Delta P}=\frac{\Delta Q}{\Delta P}\times\frac{P}{Q}\tag{1}$$
where $\Delta P$ and $\Delta Q$ are small changes in the price and quantity that keep you on the Demand curve. The term ${\Delta Q}/{\Delta P}$ is one over the slope of the Demand curve (recall that P is on the y-axis). This slope is negative (by the Law of Demand), so the price elasticity of Demand is always negative.
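As a quick numerical illustration (the Demand curve and numbers here are my own, not from the notes), the formula can be applied to two nearby points on a Demand curve:

```python
def price_elasticity(p0, q0, p1, q1):
    """Price elasticity of Demand: (dQ/dP) * (P/Q), using small changes along the curve."""
    return (q1 - q0) / (p1 - p0) * (p0 / q0)

# Linear Demand Q = 100 - 2P: at P = 10, Q = 80, so the elasticity is -2 * (10/80) = -0.25.
eps = price_elasticity(10.0, 80.0, 10.01, 100 - 2 * 10.01)
print(eps)
```

Note the result is negative, as the Law of Demand requires.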
Other elasticities
The calculation of other elasticities is similar.
Applying elasticities
The (own-price) elasticities of demand $\varepsilon$ and of supply $\eta$ are used in evaluating how much producers and consumers are hurt by a per-unit tax. The elasticity of demand $\varepsilon$ is also used in finding the monopoly price markup. The income elasticity of Demand $\xi$ tells us whether a good is normal or inferior and what share of a consumer's spending goes towards it. The XY cross-price elasticity of Demand $\varepsilon_{XY}$ tells us whether two goods are complements or substitutes, and to what degree.
Author Siargey Kachanovich
Witness complex representation
Definitions
The witness complex is a simplicial complex defined on two sets of points in \(\mathbb{R}^D\):
\(W\) set of witnesses and \(L\) set of landmarks.
Even though the set of landmarks \(L\) is often a subset of the set of witnesses \( W\), this is not a requirement for the current implementation.
Landmarks are the vertices of the simplicial complex, and witnesses help decide which simplices are inserted via the predicate "is witnessed".
De Silva and Carlsson in their paper [20] differentiate weak witnessing and strong witnessing:

weak: \( \sigma \subset L \) is witnessed by \( w \in W\) if \( \forall l \in \sigma,\ \forall l' \in \mathbf{L \setminus \sigma},\ d(w,l) \leq d(w,l') \)
strong: \( \sigma \subset L \) is witnessed by \( w \in W\) if \( \forall l \in \sigma,\ \forall l' \in \mathbf{L},\ d(w,l) \leq d(w,l') \)
where \( d(.,.) \) is a distance function.
Both definitions can be relaxed by a real value \(\alpha\):
weak: \( \sigma \subset L \) is \(\alpha\)-witnessed by \( w \in W\) if \( \forall l \in \sigma,\ \forall l' \in \mathbf{L \setminus \sigma},\ d(w,l)^2 \leq d(w,l')^2 + \alpha^2 \)
strong: \( \sigma \subset L \) is \(\alpha\)-witnessed by \( w \in W\) if \( \forall l \in \sigma,\ \forall l' \in \mathbf{L},\ d(w,l)^2 \leq d(w,l')^2 + \alpha^2 \)
which leads to the definitions of the weak relaxed witness complex (or just relaxed witness complex for short) and the strong relaxed witness complex respectively.

[Figure swit.svg: a strongly witnessed simplex]
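As an illustration, the two relaxed predicates can be written down directly from the definitions. This plain Python sketch is mine and is not part of the Gudhi implementation:

```python
import math

def is_weakly_witnessed(sigma, w, landmarks, alpha=0.0):
    """sigma: list of landmark points; w: a witness point; landmarks: all landmarks."""
    others = [l for l in landmarks if l not in sigma]
    return all(math.dist(w, l) ** 2 <= math.dist(w, lp) ** 2 + alpha ** 2
               for l in sigma for lp in others)

def is_strongly_witnessed(sigma, w, landmarks, alpha=0.0):
    """Same, but the inequality must hold against every landmark, not just those outside sigma."""
    return all(math.dist(w, l) ** 2 <= math.dist(w, lp) ** 2 + alpha ** 2
               for l in sigma for lp in landmarks)
```

For example, with landmarks \((0,0), (1,0), (0,1)\), the witness \((0,0)\) weakly witnesses the edge \(\{(0,0),(1,0)\}\) but strongly witnesses only the vertex \((0,0)\) at relaxation 0.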
In the particular case of 0-relaxation, the weak complex corresponds to the witness complex introduced in [20], whereas the 0-relaxed strong witness complex consists of just the vertices and is not very interesting. Hence, for small relaxations the weak version is preferable. However, to capture the homotopy type (for example using Gudhi::persistent_cohomology::Persistent_cohomology) it is often necessary to work with higher filtration values. In this case the strong relaxed witness complex is faster to compute and offers similar results.
Implementation
The two complexes described above are implemented in the corresponding classes
The construction of the Euclidean versions of the complexes follows the same scheme:

1. Construct a search tree on the landmarks (Gudhi::spatial_searching::Kd_tree_search is used internally).
2. Construct the list of nearest landmarks for each witness (the special structure Gudhi::witness_complex::Active_witness is used internally).
3. Construct the witness complex from the nearest-landmark lists.
In the non-Euclidean classes, the lists of nearest landmarks are supposed to be given as input.
The constructors carry out steps 1 and 2, while the function 'create_complex' executes step 3.
Example 1: Constructing weak relaxed witness complex from an off file
Let's start with a simple example, which reads an off point file and computes a weak witness complex.
#include <gudhi/Simplex_tree.h>
#include <gudhi/Euclidean_witness_complex.h>
#include <gudhi/pick_n_random_points.h>
#include <gudhi/Points_off_io.h>
#include <CGAL/Epick_d.h>
#include <cstdlib>
#include <iterator>
#include <string>
#include <vector>

typedef CGAL::Epick_d<CGAL::Dynamic_dimension_tag> K;
typedef typename K::Point_d Point_d;
typedef Gudhi::Simplex_tree<> Simplex_tree;
typedef Gudhi::witness_complex::Euclidean_witness_complex<K> Witness_complex;
typedef std::vector< Point_d > Point_vector;

int main( int argc, char * const argv[]) {
  std::string file_name = argv[1];
  int nbL = atoi(argv[2]), lim_dim = atoi(argv[4]);
  double alpha2 = atof(argv[3]);

  // Read the point cloud from the OFF file.
  Gudhi::Points_off_reader<Point_d> off_reader(file_name);
  Point_vector point_vector(off_reader.get_point_cloud()), landmarks;

  // Pick nbL landmarks at random among the points.
  Gudhi::subsampling::pick_n_random_points(point_vector, nbL, std::back_inserter(landmarks));

  // Build the weak relaxed witness complex up to dimension lim_dim.
  Witness_complex witness_complex(landmarks, point_vector);
  Simplex_tree simplex_tree;
  witness_complex.create_complex(simplex_tree, alpha2, lim_dim);
}
Example 2: Computing persistence using a strong relaxed witness complex
Here is an example of constructing a strong witness complex filtration and computing persistence on it:
#include <gudhi/Simplex_tree.h>
#include <gudhi/Euclidean_strong_witness_complex.h>
#include <gudhi/Persistent_cohomology.h>
#include <gudhi/Points_off_io.h>
#include <gudhi/pick_n_random_points.h>
#include <boost/program_options.hpp>
#include <CGAL/Epick_d.h>
#include <fstream>
#include <iostream>
#include <iterator>
#include <limits>
#include <string>
#include <vector>

using K = CGAL::Epick_d<CGAL::Dynamic_dimension_tag>;
using Point_d = K::Point_d;
using Point_vector = std::vector<Point_d>;
using Simplex_tree = Gudhi::Simplex_tree<>;
using Filtration_value = Simplex_tree::Filtration_value;
using Strong_witness_complex = Gudhi::witness_complex::Euclidean_strong_witness_complex<K>;
using Persistent_cohomology =
    Gudhi::persistent_cohomology::Persistent_cohomology<Simplex_tree,
                                                        Gudhi::persistent_cohomology::Field_Zp>;

void program_options( int argc, char* argv[], int& nbL, std::string& file_name, std::string& filediag,
                      Filtration_value& max_squared_alpha, int& p, int& dim_max,
                      Filtration_value& min_persistence);

int main( int argc, char* argv[]) {
  std::string file_name;
  std::string filediag;
  int p, nbL, lim_d;
  Filtration_value max_squared_alpha, min_persistence;
  program_options(argc, argv, nbL, file_name, filediag, max_squared_alpha, p, lim_d, min_persistence);

  // Read the witnesses from the OFF file.
  Gudhi::Points_off_reader<Point_d> off_reader(file_name);
  if (!off_reader.is_valid()) {
    std::cerr << "Witness complex - Unable to read file " << file_name << "\n";
    exit(-1);
  }
  Point_vector witnesses(off_reader.get_point_cloud()), landmarks;
  std::cout << "Successfully read " << witnesses.size() << " points.\n";
  std::cout << "Ambient dimension is " << witnesses[0].dimension() << ".\n";

  // Choose nbL landmarks among the witnesses.
  Gudhi::subsampling::pick_n_random_points(witnesses, nbL, std::back_inserter(landmarks));

  // Build the strong relaxed witness complex up to dimension lim_d.
  Strong_witness_complex strong_witness_complex(landmarks, witnesses);
  Simplex_tree simplex_tree;
  strong_witness_complex.create_complex(simplex_tree, max_squared_alpha, lim_d);
  std::cout << "The complex contains " << simplex_tree.num_simplices() << " simplices \n";
  std::cout << " and has dimension " << simplex_tree.dimension() << " \n";

  // Compute persistent cohomology with Z/pZ coefficients.
  Persistent_cohomology pcoh(simplex_tree);
  pcoh.init_coefficients(p);
  pcoh.compute_persistent_cohomology(min_persistence);
  if (filediag.empty()) {
    pcoh.output_diagram();
  } else {
    std::ofstream out(filediag);
    pcoh.output_diagram(out);
    out.close();
  }
  return 0;
}
void program_options( int argc, char* argv[], int& nbL, std::string& file_name, std::string& filediag,
                      Filtration_value& max_squared_alpha, int& p, int& dim_max,
                      Filtration_value& min_persistence) {
  namespace po = boost::program_options;
  po::options_description hidden("Hidden options");
  hidden.add_options()(
      "input-file", po::value<std::string>(&file_name),
      "Name of file containing a point set in off format.");

  Filtration_value default_alpha = std::numeric_limits<Filtration_value>::infinity();
  po::options_description visible("Allowed options", 100);
  visible.add_options()
      ("help,h", "produce help message")
      ("landmarks,l", po::value<int>(&nbL),
       "Number of landmarks to choose from the point cloud.")
      ("output-file,o", po::value<std::string>(&filediag)->default_value(std::string()),
       "Name of file in which the persistence diagram is written. Default print in std::cout")
      ("max-sq-alpha,a", po::value<Filtration_value>(&max_squared_alpha)->default_value(default_alpha),
       "Maximal squared relaxation parameter.")
      ("field-charac,p", po::value<int>(&p)->default_value(11),
       "Characteristic p of the coefficient field Z/pZ for computing homology.")
      ("min-persistence,m", po::value<Filtration_value>(&min_persistence)->default_value(0),
       "Minimal lifetime of homology feature to be recorded. Default is 0. Enter a negative value to see zero length "
       "intervals")
      ("cpx-dimension,d", po::value<int>(&dim_max)->default_value(std::numeric_limits<int>::max()),
       "Maximal dimension of the strong witness complex we want to compute.");

  po::positional_options_description pos;
  pos.add("input-file", 1);
  po::options_description all;
  all.add(visible).add(hidden);
  po::variables_map vm;
  po::store(po::command_line_parser(argc, argv).options(all).positional(pos).run(), vm);
  po::notify(vm);

  if (vm.count("help") || !vm.count("input-file")) {
    std::cout << std::endl;
    std::cout << "Compute the persistent homology with coefficient field Z/pZ \n";
    std::cout << "of a Strong witness complex defined on a set of input points.\n \n";
    std::cout << "The output diagram contains one bar per line, written with the convention: \n";
    std::cout << "   p   dim   b   d \n";
    std::cout << "where dim is the dimension of the homological feature,\n";
    std::cout << "b and d are respectively the birth and death of the feature and \n";
    std::cout << "p is the characteristic of the field Z/pZ used for homology coefficients." << std::endl << std::endl;
    std::cout << "Usage: " << argv[0] << " [options] input-file" << std::endl << std::endl;
    std::cout << visible << std::endl;
    exit(-1);
  }
}
Example 3: Computing relaxed witness complex persistence from a distance matrix
In this example we compute the relaxed witness complex persistence from a given table of the closest landmarks to each witness. Each landmark is given as a pair (index, distance).
#define BOOST_PARAMETER_MAX_ARITY 12
#include <gudhi/Simplex_tree.h>
#include <gudhi/Witness_complex.h>
#include <gudhi/Persistent_cohomology.h>
#include <iostream>
#include <utility>
#include <vector>

int main( int argc, char * const argv[]) {
  using Nearest_landmark_range = std::vector<std::pair<std::size_t, double>>;
  using Nearest_landmark_table = std::vector<Nearest_landmark_range>;
  using Witness_complex = Gudhi::witness_complex::Witness_complex<Nearest_landmark_table>;
  using Simplex_tree = Gudhi::Simplex_tree<>;
  using Persistent_cohomology =
      Gudhi::persistent_cohomology::Persistent_cohomology<Simplex_tree,
                                                          Gudhi::persistent_cohomology::Field_Zp>;

  // Nearest landmarks of each of the five witnesses, as (landmark index, distance) pairs.
  Nearest_landmark_table nlt = {
      {{0, 0.0}, {1, 0.1}, {2, 0.2}, {3, 0.3}, {4, 0.4}},
      {{1, 0.0}, {2, 0.1}, {3, 0.2}, {4, 0.3}, {0, 0.4}},
      {{2, 0.0}, {3, 0.1}, {4, 0.2}, {0, 0.3}, {1, 0.4}},
      {{3, 0.0}, {4, 0.1}, {0, 0.2}, {1, 0.3}, {2, 0.4}},
      {{4, 0.0}, {0, 0.1}, {1, 0.2}, {2, 0.3}, {3, 0.4}}};

  // Build the relaxed witness complex with relaxation parameter 0.41.
  Witness_complex witness_complex(nlt);
  Simplex_tree simplex_tree;
  witness_complex.create_complex(simplex_tree, .41);
  std::cout << "Number of simplices: " << simplex_tree.num_simplices() << std::endl;

  // Persistence with Z/11Z coefficients; a negative min persistence keeps zero-length intervals.
  Persistent_cohomology pcoh(simplex_tree);
  pcoh.init_coefficients(11);
  pcoh.compute_persistent_cohomology(-0.1);
  pcoh.output_diagram();
}
It is known (DLMF 25.2.8) that for $\Re s>0$ and for integers $N\geq 1$ $$\zeta(s)=\sum_{k=1}^N\frac{1}{k^s}+\frac{N^{1-s}}{s-1}-s\int_{N}^\infty \frac{x-\lfloor x \rfloor}{x^{s+1}} dx,$$
where $\zeta(s)$ is the
Riemann zeta function.
Then, if I take $\rho= \left(\frac{1}{2}+\epsilon \right)+it$ satisfying $\zeta(\rho)=0$ and such that $0<\epsilon<\frac{1}{2}$ (I don't use this last condition on epsilon), and if there are no mistakes, one has by direct computation with the real part that $$\int_{N}^\infty \frac{x-\lfloor x \rfloor}{x^{\frac{3}{2}+\epsilon}} \left( (\frac{1}{2}+\epsilon)\cos (t\log x)+t\sin (t\log x) \right) dx$$ equals $$\sum_{k=1}^N\frac{\cos (t\log k)}{k^{\frac{1}{2}+\epsilon}}+\frac{N^{\frac{1}{2}-\epsilon}}{(\epsilon-\frac{1}{2})^2+t^2} \left((\epsilon-\frac{1}{2})\cos (t\log N)-t\sin (t\log N) \right). $$ In this context, and with the purpose of learning some useful facts, I have asked myself some questions. One of those was
Question. Compute $$\int \frac{t\sin at}{b^2+t^2}dt,$$ for real numbers $a> 0$ and $b>0$. (If you like, you can use the form $a=\log N$, where $N\geq 2$ is an integer, and $b=\epsilon-\frac{1}{2}$ with $0<\epsilon<\frac{1}{2}$; I have not used this in my evaluation with Wolfram Alpha, and I have no special knowledge of the integral functions $\mathrm{Ci}(x)$ and $\mathrm{Si}(x)$.) Thanks in advance.
My attempt was to explore the other integral (the one involving the cosine function); integration by parts yields as an antiderivative $v=\frac{1}{\epsilon-\frac{1}{2}}\arctan\frac{t}{\epsilon-\frac{1}{2}}$. Wolfram Alpha provides a result for our Question involving $\mathrm{Ci}(x)$, $\mathrm{Si}(x)$ and hyperbolic functions (it isn't necessary that you write out these definitions; I will read up on how they are defined). The comment in brackets in the Question is there because I am interested in evaluating integrals of the form $\int_{-T}^T$.
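As a complementary numerical sketch of mine (assuming SciPy is available): the indefinite integral indeed needs $\mathrm{Si}$/$\mathrm{Ci}$, but for $a,b>0$ the full-line limit has the classical closed form $\int_{-\infty}^{\infty}\frac{t\sin at}{b^2+t^2}\,dt=\pi e^{-ab}$, which SciPy's oscillatory (QAWF) quadrature can confirm:

```python
import numpy as np
from scipy.integrate import quad

a, b = np.log(2.0), 0.2   # illustrative values (e.g. a = log N with N = 2)

# QAWF: integrate f(t) * sin(a t) over [0, inf) for the slowly decaying f(t) = t/(b^2 + t^2).
half, _ = quad(lambda t: t / (b**2 + t**2), 0.0, np.inf, weight='sin', wvar=a)
full = 2.0 * half                      # the integrand is even, so int_{-T}^{T} -> 2 * int_0^T
closed_form = np.pi * np.exp(-a * b)   # pi * e^{-ab}
print(full, closed_form)
```

This is useful for sanity-checking the $\int_{-T}^T$ values as $T$ grows.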
(a) Let $$E_1\subset E_2\subset ... \subset E_n\subset ... \subset I_0=:[a,b]$$
be a sequence of measurable sets and let $f:[a,b]\to\mathbb{R}$ be a non-negative integrable function. Show that $$\lim_{n\to\infty}\int_{E_n}f(x)d\mu=\int_E f(x)d\mu\,\,\,\,\,\,\text{ where }E=\cup_{n=1}^\infty E_n$$
(b) Is the above statement true without the hypothesis "$f$ is non-negative"? Justify your answer.
By Levi's (monotone convergence) theorem I know how to handle non-decreasing sequences of measurable functions, but I don't see how to apply it here.
Any help would be greatly appreciated. Thanks in advance.
I've been trying to integrate this:
$$\int_0^\infty \frac{1}{x^2 + 2x + 2} \mathrm{d} x .$$
Unfortunately I haven't found a way so far. I've been trying to factor the denominator in order to end up with partial fractions. Is there a way to factor it? If so, I can't remember any, so if you could remind me how to do it, it would be nice.
Thanks for your help.
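For what it's worth, the denominator has no real factorization (its discriminant is $4-8<0$), but completing the square, $x^2+2x+2=(x+1)^2+1$, gives the antiderivative $\arctan(x+1)$, so the integral equals $\pi/2-\arctan(1)=\pi/4$. A quick numerical check (assuming SciPy):

```python
import numpy as np
from scipy.integrate import quad

# Completing the square: x^2 + 2x + 2 = (x + 1)^2 + 1, antiderivative arctan(x + 1).
val, _ = quad(lambda x: 1.0 / (x**2 + 2 * x + 2), 0.0, np.inf)
print(val, np.pi / 4)
```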
ISSN:
1930-5311
eISSN:
1930-532X
Journal of Modern Dynamics
October 2007 , Volume 1 , Issue 4
Abstract:
These notes combine an analysis of what the author considers (admittedly subjectively) as the most important trends and developments related to the notion of entropy, with information of more “historical” nature including allusions to certain episodes and discussion of attitudes and contributions of various participants. I directly participated in many of those developments for the last forty-three or forty-four years of the fifty-year period under discussion and on numerous occasions was fairly close to the center of action. Thus, there is also an element of personal recollections with all attendant peculiarities of this genre.
Abstract:
We show that a nondegenerate tight contact form on the 3-sphere has exactly two simple closed Reeb orbits if and only if the differential in linearized contact homology vanishes. Moreover, in this case the Floquet multipliers and Conley-Zehnder indices of the two Reeb orbits agree with those of a suitable irrational ellipsoid in 4-space.
Abstract:
Using the definition of dominated splitting, we introduce the notion of critical set for any dissipative surface diffeomorphism as an intrinsically well-defined object. We obtain a series of results related to this concept.
Abstract:
In this paper we show that there exist irrational polygons $P$ where the number of periodic billiard paths of length less than $n$, $f_P(n)$, grows superlinearly. In fact, if we fix the number of sides of our polygon, for any $k \in \mathbb{N}$ there is an open set of polygons where $f_P(n)$ grows faster than $n \log^k n$.
Abstract:
We consider partially hyperbolic abelian algebraic higher-rank actions on compact homogeneous spaces obtained from simple split Lie groups of nonsymplectic type. We show that smooth, real-valued cocycles trivialize as well as small cocycles taking values in groups of diffeomorphisms of compact manifolds and some semisimple Lie groups. In the second part of the paper, we show local differentiable rigidity for such actions.
Abstract:
We study the Gross--Pitaevskii equation with a delta function potential, $q\delta_0$, where $|q|$ is small, and analyze the solutions for which the initial condition is a soliton with initial velocity $v_0$. We show that up to time $(|q|+v_0^2)^{-1/2}\log(1/|q|)$ the bulk of the solution is a soliton evolving according to the classical dynamics of a natural effective Hamiltonian, $(\xi^2+q\,\operatorname{sech}^2(x))/2$.
Perloff 6, 7; WB 12 and 13.
We examine how firms' costs are determined by two key Supply shifters — technology and input prices. This provides a foundation for the Supply curve in our model of perfect competition (it is equal to the marginal cost curve).
In this section the firm uses two inputs to reach a fixed output goal (the quantity to supply to the market). The firm buys labor L and capital K to produce q units of the good at the lowest cost. This cost minimization problem is very similar to the consumer's utility maximization problem.
Technology
The firm's technology is described by a
production function F(L,K), which gives the output that can be made with any combination of labor L and capital K. We make two assumptions on the technology.
Free disposal and useful inputs
We assume that the production function is (strictly monotone) increasing in both L and K. The production function can never be decreasing in the inputs because the firm can
freely dispose of unwanted inputs. If the production function is always increasing (as we assume), then both inputs are always useful, no matter how much of each is currently being used. What is the free disposal assumption?
Convexity
We also assume that the marginal products MP_L and MP_K are decreasing. This makes the technology's isoquants strictly convex. Or, equivalently, it makes the isoquant slopes — the Marginal Rate of Technical Substitution (MRTS) — fall. What is the Law of Diminishing Marginal Returns?
Graphing technology
All pairs of inputs L and K that reach the same output goal q form a curve called the
isoquant for q. Our assumptions yield isoquants just like the indifference curves seen in the Consumer Choice model. Again, we will consider three cases: Cobb-Douglas, which satisfies both assumptions; Perfect substitutes, which violates strict convexity; and Fixed-proportions, which violates strict monotonicity. Graph the relevant isoquant for fixed-proportions technology, given the output goal and either the L-K proportion or the production function. Graph the relevant isoquant for perfect-substitutes technology, given the output goal and either the L-K trade-off or the production function.
Returns to scale
The firm's technology has increasing returns to scale if a proportional increase in both inputs causes a more-than-proportional increase in outputs. In other words, the
scale elasticity of output must be greater than one: $$\xi_{SCALE}^{\,q}=\frac{\%\Delta q}{\%\Delta \text{inputs}}>1.$$
With increasing returns to scale, the amount of (both) resources required per unit produced falls with output.
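A tiny numerical sketch (my own helper, not from the notes) makes the definition concrete: scale both inputs by a factor t and compare output against the proportional benchmark:

```python
import math

def returns_to_scale(F, L=2.0, K=3.0, t=2.0):
    """Scale both inputs by t > 1 and compare F(tL, tK) with the proportional t * F(L, K)."""
    scaled, proportional = F(t * L, t * K), t * F(L, K)
    if math.isclose(scaled, proportional, rel_tol=1e-9):
        return "constant"
    return "increasing" if scaled > proportional else "decreasing"

print(returns_to_scale(lambda L, K: L**0.5 * K**0.5))  # constant returns to scale
```

For instance, $F = LK$ has increasing returns, while $F = L^{1/4}K^{1/4}$ has decreasing returns.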
Given a simple production function, determine if it has increasing, decreasing or constant returns to scale. Given a complicated production function, determine if it has increasing, decreasing or constant returns to scale.
Costs
All pairs of inputs L and K that cost the same amount wL+rK form a line called the
isocost line. The slope is -w/r. Graph some isocost lines, given the input prices.
Cost minimization
                     Goal                           Must have
Consumer's problem   pick X*, Y* to max utility     spending = I
Producer's problem   pick L*, K* to min spending    output = q
As with utility maximization, we need two numbers, so we need two equations. The first equation is that the production from L and K is equal to the output goal. The second condition depends on which of three cases we are looking at.
Solve the firm's cost-minimization problem given an output goal q, input costs w and r, and…
Cobb-Douglas technology. Fixed-proportions technology. Perfect substitutes technology. Calculate the cost. Calculate the cost as a function of input prices and the output goal.
Perfect substitutes
Compare the cost of producing using only L and using only K.
Fixed-proportions
Use the proportion as the second equation.
Cobb-Douglas
Use the slope-matching approach (MRTS = -w/r) or the cost-share shortcut.
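The slope-matching approach can be sketched numerically (a sketch of mine; the production function $q = L^aK^b$ and the numbers are illustrative, not from the notes):

```python
def cobb_douglas_cost_min(q, w, r, a, b):
    """Minimize wL + rK subject to L^a K^b = q.
    Slope matching: MRTS = (a/b)(K/L) = w/r  =>  K = (b*w)/(a*r) * L;
    substituting into the constraint pins down L."""
    L = (q * ((a * r) / (b * w)) ** b) ** (1.0 / (a + b))
    K = (b * w) / (a * r) * L
    return L, K, w * L + r * K

L, K, cost = cobb_douglas_cost_min(q=10.0, w=4.0, r=1.0, a=0.5, b=0.5)
print(L, K, cost)   # L = 5, K = 20, cost = 40
```

With w/r = 4 and equal exponents, the firm uses four times as much (cheap) capital as (expensive) labor, and the constraint $L^{1/2}K^{1/2}=10$ then gives L = 5, K = 20, for a total cost of 40.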
Economies of scale
There are economies of scale when a proportional increase in output causes a less-than-proportional increase in costs. If we define the scale elasticity of cost as $\xi_{SCALE}=\frac{\Delta C}{\Delta q}\times\frac{q}{C}$, then there are economies of scale whenever this elasticity is less than one. If it is greater than one, there are ‘diseconomies’ of scale.
With economies of scale, the amount of money required per unit produced falls with output. (Note the difference between this and increasing returns to scale.)
Given a cost function, determine if there are economies or diseconomies of scale (or neither).
Short-run cost minimization
Suppose that, in the short-run, capital K cannot be changed. Then the firm will choose labor L to minimize cost.
Solve the firm's short-run cost-minimization problem given an output goal q, input prices w and r, one of the three types of technology, and a fixed level of K; and calculate the cost.
More resources
Perloff, Chapter 6 ``Firms and Production'' and Chapter 7 ``Costs'': Quizzes, Applications
Stiglitz and Walsh, Chapter 7 ``The Firm's Costs'': Narrated lecture with graphs, Quizzes, FAQs and pitfalls
Krugman and Wells, Chapter 8
States of Matter: Gases and Liquids
Gas Laws, Ideal Gas Equation and Kinetic Molecular Theory of Gases

Boyle's Law: $V \propto \frac{1}{P}$ (constant $T$, $n$), i.e. $PV = \text{constant}$.
Charles' Law: $V \propto T$, or $\frac{V_1}{V_2} = \frac{T_1}{T_2}$.
Gay-Lussac's Law: $\frac{P_1}{T_1} = \frac{P_2}{T_2}$.
Avogadro's Law: $V \propto n$.
Ideal Gas Equation: combining $V \propto \frac{1}{P}$, $V \propto T$ and $V \propto n$ gives $V \propto \frac{nT}{P}$, i.e. $PV = nRT$.
Graham's Law of Diffusion (constant $T$ and $P$): $r \propto \frac{1}{\sqrt{d}} \propto \frac{1}{\sqrt{M}} \propto \frac{1}{\sqrt{V.D.}}$, so $\frac{r_1}{r_2} = \frac{\sqrt{d_2}}{\sqrt{d_1}} = \sqrt{\frac{M_2}{M_1}} = \sqrt{\frac{V.D._2}{V.D._1}} = \frac{V_1 t_2}{V_2 t_1}$. For example, $\frac{r_{H_2}}{r_{CH_4}} = \sqrt{\frac{M_{CH_4}}{M_{H_2}}} = \sqrt{\frac{16}{2}} = 2\sqrt{2}$.
Dalton's Law of Partial Pressures: $P_{\text{total}} = P_1 + P_2 + P_3 + \dots$; the partial pressure of component $i$ is $P_i = x_i \times P_{\text{total}}$, where $x_i = \frac{P_i}{P_{\text{total}}}$ is its mole fraction; also $P_{\text{total}} = P_A + P_B + P_C = (n_1 + n_2 + n_3)\frac{RT}{V}$.
Kinetic Gas Equation: $u_{\text{rms}} = \sqrt{\frac{u_1^2 + u_2^2 + \dots + u_n^2}{n}}$; $PV = \frac{1}{3}MC^2$; $K.E. = \frac{3}{2}RT$ (per mole, since $PV = RT$) and $K.E. = \frac{3}{2}kT$ (per molecule).
1. Boyle's law: $PV = K$, or $P_1V_1 = P_2V_2 = K$ (for two or more states of the gas).
2. Boyle's law can also be expressed as $\left(\frac{dP}{dV}\right)_T = \frac{-K}{V^2}$ or $\frac{dV}{V} = \frac{-dP}{P}$.
3. Charles' law: $K = \frac{V}{T}$, or $\frac{V_1}{T_1} = \frac{V_2}{T_2} = K$ (for two or more states of the gas).
4. Charles' law can also be represented as $\left(\frac{dV}{dt}\right)_P = K$.
5. Gay-Lussac's law: $K = \frac{P}{T}$, or $\frac{P_1}{T_1} = \frac{P_2}{T_2} = K$ (for two or more states of the gas).
6. Avogadro's Law: $\frac{V_1}{n_1} = \frac{V_2}{n_2} = \dots = K$.
7. $PV = nRT$. This is called the ideal gas equation.
8. If a number of gases having volumes $V_1, V_2, V_3, \dots$ at pressures $P_1, P_2, P_3, \dots$ are mixed together in a container of volume $V$, then $P_{\text{total}} = \frac{P_1V_1 + P_2V_2 + P_3V_3 + \dots}{V} = (n_1 + n_2 + n_3 + \dots)\frac{RT}{V} \ (\because PV = nRT) = n\frac{RT}{V} \ (\because n = n_1 + n_2 + n_3 + \dots)$.
9. $\frac{r_1}{r_2} = \sqrt{\frac{d_2}{d_1}} = \sqrt{\frac{d_2 \times 2}{d_1 \times 2}} = \sqrt{\frac{M_2}{M_1}}$, where $M_1$ and $M_2$ are the molecular weights of the two gases.
10. For gases $\frac{V_1}{V_2} = \frac{n_1}{n_2}$ and $\frac{n_1}{n_2} \times \frac{t_2}{t_1} = \sqrt{\frac{M_2}{M_1}}$.
11. When equal volumes of the two gases diffuse, i.e. $V_1 = V_2$, then $\frac{r_1}{r_2} = \frac{t_2}{t_1} = \sqrt{\frac{d_2}{d_1}}$.
12. When the two gases diffuse for the same time, i.e. $t_1 = t_2$, then $\frac{r_1}{r_2} = \frac{V_1}{V_2} = \sqrt{\frac{d_2}{d_1}}$.
13. Kinetic gas equation: on the basis of the above postulates, the following equation was derived: $PV = \frac{1}{3}mnu_{\text{rms}}^2$, where $P$ = pressure exerted by the gas, $V$ = volume of the gas, $m$ = average mass of a molecule, $n$ = number of molecules, and $u_{\text{rms}}$ = root mean square (RMS) velocity of the gas.
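As a quick sanity check of the formulas above (the numerical values and tolerances are my own):

```python
import math

R = 8.314  # J / (mol K)

# Ideal gas equation: P = nRT / V for 1 mol at 273.15 K in 22.414 L (about 1 atm).
P = 1.0 * R * 273.15 / 0.022414  # Pa

# Graham's law: r_H2 / r_CH4 = sqrt(M_CH4 / M_H2) = sqrt(16 / 2) = 2 * sqrt(2).
ratio = math.sqrt(16.0 / 2.0)
print(P, ratio)
```

The computed pressure comes out within a fraction of a percent of 101325 Pa, and hydrogen diffuses about 2.83 times as fast as methane.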
So now let’s use the skills from the last two posts to work on mixed fractions. Consider:\[
{3}\frac{3}{8}\hspace{0.33em}{+}\hspace{0.33em}{6}\frac{7}{8}
\]
As the denominators are the same, one can do this problem by adding the whole parts first to get 9, then adding the fractions together to get \[
\frac{10}{8} \] which, after simplifying and converting to a mixed fraction, is equal to \[ 1\frac{1}{4} \]. You then add this to the 9 to get \[ 10\frac{1}{4} \]. But I want to show a more general method that is particularly useful when the denominators are different and/or the problem is a subtraction.
The first step is to convert the mixed fractions to improper ones as discussed in my last post:\[
{3}\frac{3}{8}\hspace{0.33em}{+}\hspace{0.33em}{6}\frac{7}{8}\hspace{0.33em}{=}\hspace{0.33em}\frac{{(}{8}\hspace{0.33em}\times\hspace{0.33em}{3}{)}\hspace{0.33em}{+}\hspace{0.33em}{3}}{8}\hspace{0.33em}{+}\hspace{0.33em}\frac{{(}{8}\hspace{0.33em}\times\hspace{0.33em}{6}{)}\hspace{0.33em}{+}\hspace{0.33em}{7}}{8}\hspace{0.33em}{=}\hspace{0.33em}\frac{27}{8}\hspace{0.33em}{+}\hspace{0.33em}\frac{55}{8}\hspace{0.33em}{=}\hspace{0.33em}\frac{82}{8}
\]
The addition in the last step was easy as the denominators are the same. Now all that is left is to simplify and convert to a mixed fraction. You can convert first then simplify which has the advantage of having smaller numbers to factor, so let’s do that:
\[
\frac{82}{8}\hspace{0.33em}{=}\hspace{0.33em}{82}\hspace{0.33em}\div\hspace{0.33em}{8}\hspace{0.33em}{=}\hspace{0.33em}{10} \] with remainder of 2, so \[
\frac{82}{8}\hspace{0.33em}{=}\hspace{0.33em}{10}\frac{2}{8}
\]
So now all that is left is to simplify the fractional part:\[
\frac{2}{8}\hspace{0.33em}{=}\hspace{0.33em}\frac{\rlap{/}{2}\hspace{0.33em}\times\hspace{0.33em}{1}}{\rlap{/}{2}\hspace{0.33em}\times\hspace{0.33em}{4}}\hspace{0.33em}{=}\hspace{0.33em}\frac{1}{4}
\]
So \[
{3}\frac{3}{8}\hspace{0.33em}{+}\hspace{0.33em}{6}\frac{7}{8}\hspace{0.33em}{=}\hspace{0.33em}{10}\frac{1}{4} \] which is the same answer we got before. Isn’t maths consistent? (and fun!).
Now let’s do one with different denominators:\[
{6}\frac{3}{8}\hspace{0.33em}{-}\hspace{0.33em}{3}\frac{7}{12}
\]
We start by converting the problem into one with improper fractions:\[
{6}\frac{3}{8}\hspace{0.33em}{-}\hspace{0.33em}{3}\frac{7}{12}\hspace{0.33em}{=}\hspace{0.33em}\frac{{(}{8}\hspace{0.33em}\times\hspace{0.33em}{6}{)}\hspace{0.33em}{+}\hspace{0.33em}{3}}{8}\hspace{0.33em}{-}\hspace{0.33em}\frac{{(}{12}\hspace{0.33em}\times\hspace{0.33em}{3}{)}\hspace{0.33em}{+}\hspace{0.33em}{7}}{12}\hspace{0.33em}{=}\hspace{0.33em}\frac{51}{8}\hspace{0.33em}{-}\hspace{0.33em}\frac{43}{12}
\]
Now we need to find a common denominator between 8 and 12. To find the least common denominator (LCD), we first factor both numbers:
8 = 2 × 2 × 2, 12 = 2 × 2 × 3
As 2 and 3 are the only factors present, we now combine them, using each factor the maximum number of times it appears in either factorisation:
LCD = 2 × 2 × 2 × 3 = 24
So the common denominator we will use is 24. We want to convert each of the fractions in the problem into equivalent ones that have 24 as the denominator. For the first fraction, we will multiply top and bottom by 3 to get the 24 in the denominator. We will use 2 for the second fraction to get 24 in the denominator there as well:\[
\begin{array}{l}
{\frac{51}{8}\hspace{0.33em}{=}\hspace{0.33em}\frac{{51}\hspace{0.33em}\times\hspace{0.33em}{3}}{{8}\hspace{0.33em}\times\hspace{0.33em}{3}}\hspace{0.33em}{=}\hspace{0.33em}\frac{153}{24}}\\
{\frac{43}{12}\hspace{0.33em}{=}\hspace{0.33em}\frac{{43}\hspace{0.33em}\times\hspace{0.33em}{2}}{{12}\hspace{0.33em}\times\hspace{0.33em}{2}}\hspace{0.33em}{=}\hspace{0.33em}\frac{86}{24}}
\end{array}
\]
So now the problem becomes:\[
\frac{51}{8}\hspace{0.33em}{-}\hspace{0.33em}\frac{43}{12}\hspace{0.33em}{=}\hspace{0.33em}\frac{153}{24}\hspace{0.33em}{-}\hspace{0.33em}\frac{86}{24}\hspace{0.33em}{=}\hspace{0.33em}\frac{67}{24}
\]
Now convert back to a mixed fraction:
\[
\frac{67}{24}\hspace{0.33em}{=}\hspace{0.33em}{67}\hspace{0.33em}\div\hspace{0.33em}{24}\hspace{0.33em}{=}\hspace{0.33em}{2} \] with a remainder of 19.
So the final answer is \[
2\frac{19}{24} \]. The fractional part cannot be simplified any further.
So the steps to do multiplication would be just to convert the mixed fractions to improper ones, multiply these, and then convert back to a mixed fraction.
The general steps to do arithmetic on mixed fractions are:
1. Convert all fractions to improper fractions.
2. If the problem is a multiplication one or if the denominators are the same, skip to step 5.
3. Find the LCD for the denominators.
4. Convert the improper fractions into equivalent ones with the LCD as the denominator.
5. Do the indicated arithmetic (multiplication, addition, or subtraction) on the improper fractions.
6. Convert the answer back to a mixed fraction if the numerator is greater than the denominator.
7. Simplify the fractional part if needed.
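The steps above can be sketched with Python's fractions module (a sketch of mine, reproducing the two worked examples):

```python
from fractions import Fraction

def mixed_to_improper(whole, num, den):
    """Step 1: (den * whole + num) / den, as in the post."""
    return Fraction(whole * den + num, den)

def improper_to_mixed(f):
    """Steps 6-7: Fraction already stores the simplified form, so just split off the whole part."""
    whole, rem = divmod(f.numerator, f.denominator)
    return whole, Fraction(rem, f.denominator)

# 3 3/8 + 6 7/8 = 10 1/4   (Fraction finds the common denominator automatically)
total = mixed_to_improper(3, 3, 8) + mixed_to_improper(6, 7, 8)
print(improper_to_mixed(total))   # (10, Fraction(1, 4))

# 6 3/8 - 3 7/12 = 2 19/24
diff = mixed_to_improper(6, 3, 8) - mixed_to_improper(3, 7, 12)
print(improper_to_mixed(diff))    # (2, Fraction(19, 24))
```

Both results agree with the hand calculations above. Isn't maths consistent?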
Invited speakers
There will be four invited talks at FCT 2017. The invited speakers are:
Thomas Colcombet (CNRS, University of Paris-Diderot, France)
Martin Dietzfelbinger (Technische Universität Ilmenau, Germany)
Juraj Hromkovič (ETH Zürich, Switzerland)
Anca Muscholl (University of Bordeaux, France)
Jean-Éric Pin (CNRS, University of Paris-Diderot, France)

Thomas Colcombet is currently a full-time senior researcher at the CNRS, working in the Institut de Recherche en Informatique Fondamentale, Paris. After studying at the Ecole Normale Supérieure de Lyon, he received a PhD degree from the University of Rennes. His research is in automata theory in a broad sense, and in particular its connections to algebra, category theory, topology, model theory and algorithmic logic, as well as game theory.
Automata and Program Analysis
Based on joint work with Laure Daviaud and Florian Zuleger.

Martin Dietzfelbinger is a full professor of Computer Science (for complexity theory and efficient algorithms) at Technische Universität Ilmenau. He received a Diplom (Master's degree) in Mathematics from the Ludwig Maximilians Universität Munich in 1983, was awarded a PhD degree in Computer Science from the University of Illinois at Chicago in 1987, and completed his habilitation at the University of Paderborn in 1992. Before moving to Ilmenau in 1998, he was a professor in the Computer Science Department of the University of Dortmund for several years. Nowadays, his main research interests lie in understanding the power of randomization in data structures and algorithms. A substantial part of his work deals with different aspects of foundations and applications of hashing in data structures and algorithmics.
Optimal Dual-Pivot Quicksort: Exact Comparison Count
Based on joint work with Martin Aumüller, Daniel Krenn, Clemens Heuberger, and Helmut Prodinger.

What one has to know when attacking P vs. NP
Based on joint work with Peter Rossmanith.
Mathematics was developed as a strong research instrument with fully verifiable argumentation. We call any consistent and sufficiently powerful formal theory that enables one to verify algorithmically, for any given text, whether or not it is a proof, algorithmically verifiable mathematics (AV-mathematics for short). We say that a decision problem $L \subseteq \Sigma^\ast$ is almost everywhere solvable if for all but finitely many inputs $x \in \Sigma^\ast$ one can prove either “$x \in L$” or “$x \not\in L$” in AV-mathematics.
First, we formalize Rice's theorem on unprovability, claiming that each nontrivial semantic problem about programs is not almost everywhere solvable in AV-mathematics. Using this, we show that there are infinitely many algorithms (programs that are provably algorithms) for which there do not exist proofs that they work in polynomial time or that they do not work in polynomial time. We can prove the same also for linear time or any time-constructible function.
Note that, if $\textsf{P}\ne \textsf{NP}$ is provable in AV-mathematics, then for each algorithm $A$ it is provable that “$A$ does not solve SATISFIABILITY or $A$ does not work in polynomial time”. Interestingly, there exist algorithms for which it is neither provable that they do not work in polynomial time, nor that they do not solve SATISFIABILITY. Moreover, there is an algorithm solving SATISFIABILITY for which one cannot prove in AV-mathematics that it does not work in polynomial time.
Furthermore, we show that $\textsf{P}=\textsf{NP}$ implies the existence of algorithms $X$ for which the true claim “$X$ solves SATISFIABILITY in polynomial time” is not provable in AV-mathematics. Analogously, if the multiplication of two decimal numbers is solvable in linear time, one cannot decide in AV-mathematics for infinitely many algorithms $X$ whether “$X$ solves multiplication in linear time”.
Anca Muscholl is Professor of Computer Science at the University of Bordeaux, France, since 2007. Before moving to Bordeaux she was a professor at the University of Paris VII. She received her PhD in 1994 from the University of Stuttgart, and her habilitation in 1999. Her research interests lie in the area of automata, logic, verification and control of concurrent systems. She spent the academic year 2015/16 as a Hans Fischer Senior Fellow of the Institute for Advanced Study at the Technical University of Munich.

Talk: A tour of recent results on word transducers. Based on joint work with Félix Baschenis, Olivier Gauwin and Gabriele Puppis.
In this talk we survey recent results on regular word transducers. We discuss how some of the classical connections between automata, logic and algebra extend to transducers, as well as some genuine definability questions.
Jean-Éric Pin is currently director of research appointed by the CNRS (Centre National de la Recherche Scientifique) and working at IRIF (Institut de Recherche en Informatique Fondamentale), a joint research unit supported by CNRS and University Paris Diderot. He is a leading scientist in the algebraic theory of automata and languages in connection with logic, topology, and combinatorics, but his research has also been influential in other areas. He has written a number of articles in semigroup theory, most of them motivated by problems arising from automata theory. He is the author of two reference books in automata theory: Varieties of Formal Languages and the monograph Infinite Words, co-authored with D. Perrin. He is also a fellow of the EATCS.

Talk: Some results of Zoltán Ésik on regular languages. A talk in memory of Zoltán Ésik.
I am reading from Degree Theory by N. Lloyd, and in one section he writes about the degree of a holomorphic map of several complex variables. I am unsure about one of the steps in a proof he gives.
The setup:
$\mathbb{C}^n$ is the vector space of complex $n$-tuples $z=(z_1,\ldots,z_n)$. $D$ is a bounded, open subset of $\mathbb{C}^n$. $\mathscr{H}(\bar{D})$ is the vector space of holomorphic mappings of $\bar{D}$ into $\mathbb{C}$; that is, $\phi=(\phi_1,\ldots,\phi_n)$ is in $\mathscr{H}(\bar{D})$ if each $\phi_j = u_j+iv_j$ is a holomorphic function of $n$ complex variables, so they satisfy the Cauchy-Riemann equations on some neighborhood of $\bar{D}$: $$ \frac{\partial u_k}{\partial x_j} = \frac{\partial v_k}{\partial y_j},~~~ \frac{\partial u_k}{\partial y_j} = -\frac{\partial v_k}{\partial x_j},~~~ (k,j=1,\ldots,n). $$ Now, let $p\in D$, $p\notin \partial D$. Consider a point $\zeta$ such that $\phi(\zeta)=p$. The matrix of $\phi'(\zeta)$ as a mapping of $\mathbb{C}^n$ into itself with respect to the standard basis is given by $A=(\alpha_{kj})$, where $$ \alpha_{kj} = \frac{\partial u_k}{\partial z_j} = \frac{\partial u_k}{\partial x_j}-i\frac{\partial u_k}{\partial y_j}. $$ So far so good.
Now, Lloyd claims that the determinant $\det \phi'(\zeta) \geq 0$. There isn't much explanation as to why, other than to consider the Jordan normal form. In fact, this doesn't even seem true for $n=1$. I know of the result that the square of this determinant is the determinant of the corresponding real Jacobian matrix, but I'm not sure if that's what the author is referring to.
I don't have much of an idea of what the eigenvalues of the Jacobian of a holomorphic map of several complex variables should be. Would someone please explain to me why the determinant should be nonnegative? (I have minimal knowledge of several complex variables, and it is not a focus in this book.)
(More context, in case it helps: we are proving the following theorem:
Suppose $D$ is an open bounded subset of $\mathbb{C}^n$, and $\phi\in\mathscr{H}(\bar{D})$. If $p\notin\phi(\partial D)$, then $\deg(\phi,D,p)\geq 0$.
Here $\deg(\phi, D, p)$ is the degree of $\phi$ with respect to $D$ and $p$.)
Skills to Develop
- To learn the distinction between independent samples and paired samples.
- To learn how to construct a confidence interval for the difference in the means of two distinct populations using paired samples.
- To learn how to perform a test of hypotheses concerning the difference in the means of two distinct populations using paired samples.
Suppose chemical engineers wish to compare the fuel economy obtained by two different formulations of gasoline. Since fuel economy varies widely from car to car, if the mean fuel economy of two independent samples of vehicles run on the two types of fuel were compared, then even if one formulation were better than the other, the large variability from vehicle to vehicle might make any difference arising from the difference in fuel difficult to detect. Just imagine one random sample having many more large vehicles than the other. Instead of independent random samples, it would make more sense to select pairs of cars of the same make and model and driven under similar circumstances, and compare the fuel economy of the two cars in each pair. Thus the data would look something like Table \(\PageIndex{1}\), where the first car in each pair is operated on one formulation of the fuel (call it Type \(1\) gasoline) and the second car is operated on the second (call it Type \(2\) gasoline).
Make and Model | Car 1 | Car 2
Buick LaCrosse | 17.0 | 17.0
Dodge Viper | 13.2 | 12.9
Honda CR-Z | 35.3 | 35.4
Hummer H3 | 13.6 | 13.2
Lexus RX | 32.7 | 32.5
Mazda CX-9 | 18.4 | 18.1
Saab 9-3 | 22.5 | 22.5
Toyota Corolla | 26.8 | 26.7
Volvo XC90 | 15.1 | 15.0
The first column of numbers form a sample from Population \(1\), the population of all cars operated on Type \(1\) gasoline; the second column of numbers form a sample from Population \(2\), the population of all cars operated on Type \(2\) gasoline. It would be incorrect to analyze the data using the formulas from the previous section, however, since the samples were not drawn independently. What is correct is to compute the difference in the numbers in each pair (subtracting in the same order each time) to obtain the third column of numbers as shown in Table \(\PageIndex{2}\) and treat the differences as the data. At this point, the new sample of differences \(d_1=0.0,\cdots ,d_9=0.1\) in the third column of Table \(\PageIndex{2}\) may be considered as a random sample of size \(n=9\) selected from a population with mean \(\mu _d=\mu _1-\mu _2\)
Make and Model | Car 1 | Car 2 | Difference
Buick LaCrosse | 17.0 | 17.0 | 0.0
Dodge Viper | 13.2 | 12.9 | 0.3
Honda CR-Z | 35.3 | 35.4 | -0.1
Hummer H3 | 13.6 | 13.2 | 0.4
Lexus RX | 32.7 | 32.5 | 0.2
Mazda CX-9 | 18.4 | 18.1 | 0.3
Saab 9-3 | 22.5 | 22.5 | 0.0
Toyota Corolla | 26.8 | 26.7 | 0.1
Volvo XC90 | 15.1 | 15.0 | 0.1
Note carefully that although it does not matter in which order the subtraction is done, it must be done in the same order for all pairs. This is why there are both positive and negative quantities in the third column of numbers in Table \(\PageIndex{2}\).
Confidence Intervals
When the population of differences is normally distributed the following formula for a confidence interval for \(\mu _d=\mu _1-\mu _2\) is valid:

\[\bar{d}\pm t_{\alpha /2}\frac{s_d}{\sqrt{n}}\]

where there are \(n\) pairs, \(\bar{d}\) is the mean and \(s_d\) is the standard deviation of their differences.
The number of degrees of freedom is \(df=n-1\). The population of differences must be normally distributed.
Example \(\PageIndex{1}\)
Using the data in Table \(\PageIndex{1}\) construct a point estimate and a \(95\%\) confidence interval for the difference in average fuel economy between cars operated on Type \(1\) gasoline and cars operated on Type \(2\) gasoline.
Solution:
We have referred to the data in Table \(\PageIndex{1}\) because that is the way that the data are typically presented, but we emphasize that with paired sampling one immediately computes the differences, as given in Table \(\PageIndex{2}\) , and uses the differences as the data.
The mean and standard deviation of the differences are
\[\bar{d}=\frac{\sum d}{n}=\frac{1.3}{9}=0.1\bar{4}\]
\[s_d=\sqrt{\frac{\sum d^2-\frac{1}{n}(\sum d)^2}{n-1}}=\sqrt{\frac{0.41-\frac{1}{9}(1.3)^2}{8}}=0.1\bar{6}\]
The point estimate of \(\mu _1-\mu _2=\mu _d\) is

\[\bar{d}=0.1\bar{4}\approx 0.14\]
In words, we estimate that the average fuel economy of cars using Type \(1\) gasoline is \(0.14\) mpg greater than the average fuel economy of cars using Type \(2\) gasoline.
To apply the formula for the confidence interval, we must find \(t_{\alpha /2}\). The \(95\%\) confidence level means that \(\alpha =1-0.95=0.05\), so \(t_{\alpha /2}=t_{0.025}\). From the row of Figure 7.1.6 labeled \(df=n-1=8\) we read \(t_{0.025}=2.306\). Thus
\[\bar{d}\pm t_{\alpha /2}\frac{s_d}{\sqrt{n}}=0.14\pm 2.306\left ( \frac{0.1\bar{6}}{\sqrt{9}} \right )\approx 0.14\pm 0.13\]
We are \(95\%\) confident that the difference in the population means lies in the interval \([0.01,0.27]\), in the sense that in repeated sampling \(95\%\) of all intervals constructed from the sample data in this manner will contain \(\mu _d=\mu _1-\mu _2\).
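The whole computation can be scripted. Here is a minimal sketch using only the Python standard library; the critical value \(t_{0.025}=2.306\) (with \(df=8\)) is the one quoted in the text, not computed:

```python
import math
import statistics

# Paired data from Table 2: (Car 1, Car 2) fuel economies.
pairs = [(17.0, 17.0), (13.2, 12.9), (35.3, 35.4), (13.6, 13.2),
         (32.7, 32.5), (18.4, 18.1), (22.5, 22.5), (26.8, 26.7),
         (15.1, 15.0)]

# Reduce each pair to a single difference (same subtraction order every time).
d = [a - b for a, b in pairs]

n = len(d)
dbar = statistics.mean(d)   # point estimate of mu_d
sd = statistics.stdev(d)    # sample standard deviation of the differences
t = 2.306                   # t_{0.025} with df = n - 1 = 8, from the t-table

margin = t * sd / math.sqrt(n)
print(f"{dbar:.2f} +/- {margin:.2f}")  # 0.14 +/- 0.13
```

The point is that with paired sampling the data set is the list of differences, so the rest is the one-sample confidence interval formula.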
Hypothesis Testing
Testing hypotheses concerning the difference of two population means using paired difference samples is done precisely as it is done for independent samples, although now the null and alternative hypotheses are expressed in terms of \(\mu _d\) instead of \(\mu _1-\mu _2\). Thus the null hypothesis will always be written
\[H_0:\mu _d=D_0\]
The three forms of the alternative hypothesis, with the terminology for each case, are:
Form of \(H_a\) | Terminology
\(H_a:\mu_d<D_0\) | Left-tailed
\(H_a:\mu_d>D_0\) | Right-tailed
\(H_a:\mu_d\neq D_0\) | Two-tailed
The same conditions on the population of differences that was required for constructing a confidence interval for the difference of the means must also be met when hypotheses are tested. Here is the standardized test statistic that is used in the test.
Standardized Test Statistic for Hypothesis Tests Concerning the Difference Between Two Population Means: Paired Difference Samples
\[T=\frac{\bar{d}-D_0}{s_d/\sqrt{n}}\]
where there are \(n\) pairs, \(\bar{d}\) is the mean and \(s_d\) is the standard deviation of their differences.
The test statistic has Student’s \(t\)-distribution with \(df=n-1\) degrees of freedom.
The population of differences must be normally distributed.
Example \(\PageIndex{2}\): using the critical value approach
Using the data of Table \(\PageIndex{2}\) test the hypothesis that mean fuel economy for Type \(1\) gasoline is greater than that for Type \(2\) gasoline against the null hypothesis that the two formulations of gasoline yield the same mean fuel economy. Test at the \(5\%\) level of significance using the critical value approach.
Solution:
The only part of the table that we use is the third column, the differences.
Step 1. Since the differences were computed as Car 1 minus Car 2, better fuel economy with Type \(1\) gasoline corresponds to \(\mu _d>0\). Thus the test is
\[H_0:\mu _d=0\\ \text{vs.}\\ H_a:\mu _d>0\; \; @\; \; \alpha =0.05\]
(If the differences had been computed in the opposite order then the alternative hypothesis would have been \(H_a:\mu _d<0\).)
Step 2. Since the sampling is in pairs the test statistic is
\[T=\frac{\bar{d}-D_0}{s_d/\sqrt{n}}\]
Step 3. We have already computed \(\bar{d}\) and \(s_d\) in the previous example. Inserting their values and \(D_0=0\) into the formula for the test statistic gives
\[T=\frac{\bar{d}-D_0}{s_d/\sqrt{n}}=\frac{0.1\bar{4}}{0.1\bar{6}/\sqrt{9}}=2.600\]
Step 4. Since the symbol in \(H_a\) is “\(>\)” this is a right-tailed test, so there is a single critical value, \(t_\alpha =t_{0.05}\) with \(8\) degrees of freedom, which from the row labeled \(df=8\) in Figure 7.1.6 we read off as \(1.860\). The rejection region is \([1.860,\infty )\). Step 5. As shown in Figure \(\PageIndex{1}\) the test statistic falls in the rejection region. The decision is to reject \(H_0\). In the context of the problem our conclusion is:
Figure \(\PageIndex{1}\): Rejection Region and Test Statistic for Example \(\PageIndex{2}\)
The data provide sufficient evidence, at the \(5\%\) level of significance, to conclude that the mean fuel economy provided by Type \(1\) gasoline is greater than that for Type \(2\) gasoline.
Example \(\PageIndex{3}\): using the p-value approach
Perform the test in Example \(\PageIndex{2}\) using the p-value approach.
Solution:
The first three steps are identical to those in Example \(\PageIndex{2}\).
Step 4. Because the test is one-tailed the observed significance or \(p\)-value of the test is just the area of the right tail of Student’s \(t\)-distribution, with \(8\) degrees of freedom, that is cut off by the test statistic \(T=2.600\). We can only approximate this number. Looking in the row of Figure 7.1.6 headed \(df=8\), the number \(2.600\) is between the numbers \(2.306\) and \(2.896\), corresponding to \(t_{0.025}\) and \(t_{0.010}\). The area cut off by \(t=2.306\) is \(0.025\) and the area cut off by \(t=2.896\) is \(0.010\). Since \(2.600\) is between \(2.306\) and \(2.896\) the area it cuts off is between \(0.025\) and \(0.010\). Thus the \(p\)-value is between \(0.025\) and \(0.010\). In particular it is less than \(0.025\). See Figure \(\PageIndex{2}\).
Figure \(\PageIndex{2}\): \(p\)-Value for Example \(\PageIndex{3}\)

Step 5. Since \(p<0.025<0.05\), \(p<\alpha\), so the decision is to reject the null hypothesis:
The data provide sufficient evidence, at the \(5\%\) level of significance, to conclude that the mean fuel economy provided by Type \(1\) gasoline is greater than that for Type \(2\) gasoline.
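The test statistic of Examples \(\PageIndex{2}\) and \(\PageIndex{3}\) can be verified with a short script (a sketch; the critical value \(1.860\) is the tabulated \(t_{0.05}\) with \(df=8\) quoted above):

```python
import math
import statistics

# Differences (Car 1 minus Car 2) from Table 2.
d = [0.0, 0.3, -0.1, 0.4, 0.2, 0.3, 0.0, 0.1, 0.1]

n = len(d)
dbar = statistics.mean(d)
sd = statistics.stdev(d)

# Standardized test statistic for H0: mu_d = 0 vs Ha: mu_d > 0.
T = (dbar - 0.0) / (sd / math.sqrt(n))

t_crit = 1.860  # t_{0.05}, df = 8, read from the t-table in the text
print(round(T, 3), T >= t_crit)  # 2.6 True
```

`T >= t_crit` being true is exactly the "test statistic falls in the rejection region" conclusion of Step 5.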
The paired two-sample experiment is a very powerful study design. It bypasses many unwanted sources of “statistical noise” that might otherwise influence the outcome of the experiment, and focuses on the possible difference that might arise from the one factor of interest.
If the sample is large (meaning that \(n\geq 30\)) then in the formula for the confidence interval we may replace \(t_{\alpha /2}\) by \(z_{\alpha /2}\).
Key Takeaway

- When the data are collected in pairs, the differences computed for each pair are the data that are used in the formulas.
- A confidence interval for the difference in two population means using paired sampling is computed using a formula in the same fashion as was done for a single population mean.
- The same five-step procedure used to test hypotheses concerning a single population mean is used to test hypotheses concerning the difference between two population means using paired sampling. The only difference is in the formula for the standardized test statistic.
Skills to Develop
- To learn how to construct a confidence interval for the difference in the means of two distinct populations using small, independent samples.
- To learn how to perform a test of hypotheses concerning the difference between the means of two distinct populations using small, independent samples.
When one or the other of the sample sizes is small, as is often the case in practice, the Central Limit Theorem does not apply. We must then impose conditions on the population to give statistical validity to the test procedure. We will assume that both populations from which the samples are taken have a normal probability distribution and that their standard deviations are equal.
Confidence Intervals
When the two populations are normally distributed and have equal standard deviations, the following formula for a confidence interval for \(\mu _1-\mu _2\) is valid.
\(100(1-\alpha )\%\) Confidence Interval for the Difference Between Two Population Means: Small, Independent Samples
\[(\bar{x_1}-\bar{x_2})\pm t_{\alpha /2}\sqrt{s_{p}^{2}\left ( \dfrac{1}{n_1}+\dfrac{1}{n_2} \right )} \label{eq1}\]
where
\[s_{p}^{2}=\dfrac{(n_1-1)s_{1}^{2}+(n_2-1)s_{2}^{2}}{n_1+n_2-2}\]
The number of degrees of freedom is
\[df=n_1+n_2-2.\]
The samples must be independent, the populations must be normal, and the population standard deviations must be equal. “Small” samples means that either \(n_1<30\) or \(n_2<30\).
The quantity \(s_{p}^{2}\) is called the pooled sample variance. It is a weighted average of the two estimates \(s_{1}^{2}\) and \(s_{2}^{2}\) of the common variance \(\sigma _{1}^{2}=\sigma _{2}^{2}\) of the two populations.
Example \(\PageIndex{1}\)
A software company markets a new computer game with two experimental packaging designs. Design \(1\) is sent to \(11\) stores; their average sales the first month is \(52\) units with sample standard deviation \(12\) units. Design \(2\) is sent to \(6\) stores; their average sales the first month is \(46\) units with sample standard deviation \(10\) units. Construct a point estimate and a \(95\%\) confidence interval for the difference in average monthly sales between the two package designs.
Solution:
The point estimate of \(\mu _1-\mu _2\) is

\[\bar{x_1}-\bar{x_2}=52-46=6\]
In words, we estimate that the average monthly sales for Design \(1\) is \(6\) units more per month than the average monthly sales for Design \(2\).
To apply the formula for the confidence interval (Equation \ref{eq1}), we must find \(t_{\alpha /2}\). The \(95\%\) confidence level means that \(\alpha =1-0.95=0.05\) so that \(t_{\alpha /2}=t_{0.025}\). From Figure 7.1.6, in the row with the heading \(df=11+6-2=15\) we read that \(t_{0.025}=2.131\). From the formula for the pooled sample variance we compute

\[s_{p}^{2}=\dfrac{(11-1)(12)^{2}+(6-1)(10)^{2}}{11+6-2}=\dfrac{1940}{15}=129.\bar{3}\]
Thus

\[(\bar{x_1}-\bar{x_2})\pm t_{\alpha /2}\sqrt{s_{p}^{2}\left ( \dfrac{1}{n_1}+\dfrac{1}{n_2} \right )}=6\pm 2.131\sqrt{129.\bar{3}\left ( \dfrac{1}{11}+\dfrac{1}{6} \right )}\approx 6\pm 12.3\]
We are \(95\%\) confident that the difference in the population means lies in the interval \([-6.3,18.3]\), in the sense that in repeated sampling \(95\%\) of all intervals constructed from the sample data in this manner will contain \(\mu _1-\mu _2\). Because the interval contains both positive and negative values the statement in the context of the problem is that we are \(95\%\) confident that the average monthly sales for Design \(1\) is between \(18.3\) units higher and \(6.3\) units lower than the average monthly sales for Design \(2\).
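For the record, the interval in this example can be reproduced with a short script. This is an illustrative sketch using only the Python standard library; the critical value \(2.131\) is taken from the table, not computed:

```python
import math

# Summary statistics from the example (Design 1 and Design 2).
n1, xbar1, s1 = 11, 52.0, 12.0
n2, xbar2, s2 = 6, 46.0, 10.0

# Pooled sample variance: weighted average of the two sample variances.
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)

t = 2.131  # t_{0.025} with df = n1 + n2 - 2 = 15, from the t-table
margin = t * math.sqrt(sp2 * (1 / n1 + 1 / n2))

lo = (xbar1 - xbar2) - margin
hi = (xbar1 - xbar2) + margin
print(f"[{lo:.1f}, {hi:.1f}]")  # [-6.3, 18.3]
```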
Hypothesis Testing
Testing hypotheses concerning the difference of two population means using small samples is done precisely as it is done for large samples, using the following standardized test statistic. The same conditions on the populations that were required for constructing a confidence interval for the difference of the means must also be met when hypotheses are tested.
Standardized Test Statistic for Hypothesis Tests Concerning the Difference Between Two Population Means: Small, Independent Samples
\[T=\dfrac{(\bar{x_1}-\bar{x_2})-D_0}{\sqrt{s_{p}^{2}\left ( \dfrac{1}{n_1}+\dfrac{1}{n_2}\right )}}\]
where
\[ s_{p}^{2}=\dfrac{(n_1-1)s_{1}^{2}+(n_2-1)s_{2}^{2}}{n_1+n_2-2}\]
The test statistic has Student’s t-distribution with \(df=n_1+n_2-2\) degrees of freedom.
The samples must be independent, the populations must be normal, and the population standard deviations must be equal. “Small” samples means that either \(n_1<30\) or \(n_2<30\).
Example \(\PageIndex{2}\)
Refer to Example \(\PageIndex{1}\) concerning the mean sales per month for the same computer game but sold with two package designs. Test at the \(1\%\) level of significance whether the data provide sufficient evidence to conclude that the mean sales per month of the two designs are different. Use the critical value approach.
Solution: Step 1. The relevant test is
\[H_0: \mu _1-\mu _2=0\]
vs.
\[H_a: \mu _1-\mu _2\neq 0\; \; @\; \; \alpha =0.01\]
Step 2. Since the samples are independent and at least one sample size is less than \(30\), the test statistic is
\[T=\dfrac{(\bar{x_1}-\bar{x_2})-D_0}{\sqrt{s_{p}^{2}\left ( \dfrac{1}{n_1}+\dfrac{1}{n_2}\right )}}\]
which has Student’s \(t\)-distribution with \(df=11+6-2=15\) degrees of freedom.
Step 3. Inserting the data and the value \(D_0=0\) into the formula for the test statistic gives
\[T=\dfrac{(\bar{x_1}-\bar{x_2})-D_0}{\sqrt{s_{p}^{2}\left ( \dfrac{1}{n_1}+\dfrac{1}{n_2}\right )}}=\dfrac{(52-46)-0}{\sqrt{129.\bar{3}\left ( \dfrac{1}{11}+\dfrac{1}{6} \right )}}=1.040\]
Step 4. Since the symbol in \(H_a\) is “\(\neq\)” this is a two-tailed test, so there are two critical values, \(\pm t_{\alpha /2}=\pm t_{0.005}\). From the row in Figure 7.1.6 with the heading \(df=15\) we read off \(t_{0.005}=2.947\). The rejection region is \((-\infty ,-2.947]\cup [2.947,\infty )\).

Figure \(\PageIndex{1}\): Rejection Region and Test Statistic for Example \(\PageIndex{2}\)

Step 5. As shown in Figure \(\PageIndex{1}\) the test statistic does not fall in the rejection region. The decision is not to reject \(H_0\). In the context of the problem our conclusion is:
The data do not provide sufficient evidence, at the \(1\%\) level of significance, to conclude that the mean sales per month of the two designs are different.
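The test statistic of this example can likewise be checked numerically (a sketch; summary statistics and the critical value \(2.947\) are those given above):

```python
import math

n1, xbar1, s1 = 11, 52.0, 12.0
n2, xbar2, s2 = 6, 46.0, 10.0
D0 = 0.0

# Pooled variance and standardized test statistic for H0: mu1 - mu2 = 0.
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
T = ((xbar1 - xbar2) - D0) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

t_crit = 2.947  # t_{0.005}, df = 15, from the t-table
print(round(T, 3), abs(T) >= t_crit)  # 1.04 False
```

`abs(T) >= t_crit` being false corresponds to the test statistic not falling in the two-tailed rejection region.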
Example \(\PageIndex{3}\)
Perform the test of Example \(\PageIndex{2}\) using the \(p\)-value approach.
Solution:
The first three steps are identical to those in Example \(\PageIndex{2}\).
Step 4. Because the test is two-tailed the observed significance or \(p\)-value of the test is double the area of the right tail of Student’s \(t\)-distribution, with \(15\) degrees of freedom, that is cut off by the test statistic \(T=1.040\). We can only approximate this number. Looking in the row of Figure 7.1.6 headed \(df=15\), the number \(1.040\) is between the numbers \(0.866\) and \(1.341\), corresponding to \(t_{0.200}\) and \(t_{0.100}\). The area cut off by \(t=0.866\) is \(0.200\) and the area cut off by \(t=1.341\) is \(0.100\). Since \(1.040\) is between \(0.866\) and \(1.341\) the area it cuts off is between \(0.200\) and \(0.100\). Thus the \(p\)-value (since the area must be doubled) is between \(0.400\) and \(0.200\).

Step 5. Since \(p>0.200>0.01\), \(p>\alpha\), so the decision is not to reject the null hypothesis:
The data do not provide sufficient evidence, at the \(1\%\) level of significance, to conclude that the mean sales per month of the two designs are different.
Key Takeaway

- In the context of estimating or testing hypotheses concerning two population means, “small” samples means that at least one sample is small. In particular, even if one sample is of size \(30\) or more, if the other is of size less than \(30\) the formulas of this section must be used.
- A confidence interval for the difference in two population means is computed using a formula in the same fashion as was done for a single population mean.
Here is my answer to a similar question posed a few days ago:
One of the most important functions in analysis is the function$$\arg:\quad \dot{\mathbb R}^2\to{\mathbb R}/(2\pi),\quad{\rm resp.},\quad \dot{\mathbb C}\to{\mathbb R}/(2\pi),$$written as $$\arg(x,y), \quad \arg(x+iy),\quad{\rm or}\quad \arg(z),$$depending on context. It gives the angle you are talking about "up to multiples of $2\pi$". If you remove the negative $x$-axis (resp., negative real axis) from $\dot{\mathbb R}^2$ (resp., from $\dot{\mathbb C}$) you can single out the principal value of the argument, denoted by ${\rm Arg}(x,y)$, which is then a well defined continuous real-valued function on this restricted domain, taking values in $\ ]-\pi,\pi[\ $. One has$${\rm Arg}(x,y)=\arctan{y\over x}\qquad(x>0)$$and similar formulas in other half planes.
Even though the values of $\arg$ are not "ordinary real numbers" the gradient of $\arg$ is a well defined vector field in $\dot{\mathbb R}^2$, and is given by$$\nabla\arg(x,y)=\left({-y\over x^2+y^2},\>{x\over x^2+y^2}\right)\qquad\bigl((x,y)\ne(0,0)\bigr)\ .$$ |
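A quick numerical sanity check of this gradient formula (a sketch; `math.atan2` is used here as one concrete branch of $\arg$, valid away from the negative $x$-axis):

```python
import math

def grad_arg(x, y):
    """Closed-form gradient of arg at (x, y) != (0, 0)."""
    r2 = x * x + y * y
    return (-y / r2, x / r2)

def numeric_grad(x, y, h=1e-6):
    """Central finite differences of atan2 (away from the branch cut)."""
    gx = (math.atan2(y, x + h) - math.atan2(y, x - h)) / (2 * h)
    gy = (math.atan2(y + h, x) - math.atan2(y - h, x)) / (2 * h)
    return (gx, gy)

x, y = 1.2, -0.7
g1, g2 = grad_arg(x, y), numeric_grad(x, y)
print(all(abs(a - b) < 1e-5 for a, b in zip(g1, g2)))  # True
```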
I will provide an answer but from a different perspective, and hopefully convince you that there is information in a density matrix which has no classical counterpart. Furthermore this can hence be considered a quantum component, and it can be shown that this information is stored as the eigenvectors of $\rho$.
I will give an example of how this manifests. The Fisher information $I(\theta)$ is a statistic from classical probability theory which quantifies how quickly one can learn about a parameter $\theta$ which characterises a probability distribution $p(\theta)$.
Specifically, the variance of an unbiased classical estimator $\hat{\theta}$ respects the Cramér-Rao bound$$\mathrm{var}(\hat{\theta})\geq \frac{1}{I(\theta)}$$
The additivity of information means that if you sample the distribution $n$ times, collecting measurements each time, the expected error $\Delta \theta_c = \sqrt{\mathrm{var}(\hat{\theta})}$ of any estimator goes like$$\Delta \theta_c \propto \frac1{\sqrt{n}}$$
This is recognised in the scaling of the standard deviation $\sigma$ in results like the central limit theorem.
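To make the classical $1/\sqrt{n}$ scaling concrete: for a Bernoulli($\theta$) sample the Fisher information is $I(\theta)=1/(\theta(1-\theta))$, and $n$ i.i.d. samples carry $nI(\theta)$, so the Cramér-Rao error floor is $\sqrt{1/(nI(\theta))}$. A small sketch (the Bernoulli model is my illustrative choice, not part of the discussion above):

```python
import math

def bernoulli_fisher(theta):
    """Fisher information of a single Bernoulli(theta) sample."""
    return 1.0 / (theta * (1.0 - theta))

theta = 0.3
# Cramer-Rao lower bound on the standard error after n iid samples:
# sqrt(1 / (n * I(theta))) -- the classical 1/sqrt(n) scaling.
bounds = {n: math.sqrt(1.0 / (n * bernoulli_fisher(theta))) for n in (100, 400)}

# Quadrupling n halves the bound, as expected for 1/sqrt(n) scaling.
print(bounds[100] / bounds[400])  # 2.0
```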
We can define a quantum analogue of the Fisher information, $J(\theta)$, which satisfies an analogous bound, known as the quantum Cramér-Rao bound.
However it is found that by permitting entanglement between classically independent sampling events, the bound is much better, and after having collected a dataset of $n$ measurements, the best possible quantum estimator is bound only by the error$$\Delta \theta_q \propto \frac1{n}$$
This shows that a general quantum state $\rho$ can definitely support statistics which a classical probability distribution cannot.
The quantum Fisher information of a density matrix which depends on a parameter $\theta$$$\rho(\theta) = \sum_i p_i(\theta) |\psi_i(\theta)\rangle\langle\psi_i(\theta)|$$can be seen to separate into several contributions, one of which is the classical Fisher information of the spectrum $p_i(\theta)$, another of which is a Fubini-Study-like term which accounts for the information stored in the basis $|\psi_i(\theta)\rangle$. The possibility of (super-classical) quantum scaling depends entirely on the existence of this quantum term.
Alternatively stated, in terms of the behaviour of the Fisher information statistic and its quantum analogues, a density matrix $\rho$ supports non-classical behaviour only if the basis set $|\psi_i(\theta)\rangle$ contains information relevant to the measurement, and in this sense, information stored in this way may be considered non-classical.
Useful stuff
If you are interested in some of the topics discussed here, see this good review for an explanation: http://arxiv.org/pdf/1102.2318v1.pdf
See this for an accessible but mathematical explanation of the QFI: http://arxiv.org/pdf/0804.2981.pdf

This post imported from StackExchange Physics at 2014-04-11 15:21 (UCT), posted by SE-user ComptonScattering
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
So I will get back to roots of numbers, but let’s first look at the rules for combining exponents.
Now an integer exponent means that you are multiplying the base together as many times as the exponent indicates. That is,
\[
{3}^{5}\hspace{0.33em}{=}\hspace{0.33em}{3}\hspace{0.33em}\times\hspace{0.33em}{3}\hspace{0.33em}\times\hspace{0.33em}{3}\hspace{0.33em}\times\hspace{0.33em}{3}\hspace{0.33em}\times\hspace{0.33em}{3} \]
The exponent “5” says to multiply the base “3” five times. This immediately suggests our first rule of exponents:\[
{x}^{m}{x}^{n}\hspace{0.33em}{=}\hspace{0.33em}{x}^{{m}{+}{n}}
\]
That is, when the same base with exponents are multiplied together, you can simplify this by adding the exponents. You can readily see this with the example above:\[
{3}^{2}\hspace{0.33em}\times\hspace{0.33em}{3}^{3}\hspace{0.33em}{=}\hspace{0.33em}{(}{3}\hspace{0.33em}\times\hspace{0.33em}{3}{)}\hspace{0.33em}\times\hspace{0.33em}{(}{3}\hspace{0.33em}\times\hspace{0.33em}{3}\hspace{0.33em}\times\hspace{0.33em}{3}{)}\hspace{0.33em}{=}\hspace{0.33em}{3}^{{2}{+}{3}}\hspace{0.33em}{=}\hspace{0.33em}{3}^{5}
\]
So this rule makes sense. Now let’s look at another example to motivate the next exponent rule:\[
\frac{{3}^{3}}{{3}^{2}}\hspace{0.33em}{=}\hspace{0.33em}\frac{\rlap{/}{3}\hspace{0.33em}\times\hspace{0.33em}\rlap{/}{3}\hspace{0.33em}\times\hspace{0.33em}{3}}{\rlap{/}{3}\hspace{0.33em}\times\hspace{0.33em}\rlap{/}{3}}\hspace{0.33em}{=}\hspace{0.33em}\frac{3}{1}\hspace{0.33em}{=}\hspace{0.33em}{3}^{1}
\]
Now you would normally leave out the exponent “1” in the final answer but I left it there so you can see the following rule in action:\[
\frac{{x}^{m}}{{x}^{n}}\hspace{0.33em}{=}\hspace{0.33em}{x}^{{m}{-}{n}}
\]
Can you see how this rule works for the last example? By the way, these rules work whether or not the base is a known number.\[
{x}^{13}{x}^{7}\hspace{0.33em}{=}\hspace{0.33em}{x}^{20}{,}\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\frac{{x}^{13}}{{x}^{7}}\hspace{0.33em}{=}\hspace{0.33em}{x}^{6}
\]
Now so far, I have limited myself to using positive integers as the exponents. It turns out that any number can be used as an exponent but it is not clear what a negative or non-integer exponent means. Let’s first look at negative integer exponents.
In the division example above with the base “3”, I specifically put the “3” with the larger exponent in the numerator. What if I reversed these:\[
\frac{{3}^{2}}{{3}^{3}}\hspace{0.33em}{=}\hspace{0.33em}\frac{\rlap{/}{3}\hspace{0.33em}\times\hspace{0.33em}\rlap{/}{3}}{\rlap{/}{3}\hspace{0.33em}\times\hspace{0.33em}\rlap{/}{3}\hspace{0.33em}\times\hspace{0.33em}{3}}\hspace{0.33em}{=}\hspace{0.33em}\frac{1}{3}\hspace{0.33em}{=}\hspace{0.33em}\frac{1}{{3}^{1}}
\]
But according to the division rule of exponents:\[
\frac{{3}^{2}}{{3}^{3}}\hspace{0.33em}{=}\hspace{0.33em}{3}^{{2}{-}{3}}\hspace{0.33em}{=}\hspace{0.33em}{3}^{{-}{1}}
\]
This suggests that \[
{3}^{{-}{1}}\hspace{0.33em}{=}\hspace{0.33em}\frac{1}{{3}^{1}} \] and this is correct. A base raised to a negative exponent is equivalent to the same base raised to the positive exponent in the denominator. It works in the other direction as well: you can move factors between the numerator and denominator as long as you change the sign of the exponent:
\[
\begin{array}{l}
{{x}^{{-}{6}}\hspace{0.33em}{=}\hspace{0.33em}\frac{1}{{x}^{6}}}\\
{\frac{{x}^{{-}{6}}\hspace{0.33em}{y}^{5}}{{z}^{{-}{7}}}\hspace{0.33em}{=}\hspace{0.33em}\frac{{y}^{5}{z}^{7}}{{x}^{6}}}
\end{array}
\]
And the multiplication rule works as well:\[
{x}^{7}{x}^{{-}{4}}\hspace{0.33em}{=}\hspace{0.33em}{x}^{{7}{-}{4}}\hspace{0.33em}{=}\hspace{0.33em}{x}^{3}
\]
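These rules are easy to check mechanically. Here is a sketch using Python's `fractions.Fraction` so that negative exponents stay exact:

```python
from fractions import Fraction

x = Fraction(3)

# Product rule: x^m * x^n == x^(m+n)
assert x**13 * x**7 == x**20

# Quotient rule: x^m / x^n == x^(m-n), including a negative result
assert x**2 / x**3 == x**(-1) == Fraction(1, 3)

# Moving a factor across the fraction bar flips the exponent's sign
assert x**-6 == 1 / x**6

print("all exponent rules check out")
```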
That takes care of the numbers on the tick marks of the number line as exponents, but what about the numbers in between? That will be the topic of my next post.
I failed to solve this question on the exam and am finding it very difficult. I would be glad if you can help me solve it. How should I start?
Most introductory texts on DSP have solved examples of this kind, so you should probably get some good text and work with it. But I'll give you a few hints anyway.
You know that
$$X(e^{j\Omega})=\sum_{n=-\infty}^{\infty}x[n]e^{-jn\Omega}\tag{1}$$
So for (a) you get
$$X(e^{j\Omega})=\sum_{n=-\infty}^{\infty}\left(\frac34\right)^n u[n-4] e^{-jn\Omega}\tag{2}$$
Since $u[n-4]$ equals zero for $n<4$, (2) simplifies to
$$X(e^{j\Omega})=\sum_{n=4}^{\infty}\left(\frac34\right)^n e^{-jn\Omega}= \sum_{n=0}^{\infty}\left(\frac34\right)^{n+4}e^{-j(n+4)\Omega}=\\= \left(\frac34\right)^4e^{-j4\Omega}\sum_{n=0}^{\infty}\left(\frac34\right)^{n}e^{-jn\Omega}\tag{3}$$
where you can evaluate the final sum using the formula for a geometric series with $r=\frac34e^{-j\Omega}$.
For (b) you have
$$X(e^{j\Omega})=\sum_{n=-\infty}^{\infty}a^{|n|}e^{-jn\Omega}=\sum_{n=0}^{\infty}a^ne^{-jn\Omega}+\sum_{n=-\infty}^{-1}a^{-n}e^{-jn\Omega}=\\= \sum_{n=0}^{\infty}a^ne^{-jn\Omega}+\sum_{n=1}^{\infty}a^{n}e^{jn\Omega}= \sum_{n=0}^{\infty}a^ne^{-jn\Omega}+ae^{j\Omega}\sum_{n=0}^{\infty}a^{n}e^{jn\Omega}$$
Now you have both sums in the desired form to apply the formula:
$$X(e^{j\Omega})=\frac{1}{1-ae^{-j\Omega}}+\frac{ae^{j\Omega}}{1-ae^{j\Omega}}$$
You can combine the two terms, which should finally result in
$$X(e^{j\Omega})=\frac{1-a^2}{1-2a\cos\Omega+a^2}$$
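As a sanity check (my own addition, not part of the original solution), you can compare the closed form for (b) against a truncated version of the defining sum; the values a = 0.5 and Ω = 1 are arbitrary:

```python
import cmath
import math

a, omega = 0.5, 1.0

# Truncated DTFT sum of x[n] = a^|n| over n = -200..200; the tail beyond
# |n| = 200 is of order a^200 and therefore negligible
truncated = sum((a ** abs(n)) * cmath.exp(-1j * n * omega)
                for n in range(-200, 201))

# Closed form: (1 - a^2) / (1 - 2a*cos(omega) + a^2)
closed = (1 - a ** 2) / (1 - 2 * a * math.cos(omega) + a ** 2)

assert abs(truncated - closed) < 1e-9
```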
Please remember that this is basic stuff that you must learn to do yourself. The only way to learn it is by solving the problems which you find at the end of each chapter of any introductory DSP text. |
Suppose we have two distributions given by the vectors $p=(p_1,\dots,p_n)$ and $q=(q_1,\dots,q_n)$, with $p_i,q_i\geq 0$, and $\sum_i p_i = \sum_i q_i=1$.
Now suppose that for some $\alpha\in(0,\infty)$,
$$H_{\alpha}(p)=H_{\alpha}(q),$$
where $H_{\alpha}(p)=\frac{1}{1-\alpha}\log\sum_i p_i^{\alpha}$ is the Rényi entropy of $p$. What can we say about $p$ and $q$? Of course if $q$ is a permutation of $p$, then their entropies will be equal. But is the converse true? |
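For a single fixed α the converse fails: matching one Rényi entropy only pins down $\sum_i p_i^\alpha$, and many non-permutation pairs share that value. A quick numerical counterexample for α = 2 (the distributions below are my own choice, not from the question):

```python
import math

def renyi(p, alpha):
    # Renyi entropy H_alpha(p) = (1/(1-alpha)) * log(sum p_i^alpha)
    return math.log(sum(x ** alpha for x in p)) / (1 - alpha)

p = (0.5, 0.5, 0.0)

# q solves a^2 + b^2 + c^2 = 0.5 with a = 0.6 fixed; b and c are the two
# roots of t^2 - 0.4 t + 0.01 = 0, so the squares sum to 0.5 exactly
b = (0.4 + math.sqrt(0.12)) / 2
q = (0.6, b, 0.4 - b)

assert abs(sum(q) - 1) < 1e-12          # q is a distribution
assert abs(renyi(p, 2) - renyi(q, 2)) < 1e-12  # equal Renyi-2 entropies
assert sorted(p) != sorted(q)           # but q is not a permutation of p
```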
I had the impression that there might be proofs of the irrationality of $\sqrt{2}$ that showed that $$ \left|\frac a b - \sqrt{2} \right| \ge (\text{something possibly depending on $a$ or $b$}) >0 $$ where $a,b\in\mathbb{Z}$. But the one I saw in Wikipedia's article on the square root of $2$ spoke of whether the multiplicity of $2$ as a factor of $a$ or $b$ is even or odd, and that makes it seem not all that different from the old-fashioned proof that we all learned in childhood (you know, in 500 BC when we were children). Is there a short and simple proof of that kind to which this last criticism will not apply?
Hint: $|\sqrt2 - \frac{a}{b}| > \frac{1}{3b^2}$ for all rational $a/b$
Oops, sorry, I do not think there is a proof of the irrationality of $\sqrt2$ that uses this inequality. I saw 5 or 6 proofs of the irrationality of $\sqrt2$ on the first day of my class, but most of them are in the article that Nbubis posted; the inequality above, from the next day, is used to study how well irrational and transcendental numbers can be approximated by rationals.
PS: One cute proof of the irrationality of $\sqrt{2}$ uses the fundamental theorem of arithmetic, i.e. every integer factors uniquely into a product of primes. In a perfect square, every prime appears an even number of times. If $a^2 = 2 b^2$, the prime $2$ appears an even number of times on the left and an odd number of times on the right, a contradiction.
You can use that if $a^2=2b^2$ then $(2b-a)^2=2(a-b)^2$ to show that if $\frac ab $ is a square root of 2 then so is $\frac {2b-a}{a-b}$, which has a smaller denominator. So there is no such fraction with smallest denominator.
Also if $\frac ab$ is an approximation to $\sqrt 2$, then $\frac {a+2b}{a+b}$ is in general a better one, which may be what you are remembering.
If $$2-\frac{a^2}{b^2}=\epsilon$$ then
$$\frac{(a+2b)^2}{(a+b)^2}-2 = \frac{2b^2-a^2}{(a+b)^2}=\epsilon \left(\frac b{a+b}\right)^2$$ |
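The error-contraction identity above can be checked exactly with rational arithmetic. A small sketch (my own addition), iterating the map $(a,b)\mapsto(a+2b,\,a+b)$ from the classical approximation $3/2$; note the sign flip, because I define the error as $2-(a/b)^2$ for the new fraction as well:

```python
from fractions import Fraction

def eps(a, b):
    # error of the approximation a/b, defined as 2 - (a/b)^2
    return 2 - Fraction(a, b) ** 2

a, b = 3, 2   # start from 3/2
for _ in range(3):
    # the identity, with a sign flip from using 2 - (a/b)^2 on both sides
    assert eps(a + 2 * b, a + b) == -eps(a, b) * Fraction(b, a + b) ** 2
    a, b = a + 2 * b, a + b

# the error has shrunk along 3/2 -> 7/5 -> 17/12 -> 41/29
assert abs(eps(a, b)) < abs(eps(3, 2))
```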
How can I solve this problem without using L'Hôpital's rule?$$\lim_{x→0}\frac{(\sin(x)-x)(\cos(3x)-1)}{x(e^x -1)}$$
Thanks in advance!
You can use the two basic limits $$ \lim_{t\to0}\frac{1-\cos t}{t^2}=\frac{1}{2}, \qquad \lim_{x\to0}\frac{e^x-1}{x}=1 $$ so you can rewrite your limit as $$ \lim_{x\to0}-9(\sin x-x)\frac{1-\cos(3x)}{(3x)^2}\frac{x}{e^x-1} $$ and conclude the limit is …
Hints.
Use the fact that $$ \lim_{x\to x_0}f(x)g(x)= \lim_{x\to x_0}f(x)\cdot\lim_{x\to x_0}g(x) $$ provided both limits on the right hand side exist (which generalizes to a product of three or more factors).
What is $\lim_{x\to0}(\sin x-x)$? |
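Hints aside, a quick numerical check (my own addition) is consistent with where the hints lead: with $(\sin x - x)\sim -x^3/6$, $(\cos 3x - 1)\sim -9x^2/2$, and the denominator $\sim x^2$, the whole expression shrinks roughly like $x^3$:

```python
import math

def f(x):
    return (math.sin(x) - x) * (math.cos(3 * x) - 1) / (x * (math.exp(x) - 1))

# evaluate at x = 0.1, 0.01, 0.001; magnitudes drop by about 1000x per step,
# consistent with an overall x^3 behaviour and a limit of 0
values = [abs(f(10 ** -k)) for k in (1, 2, 3)]
assert values[0] > values[1] > values[2]
assert values[2] < 1e-6
```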
So, I have a book here, which has an example for context sensitive grammar, and the grammar is the famous $0^n1^n2^n$ , and it has:
$$ \begin{align} S &\rightarrow 0BS2 \mid 012 \\ B0 &\rightarrow 0B \\ B1 &\rightarrow 11 \\ \end{align} $$
I agree that the above works, but what is wrong with just saying $S\rightarrow 0S12 \mid \epsilon$?
The above also generates the same number of $0$s as $1$s and $2$s. |
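Expanding the proposed rule mechanically shows what it actually derives; the letter counts match, but the order does not. A quick sketch (my own addition):

```python
def derive(n):
    # apply S -> 0S12 exactly n times, then S -> epsilon
    s = "S"
    for _ in range(n):
        s = s.replace("S", "0S12")
    return s.replace("S", "")

assert derive(1) == "012"      # fine for n = 1
assert derive(2) == "001212"   # but this is 0^2 (12)^2, not 001122
assert derive(2) != "0" * 2 + "1" * 2 + "2" * 2
```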
I want to show the following:
Let $f: \mathbb{R} \to \mathbb{R}$ be continuous.
a) If $f$ is differentiable for $x \ne 0$ and the limit $\lim_{x\to 0} f'(x) = A$ exists, then $f$ is differentiable at $x = 0$ and $f'(0) = A$.
b) Show that the converse is false, i.e. there exists a function $f$ which is differentiable at $x = 0$, but for which $\lim_{x\to 0} f'(x)$ does not exist.
For a), I worked out a proof, but I am unsure if the limit manipulations I used are okay and rigorous enough, so please can you comment on my solution and point out possible flaws?
Ok, in a) I have to show that if the limit exists, then it obeys the continuity condition at $x = 0$, i.e. $$ \lim_{x\to 0} f'(x) = f'(0) $$ so I calculate (by using the limit definition of the derivative) \begin{align*} \lim_{x\to 0} f'(x) & = \lim_{x \to 0} \left[ \lim_{h \to 0} \left( \frac{f(x+h)-f(x)}{h} \right) \right] \\ & = \lim_{h \to 0} \left[ \lim_{x \to 0} \left( \frac{f(x+h)-f(x)}{h} \right) \right] \\ & = \lim_{h \to 0} \left[ \frac{1}{h} \lim_{x \to 0} \left( f(x+h)-f(x) \right) \right] \\ & = \lim_{h \to 0} \left[ \frac{1}{h} ( f(h) - f(0) ) \right] \\ & = f'(0) \end{align*} That's my proof; in the last step I used the continuity of $f$, and the manipulations are possible, I think, because all the limits exist. I never saw such manipulations; most proofs in my textbooks use $\epsilon$/$\delta$-arguments, so I am unsure how valid such limit-exchange operations are.
For b), consider $f(x) := x^2 \sin(1/x)$ for $x \ne 0$ with $f(0) := 0$. This function is differentiable at $x = 0$ with $f'(0) = 0$, but $f'(x) = 2x\sin(1/x) - \cos(1/x)$ has no limit as $x \to 0$. (Note that $f(x) = |x|$ does not work here, since it is not differentiable at $x = 0$ at all.) |
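A standard counterexample for b) is $f(x)=x^2\sin(1/x)$ for $x\ne 0$ with $f(0)=0$; note that $|x|$ cannot serve here, since it is not differentiable at $0$ at all. Numerically, the derivative $f'(x)=2x\sin(1/x)-\cos(1/x)$ keeps swinging between values near $-1$ and $+1$ arbitrarily close to $0$, so it has no limit. A quick check (my own sketch):

```python
import math

def fprime(x):
    # derivative of x^2 sin(1/x) for x != 0
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

k = 1000
x1 = 1 / (2 * math.pi * k)        # here cos(1/x) = 1, so f' is near -1
x2 = 1 / ((2 * k + 1) * math.pi)  # here cos(1/x) = -1, so f' is near +1

assert abs(fprime(x1) + 1) < 1e-6
assert abs(fprime(x2) - 1) < 1e-6
```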
Once we have organized and summarized your sample data, the next step is to identify the underlying distribution of our random variable. Computing probabilities for continuous random variables are complicated by the fact that there are an infinite number of possible values that our random variable can take on, so the probability of observing a particular value for a random variable is zero. Therefore, to find the probabilities associated with a continuous random variable, we use a probability density function (PDF).
A PDF is an equation used to find probabilities for continuous random variables. The PDF must satisfy the following two rules:
The area under the curve must equal one (over all possible values of the random variable).
The probabilities must be greater than or equal to zero for all possible values of the random variable.
The area under the curve of the probability density function over some interval represents the probability of observing those values of the random variable in that interval.
The Normal Distribution
Many continuous random variables have a bell-shaped or somewhat symmetric distribution. This is a normal distribution. In other words, the probability distribution of its relative frequency histogram follows a normal curve. The curve is bell-shaped, symmetric about the mean, and defined by µ and σ (the mean and standard deviation).
Figure 9. A normal distribution.
There are normal curves for every combination of µ and σ. The mean (µ) shifts the curve to the left or right. The standard deviation (σ) alters the spread of the curve. The first pair of curves have different means but the same standard deviation. The second pair of curves share the same mean (µ) but have different standard deviations. The pink curve has a smaller standard deviation. It is narrower and taller, and the probability is spread over a smaller range of values. The blue curve has a larger standard deviation. The curve is flatter and the tails are thicker. The probability is spread over a larger range of values.
Figure 10. A comparison of normal curves.
Properties of the normal curve:
The mean is the center of this distribution and the highest point.
The curve is symmetric about the mean. (The area to the left of the mean equals the area to the right of the mean.)
The total area under the curve is equal to one.
As x increases and decreases, the curve goes to zero but never touches the axis.
The PDF of a normal curve is $$ y= \frac {1}{\sqrt {2\pi} \sigma} e^{\frac {-(x-\mu)^2}{2\sigma^2}}$$
A normal curve can be used to estimate probabilities.
A normal curve can be used to estimate proportions of a population that have certain x-values.

The Standard Normal Distribution
There are millions of possible combinations of means and standard deviations for continuous random variables. Finding probabilities associated with these variables would require us to integrate the PDF over the range of values we are interested in. To avoid this, we can rely on the standard normal distribution. The standard normal distribution is a special normal distribution with µ = 0 and σ = 1. We can use the Z-score to standardize any normal random variable, converting the x-values to Z-scores, thus allowing us to use probabilities from the standard normal table. So how do we find the area under the curve associated with a Z-score?

Standard Normal Table

The standard normal table gives probabilities associated with specific Z-scores. The table we use is cumulative from the left. The negative side is for all Z-scores less than zero (all values less than the mean). The positive side is for all Z-scores greater than zero (all values greater than the mean). Not all standard normal tables work the same way.
Example \(\PageIndex{1}\):
What is the area associated with the Z-score 1.62?
Figure 11. The standard normal table and associated area for z = 1.62.

Answer
The area is 0.9474.
Reading the Standard Normal Table

Read down the Z-column to get the first part of the Z-score (1.6). Read across the top row to get the second decimal place in the Z-score (0.02). The intersection of this row and column gives the area under the curve to the left of the Z-score.

Finding Z-scores for a Given Area

What if we have an area and we want to find the Z-score associated with that area? Instead of Z-score → area, we want area → Z-score. We can use the standard normal table to find the area in the body of values and read backwards to find the associated Z-score. Using the table, search the probabilities to find an area that is closest to the probability you are interested in.
Example \(\PageIndex{2}\):
To find a Z-score for which the area to the right is 5%:
Since the table is cumulative from the left, you must use the complement of 5%.
$$1.000 - 0.0500 = 0.9500$$
Figure 12. The upper 5% of the area under a normal curve.
Find the Z-score for the area of 0.9500. Look at the probabilities and find a value as close to 0.9500 as possible.
Figure 13. The standard normal table.

Answer
The Z-score for the 95th percentile is 1.64.
Area in between Two Z-scores
Example \(\PageIndex{3}\):
To find Z-scores that limit the middle 95%:
Figure 14. The middle 95% of the area under a normal curve.

Solution
The middle 95% has 2.5% on the right and 2.5% on the left. Use the symmetry of the curve. Look at your standard normal table. Since the table is cumulative from the left, it is easier to find the area to the left first. Find the area of 0.025 on the negative side of the table. The Z-score for the area to the left is -1.96. Since the curve is symmetric, the Z-score for the area to the right is 1.96.

Common Z-scores
There are many commonly used Z-scores:
\(Z_{.05}\) = 1.645, and the area between -1.645 and 1.645 is 90%.
\(Z_{.025}\) = 1.96, and the area between -1.96 and 1.96 is 95%.
\(Z_{.005}\) = 2.575, and the area between -2.575 and 2.575 is 99%.

Applications of the Normal Distribution
Typically, our normally distributed data do not have μ = 0 and σ = 1, but we can relate any normal distribution to the standard normal distribution using the Z-score. We can transform values of x to values of z.
$$z=\frac {x-\mu}{\sigma}$$
For example, if a normally distributed random variable has μ = 6 and σ = 2, then a value of x = 7 corresponds to a Z-score of 0.5.
$$Z=\frac{7-6}{2}=0.5$$
This tells you that 7 is one-half a standard deviation above its mean. We can use this relationship to find probabilities for any normal random variable.
Figure 15. A normal and standard normal curve.
To find the area for values of X, a normal random variable, draw a picture of the area of interest, convert the x-values to Z-scores using the Z-score and then use the standard normal table to find areas to the left, to the right, or in between.
$$z=\frac {x-\mu}{\sigma}$$
Example \(\PageIndex{4}\):
Adult deer population weights are normally distributed with µ = 110 lb. and σ = 29.7 lb. As a biologist you determine that a weight less than 82 lb. is unhealthy, and you want to know what proportion of your population is unhealthy.
P(x<82)
Figure 16. The area under a normal curve for P(x<82).
Convert 82 to a Z-score
$$z=\frac{82-110}{29.7} = -0.94$$
The x-value of 82 is 0.94 standard deviations below the mean.
Figure 17. Area under a standard normal curve for P(z<-0.94).
Go to the standard normal table (negative side) and find the area associated with a Z-score of -0.94.
This is an “area to the left” problem so you can read directly from the table to get the probability.
$$P(x<82) = 0.1736$$
Approximately 17.36% of the population of adult deer is underweight, OR one deer chosen at random will have a 17.36% chance of weighing less than 82 lb.
Example \(\PageIndex{5}\):
Statistics from the Midwest Regional Climate Center indicate that Jones City, which has a large wildlife refuge, gets an average of 36.7 in. of rain each year with a standard deviation of 5.1 in. The amount of rain is normally distributed. During what percent of the years does Jones City get more than 40 in. of rain?
$$P(x > 40)$$
Figure 18. Area under a normal curve for P(x>40).

Solution
$$z=\frac {40-36.7}{5.1}=0.65$$
$$ P(x>40) = (1-0.7422) = 0.2578$$
For approximately 25.78% of the years, Jones City will get more than 40 in. of rain.
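Both worked examples can be reproduced in code. The sketch below (my own addition) uses the standard-normal CDF $\Phi(z)=\tfrac12(1+\operatorname{erf}(z/\sqrt2))$ instead of the printed table, so the answers differ slightly from the table values, which round z to two decimals:

```python
import math

def phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Example 4: P(x < 82) for deer weights with mu = 110, sigma = 29.7
p_deer = phi((82 - 110) / 29.7)
assert abs(p_deer - 0.1736) < 0.002   # table gives 0.1736 using z = -0.94

# Example 5: P(x > 40) for rainfall with mu = 36.7, sigma = 5.1
p_rain = 1 - phi((40 - 36.7) / 5.1)
assert abs(p_rain - 0.2578) < 0.002   # table gives 0.2578 using z = 0.65
```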
Assessing Normality
If the distribution is unknown and the sample size is not greater than 30 (Central Limit Theorem), we have to assess the assumption of normality. Our primary method is the normal probability plot. This plot graphs the observed data, ranked in ascending order, against the “expected” Z-score of that rank. If the sample data were taken from a normally distributed random variable, then the plot would be approximately linear.
Examine the following probability plot. The center line is the relationship we would expect to see if the data were drawn from a perfectly normal distribution. Notice how the observed data (red dots) loosely follow this linear relationship. Minitab also computes an Anderson-Darling test to assess normality. The null hypothesis for this test is that the sample data have been drawn from a normally distributed population. A p-value greater than 0.05 supports the assumption of normality.
Figure 19. A normal probability plot generated using Minitab 16.
Compare the histogram and the normal probability plot in this next example. The histogram indicates a skewed right distribution.
Figure 20. Histogram and normal probability plot for skewed right data.
The observed data do not follow a linear pattern and the p-value for the A-D test is less than 0.005 indicating a non-normal population distribution.
Normality cannot be assumed. You must always verify this assumption. Remember, the probabilities we are finding come from the standard NORMAL table. If our data are NOT normally distributed, then these probabilities DO NOT APPLY.
Do you know if the population is normally distributed? Do you have a large enough sample size (n≥30)? Remember the Central Limit Theorem? Did you construct a normal probability plot? |
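The idea behind the normal probability plot can be sketched without statistical software. Below is a minimal version (my own construction, not Minitab's exact algorithm): sort the data, compute "expected" Z-scores from the Blom plotting positions (i − 0.375)/(n + 0.25), and measure how linear the relationship is with the correlation coefficient:

```python
import math
import random
from statistics import NormalDist

def pearson(xs, ys):
    # plain Pearson correlation coefficient
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(42)
data = sorted(random.gauss(0, 1) for _ in range(200))
n = len(data)

# "expected" standard-normal quantile for each rank (Blom plotting positions)
expected = [NormalDist().inv_cdf((i - 0.375) / (n + 0.25))
            for i in range(1, n + 1)]

# for data drawn from a normal distribution the plot is close to a line
r = pearson(data, expected)
assert r > 0.95
```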
I'm reading a linear algebra book and it defines the integration operation as $T \in L(P(\Bbb R), \Bbb R)$. However, it defines the differentiation operation as $T \in L(P(\Bbb R), P(\Bbb R))$. Don't they both map to the vector space that contains all polynomials? Why is integration as a linear map defined this way?
Because integration gives you a real number value
Integration is considered a transformation T: $ P(\mathbb R) \to \mathbb R $ because a definite integral of the form $ \int_a^b p(x) dx $ will typically give you a real number value: i.e. the area under the curve of p(x) on [a,b]. For example, consider: $$ T(p(x))= \int_0^1 p(x) dx $$
Let's take this transformation for $p(x)=x^2$: $$ \int_0^1 x^2 dx = \frac{1}{3}$$
The transformation T has taken our polynomial, $x^2$, which is in the set $P(\mathbb R)$, and produced a fraction, which is in the set $\mathbb R$, so for any polynomial in the set, the transformation (the definite integral on [0,1] in this case) will produce a real number. That's why we consider the integral to be a linear transformation between these two spaces.
A derivative applied to a first order polynomial $(ax+C)$ will give you a real number in the same way.
However, for the indefinite integral $\int p(x) dx $, you are correct that most of the time you will get a polynomial back as an answer, and the derivative of a higher-order polynomial will also give you a polynomial.
TL;DR: the definite integral maps to $\mathbb R$, while the indefinite integral and derivatives of polynomials remain in $P(\mathbb R)$. |
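The two maps can be contrasted concretely in code. A sketch (my own construction) representing a polynomial by its coefficient list:

```python
# represent p(x) = c[0] + c[1] x + c[2] x^2 + ... by its coefficient list

def definite_integral_01(c):
    # T: P(R) -> R, the definite integral of p over [0, 1]
    return sum(ck / (k + 1) for k, ck in enumerate(c))

def derivative(c):
    # D: P(R) -> P(R), differentiation returns another coefficient list
    return [k * ck for k, ck in enumerate(c)][1:]

p = [0, 0, 1]                                         # p(x) = x^2
assert abs(definite_integral_01(p) - 1 / 3) < 1e-12   # a real number
assert derivative(p) == [0, 2]                        # 2x, still a polynomial
```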
In a recent article with Valérie Berthé [BL15], we provided a multidimensional continued fraction algorithm called Arnoux-Rauzy-Poincaré (ARP) to construct, given any vector \(v\in\mathbb{R}_+^3\), an infinite word \(w\in\{1,2,3\}^\mathbb{N}\) over a three-letter alphabet such that the frequencies of letters in \(w\) exist and are equal to \(v\), and such that the number of factors (i.e. finite blocks of consecutive letters) of length \(n\) appearing in \(w\) is linear and less than \(\frac{5}{2}n+1\). We also conjecture that for almost all \(v\) the constructed word describes a discrete path in the positive octant staying at a bounded distance from the Euclidean line of direction \(v\).
In Sage, you can construct this word using the next version of my package slabbe-0.2 (not released yet, email me to press me to finish it). The one with frequencies of letters proportional to \((1, e, \pi)\) is:
sage: from slabbe.mcf import algo
sage: D = algo.arp.substitutions()
sage: it = algo.arp.coding_iterator((1,e,pi))
sage: w = words.s_adic(it, repeat(1), D)
word: 1232323123233231232332312323123232312323...
The factor complexity is close to 2n+1 and the balance is often less or equal to three:
sage: w[:10000].number_of_factors(100)
202
sage: w[:100000].number_of_factors(1000)
2002
sage: w[:1000].balance()
3
sage: w[:2000].balance()
3
Note that the bounded-distance property (almost surely) was proven in [DHS2013] for the Brun algorithm, another MCF algorithm.
Other approaches: Standard model and billiard sequences
Other approaches have been proposed to construct such discrete lines.
One of them is the standard model of Eric Andres [A03]. It is also equivalent to billiard sequences in the cube. It is well known that the factor complexity of billiard sequences is quadratic \(p(n)=n^2+n+1\) [AMST94]. Experimentally, we can verify this. We first create a billiard word of some given direction:
sage: from slabbe import BilliardCube
sage: v = vector(RR, (1, e, pi))
sage: b = BilliardCube(v)
sage: b
Cubic billiard of direction (1.00000000000000, 2.71828182845905, 3.14159265358979)
sage: w = b.to_word()
sage: w
word: 3231232323123233213232321323231233232132...
We create some prefixes of \(w\) that we represent internally as char*. The creation is slow because the implementation of billiard words in my optional package is in Python and is not that efficient:
sage: p3 = Word(w[:10^3], alphabet=[1,2,3], datatype='char')
sage: p4 = Word(w[:10^4], alphabet=[1,2,3], datatype='char') # takes 3s
sage: p5 = Word(w[:10^5], alphabet=[1,2,3], datatype='char') # takes 32s
sage: p6 = Word(w[:10^6], alphabet=[1,2,3], datatype='char') # takes 5min 20s
We see below that exactly \(n^2+n+1\) factors of length \(n<20\) appear in the prefix of length 1000000 of \(w\):
sage: A = ['n'] + range(30)
sage: c3 = ['p_(w[:10^3])(n)'] + map(p3.number_of_factors, range(30))
sage: c4 = ['p_(w[:10^4])(n)'] + map(p4.number_of_factors, range(30))
sage: c5 = ['p_(w[:10^5])(n)'] + map(p5.number_of_factors, range(30)) # takes 4s
sage: c6 = ['p_(w[:10^6])(n)'] + map(p6.number_of_factors, range(30)) # takes 49s
sage: ref = ['n^2+n+1'] + [n^2+n+1 for n in range(30)]
sage: T = table(columns=[A,c3,c4,c5,c6,ref])
sage: T
  n    p_(w[:10^3])(n)   p_(w[:10^4])(n)   p_(w[:10^5])(n)   p_(w[:10^6])(n)   n^2+n+1
+----+-----------------+-----------------+-----------------+-----------------+---------+
  0    1                 1                 1                 1                 1
  1    3                 3                 3                 3                 3
  2    7                 7                 7                 7                 7
  3    13                13                13                13                13
  4    21                21                21                21                21
  5    31                31                31                31                31
  6    43                43                43                43                43
  7    52                55                56                57                57
  8    63                69                71                73                73
  9    74                85                88                91                91
  10   87                103               107               111               111
  11   100               123               128               133               133
  12   115               145               151               157               157
  13   130               169               176               183               183
  14   144               195               203               211               211
  15   160               223               232               241               241
  16   176               253               263               273               273
  17   192               285               296               307               307
  18   208               319               331               343               343
  19   224               355               368               381               381
  20   239               392               407               421               421
  21   254               430               448               463               463
  22   268               470               491               507               507
  23   282               510               536               553               553
  24   296               552               583               601               601
  25   310               596               632               651               651
  26   324               642               683               703               703
  27   335               687               734               757               757
  28   345               734               787               813               813
  29   355               783               842               871               871
Billiard sequences generate paths that are at a bounded distance from an euclidean line. This is equivalent to say that the balance is finite. The balance is defined as the supremum value of difference of the number of apparition of a letter in two factors of the same length. For billiard sequences, the balance is 2:
sage: p3.balance()
2
sage: p4.balance() # takes 2min 37s
2

Other approaches: Melançon and Reutenauer
Melançon and Reutenauer [MR13] also suggested a method that generalizes Christoffel words in higher dimension. The construction is based on the application of two substitutions generalizing the construction of sturmian sequences. Below we compute the factor complexity and the balance of some of their words over a three-letter alphabet.
On a three-letter alphabet, the two morphisms are:
sage: L = WordMorphism('1->1,2->13,3->2')
sage: R = WordMorphism('1->13,2->2,3->3')
sage: L
WordMorphism: 1->1, 2->13, 3->2
sage: R
WordMorphism: 1->13, 2->2, 3->3
Example 1: periodic case \(LRLRLRLRLR\dots\). In this example, the factor complexity seems to be around \(p(n)=2.76n\) and the balance is at least 28:
sage: from itertools import repeat, cycle
sage: W = words.s_adic(cycle((L,R)),repeat('1'))
sage: W
word: 1213122121313121312212212131221213131213...
sage: map(W[:10000].number_of_factors, [10,20,40,80])
[27, 54, 110, 221]
sage: [27/10., 54/20., 110/40., 221/80.]
[2.70000000000000, 2.70000000000000, 2.75000000000000, 2.76250000000000]
sage: W[:1000].balance() # takes 1.6s
21
sage: W[:2000].balance() # takes 6.4s
28
Example 2: \(RLR^2LR^4LR^8LR^{16}LR^{32}LR^{64}LR^{128}\dots\) taken from the conclusion of their article. In this example, the factor complexity seems to be \(p(n)=3n\) and balance at least as high (=bad) as \(122\):
sage: W = words.s_adic([R,L,R,R,L,R,R,R,R,L]+[R]*8+[L]+[R]*16+[L]+[R]*32+[L]+[R]*64+[L]+[R]*128,'1')
sage: W.length()
330312
sage: map(W.number_of_factors, [10, 20, 100, 200, 300, 1000])
[29, 57, 295, 595, 895, 2981]
sage: [29/10., 57/20., 295/100., 595/200., 895/300., 2981/1000.]
[2.90000000000000, 2.85000000000000, 2.95000000000000, 2.97500000000000, 2.98333333333333, 2.98100000000000]
sage: W[:1000].balance() # takes 1.6s
122
sage: W[:2000].balance() # takes 6s
122
Example 3: some random ones. The complexity \(p(n)/n\) oscillates between 2 and 3 for factors of length \(n=1000\) in prefixes of length 100000:
sage: for _ in range(10):
....:     W = words.s_adic([choice((L,R)) for _ in range(50)],'1')
....:     print W[:100000].number_of_factors(1000)/1000.
2.02700000000000
2.23600000000000
2.74000000000000
2.21500000000000
2.78700000000000
2.52700000000000
2.85700000000000
2.33300000000000
2.65500000000000
2.51800000000000
For ten randomly generated words, the balance goes from 6 to 27 which is much more than what is obtained for billiard words or by our approach:
sage: for _ in range(10):
....:     W = words.s_adic([choice((L,R)) for _ in range(50)],'1')
....:     print W[:1000].balance(), W[:2000].balance()
12 15
8 24
14 14
5 11
17 17
14 14
6 6
19 27
9 16
12 12

References
[BL15] V. Berthé, S. Labbé, Factor complexity of S-adic words generated by the Arnoux-Rauzy-Poincaré algorithm, Advances in Applied Mathematics 63 (2015) 90-130. http://dx.doi.org/10.1016/j.aam.2014.11.001
[DHS2013] Delecroix, Vincent, Tomás Hejda, and Wolfgang Steiner. “Balancedness of Arnoux-Rauzy and Brun Words.” In Combinatorics on Words, 119–31. Springer, 2013. http://link.springer.com/chapter/10.1007/978-3-642-40579-2_14.
[A03] E. Andres, Discrete linear objects in dimension n: the standard model, Graphical Models 65 (2003) 92-111.
[AMST94] P. Arnoux, C. Mauduit, I. Shiokawa, J. I. Tamura, Complexity of sequences defined by billiards in the cube, Bull. Soc. Math. France 122 (1994) 1-12.
[MR13] G. Melançon, C. Reutenauer, On a class of Lyndon words extending Christoffel words and related to a multidimensional continued fraction algorithm. J. Integer Seq. 16, No. 9, Article 13.9.7, 30 p., electronic only (2013). https://cs.uwaterloo.ca/journals/JIS/VOL16/Reutenauer/reut3.html |
The group algebra of an abelian group is commutative, so we can consider the spectrum of this algebra. Is there any information about the abelian group that we can obtain from such considerations? That is to say, could we study abelian groups by considering the spectrum and the scheme of the group algebra?
Since I know nothing about the subject, any reference is most welcome. Thanks in advance. P.S. I also posted this on Mathematics Stack Exchange here.
The spectrum of the group algebra of a commutative group is called a diagonalizable group scheme. This is defined in SGA 3 Exposé VIII Section 1. Several geometric characterizations of group-theoretic properties are given in Proposition 2.1. A lot more is written in later sections, such as material on principal homogeneous spaces, quotients of affine schemes by diagonalizable group schemes, and representability of restriction of scalars.
If that isn't enough for you, Exposés 9-11 are about group schemes that are locally-on-the-base isomorphic to diagonalizable group schemes.
What follows does not answer your precise question, but is very related to it and may be of interest to you. I consider the case where $G$ is finite, but not necessarily abelian. Then there are several rings attached to $G$, whose spectrum you might want to consider.
The first one is the ring $R(G)$ of virtual characters of $G$ (or, equivalently, the Grothendieck ring of the category of finite-dimensional complex representations of $G$). When $G$ is abelian, it is exactly the group ring of the dual group $\hat{G}$. This group is defined in [Serre: représentations linéaires des groupes finis, 9.1] and its spectrum is studied in [loc. cit., 11.4], where it is shown to be connected.
The other one is the ring $Burn(G)$ that is the Grothendieck ring of the category of finite $G$-sets. In [Bayer-Fluckiger, Parimala, Serre: Hasse principle for G-trace forms, 4.2] (see also references therein), you will find the precise definition, the statement that the spectrum of $Burn(G)$ is connected if and only if $G$ is solvable, and a very nice example of application of $Burn(G)$.
Varying the coefficients certainly gives a lot of information about the group. For example, the smallest field $K\supseteq\mathbb{Q}$ such that $K[G]$ becomes split semisimple (which means isomorphic to $K^G$ in this case) encodes the exponent of the group (which I think can also be read off from the Loewy length of the modular group algebras).
If you are willing to consider the scheme including the involution $\ast: R[G]\to R[G]$ which is defined by $g\mapsto g^{-1}$, then the group is in fact determined up to isomorphism by $\mathbb{Z}[G]$ since $\lbrace\pm 1\rbrace G$ is the group of "orthogonal" units: $\lbrace x\in\mathbb{Z}[G] \mid xx^\ast=1\rbrace = \lbrace\pm1\rbrace G$. |
A sphere is a 3D, or solid, shape with a completely round structure. If you rotate a circular disc about any of its diameters, the structure thus obtained is a sphere. You can also define it as the set of points located at a fixed distance from a fixed point in three-dimensional space. This fixed point is known as the center of the sphere, and the fixed distance is called its radius.
Volume of a Sphere Formula
In this section, we will obtain the formula to compute the volume of a sphere. Volume, as you know, is defined as the capacity of a 3D object. The volume of a sphere is nothing but the space occupied by it. It can be given as:
\(volume \; of \; a \; sphere = \frac{4}{3}\pi r^{3}\)
Where ‘r’ represents the radius of the sphere.
Volume Of A Sphere Derivation
The volume of a sphere can alternatively be viewed as the number of cubic units which is required to fill up the sphere.
Let us take up an activity to find out the volume of a sphere .
Take a cylindrical container. Pour water into it until it is filled to the brim. Place this container in a large trough. Now dip a spherical ball with a radius of ‘r’ units into the cylinder. You will observe that some of the water is displaced from the cylinder and falls into the trough. Pour this displaced water into another cylinder with radius ‘r’ units and height ‘2r’ units. We know that the volume of the water displaced by the ball must be equal to the volume of the spherical ball. Now take note of the amount of water in the second cylinder. You will observe that the volume occupied by the water is two-thirds of the volume of the second cylinder.
Hence, volume of water filled in the second cylinder = \(\frac{2}{3}\times \pi r^{2}\times 2r\)
Thus,
Volume of a sphere of radius r = \(\frac{4}{3}\pi r^{3}\)
Alternatively, the formula for the volume of a sphere can also be derived as follows.
Consider a sphere of radius r and divide it into pyramids. In this way, we see that the volume of the sphere is the same as the volume of all the pyramids of height, r and total base area equal to the surface area of the sphere as shown in the figure.
The total volume is calculated by the summation of the pyramids’ volumes.
Volume of the sphere = Sum of volumes of all pyramids
Volume of the sphere = \(\frac{1}{3}A_{1}r+\frac{1}{3}A_{2}r+\frac{1}{3}A_{3}r+\dots +\frac{1}{3}A_{n}r = \frac{1}{3}r\,(Surface\;area\;of\;a\;sphere) = \frac{1}{3}\times 4\pi r^{2}\times r\)
Volume of the sphere = \(\frac{4}{3}\pi r^{3}\)
Volume of a Sphere Formula in Real Life
In our daily life, we come across different types of spheres. Basketball, football, table tennis, etc. are some of the common sports that are played by people all over the world. The balls used in these sports are nothing but spheres of different radii. The volume of sphere formula is useful in designing and calculating the capacity or volume of such spherical objects. You can easily find out the volume of a sphere if you know its radius.
Solved Examples Based on Sphere Volume Formula
Question 1: A sphere has a radius of 11 feet. Find its volume.
Solution: Given,
r = 11 feet
We know that, volume of a sphere = \(\frac{4}{3}\pi r^{3}\)
Volume of the sphere = \(\frac{4}{3}\times 3.14\times 11^{3}\) = 5572.45 cubic feet
Question 2: The volume of a spherical ball is \(343\;cm^{3}\). Find the radius of the ball.
Solution: Given, volume of the sphere = \(343\;cm^{3}\)
We know that, volume of a sphere = \(\frac{4}{3}\pi r^{3}\)
\(\Rightarrow \frac{4}{3}\pi r^{3} = 343\;cm^{3}\)
\(\Rightarrow r^{3}=\frac{343\times 3}{4\pi }=\frac{343\times 3}{4\times 3.14 }=81.92\;cm^{3}\)
\(\Rightarrow r=4.34\;cm\)
The radius of the ball is 4.34 cm.
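Both worked examples can be checked with a few lines of Python. The sketch below uses `math.pi` rather than the rounded value 3.14 used above, so the printed results differ slightly from the worked answers:

```python
import math

def sphere_volume(r):
    """Volume of a sphere of radius r: V = (4/3) * pi * r^3."""
    return 4.0 / 3.0 * math.pi * r ** 3

def sphere_radius(volume):
    """Invert the formula: r = (3V / (4*pi))^(1/3)."""
    return (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)

print(sphere_volume(11))   # Question 1: about 5575.28 cubic feet
print(sphere_radius(343))  # Question 2: about 4.34 cm
```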
Optimal control of the coefficient for the regional fractional $p$-Laplace equation: Approximation and convergence
1. Department of Mathematical Sciences, George Mason University, Fairfax, VA 22030, USA
2. University of Puerto Rico, Rio Piedras Campus, Department of Mathematics, College of Natural Sciences, 17 University AVE. STE 1701, San Juan PR 00925-2537, USA
In this paper we study optimal control problems with the regional fractional $p$-Laplace equation, of order $s \in \left( {0,1} \right)$ and $p \in \left[ {2,\infty } \right)$, as constraints over a bounded open set with Lipschitz continuous boundary. The control, which fulfills the pointwise box constraints, is given by the coefficient of the regional fractional $p$-Laplace operator. We show existence and uniqueness of solutions to the state equations and existence of solutions to the optimal control problems. We prove that the regional fractional $p$-Laplacian approaches the standard $p$-Laplacian as $s$ approaches 1. In this sense, this fractional $p$-Laplacian can be considered degenerate like the standard $p$-Laplacian. To overcome this degeneracy, we introduce a regularization for the regional fractional $p$-Laplacian. We show existence and uniqueness of solutions to the regularized state equation and existence of solutions to the regularized optimal control problem. We also prove several auxiliary results for the regularized problem which are of independent interest. We conclude with the convergence of the regularized solutions.
Keywords: Regional fractional p-Laplace operator, non-constant coefficient, quasi-linear nonlocal elliptic boundary value problems, optimal control.
Mathematics Subject Classification: 35R11, 49J20, 49J45, 93C73.
Citation: Harbir Antil, Mahamadi Warma. Optimal control of the coefficient for the regional fractional $p$-Laplace equation: Approximation and convergence. Mathematical Control & Related Fields, 2019, 9 (1) : 1-38. doi: 10.3934/mcrf.2019001
Models at the Dynamic Stochastic General Equilibrium level must be able to replicate real economies to an acceptable degree. One feature of real economies has been a relatively stable
growth rate (see also this post), $\dot x/x=\gamma$, where the dot above a variable denotes the derivative with respect to time.
So one would want a model that admits a constant growth rate at its steady-state. In the benchmark deterministic/continuous time "representative household" model, the Euler equation takes the form
$$r = \rho - \left(\frac {u''(c)\cdot c}{u'(c)}\right)\cdot \frac {\dot c}{c}$$
This is the optimal rule for the growth rate of consumption. The rate of pure time preference $\rho$ is assumed constant. The interest rate $r$ has its own way to become constant at the steady state. So in order to obtain a constant consumption growth rate at the steady state, we want the term
$$\left(\frac {u''(c)\cdot c}{u'(c)}\right)$$to be constant too. The
Constant Relative Risk Aversion (CRRA) utility function satisfies exactly this requirement:
$$u(c) = \frac {c^{1-\sigma}}{1-\sigma} \Rightarrow u'(c) = c^{-\sigma} \Rightarrow u''(c) = -\sigma c^{-\sigma-1}$$
So
$$\frac {u''(c)\cdot c}{u'(c)} = \frac {-\sigma c^{-\sigma-1} \cdot c}{c^{-\sigma}} = -\sigma $$and the Euler equation becomes
$$\frac {\dot c}{c} = (1/\sigma)\cdot (r-\rho)$$
Barro & Sala-i-Martin (2004, 2nd ed.) extend the required form of the utility function to the case where there is also a leisure-labor choice (ch. 9, pp. 427-428).
This fundamental property extends to the stochastic/discrete-time case.
To compare, if we have specified a
Constant Absolute Risk Aversion (CARA) form, we would have
$$u(c) = -\alpha^{-1}e^{-\alpha c} \Rightarrow u'(c) = e^{-\alpha c}\Rightarrow u''(c) = -\alpha e^{-\alpha c}$$ and the Euler equation would become
$$\dot c = (1/\alpha)\cdot (r-\rho)$$
i.e. here we would obtain constant steady-state growth in the
level of consumption (and hence a diminishing growth rate).
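The two derivative calculations above are easy to verify symbolically; a minimal SymPy sketch:

```python
import sympy as sp

c, sigma, alpha = sp.symbols('c sigma alpha', positive=True)

# CRRA: u(c) = c**(1 - sigma) / (1 - sigma)
u_crra = c ** (1 - sigma) / (1 - sigma)
rra = sp.simplify(sp.diff(u_crra, c, 2) * c / sp.diff(u_crra, c))
print(rra)  # the constant -sigma, so the consumption growth *rate* is constant

# CARA: u(c) = -exp(-alpha*c) / alpha
u_cara = -sp.exp(-alpha * c) / alpha
ara = sp.simplify(sp.diff(u_cara, c, 2) / sp.diff(u_cara, c))
print(ara)  # the constant -alpha, so the change in the *level* of c is constant
```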
Przemyslaw Dobrowolski has written a paper that (I think) can be applied to swing-twist decompositions of quaternion rotations, called "Swing-Twist Decomposition in Clifford Algebra".
I tried to apply Algorithm 1 in the paper to this simple scenario: I want to calculate the angle between the world z-axis and the body z-axis from the body rotation given as a quaternion.
I therefore set
$v = 0 \mathbf{e}_1 + 0 \mathbf{e}_2 + 1 \mathbf{e}_3\\ q= a + b \mathbf{e}_{12} + c \mathbf{e}_{23} + d \mathbf{e}_{31}$
applying the algorithm, I get the following intermediate variables
$u = b,\; n = 1,\; m = a,\; l = \sqrt{a^2 + b^2}$
from those, I can calculate the twist $q$ and the swing $p$ as
$q = \frac{m}{l} + \frac{u}{l} \mathbf{e}_{12} = \frac{a}{\sqrt{a^2 + b^2}} + \frac{b}{\sqrt{a^2 + b^2}} \mathbf{e}_{12} \\ p = s\tilde{q} = s \left( \frac{a}{\sqrt{a^2 + b^2}} - \frac{b}{\sqrt{a^2 + b^2}} \mathbf{e}_{12} \right) = \underbrace{\frac{a^2 + b^2}{\sqrt{a^2 +b^2}}}_w + \dots$
Here I only wrote down the real part of the resulting swing quaternion $p$, because it already determines the swing angle via $w = \cos(\theta/2)$.
But this doesn't make a lot of sense to me, because the solution does not depend on $c$ and $d$, which it should, considering the Euler-rotation interpretation of the quaternion.
Also, using this paper, I was able to derive a different solution, namely
$$ \theta = \cos^{-1} \left( a^2 - b^2 - c^2 + d^2 \right) $$
which actually, looking at a few numeric values, gives the correct result.
I'm now wondering what I misunderstood about the first paper. Shouldn't I obtain the same results? |
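For numerical comparison (independent of the Clifford-algebra derivation above), the angle between the world z-axis and the body z-axis can be read directly off the rotation matrix. The sketch below assumes the common scalar-first quaternion convention $(w, x, y, z)$; how the bivector components $(a, b, c, d)$ map onto $(w, x, y, z)$ depends on the chosen basis, so signs may differ from the formulas in the question:

```python
import numpy as np

def z_axis_angle(q):
    """Angle between the world z-axis and the body z-axis for a
    quaternion q = (w, x, y, z), scalar-first convention.

    The body z-axis is the third column of the rotation matrix; its
    z-component is R[2, 2] = 1 - 2*(x**2 + y**2) = w**2 + z**2 - x**2 - y**2.
    """
    w, x, y, z = np.asarray(q, dtype=float) / np.linalg.norm(q)
    return np.arccos(np.clip(1.0 - 2.0 * (x ** 2 + y ** 2), -1.0, 1.0))

print(z_axis_angle([1.0, 0.0, 0.0, 0.0]))  # identity rotation -> 0
print(z_axis_angle([np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0, 0.0]))  # 90 deg about x -> pi/2
```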
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
Abbreviation: InRL

An involutive residuated lattice is a structure $\mathbf{A}=\langle A, \vee, \wedge, \cdot, 1, \sim, -\rangle$ of type $\langle 2, 2, 2, 0, 1, 1\rangle$ such that
$\langle A, \vee, \wedge, \neg\rangle$ is an involutive lattice
$\langle A, \cdot, 1\rangle$ is a monoid
$xy\le z\iff x\le \neg(y(\neg z))\iff y\le \neg((\neg z)x)$
Let $\mathbf{A}$ and $\mathbf{B}$ be involutive residuated lattices. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x \vee y)=h(x) \vee h(y)$, $h(x \cdot y)=h(x) \cdot h(y)$, $h({\sim}x)={\sim}h(x)$ and $h(1)=1$.
Does anyone have an idea how to solve the global multivariate minimization problem below?
$$\text{minimize}\quad (x_1x_2x_3+x_1x_4x_5+x_1x_6x_7+x_2x_4x_6+x_2x_5x_7+x_3x_4x_7)-(x_1+x_2+x_4+x_7) \\ \text{subject to}\quad 1\leq x_i\leq N, \forall 1\leq i \leq 7, \text{ and }\prod_{i=1}^7 x_i=N$$
where $N>1$ is a constant.
As far as I know, the global minimizer should be $$ (x_1,x_2,x_3,x_4,x_5,x_6,x_7)=(1,1,N^{1/3},1,N^{1/3},N^{1/3},1).$$
However, I don't have any clue how to prove this claim. The function seems somewhat symmetric, but it is not totally symmetric in the variables $x_i$.
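Not a proof, but the conjectured point is easy to check numerically. The sketch below (for an arbitrarily chosen $N = 100$) verifies that the conjectured point is feasible, attains the value $6N^{1/3}-4$, and beats, for example, the fully symmetric feasible point $x_i = N^{1/7}$:

```python
import numpy as np

def f(x):
    """The objective: sum of the six triple products minus (x1+x2+x4+x7)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (x1 * x2 * x3 + x1 * x4 * x5 + x1 * x6 * x7 +
            x2 * x4 * x6 + x2 * x5 * x7 + x3 * x4 * x7) - (x1 + x2 + x4 + x7)

N = 100.0
c = N ** (1.0 / 3.0)
conjectured = np.array([1.0, 1.0, c, 1.0, c, c, 1.0])

assert np.isclose(np.prod(conjectured), N)  # the product constraint holds
print(f(conjectured), 6.0 * c - 4.0)        # same value: 6*N**(1/3) - 4
print(f(np.full(7, N ** (1.0 / 7.0))))      # symmetric point gives a larger value
```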
Wave energy converters in coastal structures

Introduction
Fig 1: Construction of a coastal structure.
Coastal works along European coasts are composed of very diverse structures. Many coastal structures are ageing and facing problems of stability, sustainability and erosion. Moreover climate change and especially sea level rise represent a new danger for them. Coastal dykes in Europe will indeed be exposed to waves with heights that are greater than the dykes were designed to withstand, in particular all the structures built in shallow water where the depth imposes the maximal amplitude because of wave breaking.
This necessary adaptation will be costly but will provide an opportunity to integrate converters of sustainable energy in the new maritime structures along the coasts and in particular in harbours. This initiative will contribute to the reduction of the greenhouse effect. Produced energy can be directly used for the energy consumption in harbour area and will reduce the carbon footprint of harbours by feeding the docked ships with green energy. Nowadays these ships use their motors to produce electricity power on board even if they are docked. Integration of wave energy converters (WEC) in coastal structures will favour the emergence of the new concept of future harbours with zero emissions.
Wave energy and wave energy flux
For regular water waves, the time-mean wave energy density E per unit horizontal area on the water surface (J/m²) is the sum of the kinetic and potential energy densities per unit horizontal area. The potential energy density is equal to the kinetic energy density
[1], both contributing half of the time-mean wave energy density E, which is proportional to the wave height squared according to linear wave theory [1]:
(1)
[math]E= \frac{1}{8} \rho g H^2[/math]
where ρ is the water density, g the gravitational acceleration and [math]H[/math] the wave height of the regular water waves. As the waves propagate, their energy is transported. The energy transport velocity is the group velocity. As a result, the time-mean wave energy flux per unit crest length (W/m), perpendicular to the wave propagation direction, is equal to
[1]:
(2)
[math] P= Ec_{g}[/math]
with [math]c_{g}[/math] the group velocity (m/s). Due to the dispersion relation for water waves under the action of gravity, the group velocity depends on the wavelength λ (m), or equivalently, on the wave period T (s). Further, the dispersion relation is a function of the water depth h (m). As a result, the group velocity behaves differently in the limits of deep and shallow water, and at intermediate depths [math](\frac{\lambda}{20} \lt h \lt \frac{\lambda}{2})[/math].
Application for wave energy converters
For regular waves in deep water:
[math]c_{g} = \frac{gT}{4\pi} [/math] and [math]P_{w1} = \frac{\rho g^2}{32 \pi} H^2 T[/math]
The time-mean wave energy flux per unit crest length is used as one of the main criteria to choose a site for wave energy converters.
For real seas, whose waves are random in height, period (and direction), spectral parameters have to be used. The spectral estimate of the significant wave height, [math]H_{m0} = 4 \sqrt{m_0} [/math], is based on the zero-order moment of the spectral function. Moreover, the energy period is derived as follows
[2].
[math]T_e = \frac{m_{-1}}{m_0} [/math]
where [math]m_n[/math] represents the spectral moment of order n. An equation similar to that describing the power of regular waves is then obtained :
[math]P_{w1} = \frac{\rho g^2}{64 \pi} H_{m0}^2 T_e[/math]
If local data are available ([math]H_{m0}^2, T_e [/math]) for a sea state through in-situ wave buoys for example, satellite data or numerical modelling, the last equation giving wave energy flux [math]P_{w1}[/math] gives a first estimation. Averaged over a season or a year, it represents the maximal energetic resource that can be theoretically extracted from wave energy. If the directional spectrum of sea state variance F (f,[math]\theta[/math]) is known with f the wave frequency (Hz) and [math]\theta[/math] the wave direction (rad), a more accurate formulation is used:
[math]P_{w2} = \rho g\int\int c_{g}(f,h)F(f,\theta) dfd \theta[/math]
Fig 2: Time-mean wave energy flux along
West European coasts
[3] .
It can be shown easily that equations (5 and 6) reduce to (4) under the hypothesis of regular waves in deep water. The directional spectrum is deduced from directional wave buoys, SAR images or advanced spectral wind-wave models, known as third-generation models, such as WAM, WAVEWATCH III, TOMAWAC or SWAN. These models solve the spectral action balance equation without any a priori restrictions on the spectrum for the evolution of wave growth.
From TOMAWAC model, the near shore wave atlas ANEMOC along the coasts of Europe and France based on the numerical modelling of wave climate over 25 years has been produced
[4]. Using equation (4), the time-mean wave energy flux along West European coasts is obtained (see Fig. 2). This equation (4) still presents some limits like the definition of the bounds of the integration. Moreover, the objective to get data on the wave energy near coastal structures in shallow or intermediate water requires the use of numerical models that are able to represent the physical processes of wave propagation like the refraction, shoaling, dissipation by bottom friction or by wave breaking, interactions with tides and diffraction by islands.
The wave energy flux is therefore calculated usually for water depth superior to 20 m. This maximal energetic resource calculated in deep water will be limited in the coastal zone:
at low tide, by wave breaking;
at high tide in storm events, when the wave height exceeds the maximal operating conditions;
by the screen effect due to the presence of capes, spits, reefs, islands, ...
Technologies
According to the International Energy Agency (IEA), more than a hundred wave energy conversion systems are under development in the world. Among them, many can be integrated in coastal structures. Evaluations based on objective criteria are necessary in order to sort these systems and to determine the most promising solutions.
Criteria are in particular:
the converter efficiency: the aim is to estimate the energy produced by the converter. The efficiency gives an estimate of the number of kWh produced by the machine, but not of the cost.
the converter survivability: the capacity of the converter to survive in extreme conditions. The survivability gives an estimate of the cost, considering that the weaker the extreme loads are compared with the mean load, the smaller the cost is.
Unfortunately, few data are available in literature. In order to determine the characteristics of the different wave energy technologies, it is necessary to class them first in four main families
[3].
An interesting result is that the maximum average wave power that a point absorber can absorb [math]P_{abs} [/math](W) from the waves does not depend on its dimensions
[5]. It is theoretically possible to absorb a lot of energy with only a small buoy. It can be shown that for a body with a vertical axis of symmetry (but otherwise arbitrary geometry) oscillating in heave the capture (or absorption) width [math]L_{max}[/math](m) is as follows [5]:
[math]L_{max} = \frac{P_{abs}}{P_{w}} = \frac{\lambda}{2\pi}[/math] or [math]1 = \frac{P_{abs}}{P_{w}} \frac{2\pi}{\lambda}[/math]
Fig 4: Upper limit of mean wave power
absorption for a heaving point absorber.
where [math]{P_{w}}[/math] is the wave energy flux per unit crest length (W/m). An optimally damped buoy responds however efficiently to a relatively narrow band of wave periods.
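Combining the capture-width bound [math]L_{max} = \lambda/(2\pi)[/math] with the deep-water wavelength [math]\lambda = gT^2/(2\pi)[/math] (consistent with the deep-water group velocity given earlier) gives a quick order-of-magnitude estimate for regular waves. This is a sketch of the regular-wave bound only, not of the irregular-wave annual averages discussed next:

```python
import math

G = 9.81  # gravitational acceleration in m/s^2

def max_capture_width(T):
    """Upper bound lambda/(2*pi) on the capture width (m) of a heaving,
    axisymmetric point absorber, using the deep-water wavelength
    lambda = g * T**2 / (2*pi) for wave period T (s)."""
    wavelength = G * T ** 2 / (2.0 * math.pi)
    return wavelength / (2.0 * math.pi)

def max_absorbed_power(T, Pw):
    """Maximum mean absorbed power (W) for wave energy flux Pw (W/m)."""
    return Pw * max_capture_width(T)

# For T = 8 s regular waves carrying 30 kW/m:
print(max_capture_width(8.0))               # about 15.9 m
print(max_absorbed_power(8.0, 30e3) / 1e6)  # about 0.48 MW
```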
Babarit and Hals propose
[6] to derive that upper limit for the mean annual power in irregular waves at some typical locations where one could be interested in placing wave energy devices. The mean annual power absorption tends to increase linearly with the wave power resource. Overall, one can say that for a typical site whose resource is between 20-30 kW/m, the upper limit of mean wave power absorption is about 1 MW for a heaving WEC with a capture width between 30-50 m.
In order to complete these theoretical results and to describe the efficiency of the WEC in practical situations, the capture width ratio [math]\eta[/math] is also usually introduced. It is defined as the ratio between the absorbed power and the available wave power resource per meter of wave front times a relevant dimension B [m].
[math]\eta = \frac{P_{abs}}{P_{w}B} [/math]
The choice of the dimension B will depend on the working principle of the WEC. Most of the time, it should be chosen as the width of the device, but in some cases another dimension is more relevant. Estimations of this ratio [math]\eta[/math] are given
[6]: 33 % for OWC, 13 % for overtopping devices, 9-29 % for heaving buoys, 20-41 % for pitching devices. For energy converted to electricity, one must take into account moreover the energy losses in other components of the system.
Civil engineering
Never forget that the energy conversion is only a secondary function for the coastal structure. The primary function of the coastal structure is still protection. It is necessary to verify whether integration of WEC modifies performance criteria of overtopping and stability and to assess the consequences for the construction cost.
Integration of WEC in coastal structures will always be easier for a new structure than for an existing one. In the latter case, it requires some knowledge on the existing coastal structures. Solutions differ according to sea state but also to type of structures (rubble mound breakwater, caisson breakwaters with typically vertical sides). Some types of WEC are more appropriate with some types of coastal structures.
Fig 5: Several OWC (Oscillating water column) configurations (by Wavegen – Voith Hydro).
Environmental impact
Wave absorption, if it is significant, will change the hydrodynamics along the structure. If there is a mobile bottom in front of the structure, a sand deposit can occur. Ecosystems can also be altered by the change of hydrodynamics and by the acoustic noise generated by the machines.
Fig 6: Finistere area and locations of
the six sites (google map).
Study case: Finistere area
The Finistere area is an interesting study case because it is located in the far west of the Brittany peninsula and consequently receives the largest wave energy flux along the French coasts (see Fig. 2). This area, with a very rugged coast, moreover gathers many commercial ports, fishing ports and yachting ports. The area produces only a small part of the electricity it consumes and is located far from power plants. There is therefore a need for renewable energy produced locally. This issue is particularly important on the islands. The production of electricity from wave energy will have seasonal variations: the wave energy flux is indeed larger in winter than in summer. Consumption peaks in winter due to the heating of buildings, but consumption in summer is also strong due to the arrival of tourists.
Six sites are selected (see figure 7) for a preliminary study of wave energy flux and capacity of integration of wave energy converters. The wave energy flux is expected to be in the range of 1 – 10 kW/m. The length of each breakwater exceeds 200 meters. The wave power along each structure is therefore estimated between 200 kW and 2 MW. Note that there exist much longer coastal structures like for example Cherbourg (France) with a length of 6 kilometres.
(1) Roscoff (300 meters)
(2) Molène (200 meters)
(3) Le Conquet (200 meters)
(4) Esquibien (300 meters)
(5) Saint-Guénolé (200 meters)
(6) Lesconil (200 meters)
Fig.7: Finistere area, the six coastal structures and their length (google map).
Wave power flux along the structure depends on local parameters: bottom depth that fronts the structure toe, the presence of caps, the direction of waves and the orientation of the coastal structure. See figure 8 for the statistics of wave directions measured by a wave buoy located at the Pierres Noires Lighthouse. These measurements show that structures well-oriented to West waves should be chosen in priority. Peaks of consumption occur often with low temperatures in winter coming with winds from East- North-East directions. Structures well-oriented to East waves could therefore be also interesting even if the mean production is weak.
Fig 8: Wave measurements at the Pierres Noires Lighthouse.
Conclusion
Wave energy converters (WEC) in coastal structures can be considered a land-based renewable energy. The expected energy output is comparable to that of onshore wind farms, but not to that of offshore wind farms, whose number and power are much larger. As a land-based system, maintenance will be easy. Besides energy production, the advantages of such systems are:
- a “zero emission” port
- industrial tourism
- testing of WEC for future offshore installations
Acknowledgement
This work is in progress in the frame of the national project EMACOP funded by the French Ministry of Ecology, Sustainable Development and Energy.
See also
- Waves
- Wave transformation
- Groynes
- Seawall
- Seawalls and revetments
- Coastal defense techniques
- Wave energy converters
- Shore protection, coast protection and sea defence methods
- Overtopping resistant dikes
While reading the book on neural networks (http://neuralnetworksanddeeplearning.com/chap2.html) by Michael Nielsen, I had a problem understanding equation BP3, which reads as "the change in cost with respect to the bias of a neuron equals the error in that neuron". (Sorry, unable to put the equation here.)
This is just an application of the chain rule. The same chapter has a "Proof of the four fundamental equations" section, which proves BP1-2, while BP3-4 are left as an exercise to the reader. I agree that it's a good exercise indeed, which is why I encourage you to
stop here and try to prove it yourself using the chain rule.
Now, if you decided to read further, here's the sketch of a proof.
Recall equations (25) and (29), both
definitions:
$z^l$ is a linear transformation of $a^{l-1}$: $z^l = w^l a^{l-1} + b^l$
$\delta^l$ is a partial derivative of $C$ with respect to $z^l$: $\delta^l = \dfrac{\partial C}{\partial z^l}$
The chain rule itself:
the partial derivative of $C$ with respect to $b^l$:
$$\frac{\partial C}{\partial b^l_j} = \sum_k \frac{\partial C}{\partial z^l_k} \frac{\partial z^l_k}{\partial b^l_j}$$
Almost all elements in this sum will be zero, except for one when $k=j$. The first term in it is by definition $\delta^l_j$, the second term $\dfrac{\partial z^l_j}{\partial b^l_j}$ is $1$, because $z$ is linear with respect to $b$.
The derivative with respect to weights is taken the same way, and differs only in the last term: $z$ is linear with respect to $w$ as well, but with a coefficient $a^{l-1}$. |
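If you want to double-check BP3 numerically, here is a small finite-difference sanity check on a single sigmoid layer with quadratic cost (the weights, inputs and target below are arbitrary):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One sigmoid layer, two neurons: z_j = sum_k w[j][k]*a0[k] + b[j],
# quadratic cost C = 0.5 * ||a - y||^2. All values below are arbitrary.
w  = [[0.3, -0.2], [0.5, 0.1]]
b  = [0.1, -0.4]
a0 = [0.6, 0.9]
y  = [1.0, 0.0]

def cost(bias):
    z = [sum(w[j][k] * a0[k] for k in range(2)) + bias[j] for j in range(2)]
    a = [sigmoid(zj) for zj in z]
    return 0.5 * sum((a[j] - y[j]) ** 2 for j in range(2))

# BP1 for this cost: delta_j = (a_j - y_j) * sigma'(z_j)
z = [sum(w[j][k] * a0[k] for k in range(2)) + b[j] for j in range(2)]
a = [sigmoid(zj) for zj in z]
delta = [(a[j] - y[j]) * a[j] * (1 - a[j]) for j in range(2)]

# BP3 claims dC/db_j = delta_j; compare against central finite differences
eps = 1e-6
for j in range(2):
    bp = list(b); bp[j] += eps
    bm = list(b); bm[j] -= eps
    numeric = (cost(bp) - cost(bm)) / (2 * eps)
    print(abs(numeric - delta[j]) < 1e-8)  # True
```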
How does one calculate fully quantum mechanical rate ($\kappa$) in the golden-rule approximation for two linear potential energy surfaces?
Attempt:
Miller (83) proposes $\kappa=\int{Tr[\exp{(-\beta\hat{H})}\hat{F}\exp{(-i\hat{H}t/\hbar)}\hat{F}\exp{(i\hat{H}t/\hbar)}]}dt$
where the integrand is simply the flux-flux correlation function $C_{ff}(t)$, which can be calculated in Feynman's path-integral formalism. My attempt (so far in vain) at calculating $C_{ff}(t)$ is as follows:
$C_{ff}(t)=Tr[\exp{(-\beta\hat{H})}\hat{F}\exp{(-i\hat{H}t/\hbar)}\hat{F}\exp{(i\hat{H}t/\hbar)}]$
=$Tr[\exp{(-\beta\frac{\hat{H}}{2})}\hat{F}\exp{(-\beta\frac{\hat{H}}{2})}\exp{(-i\hat{H}t/\hbar)}\hat{F}\exp{(i\hat{H}t/\hbar)}]$
By cyclically permuting the operators we arrive at:
=$Tr[\exp{(i\hat{H}t/\hbar)}\exp{(-\beta\frac{\hat{H}}{2})}\hat{F}\exp{(-\beta\frac{\hat{H}}{2})}\exp{(-i\hat{H}t/\hbar)}\hat{F}]$
The Boltzmann operator and the quantum-mechanical propagator can be combined as follows:
=$Tr[\exp{\hat{H}(\frac{it}{\hbar}-\frac{\beta}{2})}\hat{F}\exp{\hat{H}(\frac{-it}{\hbar}-\frac{\beta}{2})}\hat{F}]$
In the golden-rule (non-adiabatic) case, we have two electronic states 0 and 1. So F is simply a projection operator. Hence one can obtain:
=$Tr[\exp{\hat{H_0}(\frac{it}{\hbar}-\frac{\beta}{2})}\exp{\hat{H_1}(\frac{-it}{\hbar}-\frac{\beta}{2})}]$
This is basically the kernel corresponding to the two potential energy surfaces $V_0$ and $V_1$. For a trajectory starting at $x_a$ and ending at $x_b$, we have
$C_{ff}(t)=\int{\int{K_0(x_a,x_b,\frac{it}{\hbar}-\frac{\beta}{2})K_1(x_b,x_a,\frac{-it}{\hbar}-\frac{\beta}{2})}}dx_adx_b$
For linear potential energy surfaces (PES), which in my case look as follows:
$V_0=k_0 x$
$V_1=k_1 x$
My kernels are:
$K_0=\sqrt{\frac{m}{2\pi t_0}}\exp{(-S_0)}$
$K_1=\sqrt{\frac{m}{2\pi t_1}}\exp{(-S_1)}$
The $S$'s correspond to the actions, which are:
$S_n(x_a,x_b,t_n)=\frac{m(x_a-x_b)^2}{2 t_n}-\frac{(x_a+x_b)k_nt_n}{2}-\frac{k_n^2t_n^3}{24m}$
The problem is that the integral for the flux-flux correlation function doesn't seem to converge with the imaginary arguments for the $t$'s. I am trying to integrate over $x_a$, $x_b$ and $t$ from $-\infty$ to $+\infty$. My final answer for the rate should look something like this:
$\exp{\frac{k_0^2k_1^2\hbar^2\beta^3}{24m(k_0-k_1)^2}}$
Is it a Gaussian integral with respect to $x_a$ and $x_b$? One has to be careful because there are also imaginary parts in the exponent. How does one reach the final answer for the rate with those integrals? Really confused! Any help is appreciated.
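To convince myself that the Boltzmann part of the complex time argument is what makes these oscillatory Gaussians converge, here is a quick 1D numerical check (pure Python, the value of $a$ is arbitrary): for $\mathrm{Re}(a)>0$, $\int e^{-a x^2}\,dx = \sqrt{\pi/a}$ even when $a$ is complex.

```python
import cmath

# For Re(a) > 0 the Gaussian integral converges even for complex a:
#   integral of exp(-a x^2) dx over the real line = sqrt(pi / a).
# Here Re(a) plays the role of beta/2 and Im(a) the role of t/hbar.
a = 0.5 - 1.5j
dx, L = 1e-3, 40.0              # crude Riemann sum over [-L, L]
n = int(2 * L / dx)
s = sum(cmath.exp(-a * (-L + i * dx) ** 2) for i in range(n)) * dx
exact = cmath.sqrt(cmath.pi / a)
print(abs(s - exact) < 1e-6)  # True
```

This suggests the $x_a$, $x_b$ integrals are legitimate complex Gaussian integrals as long as the quadratic form has a positive-definite real part, which the $\beta/2$ terms supply.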
Notice:
If you happen to see a question you know the answer to, please do chime in and help your fellow community members. We encourage our forum members to get involved: jump in and help your fellow researchers with their questions. The GATK forum is a community forum, and helping each other with the GATK tools and research is the cornerstone of our success as a genomics research community. We appreciate your help!
Test-drive the GATK tools and Best Practices pipelines on Terra
Check out this blog post to learn how you can get started with GATK and try out the pipelines in preconfigured workspaces (with a user-friendly interface!) without having to install anything.
Genotype Refinement workflow for germline short variants
Contents: 1. Overview | 2. Summary of workflow steps | 3. Output annotations | 4. Example | 5. More information about priors | 6. Mathematical details
1. Overview
The core GATK Best Practices workflow has historically focused on variant discovery --that is, the existence of genomic variants in one or more samples of a cohort-- and consistently delivers high quality results when applied appropriately. However, we know that the quality of the individual genotype calls coming out of the variant callers can vary widely based on the quality of the BAM data for each sample. The goal of the Genotype Refinement workflow is to use additional data to improve the accuracy of genotype calls and to filter genotype calls that are not reliable enough for downstream analysis. In this sense it serves as an optional extension of the variant calling workflow, intended for researchers whose work requires high-quality identification of individual genotypes.
While every study can benefit from increased data accuracy, this workflow is especially useful for analyses that are concerned with how many copies of each variant an individual has (e.g. in the case of loss of function) or with the transmission (or de novo origin) of a variant in a family.
If a “gold standard” dataset for SNPs is available, that can be used as a very powerful set of priors on the genotype likelihoods in your data. For analyses involving families, a pedigree file describing the relatedness of the trios in your study will provide another source of supplemental information. If neither of these applies to your data, the samples in the dataset itself can provide some degree of genotype refinement (see section 5 below for details).
After running the Genotype Refinement workflow, several new annotations will be added to the INFO and FORMAT fields of your variants (see below).
Note that GQ fields will be updated, and genotype calls may be modified. However, the Phred-scaled genotype likelihoods (PLs) which indicate the original genotype call (the genotype candidate with PL=0) will remain untouched. Any analysis that made use of the PLs will produce the same results as before. 2. Summary of workflow steps Input
Begin with recalibrated variants from VQSR at the end of the germline short variants pipeline. The filters applied by VQSR will be carried through the Genotype Refinement workflow.
Step 1: Derive posterior probabilities of genotypes Tool used: CalculateGenotypePosteriors
Using the Phred-scaled genotype likelihoods (PLs) for each sample, prior probabilities for a sample taking on a HomRef, Het, or HomVar genotype are applied to derive the posterior probabilities of the sample taking on each of those genotypes. A sample’s PLs were calculated by HaplotypeCaller using only the reads for that sample. By introducing additional data like the allele counts from the 1000 Genomes project and the PLs for other individuals in the sample’s pedigree trio, those estimates of genotype likelihood can be improved based on what is known about the variation of other individuals.
SNP calls from the 1000 Genomes project capture the vast majority of variation across most human populations and can provide very strong priors in many cases. At sites where most of the 1000 Genomes samples are homozygous variant with respect to the reference genome, the probability of a sample being analyzed of also being homozygous variant is very high.
For a sample for which both parent genotypes are available, the child’s genotype can be supported or invalidated by the parents’ genotypes based on Mendel’s laws of allele transmission. Even the confidence of the parents’ genotypes can be recalibrated, such as in cases where the genotypes output by HaplotypeCaller are apparent Mendelian violations.
Step 2: Filter low quality genotypes Tool used: VariantFiltration
After the posterior probabilities are calculated for each sample at each variant site, genotypes with GQ < 20 based on the posteriors are filtered out. GQ20 is widely accepted as a good threshold for genotype accuracy, indicating that there is a 99% chance that the genotype in question is correct. Tagging those low quality genotypes indicates to researchers that these genotypes may not be suitable for downstream analysis. However, as with the VQSR, a filter tag is applied, but the data is not removed from the VCF.
Step 3: Annotate possible de novo mutations Tool used: VariantAnnotator
Using the posterior genotype probabilities, possible de novo mutations are tagged. Low confidence de novos have child GQ >= 10 and AC < 4 or AF < 0.1%, whichever is more stringent for the number of samples in the dataset. High confidence de novo sites have all trio sample GQs >= 20 with the same AC/AF criterion.
Step 4: Functional annotation of possible biological effects Tool options: Funcotator (experimental)
Especially in the case of de novo mutation detection, analysis can benefit from the functional annotation of variants to restrict variants to exons and surrounding regulatory regions. Funcotator is a new tool that is currently still in development. If you would prefer to use a more mature tool, we recommend you look into SnpEff or Oncotator, but note that these are not GATK tools so we do not provide support for them.
3. Output annotations
The Genotype Refinement workflow adds several new info- and format-level annotations to each variant. GQ fields will be updated, and genotypes calculated to be highly likely to be incorrect will be changed. The Phred-scaled genotype likelihoods (PLs) carry through the pipeline without being changed. In this way, PLs can be used to derive the original genotypes in cases where sample genotypes were changed.
Population Priors
New INFO field annotation PG is a vector of the Phred-scaled prior probabilities of a sample at that site being HomRef, Het, and HomVar. These priors are based on the input samples themselves along with data from the supporting samples if the variant in question overlaps another in the supporting dataset.
Phred-Scaled Posterior Probability
New FORMAT field annotation PP is the Phred-scaled posterior probability of the sample taking on each genotype for the given variant context alleles. The PPs represent a better-calibrated estimate of genotype probabilities than the PLs and are recommended for use in further analyses instead of the PLs.
Genotype Quality
Current FORMAT field annotation GQ is updated based on the PPs. The calculation is the same as for GQ based on PLs.
Joint Trio Likelihood
New FORMAT field annotation JL is the Phred-scaled joint likelihood of the posterior genotypes for the trio being incorrect. This calculation is based on the PLs produced by HaplotypeCaller (before application of priors), but the genotypes used come from the posteriors. The goal of this annotation is to be used in combination with JP to evaluate the improvement in the overall confidence in the trio’s genotypes after applying CalculateGenotypePosteriors. The calculation of the joint likelihood is given as:
$$ JL = -10 \log_{10}\left(1 - GL_{mother} \times GL_{father} \times GL_{child}\right) $$
where the GLs are the genotype likelihoods in [0, 1] probability space.
Joint Trio Posterior
New FORMAT field annotation JP is the Phred-scaled posterior probability of the output posterior genotypes for the three samples being incorrect. The calculation of the joint posterior is given as:
$$ JP = -10 \log_{10}\left(1 - GP_{mother} \times GP_{father} \times GP_{child}\right) $$
where the GPs are the genotype posteriors in [0, 1] probability space.
Low Genotype Quality
New FORMAT field filter lowGQ indicates samples with posterior GQ less than 20. Filtered samples tagged with lowGQ are not recommended for use in downstream analyses.
High and Low Confidence De Novo
New INFO field annotation for sites at which at least one family has a possible de novo mutation. Following the annotation tag is a list of the children with de novo mutations. High and low confidence are output separately.
4. Example
Before:
1 1226231 rs13306638 G A 167563.16 PASS AC=2;AF=0.333;AN=6;… GT:AD:DP:GQ:PL 0/0:11,0:11:0:0,0,249 0/0:10,0:10:24:0,24,360 1/1:0,18:18:60:889,60,0
After:
1 1226231 rs13306638 G A 167563.16 PASS AC=3;AF=0.500;AN=6;…PG=0,8,22;… GT:AD:DP:GQ:JL:JP:PL:PP 0/1:11,0:11:49:2:24:0,0,249:49,0,287 0/0:10,0:10:32:2:24:0,24,360:0,32,439 1/1:0,18:18:43:2:24:889,60,0:867,43,0
The original call for the child (first sample) was HomRef with GQ0. However, given that, with high confidence, one parent is HomRef and one is HomVar, we expect the child to be heterozygous at this site. After family priors are applied, the child’s genotype is corrected and its GQ is increased from 0 to 49. Based on the allele frequency from 1000 Genomes for this site, the somewhat weaker population priors favor a HomRef call (PG=0,8,22). The combined effect of family and population priors still favors a Het call for the child.
The joint likelihood for this trio at this site is two, indicating that the genotype for one of the samples may have been changed. Specifically a low JL indicates that posterior genotype for at least one of the samples was not the most likely as predicted by the PLs. The joint posterior value for the trio is 24, which indicates that the GQ values based on the posteriors for all of the samples are at least 24. (See above for a more complete description of JL and JP.)
5. More information about priors
The Genotype Refinement Pipeline uses Bayes’s Rule to combine independent data with the genotype likelihoods derived from HaplotypeCaller, producing more accurate and confident genotype posterior probabilities. Different sites will have different combinations of priors applied based on the overlap of each site with external, supporting SNP calls and on the availability of genotype calls for the samples in each trio.
Input-derived Population Priors
If the input VCF contains at least 10 samples, then population priors will be calculated based on the discovered allele count for every called variant.
Supporting Population Priors
Priors derived from supporting SNP calls can only be applied at sites where the supporting calls overlap with called variants in the input VCF. The values of these priors vary based on the called reference and alternate allele counts in the supporting VCF. Higher allele counts (for ref or alt) yield stronger priors.
Family Priors
The strongest family priors occur at sites where the called trio genotype configuration is a Mendelian violation. In such a case, each Mendelian violation configuration is penalized by a de novo mutation probability (currently $10^{-6}$). Confidence also propagates through a trio. For example, two GQ60 HomRef parents can substantially boost a low GQ HomRef child, and a GQ60 HomRef child and parent can improve the GQ of the second parent. Application of family priors requires the child to be called at the site in question. If one parent has a no-call genotype, priors can still be applied, but the potential for confidence improvement is not as great as in the 3-sample case.
Caveats
Right now family priors can only be applied to biallelic variants and population priors can only be applied to SNPs. Family priors only work for trios.
6. Mathematical details
Note that family priors are calculated and applied before population priors. The opposite ordering would result in overly strong population priors because they are applied to the child and parents and then compounded when the trio likelihoods are multiplied together.
Review of Bayes’s Rule
HaplotypeCaller outputs the likelihoods of observing the read data given that the genotype is actually HomRef, Het, and HomVar. To convert these quantities to the probability of the genotype given the read data, we can use Bayes’s Rule. Bayes’s Rule dictates that the probability of a parameter given observed data is equal to the likelihood of the observations given the parameter multiplied by the prior probability that the parameter takes on the value of interest, normalized by the prior times likelihood for all parameter values:
$$ P(\theta|Obs) = \frac{P(Obs|\theta)P(\theta)}{\sum_{\theta} P(Obs|\theta)P(\theta)} $$
In the best practices pipeline, we interpret the genotype likelihoods as probabilities by implicitly converting the genotype likelihoods to genotype probabilities using non-informative or flat priors, for which each genotype has the same prior probability. However, in the Genotype Refinement Pipeline we use independent data such as the genotypes of the other samples in the dataset, the genotypes in a “gold standard” dataset, or the genotypes of the other samples in a family to construct more informative priors and derive better posterior probability estimates.
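The conversion between Phred-scaled likelihoods and probabilities, and the effect of a non-flat prior, can be sketched in a few lines. This is an illustration of the arithmetic, not GATK code, and the PL and prior values are made up:

```python
def pl_to_probs(pls):
    """Convert Phred-scaled likelihoods (PL = -10*log10 L) to normalized probabilities."""
    raw = [10.0 ** (-pl / 10.0) for pl in pls]
    total = sum(raw)
    return [r / total for r in raw]

def apply_prior(likelihoods, priors):
    """Bayes' rule: posterior proportional to likelihood times prior, renormalized."""
    post = [l * p for l, p in zip(likelihoods, priors)]
    total = sum(post)
    return [x / total for x in post]

# PLs for (HomRef, Het, HomVar); PL = 0 marks the most likely genotype
pls = [0, 24, 360]
lik = pl_to_probs(pls)
flat = apply_prior(lik, [1/3, 1/3, 1/3])     # flat prior: posteriors = normalized PLs
skew = apply_prior(lik, [0.90, 0.09, 0.01])  # illustrative prior favouring HomRef
print(max(range(3), key=lambda g: flat[g]))  # 0: HomRef is called either way here,
print(skew[0] > flat[0])                     # but the prior raises HomRef confidence
```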
Calculation of Population Priors
Given a set of samples in addition to the sample of interest (ideally non-related, but from the same ethnic population), we can derive the prior probability of the genotype of the sample of interest by modeling the sample’s alleles as two independent draws from a pool consisting of the set of all the supplemental samples’ alleles. (This follows rather naturally from the Hardy-Weinberg assumptions.) Specifically, this prior probability will take the form of a multinomial Dirichlet distribution parameterized by the allele counts of each allele in the supplemental population. In the biallelic case the priors can be calculated as follows:
$$ P(GT = HomRef) = \dbinom{2}{0} \frac{\Gamma(nSamples)\Gamma(RefCount + 2)}{\Gamma(nSamples + 2)\Gamma(RefCount)} $$
$$ P(GT = Het) = \dbinom{2}{1} \frac{\Gamma(nSamples)\Gamma(RefCount + 1)\Gamma(AltCount + 1)}{\Gamma(nSamples + 2)\Gamma(RefCount)\Gamma(AltCount)} $$
$$ P(GT = HomVar) = \dbinom{2}{2} \frac{\Gamma(nSamples)\Gamma(AltCount + 2)}{\Gamma(nSamples + 2)\Gamma(AltCount)} $$
where Γ is the Gamma function, an extension of the factorial function.
The prior genotype probabilities based on this distribution scale intuitively with number of samples. For example, a set of 10 samples, 9 of which are HomRef yield a prior probability of another sample being HomRef with about 90% probability whereas a set of 50 samples, 49 of which are HomRef yield a 97% probability of another sample being HomRef.
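The scaling behaviour can be checked with a short script. This is a sketch, not GATK's implementation: it reads the Dirichlet-multinomial prior in plain probability space and assumes the Gamma-function arguments are allele counts (RefCount + AltCount in place of nSamples). Under those assumptions the values land close to the figures quoted above:

```python
from math import comb, exp, lgamma

def population_priors(ref_count, alt_count):
    """Dirichlet-multinomial prior over (HomRef, Het, HomVar): two alleles
    drawn from a pool with the given ref/alt allele counts (lgamma keeps
    the intermediate Gamma ratios numerically stable)."""
    n = ref_count + alt_count
    priors = []
    for k in (0, 1, 2):  # number of alt alleles drawn
        logp = (lgamma(n) - lgamma(n + 2)
                + lgamma(ref_count + 2 - k) - lgamma(ref_count)
                + lgamma(alt_count + k) - lgamma(alt_count))
        priors.append(comb(2, k) * exp(logp))
    return priors

p10 = population_priors(19, 1)  # 10 samples: 9 HomRef + 1 Het -> 19 ref, 1 alt allele
p50 = population_priors(99, 1)  # 50 samples: 49 HomRef + 1 Het
print(round(p10[0], 2), round(p50[0], 2))  # 0.9 0.98
print(abs(sum(p10) - 1.0) < 1e-12)         # the three priors sum to one
```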
Calculation of Family Priors
Given a genotype configuration for a given mother, father, and child trio, we set the prior probability of that genotype configuration as follows:
$$ P(G_M,G_F,G_C) = \cases{ 1-10\mu-2\mu^2 & no MV \cr \mu & 1 MV \cr \mu^2 & 2 MVs} $$
where the 10 configurations with a single Mendelian violation are penalized by the de novo mutation probability $\mu$ and the two configurations with two Mendelian violations by $\mu^2$. The remaining configurations are considered valid and are assigned the remaining probability to sum to one.
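The bookkeeping behind these penalties can be verified by brute force (a sketch, not GATK's code): enumerating all 27 biallelic trio configurations and counting Mendelian violations recovers the 15 valid, 10 single-violation and 2 double-violation configurations described above.

```python
from itertools import product

def mendelian_violations(mother, father, child):
    """Count violations for one biallelic trio; genotypes are alt-allele
    counts 0/1/2, so a parent can transmit: 0 -> {0}, 1 -> {0,1}, 2 -> {1}."""
    transmit = {0: {0}, 1: {0, 1}, 2: {1}}
    child_alleles = {0: (0, 0), 1: (0, 1), 2: (1, 1)}[child]
    best = 2
    for c_m, c_f in (child_alleles, child_alleles[::-1]):
        v = (c_m not in transmit[mother]) + (c_f not in transmit[father])
        best = min(best, v)
    return best

counts = {0: 0, 1: 0, 2: 0}
for m, f, c in product((0, 1, 2), repeat=3):
    counts[mendelian_violations(m, f, c)] += 1
print(counts)  # {0: 15, 1: 10, 2: 2}
```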
This prior is applied to the joint genotype combination of the three samples in the trio. To find the posterior for any single sample, we marginalize over the remaining two samples, for example to find the posterior probability of the child having a HomRef genotype:
$$ P(G_C = HomRef \mid D) = \sum_{G_M} \sum_{G_F} P(G_M, G_F, G_C = HomRef \mid D) $$
This quantity P(Gc|D) is calculated for each genotype, then the resulting vector is Phred-scaled and output as the Phred-scaled posterior probabilities (PPs). |
First we must check that $T$ takes values in $W$, as the problem implies but does not prove. Consider $f \in C([-1 , 1])$. The function $T(f)$ lies in $W$ if and only if $T(f)(0) = 0$. We quickly check that it satisfies this condition:\[ T(f)(0) = f(0) - f(0) = 0 . \]
Now that we know $T$ takes values in $W$, we show that it is a linear transformation. We will prove this by showing that $T$ satisfies both of the axioms for linear transformations. First, suppose that $f, g \in C([-1, 1])$. Then\begin{align*}T(f+g)(x) &= (f+g)(x) - (f+g)(0) \\&= f(x) + g(x) - f(0) - g(0) \\&= f(x) - f(0) + g(x) - g(0) \\&= T(f)(x) + T(g)(x) . \end{align*}
Now for a scalar $c \in \mathbb{R}$ we have\begin{align*}T( cf )(x) &= (cf)(x) - (cf)(0) \\&= c f(x) - c f(0) \\&= c ( f(x) - f(0) ) \\&= c T(f)(x) . \end{align*}
Thus we have proven that $T$ is a linear transformation.
The nullspace of $T$
Next, we will prove that the nullspace of $T$ is\[\mathcal{N}(T) = \{ f \in C([-1 , 1]) \mid f(x) \mbox{ is a constant function } \}.\]Suppose that $f \in \mathcal{N}(T)$, that is, $f$ satisfies\[0 = T(f)(x) = f(x) - f(0).\]Then $f(x) = f(0)$ for all $x \in [-1, 1]$. This means that $f$ is a constant function. On the other hand, if $f(x)$ is a constant function, then $T(f)(x) = f(x) - f(0) = 0$. We see that $f$ lies in the nullspace of $T$ if and only if it is a constant function.
The range of $T$
Next, we want to find the range of $T$. We claim that\[\mathcal{R}(T) = W.\]
Suppose that $f \in W$, that is, it is a function such that $f(0) = 0$. Then\[T(f)(x) = f(x) - f(0) = f(x),\]and so we see that $f \in \mathcal{R}(T)$. Conversely, suppose that $f \in \mathcal{R}(T)$, so that $f = T(g)$ for some $g \in C([-1, 1])$. Then\[f(0) = T(g)(0) = g(0) - g(0) = 0,\]and so $f \in W$. Thus every $f \in C([-1, 1])$ lies in the range of $T$ if and only if $f(0) = 0$.
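As a quick numerical sanity check (not a substitute for the proof above), we can evaluate $T$ on sample functions; the test functions, scalar, and sample points below are arbitrary:

```python
import math

def T(f):
    """(Tf)(x) = f(x) - f(0), the transformation in the problem."""
    return lambda x: f(x) - f(0)

f, g, c = math.exp, math.sin, 2.5    # arbitrary test functions and scalar
xs = [-1.0, -0.3, 0.0, 0.7, 1.0]     # sample points in [-1, 1]

print(T(f)(0) == 0)  # T(f) lies in W
print(all(abs(T(lambda t: f(t) + g(t))(x) - (T(f)(x) + T(g)(x))) < 1e-12 for x in xs))
print(all(abs(T(lambda t: c * f(t))(x) - c * T(f)(x)) < 1e-12 for x in xs))
```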
A few years ago, I noticed a glitch in a paper that colleagues of mine had published back in 2002. A less-than sign in an inequality should have been a less-than-or-equals. This might have been a transcription error during the typing-up of the work, or it could have entered during some other phase of the writing process. Happens to the best of us! Algebraically, it was equivalent to solving an equation
\[ ax^2 + bx + c = 0 \] with the quadratic formula, \[ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a},\] and neglecting the fact that if the expression under the square root sign equals zero, you still get a real solution.
This sort of glitch is usually not worth a lot of breath, though I do tend to write in when I notice them, to keep down the overall confusingness of the scientific literature. In this case, however, there’s a surprise bonus. The extra solutions you pick up turn out to have a very interesting structure to them, and they include mathematical objects that were already interesting for other reasons. So, I wrote a little note explaining this. In order to make it self-contained, I had to lay down a bit of background, and with one thing and another, the little note became more substantial.
Too substantial, I learned: The journal that published the original paper wouldn’t take it as a Comment on that paper, because it said too many new things! Eventually, after a little more work, it found a home: B. C. Stacey, “SIC-POVMs and Compatibility among Quantum States,” Mathematics 4,2 (2016): 36 arXiv:1404.3774 [quant-ph].
The number of citations that Google Scholar lists for this paper (one officially published in a journal, mind) fluctuates between 5 and 6. I think it wavers on whether to include a paper by Szymusiak and Słomczyński (
Phys. Rev. A 94, 012122 = arXiv:1512.01735 [quant-ph]). Also, if you compare against the NASA ADS results, it turns out that Google Scholar is missing other citations, too, including a journal-published item by Bellomo et al. ( Int. J. Quant. Info. 13, 2 (2015), 1550015 = arXiv:1504.02077 [quant-ph]).
As I said in 2014, this would be a rather petty thing to care about,
if people didn’t rely on these metrics to make decisions! And, as it happens, all the problems I noted then are still true now. |
[math]logit(p) = log(odds(p)) = log(\frac{p}{1-p}) \\
expit(p) = \frac{exp(p)}{1+exp(p)}
[/math]
In order to prove these are inverses, I am going to prove that
[math]\begin{eqnarray}
p &=& logit(expit(p)) \\
&=& log(\frac{expit(p)}{1-expit(p)}) \\
&=& log\Big[\frac{\frac{exp(p)}{1+exp(p)}}{1-\frac{exp(p)}{1+exp(p)}}\Big] \\
&=& log(\frac{exp(p)}{1+exp(p)}) - log(1-\frac{exp(p)}{1+exp(p)}) \\
&=& log(exp(p)) - log(1+exp(p)) - log(1-\frac{exp(p)}{1+exp(p)}) \\
&=& p - log(1+exp(p)) - log(1-\frac{exp(p)}{1+exp(p)}) \\
&=& p - log(1+exp(p)) - log(1-(\frac{exp(p)+1-1}{1+exp(p)})) \\
&=& p - log(1+exp(p)) - log(1-(\frac{exp(p)+1}{1+exp(p)}-\frac{1}{1+exp(p)})) \\
&=& p - log(1+exp(p)) - log(1-(1-\frac{1}{1+exp(p)})) \\
&=& p - log(1+exp(p)) - log(\frac{1}{1+exp(p)}) \\
&=& p - log(1+exp(p)) - (log(1)-log(1+exp(p))) \\
&=& p - log(1+exp(p)) - log(1) + log(1+exp(p)) \\
&=& p - 0 \\
&=& p \\
QED
\end{eqnarray}
[/math]
So it's not that hard in terms of the number of reductions; the hardest part for me was the x/(1+x) to 1-1/(1+x) step, which wasn't obvious. Anyway, I think I should prove the other direction too, p = expit(logit(p)), to be done, but not now. Maybe in a future edit.
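In the meantime, a quick numerical check that the two functions really do invert each other in both directions (a handful of arbitrary sample points):

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def expit(x):
    return math.exp(x) / (1 + math.exp(x))

# Round trips in both directions
print(all(abs(logit(expit(x)) - x) < 1e-9 for x in (-5.0, -1.0, 0.0, 2.5)))
print(all(abs(expit(logit(p)) - p) < 1e-9 for p in (0.01, 0.25, 0.5, 0.9)))
```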
As a preview for where this is going, I want to write a post justifying from proofs that Bayes' theorem can be rewritten as Belief(subject x after evidence e) = Belief(x prior) + Evidence(e about x), where Belief(x) = logit(x) and Evidence(e about x) = log(P(e|x)/P(e|!x)). But right now I'm struggling to see why that shouldn't be a proportionality symbol, even though in some test calculations with the fully expanded Bayes theorem vs. this, the results are equal...
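It is an exact equality: taking the odds P(x|e)/P(!x|e) makes the shared denominator P(e) cancel, so posterior odds = prior odds × likelihood ratio, and taking logs gives the additive form with no proportionality left over. A check with made-up numbers:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def expit(x):
    return math.exp(x) / (1 + math.exp(x))

prior, p_e_x, p_e_notx = 0.2, 0.7, 0.1   # arbitrary P(x), P(e|x), P(e|!x)

# Full Bayes: P(x|e) = P(e|x)P(x) / (P(e|x)P(x) + P(e|!x)P(!x))
posterior = p_e_x * prior / (p_e_x * prior + p_e_notx * (1 - prior))

# Log-odds form: Belief(x|e) = Belief(x) + Evidence(e about x)
log_odds = logit(prior) + math.log(p_e_x / p_e_notx)

print(abs(expit(log_odds) - posterior) < 1e-12)  # True: equality, not proportionality
```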
Posted on 2017-11-19 by Jach Permalink: https://www.thejach.com/view/id/350 Trackback URL: https://www.thejach.com/view/2017/11/logit_and_expit |
Skills to Develop
To use Spearman rank correlation to test the association between two ranked variables, or one ranked variable and one measurement variable. You can also use Spearman rank correlation instead of linear regression/correlation for two measurement variables if you're worried about non-normality, but this is not usually necessary.
When to use it
Use Spearman rank correlation when you have two ranked variables, and you want to see whether the two variables covary; whether, as one variable increases, the other variable tends to increase or decrease. You also use Spearman rank correlation if you have one measurement variable and one ranked variable; in this case, you convert the measurement variable to ranks and use Spearman rank correlation on the two sets of ranks.
For example, Melfi and Poyser (2007) observed the behavior of \(6\) male colobus monkeys (
Colobus guereza) in a zoo. By seeing which monkeys pushed other monkeys out of their way, they were able to rank the monkeys in a dominance hierarchy, from most dominant to least dominant. This is a ranked variable; while the researchers know that Erroll is dominant over Milo because Erroll pushes Milo out of his way, and Milo is dominant over Fraiser, they don't know whether the difference in dominance between Erroll and Milo is larger or smaller than the difference in dominance between Milo and Fraiser. After determining the dominance rankings, Melfi and Poyser (2007) counted eggs of Trichuris nematodes per gram of monkey feces, a measurement variable. They wanted to know whether social dominance was associated with the number of nematode eggs, so they converted eggs per gram of feces to ranks and used Spearman rank correlation.
| Monkey name | Dominance rank | Eggs per gram | Eggs per gram (rank) |
|---|---|---|---|
| Erroll | 1 | 5777 | 1 |
| Milo | 2 | 4225 | 2 |
| Fraiser | 3 | 2674 | 3 |
| Fergus | 4 | 1249 | 4 |
| Kabul | 5 | 749 | 6 |
| Hope | 6 | 870 | 5 |
Some people use Spearman rank correlation as a non-parametric alternative to linear regression and correlation when they have two measurement variables and one or both of them may not be normally distributed; this requires converting both measurements to ranks. Linear regression and correlation assume that the data are normally distributed, while Spearman rank correlation does not make this assumption, so people think that Spearman correlation is better. In fact, numerous simulation studies have shown that linear regression and correlation are not sensitive to non-normality; one or both measurement variables can be very non-normal, and the probability of a false positive (\(P<0.05\), when the null hypothesis is true) is still about \(0.05\) (Edgell and Noon 1984, and references therein). It's not incorrect to use Spearman rank correlation for two measurement variables, but linear regression and correlation are much more commonly used and are familiar to more people, so I recommend using linear regression and correlation any time you have two measurement variables, even if they look non-normal.
Null hypothesis
The null hypothesis is that the Spearman correlation coefficient, \(\rho \) ("rho"), is \(0\). A \(\rho \) of \(0\) means that the ranks of one variable do not covary with the ranks of the other variable; in other words, as the ranks of one variable increase, the ranks of the other variable do not increase (or decrease).
Assumption
When you use Spearman rank correlation on one or two measurement variables converted to ranks, it does not assume that the measurements are normal or homoscedastic. It also doesn't assume the relationship is linear; you can use Spearman rank correlation even if the association between the variables is curved, as long as the underlying relationship is monotonic (as \(X\) gets larger, \(Y\) keeps getting larger, or keeps getting smaller). If you have a non-monotonic relationship (as \(X\) gets larger, \(Y\) gets larger and then gets smaller, or \(Y\) gets smaller and then gets larger, or something more complicated), you shouldn't use Spearman rank correlation.
Like linear regression and correlation, Spearman rank correlation assumes that the observations are independent.
How the test works
Spearman rank correlation first converts each measurement variable to ranks.
When you use linear regression and correlation on the ranks, the Pearson correlation coefficient (\(r\)) is now the Spearman correlation coefficient, \(\rho \), and you can use it as a measure of the strength of the association. For \(11\) or more observations, you calculate the test statistic using the same equation as for linear regression and correlation, substituting \(\rho \) for \(r\): \(t_s=\frac{\sqrt{d.f.}\times \rho}{\sqrt{1-\rho ^2}}\). If the null hypothesis (that \(\rho =0\)) is true, \(t_s\) is \(t\)-distributed with \(n-2\) degrees of freedom.
If you have \(10\) or fewer observations, the \(P\) value calculated from the \(t\)-distribution is somewhat inaccurate. In that case, you should look up the \(P\) value in a table of critical values of Spearman's \(\rho \) for your sample size. My Spearman spreadsheet does this for you.
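The conversion from \(\rho \) to a \(t\)-statistic is only a couple of lines of Python. This is a sketch, not part of the original handbook; the values \(\rho =-0.76\) and \(n=18\) are illustrative (they match the frigatebird example later on this page):

```python
import math

def spearman_t(rho, n):
    # t_s = sqrt(d.f.) * rho / sqrt(1 - rho^2), with d.f. = n - 2
    df = n - 2
    return math.sqrt(df) * rho / math.sqrt(1 - rho**2)

spearman_t(-0.76, 18)   # illustrative values; gives roughly -4.68
```

The resulting \(t_s\) would then be compared to a \(t\)-distribution with \(n-2\) degrees of freedom.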
You will almost never use a regression line for either description or prediction when you do Spearman rank correlation, so don't calculate the equivalent of a regression line.
For the Colobus monkey example, Spearman's \(\rho \) is \(0.943\), and the \(P\) value from the table is less than \(0.025\), so the association between social dominance and nematode eggs is significant.
Example
Fig. 5.2.1 Magnificent frigatebird, Fregata magnificens.
Volume (cm³)   Frequency (Hz)
1760           529
2040           566
2440           473
2550           461
2730           465
2740           532
3010           484
3080           527
3370           488
3740           485
4910           478
5090           434
5090           468
5380           449
5850           425
6730           389
6990           421
7960           416
Males of the magnificent frigatebird (Fregata magnificens) have a large red throat pouch. They visually display this pouch and use it to make a drumming sound when seeking mates. Madsen et al. (2004) wanted to know whether females, who presumably choose mates based on their pouch size, could use the pitch of the drumming sound as an indicator of pouch size. The authors estimated the volume of the pouch and the fundamental frequency of the drumming sound in \(18\) males.
There are two measurement variables, pouch size and pitch. The authors analyzed the data using Spearman rank correlation, which converts the measurement variables to ranks, and the relationship between the variables is significant (Spearman's \(\rho =-0.76,\; 16 d.f.,\; P=0.0002\)). The authors do not explain why they used Spearman rank correlation; if they had used regular correlation, they would have obtained \(r=-0.82,\; P=0.00003\).
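The analysis is easy to reproduce. Here is a pure-Python sketch (not part of the original handbook, and no SciPy assumed) that ranks the volume and frequency values from the table above, handles the tie between the two 5090 cm³ pouches with midranks, and computes Pearson's \(r\) on the ranks, which is Spearman's \(\rho \):

```python
from statistics import mean

volume = [1760, 2040, 2440, 2550, 2730, 2740, 3010, 3080, 3370,
          3740, 4910, 5090, 5090, 5380, 5850, 6730, 6990, 7960]
freq = [529, 566, 473, 461, 465, 532, 484, 527, 488,
        485, 478, 434, 468, 449, 425, 389, 421, 416]

def ranks(xs):
    # average (mid-) ranks, handling ties such as the two 5090 values
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # ranks are 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx)**2 for a in x) * sum((b - my)**2 for b in y)) ** 0.5
    return num / den

rho = pearson(ranks(volume), ranks(freq))   # about -0.76
```

With SciPy installed, `scipy.stats.spearmanr(volume, freq)` gives the same value in one call.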
Graphing the results
You can graph Spearman rank correlation data the same way you would for a linear regression or correlation. Don't put a regression line on the graph, however; it would be misleading to put a linear regression line on a graph when you've analyzed it with rank correlation.
How to do the test

Spreadsheet
I've put together a spreadsheet, spearman.xls, that will perform a Spearman rank correlation on up to \(1000\) observations. With small numbers of observations (\(10\) or fewer), the spreadsheet looks up the \(P\) value in a table of critical values.
Web page
This web page will do Spearman rank correlation.
R
Salvatore Mangiafico's R Companion has a sample R program for Spearman rank correlation.

SAS
Use PROC CORR with the SPEARMAN option to do Spearman rank correlation. Here is an example using the bird data from the correlation and regression web page:
PROC CORR DATA=birds SPEARMAN;
VAR species latitude;
RUN;

The results include the Spearman correlation coefficient \(\rho \), analogous to the \(r\) value of a regular correlation, and the \(P\) value:
Spearman Correlation Coefficients, \(N = 17\)
Prob > |r| under H0: Rho=0
           species    latitude
species    1.00000    -0.36263   Spearman correlation coefficient
                       0.1526    P value
latitude   -0.36263    1.00000
            0.1526

References

Edgell, S.E., and S.M. Noon. 1984. Effect of violation of normality on the t-test of the correlation coefficient. Psychological Bulletin 95: 576-583.

Madsen, V., T.J.S. Balsby, T. Dabelsteen, and J.L. Osorno. 2004. Bimodal signaling of a sexually selected trait: gular pouch drumming in the magnificent frigatebird. Condor 106: 156-160.

Melfi, V., and F. Poyser. 2007. Trichuris burdens in zoo-housed Colobus guereza. International Journal of Primatology 28: 1449-1456.

Contributor
John H. McDonald (University of Delaware) |
I'm now interested in the Schrödinger equation
[tex] \Big(-\frac{\hbar^2}{2m}\partial_x^2 + V(x)\Big)\psi(x) = E\psi(x) [/tex]
where [itex]V[/itex] does not contain infinities, and satisfies [itex]V(x+R)=V(x)[/itex] with some [itex]R[/itex]. I have almost already understood the Bloch's theorem! But I still have some little problems left. I shall first describe what I already know, and then what's the problem.
If a wave function satisfies a relation [itex]\psi(x+R)=A\psi(x)[/itex] with some [itex]A[/itex], then it follows that [itex]\psi(x)=e^{Cx}u(x)[/itex] with some [itex]C[/itex] and [itex]u(x)[/itex], so that [itex]u(x+R)=u(x)[/itex]. This can be proven by setting
[tex] u(x) = e^{-\frac{\log(A)}{R}x} \psi(x) [/tex]
and checking that this [itex]u(x)[/itex] is periodic.
By basic theory of DEs, there exist two linearly independent solutions [itex]\psi_1,\psi_2[/itex] to the Schrödinger equation, and all other solutions can be written as linear combinations of these. (This is done with fixed energy [itex]E[/itex].) Now the real task is to show that [itex]\psi_1,\psi_2[/itex] can be chosen to be of the form [itex]e^{C_1x}u_1(x)[/itex] and [itex]e^{C_2x}u_2(x)[/itex].
Suppose that at least one of [itex]\psi_1,\psi_2[/itex] is not of this form, and denote it simply by [itex]\psi[/itex]. Now [itex]\psi(x)[/itex] and [itex]\psi(x+R)[/itex] are linearly independent solutions to the Schrödinger equation, so there exist constants [itex]A,B[/itex] so that
[tex] \psi(x+2R) = A\psi(x+R) + B\psi(x). [/tex]
Consider then the following linear combinations.
[tex] \left(\begin{array}{c} \phi_1(x) \\ \phi_2(x) \\ \end{array}\right) = \left(\begin{array}{cc} D_{11} & D_{12} \\ D_{21} & D_{22} \\ \end{array}\right) \left(\begin{array}{c} \psi(x) \\ \psi(x+R) \\ \end{array}\right) [/tex]
Direct calculations give
[tex] \left(\begin{array}{c} \phi_1(x + R) \\ \phi_2(x + R) \\ \end{array}\right) = \left(\begin{array}{cc} D_{11} & D_{12} \\ D_{21} & D_{22} \\ \end{array}\right) \left(\begin{array}{cc} 0 & 1 \\ B & A \\ \end{array}\right) \left(\begin{array}{c} \psi(x) \\ \psi(x+R) \\ \end{array}\right) [/tex]
and
[tex] \left|\begin{array}{cc} -\lambda & 1 \\ B & A - \lambda \\ \end{array}\right| = 0 \quad\quad\implies\quad\quad \lambda = \frac{A}{2}\pm \sqrt{B + \frac{A^2}{4}} [/tex]
This means, that if [itex]B + \frac{A^2}{4}\neq 0[/itex], then we can choose [itex]\boldsymbol{D}[/itex] so that
[tex] \boldsymbol{D} \left(\begin{array}{cc} 0 & 1 \\ B & A \\ \end{array}\right) = \left(\begin{array}{cc} \lambda_1 & 0 \\ 0 & \lambda_2 \\ \end{array}\right) \boldsymbol{D} [/tex]
and then we obtain two linearly independent solutions [itex]\phi_1,\phi_2[/itex] which satisfy [itex]\phi_k(x+R)=\lambda_k\phi_k(x)[/itex], [itex]k=1,2[/itex].
The only thing that still bothers me is that I see no reason why [itex]B + \frac{A^2}{4} = 0[/itex] could not happen. The matrix
[tex] \left(\begin{array}{cc} 0 & 1 \\ -\frac{A^2}{4} & A \\ \end{array}\right) [/tex]
is not diagonalizable. It could be that for some reason [itex]B[/itex] will never be like this, but I cannot know this for sure. If [itex]B[/itex] can be like this, how does one prove Bloch's theorem then?
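As a numerical illustration of the construction in the post (this is not part of the original thread): the pair [itex](\psi(x), \psi(x+R))[/itex] trick is equivalent to computing the monodromy (transfer) matrix [itex]M[/itex] of the ODE over one period. The sketch below uses units [itex]\hbar = m = 1[/itex] and an arbitrary cosine potential, both my own assumptions. Because the Wronskian of the Schrödinger equation is constant, [itex]\det M = 1[/itex], so the two eigenvalues multiply to 1, and the non-diagonalizable case the post worries about ([itex]B + A^2/4 = 0[/itex]) is exactly [itex]|\mathrm{tr}\, M| = 2[/itex].

```python
import math

R = 1.0  # period (arbitrary choice for this sketch)

def V(x):
    # example periodic potential (my own choice, not from the post)
    return 2.0 * math.cos(2.0 * math.pi * x / R)

def solve(E, y0, n=2000):
    # RK4 integration of psi'' = 2*(V(x) - E)*psi over one period
    # (units hbar = m = 1), for the state y = (psi, psi').
    h = R / n

    def f(x, y):
        return (y[1], 2.0 * (V(x) - E) * y[0])

    x, y = 0.0, tuple(y0)
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, (y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
        k3 = f(x + h/2, (y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
        k4 = f(x + h,   (y[0] + h*k3[0],   y[1] + h*k3[1]))
        y = (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
        x += h
    return y

def monodromy(E):
    # columns: time evolution of the initial data (1,0) and (0,1)
    a = solve(E, (1.0, 0.0))
    b = solve(E, (0.0, 1.0))
    return ((a[0], b[0]), (a[1], b[1]))

M = monodromy(1.0)
det = M[0][0]*M[1][1] - M[0][1]*M[1][0]   # constant Wronskian => det M = 1
tr = M[0][0] + M[1][1]
# eigenvalues solve lambda^2 - tr*lambda + det = 0, so lambda1*lambda2 = 1;
# the defective case (B + A^2/4 = 0 in the post's notation) is |tr| = 2.
```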
Answer
$128.0$ ft
Work Step by Step
Let $d$ be the distance between the automobile and the person. Then $\sin23^{\circ}=\frac{50}{d}$, so $d=\frac{50}{\sin23^{\circ}}\approx128.0$ ft.
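A quick numeric check of this calculation (a sketch, not part of the original answer):

```python
import math

# d = 50 / sin(23 degrees), as in the work step above
d = 50 / math.sin(math.radians(23))   # about 128.0
```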
Answer
$\theta =60^{\circ}$
Work Step by Step
We know that the two pieces have equal velocities and equal masses, so they form the same angle with the x-axis. We find that this angle satisfies $\cos\theta = \frac{1}{2}$, so $\theta =60^{\circ}$.
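A quick numeric check of the final step (a sketch, not part of the original answer):

```python
import math

# the angle whose cosine is 1/2, in degrees
theta = math.degrees(math.acos(0.5))   # 60 degrees
```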
Table of Contents
A line integral is an integral of a function along a curve in space. We usually represent the curve by a parametric equation, e.g. \(\mathbf{r}(t) = [x(t), y(t), z(t)] = x(t)\mathbf{i} + y(t)\mathbf{j} + z(t)\mathbf{k}\). So, in general the curve will be a vector function, and the function we want to integrate will also be a vector function.
Then, we can write the line integral definition as:
\(\int_C \mathbf{F(r)}\cdot d\mathbf{r} = \int_a^b \mathbf{F}(\mathbf{r}(t)) \cdot \mathbf{r'}(t)\, dt\) where \(\mathbf{r'}(t) = \frac{d\mathbf{r}}{dt}\). This integrand is a scalar function, because of the dot product.
The following examples are adapted from Chapter 10 in Advanced Engineering Mathematics by Kreyszig.
The first example is the evaluation of a line integral in the plane. We want to evaluate the integral of \(\mathbf{F(r)}=[-y, -xy]\) on the curve \(\mathbf{r(t)}=[-sin(t), cos(t)]\) from t=0 to t = π/2. The answer in the book is given as 0.4521. Here we evaluate this numerically, using autograd for the relevant derivative. Since the curve has multiple outputs, we have to use the jacobian function to get the derivatives. After that, it is a simple bit of matrix multiplication, and a call to the quad function.
import autograd.numpy as np
from autograd import jacobian
from scipy.integrate import quad

def F(X):
    x, y = X
    return -y, -x * y

def r(t):
    return np.array([-np.sin(t), np.cos(t)])

drdt = jacobian(r)

def integrand(t):
    return F(r(t)) @ drdt(t)

I, e = quad(integrand, 0.0, np.pi / 2)
print(f'The integral is {I:1.4f}.')

The integral is 0.4521.
We get the same result as the analytical solution.
The next example is in three dimensions. Find the line integral along \(\mathbf{r}(t)=[cos(t), sin(t), 3t]\) of the function \(\mathbf{F(r)}=[z, x, y]\) from t=0 to t=2 π. The solution is given as 21.99.
import autograd.numpy as np
from autograd import jacobian
from scipy.integrate import quad

def F(X):
    x, y, z = X
    return [z, x, y]

def C(t):
    return np.array([np.cos(t), np.sin(t), 3 * t])

dCdt = jacobian(C, 0)

def integrand(t):
    return F(C(t)) @ dCdt(t)

I, e = quad(integrand, 0, 2 * np.pi)
print(f'The integral is {I:1.2f}.')

The integral is 21.99.
That is also the same as the analytical solution. The exact value is \(7\pi\), which agrees with our answer to machine precision.
7 * np.pi - I

3.552713678800501e-15
As a final example, we consider an alternate form of the line integral. In this form we do not use a dot product, so the integral results in a vector. This doesn't require anything from autograd, but does require us to be somewhat clever in how to do the integrals since quad can only integrate scalar functions. We need to integrate each component of the integrand independently. Here is one approach where we use lambda functions for each component. You could also manually separate the components.
def F(r):
    x, y, z = r
    return x * y, y * z, z

def r(t):
    return np.array([np.cos(t), np.sin(t), 3 * t])

def integrand(t):
    return F(r(t))

[quad(lambda t: integrand(t)[i], 0, 2 * np.pi)[0] for i in [0, 1, 2]]

[-6.9054847581172525e-18, -18.849555921538755, 59.21762640653615]
The analytical solution in this case was given as:
[0, -6 * np.pi, 6 * np.pi**2]

[0, -18.84955592153876, 59.21762640653615]
which is evidently the same as our numerical solution.
An alternative, though more verbose, approach is this vectorized integrate function. We still make temporary functions for integrating, and the vectorization is essentially like the list comprehension above, but we avoid the lambda functions.
@np.vectorize
def integrate(i):
    def integrand(t):
        return F(r(t))[i]
    I, e = quad(integrand, 0, 2 * np.pi)
    return I

integrate([0, 1, 2])

array([ -6.90548476e-18,  -1.88495559e+01,   5.92176264e+01])

Summary
Once again, autograd provides a convenient way to compute function jacobians which make it easy to evaluate line integrals in Python.
Copyright (C) 2018 by John Kitchin. See the License for information about copying.
Org-mode version = 9.1.14 |
Some of our older headers tell stories about inverse problems and data assimilation.
Engineering the World: Ove Arup and the Philosophy of Total Design (V&A London, Nov 2016). Art and science do not need to live on different planets. Today, much art needs and is based on science, and science includes elements of beauty, art and creativity! The mathematician, physicist or biologist needs creativity close to that of the artist, and wherever the different worlds inspire each other, we find ourselves in a stream of deep progress and change.
Today, there are 14 centers worldwide which run operational global numerical weather prediction. These models have quite different setups and structures, with spectral or finite-element approaches for the simulation of the atmosphere, and with different variational or ensemble data assimilation methods run to determine the current state of the atmosphere through the so-called "data assimilation cycle". We show prediction scores for global Numerical Weather Prediction (NWP) centers competing with each other to obtain the most accurate description of the planetary atmosphere, its uncertainty and its future development.
We show a visualization of a thunderstorm and the task of preparing for disaster. Understanding and preparing for the risk of environmental hazards is a very important task which becomes ever more important with the growing complexity of modern societies. It includes estimating the uncertainty of model forecasts, based on the state estimates and their uncertainty for natural processes, their dynamic states and underlying parameters and distributions.
We show several ground-based remote sensing devices as they are used to monitor the atmosphere for climate monitoring and weather prediction. You see the Swiss radar station in Payerne, a radar wind profiler which measures profiles of atmospheric winds up to 11 km height, and several radiometers measuring electromagnetic radiation which is emitted and diffracted by the different atmospheric layers. All devices are key ingredients of inverse problems and data assimilation in meteorology.
Celebrating James Clerk Maxwell and the Electromagnetic Waves in the Shanghai Museum of Science. Electromagnetic waves are fundamental for our life and our environment. Visible light is known to be a small part of this spectrum, with infrared, microwave, x-rays and many further wavelength bands as key parts of today's everyday life.
The images show the Japanese K computer, recently ranked #1 on the supercomputer list. Data assimilation experiments for global or high-resolution numerical weather prediction with 10,000 ensemble members are being carried out on this machine. Such experiments are very important for further developing operational prediction algorithms for weather and climate, in particular for estimating the uncertainty of such predictions and the risk which society faces from weather-related high-impact phenomena such as storms, hurricanes and floods.
Markov chain Monte Carlo (MCMC) methods generate sequences of points which sample a probability distribution. They can be used to calculate important quantities such as the mean or variance of an unknown distribution, given some prior knowledge and various measurements. The image shows different realizations of an MCMC sequence with a Metropolis-Hastings sampling strategy for the posterior distribution of a two-dimensional inverse problem with a bi-modal distribution of both the prior and posterior probability density.
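As a minimal illustration of the Metropolis-Hastings strategy behind that image (not part of the original caption, and reduced to one dimension for brevity; the target density, proposal width and step count are all made-up example values):

```python
import math
import random

random.seed(42)

def target(x):
    # unnormalized bimodal density: two Gaussian bumps at x = -2 and x = +2
    return (math.exp(-0.5 * ((x + 2) / 0.5)**2)
            + math.exp(-0.5 * ((x - 2) / 0.5)**2))

def metropolis_hastings(n_steps, x0=0.0, proposal_sd=2.5):
    samples = []
    x = x0
    px = target(x)
    for _ in range(n_steps):
        # symmetric random-walk proposal, so the acceptance probability
        # reduces to the ratio of target densities (Metropolis rule)
        y = x + random.gauss(0.0, proposal_sd)
        py = target(y)
        if random.random() < py / px:
            x, px = y, py
        samples.append(x)
    return samples

samples = metropolis_hastings(20000)
# the chain should visit both modes of the bimodal target
```

The histogram of `samples` approximates the target density; for a real posterior one would replace `target` with prior times likelihood.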
The image shows a comparison of shape reconstruction by the Born approximation versus the ortho-sampling method. Both methods are based on measurements of the scattered field of the scatterer at a large distance (its so-called far field pattern). They use far field patterns for many different incident waves and sample the space to reconstruct the scatterer using the displayed indicator functions.
Inverse Problems at IOP - 30 Years Celebration, August 26-28, 2014, Bristol, UK: The journal “Inverse Problems” is celebrating 30 years since the start of the journal's regular publication in 1985. The journal would like to thank all of our authors, referees, board members and supporters across the world for their vital contribution to the work and progress of Inverse Problems. Browse its webpage for information on the journal, special issue and topical review collections and our upcoming 2014 conference. The page will be updated throughout the year with more free content: news, photos and highlights articles. The Inverse Problems Special issue collection is now free to read until the end of September 2014.
The World Weather Open Science Conference 2014 is held in Montreal, Canada, August 16-21. With more than 1000 participants, a good crowd of scientists meets to talk about their research, to develop their interaction and network, and to share progress in the science of weather and climate. Data assimilation, i.e. the use of measurements to calculate the state of the atmosphere and the whole earth system, is one of the key parts of the conference. Many new measurement devices are in use in atmospheric analysis, data assimilation and climate monitoring. More information and the whole programme can be found at http://www.wwosc2014.org.
Global temperature reconstruction from incomplete data. You cannot measure the global temperature at all places at the same time. But it is very important to calculate the temperature distribution on the whole globe - for climate monitoring as well as for weather forecasting. Only when we know the key global variables like pressure, wind, humidity and temperature can we calculate a weather forecast, estimate the risk of high-impact events (like storms, hail, strong rain and floods) and monitor potential threats like nuclear disasters, whose releases are distributed by atmospheric winds.
The world turned around - Exhibition in Marseille, France, Summer 2014. The world inverted. Calculating backwards. Inverting data. Calculating unknown quantities. That is what inverse problems is about. Looking into what is not directly accessible. Reconstructing sources. Reconstructing unknown causes of action. Reconstructing scatterers when waves are scattered. Can you hear the shape of a drum from its particular sound? Different examples of inverse problems - one of the most fascinating research areas today.
Recent progress in the mathematical and numerical analysis of inverse problems was discussed at the mathematical research centre CIRM, Marseille, May 18-23, 2014. With around 90 participants covering important parts of the field of inverse problems, the meeting reflected the state of the art of the mathematical theory of inverse problems, including results on uniqueness, stability and algorithmic efficiency.
International Symposium on Data Assimilation, LMU Munich 24.-28.2.2014 Website at LMU. The Symposium had more than 200 Participants and combined four days of invited talks and discussions on Data Assimilation both on the global and the regional scale in the area of weather and climate with a workshop of the European COST Action ES1206 “GNSS4SWEC” (see http://gnss4swec.knmi.nl/) on GPS/GNSS Data in Weather and Climate and a KENDA Mini-Workshop (Programme PDF) on kilometer-scale ensemble data assimilation.
Medical Imaging investigates processes in the brain by techniques like MRI, EEG, MEG and many more. Dynamical models based on finite element approaches are married with data by inversion and data assimilation. (Images by Ingo Bojak, Reading.)
Magnetic Tomography: we show a magnetic sensor recording the field of the current distribution within a fuel cell. The forward operator is given by the Biot-Savart integral \begin{equation} H(x) = \frac{1}{4\pi} \int_{\Omega} \frac{j(y) \times (x-y)}{|x-y|^3} \; dy, \;\; x \in \mathbb{R}^3 \end{equation} The reconstruction of the currents is shown in the right image. The inversion needs to solve an integral equation of the first kind, where $H(x)$ is measured on some outer surface and $j(y)$ is to be reconstructed.
Satellite remote sensing of the atmosphere is an indispensable tool today to monitor the atmosphere and calculate initial states used for weather predictions. Many different other remote sensing techniques are used, for example radar-based wind profilers and cloud radar. In the third image, we show a weather radar operated by the University of Bonn. Also, networks of lidars are used today to monitor the atmospheric aerosol. We show a ceilometer profile of the atmospheric boundary layer (approx. 0-2 km height), as recorded by the observatory in Hohenpeißenberg, southern Germany.
Reconstructions of scatterers using orthogonality sampling; simulation/reconstruction of a flow field, eigenfunctions of the Laplacian on the sphere.
Image reconstruction from noisy data is an important inverse problem. Here, we also show a picture taken at the Hannover Industrial Fair, where Electrical Impedance Tomography (EIT) on trees has been presented to a wider public. The last image displays a feasibility study of the "No Response Test" applied to Magnetic Tomography.
We show images taken during a special semester on "inverse problems" at the Isaac Newton Institute at Cambridge, UK, in 2011.
There is a growing number of meetings on both inverse problems and data assimilation. The different communities have their own interaction and language. But there is also some convergence, while techniques and tools are used both in mathematical and engineering communities and important application areas.
Various remote sensing data are used to control dynamical systems simulations and forecasts for atmospheric applications. The electromagnetic waves in the infrared and microwave range which are radiated by the atmosphere are measured on satellites. This leads to highly ill-posed inverse problems, which are treated by variational or ensemble methods and assimilated into atmospheric models. Radar measurements (indicated by the circles surrounding radar stations in central Europe) are used to measure precipitation and radial winds. The inverse scattering type measurements are used for atmospheric forecasting. The images on the right show a forecast by the COSMO-DE model over Germany, which is developed by Switzerland, Germany, Italy, Russia, Poland, Romania and Greece.
Data assimilation is about using data in dynamical systems such as weather simulations. The field has grown from applications in meteorology and geophysics. Here, the World Meteorological Organization plays a key role, since it combines many national weather services which work together in sharing data on a global scale and running various programs to support the science as well as the operational work which provides services to all our states and communities.
The display of temperature fields in the atmosphere during deep convection (left). The distributions are simulated using numerical models on supercomputers as on the NEC SX9, shown in image no. 2. Sea surface temperature fields are calculated by inverse techniques over the oceans and used by data assimilation in numerical models (image 3). Scientific meetings are essential in developing these methods, image 4 shows a snapshot from the International Symposium on Data Assimilation in Offenbach in 2012. The distribution of temperature, winds and pressure (and other quantities) is simulated by data assimilation techniques and then forecasted (image 5). To this end various measurements such as radar measurements (last image; on the right) are employed. |
From: William Sit
Subject: [Axiom-developer] [#187 trouble with tuples] functions are objects of type Mapping
Date: Mon, 04 Jul 2005 08:20:03 -0500

Changes
http://page.axiom-developer.org/zope/mathaction/187TroubleWithTuples/diff
--
Bill Page wrote:
> From a mathematical point of view clearly the idea that
> Mapping(T, A, B) denotes the class of mappings from (A, B)
> into T implies that (A,B) denotes some kind of set, i.e.
> the Product(A,B).
> William Sit wrote:
>
> > I don't agree. When one wraps up something into a single object,
> > it is inconvenient to look at the parts without unwrapping.
(Disclaimer: my earlier response above is not directly to your new comment. See
previous post. I agree that the Cartesian product of A, B is implied in the
notion of Mapping(T,A,B). But that is *not* the issue here.)
In most mathematics I know, products, if they exist, are formed from objects of
the same category. If $f:A \times B \rightarrow C$ is a mapping, where $A$, $B$
are from the same category, we may sometimes let $D = A \times B$ and identify
$f$ as $f:D \rightarrow C$ (let me rename this to $g:D \rightarrow C$). However,
there is this subtle distinction in the way we give the definition of $f$ and
$g$. In the first case, we would write $f(a,b) = c$, whereas in the second case,
we would write g(d) = c, with d = (a,b). The two are *not* equivalent as
*mappings*: $f$ is binary and $g$ is unary. To define $c$ to be $a+b$ in both
cases, say, it is straightforward in the first case: $f(a,b)=a+b$. In the second
case, there is necessarily a composition with two projection maps $p:D
\rightarrow A$ and $q:D \rightarrow B$, where $p(d)=a$, $q(d) = b$. The true
definition of $g$ is: $g(d) = p(d)+q(d)$. If the target $C$ is more involved,
say $C$ is $D^2$ and $f$ is meant to be the diagonal map $D \rightarrow D^2$,
then the $g$-form would be preferable: $g(d) = (d,d)$.
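The arity distinction can be illustrated in Python rather than Axiom (this sketch and its names are my own, not from the thread): `f` is binary, while `g` takes the pair as a single object and must unpack it with the two projections, exactly as described above.

```python
# hypothetical Python analogue of the f-versus-g distinction
def f(a, b):
    # binary: f(a, b) = a + b
    return a + b

def g(d):
    # unary: d plays the role of the Record / pair;
    # the unpacking acts as the projection maps p(d) and q(d)
    a, b = d
    return a + b

f(1, 1.1), g((1, 1.1))   # same value, different arities
```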
In short, Axiom imitates closely mathematics and gives us both ways to define
mappings. When the data structure is complicated, a Record is preferred.
In mathematics, we have been trained to be sloppy with details that have been
seen too often already, so as to contemplate at newer and more abstract levels.
We tend to see things as the same via isomorphisms. Now, we are forced to think
about all the details again in using Axiom. That is the main hurdle and a steep
"relearning" curve. Perhaps in 2032 (I'm not going to let this be a sliding
window of 30 years!), computer algebra can be smart enough to incorporate all
theorems (isomorphisms included) from a self-generated data-base and we don't
have to "relearn" what is in our subconsciousness.
> But [Cartesian Product] is implemented as a Record and the
> domain Record is a primative in Axiom. So my proposal above
> amounts to stating, for example, that:
> \begin{axiom}
> f1:Record(a:INT,b:FLOAT)->FLOAT
> f1(arg)==arg.b+arg.a
> f1[1,1.1]
> \end{axiom}
>
> should be viewed as equivalent to
> \begin{axiom}
> f2:(INT,FLOAT)->FLOAT
> f2(a,b)==a+b
> f2(1,1.1)
> \end{axiom}
>
> And in fact after a simple optimization, the compiler should
> be able to produce equivalent internal lisp code.
Axiom is a strongly typed language with object oriented roots. A Record, even if
it is a primary domain, is a single object. You cannot have the compiler
*sometimes* treat it as one object and *sometimes* not. In your example, f1 has
arity one (requiring two projection maps), and f2 has arity two. In general, the
compiler reads in a list of arguments for a function and then parses it. I don't
see much optimization (the parser needs to work one level deeper to parse the
content of the record, and there is *no way* the generated code can be simplified
to remove this level because the arities are different).
In the second form, you *explicitly* tell the compiler that f2 requires two
*objects* as inputs. In the first, only one. In terms of data structure, Axiom
*must* view arg as a Record object (note your use of square brackets). In your
scenario, you seem to suggest that the compiler *automatically* changes the
arity of f1 to the length of the record. If so, I think this will only confuse
users, even if that change is restricted to Records appearing as function
signatures.
There is also another problem with your automatic translation. In a record, the
*order* of the items is not important (conceptually speaking), each field is
tagged by an identifier. In a tuple, the physical order is important and items
are not tagged.
Please note also that Axiom *hides* the data representation of objects from code
external to the object constructors. So sometimes these projections are *not*
available (for 'Product' and domains of 'DirectProductCategory', they should be,
and are).
William
--
forwarded from http://page.axiom-developer.org/zope/mathaction/address@hidden
Answer
$19^{\circ}$
Work Step by Step
Let $\alpha$ be the angle formed by the rope and the water. $\sin\alpha=\frac{4}{12}$ $\alpha=\sin^{-1}\frac{4}{12}$ $\alpha\approx19^{\circ}$
Fundamentally it's mathematics. The energies of the MOs are the eigenvalues of the matrix $$\begin{pmatrix}\alpha_1 & \beta \\ \beta & \alpha_2 \end{pmatrix},$$ where $(\alpha_1,\alpha_2)$ are the energies of the two constituent orbitals and $\beta$ is (loosely speaking) the overlap between them.
For the same size of $\beta$, if $\alpha_1 \approx \alpha_2$, then the eigenvalues can differ quite a lot from $(\alpha_1,\alpha_2)$. On the other hand, if $\alpha_1$ is very different from $\alpha_2$, then the eigenvalues will be closer to $(\alpha_1,\alpha_2)$.
As an illustration consider the matrix
$$\begin{pmatrix}1.1 & 0.5 \\ 0.5 & 1 \end{pmatrix}$$
This has eigenvalues of $1.55$ and $0.55$, which are relatively distant from the "original energies" of $1.1$ and $1$. On the other hand, the matrix
$$\begin{pmatrix}3 & 0.5 \\ 0.5 & 1 \end{pmatrix}$$
has eigenvalues $3.12$ and $0.88$, which are closer to $3$ and $1$.
Taking a more abstract perspective, the eigenvalues of $$\begin{pmatrix}\alpha_1 & \beta \\ \beta & \alpha_2 \end{pmatrix}$$ are
$$\lambda_\pm = \frac{\alpha_1 + \alpha_2}{2} \pm \frac{\sqrt{(\alpha_1 - \alpha_2)^2 + 4\beta^2}}{2}$$
and you can play around with this expression to gain some insight. For example, in the limit where $(\alpha_1 - \alpha_2)^2 \gg 4\beta^2$ (corresponding to a large energy difference between the original interacting orbitals), the eigenvalues reduce to
$$\lambda_\pm \to \frac{\alpha_1 + \alpha_2}{2} \pm \frac{\alpha_1 - \alpha_2}{2}$$
which are simply $\alpha_1$ and $\alpha_2$. |
Version 6.0 [beta] of the UAH [AMSU] Temperature Dataset Released: New LT Trend = +0.11 C/decade [raw data], which is more compatible with the RSS AMSU dataset. In fact, UAH now shows a smaller (by 1/4 or so) warming trend than RSS. The trend has been exactly zero in the last 18 years. In the past, I tended to slightly prefer RSS AMSU – partly because I wanted to avoid suggestions that Spencer et al. aren't impartial just because they're skeptics (I surely do think that they are impartial). I also preferred the quieter way in which RSS was fixing their small bugs.
But this new release convinced me to play with the datasets again – do all kinds of Fourier analysis, Fourier filters, predictions, and so on. And I just found something that I want to share with you because it seems pretty exciting.
First, I used Mathematica to import the monthly global temperature anomalies:
b = Import["http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tlt/uahncdc_lt_6.0beta1.txt", "Table"][[2;;-13,3]];

This gives me those 436 months from December 1978 to March 2015. You may plot them by ListLinePlot[b].
I looked at the Fourier decomposition, Fourier[b], somewhat carefully, and there were some peaks but they didn't quite impress me (although they already contained the reason of the excitement described below). Instead, I decided to look at the autocorrelation of the data:
core = Table[Correlation[b[[1 ;; -1 - k]], b[[1 + k ;; -1]]], {k, 1, 400}];

This takes the list of monthly anomalies b, shifts it by "k" months, and computes the correlation coefficient with the original data. The correlation coefficient is +1 for k=0, of course. I expected a sort of a random curve. But this is what I got: BarChart[core]
Wow, I told myself. It doesn't look chaotic. The ups and downs are almost regular, equally spaced: it looks almost like a sine function combined with a much slower one.
You should understand the graph. The horizontal axis is the delay that goes from zero to 400 months. The vertical axis is the correlation coefficient that oscillates. In fact, there are 9 periods between the delay 0 and the delay 395 months or so (the number of maxima in the interval is 10 but that's because you count both peaks at the boundaries, I am sure you know how to count periodicities).
So the average spacing between the delays is 395/9 = 44 months or so. To a certain extent, one feels confident that the graph above allows us to determine this constant 44 months rather accurately – so that one may conclude it's much more likely to be 44 than 43 or 45 months, for example. Think about it. The point is that the temperature wiggles are much more similar to those 44 or 88 months ago than to those 22 or 66 months ago.
OK, I could see the periodicity 44 months (3 years and 8 months) in the Fourier decomposition, too. There was a peak around these frequencies. I am just not experienced enough to immediately appreciate that the peaks around this frequency are really high. The similarity between the graph of correlation coefficients above and a sine was something I couldn't overlook.
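The lagged-correlation scan is easy to reproduce. Here is a Python sketch run on a synthetic monthly series with a built-in 44-month cycle standing in for the UAH anomalies:

```python
import math

# Pearson correlation of two equal-length sequences.
def correlation(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

# Synthetic stand-in for the 436 monthly anomalies: a pure 44-month sine.
period = 44
b = [math.sin(2 * math.pi * i / period) for i in range(436)]

# Correlate the series with a copy of itself shifted by k months,
# mirroring the Mathematica Table[Correlation[...], {k, 1, 400}] above.
core = [correlation(b[:-k], b[k:]) for k in range(1, 400)]
# core peaks again near k = 44 (one full period) and dips near k = 22.
```

For the real anomalies the peaks are of course much weaker than for this clean sine, but the mechanics of the scan are identical.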
Here is what I did to "see" the value referred to as 44 months above more accurately. Just calculate the average of the correlation coefficients over all the allowed delays that are multiples of "k" months:
coreD = Table[Total[core[[1 ;; -1 ;; i]]]/Length[core[[1 ;; -1 ;; i]]], {i, 1, 60}]

Here is the resulting picture: BarChart[coreD]
If you count the bars, you will see that the two highest peaks correspond to the delays that are multiples of 44 and 45 months, and the 45-month peak is actually a bit higher. So the periodicity seen in the global temperature anomalies is about 44.6 months or something like that.
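The "average over all multiples" step can be sketched in Python. For the input I substitute a synthetic correlation curve cos(2πk/44), which is the exact lagged autocorrelation of a pure 44-month sine, instead of the real `core` data:

```python
import math

# Synthetic stand-in for the lagged correlations: core[k-1] is lag k.
core = [math.cos(2 * math.pi * k / 44) for k in range(1, 400)]

# For each candidate period i, average core over all lags that are
# multiples of i (lags i, 2i, 3i, ...), as described in the text.
def multiple_average(core, max_period=60):
    out = []
    for i in range(1, max_period + 1):
        vals = core[i - 1 :: i]
        out.append(sum(vals) / len(vals))
    return out

coreD = multiple_average(core)
# coreD peaks at i = 44; at i = 22 the alternating signs cancel out.
```

Note a detail: the Mathematica snippet above slices `core[[1 ;; -1 ;; i]]` (lags 1, 1+i, 2i+1, ...), while this sketch averages over exact multiples of i, which is what the surrounding prose describes.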
Good. We live in the era of search engines so I immediately searched for 44-month and 45-month periodicities that could explain it. First, I ran into an essay by Willis Eschenbach who has Fourier-analyzed the temperature data as well and found the 44-month periodicity, too. So as I could have expected, I had discovered the wheel and America (in my mother tongue, we say that "you discovered America" if you did something trivial that every child and even Christopher Columbus could do, too). In that article, he concludes that the climate data don't show the slightest trace of the 11-year solar cycles and I completely agree with him. The people who believe that these 11-year sunspot cycles have to be critical for the terrestrial climate are fooling themselves.
But he probably didn't do any other search for that 44/45-month periodicity in the literature. I did.
First, I found the following 1997 article. Look at it:
In particular, the Ap index – which quantifies some daily geomagnetic activity – shows periodicities of 16, 21, and 44 months. The first two were attributed to the solar wind and IMF oscillations with analogous periodicity while
the 44 month variation is associated with a similar periodicity in recurrent high speed stream caused by sector boundary passage.
Needless to say, my training in geophysics ended somewhere at the basic school – in the college, geophysicists were known as the least smart physicists, on par with the meteorologists. But search for the "sector boundary passage" or, if you know these things, please comment on the meaning.
The same Rangarajan and Araki found the same 44-month cycle in a slightly different quantity, the equatorial Dst index.
The idea that the geomagnetic activity could drive the climate change is intriguing, isn't it? But it gets even more puzzling if you read e.g. the 1933 paper by Abbot and Bond. On page 364, you may read:
We are able to reproduce it as the sum of seven regular periodicities of 7, 8, 11, 21, 25, 45 and 68 months.
Yes, 45 months is there, in a rather limited list. On page 370, they compare the 25- and 45-month periodicity with some climate data, see also Figure 7 on page 369. I don't quite see the "solid science" by which they obtained those 7 golden frequencies but I will later spend some time in attempts to fit the climate into a combination of these seven sines.
On page 366, they point out that 45 months is one-third of the sunspot period of 135 months, and offer some other numerology for the other frequencies. If the Sun itself were the driver, why would one-third of its sunspot cycle (the third harmonic) matter much more than the full cycle? A triangle would probably have to be hidden in the Sun.
Let me stop these solar debates and return to the Earth.
Can you imagine that the geomagnetic activity drives the climate? If it has similarly fast cycles as some solar cycles, is it possible that the geomagnetic activity has been synchronized – the frequency was adjusted to agree – with the solar cycles? (I suppose that the Sun doesn't give a damn about the Earth's magnetic fields.) If they're independent, can they cooperate? Can some important effects depend on an interplay between the solar and geomagnetic activity?
Except for the hypothetical shielding of the cosmic rays, I can't see how the geomagnetic field would drive the climate, either by itself or in some combination with the solar cycles. Or perhaps the wind is (somewhere) charged and moves according to the magnetic fields and it matters for the climate? But because links between the climate and many weird things have been proposed, it seems surprising to me that the possible geomagnetic influences on the climate are not discussed at all.
I am going to look at the geomagnetic processes, compare the periods of the changing Earth's magnet with some historical climate data, and so on. It's clear that if this periodic signal in the climate data were due to geomagnetic effects, there could be many more geomagnetic effects with different cycles that impact the climate, too.
P.S.: Strength of the 44.9-month cycle
You may want to know how big changes of the temperature the cycle is generating. I found the strongest (Pythagorean hypotenuse) effect for the periodicity 44.9 months. This is the code:
perio = 44.9;
averagetemp = Total[b]/436;
sines = Table[Sin[2*Pi*i/perio], {i, 1, 436}];
cosines = Table[Cos[2*Pi*i/perio], {i, 1, 436}];
fit = Normal[LinearModelFit[{Transpose[{sines, cosines}], b - averagetemp}]]

If \(i\) represents the month between \(i=1\) for December 1978 and \(i=436\) for March 2015 (and not the imaginary unit), the temperature anomaly (centered to zero) may be fitted as
\[
\eq{
\frac{T_i}{{}^\circ{\rm C}} &= 0.1068\,s - 0.0575\,c\\
s&=\sin \frac{2\pi i}{44.9}\\
c &= \cos \frac{2\pi i}{44.9}
}
\]
If you combine the sine and cosine into a shifted sine, the amplitude is the (Pythagorean) 0.1215 °C or so. This is rather nontrivial. Every 3.7 years, the temperature goes up 0.12 °C from the baseline and down by –0.12 °C in the middle of the cycle. The latest maximum (warm peak) was the 18th month from the end which, if I can count, was 5 months before March 2014, i.e. October 2013. In August 2015 or so, there will be a minimum of this periodic function.
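The sine-plus-cosine combination can be checked numerically. A small Python sketch, using the two fit coefficients quoted above:

```python
import math

# Combining a fitted sine and cosine into one shifted sine:
# A*sin(x) + B*cos(x) = R*sin(x + phi) with R = hypot(A, B).
A, B = 0.1068, -0.0575     # coefficients quoted from the fit above
R = math.hypot(A, B)       # amplitude, about 0.121 degrees C
phi = math.atan2(B, A)     # phase shift in radians (negative here)
```

The Pythagorean amplitude comes out near 0.121 °C, matching the value in the text to the quoted precision.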
So most of the year 2015 seems to be in the coolest phases of this cycle. This negative contribution may reduce the warming effect of the El Niño that was recently reborn (and that is predicted to become very strong) on the global mean temperature.
The (warm) maxima of the sine-combined-with-cosine (i.e. shifted sine) occurred in February 1980, November 1983, August 1987, April 1991, January 1995, October 1998, July 2002, April 2006, January 2010, October 2013. These cycles could have helped 1998 and 2010 to be among the warmest years and 2008 to be a cool one, and they may prevent 2015 from being the warmest satellite year despite the El Niño.
BTW I also calculated the Fourier transform of the GISS data since 1880 and the local peak periodicity was about 43 months even though it looks much less exceptional to me. I couldn't see any waves in the Correlation in the GISS data at all which strengthens the possibility that the 44-month periodic signal is a satellite artifact. Or maybe there's too much noise (inaccuracy of older measurements) in GISS.
Someone else mentions the 45-month period in the data and suggests it is ENSO-related.
Fitting Data
A common and powerful way to compare data to a theory is to search for a theoretical curve that matches the data as closely as possible. You may suspect, for example, that friction causes a uniform deceleration of a spinning disk, so you have gathered data for the angular velocity of the disk as a function of time. If your hypothesis is correct, then these data should lie approximately on a straight line when angular velocity is plotted as a function of time. They won't be exactly on the line because your experimental observations are inevitably uncertain to some degree. They might look like the data shown in the figure at right.
Our task is to find the best line that goes through these data. When we have found it, we would like answers to the following questions:
What is the best estimate of the deceleration caused by friction? That is, what is the slope of the line?
What is the uncertainty in the value of the deceleration?
What is the likelihood that these data are in fact consistent with our hypothesis? That is, how probable is it that the disk is uniformly decelerated?

What do you mean, “best line”?
Associated with each data point is an error bar, which is the graphical representation of the uncertainty of the measured value. We assume that the errors are normally distributed, which means that they are described by the bell-shaped curve or Gaussian shown in the discussion of standard deviation. The height between the data point and the top or bottom of the error bar is \( \sigma \), so about 2/3 of the time, the line or curve should pass within one error bar of the data point.
Sometimes the uncertainty of each data point is the same, but it is just as likely (if not more likely!) that the uncertainty varies from datum to datum. In that case the line should pay more attention to the points that have smaller uncertainty. That is, it should try to get close to those “more certain” points. When it can't, we should grow worried that the data and the line (or curve) fundamentally don't agree.
A pretty good way to fit straight lines to plotted data is to fiddle with a ruler, doing your best to get the line to pass close to as many data points as possible, taking care to count more heavily the points with smaller uncertainty. This method is quick and intuitive, and is worth practicing. Here’s my attempt to fit a line by eye.
Least-Squares Fitting
For more careful work, we need a way to evaluate how successfully a given line (or curve) agrees with the data. Each data point sets its own standard of agreement: its uncertainty. We can quantify the disagreement between a point and the line by measuring the (vertical) distance between the point and the line, in units of the error bar for each point. The data point at \( t = 10\text{ s} \), for example, is about 1 error bar unit away from the line. It turns out that a very useful way of adding up all the discrepancies, \[ \frac{y_i - f(x_i)}{\delta y_i} \] between the line and the data is to square them first. That way, all the terms in the sum are positive (after all, a point can't be correct with 200% probability!).
We define the function \( \chi^2 \) to be this sum of squares of discrepancies, each measured in units of error bars. Symbolically, \[ \chi^2 \equiv \sum_{i=1}^N \left(\frac{y_i - f(x_i)}{\delta y_i}\right)^2 \] where the sum is over the \( N \) data points and \( f(x) \) is the equation of the line (or curve) we think models the data. Since it is the sum of squares, \( \chi^2 \) cannot be negative. We would like \( \chi^2 \) to be as small as possible. As we try different lines, we can calculate \( \chi^2 \) for each one. The “best line” is the one with the smallest value of \( \chi^2 \). That is, the best line is the one which has the “least squares.”
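As a sketch of the definition above, \( \chi^2 \) (and its reduced form, introduced below) can be computed directly. The data here are invented for illustration: points on the line y = 2x + 1, except one point that is off by exactly one error bar.

```python
# Sum of squared discrepancies, each measured in units of its error bar,
# for data (xs, ys) with uncertainties dys against a model f(x).
def chi_squared(xs, ys, dys, f):
    return sum(((y - f(x)) / dy) ** 2 for x, y, dy in zip(xs, ys, dys))

xs  = [0.0, 1.0, 2.0, 3.0]
ys  = [1.0, 3.0, 5.5, 7.0]       # 5.5 deviates from 5.0 by one error bar
dys = [0.5, 0.5, 0.5, 0.5]

chi2 = chi_squared(xs, ys, dys, lambda x: 2 * x + 1)
n_params = 2                     # slope and intercept
reduced_chi2 = chi2 / (len(xs) - n_params)
```

Only the third point contributes, with a 1-sigma discrepancy, so the sum is 1. A fitting program minimizes exactly this quantity over the model parameters.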
Igor Pro can perform the operation of finding the line or curve that minimizes \( \chi^2 \). The result of performing this least-squares fit is shown in the red curve in the figure.
Evidently, my fit-by-eye method was pretty good for the slope, but was off a bit in the offset. According to this fit, the acceleration is \( -3.10 \pm 0.08 \text{ bar/s/s} \), which you can read off the fit results table. This is pretty neat! The plotting and analysis program found the best-fit line for me, and even estimated the confidence of the slope. What could be better?
Well, what about some assessment of the likelihood that these data are really trying to follow a straight line? We may have found the best line, in the sense of the one that minimizes the squared deviations of the data points, but it may well be that the data follow a different curve and so no line properly describes the data.

The Meaning of \( \chi^2 \)
The value of \( \chi^2 \) tells us a great deal about whether we should trust this whole fitting operation. If our assumptions about normal errors and the straight line are correct, then the typical deviation between a data point and the line should be a little less than \(1 \sigma \). This means that the value of \( \chi^2 \) should be about equal to the number of data points.
Actually, we have to reduce the number of data points \( N \) by the number of fit parameters \( m \) because each fit parameter allows us to match one more data point exactly. In the pictured data set, there are 16 data points and 2 fit parameters. We can compute the reduced value of \( \chi^2 \), denoted \( \tilde{\chi}^2 \), by dividing \( \chi^2 \) by \( N - m \). Hence, we find here that \( \tilde{\chi}^2 = 2.1 \). This value strongly suggests that the data and the line do not agree!
How can this be? They look so good together! A good way to look more closely is to prepare a plot of residuals. Residuals are the differences between each data point and the line or curve at the corresponding value of x. Such a plot is shown at the right.
For a reasonable fit, about two-thirds of the points should be within one error bar from the black line at zero. In this fit we can see that several points are considerably more than one standard deviation from the line at zero. The first point is decidedly above the line, and the last point is clearly above the line, too. Almost all the other points are below the line, and a few of them are considerably below, again measured in units of their error bars. Maybe we need a curve that opens up a bit, instead of a line.
On more solid theoretical grounds, if the braking torque (twisting force) is proportional to the rotational speed, then we would expect a speed that decreases exponentially with time. Let’s try an exponential curve of the form \[ \omega = \omega_0 \exp(-t/\tau) \] where \( \omega \) is the angular velocity and \( \tau \) is the characteristic time of the deceleration. The result of performing such a fit is shown below.
Does it look a bit better to the eye? Maybe. But it certainly looks better statistically. The value of \( \chi^2 = 16.3 \), which means \( \tilde{\chi}^2 = 1.16 \). It is a little higher than expected, but not alarmingly so. According to the table in Appendix D of An Introduction to Error Analysis, Second Edition, by John R. Taylor, the probability of getting a value of \( \tilde{\chi}^2 \) that is larger than 1.16 on repeating this experiment is about 31%. That is, slightly more than 2/3 of the time we should expect a value of \( \tilde{\chi}^2 \) that is smaller than this value. Not perfect, but quite reasonable.
By contrast, the same table gives the probability that the straight line fit shown above is correct is only about 1%. It's hard to see by eye that the exponential fit is so much better than the linear fit.
A residual plot also shows a more even distribution of errors. Now about half the points are above the zero line, half below. The end points are still above the line, but not markedly so. The residual plot helps build confidence in our exponential analysis.
Fit results
Now that we have a fit with a reasonable value for \( \chi^2 \), we can be more confident of the values determined by the fit. These values, and their uncertainties, are shown in the red table of the figure. (I hasten to add that such a means of presenting this information is informal; it is great for lab notebooks and notes, but in a formal presentation of data, such as in a technical report or journal article, such information is removed from the figure and the most important parts are placed in a caption below the figure.) In particular, the deceleration time constant is \( \tau = (24.3 \pm 0.7) \text{ s} \) and the initial angular velocity is \( \omega_0 = (100.2 \pm 0.6)\text{ bar/s} \).

Conclusions
Based on the better behavior of the exponential fit we can conclude that:

The data are inconsistent with a model of uniform deceleration, but are probably consistent with a frictional torque that is proportional to the angular velocity.
The time constant for the exponential decay is \( (24.3 \pm 0.7)\text{ s} \).
The initial angular velocity is \( (100.2 \pm 0.6)\text{ bar/s} \).

Pitfalls to avoid

It might seem that the best value of \( \chi^2 \) would be zero. After all, that means that your curve passes exactly through each and every data point. What could be better than that?
Well, each data point is supposed to have some uncertainty, estimated as \( \delta y_i \). It is fantastically improbable that the discrepancy between each point and the curve should vanish. When \( \chi^2 = 0 \), it means that you dry-labbed the experiment. Don’t even think of trying it!
What would it mean if \( \tilde{\chi}^2 \ll 1 \)? See if you can figure it out before clicking here.
What would it mean if \( \tilde{\chi}^2 \gg 1 \)? Think of at least two possible explanations before clicking here.

Subtleties
Thus far we have assumed that the errors in the dependent variable (along the y axis) are normally distributed and random, but that the value of the independent variable is perfect. Quite commonly, the uncertainty in the x value is significant and contributes to the overall uncertainty of the data point. Is there a way to account for this additional uncertainty?
Conceptually it is not too much more difficult to account for uncertainties in both the x and y values. If the x uncertainties dominate, the simplest approach is simply to reverse the roles of the dependent and independent variables. This requires you to invert the functional relationship between x and y, however.

If inverting the function is impossible, or if both x and y uncertainties are significant, you will need to map the x error into an equivalent y error. As shown in the figure, the significance of an x uncertainty depends on the slope of the curve. At point A, where the curve is steep, the x uncertainty is sufficient to make the point agree with the curve. At point B, where it is shallow, the same size x error does not produce agreement.
As shown in the inset with the blue triangle, to map the error in x into an equivalent error in y, you can use the straight-line approximation of the derivative of the fit function at the x value of the data point to compute an effective y error according to \[ \delta y_{\rm eff} = \left|\frac{\partial y}{\partial x}\right| \delta x \]
However, there is a problem. You don't know the right curve to use to compute the derivative! Sometimes this is a real problem, but frequently you have a pretty good idea based on the data in the neighborhood what the slope of the right curve must be. If that is the case, multiply \( \delta x_i \) by the slope to produce an effective \( y \) uncertainty, \( \delta y_{i\text{ eff}} \).
If the y uncertainty in the measurement is also appreciable, you can combine \( \delta y \) and \( \delta y_{\rm eff} \) in quadrature to produce an honest estimate of the actual uncertainty of the data point.
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$, then you have to check for $r=0$ which returns $b$ if we are done, otherwise inputs $r,q$ into the division box..
There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactor expansion along a row), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the row?
Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Stakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator |
The earliest forms of the extended Euclidean algorithm are ancient, dating back to 5th-6th century A.D. work of Aryabhata - who described the Kuttaka ("pulverizer") algorithm for the more general problem of solving linear Diophantine equations $ ax + by = c$. It was independently rediscovered numerous times since, e.g. by Bachet in 1621, and Fermat and Wallis, and by Euler circa 1731.
Weil discusses this briefly in his book Number Theory: An Approach through History from Hammurapi to Legendre, excerpted below (from pp. 6-7)Euler's rediscovery is mentioned on pp. 176-77:
It deserves to be more widely known that the algorithm is simpler to execute (and remember) if you are already familiar with row operations such as those in Gaussian elimination / triangularization in linear algebra. See this MSE answer for a presentation from that viewpoint. This method eliminates the notoriously error-prone back-substitution in the more common presentation of the algorithm. Below is a worked example done this way, computing $\, \gcd(141,19),\, $ shown firstly in full equational form, and secondly in more concise tabular form.$$\rm\begin{eqnarray}(1)\quad \color{#C00}{141}\!\ &=&\,\ \ \ 1&\cdot& 141\, +\ 0&\cdot& 19 \\(2)\quad\ \color{#C00}{19}\ &=&\,\ \ \ 0&\cdot& 141\, +\ 1&\cdot& 19 \\\color{#940}{(1)-7\,(2)}\, \rightarrow\, (3)\quad\ \ \ \color{#C00}{ 8}\ &=&\,\ \ \ 1&\cdot& 141\, -\ 7&\cdot& 19 \\\color{#940}{(2)-2\,(3)}\,\rightarrow\,(4)\quad\ \ \ \color{#C00}{3}\ &=&\, {-}2&\cdot& 141\, + 15&\cdot& 19 \\\color{#940}{(3)-3\,(4)}\,\rightarrow\,(5)\quad \color{#C00}{{-}1}\ &=&\,\ \ \ 7&\cdot& 141\, -\color{}{ 52}&\cdot& \color{}{19} \end{eqnarray}\qquad\qquad\qquad$$ $$\rm\begin{eqnarray} &&(1)\quad \color{#C00}{141} &\ \ \ 1 &\quad\ \ 0 \\ &&(2)\quad\ \color{#C00}{19} &\ \ \ 0 &\quad\ \ 1 \\\color{#940}{(1)-7\,(2)}\,\rightarrow\,&&(3)\quad\ \ \ \color{#C00}{ 8} &\ \ \ 1 &\ -7\\\color{#940}{(2)-2\,(3)}\,\rightarrow\,&&(4)\quad\ \ \ \color{#C00}{3} & -2 &\ \ \ \, 15 \\\color{#940}{(3)-3\,(4)}\,\rightarrow\,&&(5)\quad \color{#C00}{{-}1} &\ \ \ 7 & \, \color{}{{-}52} \end{eqnarray}\qquad\qquad\qquad\qquad\qquad\qquad$$
One can optimize even further (e.g. see the fractional form below). It would be interesting to know who first presented the algorithm from this viewpoint. Likely it is at least a few centuries old.
It would take immense effort to write a good history of the extended Euclidean algorithm and related ideas, since it occurs in many different guises throughout mathematics, e.g. search on the following keywords: Hermite / Smith normal form, invariant factors, lattice basis reduction, continued fractions, Farey fractions / mediants, Stern-Brocot tree / diatomic sequence.
In fact even in recent times there are useful twists on the algorithm that are discovered, e.g. we can present the algorithm efficiently in fractional form using modular arithmetic, e.g. below I show how we can compute $\,1/117\equiv\color{#c00}{-72} \pmod{337}\ $ this way.
${\rm mod}\ 337\!:\,\ \dfrac{0}{337} \overset{\large\frown}\equiv \dfrac{1}{117} \overset{\large\frown}\equiv \dfrac{-3}{\color{#0a0}{-14}} \overset{\large\frown}\equiv \dfrac{-23}5 \overset{\large\frown}\equiv\color{#c00}{\dfrac{-72} {1}}.\,$ Equivalently, without fractions
$\qquad\quad\ \begin{array}{rrl} [1]\!:\!\!\!& 337\,x\!\!\!&\equiv\ 0\\[2]\!:\!\!\!& 117\,x\!\!\!&\equiv\ 1\\[1]-3[2]=:[3]\!:\!\!\!& \color{#0a0}{{-}14}\,x\!\!\!&\equiv -3\\[2]+8[3]=:[4]\!:\!\!\!& 5\,x\!\!\! &\equiv -23\\[3]+3[4]=:[5]\!:\!\!\!& \color{#c00}1\, x\!\!\! &\equiv \color{#c00}{-72}\end{array}$ |
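The row-operation form above translates directly into code. A Python sketch where each row (r, x, y) maintains the invariant r = x·a + y·b, and each step replaces row0 by row0 − q·row1 exactly as in the table:

```python
# Extended Euclidean algorithm in tabular/row-operation form:
# every row (r, x, y) satisfies r = x*a + y*b throughout.
def ext_gcd(a, b):
    r0, x0, y0 = a, 1, 0
    r1, x1, y1 = b, 0, 1
    while r1 != 0:
        q = r0 // r1
        # row operation: (row0, row1) <- (row1, row0 - q*row1)
        r0, x0, y0, r1, x1, y1 = (r1, x1, y1,
                                  r0 - q * r1, x0 - q * x1, y0 - q * y1)
    return r0, x0, y0

g, x, y = ext_gcd(141, 19)         # 1 = (-7)*141 + 52*19
inv = ext_gcd(117, 337)[1] % 337   # 1/117 mod 337, i.e. -72 mod 337 = 265
```

With standard (nonnegative) remainders this yields 1 = (−7)·141 + 52·19, the same identity as the table's final row −1 = 7·141 − 52·19 up to sign (the table allowed a least-magnitude remainder in the last step). The modular-inverse computation reproduces the 1/117 ≡ −72 (mod 337) example above.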
Let $\Omega$ be a nonempty open subset of $\mathbb{R}^n$, and let $\cup_{n=1}^\infty K_n = \Omega$ be an exhaustion of $\Omega$ by compact sets. Let $\mathcal{D}(\Omega) = \mathcal{D}$ be the standard $\mathbb{C}$-vector space of smooth functions $\phi: \Omega \to \mathbb{C}$ with compact support in $\Omega$. Equip $\mathcal{D}$ with the inductive limit topology endowed by the Frechet spaces $C_K^\infty(\Omega) = \mathcal{D}_{K_n} = \{\phi \in \mathcal{D} :\text{supp } \phi \subseteq K_n \}$ (this is the usual locally convex topology on $\mathcal{D}$).
Suppose that $\mu : \mathcal{D} \to \mathcal{D}$ is a continuous linear map, and let $K \subset \Omega$ be compact. I would like to show that there exists an index $N$ such that $\text{supp }\mu \phi \subseteq K_N$ for all $\phi$ such that $\text{supp } \phi \subset K$.
This fact is stated at the end of Chapter 2 (in Definition 2.8.1) in Friedlander's book on distribution theory (in fact, Friedlander takes it as part of the definition of a continuous linear map $\mu : \mathcal{D}(\Omega) \to \mathcal{D}(\Omega)$). However, I am having trouble establishing why it is true when we begin from the usual topological definition of continuity (inverse image of open set is open).
From Rudin's functional analysis book, I know that continuous linear maps $\mathcal{D}(\Omega) \to \mathcal{D}(\Omega)$ are bounded (here, I am using bounded in the sense of topological vector spaces). I think this is the fact that I need to use. At the moment, I am trying to make a proof by contradiction work. Suppose that $\mu (\mathcal{D}_K)$ is not contained in any $\mathcal{D}_{K_N}$. Then we can find functions $\phi_m \in \mathcal{D}_K$ and points $x_m$ without limit point in $\Omega$ so that $|\mu \phi_m(x_m)| \neq 0$. But I'm not sure where to go from here.
Hints or solutions are greatly appreciated. |
Author: Hariwan Zikri Ibrahim. Source: [J]. Journal of Advanced Studies in Topology, 2015, Vol.6 (2), pp.61-68 (Modern Science Publishers). Abstract: The purpose of this paper is to introduce the new concepts namely, α-g-closed, pre-g-closed, semi-g-closed, b-g-closed, β-g-closed, α-g-open, pre-g-open, semi-g-open, b-g-open and β-g-open sets in ditopological texture spaces. The relationships between these classes of sets ...
Authors: A. D. Nezhad, S. Shahriyari. Source: [J]. Journal of Advanced Studies in Topology, 2015, Vol.6 (2), pp.43-55 (Modern Science Publishers). Abstract: In this paper we follow the work of J. A. Alvarez Lopez and X. M. Masa (Problem 31.10) given in [1]. The purpose of this work is to study the new concepts of pseudomonoids. We also obtain some interesting results.
Authors: V. Inthumathi, M. Maheswari, A. Anis Fathima. Source: [J]. Journal of Advanced Studies in Topology, 2015, Vol.6 (2), pp.56-60 (Modern Science Publishers). Abstract: In this paper, the notions of pairwise \(\delta_{\mathcal{I}}\)-semi-continuous functions and pairwise \(\delta_{\mathcal{I}}\)-semi-irresolute functions are introduced and investigated in ideal bitopological spaces.
Authors: N. Selvanayaki, Gnanmbal Ilango. Source: [J]. Journal of Advanced Studies in Topology, 2015, Vol.6 (2), pp.38-42 (Modern Science Publishers). Abstract: In this paper, we introduce the notions of \(\alpha grw\)-separated sets and \(\alpha grw\)-connectedness in topological spaces and study some of their properties.
Authors: V. A. Khan, Mohd Shafiq, Rami Kamel Ahmad Rababah. Source: [J]. Journal of Advanced Studies in Topology, 2015, Vol.6 (1), pp.28-37 (Modern Science Publishers). Abstract: In this article we introduce and study \(I\)-convergent sequence spaces \(\mathcal{S}^{I}(M)\), \(\mathcal{S} ^{I}_{0}(M)\) and \(\mathcal{S}^{I}_{\infty}(M)\) with the help of a compact operator \(T\) on the real space \(\mathbb{R}\) and an Orlicz function \(M\). We study some top...
Authors: H. M. Abu-Donia, Mona Bakri. Source: [J]. Journal of Advanced Studies in Topology, 2015, Vol.6 (1), pp.9-19 (Modern Science Publishers). Abstract: In this paper we introduce two new classes of sets in bitopological spaces: the first type is weaker than \(ij\)-\(\Omega\)-closed sets, namely \(ij\)-\(\Omega^{^{*}}\)-closed sets, and the second type, called \(ij\)-\(\Omega^{^{**}}\)-closed sets, lies between the class of ...
Author: Hariwan Zikri Ibrahim. Source: [J]. Journal of Advanced Studies in Topology, 2015, Vol.6 (1), pp.20-27 (Modern Science Publishers). Abstract: The purpose of this present paper is to study some new classes of sets by using the open sets and functions in topological spaces. To this aim, the notions of \(\delta^{*}\)-open, \(\delta\)-\(\delta^{*}\)-\(\alpha\)-open, \(\delta\)-\(\delta^{*}\)-preopen, \(\delta\)-\(\delta^{...
Author: Hariwan Z. Ibrahim. Source: [J]. Journal of Advanced Studies in Topology, 2014, Vol.6 (1), pp.1-8 (Modern Science Publishers). Abstract: In this paper, the author introduces and studies new notions of continuity, compactness and stability in ditopological texture spaces based on the notions of \(\alpha\)-\(g\)-open and \(\alpha\)-\(g\)-closed sets, and some of their characterizations are obtained.
I understand the principle of less airflow, less control, but why is that the case?
Because moments of inertia don't change with speed
Control effectiveness means that the controls effect a change in the balance of moments which results in the desired attitude change. The smaller the control deflection for the same change in attitude, the higher their effectiveness. If $\ddot{\Theta}$ is the pitch acceleration, $∆F_H$ the force change on the horizontal tail due to a control deflection, $x$ the lever arm of that control around the center of gravity and $I_y$ the moment of inertia around the lateral axis, the formula for $\ddot{\Theta}$ is: $$\ddot{\Theta} = \frac{∆F_H\cdot x}{I_y}$$
Both $x$ and $I_y$ are fixed, so only $∆F_H$ has the potential to increase pitch acceleration. $∆F_H$ is proportional to:

- Deflection angle $\eta_H$
- Tail size $S_H$ (again fixed)
- Dynamic pressure $q = \frac{v^2\cdot \rho}{2}$
A given object will change its attitude more quickly when more force can be created. Therefore, more speed $v$ means more force change and a higher angular acceleration for the same deflection.
When deflected, the control surfaces (ailerons, elevator, rudder) cause an aerodynamic moment about the Aerodynamic Centre. A moment has a moment arm and needs to have a length reference - the aerodynamic moments are defined with reference to wing dimensions: wing span for rolling and yawing moments, and Mean Aerodynamic Chord for pitching moments. If we have a look at the pitching moment P:
$$ P = C_{r_{\delta e}} \cdot \delta_e \cdot q \cdot S \cdot MAC$$
With:
- $C_{r_{\delta e}}$ = elevator coefficient (dimensionless)
- $\delta_e$ = elevator deflection
- $q$ = dynamic pressure = $\frac {1}{2} \cdot \rho \cdot V^2$
- $S$ = wing area
- MAC = Mean Aerodynamic Chord
$C_{r_{\delta e}}$, $S$ and MAC are constants. So: the pitching moment of the aircraft is proportional to the elevator deflection, and to the square of the airspeed. Fly twice as fast, and the pitching moment from a certain elevator deflection will be four times as high.
Basically what keeps your plane suspended above the ground despite gravity pulling it to the surface is the fact that your aircraft constantly pushes (and pulls) air molecules downwards; one of Newton's Laws says that this generates an equal and opposing (i.e. upward) force on your aircraft.
In straight and level flight this force is due to the positive angle of attack that the wings make with the relative wind (NOT THE FLIGHT PATH) which essentially forces air molecules downwards: molecules below the wing are deflected downwards along the bottom of the wing while molecules above the wing are pulled downwards along the top surface of the wing as it moves through them. When you go slower you deflect fewer air molecules downwards per unit time, which demands a higher angle of attack in order to keep you suspended; this generally translates to more elevator deflection needed on the pilot's part, or in other words: your controls are less effective.
Control authority comes from the size of the moments you can generate, which result from forces acting on the plane (at the elevator, the ailerons or the rudder), which come from pressure differences, which have a squared relation to velocity. If the airflow speed halves, your control authority gets cut to a quarter. If the airflow speed doubles, you get 4 times the control authority, etc.
Here's further explanation if anything isn't quite clear.
For control authority, you need to be able to apply your desired moment to the aircraft. Moments are forces acting at some distance from your rotation center. In an aircraft, say you want to roll the aircraft. The ailerons deflecting create a pressure difference between the right and left wings. This ends up as different forces acting basically at the ailerons, creating that roll moment. That's just the basics of roll. Now, for the airflow part.
First, I mentioned that for roll, it's those pressure differences caused by airflow over the wing and the aileron. The forces (the ones we're concerned with here) are created by the pressures on a surface. Remember, pressures are forces over areas. Now, let's look at the pressures. The equation for dynamic pressure is $\frac{\rho V^2}{2}$, that's the density times the velocity squared, over 2. We will assume our density isn't changing here, so in order to change the pressure, we change the velocity of the flow. BUT, it's squared. Without airflow, it's obvious that no roll moment is created because the velocity is zero. A plane on the ground with no airflow over the wing doesn't try to roll.
In general, for roll, pitch and yaw authority(that's all of them), you can consider the feeling when you put your hand out of the window in a moving car. If you deflect air downward, your hand gets pushed up. In reality, it's the difference in pressure between the top and bottom, due to flow speeds. The faster you go, the more airflow, the greater the pressure differences you can generate, because of the squared relation. The slower you go, any flow speed differences might become negligible, meaning no pressure difference, thus no force acting.
With some numbers, let's say that at a high speed, the elevator gets deflected. Let's say that the flow over the top is going 100 (arbitrary velocity units), and the flow under is going 110. The pressure on top will be $\frac{\rho}{2}*100^2 = \frac{\rho}{2}*10000$; let's ignore the $\frac{\rho}{2}$ term, and just be aware that it linearly converts our number into a pressure. So we have 10000 pressure somethings on top, and we have 12100 pressure somethings on bottom (using the same formula). That means we have a net of 2100 pressure somethings pushing up on the tail now. Great, the tail has enough control authority to push the nose down as commanded.
Now, let's slow the speeds down by a factor of ten. The top air is going 10, and the bottom now goes 11. Let's see the pressure change compared to before. The pressure on top will be 100 pressure somethings, and on bottom it will be 121. The resulting net pressure acting on the tail is then 21 pressure units, 100 times less than before, even though the speeds only changed by a factor of ten. Now you have 100 times less force acting on the tail (resulting in equivalently less moment), and you might not be able to control the pitch as much as you want to.
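The worked example above can be put in code. This is a rough sketch with an assumed sea-level air density; keeping the $\frac{\rho}{2}$ factor turns the "pressure somethings" into pascals.

```python
RHO = 1.225  # assumed sea-level air density, kg/m^3

def dynamic_pressure(v):
    """q = rho * v^2 / 2, in Pa for v in m/s."""
    return 0.5 * RHO * v ** 2

# Net pressure difference across the tail for the two cases above
fast = dynamic_pressure(110) - dynamic_pressure(100)
slow = dynamic_pressure(11) - dynamic_pressure(10)
ratio = fast / slow  # ~100: a tenth of the speed leaves a hundredth of the force
```

The factor of 100 falls straight out of the squared velocity in the dynamic-pressure formula.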
Control surfaces are used to change the effective camber of the airfoil they are controlling. For example, a downward deflected aileron would increase the effective camber of a wing along the aileron's span. An increase in camber will increase the lift generated at a certain airspeed over that area of the wing, causing the desired rolling moment. It is PARTIALLY due to this change in developed lift that generates adverse yaw, requiring rudder to coordinate turns.
At higher airspeeds, the wing is producing more total lift, and therefore more responsive to changes in camber.
Additionally, control surfaces also respond according to Newton's 3rd law - the ailerons deflect the passing airflow in a direction other than parallel to the wing skin, resulting in a reactive force causing roll. As with the camber change, this phenomenon becomes more pronounced at increased airspeeds, and conversely less pronounced with a reduction in airflow.
A simplified explanation can be found at FAA Pilot's Handbook
This can be explained by Newton's second law, $F = m\times a$, and third law: every force has an equal force in the opposite direction.
$m$ here is the mass of the airflow, $a$ is the acceleration given to the airflow (seen as the changed direction of the airflow). A force equal to $m\times a$ is exerted on the control surface. More airflow, more mass, more force.
The very same reason why an airplane stays in the air in the first place. |
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$; then you check for $r=0$, which returns $b$ if we are done, and otherwise feeds $b,r$ back into the division box.
There was a guy at my university who was convinced he had proven the Collatz Conjecture even though several lecturers had told him otherwise, and he sent his paper (written in Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact group $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n a_{1j}\operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the row?
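Not a proof, but a quick sanity check of that independence (my own sketch): a recursive cofactor expansion that lets you pick the row, so you can verify on examples that different rows give the same value.

```python
def det(A, row=0):
    # Laplace (cofactor) expansion of det(A) along a chosen row.
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor: delete the chosen row and column j
        minor = [r[:j] + r[j + 1:] for k, r in enumerate(A) if k != row]
        total += (-1) ** (row + j) * A[row][j] * det(minor)
    return total

A = [[2, 0, 1], [1, 3, 2], [0, 1, 1]]
assert det(A, row=0) == det(A, row=1) == det(A, row=2) == 3
```

The standard proof shows each row's expansion satisfies the multilinear axioms that characterize the determinant uniquely, which is exactly the point made later in this chat.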
Let $M$ and $N$ be $\mathbb{Z}$-modules and let $H$ be a subset of $N$. Is it possible for $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, provided $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
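A numerical version of that definition (my own sketch): parametrize a small circle and approximate $\frac{1}{2\pi i}\oint f(z)\,dz$. For a finite Laurent tail, the trapezoid rule on the circle recovers the $a_{-1}$ coefficient essentially exactly.

```python
import cmath

def residue_at_zero(f, r=1e-3, n=4096):
    # (1 / 2*pi*i) * contour integral of f over |z| = r, trapezoid rule on the circle
    total = 0.0
    dt = 2 * cmath.pi / n
    for k in range(n):
        z = r * cmath.exp(1j * k * dt)
        total += f(z) * 1j * z * dt  # f(z) dz with dz = i*z dt
    return total / (2j * cmath.pi)

# Laurent series 3/z + 5 + 2z has residue a_{-1} = 3 at the origin
res = residue_at_zero(lambda z: 3 / z + 5 + 2 * z)
```

Only the $1/z$ term survives the integration; the regular terms sum to zero over the equally spaced points on the circle.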
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator |
I should be tarred and feathered for not knowing at least the status of the following question.
Question:Let $\Gamma$ be a discrete amenable group. If $\pi:\Gamma \rightarrow B(\mathcal{H})$ is a unitary representation of $\Gamma$ on a separable Hilbert space $\mathcal{H}$, is the von Neumann algebra $\pi(\Gamma)''$ necessarily injective?
Flippantly one imagines that the answer to this question is yes, by Theorem 2.2 of Bekka's paper on amenable representations. But this result only says that the images of group elements are in the centralizer of a non-normal state...it isn't immediately clear why the entire von Neumann algebra should lie in the centralizer of such a state. If one tries to sidestep this by looking at a proof using almost invariant vectors, one is busted by the fact that a representation that is "$H$-amenable" isn't necessarily amenable in Bekka's sense.
EDIT: Makoto's nice answer below provided me with some closure. I'm still worried that I can't see a more or less direct way to this result from Connes's '76 paper on the classification of injective factors. If this paper can, in a more or less direct and self-contained way, be used to resolve the question, please feel free to include another answer. |
By Dr Adam Falkowski (Résonaances; Orsay, France)
The title of this post is purposely over-optimistic in order to increase the traffic. A more accurate statement is that a recent analysis
of the X-ray spectrum of galactic clusters claims the presence of a monochromatic \(3.5\keV\) photon line which can be interpreted as a signal of a \[ \large{m_{\nu({\rm ster})} = 7\keV} \] sterile neutrino dark matter candidate decaying into a photon and an ordinary neutrino.

Detection of An Unidentified Emission Line in the Stacked X-ray Spectrum of Galaxy Clusters, by Esra Bulbul and 5 co-authors (NASA/Harvard-Smithsonian)

It's a long way before this claim may become a well-established signal. Nevertheless, in my opinion, it's not the least believable hint of dark matter coming from astrophysics in recent years.
First, let me explain why anyone would dirty their hands to study X-ray spectra. In the most popular scenario the dark matter particle is a WIMP — a particle in the \(\GeV\)-\(\TeV\) mass ballpark that has weak-strength interactions with the ordinary matter. This scenario may predict signals in gamma rays, high-energy anti-protons, electrons etc, and these are being searched for high and low by several Earth-based and satellite experiments.
But in principle the mass of the dark matter particle could be anywhere between \(10^{-30}\) and \(10^{50}\GeV\), and there are many other models of dark matter on the market. One serious alternative to WIMPs is a \(\keV\)-mass sterile neutrino. In general, neutrinos are dark matter: they are stable, electrically neutral, and are produced in the early universe. However we know that the 3 neutrinos from the Standard Model constitute only a small fraction of dark matter, as otherwise they would affect the large-scale structure of the universe in a way that is inconsistent with observations. The story is different if the 3 "active" neutrinos have partners from beyond the Standard Model that do not interact with W- and Z-bosons — the so-called "sterile" neutrinos. In fact, the simplest UV-complete models that generate masses for the active neutrinos require introducing at least 2 sterile neutrinos, so there are good reasons to believe that these guys exist. A sterile neutrino is a good dark matter candidate if its mass is larger than \(1\keV\) (because of the constraints from the large-scale structure) and if its lifetime is longer than the age of the universe.
How can we see if this is the right model? Dark matter that has no interactions with the visible matter seems hopeless. Fortunately, sterile neutrino dark matter is expected to decay and produce a smoking-gun signal in the form of a monochromatic photon line. This is because, in order to be produced in the early universe, the sterile neutrino should mix slightly with the active ones. In that case, oscillations of the active neutrinos into sterile ones in the primordial plasma can populate the number density of sterile neutrinos, and by this mechanism it is possible to explain the observed relic density of dark matter. But the same mixing will make the sterile neutrino decay, as shown in the diagrams here. If the sterile neutrino is light enough and/or the mixing is small enough then its lifetime can be much longer than the age of the universe, and then it remains a viable dark matter candidate.
The tree-level decay into 3 ordinary neutrinos is undetectable, but the 2-body loop decay into a photon and a neutrino results in production of photons with the energy\[
\large{E=\frac{m_{\rm DM}}{2}.}
\] Such a monochromatic photon line can potentially be observed. In fact, in the simplest models sterile neutrino dark matter heavier than \(\approx 50\keV\) would produce a too large photon flux and is excluded. Thus the favored mass range for dark matter is between \(1\) and \(50\keV\). Then the photon line is predicted to fall into the X-ray domain that can be studied using X-ray satellites like XMM-Newton, Chandra, or Suzaku.
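As a back-of-the-envelope check of the kinematics (my own arithmetic, not from the paper): in a two-body decay of a sterile neutrino at rest, with the active neutrino treated as massless, the photon carries away exactly half the mass, so a \(3.5\keV\) line points at a \(7\keV\) sterile neutrino.

```python
def photon_energy_keV(m_sterile_keV):
    # N -> gamma + nu at rest; for a massless nu, energy-momentum
    # conservation gives E_gamma = m_N / 2 (natural units).
    return m_sterile_keV / 2.0

assert photon_energy_keV(7.0) == 3.5
# the 7.1 keV best-fit mass gives a ~3.55 keV line, matching the observed excess
```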
Until last week these searches were only providing lower limits on the lifetime of sterile neutrino dark matter. This paper claims they may have hit the jackpot. The paper uses the XMM-Newton data to analyze the stacked X-ray spectra of many galaxy clusters where dark matter is lurking. After subtracting the background, what they see is this:
Although the natural reaction here is a loud "are you kidding me", the claim is that the excess near \(3.56\keV\) (red data points) over the background model is very significant, at 4-5 astrophysical sigma. It is difficult to assign this excess to any known emission lines from usual atomic transitions. If interpreted as the signal of sterile neutrino dark matter, the measured energy and flux correspond to the red star in the plot, with the mass \(7.1\keV\) and a mixing angle of order \(5\times 10^{-5}\). This is allowed by other constraints and, by twiddling with the lepton asymmetry in the neutrino sector, consistent with the observed dark matter relic density.
Clearly, a lot could go wrong with this analysis. For one thing, the suspected dark matter line doesn't stand alone in the spectrum. The background mentioned above consists not only of continuous X-ray emission but also of monochromatic lines from known atomic transitions. Indeed, the \(2\)-\(10\keV\) range where the search was performed is packed with emission lines: the authors fit 28 separate lines to the observed spectrum before finding the unexpected residue at \(3.56\keV\). The results depend on whether these other emission lines are modeled properly. Moreover, the known argon XVII dielectronic recombination line happens to be nearby at \(3.62\keV\). The significance of the signal decreases when the flux from that line is allowed to be larger than predicted by models. So this analysis needs to be confirmed by other groups and by more data before we really get excited.
Decay diagrams borrowed from this review. For more up-to-date limits on sterile neutrino DM see this paper, or this plot. Update: another independent analysis of XMM-Newton data observes the anomalous 3.5 keV line in the Andromeda galaxy and the Perseus cluster. The text was reposted from Adam's blog with his permission...
Fermi surface of the Weyl type-II metallic candidate WP$_2$

Abstract
Weyl type-II fermions are massless quasiparticles that obey the Weyl equation and which are predicted to occur at the boundary between electron- and hole-pockets in certain semi-metals, i.e. the (W,Mo)(Te,P)$_2$ compounds. Here, we present a study of the Fermi-surface of WP$_2$ \emph{via} the Shubnikov-de Haas (SdH) effect. Compared to other semi-metals WP$_2$ exhibits a very low residual resistivity, i.e. $\rho_0 \simeq 10$ n$\Omega$cm, which leads to perhaps the largest non-saturating magneto-resistivity $(\rho(H))$ reported for any compound. For the samples displaying the smallest $\rho_0$, $\rho(H)$ is observed to increase by a factor of $2.5 \times 10^{7}$ $\%$ under $\mu_{0}H = 35$ T at $T = 0.35$ K. The angular dependence of the SdH frequencies is found to be in excellent agreement with the first-principle calculations when the electron- and hole-bands are shifted by 30 meV with respect to the Fermi level. This small discrepancy could have implications for the predicted topological character of this compound.
Authors' affiliations: Florida State Univ., Tallahassee, FL (United States), National High Magnetic Field Lab. (MagLab); Florida State Univ., Dept. of Physics; Univ. of Texas at Dallas, Richardson, TX (United States), Dept. of Chemistry and Biochemistry
Research Org.: Florida State Univ., Tallahassee, FL (United States), National High Magnetic Field Lab. (MagLab)
Sponsoring Org.: USDOE Office of Science (SC), Basic Energy Sciences (BES) (SC-22); National Science Foundation (NSF)
OSTI Identifier: 1399696 (Alternate: OSTI ID 1389122)
Grant/Contract Number: SC0002613; DMR-1157490
Resource Type: Accepted Manuscript
Journal: Physical Review B, Vol. 96, Issue 12; ISSN 2469-9950
Related Information: https://journals.aps.org/prb/supplemental/10.1103/PhysRevB.96.121108
Publisher: American Physical Society (APS)
Country of Publication: United States; Language: English
Subject: 75 CONDENSED MATTER PHYSICS, SUPERCONDUCTIVITY AND SUPERFLUIDITY; Weyl semi-metals; magnetoresistivity; Hall-effect
Citation Formats
Schönemann, R., Aryal, N., Zhou, Q., Chiu, Y. -C., Chen, K. -W., Martin, T. J., McCandless, G. T., Chan, J. Y., Manousakis, E., and Balicas, L. Fermi surface of the Weyl type-II metallic candidate WP2. United States: N. p., 2017. Web. doi:10.1103/PhysRevB.96.121108.
Wave energy converters in coastal structures
Version of 28 Oct 2013, 13:51
Fig 1: Construction of a coastal structure.

Introduction
Coastal works along European coasts are composed of very diverse structures. Many coastal structures are ageing and facing problems of stability, sustainability and erosion. Moreover climate change and especially sea level rise represent a new danger for them. Coastal dykes in Europe will indeed be exposed to waves with heights that are greater than the dykes were designed to withstand, in particular all the structures built in shallow water where the depth imposes the maximal amplitude because of wave breaking.
This necessary adaptation will be costly but will provide an opportunity to integrate converters of sustainable energy into the new maritime structures along the coasts and in particular in harbours. This initiative will contribute to the reduction of the greenhouse effect. The produced energy can be directly used for energy consumption in the harbour area and will reduce the carbon footprint of harbours by feeding the docked ships with green energy. Nowadays these ships use their motors to produce electric power on board even when they are docked. Integration of wave energy converters (WEC) in coastal structures will favour the emergence of the new concept of future harbours with zero emissions.
Wave energy and wave energy flux
For regular water waves, the time-mean wave energy density E per unit horizontal area on the water surface (J/m²) is the sum of the kinetic and potential energy densities per unit horizontal area. The potential energy density is equal to the kinetic energy density [1], both contributing half of the time-mean wave energy density E, which is proportional to the wave height squared according to linear wave theory [1]:
(1)
[math]E= \frac{1}{8} \rho g H^2[/math]
where [math]\rho[/math] is the water density, [math]g[/math] the gravitational acceleration and [math]H[/math] the wave height of the regular water waves. As the waves propagate, their energy is transported. The energy transport velocity is the group velocity. As a result, the time-mean wave energy flux per unit crest length (W/m), perpendicular to the wave propagation direction, is equal to [1]:
(2)
[math] P = E \times c_{g}[/math]
with [math]c_{g}[/math] the group velocity (m/s). Due to the dispersion relation for water waves under the action of gravity, the group velocity depends on the wavelength λ (m), or equivalently, on the wave period T (s). Further, the dispersion relation is a function of the water depth h (m). As a result, the group velocity behaves differently in the limits of deep water ([math]h \gt \frac{\lambda}{2}[/math]), shallow water ([math]h \lt \frac{\lambda}{20}[/math]) and intermediate depths ([math]\frac{\lambda}{20} \lt h \lt \frac{\lambda}{2}[/math]).
Application for wave energy converters
For regular waves in deep water:
[math]c_{g} = \frac{gT}{4\pi} [/math] and [math]P_{w1} = \frac{\rho g^2}{32 \pi} H^2 T[/math]
The time-mean wave energy flux per unit crest length is used as one of the main criteria to choose a site for wave energy converters.
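As a quick sketch of these deep-water formulas (the seawater density of 1025 kg/m³ and the wave parameters below are illustrative assumptions, not values from the text, and the function names are mine):

```python
import math

# A sketch of the deep-water formulas above; rho = 1025 kg/m^3 is an
# assumed seawater density (the text does not fix a value).
rho, g = 1025.0, 9.81

def group_velocity_deep(T):
    """Group velocity c_g = g*T/(4*pi) for regular deep-water waves (m/s)."""
    return g * T / (4 * math.pi)

def power_regular_deep(H, T):
    """Time-mean energy flux P_w1 = rho*g^2*H^2*T/(32*pi), W per metre of crest."""
    return rho * g**2 * H**2 * T / (32 * math.pi)

H, T = 2.0, 10.0                      # illustrative wave height (m) and period (s)
E = rho * g * H**2 / 8                # energy density, equation (1)
P = power_regular_deep(H, T)          # ~39 kW/m for this wave
assert math.isclose(P, E * group_velocity_deep(T))   # consistent with P = E*c_g
print(f"E = {E:.0f} J/m^2, P = {P/1e3:.1f} kW/m")
```

For a 2 m, 10 s regular wave this gives roughly 39 kW per metre of crest, and the closed formula agrees with E times the group velocity, as it must.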
For real seas, whose waves are random in height, period (and direction), the spectral parameters have to be used. The spectral estimate of the significant wave height, [math]H_{m0} [/math], is based on the zero-order moment of the spectral function: [math]H_{m0} = 4 \sqrt{m_0} [/math]. Moreover, the energy period is derived as follows [2]:
[math]T_e = \frac{m_{-1}}{m_0} [/math]
where [math]m_n[/math] represents the spectral moment of order n. An equation similar to that describing the power of regular waves is then obtained [2]:
[math]P_{w1} = \frac{\rho g^2}{64 \pi} H_{m0}^2 T_e[/math]
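As an illustration of these spectral estimates, the sketch below computes [math]H_{m0}[/math], [math]T_e[/math] and the flux from a toy discretised spectrum; the spectrum shape and the density are assumptions made for the example only:

```python
import math

# Toy discretised variance spectrum S(f) (m^2/Hz) -- illustrative
# numbers only; rho = 1025 kg/m^3 is an assumed seawater density.
rho, g = 1025.0, 9.81
freqs = [0.05 + 0.005 * i for i in range(61)]              # 0.05 .. 0.35 Hz
S = [math.exp(-(((f - 0.1) / 0.03) ** 2)) for f in freqs]  # peak near 0.1 Hz

def moment(n):
    """Spectral moment m_n = integral of f^n * S(f) df (trapezoidal rule)."""
    v = [f ** n * s for f, s in zip(freqs, S)]
    return sum((v[i] + v[i + 1]) * (freqs[i + 1] - freqs[i]) / 2
               for i in range(len(v) - 1))

Hm0 = 4 * math.sqrt(moment(0))                   # significant wave height, m
Te = moment(-1) / moment(0)                      # energy period, s
P = rho * g**2 * Hm0**2 * Te / (64 * math.pi)    # W per metre of crest
print(f"Hm0 = {Hm0:.2f} m, Te = {Te:.1f} s, P = {P/1e3:.1f} kW/m")
```

The same three lines at the end apply unchanged to moments computed from a measured buoy spectrum.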
If local data ([math]H_{m0}, T_e [/math]) are available for a sea state, through in-situ wave buoys for example, satellite data or numerical modelling, the last equation giving the wave energy flux [math]P_{w1}[/math] provides a first estimation. Averaged over a season or a year, it represents the maximal energetic resource that can theoretically be extracted from wave energy. If the directional spectrum of sea-state variance F(f, [math]\theta[/math]) is known, with f the wave frequency (Hz) and [math]\theta[/math] the wave direction (rad), a more accurate formulation is used:
[math]P_{w2} = \rho g\int\int c_{g}(f,h)F(f,\theta) dfd \theta[/math]
Fig 2: Time-mean wave energy flux along West European coasts [3].
It can be shown easily that equations (5 and 6) can be reduced to (4) with the hypothesis of regular waves in deep water. The directional spectrum is deduced from directional wave buoys, SAR images or advanced spectral wind-wave models, known as third-generation models, such as WAM, WAVEWATCH III, TOMAWAC or SWAN. These models solve the spectral action balance equation without any a priori restrictions on the spectrum for the evolution of wave growth.
From the TOMAWAC model, the nearshore wave atlas ANEMOC along the coasts of Europe and France, based on the numerical modelling of wave climate over 25 years, has been produced [4]. Using equation (6), the time-mean wave energy flux along West European coasts is obtained (see Fig. 2). This equation (6) still presents some limits, like the definition of the bounds of the integration. Moreover, the objective of getting data on the wave energy near coastal structures in shallow or intermediate water requires the use of numerical models that are able to represent the physical processes of wave propagation, like refraction, shoaling, dissipation by bottom friction or by wave breaking, interactions with tides and diffraction by islands.
The wave energy flux is therefore usually calculated for water depths greater than 20 m. This maximal energetic resource calculated in deep water will be limited in the coastal zone:
at low tide, by wave breaking;
at high tide during storm events, when the wave height exceeds the maximal operating conditions;
by the screen effect due to the presence of capes, spits, reefs, islands, ...
Technologies
According to the International Energy Agency (IEA), more than a hundred systems of wave energy conversion are in development in the world. Among them, many can be integrated in coastal structures. Evaluations based on objective criteria are necessary in order to sort these systems and to determine the most promising solutions.
Criteria are in particular:
the converter efficiency: the aim is to estimate the energy produced by the converter. The efficiency gives an estimate of the number of kWh produced by the machine, but not of the cost.
the converter survivability: the capacity of the converter to survive in extreme conditions. The survivability gives an estimate of the cost, considering that the smaller the extreme loads are in comparison with the mean load, the smaller the cost.
Unfortunately, few data are available in the literature. In order to determine the characteristics of the different wave energy technologies, it is necessary to classify them first into four main families [5].
An interesting result is that the maximum average wave power [math]P_{abs} [/math] (W) that a point absorber can absorb from the waves does not depend on its dimensions [5]. It is theoretically possible to absorb a lot of energy with only a small buoy. It can be shown that for a body with a vertical axis of symmetry (but otherwise arbitrary geometry) oscillating in heave, the capture (or absorption) width [math]L_{max}[/math] (m) is as follows [5]:
[math]L_{max} = \frac{P_{abs}}{P_{w}} = \frac{\lambda}{2\pi}[/math] or [math]1 = \frac{P_{abs}}{P_{w}} \frac{2\pi}{\lambda}[/math]
Fig 4: Upper limit of mean wave power absorption for a heaving point absorber.
where [math]{P_{w}}[/math] is the wave energy flux per unit crest length (W/m). An optimally damped buoy, however, responds efficiently only to a relatively narrow band of wave periods.
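Evaluating this monochromatic upper bound for an assumed 10 s swell at a 30 kW/m site (both values are illustrative, and the deep-water wavelength formula is used):

```python
import math

# Upper bound P_abs <= Pw * lambda/(2*pi) for a heaving point absorber;
# the period T and resource Pw below are illustrative assumptions.
g, T, Pw = 9.81, 10.0, 30e3
lam = g * T**2 / (2 * math.pi)     # deep-water wavelength, about 156 m
L_max = lam / (2 * math.pi)        # capture width lambda/(2*pi), about 25 m
print(f"lambda = {lam:.0f} m, L_max = {L_max:.1f} m, "
      f"P_abs <= {Pw * L_max / 1e6:.2f} MW")
```

This lands in the same order of magnitude as the roughly 1 MW upper limit quoted for irregular seas at comparable sites.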
Babarit and Hals [6] propose to derive this upper limit for the mean annual power in irregular waves at some typical locations where one could be interested in installing wave energy devices. The mean annual power absorption tends to increase linearly with the wave power resource. Overall, one can say that for a typical site whose resource is between 20 and 30 kW/m, the upper limit of mean wave power absorption is about 1 MW for a heaving WEC, with a capture width between 30 and 50 m.
In order to complete these theoretical results and to describe the efficiency of a WEC in practical situations, the capture width ratio [math]\eta[/math] is also usually introduced. It is defined as the ratio between the absorbed power and the available wave power resource per metre of wave front times a relevant dimension B (m):
[math]\eta = \frac{P_{abs}}{P_{w}B} [/math]
The choice of the dimension B will depend on the working principle of the WEC. Most of the time, it should be chosen as the width of the device, but in some cases another dimension is more relevant. Estimations of this ratio [math]\eta[/math] are given in [6]: 33% for OWC, 13% for overtopping devices, 9-29% for heaving buoys, 20-41% for pitching devices. For energy converted to electricity, one must moreover take into account the energy losses in the other components of the system.
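Using the quoted capture width ratios, a rough absorbed-power estimate can be sketched for a hypothetical device width and resource (both assumed here, not taken from the text):

```python
# Rough absorbed-power estimates from the published capture width
# ratios [6]; the device width B and resource Pw are hypothetical.
Pw, B = 25e3, 20.0                  # W/m and m (assumed)
ratios = {"OWC": 0.33, "overtopping device": 0.13,
          "heaving buoy": 0.29, "pitching device": 0.41}
absorbed_kW = {name: eta * Pw * B / 1e3 for name, eta in ratios.items()}
for name, p in absorbed_kW.items():
    print(f"{name}: ~{p:.0f} kW")
```

These are hydrodynamic absorption figures only; conversion losses downstream of the absorber would reduce the delivered electric power further.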
Civil engineering
Never forget that energy conversion is only a secondary function of the coastal structure; its primary function is still protection. It is necessary to verify whether the integration of WEC modifies the performance criteria of overtopping and stability, and to assess the consequences for the construction cost.
Integration of WEC in coastal structures will always be easier for a new structure than for an existing one. In the latter case, it requires some knowledge of the existing coastal structure. Solutions differ according to the sea state but also to the type of structure (rubble-mound breakwater, caisson breakwater with typically vertical sides). Some types of WEC are more appropriate for some types of coastal structures.
Fig 5: Several OWC (Oscillating water column) configurations (by Wavegen – Voith Hydro).
Environmental impact
Wave absorption, if it is significant, will change the hydrodynamics along the structure. If there is a mobile bottom in front of the structure, a sand deposit can occur. Ecosystems can also be altered by the change of hydrodynamics and by the acoustic noise generated by the machines.
Fig 6: Finistere area and locations of the six sites (google map).
Study case: Finistere area
The Finistere area is an interesting study case because it is located in the far west of the Brittany peninsula and consequently receives the largest wave energy flux along the French coasts (see Fig. 2). This area, with a very ragged coast, moreover gathers many commercial ports, fishing ports and yachting ports. The area produces only a small part of the electricity it consumes and is located far from power plants. There is therefore a need for locally produced renewable energy. This issue is particularly important on islands. The production of electricity by wave energy will have seasonal variations: the wave energy flux is indeed larger in winter than in summer. Consumption peaks in winter due to the heating of buildings, but summer consumption is also strong due to the arrival of tourists.
Six sites are selected (see Fig. 7) for a preliminary study of wave energy flux and of the capacity to integrate wave energy converters. The wave energy flux is expected to be in the range 1-10 kW/m. The length of each breakwater exceeds 200 metres. The wave power along each structure is therefore estimated between 200 kW and 2 MW. Note that much longer coastal structures exist, for example at Cherbourg (France), with a length of 6 kilometres.
(1) Roscoff (300 meters)
(2) Molène (200 meters)
(3) Le Conquet (200 meters)
(4) Esquibien (300 meters)
(5) Saint-Guénolé (200 meters)
(6) Lesconil (200 meters)
Fig.7: Finistere area, the six coastal structures and their length (google map).
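The quoted 200 kW to 2 MW range follows directly from the flux estimate and the minimum structure length, as a back-of-envelope check:

```python
# Back-of-envelope check of the 200 kW - 2 MW range quoted above for a
# 200 m breakwater exposed to a 1-10 kW/m wave energy flux.
flux_lo, flux_hi = 1e3, 10e3       # W per metre of crest length
length = 200.0                     # m, minimum structure length
p_lo, p_hi = flux_lo * length, flux_hi * length
print(f"{p_lo/1e3:.0f} kW to {p_hi/1e6:.1f} MW")   # 200 kW to 2.0 MW
```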
The wave power flux along a structure depends on local parameters: the bottom depth at the structure toe, the presence of capes, the direction of the waves and the orientation of the coastal structure. See Fig. 8 for the statistics of wave directions measured by a wave buoy located at the Pierres Noires Lighthouse. These measurements show that structures well oriented towards West waves should be chosen in priority. Peaks of consumption often occur with low temperatures in winter, coming with winds from East-North-East directions. Structures well oriented towards East waves could therefore also be interesting, even if their mean production is weak.
Fig 8: Wave measurements at the Pierres Noires Lighthouse.
Conclusion
Wave energy converters (WEC) integrated in coastal structures can be considered as a land-based renewable energy. The expected energy can be compared with that of onshore wind farms, but not with offshore wind farms, whose number and power are much larger. As a land-based system, maintenance will be easy. Besides energy production, the advantages of such systems are:
a "zero emission" port
industrial tourism
tests of WEC for future offshore installations
Acknowledgement
This work is in progress in the frame of the national project EMACOP funded by the French Ministry of Ecology, Sustainable Development and Energy.
See also
Waves
Wave transformation
Groynes
Seawall
Seawalls and revetments
Coastal defense techniques
Wave energy converters
Shore protection, coast protection and sea defence methods
Overtopping resistant dikes
References
[1] Mei C.C. (1989). The applied dynamics of ocean surface waves. Advanced Series on Ocean Engineering. World Scientific Publishing Ltd.
[2] Vicinanza D., Cappietti L., Ferrante V. and Contestabile P. (2011). Estimation of the wave energy along the Italian offshore. Journal of Coastal Research, Special Issue 64, pp. 613-617.
[3] Mattarolo G., Benoit M. and Lafon F. (2009). Wave energy resource off the French coasts: the ANEMOC database applied to the energy yield evaluation of wave energy converters. 10th European Wave and Tidal Energy Conference (EWTEC'2009), Uppsala (Sweden).
[4] Benoit M. and Lafon F. (2004). A nearshore wave atlas along the coasts of France based on the numerical modeling of wave climate over 25 years. 29th International Conference on Coastal Engineering (ICCE'2004), Lisbon (Portugal), pp. 714-726.
[5] De O. Falcão A.F. (2010). Wave energy utilization: A review of the technologies. Renewable and Sustainable Energy Reviews, Volume 14, Issue 3, pp. 899-918.
[6] Babarit A. and Hals J. (2011). On the maximum and actual capture width ratio of wave energy converters. 11th European Wave and Tidal Energy Conference (EWTEC'2011), Southampton (UK).
Happy New Year and welcome to my first post of 2019!
My last post introduced the idea of modelling physical things with math equations. Doing this from scratch requires calculus, but seeing the final result is very interesting. So in my last post, I modelled the simple physical event of a ball thrown into the air. Another common example when introducing modelling to students is a mass on a spring. But before I develop this, I want to show what the graphs of some trigonometric equations look like, as they will be needed to describe any kind of motion that is cyclic, that is, motion that repeats, like a mass on a spring bobbing up and down.
So in a previous post, I defined what sin 𝜃 and cos 𝜃 are in terms of a right triangle. Given the triangle below,
the sine and cosine of 𝜃 are defined as\[ \sin\mathit{\theta}\hspace{0.33em}{=}\hspace{0.33em}\frac{\mathrm{opp}}{\mathrm{hyp}}\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\cos\mathit{\theta}\hspace{0.33em}{=}\hspace{0.33em}\frac{\mathrm{adj}}{\mathrm{hyp}}\]
Let’s look at the sine for now. For very small angles, the opposite side will be small compared to the hypotenuse. Graphically, I think you can see that for an angle of 0°, there would be no opposite side so sin 0° = 0.
In the other extreme, as the angle gets close to 90°, the opposite side is close to the length of the hypotenuse, so the sine approaches 1. In fact, sin 90° = 1.
Now angles are periodic in that they repeat every 360°. That is, an angle of 30° is equivalent to 30 + 360 = 390°. Another full circle of 360° can be added again to get an equivalent angle, 390 + 360 = 750°. Angles can also be negative, based on a convention for which direction you move to create the angle. Even with negative angles, multiples of 360° can be added or subtracted to get an equivalent angle whose sine will be the same. The diagram below shows these variations based on angles generated from the positive x-axis:
The angle in red is a positive angle, that is, it is formed by going in the counter-clockwise direction from the x-axis. From that angle, you can go 1, 2, 3, etc. complete circles to form the same angle. The angle in blue is a negative angle, formed by going clockwise from the x-axis. One can also go multiple complete circles around this angle to get the same angle. The point is that as you measure angles from 0, either in the positive or negative direction, you eventually repeat the same angles, and these same angles will have the same sine value.
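This periodicity can be checked numerically; here is a small sketch (angles in degrees, converted to radians for the computation):

```python
import math

# A quick numerical check of the claim: co-terminal angles (differing by
# a multiple of 360 degrees, in either direction) share the same sine.
for k in (-2, -1, 0, 1, 2):
    assert math.isclose(math.sin(math.radians(30 + 360 * k)),
                        math.sin(math.radians(30)), abs_tol=1e-9)
print(math.sin(math.radians(0)), math.sin(math.radians(90)))   # 0.0 and 1.0
print(math.sin(math.radians(390)))                             # ~0.5, same as sin 30
```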
In my next post, I will plot the sine values against the angle values and show graphically what “periodic” means. |
ISSN: 1556-1801
eISSN: 1556-181X
Networks & Heterogeneous Media
June 2015, Volume 10, Issue 2
Abstract:
We study the approximation of Wasserstein gradient structures by their finite-dimensional analog. We show that simple finite-volume discretizations of the linear Fokker-Planck equation exhibit the recently established entropic gradient-flow structure for reversible Markov chains. Then we reprove the convergence of the discrete scheme in the limit of vanishing mesh size using only the involved gradient-flow structures. In particular, we make no use of the linearity of the equations nor of the fact that the Fokker-Planck equation is of second order.
Abstract:
The paper develops a model of traffic flow near an intersection, where drivers seeking to enter a congested road wait in a buffer of limited capacity. Initial data comprise the vehicle density on each road, together with the percentage of drivers approaching the intersection who wish to turn into each of the outgoing roads.
If the queue sizes within the buffer are known, then the initial-boundary value problems become decoupled and can be solved independently along each incoming road. Three variational problems are introduced, related to different kinds of boundary conditions. From the value functions, one recovers the traffic density along each incoming or outgoing road by a Lax-type formula.
Conversely, if these value functions are known, then the queue sizes can be determined by balancing the boundary fluxes of all incoming and outgoing roads. In this way one obtains a contractive transformation, whose fixed point yields the unique solution of the Cauchy problem for traffic flow in a neighborhood of the intersection.
The present model accounts for backward propagation of queues along roads leading to a crowded intersection; it achieves well-posedness for general $L^\infty $ data, and continuity w.r.t. weak convergence of the initial densities.
Abstract:
Pipeline networks for gas transportation often contain circles. For such networks it is more difficult to determine the stationary states than for networks without circles. We present a method that allows one to compute the stationary states for subsonic pipe flow, governed by the isothermal Euler equations, for certain pipeline networks that contain circles. We also show that suitably chosen boundary data determine the stationary states uniquely. The construction is based upon novel explicit representations of the stationary states on single pipes for the cases with zero slope and with nonzero slope. In the case with zero slope, the state can be represented using the Lambert W function.
Abstract:
We consider a two-dimensional atomic mass spring system and show that in the small displacement regime the corresponding discrete energies can be related to a continuum Griffith energy functional in the sense of $\Gamma$-convergence. We also analyze the continuum problem for a rectangular bar under tensile boundary conditions and find that depending on the boundary loading the minimizers are either homogeneous elastic deformations or configurations that are completely cracked generically along a crystallographic line. As applications we discuss cleavage properties of strained crystals and an effective continuum fracture energy for magnets.
Abstract:
The evolution Stokes equation in a domain containing periodically distributed obstacles, subject to a Fourier boundary condition on the boundaries, is considered. We assume that the dynamics is driven by a stochastic perturbation on the interior of the domain and another stochastic perturbation on the boundaries of the obstacles. We represent the solid obstacles by holes in the fluid domain. The macroscopic (homogenized) equation is derived as another stochastic partial differential equation, defined in the whole non-perforated domain. Here, the initial stochastic perturbation on the boundary becomes part of the homogenized equation as another stochastic force. We use the two-scale convergence method, after extending the solution with 0 in the holes, to pass to the limit. By Itô stochastic calculus, we get uniform estimates on the solution in appropriate spaces. In order to pass to the limit in the boundary integrals, we rewrite them in terms of integrals over the whole domain. In particular, for the stochastic integral on the boundary, we combine the previous idea of rewriting it on the whole domain with the assumption that the Brownian motion is of trace class. Due to the particular boundary condition dealt with, we find that the solution of the stochastic homogenized equation is not divergence free. However, it is coupled with the cell problem, which has a divergence-free solution. This paper represents an extension of the results of Duan and Wang (Comm. Math. Phys. 275:1508-1527, 2007), where a reaction-diffusion equation with a dynamical boundary condition, with a noise source term both on the interior of the domain and on the boundary, was studied, and where the homogenized equation was derived through a tightness argument and a pointwise two-scale convergence method.
Abstract:
In this paper, we study the stability result for the conductivities diffusion coefficients to a strongly reaction-diffusion system modeling electrical activity in the heart. To study the problem, we establish a Carleman estimate for our system. The proof is based on the combination of a Carleman estimate and certain weight energy estimates for parabolic systems.
Abstract:
We consider localized perturbations to spatially homogeneous oscillations in dimension 3, using the complex Ginzburg-Landau equation as a prototype. In particular, we will focus on inhomogeneities that locally change the phase of the oscillations. In the usual translation-invariant spaces, and at $ \epsilon=0$, the linearization about these spatially homogeneous solutions results in an operator with zero eigenvalue embedded in the essential spectrum. In contrast, we show that when considered as an operator between Kondratiev spaces, the linearization is a Fredholm operator. These spaces consist of functions with algebraic localization that increases with each derivative. We use this result to construct solutions close to the equilibrium via the Implicit Function Theorem, and derive asymptotics for wavenumbers in the far field.
Abstract:
We investigate a class of linear discrete control systems, modeling the controlled dynamics of planar manipulators as well as the skeletal dynamics of human fingers and birds' toes. A self-similarity assumption on the phalanxes allows us to reinterpret the control field ruling the whole dynamics as an Iterated Function System. By exploiting this relation, we apply results coming from self-similar dynamics in order to give a geometrical description of the control system and, in particular, of its reachable set. This approach is then applied to the investigation of the zygodactyl phenomenon in birds, and in particular in parrots. This arrangement of the toes of a bird's foot, common in species living on trees, is a distribution of the foot with two toes facing forward and two facing back. Reachability and grasping configurations are then investigated. Finally, a hybrid system modeling the owl's foot is introduced.
Do we have complexity classes with respect to, say, average-case complexity? For instance, is there a (named) complexity class for problems which can be decided in expected polynomial time?
Another question considers the best-case complexity, exemplified below:
Is there a class of (natural) problems whose decision requires at least exponential time?
To clarify, consider some EXP-complete language $L$. Obviously, not all instances of $L$ require exponential time: there are instances which can be decided even in polynomial time. So, the best-case complexity of $L$ is not exponential time. EDIT: Since several ambiguities arose, I want to try to clarify it even more. By "best-case" complexity, I mean a complexity class whose problems' complexity is lower-bounded by some function. For instance, define BestE as the class of languages which cannot be decided in time less than some linear exponential. Symbolically, let $M$ denote an arbitrary Turing machine, and $c$, $n_0$, and $n$ be natural numbers:
$L \in \mathbf{BestE} \Leftrightarrow$ $\quad (\exists c)(\forall M)[(L(M) = L) \Rightarrow (\exists {n_0})(\forall n > {n_0})(\forall x \in {\{0,1\}^n})[T(M(x)) \ge {2^{c|x|}}]]$
where $T(M(x))$ denotes the time it takes for $M$ to halt on input $x$.
I accept that defining such a class of problems is very odd, since we require that every Turing machine $M$, regardless of its power, cannot decide the language in time less than some linear exponential.
Yet notice that the polynomial-time counterpart (BestP) is natural, since every Turing machine requires time $|x|$ just to read its input.
PS: Maybe, instead of quantifying over all Turing machines $M$, we should limit it to some pre-specified class of Turing machines, such as polynomial-time Turing machines. That way, we can define classes like $\mathbf{Best(n^2)}$, which is the class of languages requiring at least quadratic time to be decided on polynomial-time Turing machines.
PS2: One can also consider the circuit-complexity counterpart, in which we consider the least circuit size/depth to decide a language. |
Let $$f(x,y)=\sum_{n\in\mathbb{Z}\backslash\{0\}}\frac{1}{n}e^{2\pi i(xn+yn^2)} $$ Is it true that $\|f\|_{L^{\infty}(\mathbb{R}^2)}<\infty$? i.e. is $f$ essentially bounded?
The answer is yes. Fix $x,y$, and write $e(\alpha) := e^{2\pi i \alpha}$.
Using a Littlewood-Paley partition of unity and the triangle inequality, we may bound
$$ |f(x,y)| \leq \sum_N a_N$$ where $N$ ranges over powers of two, $$ a_N := \left|\sum_{n \in {\bf Z} \backslash 0} \psi( \frac{n}{N}) \frac{1}{n} e(x n + yn^2)\right|, $$ and $\psi$ is a suitable even bump function supported on (say) $\pm [1/4,4]$. (Actually, if one wished, one could replace the smooth cutoff $\psi(\frac{n}{N})$ here by a restriction $N \leq |n| < 2N$, and the arguments below would all go through essentially unchanged.) By the triangle inequality, we have $a_N = O(1)$ uniformly in $N$.
Fix $0 < \delta \leq 1$. We will show that there are only $O( \log \frac{1}{\delta} )$ values of $N$ for which $a_N \geq \delta$, which on taking $\delta=2^{-k}$ for natural numbers $k$ and summing gives the desired bound $\sum_N a_N = \sum_{k=1}^\infty O(k 2^{-k}) = O(1)$.
By discarding all the small $N$, we may assume that $N \geq C \delta^{-C}$ for a large absolute constant $C$.
Suppose that $a_N \geq \delta$; then by summation by parts we have $|\sum_{n \in I} e( xn + yn^2 )| \gg \delta N$ for some interval $I$ in $[-10N,10N]$. Applying Weyl sum estimates (see e.g. Exercise 16 of my blog notes) and taking common denominators, this implies that there are rational numbers $a/q, b/q$ with $q = O( \delta^{-O(1)})$ such that $|x - a/q| \ll\delta^{-O(1)} / N$ and $|y-b/q| \ll \delta^{-O(1)} / N^2$ (we allow $a,b,q$ to have common factors). As we are assuming $N \geq C \delta^{-C}$ for a large $C$, this forces the rationals $a/q, b/q$ to depend only on $\delta$ and not on $N$ (since two distinct rationals $a/q,a'/q'$ with $q,q' = O(\delta^{-O(1)})$ would be separated from each other by too great a distance).
We now have
$$ N |x-a/q| + N^2 |y-b/q| \ll \delta^{-O(1)}.$$
Suppose that in fact we had
$$ N |x-a/q| + N^2 |y-b/q| \leq C^{-1} \delta^C$$
for a large absolute constant $C$. Then we can approximate $e(xn+yn^2)$ by $e((an+bn^2)/q)$ with acceptable error and conclude that $$ \left|\sum_n \psi(\frac{n}{N}) \frac{1}{n} e( (an+bn^2)/q )\right| \gg \delta N.$$ As the function $e( (an+bn^2)/q )$ is periodic with period $q = O( \delta^{-O(1)})$, which is much smaller than $N$, one can split into arithmetic progressions mod $q$, approximate Riemann sums by Riemann integrals (crudely upper bounding the mean of $e((an+bn^2)/q)$ in magnitude by $1$), and obtain $$ \left|\int_{\bf R} \psi(\frac{t}{N}) \frac{1}{t}\ dt\right| \gg \delta N.$$ But the integrand is odd and so the integral vanishes, a contradiction.
Thus we have $$ \delta^{O(1)} \ll N |x-a/q| + N^2 |y-b/q| \ll \delta^{-O(1)}$$ and so (since $a,b,q$ do not depend on $N$) there are only $O(\log \frac{1}{\delta})$ powers of two $N$ for which $a_N \geq \delta$, as claimed.
Prof. Tao's answer is excellent. I also found two research papers answering the question so I list them below as complementary reference:
G.I. Arkhipov and K.I. Oskolkov, On a special trigonometric series and its applications, Math. USSR Sb. 62 (1989), 145. Link to the article: http://iopscience.iop.org/0025-5734/62/1/A10 See Theorem 1.
E.M. Stein and S. Wainger, Discrete analogues of singular Radon transforms, Bulletin of the AMS, 1990. Link to the article: http://www.ams.org/journals/bull/1990-23-02/S0273-0979-1990-15973-7/S0273-0979-1990-15973-7.pdf See the Lemma in Section 6.
The key point of their results is that the upper bound depends only on the degree of the polynomial, not on its coefficients (i.e. $x$ and $y$ in the question).
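As a complementary, purely numerical (and of course non-rigorous) illustration of these results, one can evaluate truncated partial sums of the series at random points and observe that they stay bounded:

```python
import cmath, math, random

# Numerical illustration (not a proof) of the uniform bound: partial
# sums S_N(x,y) of the series stay O(1) across random (x,y) and large N.
def partial_sum(x, y, N):
    # Sum over 0 < |n| <= N of e^{2 pi i (x n + y n^2)} / n,
    # folding the terms n and -n together.
    s = 0j
    for n in range(1, N + 1):
        phase_plus = cmath.exp(2j * math.pi * (x * n + y * n * n))
        phase_minus = cmath.exp(2j * math.pi * (-x * n + y * n * n))
        s += (phase_plus - phase_minus) / n
    return s

random.seed(1)
worst = max(abs(partial_sum(random.random(), random.random(), 2000))
            for _ in range(50))
print(f"max |S_N| over 50 random points: {worst:.3f}")   # remains O(1)
```

Note that for $x = 0$ the folded terms cancel exactly, consistent with the series being odd in $x$.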
Let $X$ be a smooth projective complex analytic space. We can cook up a complex analytic version of Bloch's cycle complex by declaring $z^n(X^{\rm an}, m)$ to be the free abelian group on all codimension $n$ analytic cycles on $X\times\Delta^m$ ($\Delta^m$ being the usual standard $m$-simplex in complex analytic spaces, i.e. the spectrum of $\mathbf{C}\{u_0,\ldots, u_m\}/(u_0+\ldots+u_m -1)$) in good position (i.e. intersecting every face in the appropriate codimension, as in Bloch's paper). The differential $d_m$ is the same as in Bloch's original definition, turning $(z^n(X^{\rm an}, m), d_m)$ into a complex of abelian groups.
Call $$\mathbf{Z}(n)_{\mathcal{M}} := (z^n(X^{\rm an}, m), d_m)[2n]$$ and its hypercohomology the "motivic cohomology of $X$".
Here's the question. Is motivic cohomology of $X$ at all related to the Deligne cohomology of $X$? More optimistically, does there exist a quasi-isomorphism
$$\mathbf{Z}(n)_{\mathcal{M}}\to\mathbf{Z}(n)_{\mathcal{D}} ?$$
How should one think about Deligne cohomology, in other words, if not as "the motivic cohomology of complex analytic spaces"?
Remarks
I can imagine that a regulator map $\text{reg} : \mathbf{Z}(n)_{\mathcal{M}}\to\mathbf{Z}(n)_{\mathcal{D}}$ can be defined using currents, as is done for the classical regulator.
This is for sure going to be a (rather uninteresting) quasi-isomorphism when $n = 0$, since $X$ is smooth.
For $n = 1$ this is likely going to be a quasi-isomorphism too (if one doesn't mess up the definition of $\text{reg}$): both sides are just $\mathbf{G}_m[-1]$.
Let $\chi : (\mathbb Z/f\mathbb Z)^\times \to K = \mathbb Q(\mu_{\phi(f)})$ be a primitive Dirichlet character. Assume moreover that it is not quadratic, that is, $\chi^2$ is not the trivial character. Let $\pi_1,\dots,\pi_g$ be the primes of $K$ lying over $2$ and $v_1,\dots,v_g$ the corresponding valuations. Recall that:$$L(0,\chi) = -\frac1f\sum_{n=1}^fn\chi(n)$$
Experimentally (up to conductor 200), I find that there always exists some $k$ such that $v_k(L(0,\chi)) > 0$. Does anyone know a proof?
Note that it is not true that $2 \mid L(0,\chi)$. For instance, for the character of conductor $5$ mapping $2 \mapsto i$, we have $L(0,\chi) = (i+3)/5$. There are lots of other examples.
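As a quick sanity check of this example, using the standard convention $L(0,\chi) = -\frac1f\sum_{n=1}^f n\chi(n)$ (a short script, with the character values written out by hand):

```python
# Check of the conductor-5 example: chi is the primitive character
# mod 5 with chi(2) = i; note 2 generates (Z/5Z)^x, as 2^k runs 2,4,3,1.
f = 5
chi = {1: 1, 2: 1j, 4: -1, 3: -1j}            # chi(2**k mod 5) = i**k
L0 = -sum(n * chi[n] for n in range(1, f)) / f
print(L0)                                      # (0.6+0.2j), i.e. (3+i)/5
```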
Note also that we do require the condition that $\chi$ is non quadratic. For instance, if $f = p \equiv 3 \pmod4$ and $\chi$ is quadratic, then: $$pL(0,\chi) \equiv (p-1)/2 \equiv 1 \pmod 2.$$
I asked this question a few hours earlier on stackexchange but, at someone's suggestion, I am posting it here. I have deleted the question on stackexchange.
Some excellent ideas and approaches to this problem. The first part of the problem (when the triangle is larger than the square) is fairly accessible, as is the part where the two equal sides of the triangle shrink to less than half the length of the side of the square, but the bit in between is tricky!
The first solution offered below is based on the work of Thomas Davies, Hannah McKenzie, Elliot Husband, Lizzie Farnham and Hannah Bradley of Madras College. The group also spent time looking at what happens as the triangle shrinks (not so easy). Well done to all of you.
The second solution is from Andrei Lazanu of School 205, Bucharest. Thank you for this, Andrei.
Other excellent solutions were received from Alison Colvin, Sheila Norrie and Shona Leenhouts, also of Madras College, who have partly answered one of the January problems (if you tackle this problem you can refer to what you did for tilting triangles). I think the January version is a little easier.
Solution one
Introduction
We believed that the triangle took up a quarter of the square,and that a total of four triangles could fit around the square. Wecreated a moving example:
Explanation
We started by rotating a square inside the four triangles asthis has the same effect as rotatiing the triangle (editors note:I"liked this bit of lateral thinking").
From our "moving" representation (Fig. 1) we could see that it is always possible to fit four right-angled triangles around the centre of the square. This is because the centre of the square allows a 360° rotation and, as the triangles are right-angled, they have angles of 90° (360 / 90 = 4).
Fig 1
Fig 2
Fig 3
We believed that a quarter of the square was overlapped by each triangle and we set out to prove this.
First we placed the triangle in a simple position on the square (Fig. 2). We could clearly see that the green area of the triangle is a quarter of the square - and what about the yellow and red areas...?
We suspected that they added up to another quarter. To show this we cut out the areas and stuck them on to the square (Fig. 3). This showed that the yellow and red areas also took up a quarter of the square, and that the whole triangle took up half the square.
We soon realised that this was only an example for one position of the triangle - we needed to look at the bigger picture.
The Bigger Picture
As can be seen, the square in the middle of the four triangles is 2 units by 2 units. This means that the overlap of each of the four triangles is congruent and makes up a quarter of the square.
As the four triangles fit on the square at all times, the overlap of the triangles onto the square must always be a quarter of the square, which is equal to 1 square unit.
Conclusion
In our write-up we have shown that 4 triangles can fit around the square and that the area of overlap is 1 square unit. This is always true as long as the sides of the triangle are 2 units.
The Second Solution
To find the length of the diagonal of a square of side 2, use the Pythagorean Theorem:
$2^2 + 2^2 = d^2$
So, $d = 2\sqrt{2}$.
1. Starting with the triangle with the two equal sides 2 units in length.
1a) If the hypotenuse of the isosceles triangle is parallel to one side of the square.
This means the equal sides of the isosceles triangle are on the diagonal of the square; the area of the square occupied by the triangle is the area of the right-angled isosceles triangle of side half the diagonal.
This means that half the diagonal is $\sqrt{2}$ units. The area of the triangle is $$ \frac{1}{2} \times \sqrt{2} \times \sqrt{2} = 1 \text{ unit}^2. $$

1b) The triangle is in a tilted position.
Triangles $AMB$ and $CMD$ are congruent, because they are formed by the rotation of triangle $FMG$. This means their areas are equal. So, the area of the square occupied by the triangle is the same. Any position can be reduced to position a).
1c) The equal sides of the triangle are parallel to the sides of the square.
It is very clear that the triangle forms in the square a smaller square of area 1 unit$^2$.
2. The congruent sides of the triangle are $\sqrt{2}$ units.
2a) The hypotenuse of the isosceles triangle is parallel to one side of the square.
As seen from 1a, this is a particular case. The hypotenuse of this triangle is the side of the square, also having the same area as in 1a.
2b) The triangle has a tilted position.
In this part of the problem, the area of the square covered by the triangle is smaller than 1 unit$^2$.
2c) The equal sides are parallel to the sides of the square.
Triangles $ACB$ and $AGF$ are both isosceles right-angled triangles with equal sides of $\sqrt{2} - 1$ units, because they are right-angled triangles with a $45^{\circ}$ angle. So, $BDE$ is also a right-angled triangle with the congruent sides:
$1 - (\sqrt{2}-1) = 2 - \sqrt{2}$
The area of the polygon $BCMGE$ is the area of quadrilateral $DCMG$ minus the area of triangle $BDE$. This is:
$1 - \dfrac{(2 - \sqrt{2})^2}{2} = 1 - \dfrac{4 + 2 - 4\sqrt{2}}{2} = 1 - (3 - 2\sqrt{2}) = 2\sqrt{2} - 2$ units$^2$
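A quick numeric check of this computation (my addition, not part of the students' work):

```python
import math

s = 2 - math.sqrt(2)          # leg of triangle BDE
area = 1 - s * s / 2          # area of DCMG minus area of BDE
print(area)                   # equals 2*sqrt(2) - 2, about 0.828
```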
3. Here, I must find the limit where the area of the triangle is all inside the square. This happens when triangle $MAF$ is the same as triangle $MCG$, that is, when the congruent sides of the right-angled isosceles triangle are 1 unit. If I reduce the side of the triangle further, the whole triangle will be inside the square.
Passive vibration suppression of plate using multiple optimal dynamic vibration absorbers

Abstract
In the present paper, the optimization problem of dynamic vibration absorbers (DVAs) for suppressing vibrations of thin plates over a wide frequency band is investigated. The plate is taken to have simply supported edges and to be subjected to a concentrated harmonic force. Vibration suppression is accomplished by attaching multiple mass-spring absorbers so as to minimize the plate deflection at the natural frequencies of the plate without absorbers. The governing equations of the plate equipped with DVAs are derived for both isotropic and FG plates and are solved numerically and analytically. The formulation is capable of optimizing the \(L_{2}\) norm of the plate deflection over a wide frequency band with respect to the mass, stiffness and attachment position of each absorber. The possibility of simultaneously absorbing one or several natural frequencies of the bare plate is also studied, and numerical results are presented.
Keywords: Dynamic vibration absorber, \(L_{2}\) norm, Optimization, Absorption frequency

List of symbols

\(F_{{0}},F_{0}^{*}\)

Dimensional and dimensionless amplitudes of the excitation force, respectively
\(\varOmega ,\alpha \)
Dimensional and dimensionless excitation frequency, respectively
\(t,\tau \)
Dimensional and dimensionless time, respectively
\(\left( X_{{0}},Y_{{0}} \right) ,\left( x_{{0}},y_{{0}} \right) \)

Dimensional and dimensionless coordinates of the point of application of the force, respectively
\(\left( X_{j},Y_{j} \right) ,\left( x_{j},y_{j} \right) \)

Dimensional and dimensionless coordinates of the jth absorber attachment point, respectively

\(\left( X,Y \right) ,\left( x,y \right) \)

Dimensional and dimensionless coordinates of an arbitrary point of the plate, respectively

\(M_{j},M_{j}^{*}\)

Dimensional and dimensionless masses of the jth absorber, respectively

\(k_{j},k_{j}^{*}\)

Dimensional and dimensionless stiffnesses of the jth absorber, respectively

\(u_{j},q_{j}\)

Dimensional and dimensionless displacement of the jth absorber mass with respect to a fixed reference point, respectively

\(Q_{j}\)

Amplitude of \(q_{j}\)
a, b, h
Length, width, and thickness of the plate, respectively
N
Number of dynamic absorbers
\(\bar{W}\left( X,Y,t \right) ,W\left( x,y,t \right) \)
Dimensional and dimensionless deflection of plate, respectively
\(w\left( x,y \right) \)
Amplitude of the dimensionless deflection of plate
\(a_{mn}\)
Coefficients of the plate mode shapes or components of \(\vec {a}\)
E, E(z)
Elasticity modulus of isotropic and FG plates, respectively
\(\nu \)
Poisson's ratio
D
Flexural or bending rigidity of the plate
\(\rho ,\rho (z)\)
Density of the isotropic and FG plates, respectively
\(\delta \left( . \right) \)
Dirac delta function
c
Wave velocity in the plate
\(\beta \)
Aspect ratio (ratio of the plate length to its width)
\(\mu _{j}\left( \alpha \right) , \lambda _{j}\left( \alpha \right) , \tau _{lj}\left( \alpha \right) , \rho _{j}(\alpha )\)
Predefined parameters
\(\alpha _{mn}\)
Dimensionless natural frequencies of the bare plate (plate without absorber)
\(f_{mn}\left( x,y \right) , g_{mn}\left( x,y \right) , \psi \left( x,y,z,v,\alpha \right) ,\theta \left( x,y,z,v,\alpha _{rs} \right) , P_{jmnpq}\left( x,y \right) ,Q_{jmnpq}\left( x,y \right) , R_{jmnpq}\left( x,y \right) ,S_{jmnpq}\left( x,y \right) \)
Predefined functions
\(A_{mnpq}\left( \alpha \right) ,B_{mnpq}\left( \alpha \right) ,B_{imnpq}\left( \alpha \right) \) ,\(C_{imnpq}\left( \alpha \right) ,D_{imnpq}\left( \alpha \right) \)
Entries of matrices \({\varvec{A}}\left( \alpha \right) ,{\varvec{B}}\left( \alpha \right) ,{\varvec{B}}_{i}\left( \alpha \right) ,{\varvec{C}}_{i}\left( \alpha \right) ,{\varvec{D}}_{i}\left( \alpha \right) \)
\(\gamma _{pq}\left( \alpha \right) \)
Components of the vector \(\vec {d}\)
\(\delta _{mp}\)
Kronecker delta
\(A_{11},B_{11},D_{11},A_{12},B_{12},D_{12},A_{33},B_{33},D_{33},I_{0},I_{1},I_{2}\)
Material constants defined for the FG plate
\(D^{*},I_{1}^{*},I_{2}^{*}\)
Dimensionless parameters defined in terms of the material constants of the FG plate
\(\left\| w \right\| \)
\(L_{{2}}\) norm of the plate deflection
\(\vec {e}\)
A predefined vector with components \(f_{pq}\left( x_{{0}},y_{{0}} \right) \)
\(N_\mathrm{f}\)
Number of the natural frequencies of the bare plate
\(N_{{1}},N_{{2}}\)

Numbers of the indexes \(r\) and \(s\) chosen for the frequency \(\alpha _{rs}\)

\({\varvec{J}}_{{4}N\times 4N}\)
Jacobian matrix
\(A_{j},B_{j},\theta _{{11}},\theta _{12},\theta _{22},\theta _{{01}},\theta _{02}\)
Predefined constants
The Beta distribution has the PDF:
$$f\left(x\right)=\frac{x^{\alpha-1}\left(1-x\right)^{\beta-1}}{\mathrm{B}\left(\alpha,\beta\right)}$$
for $0<x<1$, and $f(x)=0$ otherwise. The parameters $\alpha,\beta$ are positive real numbers.
The mean and variance are given by:
$$\mu=\frac{\alpha}{\alpha+\beta},\quad\sigma^{2}=\frac{\alpha\beta}{\left(\alpha+\beta\right)^{2}\left(1+\alpha+\beta\right)}$$
which can be inverted to give $\alpha,\beta$ in terms of the mean and the variance as $\alpha=\lambda\mu$ and $\beta=\lambda\left(1-\mu\right)$, where
$$\lambda=\frac{\mu\left(1-\mu\right)}{\sigma^{2}}-1$$
Now I want to impose the condition that $\alpha,\beta \ge 1$. What does this imply for the mean and the variance? That is, is there a simple condition on $\mu,\sigma^2$ that is equivalent to $\alpha,\beta \ge 1$? |
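For anyone wanting to experiment, here is a minimal sketch (my own, not from the question) that round-trips the moment formulas above; the helper name `beta_params` is hypothetical:

```python
# Invert (mean, variance) to Beta parameters using the formulas in the question.
def beta_params(mu, var):
    lam = mu * (1 - mu) / var - 1
    return lam * mu, lam * (1 - mu)   # alpha = lam*mu, beta = lam*(1-mu)

# Round-trip check: start from (alpha, beta), compute (mu, var), invert back.
a, b = 2.5, 4.0
mu = a / (a + b)
var = a * b / ((a + b) ** 2 * (1 + a + b))
ra, rb = beta_params(mu, var)
print(ra, rb)  # recovers 2.5 and 4.0 (up to float rounding)
```

The condition $\alpha,\beta \ge 1$ then translates directly into $\lambda\mu \ge 1$ and $\lambda(1-\mu) \ge 1$ with $\lambda$ as defined above.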
The feature that makes LaTeX the right editing tool for scientific documents is the ability to render complex mathematical expressions. This article explains the basic commands to display equations.
Basic equations in LaTeX can be easily "programmed", for example:
The well known Pythagorean theorem \(x^2 + y^2 = z^2\) was proved to be invalid for other exponents. Meaning the next equation has no integer solutions: \[ x^n + y^n = z^n \]
As you see, the way the equations are displayed depends on the delimiter, in this case \[ \] and \( \).
LaTeX allows two writing modes for mathematical expressions: the inline mode and the display mode. The first one is used to write formulas that are part of a text. The second one is used to write expressions that are not part of a text or paragraph, and are therefore put on separate lines.
Let's see an example of the inline mode:
In physics, the mass-energy equivalence is stated by the equation $E=mc^2$, discovered in 1905 by Albert Einstein.
To put your equations in inline mode use one of these delimiters: \( \), $ $ or \begin{math} \end{math}. They all work and the choice is a matter of taste.
The display mode has two versions: numbered and unnumbered.
The mass-energy equivalence is described by the famous equation \[E=mc^2\] discovered in 1905 by Albert Einstein. In natural units ($c$ = 1), the formula expresses the identity \begin{equation} E=m \end{equation}
To print your equations in display mode use one of these delimiters: \[ \], \begin{displaymath} \end{displaymath} or \begin{equation} \end{equation}.
Important note: the equation* environment is provided by an external package; consult the amsmath article.
Below is a table with some common maths symbols. For a more complete list see the List of Greek letters and math symbols:
Greek letters: \alpha \beta \gamma \rho \sigma \delta \epsilon, rendered as $$ \alpha \ \beta \ \gamma \ \rho \ \sigma \ \delta \ \epsilon $$

Binary operators: \times \otimes \oplus \cup \cap

Relation operators: < > \subset \supset \subseteq \supseteq

Others: \int \oint \sum \prod
The mathematics mode in LaTeX is very flexible and powerful, there is much more that can be done with it: |
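Putting the pieces together, a minimal document using the delimiters described above might look like this (a sketch; any standard LaTeX distribution should compile it):

```latex
\documentclass{article}
\usepackage{amsmath} % provides equation* and friends
\begin{document}

Inline mode: the identity $e^{i\pi} + 1 = 0$ fits within a sentence.

Unnumbered display mode:
\[ \int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2} \]

Numbered display mode:
\begin{equation}
  a^2 + b^2 = c^2
\end{equation}

\end{document}
```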
Steven from City of Sunderland College sent us a wonderful, complete solution to this problem, which we recommend reading in full to any budding problem solver. The main points are as follows:
The gravitational potential energy of a cannon ball of mass $m$ at a distance $r$ from the centre of a planet of mass $M$ is
$$
V = -\frac{GMm}{r}\;.
$$
The kinetic energy of a cannon ball launched at speed $v$ is
$$
KE = \frac{1}{2}mv^2\;.
$$
Suppose that a cannon ball just escapes the pull of a planet and makes it to infinity. At this point, both its potential and kinetic energies will be zero. Thus, the initial kinetic and potential energies must sum to zero. So, if launched from a planet of radius $R$ we must have
$$
\frac{GMm}{R} = \frac{1}{2}mv^2\;.
$$
This gives the escape velocity $v$ as
$$
v =\sqrt{\frac{2GM}{R}}\;.
$$
Putting in the numbers for Earth gives
$$
v=\sqrt{\frac{2\times 6.674\times 10^{-11}\times 5.9763\times 10^{24}}{6.378\times 10^6}}=11.2\textrm{ km s}^{-1}\;.
$$
For the moon, Jupiter and the sun, the escape velocity scales relative to Earth's by the relative change in the factor $\sqrt{\frac{M}{R}}$. For the moon, Jupiter and the sun these ratios are $0.2122$, $5.32$ and $55.26$, giving rise to escape velocities of
Moon: $2.37\textrm{ km s}^{-1}$,
Jupiter: $59.5\textrm{ km s}^{-1}$,
Sun: $619\textrm{ km s}^{-1}$. |
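These numbers can be reproduced with a few lines of code. The sketch below is my own; the Moon, Jupiter and Sun masses and radii are standard reference values I have filled in, not figures from the solution.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
# (mass in kg, radius in m); only the Earth values appear in the solution above
bodies = {
    "Earth":   (5.9763e24, 6.378e6),
    "Moon":    (7.342e22,  1.7374e6),
    "Jupiter": (1.898e27,  7.1492e7),
    "Sun":     (1.989e30,  6.957e8),
}
for name, (M, R) in bodies.items():
    v = math.sqrt(2 * G * M / R)   # escape velocity v = sqrt(2GM/R)
    print(f"{name}: {v / 1e3:.1f} km/s")
```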
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Equilibrium: Acids and Bases and their Ionisations

Strong acids (100% dissociation): HClO$_4$, H$_2$SO$_4$, HNO$_3$, HI, HBr, HCl
$$HCl + H_{2}O \rightarrow H_{3}O^{+} + Cl^{-}$$
Strong bases: NaOH, KOH, RbOH, CsOH, Ba(OH)$_2$
Weak acid: one which undergoes partial dissociation
Hydrogen ion concentration:
$$K_{a} = \frac{C\alpha \cdot C\alpha}{C - C\alpha} = \frac{C\alpha^{2}}{1 - \alpha}$$
For $\alpha \ll 1$, $1 - \alpha \approx 1$, so $K_a = C\alpha^{2}$, giving
$$\alpha = \sqrt{\frac{K_{a}}{C}}, \qquad [H^{+}] = C\alpha = C\sqrt{\frac{K_{a}}{C}} = \sqrt{K_{a}C}$$

Arrhenius acid-base theory:
Strong acid: produces more H$^+$ (HClO$_4$, H$_2$SO$_4$, HCl)
Weak acid: produces less H$^+$ (CH$_3$COOH, HCN, H$_2$S)
Strong base: produces more OH$^-$ (NaOH, KOH)

Bronsted-Lowry acid-base theory:
Acid: proton donor (HCl, H$_2$SO$_4$, CH$_3$COOH, ...)
Base: proton acceptor (NaOH, KOH, NH$_3$, ...)
Neither proton acceptor nor proton donor: aprotic (C$_6$H$_6$, CCl$_4$, ...)

Conjugate acid-base pair: a Bronsted-Lowry acid-base pair that differs by only one proton.
Acid $-$ H$^+$ $\to$ conjugate base; Base $+$ H$^+$ $\to$ conjugate acid.
Ionic product of water ($K_W$):
$$H_{2}O + H_{2}O \rightleftharpoons H_{3}O^{+} + OH^{-}$$
$$K = \frac{[H_{3}O^{+}][OH^{-}]}{[H_{2}O]^{2}}$$
$$K[H_{2}O]^{2} = [H_{3}O^{+}][OH^{-}]$$
$$K_{W} = [H_{3}O^{+}][OH^{-}]$$
At 25°C, $K_W = 1.0 \times 10^{-14}\ \mathrm{mol^{2}\,L^{-2}}$, so $[H^{+}][OH^{-}] = 10^{-14}$ and $[H^{+}] = 1.0 \times 10^{-7}$ mol/L.
Degree of dissociation of water:
$$\alpha = \frac{10^{-7}}{55.5} = \frac{10^{-7}}{\left(\frac{1000}{18}\right)} = 1.8 \times 10^{-9}$$
so the % dissociation is $1.8 \times 10^{-7}$.
1. Degree of ionisation :
$$\alpha = \frac{\text{number of molecules ionised or dissociated}}{\text{total number of molecules taken}}$$
For strong electrolytes, $\alpha = 1$; for weak electrolytes, $\alpha < 1$.
2. Ostwald's Dilution law :
$$K = \frac{C\alpha^{2}}{1 - \alpha}$$
If $\alpha$ is very small, $1 - \alpha \approx 1$, so $K = C\alpha^{2}$, i.e. $\alpha = \sqrt{\frac{K}{C}} \Rightarrow \alpha \propto \frac{1}{\sqrt{C}}$. Here, $K$ is the dissociation constant and $C$ is the molar concentration of the solution.
3. Dissociation constant of acid, K_{a} = \frac{\left[H^{+}\right]\left[A^{-}\right]}{\left[HA\right]} =\frac{C\alpha^{2}}{\left(1 - \alpha\right)}
4. Dissociation constant of the base K_{b} = \frac{\left[B^{+}\right]\left[OH^{-}\right]}{\left[BOH\right]} =\frac{C\alpha^{2}}{\left(1 - \alpha\right)} |
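As a worked example of the $[H^+] = \sqrt{K_a C}$ approximation (my addition; the acetic-acid $K_a$ below is an assumed textbook value, not taken from these notes):

```python
import math

# Assumed values: acetic acid, Ka = 1.8e-5 at 25 C, concentration C = 0.1 M
Ka, C = 1.8e-5, 0.1
alpha = math.sqrt(Ka / C)   # Ostwald dilution law, valid since alpha << 1
H = math.sqrt(Ka * C)       # [H+] = C * alpha
pH = -math.log10(H)
print(f"alpha = {alpha:.4f}, [H+] = {H:.2e} M, pH = {pH:.2f}")
```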
The simple linear regression model looks like $y = X\beta + \epsilon$ and among the assumptions we have independent white noise $\epsilon_i \sim N(0, \sigma^2)$. Fitting by least squares gives the well-known normal equations in the form:
$\hat{\beta} = (X^TX)^{-1}X^Ty$
Optimizing the least-squares criterion does not require any statistical reasoning. However, if you ask questions about the statistical properties of your estimator $\hat{\beta}$, you start to need those assumptions. For example, if we ask whether $\hat{\beta}$ is unbiased, we might go with:
$E[\hat{\beta}] = E[(X^TX)^{-1}X^Ty] = E[(X^TX)^{-1}X^T(X\beta+\epsilon)] = \beta + E[(X^TX)^{-1}X^T\epsilon]$
If we consider $X$ as a fixed constant, we have $E[\hat{\beta}] = \beta + (X^TX)^{-1}X^T E[\epsilon] = \beta$, which holds provided the noise has zero mean, $E[\epsilon] = 0$.
If we go for the variance of $\hat{\beta}$, we see that $\hat{\beta}$ has a fixed component $\beta$ and a random component $(X^TX)^{-1}X^T\epsilon$. So to find the variance of our estimator we look at
$Var(\hat{\beta}) = Var[\beta + (X^TX)^{-1}X^T\epsilon]$
Here the $\beta$ inside the variance has no effect, so we can remove it; see the basic properties of variance on Wikipedia. We also use the assumption that $X$ is given, so it behaves like a constant matrix that factors out of the variance on both sides. We then have
$Var(\hat{\beta}) = (X^TX)^{-1}X^T \, Var(\epsilon) \, X (X^TX)^{-1}$
Now the answer to your question is: if we additionally assume that the variance is the same for all noise terms, i.e. $Var(\epsilon) = \sigma^2 I$, then the expression collapses to

$Var(\hat{\beta}) = \sigma^2 (X^TX)^{-1}$
And by estimating $\sigma$ you get all the good stuff about the coefficients: standard errors, t-values, p-values, confidence intervals, hypothesis tests and so on. If the errors are heteroskedastic, you have to account for that somehow, and things get more complicated.
Even if $X$ is not assumed fixed but is considered a random variable, a similar independence assumption such as $E[X^T\epsilon] = 0$, together with homoskedasticity, is enough to obtain the same results.
Hope that helps. |
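To make this concrete, here is a small simulation sketch (my own, with made-up data) checking that the normal equations recover $\beta$ and that the standard errors come from $\hat\sigma^2 (X^TX)^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one regressor
beta = np.array([2.0, -3.0])
y = X @ beta + rng.normal(scale=0.5, size=n)           # sigma = 0.5

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y                  # normal equations
resid = y - X @ beta_hat
s2 = resid @ resid / (n - X.shape[1])         # unbiased estimate of sigma^2
se = np.sqrt(np.diag(s2 * XtX_inv))           # Var(beta_hat) = sigma^2 (X'X)^{-1}
print(beta_hat, se)
```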
Working paper Open Access
Maurice H.P.M. van Putten
Surveys of the Local Universe show a Hubble expansion significantly greater than $\Lambda$CDM estimates from the Cosmic Microwave Background by Planck. This may present a challenge to our application of general relativity to cosmology. Such tensions are expected in quantum cosmology, wherein de Sitter space is unstable by dark energy from the cosmological horizon ${\cal H}$. We report on a novel probe of late time cosmology in terms of matter density $\omega_m$ estimated over inner intervals $[0,z_{\max}]$. Independent of Planck, Pantheon data over $0<z_{\max}\le2.26$ are found to exclude $\Lambda$CDM by $4.3\sigma$. In quantum cosmology, this reduces to $2.5\sigma$.
As Sasho suggested, I am putting my comment as an answer.
The separations between the monotone versions of $\mathsf{NC}^1$ and $\mathsf{P}$ are long known (Karchmer-Wigderson, Grigni-Sipser, etc.), but in the non-monotone world almost nothing was known. Fortunately, Ben Rossman has recently found the first separation of formulas vs. circuits in the bounded-depth setting.
Let $\mathrm{Circuit}(S,d)$ (resp., $\mathrm{Formula}(S,d)$) denote the set of all boolean functions computable by unbounded-fanin circuits (resp. formulas) of depth $\leq d$ and size $\leq S$. It is clear that$$\mathrm{Circuit}(S,d) \subseteq \mathrm{Formula}(S^d,d).$$In particular,$$\mathrm{Circuit}(n^{O(1)},d) \subseteq \mathrm{Formula}(n^{O(d)},d).$$What Ben has shown is that, if $d=d(n)\leq \log\log\log n$, then$$\mathrm{Circuit}(n^{O(1)},d) \not\subseteq \mathrm{Formula}(n^{o(d)},d).$$Even more important is that he shows this separation for an explicit and basic function $\mathrm{STCONN}(n,k)$: given an $n$-vertex graph, decide whether it has an $s$-$t$ path of length $\leq k$. This function is in $\mathrm{Circuit}(n^{O(1)},\log k)$. His main result is: if $dk^3\leq \log n/\log\log n$ then$$\mathrm{STCONN}(n,k)\in \mathrm{Formula}(S,d)\ \Longrightarrow\ S\geq n^{\Omega(\log k)}.$$This implies a tight depth lower bound: if $k\leq \log\log n$ then$$\mathrm{STCONN}(n,k)\in \mathrm{Circuit}(n^{O(1)},d)\ \Longrightarrow\ d=\Theta(\log k).$$The existing techniques for small-depth circuits -- namely switching lemmas and approximation by low-degree polynomials -- do not distinguish between formulas and circuits due to their bottom-up nature. Top-down arguments, such as Karchmer-Wigderson games, are difficult to realize in the non-monotone case. What Ben uses is a combination of these arguments.
We prove the equivalences $(1) \Leftrightarrow (2)$ and $(2) \Leftrightarrow (3)$.
$(1) \implies (2)$
Suppose that $R$ is a field. Let $I$ be an ideal of $R$. If $I=(0)$, then there is nothing to prove. So assume that $I\neq (0)$.

Then there is a nonzero element $x$ in $I$. Since $R$ is a field, we have $x^{-1}\in R$.
Since $I$ is an ideal, we have\[1=x^{-1}\cdot x\in I.\]This yields that $I=R$.
$(2) \implies (1)$
Suppose now that the only ideals of $R$ are $(0)$ and $R$. Let $x$ be a nonzero element of $R$. We show the existence of the inverse of $x$. Consider the ideal $(x)=xR$ generated by $x$.

Since $x$ is nonzero, the ideal $(x)\neq (0)$, and thus we have $(x)=R$ by assumption. Thus, there exists $y\in R$ such that\[xy=1.\]

So $y$ is the inverse element of $x$. Hence $R$ is a field.
$(2)\implies (3)$
Suppose that the only ideals of $R$ are $(0)$ and $R$. Let $S$ be any ring with $1$ and $f:R\to S$ be any ring homomorphism. Consider the kernel $\ker(f)$. The kernel $\ker(f)$ is an ideal of $R$, and thus $\ker(f)$ is either $(0)$ or $R$ by assumption.

If $\ker(f)=R$, then the homomorphism $f$ sends $1\in R$ to $0\in S$, which is a contradiction since any ring homomorphism between rings with $1$ sends $1$ to $1$. Thus, we must have $\ker(f)=(0)$, and this yields that the homomorphism $f$ is injective.
$(3) \implies (2)$
Suppose that statement 3 is true. That is, any ring homomorphism $f:R\to S$, where $S$ is any ring with $1$, is injective. Let $I$ be a proper ideal of $R$: an ideal $I\neq R$. Then the quotient $R/I$ is a ring with $1$ and the natural projection\[f:R\to R/I\]is a ring homomorphism.

By assumption, the ring homomorphism $f$ is injective, and hence we have\[(0)=\ker(f)=I.\]This proves that the only ideals of $R$ are $(0)$ and $R$.
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV
(American Physical Society, 2017-06)
The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...
Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...
Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC
(Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC
(Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ... |
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity, which we can get by taking the derivative: $v(t) = 3t^2-12t+9$. But I don't know what to do after that; how can I find the intervals?
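One way to finish (a sketch of the standard approach, not the asker's own work): the particle moves left where $v(t) < 0$, and since the quadratic opens upward, that happens exactly between its roots.

```python
import math

# v(t) = 3t^2 - 12t + 9 = 3(t - 1)(t - 3); find its roots via the quadratic formula.
a, b, c = 3, -12, 9
disc = math.sqrt(b * b - 4 * a * c)
r1, r2 = (-b - disc) / (2 * a), (-b + disc) / (2 * a)
print(r1, r2)  # 1.0 3.0  -> moving left for 1 < t < 3
```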
Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$.I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ...
So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$.
Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$.
Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow.
Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$.
Well, we do know what the eigenvalues are...
The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.
Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker
"a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...
Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.
@TobiasKildetoft Sorry, I meant that they should be equal (accidently sent this before writing my answer. Writing it now)
Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism
@AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$.
Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again
$O(n)$ acts transitively on $S^{n-1}$ with stabilizer at a point $O(n-1)$.
For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism |
Because complex exponentials $e^{\jmath \omega t}$, which are results of the Fourier transform, are the eigenfunctions of linear, time-invariant (LTI) systems. See eigenfunction of LTI. Also see this answer on SP.SE. Thus, the Fourier transform is useful for analyzing linear (not suitable for non-linear) time-invariant (can be interpreted as stationary) systems....
Yes the MATLAB code is correct. Be careful though, the bandwidth of the signal squared is twice that of the signal itself, which may lead to aliasing if the sampling frequency is too low compared to the signal bandwidth. This can be remedied by properly resampling the signal to a higher sampling frequency with a lowpass filter, then squaring, low-pass ...
well i'm assuming you mean "conventional" DACs and not $\Sigma \Delta$ DACs. in a conventional DAC (like an R-2R ladder or something), there are the micro errors that occur between neighboring DAC codes, e.g. non-monotonicity. i think the DSP solution to that is adding a teeny amount of dither noise to the value that is output to the DAC. there is a more ...
I think you mean "images", not "aliases". They become aliases if there is foldover from resampling.It's because you are not adding two signals, $x(t)$ and $\operatorname{III}(t)$, you are multiplying them that these images appear.$$\begin{align}x_\text{s}(t) & \triangleq x(t) \cdot \operatorname{III}(t/T) \\&= x(t) \cdot \sum\limits_{n=-\...
I guess that would be a slew rate limiter. This concept is mostly used in amplifier design as a practical constraint of the circuit. I haven't seen it applied as a digital filter. It is certainly very non-linear and it would make a poor "smoothing filter" as it's highly dependent on the absolute amplitude. Could you shed some light on the specific ...
If the system is nonlinear then if $y_1(t)$ is the response to the signal $x_1(t)$, and $y_2(t)$ is the output given input signal $x_2(t)$ then the response to the signal$$x(t)=a_1x_1(t)+a_2x_2(t)\tag{1}$$with arbitrary constants $a_1$ and $a_2$ will generally not be equal to$$y(t)=a_1y_1(t)+a_2y_2(t)\tag{2}$$However, for the given system an input ...
A flavor of dynamic convolution (is that a trademark by the way?) has a different impulse response $g_i$ associated with each range of instantaneous input. A number of ranges can be defined by fuzzy membership functions $f_i(x)$ (Fig. 1).Figure 1. Amplitude ranges that each use a different convolution kernel.Omitting time indices, the input $x$ and ...
The system must be time invariant and smooth in the functional derivative sense. That doesn't guarantee that the Volterra series converges (like with Taylor series, there are pathological counter examples), but almost all systems that have these properties have a convergent Volterra series.The problem in practice is that the required expansion order for ...
Since the question has been raised as to whether the hint that I had given to the OP in a comment on the original question was appropriate for a newcomer to signal processing, here goes.Stripped of extraneous baggage and notation, the question is whether it is possible to determine the value of $E[X^2Y^2]$ straightforwardly where $X$ and $Y$ are zero-...
As Matt L. says you'll need to check for homogeneity and, possibly, additivity.

Homogeneity

That test says that if $$y[n] = f(x[n])$$ then $$A \cdot y[n] = f(A \cdot x[n])$$ for all scalar $A$.

Additivity

This test says that if $$y_1[n] = f(x_1[n])$$ and $$y_2[n] = f(x_2[n])$$ then $$y_{\rm tot}[n] = f(x_1[n] + x_2[n]) = y_1[n] + y_2[n]$$ You ...
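As a quick numerical illustration of these two tests (a sketch, not a proof; passing on a handful of random signals is only evidence of linearity, while a single failure is a disproof), one can probe a candidate system with random inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = rng.normal(size=100)
A = 3.7  # arbitrary scalar for the homogeneity test

def looks_linear(f, tol=1e-9):
    """Check homogeneity and additivity of f on the random test signals."""
    homogeneous = np.allclose(f(A * x1), A * f(x1), atol=tol)
    additive = np.allclose(f(x1 + x2), f(x1) + f(x2), atol=tol)
    return homogeneous and additive

print(looks_linear(lambda x: 2 * x))   # True:  a pure gain passes both tests
print(looks_linear(lambda x: x ** 2))  # False: a squarer fails both
print(looks_linear(lambda x: x + 1))   # False: a constant offset breaks homogeneity
```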
Using those four basic elements will allow you to implement linear systems, which can change the magnitude and phase of the input signal, but which will not add the harmonics that are expected from a distortion effect. In order to create distortion in that sense (i.e., non-linear distortion) you will need some non-linear element. The most basic ...
For this problem you can't use the formula involving $|H(f)|^2$ because it only applies to linear time-invariant (LTI) systems, and a squarer is obviously a non-linear system.The only way to solve this problem that I can think of is to use the formula$$E\{x^2y^2\}=E\{x^2\}E\{y^2\}+2E^2\{xy\}\tag{1}$$which is valid for jointly Gaussian and zero mean ...
For $-1 \le x \le 1$, let's compare Chebyshev polynomials of the first kind, $T_n(x)$, and the basis functions of the Fourier cosine series, $F_n(x)$:

$F_n(x)=\cos(n \pi x)$

$T_n(x)=\cos(n\ \text{acos}\ x)$

Writing $T_n(x_T) = F_n(x_F)$ and solving for $x_F$ gives $x_F = (\text{acos}\ x_T) / \pi$, revealing that the Chebyshev polynomial series is ...
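The identity $T_n(x)=\cos(n\ \text{acos}\ x)$ and the argument warping are easy to verify numerically. A small sketch using NumPy's Chebyshev evaluator (none of the values below are specific to the question):

```python
import numpy as np
from numpy.polynomial import chebyshev

x = np.linspace(-1, 1, 501)
for n in range(8):
    Tn = chebyshev.chebval(x, [0] * n + [1])          # T_n via the recurrence
    assert np.allclose(Tn, np.cos(n * np.arccos(x)))  # T_n(x) = cos(n acos x)

# Warping the argument maps Chebyshev onto the cosine basis:
# with x_F = acos(x_T)/pi we get T_n(x_T) = cos(n*pi*x_F) = F_n(x_F)
xF = np.arccos(x) / np.pi
assert np.allclose(chebyshev.chebval(x, [0] * 5 + [1]), np.cos(5 * np.pi * xF))
print("ok")
```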
Another typical approach, that independently of my other answer works, is predistortion, for example with the look-up table mentioned by robert, or with a correction polynomial.If you can really pinpoint your nonlinearities to a simple digital-in/analog out curve, you can just find the inverse of that curve, and put it in a correcting mapping, and apply ...
After going through the literature regarding HHT and EMD, I found that the "Huang" part of HHT comes from the fact that he is the one who proposed EMD in the first place. That explains the name of the method...For more developments regarding EMD and HHT, I recommend the papers by Rilling et al. "On empirical mode decomposition and its algorithms". For the ...
A transform being linear has very little to do with its ability to analyze linear or nonlinear systems.The wavelet transform $W[s(t)]$ of a signal $s(t)$ is linear because $$W[a s_1(t) + b s_2(t)]=a W[s_1(t)]+b W[s_2(t)]$$ for real or complex $a$ and $b$.The signal you're analyzing is just a signal, it has no concept of linearity. However, if you try to ...
Hints:Can an LTI system generate components in some frequency $\omega_0$ if the input signal $x(n)$ was such that $X(e^{j\omega_0})=0$?Does aliasing do such thing?The answers to these questions are straightforward and, combined, they answer the original question.
Is this a well-known phenomenon?Yes, of course. You will see harmonics as soon as your clip point is lower than the maximum amplitude in the time domain. The latter is a function of the relative phases between the harmonic components. In your case the max amplitude is indeed 2.5 (plus whatever the noise adds).If you change the phases you will get a ...
I faced the same problem in the past. Perhaps there is a way without adding a delay but I haven't found it.You need to realize that your 3 first solutions (delay after vq, delay at the delta_freq and delay after the frequency) will yield the same result as omega_g is a constant and because your PI controller has fixed coefficients.Anyway, place the ...
As I said in the comments, just follow the 1D case from Wikipedia and augment it with the extra $y$ and $z$ dimensions (and velocities):$$\mathbf{x}_k = \left [\begin{array}{c}x\\ \dot{x}\\ y\\ \dot{y}\\ z\\ \dot{z}\end{array}\right]$$You will also need to augment $\mathbf{F}$ and $\mathbf{G}$:$$\mathbf{F} = \left[ \begin{array}{cccccc}1 & \...
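A compact way to build the augmented matrices is a Kronecker product of three 1D constant-velocity blocks (a sketch; the value of `dt` and the constant-velocity model itself are assumptions matching the 1D Wikipedia example):

```python
import numpy as np

dt = 0.1  # sample period (assumed)

# 1D constant-velocity blocks
F1 = np.array([[1.0, dt],
               [0.0, 1.0]])
G1 = np.array([[0.5 * dt ** 2],
               [dt]])

# Stack three independent copies for the state [x, vx, y, vy, z, vz]
F = np.kron(np.eye(3), F1)   # 6x6 block-diagonal transition matrix
G = np.kron(np.eye(3), G1)   # 6x3 process-noise input matrix

print(F.shape, G.shape)  # (6, 6) (6, 3)
```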
So, the intuitive reaction to this situation is oversampling. Basically, if you use twice the sampling rate, you can always average two samples to get one "output sample value" (thanks, Nyquist!). That would give you one additional bit per oversampling factor of two, or $$\Delta b = \log_2\frac{f_\text{sample}}{f_\text{target}}$$ Let's introduce ...
Check out this paper. I would have made a comment but not high enough rep.http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1211087&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F81%2F27258%2F01211087Looks like you need multiple in to get subharmonics in Volterra seriesThe abstract states "Subharmonic generation is a complex nonlinear ...
If your nonlinearity can be expressed as a polynomial (i.e., in terms of addition and multiplication), you can make use of:The linearity of the Fourier transform, i.e., if $f$ and $g$ are (benign) functions, $a$ and $b$ are numbers and $ℱ$ denotes the Fourier transform, then:$$ℱ(a·f+b·g) = a·ℱ(f) + b·ℱ(g)$$The convolution theorem, which states that ...
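For the simplest polynomial nonlinearity, squaring, the convolution theorem can be checked directly with the DFT. A sketch (for the DFT, multiplication in time corresponds to circular convolution in frequency, scaled by $1/N$):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=64)
N = len(x)

X = np.fft.fft(x)

# Circular self-convolution of the spectrum: (X * X)[k] = sum_m X[m] X[(k-m) mod N]
XX = np.array([np.sum(X * np.roll(X[::-1], k + 1)) for k in range(N)])

# Spectrum of x**2 equals that self-convolution divided by N
assert np.allclose(np.fft.fft(x ** 2), XX / N)
print("ok")
```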
Well, any input-output representation obviously admits a state-space form. For your equation in $y[k]$ you can easily construct one as follows. Create a "shift" system (an integrator chain) as$$\begin{aligned}x_1[k+1] &= x_2[k],\\x_2[k+1] &= x_3[k],\\&\vdots\\x_n[k+1] &= y[k]\end{aligned}$$In this way indeed you have $x_n[k] = y[k-1]$...
As already suggested by Robert and Olli, a system that maps $x(t)=k\cos(2\pi f_0t)$ to $y(t)=k\cos(4\pi f_0 t)$ can be formalized as$$y(t)=|x(t)|_{max}\left(2\left(\frac{x(t)}{|x(t)|_{max}}\right)^2-1\right)\tag{1}$$which is a time-invariant non-linear system.However, I doubt that this system works well (i.e., sounds good as a distortion effect) when ...
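Here is a numerical check of (1) on a pure tone (a sketch; the peak level $|x(t)|_{max}$ is simply assumed known here, whereas a real effect would have to track it, e.g. with an envelope follower):

```python
import numpy as np

fs, f0, k = 48000, 440.0, 0.8
t = np.arange(1024) / fs
x = k * np.cos(2 * np.pi * f0 * t)

xmax = k  # assumed known peak level
y = xmax * (2 * (x / xmax) ** 2 - 1)

# 2*cos(a)^2 - 1 = cos(2a): the output is a tone at exactly twice the frequency
assert np.allclose(y, k * np.cos(2 * np.pi * (2 * f0) * t))
print("ok")
```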
In general there is no systematic way and you simply have to analyze the given system. In the case of the system in your question, it's easy to see that it can't be invertible, because the output is just a constant, namely the integral over the input function (assuming this integral exists). There are infinitely many functions which will result in the same ...
I apologize, the following is a bit rough, but I've put together an example in MATLAB illustrating all the different elements I think you need. That is: a first-order model for an amplifier (a common approximation for most op. amps.), how to discretize it to find an IIR filter, one way to implement a realization of the IIR filter (straightforward state-space ...
After skimming through the paper, I can see more clearly now. The measure $D_3$ quantifies the relative strength of the 3rd order intermodulation product. If two sinusoidal signals with frequencies $f_1$ and $f_2$ are input to a non-linear device (such as a microwave amplifier, as referred to in the paper), there will be intermodulation products at the ...
Looking at an unknown system relates to finding relations between inputs and outputs. A first question is: are there specific inputs that are "almost" unchanged by the system? Those are sometimes called "root" signals. The effect of the system on some other signal is often simpler to analyze by rewriting or approximating them by a combination of several root ...
In step 1 you insert a zero-valued sample between each pair of successive original samples. This is a gain 0.5 operation. For example consider 0 Hz. You can calculate the signed 0 Hz amplitude or direct current (DC) offset simply as the mean of the sample values. Inserting the zero samples halves the mean, so the gain is 0.5, which you need to compensate for ... |
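The 0.5 gain at DC is easy to see with a toy sketch:

```python
import numpy as np

x = np.array([1.0, 3.0, 2.0, 4.0])

up = np.zeros(2 * len(x))
up[::2] = x          # step 1: insert a zero after every original sample

print(x.mean())      # 2.5  -- original DC level
print(up.mean())     # 1.25 -- halved, hence the gain-of-2 compensation
```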
The dependence of a response variable on two factors, A and B, say, is of interest.
Factor A has a levels and Factor B has b levels. In total there are a x b treatments. Sample sizes for all treatment groups are equal (balanced design). Goal: to study the simultaneous effects of the two factors on the response variable, including their respective main effects and their interaction effects. 1.1 Example: drugs for hypertension
A medical investigator studied the relationship between the response to three blood pressure lowering drugs for hypertensive males and females. The investigator selected 30 males and 30 females and randomly assigned 10 males and 10 females to each of the three drugs.
This is a balanced randomized complete block design (RCBD) Two factors: A - gender (observational factor) B - drug (experimental factor) Factor A has a = 2 levels: male vs. female Factor B has b = 3 levels: drug 1, drug 2, and drug 3 In total there are a x b = 2 x 3 = 6 treatments For each treatment, the sample size is n = 10
Treatment   Description      Sample Size
1           drug 1, male     10
2           drug 1, female   10
3           drug 2, male     10
4           drug 2, female   10
5           drug 3, male     10
6           drug 3, female   10

1.2 Population means

Treatment means: \(\mu_{ij}\) = population mean response of the treatment with factor A at level i and factor B at level j

Factor level means: \(\mu_{i\cdot}\) = population mean response when the i-th level of factor A is applied; \(\mu_{\cdot j}\) = population mean response when the j-th level of factor B is applied

Overall mean \(\mu_{\cdot\cdot}\) (the baseline quantity in comparisons of factor effects):
                         FACTOR B
             j = 1            j = 2            . . . . .   j = b            Row Avg
     i = 1   \(\mu_{11}\)     \(\mu_{12}\)     . . . . .   \(\mu_{1b}\)     \(\mu_{1\cdot}\)
FACTOR A
     i = 2   \(\mu_{21}\)     \(\mu_{22}\)     . . . . .   \(\mu_{2b}\)     \(\mu_{2\cdot}\)
     . . .
     i = a   \(\mu_{a1}\)     \(\mu_{a2}\)     . . . . .   \(\mu_{ab}\)     \(\mu_{a\cdot}\)
Column Avg   \(\mu_{\cdot 1}\)   \(\mu_{\cdot 2}\)   . . . . .   \(\mu_{\cdot b}\)   \(\mu_{\cdot \cdot}\)

1.3 Main effects

Main effects are defined as the differences between factor level means and the overall mean:

Factor A main effects: \(\alpha_{i} = \mu_{i\cdot} - \mu_{\cdot\cdot}\), the main effect of factor A at the i-th factor level

Factor B main effects: \(\beta_{j} = \mu_{\cdot j} - \mu_{\cdot\cdot}\), the main effect of factor B at the j-th factor level

For both factor A and factor B, the sum of the main effects is zero

1.4 Interaction effects

Interaction effects describe how the effects of one factor depend on the levels of the other factor:

\((\alpha\beta)_{ij} = \mu_{ij} - (\mu_{\cdot\cdot} + \alpha_{i} + \beta_{j})\), the interaction effect of the i-th level of factor A and the j-th level of factor B
Note: for \(1 \leq i \leq a, 1\leq j \leq b\)

Interpretation of the interaction effects

If all \((\alpha\beta)_{ij} = 0\), \(i = 1, ..., a\), \(j = 1, ..., b\), then the factor effects are additive; equivalently, the factor effects are additive if and only if all the interaction effects are zero. If at least one of the \((\alpha\beta)_{ij}\)'s is nonzero, then the factor effects are interacting. This means that the effects of one factor differ across the levels of the other factor.

1.5 Additive factor effects

If the two factors are additive (i.e. no interaction), each factor can be studied separately, based on its factor level means \(\{\mu_{i\cdot}\}\) and \(\{\mu_{\cdot j}\}\), respectively. This is much simpler than the joint analysis based on the treatment means \(\{\mu_{ij}\}\).

Example 1
Factor A has a = 2 levels, Factor B has b = 3
Check additivity for all pairs of (i,j)
\[\alpha_{1} = \mu_{1\cdot} - \mu_{\cdot \cdot} = 12 - 12 = 0\]
\[\beta_{1} = \mu_{\cdot 1} - \mu_{\cdot \cdot} = 9 - 12 = -3\]
\[\mu_{11} = 9\]
\[\mu_{\cdot \cdot} + (\alpha_{1} + \beta_{1}) = 12 + (0 - 3) = 9 = \mu_{11}\]
Exercise: Complete the check for additivity
                  FACTOR B
            j = 1   j = 2   j = 3   \(\mu_{i\cdot}\)
FACTOR A
    i = 1   9       11      16      12
    i = 2   9       11      16      12
\(\mu_{\cdot j}\)   9       11      16      12 (= \(\mu_{\cdot \cdot}\))

1.6 Graphical method: interaction plots

Interaction plots constitute a graphical tool to check additivity. The X-axis is for the factor A (or B) levels, and the Y-axis is for the treatment means \(\mu_{ij}\); separate curves are drawn for each of the factor B (or A) levels.

Interpreting the interaction plots

If the curves are all horizontal, then the factor on the X-axis has no effect at all, i.e. the treatment means do not depend on the level of that factor. If the curves are overlapping, then the other factor (the one not on the X-axis) has no effect. If the curves are parallel, then the two factors are additive, i.e. the effects of factor A do not depend on (or interact with) the level of factor B, and vice versa.

Note: "horizontal" and "overlapping" are special cases of "parallel"
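The decomposition for Example 1 can be computed directly. A sketch in Python, using the treatment means tabulated above:

```python
import numpy as np

# Treatment means mu_ij from Example 1 (a = 2 rows, b = 3 columns)
mu = np.array([[9.0, 11.0, 16.0],
               [9.0, 11.0, 16.0]])

grand = mu.mean()                 # overall mean mu_.. = 12
alpha = mu.mean(axis=1) - grand   # factor A main effects
beta = mu.mean(axis=0) - grand    # factor B main effects
interaction = mu - (grand + alpha[:, None] + beta[None, :])

print(alpha)        # [0. 0.]        -- factor A has no effect at all
print(beta)         # [-3. -1.  4.]
print(interaction)  # all zeros      -- the two factors are additive
```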
For Example 1:
The two factors are additive. Moreover, factor A does not have any effect at all (main effects of factor A are all zero). Factor B does have some effects (not all main effects of factor B are zero).

Example 2

Factor A: a = 2 levels; Factor B: b = 3 levels
                  Factor B
            j = 1   j = 2   j = 3   \(\mu_{i\cdot}\)
Factor A
    i = 1   11      13      18      14
    i = 2   7       9       14      10
\(\mu_{\cdot j}\)   9       11      16      12 (= \(\mu_{\cdot \cdot}\))

The two factors are additive, since the curves are parallel. Both factors have some effects (the main effects of both factors are not all zero), since the curves are neither horizontal nor overlapping in either plot.

Note: Indeed, you only need to examine one of the two plots. If the curves in one plot are parallel, the curves in the other plot must also be parallel.

Summary: Additive Model

For all pairs of (i, j): \(\mu_{ij} = \mu_{\cdot \cdot} + \alpha_{i} + \beta_{j}\)

The curves in an interaction plot are parallel

The difference between treatment means for any two levels of factor B (respectively, A) is the same for all levels of factor A (respectively, B):
\[\mu_{1j} - \mu_{1j'} = ... = \mu_{aj} - \mu_{aj'}, 1 \leq j, j' \leq b\]
1.7 Interacting factor effects

Interpretation of \((\alpha\beta)_{ij}\): the difference between the treatment mean \(\mu_{ij}\) and the value that would be expected if the two factors were additive.

Factor A and factor B are interacting if some \((\alpha\beta)_{ij} \neq 0\), i.e. \(\mu_{ij} \neq \mu_{\cdot \cdot} + \alpha_{i} + \beta_{j}\) for some (i, j). Equivalently, the curves are not parallel in an interaction plot.

Example 3
Factor A: a = 2 levels; Factor B: b = 3 levels
                  Factor B
            j = 1   j = 2   j = 3   \(\mu_{i\cdot}\)
Factor A
    i = 1   9       12      18      13
    i = 2   9       10      14      11
\(\mu_{\cdot j}\)   9       11      16      12 (= \(\mu_{\cdot \cdot}\))

The two curves in the interaction plot are not parallel, which means the two factors are interacting. For example:
\[\alpha_{1} = \mu_{1\cdot} - \mu_{\cdot \cdot} = 13 - 12 = 1 \quad \text{and} \quad \beta_{1} = \mu_{\cdot 1} - \mu_{\cdot \cdot} = 9 - 12 = -3\]
Thus \[9 = \mu_{11} \neq \mu_{\cdot \cdot} + \alpha_{1} + \beta_{1} = 12 + 1 - 3 = 10,\] or \[(\alpha\beta)_{11} = 9 - 10 = -1 \neq 0.\] Also, \[\mu_{11} - \mu_{12} = 9 - 12 = -3 \neq \mu_{21} - \mu_{22} = 9 - 10 = -1.\] There is a larger difference among treatment means between the two levels of factor A when factor B is at the 3rd level (j = 3) than when B is at the first two levels (j = 1, 2)

Summary: Interactions
Suppose we put Factor B on the X-axis of the interaction plot
The differences in heights of the curves reflect Factor A effects; if all curves are overlapping, then Factor A has no effect. The departure of the curves from horizontal reflects Factor B effects; if all curves are horizontal, then Factor B has no effect. The lack of parallelism among the curves reflects interaction effects. If either factor has no effect at all, the model is automatically additive. Important: no main effects does not necessarily mean no effects or no interaction effects.

Example 4 (refer to figure 4)

Figure (a): additive. Figures (b) and (c): Factor B has no main effects, but Factor A and Factor B are interacting. Figure (d): when Factor A is at level 1, treatment means increase with Factor B levels; when Factor A is at level 2, the trend becomes decreasing. Figure (e): larger difference among treatment means between the two levels of Factor A when Factor B is at a smaller indexed level. Figure (f): more dramatic change of treatment means among Factor B levels when Factor A is at level 1
Find $$\lim_{n\to\infty} \sum_{k=1}^{n}\left( \frac{k}{n}\right)^{n}$$
I can't compare it with similar series and I can't change it to Riemann's sum.
For each $k\ge0$, $[n\gt k]\left(1-\frac kn\right)^n$ is non-decreasing in $n$, where $[\dots]$ are Iverson brackets. Therefore, by monotone convergence $$ \begin{align} \lim_{n\to\infty}\sum_{k=1}^n\left(\frac kn\right)^n &=\lim_{n\to\infty}\sum_{k=0}^{n-1}\left(1-\frac kn\right)^n\\ &=\sum_{k=0}^\infty e^{-k}\\[6pt] &=\frac{e}{e-1} \end{align} $$ |
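A quick numerical sanity check of the closed form (a sketch; convergence is slow but visible):

```python
import math

def partial_sum(n):
    """Compute sum_{k=1}^{n} (k/n)^n."""
    return sum((k / n) ** n for k in range(1, n + 1))

limit = math.e / (math.e - 1)   # = 1.5819767...
for n in (10, 100, 10_000):
    print(n, partial_sum(n))
print(limit)
```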
I'm stuck on the following question and would greatly appreciate any help:
A lossless transmission line is formed by a wire of radius a placed in vacuum a distance d above an infinite conducting plane with $d>>a$. Calculate the capacitance and Inductance per unit length.
I'm able to calculate the capacitance per unit length. I let the wire at z = d carry a charge +Q uniformly distributed across its length l (I know this is in general not true but for $d>>a$ the uniform distribution should be a good approximation). As the plate is conducting I then calculate the electric field above the plane by placing an image wire carrying charge -Q at z = -d on the other side of the plane. This allowed me (after fiddling around with the signs to get the correct answer) to integrate the electric field from $0$ to $d-a$ to get the following potential between the plane and the wire:
$$V=\frac{Q}{2\pi\epsilon_0l}\ln\frac{2d}{a}$$
so therefore:
$$C' = \frac{2\pi\epsilon_0}{\ln\frac{2d}{a}}$$
However the Inductance per unit length is what I'm stuck on. At first I tried to calculate the magnetic field of the wire:
$$B = \frac{\mu_0I}{2\pi r}$$
Then I calculated the magnetic field of a large plane of width w carrying a current I:
$$B_{\text{sheet}}=\frac{\mu_0 I}{2w}$$
so because the conducting plane is described as infinite in the question I took $B_{sheet} = 0$ as $w\rightarrow \infty$. Then I calculated the flux between the wire and the plane:
$$\phi = l\int_{a}^{d}\frac{\mu_0I}{2\pi r}\,dr=\frac{\mu_0 I l}{2\pi}\ln\frac{d}{a}$$
However this is not the right answer as I seem to be missing a factor of 2 inside the $ln$.
How do I approach the calculation of the Inductance per unit length to get the correct answer? |
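One hint for locating the missing factor of 2: the image wire at $z=-d$ must also carry the return current $-I$, and its field adds to the flux between the wire and the plane, turning $\ln(d/a)$ into approximately $\ln(2d/a)$ for $d \gg a$. A consistency check (a sketch with assumed example dimensions) is that the expected result $L' = \frac{\mu_0}{2\pi}\ln\frac{2d}{a}$ combined with the capacitance above satisfies $L'C' = \mu_0\epsilon_0$, as it must for any lossless TEM line in vacuum:

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity [F/m]
mu0 = 4e-7 * math.pi      # vacuum permeability [H/m]

a, d = 1e-3, 5e-2         # assumed example geometry with d >> a

C = 2 * math.pi * eps0 / math.log(2 * d / a)   # capacitance per unit length (derived above)
L = mu0 / (2 * math.pi) * math.log(2 * d / a)  # inductance per unit length with the factor 2

# For a lossless TEM line in vacuum, L'C' = mu0*eps0: the phase velocity is c
v = 1 / math.sqrt(L * C)
print(v)  # ~ 2.998e8 m/s
```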
Timetable
Tuesday, 13 May 2014 :: day 1
Heavy Flavor I
Conveners: Daniel Kikola, Zhenyu Ye, Yifei Zhang
(Cytosin : 09:00 - 12:30)
Time Talk Presenter 09:00 QM rehearsal: Open charmed hadron production in p+p, Au+Au and U+U collisions at STAR ( 01:00 ) 1 file Zhenyu Ye (University of Illinois at Chicago) 10:00 QM rehearsal: Recent STAR measurements of J/psi production from Beam Energy Scan and U+U collisions ( 01:00 ) 1 file Wangmei Zha (USTC) 11:00
Coffee break ( 00:20 )
11:20 Upsilon production in U+U collisions at 193 GeV ( 00:40 ) 0 files Robert Vertesi 12:00 J/psi in min-bias U+U collisions at 193 GeV ( 00:40 ) 0 files Ota Kukral
Light Flavor Spectra Parallel Session - I
Conveners: Lokesh Kumar, Xianglei Zhu
(Guanin : 09:00 - 18:00)
Time Talk Presenter 09:00 Beam Energy Dependence of Dielectron Production in Au+Au Collisions at STAR ( 00:30 ) 0 files Patrick Huck 09:30 Direct virtual photon and dielectron production in Au+Au collisions at $\sqrt{s_{NN}} $ = 200 GeV at STAR ( 00:30 ) 1 file Chi Yang 10:00 Dielectron production in Au+Au and p+p collisions at sNN = 200GeV at STAR ( 00:30 ) 0 files Yi Guo 10:30 11:00 The production of low mass dielectrons in Au+Au collisions at \sqrt{s_{NN}} = 27 GeV from STAR (EVO) ( 00:30 ) 1 file Joey Butterworth 11:30 Systematics of the Kinetic Freeze-out Properties in High-Energy Nuclear Collisions from STAR ( 00:30 ) 0 files Lokesh Kumar
Bulk Correlations PWG --- Session I
Conveners: Shusu Shi, Hui Wang
(Adenin : 09:30 - 12:30)
Time Talk Presenter 09:30 QM rehearsal - Flow Measurements and selection of body-body and tip-tip enhanced samples in U+U collisions at STAR ( 00:45 ) 0 files Hui Wang 10:15 QM rehearsal - High moments measurements of net-proton, net-charge and net-kaon distributions at STAR ( 00:45 ) 1 file Amal Sakar 11:00
Coffee Break ( 00:15 )
11:15 Higher Moments of Multiplicity Distributions at STAR ( 00:30 ) 1 file Xiaofeng Luo (Central China Normal University) 11:45 QM Poster rehearsal - Femtoscopic analysis of charged kaon correlations at small relative momentum in $p+p$ collisions in STAR ( 00:30 ) 2 files Grigory Nigmatkulov
Heavy Flavor II
Conveners: Daniel Kikola, Zhenyu Ye, Yifei Zhang
(Cytosin : 14:00 - 18:00)
Time Talk Presenter 14:00 J/psi in U+U collisions with HT trigger ( 00:20 ) 2 files Guannan Xie 14:30 Measurement of NPE in p+p at 200GeV Run 12 ( 00:30 ) 2 files Xiaozhi Bai 15:00 J/psi production in p+p 500 GeV run 11 ( 00:30 ) 1 file Qian Yang 15:30
Coffee break ( 00:15 )
15:45 Low-pT NPE in Au+Au 200 GeV run 10 ( 00:30 ) 2 files Kunsu Oh 16:15 D0 analysis in p+p 200 GeV with Run 12 data ( 00:30 ) 1 file Mustafa Mustafa (EVO) (LBNL)
Bulk Correlations PWG --- Session II
Conveners: Shusu Shi, Hui Wang
(Adenin : 14:00 - 18:00)
Time Talk Presenter 14:00 QM Poster rehearsal - Triangular Flow of Identified Hadrons in Au+Au Collisions at √sNN = 39 and 200 GeV ( 00:30 ) 1 file Xu Su 14:30 QM Poster rehearsal - Pion-kaon femtoscopy for Au+Au collisions at sqrt(s_NN) = 39GeV from Beam Energy Scan program at STAR ( 00:30 ) 0 files Katarzyna Poniatowska
Light Flavor Spectra Parallel Session - II
Conveners: Lokesh Kumar, Xianglei Zhu
(Guanin : 14:00 - 18:00)
Time Talk Presenter 14:00 Identified particle production for p + p collisions at $\sqrt{s}$ = 62.4 GeV at STAR ( 00:30 ) 1 file Shikshit Gupta (evo) 14:30 Charged particle $p_{\mathrm{T}}$ spectra measured at mid-rapidity in the Beam Energy Scan from STAR and comparisons to models ( 00:30 ) 4 files Stephen Horvat 15:00 Search for Antimatter Muonic Hydrogen at STAR ( 00:30 ) 1 file Kefeng Xin 15:30 16:00 Dimuon Production in Au+Au sNN = 200 GeV Collisions at STAR ( 00:30 ) 1 file Kefeng Xin
Wednesday, 14 May 2014 :: day 2
Light Flavor Spectra Parallel Session - III
Conveners: Lokesh Kumar, Xianglei Zhu
(Guanin : 09:00 - 18:00)
Time Talk Presenter 09:00 Pseudo-rapidity dependence of inclusive photon multiplicity distributions at forward rapidity in STAR at RHIC Beam ( 00:30 ) 0 files Dronika Solanki 09:30 A Fixed-Target Program for STAR: Extending the Low Energy Reach of the RHIC Beam Energy Scan ( 00:30 ) 2 files Brooke Haag 10:00 Omega and Phi production in p+p, Au+Au and U+U collisions at STAR ( 00:30 ) 1 file Xianglei Zhu
Jet-correlation PWG parallel session
Conveners: Saskia Mioduszewski, Fuqiang Wang
(Lab Lounge (EVO/seeVogh: Frankfurt Jet-corr parallel session) : 09:00 - 12:00)
Time Talk Presenter 09:00 Di-Jet Imbalance Measurements and Semi-Inclusive Recoil Jet Distributions in Central Au+Au Collisions in STAR ( 01:00 ) 1 file Jörn Putschke (WSU) 10:00 Search for the ’Ridge’ in d+Au Collisions at RHIC by STAR ( 01:00 ) 1 file Li Yi (Purdue) 11:00 High-pt Direct Photon Azimuthal Correlation Measurements ( 01:00 ) 1 file Ahmed Hamed (Texas A&M) 12:00 Jet analysis update ( 00:20 ) 1 file Jan Rusnak (NPI)
Heavy Flavor III
Conveners: Daniel Kikola, Zhenyu Ye, Yifei Zhang
(Cytosin : 09:00 - 13:00)
Time Talk Presenter 09:00 Heavy Quark Interactions with the Medium as Measured with Electron-Hadron Correlations in $Au+Au$ Collisions in STAR ( 00:40 ) 2 files Jay Dunkelberger 09:40 NPE in p+p 200 GeV run 2009 ( 00:40 ) 3 files Olga Rusnakova 10:20
Coffee break ( 00:20 )
10:40 D* analysis in p+p 500 GeV BHT data ( 00:40 ) 1 file David Tlusty 11:20 Upsilon production in p+p 500 GeV ( 00:30 ) 1 file Leszek Kosarzewski
Bulk Correlations PWG --- Session III
Conveners: Shusu Shi, Hui Wang
(Adenin : 09:30 - 18:00)
Time Talk Presenter 09:30 QM rehearsal - CME and CVE ( 00:45 ) 1 file Feng Zhao 10:15 QM rehearsal - Charge asymmetry dependency of pi/K anisotropic flow in U+U and Au+Au collisions at STAR ( 00:45 ) 1 file Qi-ye Shou 11:00
Coffee Break ( 00:15 )
11:15 QM rehearsal - Elliptic flow of light nuclei and identified hadrons, their centrality and energy dependence in STAR ( 00:45 ) 0 files Rihan Haque 12:00 QM Poster rehearsal (EVO) - Measurement of higher harmonic flow of ϕ meson in STAR at RHIC ( 00:30 ) 1 file Mukesh Sharma
Plenary Session: Welcome and STAR status
Conveners: Nu Xu and Olga Evdokimov
(14:00 - 18:00)
Time Talk Presenter 14:00 Welcome ( 00:15 ) 0 files Ivan Kisel and Zhangbu Xu 14:15 Run14 status ( 00:15 ) 1 file Bill Christie 14:30 Analysis and paper status ( 00:30 ) 1 file Frank Geurts 15:00 BUR discussion ( 00:30 ) 1 file Zhangbu Xu 15:30 16:00 MTD status ( 00:30 ) 2 files Lijuan Ruan 16:30 HFT status ( 00:30 ) 1 file Hans Georg Ritter 17:00 QM talk rehearsal: HFT ( 00:30 ) 1 file Hao Qiu 17:30 QM talk rehearsal: Open charmed hadron production in p+p, Au+Au and U+U collisions at STAR ( 00:30 ) 5 files Zhenyu Ye
Poster Exhibition Session with Wine&Cheese
(18:30 - 20:30)
Thursday, 15 May 2014 :: day 3
Plenary Session: Theory
Conveners: Reinhard Stock
(09:00 - 12:30)
Time Talk Presenter 09:00 Theory I: Penetrating Probes ( 00:40 ) 1 file Olena Linnyk 09:40 Theory II: particle number fluctuations ( 00:40 ) 1 file Kenji Morita 10:20 10:50 QM STAR highlight talk ( 01:00 ) 1 file Nu Xu 11:50 Theory III: Heavy quark production ( 00:40 ) 0 files Marlene Nahrgang
Plenary Session: QM talk rehearsal
Conveners: Lijuan Ruan and Helen Caines
(14:00 - 18:00)
Time Talk Presenter 14:00 Direct virtual photon and dielectron production in Au+Au collisions at $\sqrt{s_{NN}} $ = 200 GeV at STAR ( 00:30 ) 1 file Chi Yang 14:30 Systematics of the Kinetic Freeze-out Properties in High-Energy Nuclear Collisions from STAR ( 00:30 ) 0 files Lokesh Kumar 15:00 Beam Energy Dependence of Dielectron Production in Au+Au Collisions at STAR ( 00:30 ) 1 file Patrick Huck 15:30
Coffee break ( 00:30 )
16:30 Searching for the "Ridge" in d+Au Collisions at RHIC by STAR ( 00:30 ) 1 file Li Yi 17:30 Omega and Phi production in p+p, Au+Au and U+U collisions at STAR ( 00:30 ) 1 file Xianglei Zhu
Friday, 16 May 2014 :: day 4
Plenary Session: QM talk rehearsal
Conveners: Gang Wang and Fuqiang Wang
(09:00 - 12:30)
Time Talk Presenter 08:30 QM STAR highlight talk ( 00:30 ) 1 file Nu Xu 09:00 Flow Measurements and selection of body-body and tip-tip enhanced samples in U+U collisions at STAR ( 00:30 ) 0 files Hui Wang 09:30 Charge asymmetry dependency of pi/K anisotropic flow in U+U and Au+Au collisions at STAR ( 00:30 ) 0 files Qi-ye Shou 10:00 $\Lambda$($K_{S}^{0}$)-$h^{\pm}$ Azimuthal Correlations with Respect to Reaction Plane and Searches for CME and CVE ( 00:30 ) 0 files Feng Zhao 10:30
Coffee break ( 00:30 )
11:00 The centrality and energy dependence of the elliptic flow of light nuclei and hadrons in STAR ( 00:30 ) 1 file Rihan Haque (NISER, India) 11:30 High moment measurements of net-proton, net-charge and net-kaon distributions at STAR ( 00:30 ) 1 file Amal Sakar 12:00 Recent STAR measurements of $J/\psi$ production from Beam Energy Scan and U$+$U collisions ( 00:30 ) 6 files Wangmei Zha
Plenary Session: QM talk rehearsal and possible re-rehearsal
Conveners: Frank Geurts
(14:00 - 16:30)
Time Talk Presenter 14:00 Semi-inclusive recoil jet distribution and di-jets imbalance measurements in central Au+Au collisions at sNN = 200 GeV from STAR ( 00:30 ) 3 files Joern Putschke 14:30 Measurements of direct-photon-hadron correlations and direct-photon azimuthal anisotropy by STAR ( 00:30 ) 1 file Ahmed Hamed 15:00 Omega and Phi production in p+p, Au+Au and U+U collisions at STAR ( 00:30 ) 1 file Xianglei Zhu 15:30
Plenary Session: QM flash talk preparation
Conveners: Saskia Mioduszewski
(16:00 - 18:30)
16:00  Pion-kaon femtoscopy for Au+Au collisions at $\sqrt{s_{NN}}$ = 39 GeV from the Beam Energy Scan program at STAR (00:15) - Katarzyna Poniatowska
16:15  Azimuthally-sensitive two-pion interferometry in U+U collisions at STAR (00:15) - John Campbell
16:30  Heavy Quark Interactions with the Medium as Measured with Electron-Hadron Correlations in $Au+Au$ Collisions in STAR (00:15) - Jay Dunkelberger
16:45  J/psi polarization measurement in p+p collisions at 500 GeV in STAR (00:15) - Barbara Trzeciak
17:00  Upsilon production in U+U collisions at the STAR experiment (00:15) - Robert Vertesi
17:15  Search for Antimatter Muonic Hydrogen at STAR (00:15) - Kefeng Xin
17:30  Triangular Flow of Inclusive Charged and Identified Particles at STAR (00:15) - Xu Sun
17:45  Spokesperson's address (00:15) - Zhangbu Xu
From my last three posts, I think we are ready to model the motion of a mass on a spring:
So, just like I did with the tennis ball, I will show you equations that describe this motion.
Now again, to develop these equations requires calculus, so I will just provide the final result. But before I do, just how does one begin modelling a physical process like this?
What is usually done, is to write down known equations that describe the forces acting on the mass. If you think about it, there are two: a force due to gravity and a force from the spring. There are other forces as well like resistance from the air, but as before, we will assume these to be zero to simplify the development.
Let’s first set up the picture. We have a weight on a spring. The weight has mass
m. Now weight is different than mass, but on earth, the units are the same. So on earth, a 1 kg weight has a mass of 1 kg. But on the moon, the mass is still 1 kg but its weight is 0.165 kg because gravity is weaker there. We have a spring with a spring constant of k which is a measure of how stiff the spring is. The higher the value of k, the stiffer the spring.
To start the mass moving, we have stretched it
A centimeters down from its resting position, then let go. We set up a one dimensional coordinate system where the rest position of the mass is 0 and up is positive.
The force due to gravity is –
mg where m is the mass and g is the acceleration due to gravity. From my post on the tennis ball, remember that g is 9.8 m/s². It’s negative because the force is acting in the down direction. This comes from Isaac Newton’s second law that says that force is equal to mass times acceleration. That is, F = ma. The force due to the spring comes from something called Hooke’s Law: F = –kx where k is the spring constant and x is the amount that the spring is stretched (negative) or compressed (positive) from the resting position. The minus sign makes this a restoring force: it always pushes the mass back toward the rest position.
So the force equation for this setup is:
F = ma = –kx
Gravity doesn’t appear explicitly here because x is measured from the resting position of the hanging mass: the spring’s equilibrium stretch already balances –mg, so the two constant terms cancel.
This is the equation engineers start with before they do calculus on it. So now in this post, this is the part where a miracle happens, and I’ll give you the final result.
So the equation that shows where the mass is at a certain time is below where
t is time in seconds. It is assumed that time starts (that is t = 0) when the mass is travelling upwards and is at the 0 position:
\[
x = A\sin\!\left(\frac{180\,t}{\pi}\sqrt{\frac{k}{m}}\right)
\]
(The factor 180/π just converts radians to degrees, for calculators that evaluate sine in degrees; with sine in radians the formula is simply x = A sin(t√(k/m)).)
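As a quick numeric sketch (my addition, with made-up values for A, k and m), the position formula can be evaluated in Python; radians are used, so the 180/π degree-conversion factor is not needed:

```python
import math

# x(t) = A*sin(t*sqrt(k/m)), evaluated in radians.
# A, k and m below are illustration numbers, not from the post.
def position(t, A=0.05, k=10.0, m=0.5):
    return A * math.sin(t * math.sqrt(k / m))

# The motion repeats with period T = 2*pi*sqrt(m/k).
T = 2 * math.pi * math.sqrt(0.5 / 10.0)
```

A quarter period after crossing zero, the mass is at its full amplitude A, and after one full period it is back at zero, as expected for simple harmonic motion.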
In my next post, I’ll dissect this a bit and put some actual numbers in it and plot the results. |
An element $a \in \mathbb{Z}_n$ is a quadratic residue in $\mathbb{Z}_n$ if it's congruent to some perfect square modulo $n$.
Is there an efficient algorithm to find all quadratic residues in $\mathbb{Z}_n$?
$n$ is composite and we know all it's factors if that helps. Update:
We have one more restriction: $n$ = $p_1 p_2 \dots p_k$, where $p_i$ are distinct odd primes and $p_i \equiv 3 \pmod 4$. Can we get something in this case?
I use the following approach at the moment:
Iterate over $\left\lfloor\frac{n}{2}\right\rfloor + 1$ perfect squares starting from $0$ and store them as we go. The problem is that it becomes slow quickly as $n$ grows. Here's the code example:
#include <stdio.h>

int main(void) {
    int n = 7 * 11;   /* modulus */
    int qr = 0;       /* current square: i*i mod n */
    int step = 1;     /* (i+1)^2 - i^2 = 2i + 1 */
    for (int i = 0; i <= n / 2; i++) {
        printf("qr: %i\n", qr);
        /* perform some operation on qr here,
           e.g. store it somewhere to access later */
        qr = (qr + step) % n;
        step += 2;
    }
    return 0;
}
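For the updated squarefree case, here is a sketch of one standard direction (my addition, not from the post): by the Chinese Remainder Theorem, $a$ is a quadratic residue mod $n = p_1 \cdots p_k$ exactly when it is a quadratic residue mod every $p_i$, and each odd prime can be tested with Euler's criterion using fast modular exponentiation:

```python
# a is a square mod squarefree n iff it is a square mod every prime factor.
# For odd prime p: a unit a is a QR mod p iff a^((p-1)/2) == 1 (mod p),
# and multiples of p are trivially squares mod p (0 = 0^2).
def is_qr(a, primes):
    return all(a % p == 0 or pow(a % p, (p - 1) // 2, p) == 1 for p in primes)

def quadratic_residues(primes):
    n = 1
    for p in primes:
        n *= p
    return sorted(a for a in range(n) if is_qr(a, primes))
```

The per-element test is now polynomial in $\log n$; listing all residues is still linear in $n$ simply because of the output size, but the residues can also be generated as CRT combinations of the $(p_i+1)/2$ residues modulo each prime.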
Homework Statement: Two identical audio speakers, connected to the same amplifier, produce monochromatic sound waves with a frequency that can be varied between 300 and 600 Hz. The speed of sound is 340 m/s. You find that, where you are standing, you hear minimum-intensity sound.
a) Explain why you hear minimum-intensity sound
b) If one of the speakers is moved 39.8 cm toward you, the sound you hear has maximum intensity. What is the frequency of the sound?
c) How much closer to you from the position in part (b) must the speaker be moved to the next position where you hear maximum intensity?
Homework Equations:interference
I have no idea on how to proceed
I started with
## \text{frequency} = \frac{\text{speed of sound}}{\lambda} = \frac{340\ \text{m/s}}{\lambda} ##
then
## d\sin\alpha = \frac{\lambda}{2} ##
but now I'm stuck.
Any help please? |
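A numeric sketch of the standard two-source reasoning (my assumption about the intended solution, not from the thread): moving one speaker toward you changes the path difference by exactly the distance moved, and going from a minimum straight to the adjacent maximum means that change is half a wavelength.

```python
v = 340.0   # speed of sound (m/s), given
d = 0.398   # distance one speaker is moved toward the listener (m), given

# Minimum -> adjacent maximum: path difference changes by lambda/2.
lam = 2 * d        # wavelength: 0.796 m
f = v / lam        # frequency: about 427 Hz, inside the 300-600 Hz range

# Part (c): the next maximum needs a further full-wavelength change,
# i.e. the speaker moves another 0.796 m closer.
next_shift = lam
```

The resulting frequency lands inside the stated 300–600 Hz range, which is a useful consistency check on the half-wavelength assumption.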
Recall the third isomorphism theorem of groups: Let $G$ be a group and let $H, K$ be normal subgroups of $G$ with $H < K$. Then $K/H$ is a normal subgroup of $G/H$ and we have an isomorphism\[(G/H)/(K/H) \cong G/K.\]
Proof 1 (Using third isomorphism theorem)
Since $H, K$ are normal subgroups of $G$ and $H < K$, the third isomorphism theorem yields that\[(G/H)/(K/H) \cong G/K.\]
If $A$ is a noetherian domain and not a field then the infinite product $M=A\times A\times \dots$ is not free. Suppose there is a basis. For $x\in M$ define its support to be the finite set of basis elements for which the coefficient is not zero. Note that if the supports of $x$ and $y$ are disjoint then their union is the support of $x+y$. Choose $\pi\in A$ neither zero nor invertible. Define the $n$-support of $x$ to consist of those basis elements for which the coefficient is not divisible by $\pi^n$. Note that $n$-support is contained in $(n+1)$-support is contained in support.
Choose an infinite sequence of nonzero elements $m_1,m_2,\dots $ of $M$ such that
(1) $m_n$ projects to zero in the first $n-1$ factors of the infinite product,
(2) the $m_n$ have pairwise disjoint support.
To get $m_n$ when all the previous $m_k$ are given, you just have to know that the kernel of a certain map from $M$ to a finite product of copies of $A$ is nontrivial (project the product on the first $n-1$ factors and project the free module on the span of a finite subset of basis).
Then divide each $m_n$ by as high a power of $\pi$ as possible; this preserves 1 and 2 while also arranging
(3) $m_n$ is nonzero mod $\pi$.
Now let $s_n=\pi m_1+\pi^2 m_2+\dots +\pi^nm_n$ and let $s$ be the limit of $s_n$ (defined because of 1).
The contradiction is that the support of $s$ must contain arbitrarily large finite sets $S_n$: Let $S_n$ be the $(n+1)$-support of $s_n$. Then the support of $s$ contains the $(n+1)$-support of $s$, which equals $S_n$. And $S_n$ properly contains $S_{n-1}$ because it is the disjoint union of the $(n+1)$-support of $s_{n-1}$ and the $(n+1)$-support of $\pi^nm_n$, this last being the (by 3 nonempty) $1$-support of $m_n$.
EDIT This implies that if $A$ is noetherian and has dimension $>0$ then the infinite product is not free, because $(A/P)\otimes \prod A=\prod (A/P)$ if $P$ is a finitely generated ideal -- choose $P$ to be a non-maximal prime. Also, the argument above proves more than I said: for a noetherian domain the infinite product is not even a submodule of a free module. |
Rank Abundance Graphs
Species abundance distribution can also be expressed through rank abundance graphs. A common approach is to plot some measure of species abundance against their rank order of abundance. Such a plot allows the user to compare not only relative richness but also evenness. Species abundance models (also called abundance curves) use all available community information to create a mathematical model that describes the number and relative abundance of all species in a community. These models include the log normal, geometric, logarithmic, and MacArthur’s broken-stick model. Many ecologists use these models as a way to express resource partitioning where the abundance of a species is equivalent to the percentage of space it occupies (Magurran 1988). Abundance curves offer an alternative to single number diversity indices by graphically describing community structure.
Figure \(\PageIndex{1}\). Generic Rank-abundance diagram of three common mathematical models used to fit species abundance distributions: Motomura’s geometric series, Fisher’s logseries, and Preston’s log-normal series (modified from Magurran 1988) by Aedrake09.
Let’s compare the indices and a very simple abundance distribution in two different situations. Stand A and B both have the same number of species (same richness), but the number of individuals in each species is more similar in Stand A (greater evenness). In Stand B, species 1 has the most individuals, with the remaining nine species having a substantially smaller number of individuals per species. Richness, the complement to Simpson’s D, and Shannon’s H’ are computed for both stands. These two diversity indices incorporate both richness and evenness. In the abundance distribution graph, richness can be compared on the x-axis and evenness by the shape of the distribution. Because Stand A displays greater evenness it has greater overall diversity than Stand B. Notice that Stand A has higher values for both Simpson’s and Shannon’s indices compared to Stand B.
Figure \(\PageIndex{2}\). Two stands comparing richness, Simpson’s D, and Shannon’s index.
Indices of diversity vary in computation and interpretation so it is important to make sure you understand which index is being used to measure diversity. It is unsuitable to compare diversity between two areas when different indices are computed for each area. However, when multiple indices are computed for each area, the sampled areas will rank similarly in diversity as measured by the different indices. Notice in this previous example both Simpson’s and Shannon’s index rank Stand A as more diverse and Stand B as less diverse.
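To make the Stand A / Stand B comparison concrete, here is a small script (my own illustration; the counts are made-up stand-ins, since the original stand data are in the figure) computing Shannon's H' and the complement of Simpson's D from species counts:

```python
import math

def shannon(counts):
    """Shannon's H' = -sum(p_i * ln p_i) over species proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def simpson_complement(counts):
    """1 - D, where D = sum(p_i^2); higher values mean more diversity."""
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

stand_a = [10] * 10        # 10 species, perfectly even
stand_b = [91] + [1] * 9   # 10 species, one strongly dominant
```

Both indices rank the even stand higher, matching the text: equal richness, but greater evenness yields greater diversity.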
Similarity between Sites
There are also indices that compare the similarity (and dissimilarity) between sites. The ideal objective is to express the ecological similarity of different sites; however, it is important to identify the aim or focus of the investigation in order to select the most appropriate index. While many indices are available, van Tongeren (1995) states that most of the indices do not have a firm theoretical basis and suggests that practical experience should guide the selection of available indices.
The Jaccard index (1912) compares two sites based on the presence or absence of species and is used with qualitative data (e.g., species lists). It is based on the idea that the more species both sites have in common, the more similar they are. The Jaccard index is the proportion of species out of the total species list of the two sites, which is common to both sites:
$$SJ = \frac {c} {(a + b + c)}$$
where
SJ is the similarity index, c is the number of shared species between the two sites and a and b are the number of species unique to each site. Sørenson (1948) developed a similarity index that is frequently referred to as the coefficient of community (CC):
$$CC = \frac {2c} {(a + b + 2c)}$$
As you can see, this index differs from Jaccard’s in that the number of species shared between the two sites is divided by the average number of species instead of the total number of species for both sites. For both indices, the higher the value the more ecologically similar two sites are.
If quantitative data are available, a similarity ratio (Ball 1966) or a percentage similarity index, such as Gauch (1982), can be computed. Not only do these indices compare number of similar and dissimilar species present between two sites, but also incorporate abundance. The similarity ratio is:
$$SR_{ij} = \dfrac {\sum y_{ki}y_{kj}}{\sum y_{ki}^2 +\sum y_{kj}^2 -\sum(y_{ki}y_{kj})}$$
where
yki is the abundance of the kth species at site i (sites i and j are compared). Notice that this equation resolves to Jaccard’s index when just presence or absence data is available. The percent similarity index is:
$$PS_{ij} = \dfrac {200\sum min (y_{ki},y_{kj})} {\sum y_{ki}+\sum y_{kj}}$$
Again, notice how this equation resolves to Sørenson’s index with qualitative data only. So let’s look at a simple example of how these indices allow us to compare similarity between three sites. The following example presents hypothetical data on species abundance from three different sites containing seven different species (A-G).
| Species | Site 1 | Site 2 | Site 3 |
|---------|--------|--------|--------|
| A       | 4      | 0      | 1      |
| B       | 0      | 1      | 0      |
| C       | 0      | 0      | 0      |
| D       | 1      | 0      | 1      |
| E       | 1      | 4      | 0      |
| F       | 3      | 1      | 1      |
| G       | 1      | 0      | 3      |
Let’s begin by computing Jaccard’s and Sørenson’s indices for the three comparisons (site 1 vs. site 2, site 1 vs. site 3, and site 2 vs. site 3).
\(SJ1,2=\frac {2}{(3+1+2)}=0.33\) \(SJ1,3 = \frac {4}{(4+1+0)}=0.80\) \(SJ2,3 =\frac {1}{(1+2+3)} = 0.17\)
\(CC1,2=\frac {2(2)}{(3+1+2(2))} = 0.50\) \(CC1,3 =\frac {2(4)}{(1+0+2(4))} = 0.89\) \(CC2,3 =\frac {2(1)}{(2+3+2(1))} = 0.29\)
Both of these qualitative indices declare that sites 1 and 3 are the most similar and sites 2 and 3 are the least similar. Now let’s compute the similarity ratio and the percent similarity index for the same site comparisons.
$$SR_{1,2}=\dfrac {(4 \times 0)+(0 \times 1) +(0\times 0)+(1\times 0)+(1\times4)+(3\times 1)+(1\times 0)}{(4^2+0^2+0^2+1^2+1^2+3^2+1^2)+(0^2+1^2+0^2+0^2+4^2+1^2+0^2)-\left[(4 \times 0)+(0 \times 1) +(0\times 0)+(1\times 0)+(1\times4)+(3\times 1)+(1\times 0)\right]}$$

$$SR_{1,2}= \frac{7}{28+18-7} \approx 0.18$$
$$SR_{1,3}=\dfrac {(4\times 1)+(0\times 0)+(0\times 0)+(1\times 1)+(1\times 0)+(3\times 1)+(1\times 3)}{(4^2 +0^2+0^2+1^2+1^2+3^2+1^2)+(1^2+0^2+0^2+1^2+0^2+1^2+3^2)-\left[(4\times 1)+(0\times 0)+(0\times 0)+(1\times 1)+(1\times 0)+(3\times 1)+(1\times 3)\right]}$$

$$SR_{1,3}= \frac{11}{28+12-11} \approx 0.38$$
$$SR_{2,3}=\dfrac {(0\times 1)+(1\times 0)+(0\times 0)+(0\times 1) +(4\times 0) +(1\times 1) +(0\times 3)}{(0^2+1^2+0^2+0^2+4^2+1^2+0^2)+(1^2+0^2+0^2+1^2+0^2+1^2+3^2)-\left[(0\times 1)+(1\times 0)+(0\times 0)+(0\times 1) +(4\times 0) +(1\times 1) +(0\times 3)\right]}$$

$$SR_{2,3}= \frac{1}{18+12-1} \approx 0.03$$
$$PS1,2=\dfrac {200(0+0+0+0+1+1+0)}{(4+0+0+1+1+3+1)+(0+1+0+0+4+1+0)}=25.0$$
$$PS1,3=\dfrac {200(1+0+0+1+0+1+1)}{(4+0+0+1+1+3+1)+(1+0+0+1+0+1+3)} = 50.0$$
$$PS2,3=\dfrac {200(0+0+0+0+0+1+0)}{(0+1+0+0+4+1+0)+(1+0+0+1+0+1+3)} = 16.7$$
A matrix of percent similarity values allows for easy interpretation (especially when comparing more than three sites).
Table \(\PageIndex{1}\). A matrix of percent similarity for three sites.
The quantitative indices return the same conclusions as the qualitative indices. Sites 1 and 3 are the most similar ecologically, and sites 2 and 3 are the least similar; and also site 2 is most unlike the other two sites.
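As a cross-check of the worked example (my code; the abundance vectors are the ones appearing in the calculations above), all four indices can be computed directly from the site data:

```python
# Abundances of species A-G at the three sites of the worked example.
site1 = [4, 0, 0, 1, 1, 3, 1]
site2 = [0, 1, 0, 0, 4, 1, 0]
site3 = [1, 0, 0, 1, 0, 1, 3]

def jaccard(a, b):
    shared = sum(1 for x, y in zip(a, b) if x > 0 and y > 0)
    total = sum(1 for x, y in zip(a, b) if x > 0 or y > 0)
    return shared / total                      # SJ = c / (a + b + c)

def sorensen(a, b):
    shared = sum(1 for x, y in zip(a, b) if x > 0 and y > 0)
    # 2c / (a + b + 2c) equals 2c / (richness of A + richness of B)
    return 2 * shared / (sum(1 for x in a if x > 0) + sum(1 for y in b if y > 0))

def similarity_ratio(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sum(x * x for x in a) + sum(y * y for y in b) - dot)

def percent_similarity(a, b):
    return 200 * sum(min(x, y) for x, y in zip(a, b)) / (sum(a) + sum(b))
```

Running these reproduces the rankings in the text: sites 1 and 3 come out most similar and sites 2 and 3 least similar under every index.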
Habitat Suitability Index (HSI)
In 1980, the U.S. Fish and Wildlife Service (USFWS) developed a procedure for documenting predicted impacts to fish and wildlife from proposed land and water resource development projects. The Habitat Evaluation Procedures (HEP) (Schamberger and Farmer 1978) were developed in response to the need to document the non-monetary value of fish and wildlife resources. HEP incorporates population and habitat theories for each species and is based on the assumption that habitat quality and quantity can be numerically described so that changes to the area could be assessed and compared. It is a species-habitat approach to impact assessment and habitat quality, for a specific species is quantified using a habitat suitability index (HSI).
Habitat suitability index (HSI) models provide a numerical index of habitat quality for a specific species (Schamberger et al. 1982) and in general assume a positive, linear relationship between carrying capacity (number of animals supported by some unit area) and HSI. Today’s natural resource manager often faces economically and socially important decisions that will affect not only timber but wildlife and its habitat. HSI models provide managers with tools to investigate the requirements necessary for survival of a species. Understanding the relationships between animal habitat and forest management prescription is vital towards a more comprehensive management approach of our natural resources. An HSI model synthesizes habitat use information into a framework appropriate for fieldwork and is scaled to produce an index value between 0.0 (unsuitable habitat) to 1.0 (optimum habitat), with each increment of change being identical to another. For example, a change in HSI from 0.4 to 0.5 represents the same magnitude of change as from 0.7 to 0.8. The HSI values are multiplied by area of available habitat to obtain Habitat Units (HUs) for individual species. The U.S. Fish and Wildlife Service (USFWS) has documented a series of HSI models for a wide variety of species (FWS/OBS-82/10).
Let’s examine a simple HSI model for the marten (Martes americana), which inhabits late successional forest communities in North America (Allen 1982). An HSI model must begin with habitat use information: understanding the species’ needs in terms of food, water, cover, reproduction, and range. For this species, the winter cover requirements are more restrictive than the cover requirements for any other season, so it was assumed that if adequate winter cover was available, habitat requirements for the rest of the year would not be limiting. Additionally, all winter habitat requirements are satisfied in boreal evergreen forests. Given this, the researchers identified four crucial variables for winter cover that needed to be included in the model.
Figure \(\PageIndex{3}\). Habitat requirements for the marten.
For each of these four winter cover variables (V1, V2, V3, and V4), suitability index graphs were created to examine the relationship between various conditions of these variables and suitable habitat for the marten. A reproduction of the graph for % tree canopy closure is presented below.
Figure \(\PageIndex{4}\). Suitability index graph for percent canopy cover.
Notice that any canopy cover less than 25% results in unacceptable habitat based on this variable alone. However, once 50% canopy cover is reached the suitability index reaches 1.0 and optimum habitat for this variable is achieved. The following equation was created that combined the life requisite values for the marten using these four variables:
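The canopy-closure curve can be written as a small piecewise function (the straight ramp between 25% and 50% closure is my reading of the reproduced graph, not an exact digitization):

```python
def canopy_si(pct_closure):
    """Suitability index for % tree canopy closure."""
    if pct_closure < 25:
        return 0.0   # unacceptable habitat below 25% closure
    if pct_closure >= 50:
        return 1.0   # optimum habitat from 50% closure upward
    return (pct_closure - 25) / 25.0   # assumed linear ramp in between
```

Each of the four winter-cover variables gets its own suitability curve of this kind before the values are combined into the overall index.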
$$(V_1 \times V_2 \times V_3 \times V_4)^{1/2}$$
Since winter cover was the only life requisite considered in this model, the HSI equals the winter cover value. As you can see, the more life requisites included in the model, the more complex the model becomes.
While HSI values identify the quality of the habitat for a specific species, wildlife diversity as a whole is a function of size and spatial arrangement of the treated stands (Porter 1986). Horizontal and structural diversity are important. Generally speaking, the more stands of different character an area contains, the greater the wildlife diversity. The spatial distribution of differing types of stands supports animals that need multiple cover types. In order to promote wildlife species diversity, a manager must develop forest management prescription that varies the spatial and temporal patterns of timber reproduction, thereby providing greater horizontal and vertical structural diversity.
Figure \(\PageIndex{5}\): Bird species diversity nesting across a forest to field gradient (After Strelke and Dickson 1980).
Typically, even-aged management reduces vertical structural diversity, but options such as the shelterwood method tend to mitigate this problem. The selection system tends to promote both horizontal and vertical diversity.
Integrated natural resource management can be a complicated process but not impossible. Vegetation response to silvicultural prescriptions provides the foundation for understanding the wildlife response. By examining the present characteristics of the managed stands, understanding the future response due to management, and comparing those with the requirements of specific species, we can achieve habitat manipulation together with timber management. |
Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2013-10)
Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ...
Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2013-03)
The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at $\sqrt{s_{NN}}$ = ...
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary $\pi^{\pm}$, $K^{\pm}$, p and $\bar{p}$ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at $\sqrt{s}$ = 7 TeV performed with a large ion collider experiment at the large hadron collider (LHC) is reported. ...
Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE
(Springer, 2013-06)
Measurements of cross sections of inelastic and diffractive processes in proton--proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ...
Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC
(Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ... |
Let $I \subset \mathbb{R}$ and for $\forall n \in \mathbb{N}: f_n \in C(I, \mathbb{R})$. Prove that if for any bijection $\sigma:\mathbb{N} \rightarrow \mathbb{N}$, the series $$\sum_n f_{\sigma(n)}$$ converges uniformly on $I$, then $$\sum_n |f_n|$$ also converges uniformly on $I$. I'm thinking of a proof by contradiction: suppose that the latter series is not uniformly convergent, that is: $$\exists \varepsilon >0:\forall N\in\mathbb{N}:\exists n>N: \exists x \in I:$$ $$\sum_{k=N}^{n}|f_k(x)|\ge \varepsilon$$ Then $$\prod_{N \in \mathbb{N}} \{n \mid n>N \land \exists x \in I: \sum_{k=N}^{n}|f_k(x)|\ge \varepsilon \}\ne \emptyset,$$ but then I am stuck, because an element of the latter set is not necessarily a bijection, since it might fail to be injective. My thought was to construct somehow a rearrangement of the series that would not converge uniformly, hence a contradiction. I also fail to see where the continuity of the functions $f_n$ comes in. I would appreciate any suggestions.
Assume otherwise. Then there exist $\epsilon > 0$ such that
$$ \forall n \in \mathbb{N}, \quad \exists x \in I \quad \text{s.t.} \quad \sum_{i=n}^{\infty} |f_i(x)| > \epsilon. \tag{*}$$
Now we would like to construct $\sigma$ which violates the assumption. To this end, we recursively define the triple $(A_j, n_j, x_j)_{j=1}^{\infty}$ as follows:
Construction. Assume that $(A_j, n_j, x_j)_{j=1}^{k-1}$ is well-defined so that $A_j$'s and $\{n_j\}$'s are mutually disjoint. Pick $n$ so that it is larger than any elements in $\bigcup_{j=1}^{k-1}A_j \cup \{n_j \}$. In the following picture, elements chosen up to the $(k-1)$-th stage are represented by black dots.
By $\text{(*)}$, there exists $x_k \in I$ such that $\sum_{i=n}^{\infty} |f_i(x_k)| > \epsilon$. So, either the sum of positive parts or the sum of negative parts must exceed $\epsilon/2$, and in particular, there exists a finite subset $A_k \subset \mathbb{N} \cap [n, \infty)$ so that
$$\left| \sum_{i \in A_k} f_i(x_k) \right| > \epsilon / 2. \tag{2}$$
Then pick $n_k$ as the smallest element in $\mathbb{N}\setminus\left(\bigcup_{j=1}^{k-1}A_j \cup \{n_j \} \cup A_k\right)$. In the following figure, elements of $A_k$ are represented by red dots and $n_k$ is represented by the blue dot.
By the construction, it is clear that $\mathbb{N} = \bigcup_{j=1}^{\infty} A_j \cup \{n_j\}$. From this, we may define $\sigma : \mathbb{N} \to \mathbb{N}$ as the function that enumerates elements in the sets of
$$(A_1, \{n_1\}, A_2, \{n_2\}, \cdots) $$
in order of appearance. In other words, if we regard $A_k$'s as ordered lists, then $\sigma$ is an infinite ordered list obtained by concatenating $A_1$, $\{n_1\}$, $A_2$, $\{n_2\}$, $\cdots$. Now, if we write $N_k = \#\big( \bigcup_{j=1}^{k} A_j \cup \{n_j\} \big)$, then
$$ \sup_{x \in I} \left| \sum_{i = N_{k-1} + 1}^{N_k} f_{\sigma(i)}(x) \right| \geq \left| \sum_{i \in A_k} f_i (x_k) \right| - |f_{n_k}(x_k)| \geq (\epsilon/2) - |f_{n_k}(x_k)|. $$
But it is easy to check that $f_n \to 0$ uniformly, and so, it follows that this lower bound is at least as large as $\epsilon/3$ for all sufficiently large $k$. This proves that partial sums of $(f_{\sigma(i)})$ cannot converge uniformly, contradicting the assumption. $\square$ |
I've seen a number of 2D Poisson disc sampling algorithms online that use a grid to accelerate checking for existing points within the minimum radius $r$ of a candidate point. For example:
They use a grid of squares of side $\frac{r}{\sqrt2}$, which is the same side length that I intuitively came up with when implementing this myself.
I can see the reason - that is the largest square that cannot contain more than 1 point (assuming the minimum is not attainable - the distance between two points must be
strictly greater than $r$).
However, having thought about it further, I adjusted the grid size to $\frac{r}2$ instead. This finer grid means 4 additional squares need to be checked (the 4 corner squares are now within the radius), but the total area covered by the required squares is less, so that on average fewer points will need to go through the Euclidean distance check. The difference can be visualized using the same style as the diagram in the first linked article.
For a candidate new point, existing points must be checked in all squares that are within a radius $r$ of the corners of the candidate's square. Here the two grid sizes are shown side by side, to scale, for the same radius $r$. This shows clearly that a significantly smaller area is being checked. Each square is exactly half the area of the previous approach, and even if the 4 outer corner squares are excluded in the previous approach (left image), this still gives an area $2 \cdot \frac{21}{25} = 1.68$ times larger than in the new approach.
My main question is this:
Is this approach still correct, and does it give identical results?
I'm also interested to know whether there is any reason to favor the $\frac{r}{\sqrt2}$ approach. Using $\frac{r}{2}$ seems more efficient in time, which seems worth the cost in space efficiency. Is there anything I'm missing?
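A quick way to sanity-check the two cell sizes (my own sketch, not from the linked articles) is to enumerate which neighboring grid cells can possibly contain a conflicting point, i.e. cells whose minimal distance to the candidate's cell is below $r$:

```python
import math

def cells_to_check(cell, r, eps=1e-9):
    """Offsets (di, dj) of grid cells that could hold a point within r
    of some point in cell (0, 0).  Cells whose minimal distance to the
    candidate's cell is at least r can never contain a conflict."""
    m = math.ceil(r / cell) + 1
    offsets = []
    for di in range(-m, m + 1):
        for dj in range(-m, m + 1):
            # minimal axis-aligned gap between cell (0,0) and cell (di,dj)
            dx = max(0, abs(di) - 1) * cell
            dy = max(0, abs(dj) - 1) * cell
            # strict '<' (with eps) drops cells that only touch at exactly r,
            # e.g. the 4 corner cells of the r/sqrt(2) grid
            if math.hypot(dx, dy) < r - eps:
                offsets.append((di, dj))
    return offsets

r = 1.0
coarse = cells_to_check(r / math.sqrt(2), r)  # classic grid: 21 cells
fine = cells_to_check(r / 2, r)               # proposed finer grid: 25 cells
```

The checked area is then 21·(r/√2)² = 10.5 r² versus 25·(r/2)² = 6.25 r², reproducing the 1.68 ratio computed in the question.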
Images produced with this jsfiddle (in case I need to edit them later). |
Evidence for the production of three massive vector bosons in $pp$ collisions with the ATLAS detector
Pre-published on: 2019 June 27
Published on: 2019 October 04
Abstract
A search for the production of three massive vector bosons in proton--proton collisions is performed using data at $\sqrt{s}=13\,TeV$ recorded with the ATLAS detector at the Large Hadron Collider in the years 2015--2017, corresponding to an integrated luminosity of $79.8\,\text{fb}^{-1}$. Events with two same-sign leptons $\ell$ (electrons or muons) and at least two reconstructed jets are selected to search for $WWW\to\ell\nu\ell\nu qq$. Events with three leptons without any same-flavour opposite-sign lepton pairs are used to search for $WWW\to\ell\nu\ell\nu\ell\nu$, while events with three leptons and at least one same-flavour opposite-sign lepton pair and one or more reconstructed jets are used to search for $WWZ\to\ell\nu qq \ell\ell$. Finally, events with four leptons are analysed to search for $WWZ\to\ell\nu\ell\nu\ell\ell$ and $WZZ\to qq \ell\ell\ell\ell$. Evidence for the joint production of three massive vector bosons is observed with a significance of 4.0 standard deviations, where the expectation is 3.1 standard deviations.
DOI: https://doi.org/10.22323/1.352.0135 |
Let $w$ denote the weight on $A$ so that $1-w$ is the weight on $B$. Recall from the properties of variance that
$\sigma_p^2 = w^2\sigma_A^2 + 2w(1-w)\sigma_A\sigma_B \rho_{AB}+ (1-w)^2\sigma_B^2$
Without loss of generality, assume $\sigma_A \geq \sigma_B$. We wish to show that
$w^2\sigma_A^2 + 2w(1-w)\sigma_A\sigma_B \rho_{AB}+ (1-w)^2\sigma_B^2\leq \sigma_A^2$
Note that
$\sigma_A^2 = \sigma_A^2 (w + (1-w)) ^2 = \sigma_A^2 w^2 + 2w(1-w)\sigma_A^2 + \sigma_A^2(1-w)^2$
Since $\sigma_A \geq \sigma_B$ and $w$, $(1-w)$, and $\sigma_A$ are nonnegative, this means that
$\sigma_A^2 \geq \sigma_A^2 w^2 + 2w(1-w)\sigma_A\sigma_B + \sigma_B^2(1-w)^2$
And since the correlation has the property that $-1 \leq \rho_{AB} \leq 1$ and $w$, $(1-w)$, $\sigma_B$ and $\sigma_A$ are all nonnegative, it must be the case that
$\sigma_A^2 w^2 + 2w(1-w)\sigma_A\sigma_B + \sigma_B^2(1-w)^2 \geq \sigma_A^2 w^2 + 2w(1-w)\sigma_A\sigma_B\rho_{AB} + \sigma_B^2(1-w)^2$
Therefore
$\sigma_A^2 \geq \sigma_A^2 w^2 + 2w(1-w)\sigma_A\sigma_B\rho_{AB} + \sigma_B^2(1-w)^2$ $\square$
In words: looking at the formula for the variance of a convex combination of random variables, the variance is maximized when the correlation between the assets is 1. In that case the portfolio standard deviation as a function of $w$ is the straight line segment between $\sigma_B$ and $\sigma_A$, which clearly never exceeds $\sigma_A$. If the correlation is less than 1, every combination lies below that straight-line case.

Intuitively, the returns of assets $A$ and $B$ partially cancel each other out whenever they are not a fixed multiple of each other, and this cancellation reduces the variance of the resulting portfolio. The worst case is perfect correlation, where no cancellation occurs, so the portfolio can never have a higher variance than the component asset with the higher variance.
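As a numerical sanity check of the inequality (volatility values here are illustrative, not from the question), one can scan a grid of weights and correlations:

```python
def portfolio_var(w, s_a, s_b, rho):
    """Variance of a two-asset portfolio with weight w on asset A."""
    return (w**2 * s_a**2
            + 2 * w * (1 - w) * s_a * s_b * rho
            + (1 - w)**2 * s_b**2)

s_a, s_b = 0.3, 0.2  # assumed volatilities with s_a >= s_b
for i in range(21):
    w = i / 20
    for rho in (-1.0, -0.5, 0.0, 0.5, 1.0):
        # the portfolio variance never exceeds the larger asset variance
        assert portfolio_var(w, s_a, s_b, rho) <= s_a**2 + 1e-12
print("inequality holds on the whole grid")
```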
This question already has an answer here:
I have a long equation that must fit in a two-column journal. I know how to break up the right-hand terms nicely, but even so it won't fit and I need the equal signs to start on a new line, but I can't figure out how to do it nicely.
Here is what I've tried:
\begin{align*}
p(\{K_{\text{pre}},\theta_{\text{b,pre}},\sigma_{\text{pre}},K_{\text{post}},\theta_{\text{b,post}},\sigma_{\text{post}}\}\mid D)\\
\begin{split}
  =\phantom{} & p(\{K_{\text{pre}},\theta_{\text{b,pre}},\sigma_{\text{pre}}\}\mid D)\\
              & \times p(\{K_{\text{post}},\theta_{\text{b,post}},\sigma_{\text{post}}\}\mid D)
\end{split}\\
\begin{split}
  =\phantom{} & p(\{K_{\text{pre}},\theta_{\text{b,pre}},\sigma_{\text{pre}}\}\mid D_{\text{pre}})\\
              & \times p(\{K_{\text{post}},\theta_{\text{b,post}},\sigma_{\text{post}}\}\mid D_{\text{post}})
\end{split}
\end{align*}
and the result:
Ideally I guess the right-hand lines should be "moved" to the left, but I'm not sure how to do that. I've seen the answers to How can I split an equation over two lines but here the first equal sign must come on a new line. |
Estimating a firm’s true market value presents a challenge for financial professionals and technical analysts. Researchers at the Bank of England have investigated this problem to understand how a firm’s true value is affected by periods of high market volatility.
A firm’s assets are subject to uncertainties such as profit flows and risk exposure. Similarly, default risk is driven by uncertain future asset values relative to promised payments on debt. Market value is typically estimated using mathematical models such as Black-Scholes/Merton, which are based on freely available but limited information about the firm, such as its market capitalization, the published face value of its debt, and the risk-free interest rate.
Financial crises have revealed shortcomings in these methodologies. One shortcoming is the inability of such models to capture relatively infrequent but extreme movements, or jumps, in observed time series. While jumps are usually observed only occasionally, they occur frequently during financial crises or other periods of market uncertainty. For example, the plot in Figure 1, showing the market capitalization of a major UK bank in the first half of 2007, reveals several jumps of varying magnitudes (both upwards and downwards).
One way to assess the effect of jumps on market value is to use a jump-diffusion model. This is a combination of two stochastic processes: one to model the usual behavior of a series and another to model the presence of randomly occurring jumps. This article describes a workflow in which MATLAB®, Statistics and Machine Learning Toolbox™, and Signal Processing Toolbox™ are used to estimate the parameters of a jump-diffusion model for a firm’s hidden market value, starting with freely available market data. The resulting model can be used to derive other series of interest, such as the default probability and the credit spread.

Creating a Jump-Diffusion Model
Jump-diffusion models are based on the standard geometric Brownian motion (GBM) diffusion model. A GBM model has two parameters: the drift (average trend) and the diffusion (volatility) of the process. These parameters can be used to model the distribution of continuously compounded (log) returns \(R_{t}\) for a given price series \(P_{t}\):
\[R_t = {\text{log}} \frac{P_{t+∆t}}{P_t} \sim N \Biggl(\left(μ - \frac{σ^2}{2} \right)∆t, σ^2∆t \Biggr), \]
where \(∆t\) is the time increment, \(μ\) is the drift parameter, and \(σ\) is the diffusion parameter. The model assumes that the log returns are normally distributed with mean \((μ - {σ^2\over 2}) ∆t\) and variance \(σ^2∆t\).
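A quick simulation confirms this distribution of one-step GBM log returns (a NumPy sketch for illustration, not part of the article's MATLAB workflow; parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, dt, n = 0.05, 0.2, 1 / 252, 200_000  # illustrative parameters

# one-step log returns under GBM: (mu - sigma^2/2) dt + sigma sqrt(dt) Z
z = rng.standard_normal(n)
log_ret = (mu - sigma**2 / 2) * dt + sigma * np.sqrt(dt) * z

print(log_ret.mean(), (mu - sigma**2 / 2) * dt)  # sample vs. theoretical mean
print(log_ret.var(), sigma**2 * dt)              # sample vs. theoretical variance
```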
Our jump-diffusion model extends the GBM model by introducing random jumps. The jumps \(J_{k}\) are a sequence of i.i.d. lognormal random variables: \(\text{log} J_{k} \sim N(μ_J,σ_J^2) \). The arrival of the jumps is modeled by a Poisson process \(N_{t}\) with rate \(λ\). The resulting dynamics of the time-series model are:
\[R_t = {\text{log}} \frac{P_t}{P_0} = \left(μ - \frac{σ^2}{2} \right)t + σW_t + \sum_{k=1}^{N_t}\ \text{log}J_k, \]
where \(W_{t}\) is a Wiener process. To estimate the model numerically, we discretize this continuous-time equation over time intervals \([t,t + ∆t]\). We assume that the time increment \(∆t\) is such that the probability of more than one jump occurring in \([t,t + ∆t]\) is negligible.
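The discretized dynamics can be simulated in a few lines (a NumPy sketch, not the article's MATLAB code; here each step draws a Poisson count of jumps, whose normal log sizes simply add):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.05, 0.2                  # GBM drift and diffusion
lam, mu_j, sigma_j = 3.0, -0.05, 0.1   # jump rate and lognormal jump parameters
dt, steps = 1 / 252, 252

z = rng.standard_normal(steps)
k = rng.poisson(lam * dt, steps)       # number of jumps in each step
# sum of k i.i.d. N(mu_j, sigma_j^2) log jump sizes ~ N(k*mu_j, k*sigma_j^2)
jump_term = k * mu_j + np.sqrt(k) * sigma_j * rng.standard_normal(steps)
log_ret = (mu - sigma**2 / 2) * dt + sigma * np.sqrt(dt) * z + jump_term
path = 100.0 * np.exp(np.cumsum(log_ret))   # price path starting from P0 = 100
```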
As with all mathematically complex models, jump-diffusion models present several computational challenges—for example, achieving convergence—and require careful analysis of the optimization process. With MATLAB, we can express the equations intuitively, with minimal coding; estimate the model parameters robustly; and track the convergence of the optimization procedures.
Estimating the Model Parameters
There are five model parameters to estimate:
\(μ\) – the drift parameter of the GBM component
\(σ\) – the diffusion parameter of the GBM component
\(λ\) – the arrival rate of the jumps in the Poisson process
\(μ_j\) – the lognormal location parameter for the jump sizes
\(σ_j\) – the lognormal scale parameter for the jump sizes
We can estimate the last three parameters directly from the available time series data (assuming that the underlying market value of the company exhibits characteristics similar to those of the observable market capitalization). We can use the findchangepts function in Signal Processing Toolbox to automatically identify the points within a series where abrupt changes occur (Figure 2). In financial time series, we would expect structural changes to occur when the mean or standard deviation of the series changes significantly. Looking for points where the standard deviation changes is especially important when studying financial crisis periods or other periods of high volatility.
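findchangepts itself is a Signal Processing Toolbox function; to convey the idea outside MATLAB, a minimal single-change-point detector for a shift in standard deviation can be sketched as follows (an illustrative toy, far simpler than findchangepts):

```python
import numpy as np

def best_std_changepoint(x, min_seg=10):
    """Index that best splits x into two segments with different standard
    deviations, by minimizing the Gaussian log-variance cost of the split."""
    n = len(x)
    best, best_cost = None, np.inf
    for i in range(min_seg, n - min_seg):
        left, right = x[:i], x[i:]
        cost = i * np.log(left.var()) + (n - i) * np.log(right.var())
        if cost < best_cost:
            best, best_cost = i, cost
    return best

# synthetic series: quiet regime followed by a high-volatility regime
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1.0, 300), rng.normal(0, 3.0, 200)])
print(best_std_changepoint(x))  # expected near 300, the true change point
```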
Estimation is based on the Black-Scholes/Merton model, where \(μ\) is a function of \(σ\). To perform the optimization, we use the mle function in Statistics and Machine Learning Toolbox for maximum-likelihood estimation, specifying the negative log-likelihood function and the parameter bound constraints as inputs. The value of the likelihood function is ultimately determined by a single unknown parameter, \(σ\). Since the market value is not observable, we begin the estimation process by fitting the jump-diffusion model to the observed market capitalization series to produce an initial estimate of the market value series. Using this initial estimate, we apply the process iteratively until the parameter values stabilize.
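MATLAB's mle accepts a user-supplied negative log-likelihood; in the same spirit, the jump-diffusion density is a Poisson-weighted mixture of normals, which can be sketched with SciPy (a sketch only; function and parameter names are ours, and the article's actual likelihood also folds in the Black-Scholes/Merton structure):

```python
import numpy as np
from scipy.stats import norm, poisson

def neg_loglik(params, r, dt, k_max=10):
    """Negative log-likelihood of log returns r under the jump-diffusion
    model: each return is a Poisson-weighted mixture over the number of
    jumps k occurring in the interval of length dt."""
    mu, sigma, lam, mu_j, sigma_j = params
    dens = np.zeros_like(r)
    for k in range(k_max + 1):
        mean = (mu - sigma**2 / 2) * dt + k * mu_j
        var = sigma**2 * dt + k * sigma_j**2
        dens += poisson.pmf(k, lam * dt) * norm.pdf(r, mean, np.sqrt(var))
    return -np.sum(np.log(dens))
```

scipy.optimize.minimize with bound constraints can then play the role that mle plays in the MATLAB workflow.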
In models of implied market value and debt, the value of a firm is divided into assets that go to the equity holders and assets that go to the debt holders. When the debt falls due, if the assets are sufficient to repay the liabilities, then the excess value goes to the equity holders. If not, the equity holders receive nothing.
The value of debt is equivalent to a risk-free debt holding plus a short put option on the value of the assets: If the assets are more than enough to pay off the liabilities, then the debt holders receive the full value of the debt. If the assets are not enough to repay the liabilities, then the debt holders receive the full value of the assets. To the extent that the debt may not be repaid in full, it is deemed risky. Debt holders receive a put option premium in the form of a credit spread above the risk-free rate of interest in return for holding risky debt.
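In the Merton setting this decomposition is standard Black-Scholes: equity is a call option on the assets, and risky debt is risk-free debt minus a put. A stdlib-only sketch (function names and inputs are illustrative):

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_equity_debt(V, D, r, sigma, T):
    """Merton model: equity = call on assets; debt = risk-free debt - put.
    V: asset value, D: face value of debt, r: risk-free rate,
    sigma: asset volatility, T: time to maturity."""
    d1 = (math.log(V / D) + (r + sigma**2 / 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    equity = V * phi(d1) - D * math.exp(-r * T) * phi(d2)
    put = D * math.exp(-r * T) * phi(-d2) - V * phi(-d1)
    debt = D * math.exp(-r * T) - put
    return equity, debt

E, B = merton_equity_debt(V=120.0, D=100.0, r=0.02, sigma=0.25, T=1.0)
print(E + B)  # equity plus debt recovers the asset value, 120.0
```

The identity E + B = V holds by put-call parity, which is a useful check on any implementation.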
The asset value satisfies an implicit equation involving the market capitalization and the value of risky debt, which, in turn, is a function of the asset value and other variables, such as the risk-free interest rate. Within the maximum-likelihood estimation process, we solve this implicit equation for the asset value using the fzero function in MATLAB. After convergence, we plot the negative log-likelihood function in a neighborhood of the candidate solution point to verify that mle identified a local minimum (Figure 3).
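The role fzero plays can be illustrated with a simple bisection: given the observed market capitalization, solve equity(V) = E_obs for the asset value V (a sketch with illustrative names; the article's actual implicit equation also involves the risky-debt value):

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def equity(V, D, r, sigma, T):
    """Equity as a Black-Scholes call on the asset value (Merton model)."""
    d1 = (math.log(V / D) + (r + sigma**2 / 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return V * phi(d1) - D * math.exp(-r * T) * phi(d2)

def implied_asset_value(E_obs, D, r, sigma, T, lo=1e-8, hi=1e6, tol=1e-10):
    """Bisection on V: equity(V) is increasing in V, so the root is unique."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if equity(mid, D, r, sigma, T) < E_obs:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

V = implied_asset_value(E_obs=30.0, D=100.0, r=0.02, sigma=0.25, T=1.0)
print(V, equity(V, 100.0, 0.02, 0.25, 1.0))  # recovered V; its equity is ~30
```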
Inferring Market Value
After fitting the model, we can use it to infer the underlying market value of the asset and related quantities such as the value of the implicit put option on the asset and the asset’s leverage. Figure 4 shows these time series. As expected, we see that the value of the put option rises rapidly as the market capitalization and inferred market value of the asset drop. The leverage, a measure of the value-to-debt ratio, also increases as the asset value drops.
Having developed and implemented a procedure for estimating the parameters of a jump-diffusion model, we can use the MATLAB Live Editor to share the results with colleagues as a live script. The process can be applied to various time series representing different assets and asset classes. The range of potential applications is broad, because many distinct financial series exhibit jumps during crisis periods and periods of high market uncertainty. |