The Annals of Probability Ann. Probab. Volume 25, Number 4 (1997), 1621-1635. Strong approximation theorems for geometrically weighted random series and their applications Abstract
Let $\{X_n; n\geq 0\}$ be a sequence of random variables. We consider its geometrically weighted series $\xi(\beta)=\sum_{n=0}^\infty \beta^n X_n$ for $0<\beta < 1$. This paper proves that $\xi(\beta)$ can be approximated by $\sum_{n=0}^\infty \beta^n Y_n$ under some suitable conditions, where $\{Y_n; n \geq 0\}$ is a sequence of independent normal random variables. Applications to the law of the iterated logarithm for $\xi(\beta)$ are also discussed.
Article information Source Ann. Probab., Volume 25, Number 4 (1997), 1621-1635. Dates First available in Project Euclid: 7 June 2002 Permanent link to this document https://projecteuclid.org/euclid.aop/1023481105 Digital Object Identifier doi:10.1214/aop/1023481105 Mathematical Reviews number (MathSciNet) MR1487430 Zentralblatt MATH identifier 0903.60017 Citation
Zhang, Li-Xin. Strong approximation theorems for geometrically weighted random series and their applications. Ann. Probab. 25 (1997), no. 4, 1621--1635. doi:10.1214/aop/1023481105. https://projecteuclid.org/euclid.aop/1023481105
|
The Kunen inconsistency The Kunen inconsistency, the theorem showing that there can be no nontrivial elementary embedding from the universe to itself, remains a focal point of large cardinal set theory, marking a hard upper bound at the summit of the main ascent of the large cardinal hierarchy, the first outright refutation of a large cardinal axiom. On this main ascent, large cardinal axioms assert the existence of elementary embeddings $j:V\to M$ where $M$ exhibits increasing affinity with $V$ as one climbs the hierarchy. The $\theta$-strong cardinals, for example, have $V_\theta\subset M$; the $\lambda$-supercompact cardinals have $M^\lambda\subset M$; and the huge cardinals have $M^{j(\kappa)}\subset M$. The natural limit of this trend, first suggested by Reinhardt, is a nontrivial elementary embedding $j:V\to V$, the critical point of which is accordingly known as a Reinhardt cardinal. Shortly after this idea was introduced, however, Kunen famously proved that there are no such embeddings, and hence no Reinhardt cardinals in ZFC.
Since that time, the inconsistency argument has been generalized by various authors, including Harada [1](p. 320-321), Hamkins, Kirmayer and Perlmutter [2], Woodin [1](p. 320-321), Zapletal [3] and Suzuki [4, 5].
There is no nontrivial elementary embedding $j:V\to V$ from the set-theoretic universe to itself.
There is no nontrivial elementary embedding $j:V[G]\to V$ of a set-forcing extension of the universe to the universe, and neither is there $j:V\to V[G]$ in the converse direction.
More generally, there is no nontrivial elementary embedding between two ground models of the universe.
More generally still, there is no nontrivial elementary embedding $j:M\to N$ when both $M$ and $N$ are eventually stationary correct.
There is no nontrivial elementary embedding $j:V\to \text{HOD}$, and neither is there $j:V\to M$ for a variety of other definable classes, including gHOD and the $\text{HOD}^\eta$, $\text{gHOD}^\eta$.
If $j:V\to M$ is elementary, then $V=\text{HOD}(M)$.
There is no nontrivial elementary embedding $j:\text{HOD}\to V$. More generally, for any definable class $M$, there is no nontrivial elementary embedding $j:M\to V$.
There is no nontrivial elementary embedding $j:\text{HOD}\to\text{HOD}$ that is definable in $V$ from parameters.
It is not currently known whether the Kunen inconsistency may be undertaken in ZF. Nor is it known whether one may rule out nontrivial embeddings $j:\text{HOD}\to\text{HOD}$ even in ZFC.
Metamathematical issues
Kunen formalized his theorem in Kelley-Morse set theory, but it is also possible to prove it in the weaker system of Gödel-Bernays set theory. In each case, the embedding $j$ is a GBC class, and the elementarity of $j$ is asserted by stating that $j$ is a $\Sigma_1$-elementary embedding, which implies $\Sigma_n$-elementarity when the two models have the same ordinals.
Reinhardt cardinal
Although the existence of Reinhardt cardinals has now been refuted in ZFC and GBC, the term is used in the ZF context to refer to the critical point of a nontrivial elementary embedding $j:V\to V$ of the set-theoretic universe to itself.
References
Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009. (Paperback reprint of the 2003 edition.)
Zapletal, Jindrich. A new proof of Kunen's inconsistency. Proc. Amer. Math. Soc. 124(7):2203-2204, 1996.
Suzuki, Akira. Non-existence of generic elementary embeddings into the ground model. Tsukuba J. Math. 22(2):343-347, 1998.
Suzuki, Akira. No elementary embedding from $V$ into $V$ is definable from parameters. J. Symbolic Logic 64(4):1591-1594, 1999.
|
The Erdos-Rado sunflower lemma
The problem
A sunflower (a.k.a. Delta-system) of size [math]r[/math] is a family of sets [math]A_1, A_2, \dots, A_r[/math] such that every element that belongs to more than one of the sets belongs to all of them. A basic and simple result of Erdos and Rado asserts the following.
Erdos-Rado Delta-system theorem: There is a function [math]f(k,r)[/math] so that every family [math]\cal F[/math] of [math]k[/math]-sets with more than [math]f(k,r)[/math] members contains a sunflower of size [math]r[/math].
(We denote by [math]f(k,r)[/math] the smallest integer that suffices for the assertion of the theorem to be true.) The simple proof giving [math]f(k,r)\le k! (r-1)^k[/math] can be found here.
The best known general upper bound on [math]f(k,r)[/math] (in the regime where [math]r[/math] is bounded and [math]k[/math] is large) is
[math]\displaystyle f(k,r) \leq D(r,\alpha) k! \left( \frac{(\log\log\log k)^2}{\alpha \log\log k} \right)^k[/math]
for any [math]\alpha \lt 1[/math], and some [math]D(r,\alpha)[/math] depending on [math]r,\alpha[/math], proven by Kostochka in 1996. The objective of this project is to improve this bound, ideally to obtain the Erdos-Rado conjecture
[math]\displaystyle f(k,r) \leq C^k [/math]
for some [math]C=C(r)[/math] depending on [math]r[/math] only. This is known for [math]r=1,2[/math] (indeed we have [math]f(k,r)=r-1[/math] in those cases) but remains open for larger [math]r[/math].
Variants and notation
Given a family [math]\cal F[/math] of sets and a set S, the star of S is the subfamily of those sets in [math]\cal F[/math] containing S, and the link of S is obtained from the star of S by deleting the elements of S from every set in the star. (We use the terms link and star because we eventually want to consider hypergraphs as geometric/topological objects.)
We can restate the delta system problem as follows: f(k,r) is the maximum size of a family of k-sets such that the link of every set A does not contain r pairwise disjoint sets.
Let f(k,r;m,n) denote the largest cardinality of a family of k-sets from {1,2,…,n} such that the link of every set A of size at most m-1 does not contain r pairwise disjoint sets. Thus f(k,r) = f(k,r;k,n) for n large enough.
Conjecture 1: [math]f(k,r;m,n) \leq C_r^k n^{k-m}[/math] for some [math]C_r[/math] depending only on r.
This conjecture implies the Erdos-Rado sunflower conjecture (set m=k). The Erdos-Ko-Rado theorem asserts that
[math]f(k,2;1,n) = \binom{n-1}{k-1}[/math] (1)
when [math]n \geq 2k[/math], which is consistent with Conjecture 1. More generally, Erdos, Ko, and Rado showed
[math]f(k,2;m,n) = \binom{n-m}{k-m}[/math]
when [math]n[/math] is sufficiently large depending on k,m. The case of smaller n was treated by several authors culminating in the work of Ahlswede and Khachatrian.
Erdos conjectured that
[math]f(k,r;1,n) = \max( \binom{rk-1}{k}, \binom{n}{k} - \binom{n-r}{k} )[/math]
for [math]n \geq rk[/math], generalising (1), and again consistent with Conjecture 1. This was established for k=2 by Erdos and Gallai, and for k=3 by Frankl (building on work by Luczak-Mieczkowska).
A family of k-sets is balanced (or k-colored) if it is possible to color the elements with k colors so that every set in the family is colorful. Reduction (folklore): It is enough to prove the Erdos-Rado Delta-system conjecture for the balanced case. Proof: Divide the elements into k color classes at random and take only the colorful sets. The expected number of surviving colorful sets is [math]k!/k^k \cdot |\cal F|[/math]. Hyperoptimistic conjecture: The maximum size of a balanced collection of k-sets without a sunflower of size r is (r-1)^k. Disproven for [math]k=3,r=3[/math]: set [math]|V_1|=|V_2|=|V_3|=3[/math] and use ijk to denote the 3-set consisting of the i^th element of V_1, the j^th element of V_2, and the k^th element of V_3. Then 000, 001, 010, 011, 100, 101, 112, 122, 212 is a balanced family of 9 3-sets without a 3-sunflower (a brute-force check is sketched below).
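For readers who want to experiment, here is a minimal brute-force verification of the counterexample above; it is only a sketch in Python, and the encoding of elements as (position, symbol) pairs is our own choice, not part of the original construction.

```python
from itertools import combinations

def is_sunflower(sets):
    """Every element lying in more than one of the sets must lie in all of them."""
    core = frozenset.intersection(*sets)
    seen = set()
    for s in sets:
        for x in s - core:
            if x in seen:          # x is in at least two sets but not in the core
                return False
            seen.add(x)
    return True

# the nine balanced 3-sets 000, 001, ..., 212 from the counterexample
words = ["000", "001", "010", "011", "100", "101", "112", "122", "212"]
family = [frozenset((pos, ch) for pos, ch in enumerate(w)) for w in words]

print(any(is_sunflower(t) for t in combinations(family, 3)))  # False: no 3-sunflower
```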
A
weak sunflower ( weak Delta-system) of size [math]r[/math] is a family of [math]r[/math] sets, [math] A_1,\ldots,A_r[/math], such that their pairwise intersections have the same size, i.e., [math] |A_i\cap A_j|=|A_{i'}\cap A_{j'}|[/math] for every [math] i\ne j[/math] and [math] i'\ne j'[/math]. If we denote the size of the largest family of [math]k[/math]-sets without an [math]r[/math]-weak sunflower by [math]g(k,r)[/math], by definition we have [math]g(k,r)\le f(k,r)[/math]. Also, if we denote by [math]R_r(k)-1[/math] the size of the largest complete graph whose edges can be colored with [math]r[/math] colors such that there is no monochromatic clique on [math]k[/math] vertices, then we have [math]g(k,r)\le R_r(k)-1[/math], as we can color the edges running between the [math]k[/math]-sets of our weak sunflower-free family with the intersection sizes. For all three functions only exponential lower bounds and factorial type upper bounds are known.
Denote by [math]3DES(n)[/math] the largest integer such that there is a group of size [math]n[/math] and a subset [math]S[/math] of size [math]3DES(n)[/math] without three
disjoint equivoluminous subsets, i.e., there is no [math]S=S_1\cup^* S_2\cup^* S_3\cup^* S_{rest}[/math] such that [math]\sum_{s\in S_1} s=\sum_{s\in S_2} s=\sum_{s\in S_3} s[/math]. Then [math]{3DES(n) \choose DES(n)} / n \le g(DES(n),3)[/math] holds, thus if [math]g(k,3)[/math] grows exponentially, then [math]3DES(n)=O(\log n)[/math].
Small values
Below is a collection of known constructions for small values, taken from Abbott-Exoo. Boldface stands for a matching upper bound (and best known upper bounds are planned to be added to other entries). Also note that for [math]k[/math] fixed we have [math]f(k,r)=r^k+o(r^k)[/math] from Kostochka-Rödl-Talysheva.
r\k | 2 | 3 | 4 | 5 | 6 | ...k
3 | 6 | 20 | 54- | 160- | 600- | ~3.16^k
4 | 10 | 38- | 114- | 380- | 1444- | ~3.36^k
5 | 20 | 88- | 400- | 1760- | 8000- | ~4.24^k
6 | 27 | 146- | 730- | 3942- | 21316- | ~5.26^k
Threads
Polymath10: The Erdos Rado Delta System Conjecture, Gil Kalai, Nov 2, 2015. Inactive
Polymath10, Post 2: Homological Approach, Gil Kalai, Nov 10, 2015. Inactive
Polymath 10 Post 3: How are we doing?, Gil Kalai, Dec 8, 2015. Inactive
Polymath10-post 4: Back to the drawing board?, Gil Kalai, Jan 31, 2016. Active
Erdos-Ko-Rado theorem (Wikipedia article)
Sunflower (mathematics) (Wikipedia article)
What is the best lower bound for 3-sunflowers? (Mathoverflow)
Bibliography
Edits to improve the bibliography (by adding more links, Mathscinet numbers, bibliographic info, etc.) are welcome!
On set systems not containing delta systems, H. L. Abbott and G. Exoo, Graphs and Combinatorics 8 (1992), 1-9.
On finite Δ-systems, H. L. Abbott and D. Hanson, Discrete Math. 8 (1974), 1-12.
On finite Δ-systems II, H. L. Abbott and D. Hanson, Discrete Math. 17 (1977), 121-126.
Intersection theorems for systems of sets, H. L. Abbott, D. Hanson, and N. Sauer, J. Comb. Th. Ser. A 12 (1972), 381-389.
Hodge theory for combinatorial geometries, Karim Adiprasito, June Huh, and Erick Katz.
The Complete Nontrivial-Intersection Theorem for Systems of Finite Sets, R. Ahlswede, L. Khachatrian, Journal of Combinatorial Theory, Series A 76, 121-138 (1996).
On set systems without weak 3-Δ-subsystems, M. Axenovich, D. Fon-Der-Flaass, A. Kostochka, Discrete Math. 138 (1995), 57-62.
Intersection theorems for systems of finite sets, P. Erdős, C. Ko, R. Rado, The Quarterly Journal of Mathematics, Oxford, Second Series 12 (1961), 313-320.
Intersection theorems for systems of sets, P. Erdős, R. Rado, Journal of the London Mathematical Society, Second Series 35 (1960), 85-90.
On the Maximum Number of Edges in a Hypergraph with Given Matching Number, P. Frankl.
An intersection theorem for systems of sets, A. V. Kostochka, Random Structures and Algorithms 9 (1996), 213-221.
Extremal problems on Δ-systems, A. V. Kostochka.
On Systems of Small Sets with No Large Δ-Subsystems, A. V. Kostochka, V. Rödl, and L. A. Talysheva, Comb. Probab. Comput. 8 (1999), 265-268.
On Erdos' extremal problem on matchings in hypergraphs, T. Luczak, K. Mieczkowska.
Intersection theorems for systems of sets, J. H. Spencer, Canad. Math. Bull. 20 (1977), 249-254.
|
Partial Derivatives
Definitions and Examples
Partial derivatives help us track the change of multivariable functions by dealing with one variable at a time. If we think of $z=f(x,y)$ as a surface in 3-space, we can discuss movement in the $x$-direction or the $y$-direction, and see how this movement affects $z$. We do this by taking partial derivatives, differentiating with respect to one variable while holding the other fixed.
Definitions and notation
The definition of partial derivativesThe partial derivative of $f$ with respect to $x$ is $\displaystyle{ \lim_{h \rightarrow 0} \frac{f(x+h,y)-f(x,y)}{h}}.$
The partial derivative of $f$ with respect to $y$ is $\displaystyle{ \lim_{h \rightarrow 0} \frac{f(x,y+h)-f(x,y)}{h}}.$
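As a quick numerical illustration (a sketch, not part of the course materials, using the function of Example 2 below), the difference quotients in these definitions approach the partial derivatives as $h \to 0$:

```python
import math

def f(x, y):
    return math.sin(x + y ** 2)        # the function from Example 2 below

x0, y0 = 1.0, 2.0
for h in (1e-1, 1e-3, 1e-5):
    fx = (f(x0 + h, y0) - f(x0, y0)) / h   # difference quotient for f_x
    fy = (f(x0, y0 + h) - f(x0, y0)) / h   # difference quotient for f_y
    print(h, fx, fy)

# exact values: f_x = cos(x + y^2), f_y = 2y*cos(x + y^2)
print(math.cos(x0 + y0 ** 2), 2 * y0 * math.cos(x0 + y0 ** 2))
```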
In general we do not use these definitions to compute partial derivatives.
There are many notations for partial derivatives. If $z = f(x,y)$, here are some, but not all, of the notations:
The partial derivative of $f$ with respect to $x$: $\displaystyle f_x(x,y) = f_x = \frac{\partial f}{\partial x}= \frac{\partial}{\partial x}f(x,y) = \frac{\partial z}{\partial x}= f_1$.
The partial derivative of $f$ with respect to $y$: $\displaystyle f_y(x,y) = f_y = \frac{\partial f}{\partial y}= \frac{\partial}{\partial y}f(x,y) = \frac{\partial z}{\partial y}= f_2$.
We can evaluate these partial derivatives at particular values of $x$ and $y$, e.g. $f_x(2,7)=\frac{\partial f}{\partial x}\big|_{(2,7)}$ or $f_y(1,-10)$.
To find $f_x$, hold $y$ constant and differentiate with respect to $x$. To find $f_y$, hold $x$ constant and differentiate with respect to $y$. Literally, when computing $f_y$ we treat $x$ as a constant because
it is a constant. This appears as a slice of 3-space. We can slice 3 dimensions at a particular $x$-value, say we slice at $x=1$ (see the previous page for a graphic example); such a slice is parallel to the $yz$-plane. The $x$-value is the same everywhere in this slice -- it's constant. Then we observe what happens to $z$ as $y$ changes. This procedure makes computing partial derivatives very simple.

Example 2: Compute $f_x$ and $f_y$ when $f(x,y) = \sin(x+y^2)$.

Solution 2: We must use the chain rule here. Since the derivative with respect to $x$ of $\sin(x + \hbox{ constant })$ is $\cos(x + \hbox{ constant })\cdot(1+0)$, and since the derivative with respect to $y$ of $\sin(\hbox{constant} + y^2)$ is $\cos(\hbox{constant }+y^2)\cdot(0+2y)$, we get $\displaystyle f_x = \cos(x+y^2)$ and $\displaystyle f_y = 2y \cos(x+y^2).$

Example 3: Compute both partial derivatives of $\tan{\left(xy^2+7\right)}$.

Example 4: Find $\displaystyle\frac{\partial f}{\partial x},\frac{\partial f}{\partial y}$ for $f(x,y)=\sqrt{x^2-5y}\left(\ln{xy}\right)$.

Example 5: Find $f_x(-1,2)$ and $f_y(-1,2)$ for $f(x,y)=3x^2-4y^3-7x^2y^3$ from Example 1 above. What do these numbers mean?

DO: Try to compute these derivatives before looking ahead.

Solution 3: We must use the chain rule. $\displaystyle\frac{\partial}{\partial x}(\tan(xy^2+7))=\sec^2(xy^2+7)\frac{\partial}{\partial x}(xy^2+7)=\sec^2(xy^2+7)(y^2)$.
$\displaystyle\frac{\partial}{\partial y}(\tan(xy^2+7))=\sec^2(xy^2+7)\frac{\partial}{\partial y}(xy^2+7)=\sec^2(xy^2+7)(2xy)$.
Solution 4: We must use the product rule and the chain rule: $\displaystyle\frac{\partial f}{\partial x}=\frac12(x^2-5y)^{-1/2}(2x-0)\ln(xy)+\sqrt{x^2-5y}\ \frac{y}{xy}=\frac{x}{\sqrt{x^2-5y}}\ln(xy)+\frac{\sqrt{x^2-5y}}{x}$.
$\displaystyle\frac{\partial f}{\partial y}=\frac12(x^2-5y)^{-1/2}(0-5)\ln(xy)+\sqrt{x^2-5y}\ \frac{x}{xy}=-\frac{5\ln(xy)}{2\sqrt{x^2-5y}}+\frac{\sqrt{x^2-5y}}{y}$.

Solution 5: Since $f_x(x,y)=6x-14xy^3$, $f_x(-1,2)=6(-1)-14(-1)2^3=-6+112=106$. And since $f_y(x,y)=-12y^2-21x^2y^2$, $f_y(-1,2)=-12\cdot 2^2-21(-1)^2\cdot 2^2=-48-84=-132$. This means that if we stand at the point $(-1,2)$ and look in the positive $x$ direction, $z=f(x,y)$ is heading upward, but if we look in the positive $y$ direction $z$ is heading downward.
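If you have a computer algebra system handy, the solutions above can be double-checked symbolically. The following sketch assumes the Python library sympy is available; it is an editorial aside, not part of the course materials.

```python
import sympy as sp

x, y = sp.symbols("x y")

# Example 3
f3 = sp.tan(x * y**2 + 7)
print(sp.diff(f3, x))    # y**2*(tan(x*y**2 + 7)**2 + 1), i.e. y^2 * sec^2(xy^2+7)
print(sp.diff(f3, y))    # 2*x*y*(tan(x*y**2 + 7)**2 + 1)

# Example 4
f4 = sp.sqrt(x**2 - 5 * y) * sp.log(x * y)
print(sp.simplify(sp.diff(f4, x)))
print(sp.simplify(sp.diff(f4, y)))

# Example 5, evaluated at (-1, 2)
f5 = 3 * x**2 - 4 * y**3 - 7 * x**2 * y**3
print(sp.diff(f5, x).subs({x: -1, y: 2}))   # 106
print(sp.diff(f5, y).subs({x: -1, y: 2}))   # -132
```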
|
Classification trees are used, as the name suggests, in solving classification problems. Here are some definitions and Matlab tips to help you dabble in this subject.
The objective of any problem of this nature is to assign an object to one of a number of specified categories or classes. Classification is a useful technique that’s widely applied in many other fields, such as
pattern recognition.
A tree is essentially a set of sequential conditions and actions that relate certain factors with a result or decision. It is a supervised classification method, meaning that data are classified a priori and from this, the knowledge may be extracted.
First of all, the algorithm analyses the data provided in search of patterns, and then uses the result of this analysis
to define sequences and conditions in order to create the classification model.
In this post we go over the main theoretical concepts and translate them to
Matlab, which includes the tools required to work with trees in its Statistics Toolbox. I hope you find it useful. What are classification trees?
A
decision tree is a way of representing knowledge obtained in the inductive learning process. The space is split using a set of conditions, and the resulting structure is the tree.
These conditions are created from a series of characteristics or features, the explained variables:
We initialise the matrix
a with features in Matlab. We define the function attributes to encompass the characteristics that we want the tree to consider.
If we are working with financial series, some examples of features are volatility, the efficiency of the movements, etc.
Each set of features is assigned to a response or class, which it attempts to explain. This classification has been done using
future information; hence historical data can be categorised in this way, but it is impossible to do so with new data:
According to the class type, we have
classification trees with discrete classes and regression trees with continuous classes. We focus on the binary case, as this is the condition for the Matlab function we are going to use. Examples of a binary class are the price moving upwards or downwards, the price moving above a certain value or not, or the market moving sideways versus trending.
We label these classes as -1 and 1 from here onwards. We define our function of classes as a vector of responses for each data in
x:
What are classification trees like?
A tree is made up of nodes:
Internal nodes: Each internal node contains a question about a specific feature (node = <attribute,value>) and it provides two children, one for each possible answer, classification or decision. The questions are of the type: \(a_i \geq j\) ? End nodes (leaves): The ones that are assigned to a single class at the bottom of the tree. The complexity of the tree is established by the number of leaves.
How are classification trees built?
The construction of a tree is the
learning stage of the method. It consists of analysing a set of available features and obtaining some logical rules adapted to the known classification; the classes that are assigned to each vector \((a_1,\ldots,a_s)\).
The construction process is recursive:
1. All the possible partitions (attribute, value) are analysed, and from them we take the one with the best separation.
2. The optimum separation is applied.
3. Step 1 is repeated with the children nodes, only for the ones that are not leaves.
All partitions?
Yes, all of them. For each one of the features, the values of the observations are used.
The partitions are defined by using the midpoint between each two values as the cutoffs.
Which is the optimum separation?
The best separation is the one that divides the data into groups such that there is one dominant class
(child node 1 => -1 and child node 2 => 1).
The so-called
impurity measures are used and, from all possible partitions, we choose the one that minimises the impurity of the two child nodes.
We use
Gini diversity index, although there are many other impurity measures, amongst them, we highlight entropy and information gain.
The impurity is calculated as the product of the probabilities of belonging to one class or the other.
When these probabilities are very similar, it means that the separation doesn't coincide well with the classification defined by the vector of classes. The more similar the probabilities are, the more impurity there is and the greater the Gini index.
The Gini index of a node a(ij) = (attribute i, value j) is calculated by adding together the impurities of the children nodes:
$$G(a_{ij})=P(a_{i}<j)\cdot G(c|a_{i}<j) + P(a_{i}\ge j)\cdot G(c|a_{i}\ge j)$$
$$G(c|a_{ij})=P(c=1|a_{ij})\cdot P(c\neq 1|a_{ij})+P(c=-1|a_{ij})\cdot P(c\neq -1|a_{ij})=1-\sum_{c_k} P(c_{k}|a_{ij})^{2}$$
A pure node has a Gini index of 0.
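The post works in Matlab, but as a language-neutral illustration here is a small sketch in Python, with made-up feature values and classes, of the Gini criterion and the midpoint cutoffs described above (the numbers are invented for the example, not taken from the post):

```python
import numpy as np

def gini(classes):
    """Gini impurity of a set of class labels: 1 - sum_k P(c_k)^2."""
    if classes.size == 0:
        return 0.0
    _, counts = np.unique(classes, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_gini(a_i, c, j):
    """Weighted impurity of the two children produced by the question a_i >= j."""
    left, right = c[a_i < j], c[a_i >= j]
    n = len(c)
    return (len(left) / n) * gini(left) + (len(right) / n) * gini(right)

a_i = np.array([0.2, 0.5, 0.9, 1.4, 1.7, 2.3])   # one feature, made-up values
c = np.array([-1, -1, -1, 1, 1, 1])              # the classes, labelled -1 and 1
cutoffs = (a_i[:-1] + a_i[1:]) / 2               # midpoints between observed values

best = min(cutoffs, key=lambda j: split_gini(a_i, c, j))
print(best, split_gini(a_i, c, best))            # a perfect split has impurity 0
```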
In this way, the importance of the features is established. It's possible that the algorithm leaves some of the characteristics we had chosen out of the tree definition. We therefore interpret such a feature as irrelevant, since the decision is independent of its value.
In the same way, the features in the lower levels also have less importance.
By analysing the tree structure we can infer the interest of each of the chosen explanatory variables.
When is a node a leaf?
If the node contains a single class, then it is pure and the process is complete. However, it’s typical to use additional stop criteria, such as when the number of data within the node is too small. In such cases, progressing with its classification isn’t deemed relevant.
Furthermore, no further divisions are allowed if the impurity of the children is not better than the impurity of the parent: if we reach a node where any possible separation leads to children nodes with too few values, the partition would not be carried out either.
How should we use classification trees?
One of the greatest advantages of classification trees is that they are intuitive and easy to apply.
The application of the tree is known as
the classification stage, and consists of assigning the class corresponding to a new set of features, independent from the learning set.
Once created, it's straightforward to find the unknown class of a new value x with characteristics \((a_1,\ldots,a_s)\). One simply has to answer the posed questions at each node and follow the path mapped out by our tree, until a leaf node is reached. Class c, the most common class in the leaf to which the value x belongs, is the predicted class.
The percentage of data with the most common class in the leaf for
x is the accuracy of the classification.
The result of the model classification provided by the tree is not merely a decision, but it is also its own certainty, since the probabilities of all the possible alternatives are also available:
|
Iterative solution to singular nth-order nonlocal boundary value problems
Boundary Value Problems volume 2015, Article number: 125 (2015)
Abstract
By using cone theory and the Banach contraction mapping principle, we study the existence and uniqueness of an iterative solution to singular nth-order nonlocal boundary value problems.
Introduction
The boundary value problems (BVPs for short) for nonlinear differential equations arise in a variety of areas of applied mathematics, physics, and variational problems of control theory. The nonlocal BVPs have been studied extensively. The methods used therein mainly depend on the fixed-point theorems, degree theory, upper and lower techniques, and monotone iteration. Many existence, uniqueness, and multiplicity results have been obtained. For instance, see [1–19] and the references therein.
The purpose of this paper is to investigate the existence and uniqueness of iterative solution to the following
nth-order nonlocal BVP:
where \(f\in C((0,1)\times R^{n-1}, R)\), \(\Gamma:=\int_{0}^{1}t\, dA(t)\neq 1\). \(\int_{0}^{1}x^{(n-2)}(s)\, dA(s)\) denotes the Riemann-Stieltjes integral, where
A is of bounded variation.
In BVP (1.1), \(\int_{0}^{1}x^{(n-2)}(s)\, dA(s)\) denotes the Riemann-Stieltjes integral with a signed measure. This includes as special cases the two-point, three-point, multi-point problems and integral problems. Let us remark that the idea of using a Riemann-Stieltjes integral in the boundary conditions is quite old, see for example the review by Whyburn in [1]. The BVP (1.1) used to model various nonlinear phenomena in physics, chemistry and biology. Over the past decades, great efforts have been devoted to nonlinear
nth-order nonlocal BVP (1.1) and its particular and related cases, and many results of the existence of solutions have been obtained by several authors; see [8, 9, 11, 14, 16–19] and references therein. For example, when \(n=2\), \(A(t)\equiv0\), the BVP (1.1) becomes the second-order two-point BVP
BVP (1.2) is the well-known second-order Dirichlet BVP, which has been extensively studied and has important applications in physical sciences. When \(n=2\), \(\int_{0}^{1}x(s)\, dA(s)=\alpha x(\eta)\), the BVP (1.1) reduces to the second-order three-point BVP
When \(n=4\), the BVP (1.1) reduces to the fourth-order nonlocal BVP
In material mechanics, the BVP (1.3) describes the deflection or deformation of an elastic beam whose the ends are controlled.
Motivated by the works mentioned above, in this paper, we consider the
nth-order nonlocal BVP (1.1). The existence and uniqueness of iterative solution is established by applying the cone theory and the Banach contraction mapping principle. In comparison with previous works, this paper has several new features. Firstly, the nonlinearity f is allowed to depend on higher derivatives of unknown function \(x(t)\) up to \(n-2\) order, and we allow f to be singular at \(t=0,1\). The second new feature is that the nonlinearity f is not monotone or convex, the conclusions and the proof used in this paper are different from the known papers. Thirdly, the scope of Γ is not limited to \(0\leq\Gamma<1\), therefore, we do not need to suppose that the Green function \(G(t,s)\) is nonnegative. The preliminary lemmas Lemma 2.1
([3])
For any \(y\in L[0,1]\), the BVP has a unique solution \(x(t)=\int_{0}^{1}G(t,s)y(s)\, ds\), where
Denote \(I=[0,1]\), \(J=(0,1)\), and for any \(x\in C(I)\), \(t\in I\), define
and
By Lemma 2.1 and routine calculations, we have the following lemma.
Lemma 2.2 (i) If\(x\in C^{n-2}(I)\) is a solution of BVP(1.1), then\(y(t)=x^{(n-2)}(t)\in C(I)\) is a fixed point of the operator F. (ii) If\(x\in C(I)\) is a fixed point of the operator F, then\(y(t)=(I_{n-2}x)(t)=\int_{0}^{t}\frac{(t-s)^{n-3}}{(n-3)!}x(s)\, ds\in C^{n-2}(I)\) is a solution of BVP(1.1).
Let
It is easy to see that \(\overline{G}>0\).
Lemma 2.3
([20])
P is a generating cone in the Banach space \((E,\|\cdot\|)\) if and only if there exists a constant \(\tau>0\) such that every element \(x\in E\) can be represented in the form \(x=y-z\), where \(y,z\in P\) and \(\|y\| \leq\tau\|x\|\), \(\|z\| \leq\tau\|x\|\).
Main results
Consider the Banach space \(C(I)\) of the usual real-valued continuous functions \(u(t)\) defined on
I with the norm \(\|u\|=\sup_{t\in I}|u(t)|\) for all \(u\in C(I)\). Let \(P=\{u\in C(I) \mid u(t)\geq0,\forall t\in I\}\). Obviously, P is a normal solid cone of \(C(I)\), by Lemma 2.1.2 in [21], we see that P is a generating cone in \(C(I)\). Theorem 3.1 Suppose that \(f(t,x_{0},x_{1},\ldots,x_{n-2})=g(t,x_{0},x_{0},x_{1},x_{1},\ldots,x_{n-2},x_{n-2})\), and there exist positive constants \(B_{0},C_{0},B_{1},C_{1},\ldots,B_{n-2},C_{n-2}\) with \(B_{0}+C_{0}+B_{1}+C_{1}+B_{2}+C_{2}+\cdots+\frac {B_{n-2}+C_{n-2}}{({n-3})!}<\overline{G}\), such that for any \(t\in J\), \(a_{10}, b_{10}, a_{20}, b_{20}, a_{11}, b_{11}, a_{21}, b_{21}, \ldots, a_{1,n-2}, b_{1,n-2}, a_{2,n-2}, b_{2,n-2} \in R\) with \(a_{10}\leq b_{10}, a_{20}\geq b_{20}, a_{11}\leq b_{11}, a_{21}\geq b_{21}, \ldots, a_{1,n-2}\leq b_{1,n-2}, a_{2,n-2}\geq b_{2,n-2}\), and there exist \(x_{0},y_{0}\in C^{n-2}(I)\) such that Then BVP (1.1) has a unique solution \(I_{n-2}x^{*}\) in \(C^{n-2}(I)\). Moreover, for any \(\overline{x}_{0}\in C(I)\), the iterative sequence converges to \(x^{*}\) in \(C(I)\). Proof
By \(t(1-t)g (t,x_{0}(t),y_{0}(t),x_{0}'(t),y_{0}'(t),\ldots, x_{0}^{(n-2)}(t),y_{0}^{(n-2)}(t) )\in L^{1}[0,1]\), it is easy to see that for any \(t\in J\),
is well defined. Set \(p(t)=x_{0}^{(n-2)}(t)\), \(q(t)=y_{0}^{(n-2)}(t)\), then
For any \(x,y\in C(I)\), let \(u(t)=|p(t)|+|x(t)|\), \(v(t)=-|q(t)|-|y(t)|\), then \(u\geq p\), \(v\leq q\). By (3.1), we have
then
Following the former inequality, we can easily get
is convergent, and then
is convergent. Similarly, by \(u\geq x\), \(v\leq y\), we get
Define the operator \(A: C(I)\times C(I)\to C(I)\) by
Then \(I_{n-2}x\) is the solution of BVP (1.1) if and only if \(x=A(x, x)\). Let
By (3.1), for any \(x_{1}, x_{2}, y_{1}, y_{2}\in C(I)\), \(x_{1}\geq x_{2}\), \(y_{1}\leq y_{2}\), we have
and
By the method of mathematical induction, for any positive integer
m and \(t\in J\),
Then
and
Hence, we can choose a \(\beta>0\) such that
So, there exists a positive integer \(m_{0}\) such that
Since
P is a generating cone in \(C(I)\), from Lemma 2.3, there exists \(\tau>0\) such that every element \(x\in C(I)\) can be represented in the form
which implies
Let
On the other hand, for any \(u\in P\) which satisfies \(-u\leq x\leq u\), we have \(\theta\leq x+u \leq2u\), then \(\|x\| \leq \|x+u\|+\|-u\| \leq(2N+1)\|u\|\), where
N denotes the normal constant of P. Since u is arbitrary, we have
Now, for any \(x,y\in C(I)\) and \(u\in P\) which satisfies \(-u\leq x-y\leq u\), we set
then \(x\geq u_{1}\), \(y\geq u_{1}\), and \(x-u_{1}=u_{2}\), \(y-u_{1}=u_{3}\), \(u_{2}+u_{3}=u\). It follows from (3.2) that
Let \(\widetilde{A}(x)=A(x,x)\), then we obtain
As
K and M are both positive linear bounded operators, so \(K+M\) is a positive linear bounded operator, and therefore \((K+M)u\in P\). Hence, by mathematical induction, it is easy to see that for the natural number \(m_{0}\) in (3.3), we have
Since \((K+M)^{m_{0}}u\in P\), we see that
which implies by virtue of the arbitrariness of
u that
By \(0<\beta<1\), we have \(0 < \beta^{m_{0}}<1\). Thus the Banach contraction mapping principle implies that \(\widetilde{A}^{m_{0}}\) has a unique fixed point \(x^{*}\) in \(C(I)\), and so \(\widetilde{A}\) has a unique fixed point \(x^{*}\) in \(C(I)\). By the definition of \(\widetilde{A}\),
A has a unique fixed point \(x^{*}\) in \(C(I)\), then by Lemma 2.2, \(I_{n-2}x^{*}\) is the unique solution of BVP (1.1). And, for any \(\overline{x}_{0}\in C(I)\), let \(x_{1}=A(\overline {x}_{0},\overline{x}_{0})\), \(x_{m}=A(x_{m-1},x_{m-1})\) (\(m=2,3,\ldots\)), we have \(\|x_{m}-x^{*}\|_{0}\to0\) (\(m\to\infty\)). By the equivalence of \(\|\cdot\|_{0}\) and \(\|\cdot\|\) again, we get \(\|x_{m}-x^{*}\|\to0\) (\(m\to\infty\)). This completes the proof. □ Remark 3.1
For the case \(n=3\),
if \(f(t,x,y)=f(t,x)\), Theorem 3.1 is reduced to Theorem 3.1 in [19], if \(1<\alpha<\frac{1}{\eta}\) and \(f(t,x,y)=h(t)f(t,x)\), the existence results of nontrivial solutions are given by means of the topological degree theory in [11]. So our results extend the corresponding results of [11, 19] to some degree.
Example 3.1
To illustrate the applicability of our results, we consider the BVP (1.1) with \(n=3\) and
where \(n_{1}\), \(n_{2}\), \(n_{3}\), \(n_{4}\) are positive integral numbers. Then \(\Gamma=\int_{0}^{1}t\, dA(t)=2\times\frac{1}{4}=\frac{1}{2}\), and BVP (1.1) becomes the singular third-order three-point BVP
Let
where
χ is the characteristic function, i.e.
and
By Lemma 2.2, if \(x\in C(I)\) is a fixed point of the operator
F, then \(y(t)=(I_{1}x)(t)=\int_{0}^{t}x(s)\, ds\in C^{1}(I)\) is a solution of BVP (3.12). Let \(f(t,x,y)=g(t,x,x,y,y)\), then for any \(t\in J\), \(a_{10}, b_{10}, a_{20}, b_{20}, a_{11}, b_{11}, a_{21}, b_{21}\in R\) with \(a_{10}\leq b_{10}\), \(a_{20}\geq b_{20}\), \(a_{11}\leq b_{11}\), \(a_{21}\geq b_{21}\), we have
By Theorem 3.1, BVP (3.12) has a unique solution \(I_{1}x^{*}\in C^{1}(I)\) provided \(\frac{1}{n_{1}}+\frac{1}{n_{2}}+\frac{1}{n_{3}}+ \frac{1}{n_{4}}<\overline{G}\). Moreover, for any \(x_{0}\in C(I)\), the iterative sequence
converges to \(x^{*}\) (\(m\to\infty\)).
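The convergence mechanism behind the theorem is ordinary successive approximation for a contraction. The following toy sketch (in Python) is not the operator A of BVP (3.12), whose Green's function and nonlinearity are not reproduced here; it only illustrates the geometric convergence guaranteed by the Banach contraction mapping principle.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)

def T(x):
    # a toy operator on C(I); it is a contraction with constant 1/2 in the sup norm
    return 0.5 * np.cos(x) + 0.1 * t

x = np.zeros_like(t)                       # arbitrary starting function x_0
for m in range(1, 31):
    x_new = T(x)
    print(m, np.max(np.abs(x_new - x)))    # sup-norm distance shrinks geometrically
    x = x_new
```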
References
1. Whyburn, WM: Differential equations with general boundary conditions. Bull. Am. Math. Soc. 48, 692-704 (1942)
2. Webb, JRL, Infante, G: Positive solutions of nonlocal boundary value problems: a unified approach. J. Lond. Math. Soc. 74, 673-693 (2006)
3. Webb, JRL, Infante, G: Positive solutions of nonlocal boundary value problems involving integral conditions. Nonlinear Differ. Equ. Appl. 15, 45-67 (2008)
4. Webb, JRL: Nonlocal conjugate type boundary value problems of higher order. Nonlinear Anal. 71, 1933-1940 (2009)
5. Eloe, PW, Ahmad, B: Positive solutions of a nonlinear nth order boundary value problem with nonlocal conditions. Appl. Math. Lett. 18, 521-527 (2005)
6. Goodrich, CS: Positive solutions to boundary value problems with nonlinear boundary conditions. Nonlinear Anal. 75, 417-432 (2012)
7. Hao, X, Liu, L, Wu, Y, Sun, Q: Positive solutions for nonlinear nth-order singular eigenvalue problem with nonlocal conditions. Nonlinear Anal. 73, 1653-1662 (2010)
8. Hao, X, Liu, L, Wu, Y, Xu, N: Multiple positive solutions for singular nth-order nonlocal boundary value problem in Banach spaces. Comput. Math. Appl. 61, 1880-1890 (2011)
9. Graef, JR, Yang, B: Positive solutions to a multi-point higher order boundary value problem. J. Math. Anal. Appl. 316, 409-421 (2006)
10. Henderson, J: Existence and uniqueness of solutions of \((k+2)\)-point nonlocal boundary value problems for ordinary differential equations. Nonlinear Anal. 74, 2576-2584 (2011)
11. Wang, F, Cui, Y: On the existence of solutions for singular boundary value problem of third-order differential equations. Math. Slovaca 60, 485-494 (2010)
12. Zhang, X, Feng, M, Ge, W: Existence of solutions of boundary value problems with integral boundary conditions for second-order impulsive integro-differential equations in Banach spaces. J. Comput. Appl. Math. 233, 1915-1926 (2010)
13. Kong, L, Kong, Q: Higher order boundary value problems with nonhomogeneous boundary conditions. Nonlinear Anal. 72, 240-261 (2010)
14. Sun, Y, Liu, L, Zhang, J, Agarwal, RP: Positive solutions of singular three-point boundary value problems for second-order differential equations. J. Comput. Appl. Math. 230, 738-750 (2009)
15. Kwong, MK, Wong, JSW: Solvability of second-order nonlinear three-point boundary value problems. Nonlinear Anal. 73, 2343-2352 (2010)
16. Li, F, Sun, J, Jia, M: Monotone iterative method for the second-order three-point boundary value problem with upper and lower solutions in the reversed order. Appl. Math. Comput. 217, 4840-4847 (2011)
17. Ma, R: Positive solutions for a nonlinear three-point boundary value problem. Electron. J. Differ. Equ. 1999, 34 (1999)
18. Yao, Q: Successive iteration and positive solution for nonlinear second-order three-point boundary value problems. Comput. Math. Appl. 50, 433-444 (2005)
19. Zhang, P: Iterative solutions of singular boundary value problems of third-order differential equation. Bound. Value Probl. 2011, Article ID 483057 (2011)
20. Guo, D, Lakshmikantham, V: Nonlinear Problems in Abstract Cones. Academic Press, New York (1988)
21. Guo, D, Lakshmikantham, V, Liu, X: Nonlinear Integral Equations in Abstract Spaces. Kluwer Academic, Dordrecht (1996)
Acknowledgements
The authors would like to thank the referees for their pertinent comments and valuable suggestions. This work is supported by the National Natural Science Foundation of China (11371221, 11201260), the Natural Science Foundation of Shandong Province of China (ZR2015AM022, ZR2013AQ014), the Specialized Research Fund for the Doctoral Program of Higher Education (20123705120004, 20123705110001) and Foundation of Qufu Normal University (BSQD20100103).
Additional information Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors wrote, read, and approved the final manuscript.
|
Both portfolio valuation and management have many facets, but one they have in common is attribution, that is, how much each asset or collection of them is contributing to the return. Thus, in this series of posts, we are going to wax technical about the computation of asset exposure. We will show how an initially balanced allocation, left to navigate the markets by itself, can create a risky situation if you don’t intervene.
When we talk about exposure, we are referring to the weight each asset has in our portfolio. The higher we are exposed to an asset or a sector, the more our portfolio will rise when the given line goes up. However, it will capture the downs in the same proportion.
The FAANG
For the sake of example, we are going to set up a portfolio with the five most important technology companies in the States. They are known as the FAANG: Facebook – Amazon – Apple – Netflix and Google. In the following chart, we can see how all their prices have skyrocketed since 2012.
However, prices are not often the best metric to determine value. Could you tell which is the company that has grown the most? You might be tempted to say Google, or Amazon, right? Let’s take a look at each company’s cumulative return to get a better picture.
Surprise! Netflix is actually the company that has grown the most in the past years. In fact, if you had invested 100 USD by the time it started selling on the market, today you would have shares worth 3200 USD. Alas, how many of us knew of the existence of Netflix back then?
A portfolio’s performance
Let’s do some backtesting (simulation and analysis of past financial performance) with these five giants to learn how to correctly compute a portfolio’s performance through a weight formulation. This mathematical dressing, the weight, is very appealing from an analytical point of view. However, when put into practice, it reveals a fundamental fragility due to the
integer nature of shares (I will let you ponder that one).
We set up two portfolios,
Free and Rebalanced. The difference lies in the fact that for Rebalanced we will sell and buy every first business day of the month such that it becomes again an equally distributed portfolio. Free, on the other hand, will be left alone without intervention. In the financial jargon you would say Rebalanced is actively managed.
Now the technical stuff. All of the following seems trivial when explained out loud, but there are subtle nuances to bear in mind. For each day, we have to compute the portfolio’s performance and what proportion of the cake each asset has in our allocation. We call this proportion the asset weight.
\(\)For each day
t, the return for each asset i is
\[r_{i,t} = \frac{P_{i,t}}{P_{i,t-1}}-1, \quad i = 1, \ldots, N\]
where the prices are taken at the
close of each session, and adjusted for possible splits and dividend payments. With these returns and the asset weights, the portfolio’s evolution that day is given by
\[r_{P,t} = \sum_{i=1}^{N} \omega_{i, t-1}r_{i,t}\]
Aha! The return on day
t of the portfolio is given by the sum of yesterday’s weights times today’s returns. Why? Well, simply because we are computing the returns with the prices at the session close. Therefore, we will capture the evolution of day t with whatever we had in our portfolio the day before when the markets closed. This formulation forces you to work changing your allocations on day t-1, just before the markets closed, if you want to capture the effect on day t.
Indeed, on day
t the weights, i.e. the exposure, cannot be known a priori without the returns. To compute each asset weight on day t, you need to put into context how much the asset's worth has changed with respect to the portfolio. But you need to do it in absolute terms, not in relative ones, otherwise you will run into trouble with the sign!
\[\omega_{i,t} = \omega_{i,t-1} \left (\frac{1+r_{i,t}}{1+\sum_{j=1}^{N}\omega_{j,t-1}r_{j,t}} \right )\]
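A minimal sketch of this bookkeeping in Python looks like the following; the daily returns here are synthetic stand-ins for the actual FAANG price data, which is not reproduced in this post. Note how the weights drift away from the initial equal allocation, exactly as described.

```python
import numpy as np

np.random.seed(0)
T, N = 250, 5                                    # days, assets
r = np.random.normal(0.0005, 0.02, size=(T, N))  # synthetic daily returns r_{i,t}

w = np.full(N, 1.0 / N)                          # "Free": start equally weighted
value = 1.0
for t in range(T):
    r_p = w @ r[t]                               # portfolio return of day t
    value *= 1.0 + r_p
    w = w * (1.0 + r[t]) / (1.0 + r_p)           # weight update from the formula

print("final value  :", round(value, 4))
print("final weights:", np.round(w, 3))          # no longer 0.2 each
```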
We have applied these ideas and equations to the period of time for which we have shown the price evolution, and we find something you might have not expected: the assets which grow the most eat up those that don’t.
This has serious implications. In our backtest, Netflix represents most of the portfolio’s allocation by 2019, which means that if Netflix drops tomorrow more than you can afford, most of your money will follow (which is actually something that happened during the second half of the year in 2018).
Comparison with the technological index
Since during the overall period Netflix has grown so much, when we compare the cumulative return of both portfolios, we are not surprised to see
Free winning over Rebalanced. Free has outperformed the index associated with the technological companies, the Nasdaq Composite 100, by almost 800%, while our Rebalanced portfolio has “roughly” made it 400% better than the index.
Next week we will determine the dangers hidden behind the extra return made by
Free and take a look at how these weights can be used to better understand where your profits and losses are coming from.
Thanks for reading!
I have shown a very shiny example; don't go thinking life is always this easy in the equity markets:
Don’t miss the second part of this post: Portfolio weightlifting (II)
|
Banach Journal of Mathematical Analysis Banach J. Math. Anal. Volume 9, Number 3 (2015), 248-260. A Hilbert space approach to approximate diagonals for locally compact quantum groups Abstract
For a locally compact quantum group $\mathbb{G}$, the quantum group algebra $L^1(\mathbb{G})$ is operator amenable if and only if it has an operator bounded approximate diagonal. It is known that if $L^1(\mathbb{G})$ is operator biflat and has a bounded approximate identity then it is operator amenable. In this paper, we consider nets in $L^2(\mathbb{G})$ which suffice to show these two conditions and combine them to make an approximate diagonal of the form $\omega_{{W'}^*\xi\otimes\eta}$ where $W$ is the multiplicative unitary and $\xi\otimes\eta$ are simple tensors in $L^2(\mathbb{G})\otimes L^2(\mathbb{G})$. Indeed, if $L^1(\mathbb{G})$ and $L^1(\hat{\mathbb{G}})$ both have a bounded approximate identity and either of the corresponding nets in $L^2(\mathbb{G})$ satisfies a condition generalizing quasicentrality then this construction generates an operator bounded approximate diagonal. In the classical group case, this provides a new method for constructing approximate diagonals emphasizing the relation between the operator amenability of the group algebra $L^1(G)$ and the Fourier algebra $A(G)$.
Article information Source Banach J. Math. Anal., Volume 9, Number 3 (2015), 248-260. Dates First available in Project Euclid: 19 December 2014 Permanent link to this document https://projecteuclid.org/euclid.bjma/1419001716 Digital Object Identifier doi:10.15352/bjma/09-3-18 Mathematical Reviews number (MathSciNet) MR3296138 Zentralblatt MATH identifier 1311.43004 Subjects Primary: 43A07: Means on groups, semigroups, etc.; amenable groups Secondary: 20G42: Quantum groups (quantized function algebras) and their representations [See also 16T20, 17B37, 81R50] 81R50: Quantum groups and related algebraic methods [See also 16T20, 17B37] 22D35: Duality theorems Citation
Willson, Benjamin. A Hilbert space approach to approximate diagonals for locally compact quantum groups. Banach J. Math. Anal. 9 (2015), no. 3, 248--260. doi:10.15352/bjma/09-3-18. https://projecteuclid.org/euclid.bjma/1419001716
|
The speed of light in a material is $c = 1/\sqrt{\epsilon\mu}$. Very slow light therefore means either a very high permittivity (high $\epsilon$) or a very high permeability (high $\mu$).
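To get a feel for the numbers, here is a back-of-the-envelope sketch (my own illustration, assuming we want light slowed to walking pace): in terms of the relative constants, $v = c/\sqrt{\epsilon_r\mu_r}$, so the product $\epsilon_r\mu_r$ has to be enormous.

```python
c = 299_792_458.0        # vacuum speed of light, m/s
v = 1.0                  # target speed in the material, m/s (walking pace)
print((c / v) ** 2)      # required eps_r * mu_r, roughly 9e16
```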
High $\epsilon$ means high electric polarizability, which implies high van-der-Waals force. I highly doubt that a substance with such a high permittivity that you could see light come out after a macroscopic time could be a gas at temperatures you could survive; I'd expect the forces between the atoms to be strong enough that it would be a solid body (which clearly would preclude entering and leaving).
I'm less sure about permeability, but I highly suspect it would have a similar effect on magnetic attraction between the atoms.
Therefore I highly doubt that in our universe there could be a substance that has those properties.
However, since this is worldbuilding.SE, not physics.SE, we can hypothesize some substance which would interact with yet another field (not available in our universe) which causes repulsion between the atoms. That way you could have a gas despite strong electromagnetic attraction. Alternatively you could hypothesize that the scientist wears a suit that protects him from very high temperatures (however one then has to additionally handwave why his image is not drowned in thermal radiation, that is, the glowing of the hot gas).
|
Is it possible to draw the path of a point on the boundary while the polygon is rolling, or to write a function of theta similar to the cycloid parametric equations:
$$ x = r (\theta - \sin(\theta)),\quad y = r (1 - \cos(\theta)) $$
My main goal is to generalize the problem to write the parametric equations for the cyclogon. The following part of a program shows the multiple of four polygons after moving them above the x-axis and to the right of the y-axis. Please see below. The next step is to find a function to draw a path while a point on its vertex is rotating.
Manipulate[ trM4 = Graphics[ {{EdgeForm[{Thick, Red}], FaceForm[LightGray], RegularPolygon[n]}, {PointSize[0.025], Blue, Point[{Cos[π/n] - Sin[π/n], 0}]}, If[hint, Inset[ Text @ Style[TraditionalForm[{Cos[π/( n)] - Sin[π/( n)], 0}], 16, Black], {Cos[π/( n)] - Sin[π/n], -0.2}]], {Translate[ {EdgeForm[{Thick, Red}], FaceForm[{Yellow, Opacity[0.5]}], RegularPolygon[n], {PointSize[0.03], Red, Point[{0, 0}]}}, {Cos[π/n], Cos[π/n]}]}}, Axes -> True, ImageSize -> 400]; Show[trM4], Row[{ Control[{{n, 4, "Number of sides"}, 4, 21, 1, Appearance -> "Labeled"}], Spacer[45], Control[{{hint, False, "hint"}, {False, True}}]}], TrackedSymbols :> {n, hint}]
The code above creates polygons that start above the x-axis and to the left of the y-axis.
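Not a Mathematica answer, but as a proof of concept here is a sketch (written in Python; the geometric conventions, circumradius 1 and one edge starting on the x-axis, are my own choices and not from the question) that traces the vertex path by rotating the polygon about successive ground-contact vertices:

```python
import numpy as np
import matplotlib.pyplot as plt

def cyclogon(n=4, rolls=None, samples=60):
    """Path traced by one vertex of a regular n-gon rolling along the x-axis."""
    if rolls is None:
        rolls = n                              # one full revolution of the polygon
    R, ext = 1.0, 2 * np.pi / n                # circumradius, exterior angle
    centre = np.array([R * np.sin(np.pi / n), R * np.cos(np.pi / n)])
    ang = -np.pi / 2 - np.pi / n + ext * np.arange(n)
    verts = centre + R * np.column_stack([np.cos(ang), np.sin(ang)])
    path = [verts[0].copy()]                   # follow vertex 0
    for _ in range(rolls):
        ground = np.isclose(verts[:, 1], 0.0, atol=1e-9)
        pivot = verts[ground][np.argmax(verts[ground, 0])]   # rightmost ground vertex
        for t in np.linspace(0.0, -ext, samples)[1:]:        # rotate clockwise
            c, s = np.cos(t), np.sin(t)
            rot = np.array([[c, -s], [s, c]])
            path.append(pivot + rot @ (verts[0] - pivot))
        verts = pivot + (verts - pivot) @ rot.T              # snap polygon to rest
    return np.array(path)

pts = cyclogon(n=4)
plt.plot(pts[:, 0], pts[:, 1])
plt.axis("equal")
plt.show()
```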
|
The BCS wave function you write down depends on a parameter $\phi$, but the ground state energy is independent of it. This implies that $\phi$ is the (would-be) Goldstone mode that governs the low energy dynamics of the system. The gradient of $\phi$ is the conserved $U(1)$ current $\vec\jmath \sim \vec\nabla\phi$, and the ordinary charged current is $\vec\jmath_s =n_s e \vec\nabla\phi /m$, where $n_s$ is the superfluid density of electrons.
Because $\phi$ is a Goldstone mode the effective low energy action can only depend on gradients of $\phi$. By gauge invariance the effective action is of the form $S[A_\mu-e\nabla_\mu\phi]$. The explicit form of $S$ can be computed from the BCS wave function, or more easily determined using diagrammatic methods. For our purposes the only important point is that $S$ has at least a local minimum if the field vanishes. This means that solutions of the classical equation of motion are of the form $A_\mu=e\nabla_\mu \phi$ (this is the London equation). Let's consider an applied electric field $\vec{E}=-\vec\nabla A_0$. I find $$\vec{E}=-e\vec\nabla \dot\phi=\frac{m}{n_s}\frac{d\vec\jmath}{dt}$$ which shows that a static current corresponds to zero field, and the resistivity is zero.
The effective action also governs other properties of the system, such as the Meissner effect, the critical current, and fluctuations of the current in a thermal ensemble.
Postscript: A commenter argues that I really need to show that $S$ has a minimum $$ S \sim \gamma (A-e\nabla\phi)^2 + \ldots$$ First, note that $\gamma$ determines the Meissner mass, so even without a calculation I have shown that the Meissner effect implies superconductivity. Beyond that I do indeed have to do a calculation of $\gamma$ based on the BCS wave function (I could appeal to the Landau-Ginzburg functional, but this only shifts the question to the gradient term in the LG functional). Fortunately, the calculation is straightforward, and can be found in many textbooks. For people with more of a particle physics interest there is a beautiful explanation in Vol II of Weinberg's QFT book. There is Anderson's famous paper on gauge invariance and the Higgs effect. I provided a version of the calculation in Sect. 3.4 of these lecture notes https://arxiv.org/abs/nucl-th/0609075
Post-Postscript: How is this different from a weakly interacting electron gas? In the electron gas I have a low energy description in terms of electrons and phonons (and other degrees of freedom). For simplicity consider the high temperature limit, where a classical description applies (as explained by Landau Fermi liquid theory, this generalizes to low T). The equation of motion for a single electron is just $m\dot v=e E$, which superficially looks like the London equation. However, this is not a macroscopic current. When I pass from microscopic to macroscopic equations there is no symmetry that forbids the appearance of dissipative terms, so the conductivity is non-zero. There is indeed a subtlety in the coupling of electrons and phonons, because without the umklapp process momentum conservation would force the conductivity to vanish.
In a superconductor the gradient of the Goldstone boson automatically describes a macroscopic current ($\gamma$ is proportional to the density of electrons). S is a quantum effective action, and dissipative terms are automatically forbidden. At finite temperature things do get a little more complicated because the total current is in general the sum of a non-dissipative supercurrent, governed by $S$, and a dissipative normal current. However, below $T_c$ part of the response is carried by a supercurrent.
|
Functiones et Approximatio Commentarii Mathematici Funct. Approx. Comment. Math. Volume 49, Number 2 (2013), 229-240. Some elementary explicit bounds for two mollifications of the Moebius function Abstract
We prove that the sum $\sum_{d\le x,\ (d,r)=1}\mu(d)/d^{1+\varepsilon}$ is bounded by $1+\varepsilon$, uniformly in $x\ge1$, $r$ and $\varepsilon>0$. We prove a similar estimate for the quantity $\sum_{d\le x,\ (d,r)=1}\mu(d)\log(x/d)/d^{1+\varepsilon}$. When $\varepsilon=0$, $r$ varies between 1 and a hundred, and $x$ is below a million, this sum is non-negative and this raises the question as to whether it is non-negative for every $x$.
Article information Source Funct. Approx. Comment. Math., Volume 49, Number 2 (2013), 229-240. Dates First available in Project Euclid: 20 December 2013 Permanent link to this document https://projecteuclid.org/euclid.facm/1387572228 Digital Object Identifier doi:10.7169/facm/2013.49.2.3 Mathematical Reviews number (MathSciNet) MR3161492 Zentralblatt MATH identifier 1288.11091 Citation
Ramaré, Olivier. Some elementary explicit bounds for two mollifications of the Moebius function. Funct. Approx. Comment. Math. 49 (2013), no. 2, 229--240. doi:10.7169/facm/2013.49.2.3. https://projecteuclid.org/euclid.facm/1387572228
|
Astrid the astronaut is floating in a grid. Each time she pushes off she keeps gliding until she collides with a solid wall, marked by a thicker line. From such a wall she can propel herself either parallel or perpendicular to the wall, but always travelling directly \(\leftarrow, \rightarrow, \uparrow, \downarrow\). Floating out of the grid means death.
In this grid, Astrid can reach square Y from square ✔. But if she starts from square ✘ there is no wall to stop her and she will float past Y and out of the grid.
In this grid, from square X Astrid can float to three different squares with one push (each is marked with an *). Push \(\leftarrow\) is not possible from X due to the solid wall to the left. From X it takes three pushes to stop safely at square Y, namely \(\downarrow, \rightarrow, \uparrow\). The sequence \(\uparrow, \rightarrow\) would have Astrid float past Y and out of the grid.
Question:
In the following grid, what is the least number of pushes that Astrid can make to safely travel from X to Y?
|
Ok so I've got a question after walking through the time dilation derivation that used 'light clocks' (think a beam of light bouncing back and forth between mirrors) to derive ##\delta t^\prime = \frac{\delta t}{\sqrt{1-\frac{v^2}{c^2}}}##. So my Q is: could you derive the same equation if you had used atomic clocks instead? Don't actually do so here, I just wanted to know if it would have led to the same relation. Thanks for your time!
Edit, my tex tags seem to not work?! I'm sure that was how... <mentor edit latex>
|
Porto and the nearby Douro Valley are famous for producing port wine. Wine lovers from all over the world come here to enjoy this sweet wine where it is made. The International Consortium of Port Connoisseurs (ICPC) is organizing tours to the vineyards that are upstream on the Douro River. To make visits more pleasurable for tourists, the ICPC has recently installed sun tarps above the vineyards. The tarps protect tourists from sunburn when strolling among the vines and sipping on a vintage port.
Unfortunately, there is a small problem with the tarps. Grapes need sunlight and water to grow. While the tarps let through enough sunlight, they are entirely waterproof. This means that rainwater might not reach the vineyards below. If nothing is done, this year’s wine harvest is in peril!
The ICPC wants to solve their problem by puncturing the tarps so that they let rainwater through to the vineyards below. Since there is little time to waste before the rainy season starts, the ICPC wants to make the minimum number of punctures that achieve this goal.
We will consider a two-dimensional version of this problem. The vineyard to be watered is an interval on the $x$-axis, and the tarps are modeled as line segments above the $x$-axis. The tarps are slanted, that is, not parallel to the $x$- or $y$-axes (see Figure 1 for an example). Rain falls straight down from infinitely high. When any rain falls on a tarp, it flows toward the tarp’s lower end and falls off from there, unless there is a puncture between the place where the rain falls and the tarp’s lower end—in which case the rain will fall through the puncture instead. After the rain falls off a tarp, it continues to fall vertically. This repeats until the rain hits the ground (the $x$-axis).
Figure 1: (a) Tarps are shown as black slanted line segments and the vineyard as a green line segment at the bottom. (b) An optimal solution: by puncturing two tarps in the locations of the red circles, some rain (shown in blue) that starts above the vineyard will reach the vineyard.
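To make the flow model concrete, here is a small Python sketch (my own helper function and variable names, not part of the official problem material) that follows a single raindrop when no punctures have been made: starting above all tarps, the drop repeatedly lands on the highest tarp below it, flows to that tarp's lower end, and falls from there.

    def landing_x(x, tarps):
        # Follow a raindrop that starts at horizontal position x, above all tarps.
        # tarps: list of (x1, y1, x2, y2) with (x1, y1) the lower end, as in the input format.
        height = float('inf')          # height the drop is currently falling from
        while True:
            best = None                # (landing_y, lower_end_y, lower_end_x) of the tarp hit first
            for (x1, y1, x2, y2) in tarps:
                if min(x1, x2) <= x <= max(x1, x2):
                    y = y1 + (y2 - y1) * (x - x1) / (x2 - x1)   # height of this tarp directly below the drop
                    if y < height and (best is None or y > best[0]):
                        best = (y, y1, x1)
            if best is None:
                return x               # nothing left to hit: the drop reaches the ground at this x
            _, height, x = best        # flow to the lower end and continue falling from there

With punctures one would additionally check whether a puncture lies between the landing point and the lower end; deciding where to puncture so that the minimum number suffices is the actual problem.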
For legal reasons you have to ensure that at least some of the rain that reaches the vineyard originated from directly above the vineyard. This is to prevent any vineyard from stealing all their rain from neighboring vineyards (see the second sample input for an example).
The first line of input contains three integers $\ell $, $r$ and $n$, where $(\ell , r)$ ($0 \le \ell < r \le 10^9$) is the interval representing the vineyard and $n$ ($0 \leq n \leq 5 \cdot 10^5$) is the number of tarps. Each of the following $n$ lines describes a tarp and contains four integers $x_{1}$, $y_{1}$, $x_{2}$, $y_{2}$, where $(x_1, y_1)$ is the position of the tarp’s lower end and $(x_2, y_2)$ is the position of the higher end ($0 \leq x_{1},x_{2} \leq 10^9$, $x_{1} \neq x_{2}$, and $0 < y_{1} < y_{2} \leq 10^9$).
The $x$-coordinates given in the input ($\ell $, $r$, and the values of $x_{1}$ and $x_{2}$ for all tarps) are all distinct. The tarps described in the input will not intersect, and no endpoint of a tarp will lie on another tarp.
Output the smallest number of punctures that need to be made to get some rain falling from above the vineyard to the vineyard.
Sample Input 1:
10 20 5
32 50 12 60
30 60 8 70
25 70 0 80
15 30 28 40
5 20 14 25

Sample Output 1:
2
Sample Input 2:
2 4 2
3 2 0 3
5 2 1 5

Sample Output 2:
1
|
It is often useful to know what shape your data takes. Linear regression is one example, where you want to find out how well your data fits a straight line. Knowing that your data fits a straight line reasonably well gives you an advantage for understanding your data. For instance, you know that your data holds a linear relation, and you can predict the value of one of the variables with the value of the other.
But, of course, data does not always fit a straight line.
Most of the time, the shape it takes is more complex. The number of dimensions might even be too big to understand the shape your data takes, and dimensionality reduction techniques might not even be able to help. Luckily for us, we have Topology on our side. But… what is Topology? and how can it help us?
What is Topology?
Topology (not Topography, though it can be related) is a branch of Mathematics that studies the properties of spaces that remain invariant under continuous transformations. Ok, maybe this definition does not clarify things a lot, but let me expand on this. In Geometry, you are interested in concepts like distance, angles, etc. In Topology, you study the very same objects as in Geometry, but forgetting about distance. Just as in Geometry you would consider two cubes with the exact same size but one rotated with respect to the other to be essentially the same object, in Topology you would consider a cube and a sphere to be the same object, as you can continuously transform one into the other, and you can continuously undo the transformation. In Topology, you can stretch, compress, bend or twist an object without changing its topological nature. Only gluing or tearing wouldn't be allowed, as they cannot be undone continuously. This is the reason why Topology is sometimes called the “Rubber Sheet Geometry”.
Let me give you one more example, concerning the birth of Topology, that will make the concept clearer. Topology was born in 1736 when Leonhard Euler gave a solution to the Seven Bridges of Königsberg problem. Given the seven bridges of the city of Königsberg (now Kaliningrad) as shown in the picture, the problem is to find a route through the city that will cross all the bridges exactly once.
Euler noted that the lengths of the bridges or the particular shape of the river were not relevant aspects to take into account in order to solve the problem. The only important thing was to know which points the bridges connect. He made the following representation:
Here, each line represents each one of the seven bridges, and each point represents the portion of land that is not separated by the river. You could have drawn the points further apart from each other, as you could have drawn the lines in a polygonal way or in a more curved way, but the information enclosed in this representation wouldn’t change at all. Topologically it would remain the same object. Using this simpler representation, Euler found a negative solution to the problem.
You might recognize the previous figure as a graph. That is correct. In fact, graphs are very interesting objects from the topological viewpoint, and as we have just seen, both Graph Theory and Topology share the same birthday.
This problem shows us that the key concepts that Topology studies are continuity and connection. Distances or angles are no longer important.
So… why use Topology?
Ok, so now we have an idea of what Topology is. But, how can it help us in understanding our data? Well, there are many properties of Topology that make it very useful when studying data. We can mention the following:
- Coordinates are no longer a thing to worry about, or at least they are not that big of a problem, as the topological features that we study will no longer be affected by them.
- Changes under small deformations can be easily tracked with Topology, therefore noise can be handled in a much easier way.
- Thanks to Topology we can study objects in a simplified representation, called simplicial complexes (which we will introduce later), that preserve the topological features that we want to focus on.
Ok, that sounds cool! So how can I start applying Topology to my data? Well, we need to introduce some concepts before.
Simplicial complexes
Simplicial complexes are a key concept in Algebraic Topology, that is, the branch of Topology that studies topological objects through their algebraic properties, such as homotopy or homology groups. If you don’t know what a group is, you might take a look at this link. All the groups that we’ll use are abelian groups (groups which hold the commutative property that we all know from primary school), which simplifies things a lot. We will introduce homology groups later on.
The reason for the importance of simplicial complexes is that many objects studied in Mathematics can be reduced to a simplicial complex that preserves the homology groups of the object.
So what exactly is a simplicial complex?
We first define what a simplex is:
- A 0-simplex is a point. Its boundary is the empty set, i.e., it doesn't exist.
- A 1-simplex is a segment. Its boundary consists of two points.
- A 2-simplex is a triangle. Its boundary consists of three segments, each of which has two points as a boundary.
Generalizing for higher dimensions, you get the idea of what a simplex is. Also, you can see that the boundary of a simplex consists of a union of simplices of lower dimensions, which have been glued by their boundaries. We call each simplex of the boundary a face. Note that the boundary of an n-simplex contains 0-faces (0-dimensional faces), 1-faces (1-dimensional faces), …, and all the way up to (n-1)-faces.
A simplicial complex is an object that results from the union of several simplices, such that:
- Every face of a simplex that the simplicial complex contains is also contained in the simplicial complex.
- The intersection of any two simplices of the simplicial complex is a face of each of the simplices.
The dimension of the complex is considered to be the highest of the dimensions of its simplices.
Basically, you form a simplicial complex by taking several simplices and gluing them by their boundaries, in a way such that the parts that get glued are faces of each simplex being glued (and hence, the parts being glued must be of the same dimension).
We mentioned boundaries before. A boundary is a key concept in Topology. What happens to the boundary when we glue two simplices together? We will give a proper definition shortly, but for the moment take a look at this particular example:
The boundary of the union of these five 1-simplices would consist of the points \(A_1\) and \(A_6\). The points that have been used for gluing are no longer part of the boundary, as they sort of cancel each other out.
So now we have boundaries, and we have simplicial complexes. Let’s jump now to the next section, where we define what simplicial homology is.
Simplicial Homology
We start by defining what a chain complex is.
First, taking all the k-simplices \(\sigma_i\) of a simplicial complex \(S\), we give each one of them an orientation, and then we define the group of k-chains \(C_k\) of \(S\) as the free abelian group generated by all the k-simplices of \(S\). What this means basically is that \(C_k\) is formed by all the possible linear combinations of the k-simplices of \(S\):
$$\sum_{i=1}^N c_i \sigma _i, $$
where \(c_i \)s are integers.
A negative sign of one of the \(c_i\)s can be interpreted as giving the opposite orientation to the k-simplex accompanied by the \(c_i\).
We define the boundary operator on these chain groups:
$$\partial_k: C_k\longrightarrow C_{k-1},$$
thus, the boundary operator takes each k-chain to a (k-1)-chain.
For a k-chain consisting of a single k-simplex, we first take the (k-1)-simplices that form the boundary of the k-simplex. We sum them up to form a (k-1)-chain, where the sign of each of the (k-1)-simplices is given according to the orientation that the k-simplex induces on it (see picture below).
In this picture, the triangle is a 2-simplex, where the orientation it has been assigned is a counterclockwise one. The orientation given to the 1-simplexes of its boundary are represented by the arrows. Then, when finding the boundary of this 2-simplex, as the orientations of the arrows match the counterclockwise orientation given to the 2-simplex, we would have that the boundary is \(a + b + c\). Now, imagine that at the beginning, we had given the \(a\) arrow the opposite direction. Then, as the direction wouldn’t match with that given by the 2-simplex orientation, it would be of negative sign, and the boundary would be \(-a+b+c\). This might look rather cumbersome, but it’s simpler than it seems: you can give the simplices any orientation you want, as long as they stick to it through the whole following process, and as long as you are careful enough with the signs when calculating the boundaries.
Now, as a k-chain is a linear combination of k-simplices, the boundary of a k-chain is defined as the linear combination of the boundaries of its k-simplices:
$$\partial_k\left( \sum_{i=1}^N c_i \sigma _i \right)= \sum_{i=1}^N c_i \partial_k(\sigma_i).$$
In other words, \(\partial_k\) is a group homomorphism.
Remember when we said that when gluing together 2 simplices, the faces used to do the gluing sort of canceled each other out? This statement will become clearer with the following example:
In this picture we see the 2-simplices \(A\) and \(B\), which share a face \(a\) and the points \(P_0\) and \(P_1\). Now, as the orientations of both 2-simplices match, we can give a meaning to the 2-chain \(A+B\). It is the result of gluing \(A\) and \(B\) through face \(a\). If the orientations didn't match, we would have to subtract one of the 2-simplices from the other in order to obtain the gluing. Otherwise, the sum would be just a formal sum, as it would be the case if the simplices had no common faces. Now, let's find the boundary for \(A+B\). According to the formula we saw before, this boundary is the boundary of \(A\) summed to the boundary of \(B\), that is,
$$\partial_2(A+B) = \partial_2(A) + \partial_2(B) = (a+b+c) + (-a + d - e).$$
Summing it all up, we have that the resulting boundary is \(b + c + d - e,\) that is, the 4 outermost faces after gluing \(A\) and \(B\), with the sign given by the orientation (the actual order of the addends does not matter, as we are in an abelian group, but the orientation is important to give the elements in the boundary a sign: note the minus sign on \(e\)).
Elements that have no boundaries, that is, elements in \(Z_k = \ker \partial_k,\) are called cycles, and elements in \(B_k = \text{Im}\,\partial_{k+1},\) that is, elements that are boundaries of other elements, are called, as you may have guessed, boundaries.
It’s straightforward to see that \(\partial^2 = 0,\) that is, if you are the boundary of something, then you don’t have a boundary. Or in other terms, all boundaries are cycles (try calculating step by step the boundary of the boundary of \(A + B\) given in the previous example. Hint: the boundary of \(b\) would be \(P_2 - P_1\), then … ).
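If you want to let the computer do that check, here is a small sketch (my own encoding, with arbitrarily chosen vertex labels) of two triangles glued along an edge: it builds the boundary matrices \(\partial_1\) and \(\partial_2\) and verifies numerically that their composition is zero.

    import numpy as np

    vertices = [0, 1, 2, 3]
    edges = [(0, 1), (1, 2), (0, 2), (2, 3), (0, 3)]   # oriented 1-simplices
    triangles = [(0, 1, 2), (0, 2, 3)]                 # two 2-simplices glued along the edge (0, 2)

    edge_index = {e: i for i, e in enumerate(edges)}

    def oriented_edge(u, v):
        # Column index and sign of the edge from u to v relative to the chosen orientations.
        return (edge_index[(u, v)], +1) if (u, v) in edge_index else (edge_index[(v, u)], -1)

    # Boundary of an edge (u, v) is v - u.
    d1 = np.zeros((len(vertices), len(edges)))
    for j, (u, v) in enumerate(edges):
        d1[u, j], d1[v, j] = -1.0, 1.0

    # Boundary of a triangle (u, v, w) is (v, w) - (u, w) + (u, v).
    d2 = np.zeros((len(edges), len(triangles)))
    for j, (u, v, w) in enumerate(triangles):
        for (a, b), sign in [((v, w), 1), ((u, w), -1), ((u, v), 1)]:
            idx, orientation = oriented_edge(a, b)
            d2[idx, j] += sign * orientation

    print(np.allclose(d1 @ d2, 0))   # True: the boundary of a boundary is zero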
Now, we call the sequence of maps $$ C_n \overset{\partial_n}{\rightarrow}\cdots \overset{\partial_{k+1}}{\rightarrow}C_k \overset{\partial_k}{\rightarrow} C_{k-1}\overset{\partial_{k-1}}{\rightarrow}\cdots \overset{\partial_1}{\rightarrow} C_0 $$
the Chain Complex of the simplicial complex S. This chain complex always starts at the n-dimensional chain group \(C_n\), where n is the dimension of S, since in higher dimensions all the chains would be zero.
The k-th homology group of S is then defined as the quotient abelian group \(H_k(S) = Z_k(S)/B_k(S)\), that is, the cycles modulo the boundaries.
Then, the k-th homology group won’t be 0 when there are k-cycles which are not boundaries. If you think about it, these cycles which are not boundaries represent k-dimensional holes in S.
The k-th Betti number, defined as \(\beta_k = \text{rank}(H_k(S))\), tells us the number of k-dimensional holes in S. When k=0, the Betti number tells us the number of connected components in S.
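As a toy illustration (my own sketch, working over the reals and therefore ignoring torsion), the Betti numbers of a hollow triangle — three vertices and three edges, with no 2-simplex filled in — can be read off from the rank of its boundary matrix: we expect \(\beta_0 = 1\) (one connected component) and \(\beta_1 = 1\) (one 1-dimensional hole).

    import numpy as np

    vertices = [0, 1, 2]
    edges = [(0, 1), (0, 2), (1, 2)]         # the hollow triangle: no 2-simplex

    # Boundary matrix d1: the boundary of the edge (u, v) is v - u.
    d1 = np.zeros((len(vertices), len(edges)))
    for j, (u, v) in enumerate(edges):
        d1[u, j], d1[v, j] = -1.0, 1.0

    rank_d1 = np.linalg.matrix_rank(d1)
    betti_0 = len(vertices) - rank_d1        # dim C_0 - rank d1   (d0 is the zero map)
    betti_1 = len(edges) - rank_d1           # dim ker d1 - rank d2, and d2 = 0 here
    print(betti_0, betti_1)                  # prints: 1 1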
That is it! We have defined simplicial homology groups. Now we want to apply them to our data. But data usually consists of a bunch of points distributed in space. So, where is the connection or continuity between points? Aren’t they just separate from each other? Is it just a big 0-dimensional simplicial complex that we have to analyze? Don’t worry, Čech has the answers to all these questions.
The Čech complex
Ok, so of course, a bunch of points in space (also called a point cloud) doesn’t have a topological structure per se. One has to define the Čech complex in order to extract an interesting structure that relates the points. I know that I told you before that in Topology we forget about distance, but in this case the distance between points will be the feature that relates them. So, how do we define this Čech complex from the distances between points?
First, we define a ball centered on each of the points. We take a parameter \(t>0,\) that will describe the diameter of all the balls. So as \(t\) starts to get bigger, the balls start to grow. When two balls intersect each other, then a 1-simplex gets formed between the two centers. When 3 balls intersect, a 2-simplex gets formed between the 3 centers, and so on. Then, it is clear that for each \(t>0\) we have a simplicial complex \(C_t,\) that will change for specific values of \(t\). This is the way that the Čech complex is formed.
Note that the set of points need not lie in a particular space. Having a distance matrix of the set of points would suffice to construct the Čech complex.
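Here is a small sketch of that construction (my own code and naming; note that recording a 2-simplex whenever the three balls intersect pairwise, as done below, actually produces the closely related Vietoris–Rips complex, which is a common computable stand-in for the Čech complex):

    import numpy as np
    from itertools import combinations

    def complex_at_scale(dist, t):
        # dist: n x n matrix of pairwise distances; t: common diameter of the balls.
        # Two balls of diameter t intersect exactly when their centers are at distance <= t.
        n = dist.shape[0]
        edges = [(i, j) for i, j in combinations(range(n), 2) if dist[i, j] <= t]
        edge_set = set(edges)
        triangles = [(i, j, k) for i, j, k in combinations(range(n), 3)
                     if (i, j) in edge_set and (i, k) in edge_set and (j, k) in edge_set]
        return edges, triangles

    # Four points at the corners of a unit square:
    pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    for t in (0.5, 1.0, 1.5):
        e, tr = complex_at_scale(dist, t)
        print(t, len(e), len(tr))   # no edges, then the 4 sides, then all 6 edges and 4 triangles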
Now, for each t that makes the complex change, we can obtain the homology groups, and the Betti numbers for \(C_t\). We call this the persistent homology of the point cloud, as we are seeing how persistent the topological features for this space are.
It’s clear then, that for each hole we have the values of t for which the hole is born or dies, as eventually, for a big enough \(t\), all the balls will intersect, thus closing all the holes that may exist. This way, we relate each hole with a tuple (birth, death). We can represent this in several ways. One of them is the persistence barcode. Let’s see an example to better understand this barcode:
Ok, so there you have it! We have seen how to understand the shape of your data better by extracting topological information from it. But one thing is the theory, and another thing is applying it in practice. There is a lot of algorithmic work behind this that we haven’t shown. Lucky for us, it has already been implemented. For example, you have the package TDA for R, Javaplex for Java, or Topology Toolkit and Gudhi for C++, also usable in Python.
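To give an idea of what using one of these libraries looks like, here is roughly how a persistence computation goes with Gudhi's Python bindings (a sketch from memory — the class and method names below should be double-checked against the current Gudhi documentation):

    import numpy as np
    import gudhi

    points = np.random.default_rng(0).random((100, 2))        # a toy point cloud in the plane
    rips = gudhi.RipsComplex(points=points, max_edge_length=0.5)
    simplex_tree = rips.create_simplex_tree(max_dimension=2)
    diagram = simplex_tree.persistence()                       # list of (dimension, (birth, death)) pairs
    print(simplex_tree.betti_numbers())                        # Betti numbers of the final complex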
Well, I hope that this post, meant to be an introduction to Topology and Topological Data Analysis, has helped you understand the importance and usefulness of Topology, and has given you some new ideas to study your data. For the sake of simplicity, I’ve had to omit lots of technical details and definitions, but I hope you got a general idea, and in case you want to delve into these topics, you can ask for additional references. Of course, you are welcome to ask any questions that you may have. Now, it’s your turn to apply these ideas that you’ve learnt. Show us the results!
|
I'm trying to find solutions of the Poisson equation under Neumann conditions, and have a couple of questions. More specifically, I'm interested in the gradient of the function $\phi(x)$ in a space $\Omega \subset \mathbb{R}^d$ (note that I'm only interested in the gradient; for my problem I do not care about $\phi(x)$ itself at all). I know two things about $\phi(x)$. First, I know the Laplacian on the entire set $\Omega$: $$ \nabla^2 \phi(x)=f(x)\quad \forall \; x \in \Omega. $$ Second, the following boundary condition: $$ \nabla \phi(x)\cdot n=0 \quad \forall \; x \in \partial \Omega, $$ where $n$ is the outward unit normal to $\Omega$. As I understand it, the solution for $\phi(x)$ is given by $$ \phi(x_0)=\int_\Omega f(x)\, G(x,x_0)\, dx + \text{boundary terms} + \text{arbitrary constant}, $$ and my object of interest, the gradient of $\phi$, is given by $$ \nabla_{x_0} \phi(x_0)=\int_\Omega f(x)\, \nabla_{x_0} G(x,x_0)\, dx + \nabla_{x_0}(\text{boundary terms}), $$ where $G$ is the Green's function of my problem.
I have a couple of questions:
Does anybody know a text that works out this problem under Neumann conditions? I have seen many treatises where they look at Dirichlet conditions, but none with Neumann. I would particularly be interested in how to define the Green's function exactly.
Are my boundary terms zero (because of the rather simple boundary condition) in the problem?
I'm trying to get a feel for the Green's function in different spaces. Is defining the Green's function in this problem somehow similar to determining the appropriate bounds for integration? For example, suppose that the problem lives in only 1 dimension. In that case, the gradient of the Green's function, $\nabla_{x_0} G(x,x_0)$, should be a step function that takes value 1 for all $x$ smaller than $x_0$ and value 0 thereafter, right? To me this seems to be the only way to retrieve the standard solution for a one-dimensional problem. (A quick numerical check of this claim is sketched after these questions.)
Should it not be easier to retrieve the gradient of the Green's function (which I'm interested in), rather than the Green's function itself? Is there any text that treats this issue?
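Regarding the 1-D claim above, here is a quick numerical sanity check (a sketch; the source $f$ and all names are my own choices, with $f$ taken mean-zero so that the Neumann problem is solvable): if $\nabla_{x_0}G(x,x_0)$ is the indicator of $\{x<x_0\}$, then $\phi'(x_0)=\int_0^{x_0} f(x)\,dx$, which can be compared against a closed-form Neumann solution.

    import numpy as np
    from scipy.integrate import quad

    # On [0, 1] with phi'' = f and phi'(0) = phi'(1) = 0, take the compatible source
    #   f(x) = cos(2*pi*x),  whose Neumann solution has  phi'(x) = sin(2*pi*x) / (2*pi).
    f = lambda x: np.cos(2 * np.pi * x)

    for x0 in (0.2, 0.5, 0.9):
        # integral of f(x) * indicator(x < x0), i.e. the claimed Green's-function representation
        via_green = quad(lambda x: f(x) * (x < x0), 0.0, 1.0, points=[x0])[0]
        exact = np.sin(2 * np.pi * x0) / (2 * np.pi)
        print(x0, via_green, exact)    # the two columns agree up to quadrature error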
Many thanks for any help you can offer.
|
So, I'm doing an extensive electromagnetism homework and we are searching for the total electromagnetic angular momentum of the Thomson dipole. In the end, there is one integral we cannot solve. By "we" I mean the entire class: nobody is getting how to solve it. Professor swears there are no expansions or approximations, so...
The integral is, in spherical coordinates
$$\int_0^{2\pi} d\phi \int_0^\pi \frac{r^2(r-q\cos\theta)\cdot \sin\theta \cdot \cos\theta}{(r^2+q^2-2rq\cdot \cos\theta)^{3/2}}d\theta.$$
The solution is supposed to be $\frac{8\pi}{3}\frac{q}{r}$ for $r > q$ (q is the distance separating the monopoles). I could find only $\frac{4\pi}{3}\frac{q}{r}$, and with a negative sign. Can anybody illuminate me on a possible solution?
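For what it's worth, the stated answer can be checked numerically (a quick SciPy sketch; the sample values of $r$ and $q$ are my own choice, with $r>q$):

    import numpy as np
    from scipy.integrate import quad

    def integrand(theta, r, q):
        num = r**2 * (r - q * np.cos(theta)) * np.sin(theta) * np.cos(theta)
        den = (r**2 + q**2 - 2 * r * q * np.cos(theta))**1.5
        return num / den

    r, q = 2.0, 0.5                                  # any r > q
    val, _ = quad(integrand, 0, np.pi, args=(r, q))
    val *= 2 * np.pi                                 # the phi integral contributes a factor 2*pi
    print(val, 8 * np.pi / 3 * q / r)                # the two numbers agree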
|
Yes, such a directed graph exists, with a "quickly" computable path of length $O(\log|n-m|)$ from $m$ to $n$ for any distinct integers $m,n$.
I'll first construct a digraph with this property but with each vertex having out-degree at most $8$, then show how to get from it a digraph of out-degree $2$.
Denote by $v(x)$ the $2$-adic valuation of the integer $x$, which is the unique integer $e \geq 0$ such that $x/2^e$ is an odd integer; conventionally $v(0) = \infty$.
The graph edges emanating from $x$ will be as follows. If $x$ has valuation $e>0$ then$$x \rightarrow x \pm 1, \phantom+ x \pm 2^{e-1}, \phantom+ x \pm 2^e, \phantom+ x \pm 3\cdot 2^e.$$If $x$ is odd, omit $x \pm 2^{e-1}$ so $x$ goes only to $x \pm 1$ and $x\pm 3$. If $x=0$ then $x\mapsto \pm 1$ only.
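In code the successor rule reads as follows (a small Python sketch of my own; `(x & -x).bit_length() - 1` computes the 2-adic valuation of a nonzero integer):

    def successors(x):
        # Out-neighbours of x in the out-degree-at-most-8 graph described above.
        if x == 0:
            return [1, -1]
        e = (x & -x).bit_length() - 1          # 2-adic valuation v(x)
        if e == 0:                             # x odd
            return [x + 1, x - 1, x + 3, x - 3]
        return [x + 1, x - 1,
                x + 2**(e - 1), x - 2**(e - 1),
                x + 2**e, x - 2**e,
                x + 3 * 2**e, x - 3 * 2**e]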
Then:
(i) For each $e \geq 0$, the integers of valuation $e$ constitute an arithmetic progression of common difference $2^{e+1}$. We can get from such $x$ to $x \pm 2^{e+1}$ in two steps: if $e=0$ then add or subtract $1$ twice; if $e>0$ then add or subtract $2^{e-1}$ and then $3 \cdot 2^{e-1}$.
(ii) If $v(m) = e$ and $|n-m| \leq 2^e$ we can get from $m$ to $n$ in at most $2 \phantom. \log_2 |n-m| + 1$ steps. Indeed if $|n-m| = 2^e$ one step suffices, so assume $|n-m| < 2^e$ and then $|n-m| \in [2^d, 2^{d+1})$ with $d < e$. By symmetry we may assume $n>m$. Then$$m \rightarrow m + 1 \rightarrow m + 2 \rightarrow m+4 \rightarrow \cdots \rightarrow m+2^d$$in $d+1$ steps, and then in a further $d - v(n)$ steps we reach $n$ by writing$$n = m + 2^d + 2^{d-1} \pm 2^{d-2} \pm 2^{d-3} \pm \cdots \pm 2^{d-v(n)}.$$
(iii) If $v(m) = e < f$, and $n$ is the integer of valuation exactly $f$ that's nearest to $m$, then we can get from $m$ to $n$ in $f-e$ steps. At the $i$-th step, add either $\pm 2^{e+i-1}$ or $\pm 3 \cdot 2^{e+i-1}$ to reach a number of valuation exactly $e+i$; there are two choices, so take the one closer to $n$. The distance to $n$ always stays below $2^f$ so at step $f-e$ we hit $n$.
(iv) Finally, suppose $m$ is odd and $|n-m| \in [2^k, 2^{k+1})$. Find $n'$ of valuation $k$ such that $|n-n'| \leq 2^k$. Thus $|m-n'| < 3 \cdot 2^k$. By (iii), in $k$ steps we get from $m$ to some $n_1$ such that $|m-n_1| < 2^k$ and $v(n_1) = k$. So either $n_1=n'$ or $n_1 = n' \pm 2^{k+1}$, and in the latter case we reach $n'$ in a further two steps using (i). Then in at most $2k+1$ further steps we reach $n$ using (ii).
We are done, in a total of $3 \phantom. \log_2 |n-m| + O(1)$ steps, because if $m$ is not odd then we can use our first step to move to the odd number $m \pm 1$ closer to $n$.
Now to get from out-degree $8$ down to $2$: for each $x \in {\bf Z}$ choose $f_i(x)$ for $i=0,1,2,\ldots,7$ so that $\{f_0(x),f_1(x),f_2(x),\ldots,f_7(x)\}$ is the set of integers reachable from $x$ (with repetitions if $x$ is odd or zero). Define a new graph of out-degree $2$ as follows: write every integer as $8x+i$ for some $x \in \bf Z$ and $i\in\{0,1,2,\ldots,7\}$, and then $8x+i \rightarrow 8f_i(x)$ and $8x+i \rightarrow 8x+i'$ where $i'=0$ if $i=7$ and $i'=i+1$ otherwise. That is, we partition $\bf Z$ into directed $8$-cycles, which consumes only one outgoing edge per integer and leaves us with $8$ further edges per cycle to assign as we wish, letting us use our graph of out-degree $8$. Each step of our algorithm then expands to at most $8$, because we might have to take as many as $7$ steps cycling around until we reach the correct $i$ to make the next step in our original graph, and then at the end we might still have to spend an extra $7$ steps to reach the target $n$ within its cycle. But that's still at most $24 \phantom. \log_2 |n-m| + O(1)$ steps, and still with an explicit and "quick" algorithm.
[EDIT: we can reduce $3 \phantom. \log_2 |n-m| + O(1)$ to $\frac52 \log_2 |n-m| + O(1)$ by compressing$$m \rightarrow m + 1 \rightarrow m + 2 \rightarrow m+4 \rightarrow \cdots \rightarrow m+2^d$$to$$m \rightarrow m + 1 \rightarrow m + 4 \rightarrow m+16 \rightarrow \cdots \rightarrow m+2^d,$$and there's probably another multiple of $\log_2|n-m|$ to be saved by permuting $f_0(x),f_1(x),f_2(x),\ldots,f_7(x)$ to reduce the number of steps spent within directed $8$-cycles.]
|
Search response: 6136 publications match your query. Listing starts with latest publication first: (1 - 10) MPP-2019-203 Determination of jet calibration and energy resolution in proton-proton collisions at $\sqrt{s}$ = 8 TeV using the ATLAS detector, ATLAS Collaboration,, arxiv:1910.04482 (abs), (pdf), (ps), CERN-EP-2019-057, inSPIRE entry.
[ATLAS], [Article]
MPP-2019-201 Test of High-Resolution Muon Drift-Tube Chambers for the Upgrade of the ATLAS Experiment, Šejla Hadžić, (Full text), TU München, Garching (02102019).
[ATLAS], [Thesis]
MPP-2019-199 Measurement of $J/ψ$ production in association with a $W^\pm$ boson with $pp$ data at 8 TeV, ATLAS Collaboration,, arxiv:1909.13626 (abs), (pdf), (ps), CERN-EP-2018-352, inSPIRE entry.
[ATLAS], [Article]
MPP-2019-196 Measurement of the $t\bar{t}$ production cross-section in the lepton+jets channel at $\sqrt{s}=13$ TeV with the ATLAS experiment, The ATLAS collaboration, ATLAS-CONF-2019-044, (External full text link).
[ATLAS], [Article]
MPP-2019-195 Observation of the associated production of a top quark and a Z boson in $pp$ collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector, The ATLAS collaboration, ATLAS-CONF-2019-043, (External full text link).
[ATLAS], [Article]
MPP-2019-194 Search for the Higgs boson decays $H \to ee$ and $H \to eμ$ in $pp$ collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector, ATLAS Collaboration,, arxiv:1909.10235 (abs), (pdf), (ps), CERN-EP-2019-184, inSPIRE entry.
[ATLAS], [Article]
MPP-2019-193 Measurements of inclusive and differential cross-sections of $t\bar{t}\gamma$ production in the $e\mkern-2mu\mu$ channel at 13 TeV with the ATLAS detector, The ATLAS collaboration, ATLAS-CONF-2019-042, (External full text link).
[ATLAS], [Article]
MPP-2019-192 Search for direct production of electroweakinos in final states with one lepton, missing transverse momentum and a Higgs boson decaying into two $b$-jets in (pp) collisions at $\sqrt{s}=13$ TeV with the ATLAS detector, ATLAS Collaboration,, arxiv:1909.09226 (abs), (pdf), (ps), CERN-EP-2019-188, inSPIRE entry.
[ATLAS], [Article]
MPP-2019-191 Search for squarks and gluinos in final states with same-sign leptons and jets using 139 fb$^{-1}$ of data collected with the ATLAS detector, ATLAS Collaboration,, arxiv:1909.08457 (abs), (pdf), (ps), CERN-EP-2019-161, inSPIRE entry.
[ATLAS], [Article]
MPP-2019-190 Combined measurements of Higgs boson production and decay using up to $80$ fb$^{-1}$ of proton-proton collision data at $\sqrt{s}=$ 13 TeV collected with the ATLAS experiment, ATLAS Collaboration,, arxiv:1909.02845 (abs), (pdf), (ps), CERN-EP-2019-097, inSPIRE entry.
[ATLAS], [Article]
|
NDSolve can easily handle the PDE system $$\partial_t y = \partial_z w \quad\quad \partial_t w = \partial_z y $$ along with initial-boundary conditions $$w(t,0)=w(t,-1)=0\quad\quad w(0,z)=-\sin^2(\pi z)\quad\quad y(0,z)=1$$ for $(t,z)\in[0,1]\times[-1,0]$.
Let's call this problem A.
The above PDE system can be altered to $$\partial_t y = \partial_z w \quad\quad \partial_t w = (1+x)\partial_z y \quad \quad \partial_z x=w, $$ the initial-boundary conditions $$w(t,0)=w(t,-1)=0\quad\quad w(0,z)=-\sin^2(\pi z)\quad\quad y(0,z)=1$$ being supplemented by $$ x(t,0)=0 \quad\quad x(0,z)=x_0(z), $$ where $x_0(z)$ is the solution of $$x_0'(z)=w(0,z)\quad\quad x_0(0)=0.$$
Let's call this problem B.
From a mathematical point of view, if boundary conditions for $w$ and $y$ suffice for problem A, then they should also be sufficient for problem B; and of course the supplementary boundary condition for $x$ is enough.
However, Mathematica warns that "an insufficient number of boundary conditions have been specified" and as a result "Artificial boundary effects may be present in the solution".
So there must be something wrong with my code
(*Constructing initial conditions*)
wo[z_] := -Sin[z*π]^2
yo[z_] := 1
s = NDSolve[{x'[z] == wo[z], x[0] == 0}, {x}, {z, 0, -1}]
xo[z_] := First[x[z] /. s]

(*Evolution of initial conditions towards t=1*)
equations := {D[yt[t, z], t] == D[wt[t, z], z],
   D[wt[t, z], t] == (xt[t, z] + 1)*D[yt[t, z], z],
   D[xt[t, z], z] == wt[t, z],
   wt[0, z] == wo[z], yt[0, z] == yo[z],
   wt[t, 0] == 0, wt[t, -1] == 0,
   xt[t, 0] == 0, xt[0, z] == xo[z]}
system = NDSolve[equations, {wt, yt, xt}, {z, 0, -1}, {t, 0, 1}]
I cannot see what went wrong. Can anyone help?
|
Kepler's laws say that planets follow elliptical orbits around the sun, with the sun at a focus. In other words, the distance $r$ between a planet and the sun is given by $$r = \frac{ed}{1-e\cos(\theta-\theta_0)}.$$
Most planets have very low eccentricity, and their orbits are almost circles. The earth's eccentricity is only 0.0167, or 1.67%, so the distance to the sun at aphelion (around July 4) is about 3.3% farther than at perihelion (around January 3). As a result, the earth receives about 6.5% more sunlight in January than in July! [Note: seasons are caused by the tilt of the earth's axis, not by the eccentricity of the orbit.]
|
Unless I'm overlooking something:
The groups must be conjugate in $\operatorname{Aut}(\mathbb{H})$.
Say we have two hyperbolic surfaces $S,\, T$ with their universal covers $\pi_S,\, \pi_T$ and a biholomorphism $\varphi \colon S \to T$. Choosing a point $s \in S$ and $\sigma \in \mathbb{H}$ above $s$, and a point $\tau \in \mathbb{H}$ above $t = \varphi(s)$, there is a unique lift of $\varphi$ to $\Phi \colon \mathbb{H} \to \mathbb{H}$ with $\Phi(\sigma) = \tau$ such that the diagram
$$\begin{matrix}\hphantom{XY}\mathbb{H} & \overset{\Phi}{\longrightarrow} & \mathbb{H}\hphantom{XY}\\\pi_S\downarrow & & \downarrow\pi_T\\\hphantom{XY}S & \underset{\varphi}{\longrightarrow} & T\hphantom{XY}\end{matrix}$$
is commutative. Now let $\delta$ be a deck transformation of $\pi_S$. Then $\pi_T \circ \Phi \circ \delta = \varphi \circ \pi_S \circ \delta = \varphi \circ \pi_S$, hence $\Phi \circ \delta$ is the unique lift of $\varphi$ that maps $\sigma$ to $\Phi(\delta(\sigma))$. Let $\hat{\delta}$ be the deck transformation of $\pi_T$ that maps $\tau$ to $\Phi(\delta(\sigma))$. Then we see that $\Phi \circ \delta = \hat{\delta} \circ \Phi$, or $\hat{\delta} = \Phi \circ \delta \circ \Phi^{-1}$. That is, conjugation with $\Phi$ gives an isomorphism $\operatorname{Deck}(\pi_S) \to \operatorname{Deck}(\pi_T)$.
Conversely, if we have $\operatorname{Deck}(\pi_T) = \Phi\operatorname{Deck}(\pi_S) \Phi^{-1}$ for a $\Phi \in \operatorname{Aut}(\mathbb{H})$, we can push down $\Phi$ to a biholomorphism $\varphi$ between $S$ and $T$.
|
Existence of sign changing solutions for some critical problems on $\mathbb R^N$
1.
Department of Mathematics, Yokohama National University, 156 Tokiwadai Hodogaya-ku - Yokohama, Japan
2.
Dipartimento di Matematica Applicata "U.Dini", Università di Pisa, Via Bonanno 25B - 56126 Pisa, Italy
3.
Dipartimento di Metodi e Modelli Matematici, Università di Roma "La Sapienza", Via Scarpa, 16 - 00166 Roma
$-\Delta u+V(x)u=N(N-2)|u|^{\frac{4}{N-2}\pm\varepsilon}u$ in $\mathbb R^N,$
which blow-up and concentrate at different points of $\mathbb R^N$ as $\varepsilon$ goes to 0, under certain conditions on the potential $V.$
Mathematics Subject Classification:35J6. Citation:Norimichi Hirano, A. M. Micheletti, A. Pistoia. Existence of sign changing solutions for some critical problems on $\mathbb R^N$. Communications on Pure & Applied Analysis, 2005, 4 (1) : 143-164. doi: 10.3934/cpaa.2005.4.143
|
Fujimura's problem
Let [math]\overline{c}^\mu_n[/math] be the size of the largest subset of the triangular grid
[math]\Delta_n := \{ (a,b,c) \in {\Bbb Z}_+^3: a+b+c=n \}[/math]
which contains no equilateral triangles [math](a+r,b,c), (a,b+r,c), (a,b,c+r)[/math] with [math]r \gt 0[/math]; call such sets
triangle-free. (It is an interesting variant to also allow negative r, thus allowing "upside-down" triangles, but this does not seem to be as closely connected to DHJ(3).) Fujimura's problem is to compute [math]\overline{c}^\mu_n[/math]. This quantity is relevant to a certain hyper-optimistic conjecture.
n: 0 1 2 3 4 5 6 7 8 9 10 11 12
[math]\overline{c}^\mu_n[/math]: 1 2 4 6 9 12 15 18 22 26 31 35 40

n=0
[math]\overline{c}^\mu_0 = 1[/math]:
This is clear.
n=1
[math]\overline{c}^\mu_1 = 2[/math]:
This is clear.
n=2
[math]\overline{c}^\mu_2 = 4[/math]:
This is clear (e.g. remove (0,2,0) and (1,0,1) from [math]\Delta_2[/math]).
n=3 [math]\overline{c}^\mu_3 = 6[/math]:
For the lower bound, delete (0,3,0), (0,2,1), (2,1,0), (1,0,2) from [math]\Delta_3[/math].
For the upper bound: observe that with only three removals each of these (non-overlapping) triangles must have one removal:
set A: (0,3,0) (0,2,1) (1,2,0)
set B: (0,1,2) (0,0,3) (1,0,2)
set C: (2,1,0) (2,0,1) (3,0,0)
Consider choices from set A:
(0,3,0) leaves triangle (0,2,1) (1,2,0) (1,1,1)
(0,2,1) forces a second removal at (2,1,0) [otherwise there is triangle at (1,2,0) (1,1,1) (2,1,0)] but then none of the choices for third removal work
(1,2,0) is symmetrical with (0,2,1)

n=4 [math]\overline{c}^\mu_4=9[/math]:
The set of all [math](a,b,c)[/math] in [math]\Delta_4[/math] with exactly one of a,b,c =0, has 9 elements and is triangle-free. (Note that it does contain the equilateral triangle (2,2,0),(2,0,2),(0,2,2), so would not qualify for the generalised version of Fujimura's problem in which [math]r[/math] is allowed to be negative.)
Let [math]S\subset \Delta_4[/math] be a set without equilateral triangles. If [math](0,0,4)\in S[/math], there can only be one of [math](0,x,4-x)[/math] and [math](x,0,4-x)[/math] in S for [math]x=1,2,3,4[/math]. Thus there can only be 5 elements in S with [math]a=0[/math] or [math]b=0[/math]. The set of elements with [math]a,b\gt0[/math] is isomorphic to [math]\Delta_2[/math], so S can have at most 4 elements in this set. So [math]|S|\leq 4+5=9[/math]. The argument is similar if S contains (0,4,0) or (4,0,0). So if [math]|S|\gt9[/math], S doesn’t contain any of these. Also, S can’t contain all of [math](0,1,3), (0,3,1), (2,1,1)[/math]. Similarly for [math](3,0,1), (1,0,3),(1,2,1)[/math] and [math](1,3,0), (3,1,0), (1,1,2)[/math]. So now we have found 6 elements not in S, but [math]|\Delta_4|=15[/math], so [math]|S|\leq 15-6=9[/math].
Remark: curiously, the best constructions for [math]c_4[/math] use only 7 points instead of 9.

n=5 [math]\overline{c}^\mu_5=12[/math]:
The set of all (a,b,c) in [math]\Delta_5[/math] with exactly one of a,b,c=0 has 12 elements and doesn’t contain any equilateral triangles.
Let [math]S\subset \Delta_5[/math] be a set without equilateral triangles. If [math](0,0,5)\in S[/math], there can only be one of (0,x,5-x) and (x,0,5-x) in S for x=1,2,3,4,5. Thus there can only be 6 elements in S with a=0 or b=0. The set of elements with a,b>0 is isomorphic to [math]\Delta_3[/math], so S can have at most 6 elements in this set. So [math]|S|\leq 6+6=12[/math]. The argument is similar if S contains (0,5,0) or (5,0,0). So if |S| > 12, S doesn’t contain any of these. S can only contain 2 points in each of the following equilateral triangles:
(3,1,1),(0,4,1),(0,1,4)
(4,1,0),(1,4,0),(1,1,3)
(4,0,1),(1,3,1),(1,0,4)
(1,2,2),(0,3,2),(0,2,3)
(3,2,0),(2,3,0),(2,2,1)
(3,0,2),(2,1,2),(2,0,3)
So now we have found 9 elements not in S, but [math]|\Delta_5|=21[/math], so [math]|S|\leq 21-9=12[/math].
n=6 [math]15 \leq \overline{c}^\mu_6 \leq 17[/math]:
[Incomplete: need to add rotations of solution II.]
[math]15 \leq \overline{c}^\mu_6[/math] from the bound for general n.
Note that there are ten extremal solutions to [math] \overline{c}^\mu_3 [/math]:
Solution I: remove 300, 020, 111, 003
Solution II (and 2 rotations): remove 030, 111, 201, 102
Solution III (and 2 rotations): remove 030, 021, 210, 102
Solution III' (and 2 rotations): remove 030, 120, 012, 201
Also consider the same triangular lattice with the point 020 removed, making a trapezoid. Solutions based on I-III are:
Solution IV: remove 300, 111, 003
Solution V: remove 201, 111, 102
Solution VI: remove 210, 021, 102
Solution VI': remove 120, 012, 201
Suppose we can remove all equilateral triangles on our 7×7×7 triangular lattice with only 10 removals.
The triangle 141-411-114 must have at least one point removed. Remove 141, and note because of symmetry any logic that follows also applies to 411 and 114.
There are three disjoint triangles 060-150-051, 240-231-330, 042-132-033, so each must have a point removed.
(Now only six removals remaining.)
The remainder of the triangle includes the overlapping trapezoids 600-420-321-303 and 303-123-024-006. If the solutions of these trapezoids come from V, VI, or VI', then 6 points have been removed. Suppose the trapezoid 600-420-321-303 uses the solution IV (by symmetry the same logic will work with the other trapezoid). Then there are 3 disjoint triangles 402-222-204, 213-123-114, and 105-015-006. Then 6 points have been removed. Therefore the remaining six removals must all come from the bottom three rows of the lattice.
Note this means the "top triangle" 060-330-033 must have only four points removed so it must conform to solution either I or II, because of the removal of 141.
Suppose the solution of the trapezoid 600-420-321-303 is VI or VI'. Both solutions I and II on the "top triangle" leave 240 open, and hence the equilateral triangle 240-420-222 remains. So the trapezoid can't be VI or VI'.
Suppose the solution of the trapezoid 600-420-321-303 is V. This leaves an equilateral triangle 420-321-330 which forces the "top triangle" to be solution I. This leaves the equilateral triangle 201-321-222. So the trapezoid can't be V.
Therefore the solution of the trapezoid 600-420-321-303 is IV. Since the disjoint triangles 402-222-204, 213-123-114, and 105-015-006 must all have points removed, that means the remaining points in the bottom three rows (420, 321, 510, 501, 312, 024) must be left open. 420 and 321 force 330 to be removed, so the "top triangle" is solution I. This leaves triangle 321-024-051 open, and we have reached a contradiction.
[math]15 \leq \overline{c}^\mu_6 \leq 16[/math]:
Here, "upper triangle" means the first four rows (with 060 at top) and "lower trapezoid" means the bottom three rows.
Suppose 11 removals leave a triangle-free set.
First, suppose that 5 removals come from the upper triangle and 6 come from the lower trapezoid.
Suppose the trapezoid 600-420-321-303 used solution IV. There are three disjoint triangles 402-222-204, 213-123-114, and 105-015-006. The remainder of the points in the lower trapezoid (420, 321, 510, 501, 402, 312, 024) must be left open. 024 being open forces either 114 or 015 to be removed.
Suppose 114 is removed. Then 213 is open, and with 312 open that forces 222 to be removed. Then 204 is open, and with 024 that forces 006 to be removed. So the bottom trapezoid is a removal configuration of 600-411-303-222-114-006, and the rest of the points in the bottom trapezoid are open. All 10 points in the upper triangle form equilateral triangles with bottom trapezoid points, hence 10 removals in the upper triangle would be needed, so 114 being removed doesn't work.
Suppose 015 is removed. Then 006-024 forces 204 to be removed. Regardless of where the removal in 123-213-114, the points 420, 321, 222, 024, 510, 312, 501, 402, 105, and 006 must be open. This forces upper triangle removals at 330, 231, 042, 060, 051, 132, which is more than the 5 allowed, so 015 being removed doesn't work, so the trapezoid 600-420-321-303 doesn't use solution IV.
Suppose the trapezoid 600-420-321-303 uses solution VI. The trapezoid 303-123-024-006 can't be IV (already eliminated by symmetry) or VI' (leaves the triangle 402-222-204). Suppose the trapezoid 303-123-024-006 is solution VI. The removals from the lower trapezoid are then 420, 501, 312, 123, 204, and 015, leaving the remaining points in the lower trapezoid open. The remaining open points force 10 upper triangle removals, so the trapezoid 600-420-321-303 doesn't use solution VI. Therefore the trapezoid 303-123-024-006 is solution V. The removals from the lower trapezoid are then 420, 510, 312, 204, 114, and 105. The remaining points in the lower trapezoid are open, and force 9 upper triangle removals, hence the trapezoid 303-123-024-006 can't be V, and the solution for 600-420-321-303 can't be VI.
The solution VI' for the trapezoid 600-420-321-303 can be eliminated by the same logic by symmetry.
Therefore it is impossible for 5 removals to come from the upper triangle and 6 from the lower trapezoid. Therefore 4 removals come from the upper triangle and 7 come from the lower trapezoid.
At this point note the triangle 141-411-114 must have one point removed, so let it be 141 and note that any logic that follows is also true for a removal of 411 or 114 by symmetry.
This implies the upper triangle must have either solution I or II.
Suppose it has solution II. Note there are five disjoint triangles 600-510-501, 411-321-312, 402-222-204, 213-123-114, and 105-015-006.
Suppose 420 and 024 are removed. Then, noting 303 must be open, 600 must be removed, leaving 510 open. 510-240 forces 213 to be removed, and 510-150 forces 114 to be removed. But 213 and 114 are in the same disjoint triangle. Hence 420 and 024 can't both be removed.
So at least one of 420 or 024 is open. Let it be 420, noting that by symmetry identical logic will apply if it is 024 instead. Then 321, 222, and 123 are removed based on 420 and the open spaces in the upper triangle. This leaves four disjoint triangles 600-501-510, 402-303-312, 213-033-015, 204-114-105. So 411 and 420 are open, forcing the removal of 510. This leaves 501 open, and 501-411 forces the removal of 402. 600, 303, and 330 are then open, forming an equilateral triangle. Therefore 420 isn't open, and therefore the upper triangle can't have solution II.
Therefore the upper triangle has solution I.
Suppose 222 is open. 222 with open points in the upper triangle force 420, 321, 123, and 024 to be removed. This leaves four disjoint triangles 411-501-402, 213-303-204, 015-105-006, and 132-312-114. This would force 8 removals in the lower trapezoid, so 222 must be closed.
Therefore 222 is removed. There are six disjoint triangles 150-420-123, 051-321-024, 231-501-204, 132-402-105, 510-150-114, and 312-042-015. So 600, 411, 393, 114, and 006 are open. 600-240 open forces 204 to be removed and 600-150 open forces 105 to be removed. This forces 501 and 402 to be open, but 411 is open, so there is the equilateral triangle 501-411-402.
Therefore the solution of the upper triangle is not I, and we have a contradiction. So [math] \overline{c}^\mu_6 \neq 17 [/math].
n = 7
[math]\overline{c}^\mu_{7} \leq 22[/math]:
Using the same ten extremal solutions to [math] \overline{c}^\mu_3 [/math] as previous proofs:
Solution I: remove 300, 020, 111, 003
Solution II (and 2 rotations): remove 030, 111, 201, 102
Solution III (and 2 rotations): remove 030, 021, 210, 102
Solution III' (and 2 rotations): remove 030, 120, 012, 201
Suppose the 8×8×8 lattice can be made triangle-free with only 13 removals.
Slice the lattice into region A (070-340-043), region B (430-700-403) and region C (034-304-007). Each region must have at least 4 points removed. Note there is an additional disjoint triangle 232-322-223 that also must have a point removed. Therefore the points 331, 133, and 313 are open. 331-313 open means 511 must be removed, 331-133 open means 151 must be removed, and 133-313 open means 115 must be removed. Based on the three removals, the solutions for regions A, B, and C must be either I or II. All possible combinations for the solutions leave several triangles open (for example 160-520-124). So we have a contradiction, and [math] \overline{c}^\mu_7 \leq 22 [/math].
n = 8
[math]\overline{c}^\mu_{8} \geq 22[/math]:
008,026,044,062,107,125,134,143,152,215,251,260,314,341,413,431,440,512,521,620,701,800
n = 9
[math]\overline{c}^\mu_{9} \geq 26[/math]:
027,045,063,081,126,135,144,153,207,216,252,270,315,342,351,360,405,414,432,513,522,531,603,630,720,801
n = 10

Computer data
From integer programming, we have
n=3, maximum 6 points, 10 solutions
n=4, maximum 9 points, 1 solution
n=5, maximum 12 points, 1 solution
n=6, maximum 15 points, 4 solutions
n=7, maximum 18 points, 85 solutions
n=8, maximum 22 points, 72 solutions
n=9, maximum 26 points, 183 solutions
n=10, maximum 31 points, 6 solutions
n=11, maximum 35 points, 576 solutions
n=12, maximum 40 points, 876 solutions
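For small n these values are easy to reproduce with a direct search. The following Python sketch (my own brute force, practical only up to roughly n = 5; the integer-programming runs above scale much further) enumerates the upward triangles of [math]\Delta_n[/math] and looks for the largest triangle-free subset.

    from itertools import combinations

    def fujimura(n):
        # Points of Delta_n and the upward equilateral triangles they can form.
        pts = [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]
        tris = []
        for a in range(n + 1):
            for b in range(n + 1 - a):
                for r in range(1, n + 1 - a - b):
                    c = n - a - b - r
                    tris.append(((a + r, b, c), (a, b + r, c), (a, b, c + r)))
        # Search from the largest conceivable size downwards.
        for size in range(len(pts), 0, -1):
            for subset in combinations(pts, size):
                s = set(subset)
                if not any(t[0] in s and t[1] in s and t[2] in s for t in tris):
                    return size
        return 0

    for n in range(5):
        print(n, fujimura(n))   # 1, 2, 4, 6, 9 -- matching the table above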
General n

A lower bound for [math]\overline{c}^\mu_n[/math] is 2n for [math]n \geq 1[/math], by removing (n,0,0), the triangle (n-2,1,1) (0,n-1,1) (0,1,n-1), and all points on the edges of and inside the same triangle. In a similar spirit, we have the lower bound
[math]\overline{c}^\mu_{n+1} \geq \overline{c}^\mu_n + 2[/math]
for [math]n \geq 1[/math], because we can take an example for [math]\overline{c}^\mu_n[/math] (which cannot be all of [math]\Delta_n[/math]) and add two points on the bottom row, chosen so that the triangle they form has third vertex outside of the original example.
An asymptotically superior lower bound for [math]\overline{c}^\mu_n[/math] is 3(n-1), made of all points in [math]\Delta_n[/math] with exactly one coordinate equal to zero.
A trivial upper bound is
[math]\overline{c}^\mu_{n+1} \leq \overline{c}^\mu_n + n+2[/math]
since deleting the bottom row of a equilateral-triangle-free-set gives another equilateral-triangle-free-set. We also have the asymptotically superior bound
[math]\overline{c}^\mu_{n+2} \leq \overline{c}^\mu_n + \frac{3n+2}{2}[/math]
which comes from deleting two bottom rows of a triangle-free set and counting how many vertices are possible in those rows.
Another upper bound comes from counting the triangles. There are [math]\binom{n+2}{3}[/math] triangles, and each point belongs to n of them. So you must remove at least (n+2)(n+1)/6 points to remove all triangles, leaving (n+2)(n+1)/3 points as an upper bound for [math]\overline{c}^\mu_n[/math].
Asymptotics
The corners theorem tells us that [math]\overline{c}^\mu_n = o(n^2)[/math] as [math]n \to \infty[/math].
By looking at those triples (a,b,c) with a+2b inside a Behrend set, one can obtain the lower bound [math]\overline{c}^\mu_n \geq n^2 \exp(-O(\sqrt{\log n}))[/math].
|
I raised a related question but hope to get some answer using the nonvanishing 2 form definition.
Let P be the real projective plane obtained by identifying antipodal points on the unit sphere of $R^3$.
How can one prove that P is nonorientable by showing that any 2-form on P must vanish somewhere?
My idea is to consider the closed curve $a(t)=(\cos t,\sin t, 0)$ , $0 \leq t \leq \pi$
This curve is closed in P and the tangent vectors $a'(0)$ and $a'(\pi)$ are identical.
However, for a vector field V on $a(t)$ defined by $V(a(t))=(0,0,1)$, the tangent vectors at $a(0)$ and $a(\pi)$ differ by a sign. Also, V and $a'(t)$ are always linearly independent.
For any 2 form $u$ on P,
$u(a'(0), V(a(0)))=-u(a'(\pi), V(a(\pi)))$. Therefore $u$ must vanish somewhere on the curve.
Is the constant vector field V continuous on the curve? Are my arguments right?
|
Rankings are everywhere. They are sometimes useful and, at other times, contradictory. In such a case, we need to come up with a consensus ranking but… how do we evaluate ranking consensus?
The other day I was reading about something called rank aggregation, which is just a fancy name for combining preferences expressed through rankings.
I bet you know that rankings are everywhere. The page Google returns after a search is a ranking, as are the ratings we leave for a restaurant in TripAdvisor or a movie in IMDB. Of course, they are not constrained to the interwebs: when we vote in an election, we rank the candidates; when you’re chosen for a job after that interview, you were sitting at the top of their rank.
Almost every selection involves a ranking process, and selections are ubiquitous in finance as well. Nonetheless, one investor's preferences (rankings) may differ from another's. Even if none of them knows the absolute truth, as long as they are well-respected experts we could benefit from pooling their opinions and coming up with a new, hopefully more truthful, ranking. This is what rank aggregation is all about, and Machine-Learning people like to see this as a type of ensemble method.
A couple of weeks ago I stumbled on a great blog post about one of the best-known techniques to find these consensus rankings: the so-called Kemeny-Young optimal rank aggregation. The goal of this technique is to come up with a new ranking that minimizes the aggregated Kendall tau distance to every other ranking. In the post, a Python function to compute the Kendall tau distance according to the naive algorithm was provided. The time complexity of this algorithm is quadratic in the length of the rankings but, given the similarity of the problem with that of ordering, I thought something close to linearithmic ought to exist (and, indeed, it does).
Confusion on top of confusion
I turned to Google to look for a more efficient Python implementation and, to my surprise, I found that the existence of the closely related Kendall rank correlation coefficient, sometimes referred to as the Kendall tau coefficient (not distance), was causing a lot of confusion on the net. At the time of writing, this distinction seems clear, but confusion is still everywhere, even in those places where people usually go to shake confusion off, namely Stack Exchange questions or GitHub issues. Confusion has even led to a new function implementation but, worst of all, quoting the Wikipedia article about the Kendall tau distance, as done in this comment, only worsens the situation.
So, what’s going on? Is Wikipedia wrong? Are there two alternative definitions of the Kendall tau distance? Not really. After ordering my thoughts I came upon another source of confusion, which I will try to clarify for you through an example.
A simple example
Let’s assume two experts must evaluate three planets (red, yellow, and blue) according to how favourable they are to establish a future human colony. In order to collect their opinions, they are asked to fill the following spreadsheet (yes, I know, spreadsheets are as ubiquitous as rankings):
We could say (special attention here) that each expert has been asked to provide a ranking list. An equivalent alternative could have been asking the experts for a ranked list of items (planets, in this case) by filling in this spreadsheet:
As you have probably noticed, the same preferences can be expressed with any of the formats:
and note that we could even change the order of the columns in Case 1 and it would not matter.
Now, see what happens when we compute the Kendall tau distance between the rankings in the three cases according to some of the equivalent definitions available:
1. Number of pairwise disagreements between two ranking lists (number of pairs that are in different order in the two rankings):
Case 1: Both experts disagree in the order of a single pair, (red, blue), so the distance is 1.
Case 1b: Column ordering does not matter and, again, both experts disagree in the order of the (red, blue) pair. Distance is 1.
Case 2: (red, blue) are in different order. Distance is 1.
2. Number of swaps that the bubble sort algorithm would take to place one list in the same order as the other list (minimum number of adjacent transpositions needed to transform one ranking into the other):
Case 1: Three swaps are required, (3<->1), (3<->2) and (1<->2). Distance seems to be 3.
Case 1b: Swapping 3 and 2 suffices to match both rankings. Distance is 1.
Case 2: Swapping red and blue suffices to match both rankings. Distance is 1.
From these results, it is clear that the second definition cannot be applied reliably to ranking lists (Case 1). The reason why it works for Case 1b is that Expert 1’s preferences were already in their natural order. When this happens, the second definition also works and indeed gives rise to yet another usually considered definition: the number of inversions in the unordered ranking.
Computing distance from correlation coefficient
At this point, you may be thinking that the confusion I told you about at the beginning was actually reasonable. I agree. But, fortunately, there is an easier way. To avoid column ordering issues just plot one ranking against the other and measure how linear this scatter plot looks by using the Kendall correlation coefficient, \(\rho_\tau\).
Once you have done that, since both the Kendall correlation coefficient and the Kendall tau distance depend on the number of discordant pairs in the ranking, you can rely on the definition of the correlation coefficient to work out the Kendall tau distance:
$$\rho_\tau = \frac{\text{number of concordant pairs}-\text{number of discordant pairs}}{n(n-1)/2},$$
where \(n\) is the number of items in the list. Then, the Kendall tau distance \(d_{\tau} \) can be computed as:
$$d_{\tau}=\text{number of discordant pairs}=\frac{n(n-1)}{2}\frac{(1-\rho_\tau)}{2}.$$
In our example, $$\frac{3(3-1)}{2}\frac{(1-1/3)}{2}=1.$$
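Putting this recipe into code, here is a minimal Python sketch using scipy.stats.kendalltau (assuming no ties); the planet rankings are hypothetical values chosen only to reproduce the example distance of 1.

from scipy.stats import kendalltau

expert1 = [2, 1, 3]   # hypothetical positions of (red, yellow, blue) for expert 1
expert2 = [3, 1, 2]   # hypothetical positions of (red, yellow, blue) for expert 2

rho_tau, _ = kendalltau(expert1, expert2)        # 1/3, as in the example
n = len(expert1)
d_tau = n * (n - 1) / 2 * (1 - rho_tau) / 2      # number of discordant pairs
print(rho_tau, d_tau)                            # prints 0.333..., 1.0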
Much simpler this way, right? See you next time.
|
Astrid the astronaut is floating in a grid. Each time she pushes off she keeps gliding until she collides with a solid wall, marked by a thicker line. From such a wall she can propel herself either parallel or perpendicular to the wall, but always travelling directly \(\leftarrow, \rightarrow, \uparrow, \downarrow\). Floating out of the grid means death.
In this grid, Astrid can reach square Y from square ✔. But if she starts from square ✘ there is no wall to stop her and she will float past Y and out of the grid.
In this grid, from square X Astrid can float to three different squares with one push (each is marked with an *). Push \(\leftarrow\) is not possible from X due to the solid wall to the left. From X it takes three pushes to stop safely at square Y, namely \(\downarrow, \rightarrow, \uparrow\). The sequence \(\uparrow, \rightarrow\) would have Astrid float past Y and out of the grid.
Question:
In the following grid, what is the least number of pushes that Astrid can make to safely travel from X to Y?
|
Infinitary logic
Infinitary logic is a type of logic which is used in the standard characterizations of several large cardinals, such as weakly compact cardinals and strongly compact cardinals. It also is used in alternate characterizations of other large cardinals such as supercompact cardinals and extendible cardinals.
More formally, an infinitary logic is a formal logic which has strings of infinite length. Generally, there is only one type of infinitary logic which is classically studied: Hilbert-type infinitary logic.
Hilbert-Type Infinitary Logic
The idea behind the infinitary logic $\mathcal{L}_{\kappa,\lambda}$ is that you can have fewer than $\kappa$-many logical additions and logical products in a row, and fewer than $\lambda$-many quantifiers in a row. This is called Hilbert-type infinitary logic. You can also use $(n+1)$-th order quantifiers in $\mathcal{L}_{\kappa,\lambda}^n$.
Formal Definition
Let $\kappa$ and $\lambda$ be regular cardinals. Then, $\mathcal{L}_{\kappa,\lambda}$ allows for all first-order finitary assertions (made in $\mathcal{L}_{\omega,\omega}$) along with:
For any set of $\mathcal{L}_{\kappa,\lambda}$-formulae $P$ where $|P|<\kappa$: $\bigwedge_{\varphi\in P} \varphi$ and $\bigvee_{\varphi\in P}\varphi$
For any set of variables $A$ where $|A|<\lambda$: $\forall_{v\in A}\varphi$ and $\exists_{v\in A}\varphi$, where $\varphi$ is an $\mathcal{L}_{\kappa,\lambda}$-formula
For any $\varphi$ which is an $\mathcal{L}_{\kappa,\lambda}$-formula: $\neg\varphi$
There is also $\mathcal{L}_{\infty,\lambda}$, $\mathcal{L}_{\kappa,\infty}$, and $\mathcal{L}_{\infty,\infty}$ where $\infty$ is treated like $\text{ORD}$, allowing for statements of any ordinal length. You can even have $\mathcal{L}_{\infty^+,\infty^+}$ which allows for $\text{ORD}$-length statements.
Expressiveness
$\mathcal{L}_{\kappa,\kappa}$ is unable to express some $\Pi_1^1$-formulae under ZFC. In contrast, $\mathcal{L}^1_{\omega,\omega}$ is unable to express some formulae of $\mathcal{L}_{\omega_1,\omega_1}$, so first-order infinitary logic and second-order finitary logic each have expressiveness advantages over the other. For why, see this question on MathOverflow.
|
Beatty Sequences
Let $r$ be a positive irrational number. Set $s = 1/r,$ and define two sequences: $a_{n} = n(r + 1)$ and $b_{n} = n(s + 1)$, $n \gt 0.$
Obviously, all terms of both sequences are irrational. In particular, none of them is integer. A remarkable theorem discovered by Sam Beatty in 1926 states that, for any integer $N,$ there is exactly one element from the union $\{a_{n}\}\cup \{b_{n}\}$ that lies in the interval $(N, N + 1)$.
This property is very remarkable for the following reason. By definition, for a non-integer $\alpha,$ $N \lt \alpha \lt (N + 1)$ is the same as $\lfloor \alpha\rfloor = N.$ Thus Beatty's theorem asserts that the union of sequences of whole parts $\{\lfloor a_{n}\rfloor\}$ and $\{\lfloor b_{n}\rfloor\}$ covers the set of natural numbers $\mathbb{N} = \{1, 2, 3, \ldots\}.$ It's a simple matter to show that Beatty's sequences $\{a_{n}\}$ and $\{b_{n}\}$ do not intersect. But more than that, no two of their combined terms fall into the same interval $(N,N+1),\;$ with $N\;$ an integer:
$\{\lfloor a_{n}\rfloor\}\cup \{\lfloor b_{n}\rfloor\} = \mathbb{N}$ and $\{\lfloor a_{n}\rfloor\}\cap\{\lfloor b_{n}\rfloor\} = \emptyset ,$
which means that the sequences of whole parts complement each other in $\mathbb{N}.$ In this context, two sequences of integers that complement each other in $\mathbb{N}$ are called complementary, and Beatty's theorem shows a surprising way to generate such complementary sequences.
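Before the proof, here is a quick numerical sanity check of this complementarity, a minimal Python sketch; the choice $r = \sqrt{2}$ and the variable names are ours.

import math

r = math.sqrt(2)              # any positive irrational will do
s = 1 / r
N = 1000

A = {math.floor(k * (r + 1)) for k in range(1, N)}
B = {math.floor(k * (s + 1)) for k in range(1, N)}

print(set(range(1, N)) <= A | B)   # True: every integer below N is hit by one of the sequences
print(A & B)                       # set(): the two sequences of whole parts never collide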
Let's prove Beatty's theorem. One elegant proof was published in 1927 by A. Ostrowski and A. C. Aitken. The proof appears in Ross Honsberger's Ingenuity in Mathematics (MAA, 1970, pp 94-95). While reading the book, I realized that Beatty's theorem is related to the problem of distribution of fractions on a unit interval that has been discussed elsewhere. In fact, Ostrowski and Aitken's proof was readily adaptable to the latter problem. It was then natural to ask whether the original proof for the problem of distribution of fractions might have bearings on Beatty's theorem.
Thinking along these lines led to a curious inequality that sheds some light on the distribution of Beatty's sequences on the number line.
Proof (Beatty's Theorem)
Let N be an integer. There are $\lfloor N/(r + 1)\rfloor$ terms of the first sequence less than N. There are $\lfloor N/(s + 1)\rfloor$ such terms from the second sequence. Since none of $a_{n}$ or $b_{n}$ is integer,
(1)
$\begin{align}&N/(r + 1) - 1 \lt \lfloor N/(r + 1)\rfloor \lt N/(r + 1)\\ &N/(s + 1) - 1 \lt \lfloor N/(s + 1)\rfloor \lt N/(s + 1). \end{align}$
Note that
(2)
$\displaystyle\frac{1}{r+1}+\frac{1}{s+1}=\frac{1}{r+1}+\frac{1}{\displaystyle\frac{1}{r}+1}=\frac{1}{r+1}+\frac{r}{r+1}=1.$
Adding up (1) we thus get
$N - 2 \lt \lfloor N/(r + 1)\rfloor + \lfloor N/(s + 1)\rfloor \lt N,$
which implies $\lfloor N/(r + 1)\rfloor + \lfloor N/(s + 1)\rfloor = N - 1.$ Replacing $N$ with $N + 1$, we see that exactly one term from the union $\{a_{n}\}\cup\{b_{n}\}$ is added. This naturally belongs to the interval $(N, N + 1).$
Note: elsewhere, there's another proof of Beatty's theorem.
Let's see how the points $a_{n}$ and $b_{n}$ may be distributed on the number line. Mark all points of the two sequences. We are interested in the pairs of adjacent points. The distance between $a_{n+1}$ and $a_{n}$ equals $r + 1$, which is greater than $1.$ And the same is true for the second sequence. Which proves that between any two adjacent points that belong to the same sequence there's always an integer.
If, on the other hand, points $a_{n}$ and $b_{m}$ are adjacent, we may consider a linear combination $\alpha a_{n} + \beta b_{m},$ where $\alpha , \beta \gt 0,$ and $\alpha + \beta = 1.$ All such combinations lie between $a_{n}$ and $b_{m}.$ In view of (2), we can take $\alpha = 1/(r + 1)$ and $\beta = 1/(s + 1)$. The result, $(n + m),$ is an integer that lies between $a_{n}$ and $b_{m}.$
Sequences $\{a_{n}\}$ and $\{b_{n}\}$ do not intersect. Indeed assume they have a common element: $a_{i} = b_{j}$, or explicitly: $i(r + 1) = j(s + 1)$. Note that since $rs = 1$,
$\displaystyle\frac{r+1}{s+1}=\frac{r+1}{\displaystyle\frac{1}{r}+1}=\frac{r+1}{(r+1)/r}=r.$
Therefore, $i(r + 1) = j(s + 1)$ would imply
$\displaystyle r=\frac{r+1}{s+1}=\frac{j}{i},$
which, for an irrational $r$ and rational $j/i,$ is impossible.
|
Does anyone know of a reference for the following fact?
Let $M_g$ denote the moduli stack of genus g curves, let $A_g$ denote the moduli stack of abelian varieties, and let $U_g \rightarrow A_g$ denote the universal abelian variety. For any basepoint $[C] \in M_g$, there is a representation $\pi_1(M_g, [C]) \rightarrow Sp_{2g}(\mathbb Z)$ by sending a loop to a homeomorphism of the curve C. On the other hand, if one applies the torelli map, sending a curve to its Jacobian, $\tau: M_g \rightarrow A_g$, one can consider the Galois representation $\rho_{U_g}: \pi_1^{et}(A_g, \overline{[\tau(C)]}) \rightarrow Sp_{2g}(\widehat{\mathbb Z})$ (the monodromy of the moduli stack of abelian varieties, given by action on the $\ell$ torsion).
Why do these two representations agree? That is, why does the square, with vertical maps given by profinite completion, commute?
$\require{AMScd}$ \begin{CD} \pi_1(M_g, [C]) @>>> Sp_{2g}(\mathbb Z) \\ @VVV @VVV \\ \pi_1^{et}(A_g, \overline{[\tau(C)]}) @>>> Sp_{2g}(\widehat{\mathbb Z}) \end{CD}
|
Divisibility Criteria
Divisibility criteria are ways of telling whether one number divides another without actually carrying the division through. Implicit in this concept is the assumption that the criterion in question affords a simpler way to answer the question of divisibility than outright division. Divisibility criteria are constructed in terms of the digits that compose a given number.
To fix the notation, \(A\) will be the number whose divisibility by another number \(d\) we are going to investigate on this page. In the decimal system,
\( A = 10^{n}a_{n} + 10^{n-1}a_{n-1} + ... + 10^{1}a_{1} + a_{0} \)
\(a_{n} \ne 0\). We readily have several examples. But let's first define
\( s_{+}(A) = a_{n} + a_{n-1} + ... + a_{0}, \\ s_{\pm}(A) = a_{0} - a_{1} + \ldots +(-1)^{n}a_{n}. \)
Using these two functions we formulate the criteria of divisibility by \(3,\) \(9,\) and \(11:\)
\(A\) is divisible by \(3\) iff \(s_{+}(A)\) is divisible by \(3.\)
\(A\) is divisible by \(9\) iff \(s_{+}(A)\) is divisible by \(9.\)
Then \(A\) is divisible by \(11\) iff \(s_{\pm}(A)\) is.
All three criteria follow from two basic properties of modular arithmetic
\([A]_{d} + [B]_{d} = [A + B]_{d}\) \([A]_{d}\cdot [B]_{d} = [A\cdot B]_{d}\)
and the fact that \(10 = 1\space (\mbox{mod}\space 9)\) and \(10 = -1\space (\mbox{mod}\space 11),\) from which we successively get \(10^{2} = 1\space (\mbox{mod}\space 9)\) and \(10^{2} = 1\space (\mbox{mod}\space 11),\) \(10^{3} = 1\space (\mbox{mod}\space 9)\) and \(10^{3} = -1\space (\mbox{mod}\space 11),\) and so on.
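For illustration, here is a minimal Python sketch of the two digit functions and the resulting tests for 3, 9 and 11 (the function names are ours):

def s_plus(A):       # digit sum a_n + ... + a_0
    return sum(int(d) for d in str(A))

def s_pm(A):         # alternating sum a_0 - a_1 + a_2 - ...
    return sum((-1) ** i * int(d) for i, d in enumerate(reversed(str(A))))

A = 918274
print(A % 3 == 0, s_plus(A) % 3 == 0)    # the two tests always agree
print(A % 9 == 0, s_plus(A) % 9 == 0)
print(A % 11 == 0, s_pm(A) % 11 == 0)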
Note that both \(s_{+}(A)\) and \(s_{\pm}(A)\) are linear combinations of the digits of \(A.\) This is the kind of functions we shall allow on this page. (One generalization would be to consider other bases.)
We formalize the definition the following way:
Definition
A function \(f(A) = f(a_{n}, \ldots , a_{0})\) is called a divisibility criterion by an integer \(d\) provided that, starting with some \(A,\) \(|f(A)| \lt A\) and \(A\) is divisible by \(d\) iff \(f(A)\) is divisible by \(d.\)
\(O(d)\) is defined as the set of all divisibility criteria by \(d.\)
Here are a few examples:
\(s_{+} \in O(9)\) and \(s_{+} \in O(3).\) (Incidentally, \(O(9) \subset O(3).\) Why?)
\(s_{\pm} \in O(11)\)
\(f_{1}(A) = a_{0} \in O(2) \cap O(5)\)
\(f_{2}(A) = 10a_{1} + a_{0} \in O(4) \cap O(25)\)
\(f_{3}(A) = 10^{2}a_{2} + 10a_{1} + a_{0} \in O(8) \cap O(125)\)
\(f_{4}(A) = 2a_{1} + a_{0} \in O(4)\)
We have more. Indeed, since \(100 = 1\space (\mbox{mod}\space 11),\)
\(f_{5}(A) = (a_{1}a_{0})_{10} + (a_{3}a_{2})_{10} + (a_{5}a_{4})_{10} + ... \in O(11)\)
(\(f_{5}\) is obtained by splitting \(A\) right-to-left into \(2\)-digit numbers.) Similarly,
\(f_{6}(A) = (a_{1}a_{0})_{10} - (a_{3}a_{2})_{10} + (a_{5}a_{4})_{10} - ... \in O(101)\)
In the same spirit,
\(f_{7}(A) = (a_{2}a_{1}a_{0})_{10} - (a_{5}a_{4}a_{3})_{10} + (a_{8}a_{7}a_{6})_{10} - ... \in O(1001)\)
Interestingly, since \(1001 = 7\cdot 11\cdot 13,\) \(f_{7} \in O(7) \cap O(11) \cap O(13).\) The fact may appear uninspiring for it does not relieve one from drudging through the division by \(7\) or \(11\) or \(13.\) However, in some cases this rule is of great help indeed:
\(2,003,008\) is divisible by \(7\) for so is \((008) - (003) + 2 = 7.\) \(524784\) is divisible by \(13\) for so is \(784 - 524 = 260.\)
Would you rather go on with the long division?
Stuart Anderson developed a general framework for deriving divisibility criteria. In particular, he noticed that
\(f_{8}(A) = 2\cdot (... 2\cdot (2\cdot (a_{n}a_{n-1})_{10} + (a_{n-2}a_{n-3})_{10}) + (a_{n-4}a_{n-5})_{10}) + ... \in O(7),\)
which holds for odd \(n.\) For even \(n,\) modification is obvious. This also follows from the fact that \(10^{2} = 2\space (\mbox{mod}\space 7).\)
There is another approach that uses the following generalization of Euclid's Proposition VII.30:
Let \(a\) and \(d\) be mutually prime (coprime). Then \(d|ab\) is equivalent to \(d|b.\)
Let \(d\) be a divisor of \((10c - 1)\) for some \(c.\) Then clearly \(d\) and \(c\) are coprime. Denote
\(A_{1} = 10^{n-1}a_{n} + 10^{n-2}a_{n-1} + ... + a_{1},\)
so that \(A = 10A_{1} + a_{0}.\) We have
\(Ac = (ca_{0} + A_{1}) + (10c - 1)A_{1}\)
from which it follows that
\(f(A) = ca_{0} + A_{1} \in O(d)\)
This leads to a recursive criterion. For example, let \(d = 19,\) \(c = 2.\) Then \((10c - 1)\) is divisible by \(19.\) Given a number \(A,\) remove \(a_{0}\) and add \(2a_{0}\) to the remaining number \(A_{1}.\) Proceed with these steps until you obtain a number which is obviously divisible by \(19\) or is obviously not divisible by \(19.\) Whatever the case, the same will be true of the original number \(A.\) Starting with \(A = 12311,\) we get the sequence \(12311,\) \(1233,\) \(129,\) \(30.\) The latter is not divisible by \(19.\) Hence neither is \(12311.\) On the other hand, as the same calculations show, \(20311\) is divisible by \(19.\) (Additional examples are available elsewhere. An ingenious example was found by Gustavo Toja from Brasil.)
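Here is a minimal Python sketch of this recursive criterion (the function names are ours; \(d = 19,\) \(c = 2\) reproduces the examples above):

def reduce_once(A, c):
    # drop the last digit a0 and add c*a0 to what remains: f(A) = c*a0 + A1
    A1, a0 = divmod(A, 10)
    return A1 + c * a0

def divisible(A, d, c):
    # valid whenever d divides 10*c - 1; each step preserves divisibility by d
    while A >= 10 * c:
        A = reduce_once(A, c)
    return A % d == 0

print(divisible(12311, 19, 2))   # False, as in the text
print(divisible(20311, 19, 2))   # True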
|
First of all, we should understand what the R software is doing when no intercept is included in the model. Recall that the usual computation of $R^2$ when an intercept is present is
$$R^2 = \frac{\sum_i (\hat y_i - \bar y)^2}{\sum_i (y_i - \bar y)^2} = 1 - \frac{\sum_i (y_i - \hat y_i)^2}{\sum_i (y_i - \bar y)^2} \>.$$
The first equality only occurs because of the inclusion of the intercept in the model, even though this is probably the more popular of the two ways of writing it. The second equality actually provides the more general interpretation! This point is also addressed in this related question.
But, what happens if there is no intercept in the model?
Well, in that case, R (silently!) uses the modified form
$$R_0^2 = \frac{\sum_i \hat y_i^2}{\sum_i y_i^2} = 1 - \frac{\sum_i (y_i - \hat y_i)^2}{\sum_i y_i^2} \>.$$
It helps to recall what $R^2$ is trying to measure. In the former case, it is comparing your current model to the reference model that only includes an intercept (i.e., constant term). In the second case, there is no intercept, so it makes little sense to compare it to such a model. So, instead, $R_0^2$ is computed, which implicitly uses a reference model corresponding to noise only.
In what follows below, I focus on the second expression for both $R^2$ and $R_0^2$ since that expression generalizes to other contexts and it's generally more natural to think about things in terms of residuals.
But, how are they different, and when?
Let's take a brief digression into some linear algebra and see if we can figure out what is going on. First of all, let's call the fitted values from the model with intercept $\newcommand{\yhat}{\hat{\mathbf y}}\newcommand{\ytilde}{\tilde {\mathbf y}}\yhat$ and the fitted values from the model without intercept $\ytilde$.
We can rewrite the expressions for $R^2$ and $R_0^2$ as
$$\newcommand{\y}{\mathbf y}\newcommand{\one}{\mathbf 1} R^2 = 1 - \frac{\|\y - \yhat\|_2^2}{\|\y - \bar y \one\|_2^2} \>, $$
and
$$R_0^2 = 1 - \frac{\|\y - \ytilde\|_2^2}{\|\y\|_2^2} \>,$$
respectively.
Now, since $\|\y\|_2^2 = \|\y - \bar y \one\|_2^2 + n \bar y^2$, then $R_0^2 > R^2$ if and only if
$$\frac{\|\y - \ytilde\|_2^2}{\|\y - \yhat\|_2^2} < 1 + \frac{\bar y^2}{\frac{1}{n}\|\y - \bar y \one\|_2^2} \> .$$
The left-hand side is greater than one since the model corresponding to $\ytilde$ is nested within that of $\yhat$. The second term on the right-hand side is the squared mean of the responses divided by the mean squared error of an intercept-only model. So, the larger the mean of the response relative to the other variation, the more "slack" we have and the greater the chance of $R_0^2$ dominating $R^2$.
Notice that all the model-dependent stuff is on the left side and the non-model-dependent stuff is on the right.
Ok, so how do we make the ratio on the left-hand side small?
Recall that $\newcommand{\P}{\mathbf P}\ytilde = \P_0 \y$ and $\yhat = \P_1 \y$, where $\P_0$ and $\P_1$ are projection matrices corresponding to subspaces $S_0$ and $S_1$ such that $S_0 \subset S_1$.
So, in order for the ratio to be close to one, we need the subspaces $S_0$ and $S_1$ to be very similar. Now $S_0$ and $S_1$ differ only by whether $\one$ is a basis vector or not, so that means that $S_0$ had better be a subspace that already lies very close to $\one$.
In essence, that means our predictor had better have a strong mean offset itself, and this mean offset should dominate the variation of the predictor.
An example
Here we try to generate an example with an intercept explicitly in the model which behaves close to the case in the question. Below is some simple R code to demonstrate.
set.seed(.Random.seed[1])
n <- 220
a <- 0.5
b <- 0.5
se <- 0.25
# Make sure x has a strong mean offset
x <- rnorm(n)/3 + a
y <- a + b*x + se*rnorm(x)
int.lm <- lm(y~x)
noint.lm <- lm(y~x+0) # Intercept be gone!
# For comparison to summary(.) output
rsq.int <- cor(y,x)^2
rsq.noint <- 1-mean((y-noint.lm$fit)^2) / mean(y^2)
This gives the following output. We begin with the model with intercept.
# Include an intercept!
> summary(int.lm)
Call:
lm(formula = y ~ x)
Residuals:
Min 1Q Median 3Q Max
-0.656010 -0.161556 -0.005112 0.178008 0.621790
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.48521 0.02990 16.23 <2e-16 ***
x 0.54239 0.04929 11.00 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.2467 on 218 degrees of freedom
Multiple R-squared: 0.3571, Adjusted R-squared: 0.3541
F-statistic: 121.1 on 1 and 218 DF, p-value: < 2.2e-16
Then, see what happens when we exclude the intercept.
# No intercept!
> summary(noint.lm)
Call:
lm(formula = y ~ x + 0)
Residuals:
Min 1Q Median 3Q Max
-0.62108 -0.08006 0.16295 0.38258 1.02485
Coefficients:
Estimate Std. Error t value Pr(>|t|)
x 1.20712 0.04066 29.69 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.3658 on 219 degrees of freedom
Multiple R-squared: 0.801, Adjusted R-squared: 0.8001
F-statistic: 881.5 on 1 and 219 DF, p-value: < 2.2e-16
Below is a plot of the data with the model-with-intercept in red and the model-without-intercept in blue.
|
This post contains the notes taken from the following paper:
Deep Learning Scaling Is Predictable, Empirically by Baidu Research.
The last years in Deep Learning have seen a rush to gigantism:
Models are becoming deeper and deeper, from the 8 layers of AlexNet to the 1001-layer ResNet. Training on large datasets is way quicker: with enough computing power, a model can now be trained on ImageNet in less than 20 minutes. Dataset sizes are increasing each year.
As this paper rightly declares in its introduction:
The Deep Learning (DL) community has created impactful advances across diverse application domains by following a straightforward recipe: search for improved model architectures, create large training data sets, and scale computation.
However, it also notes that new models and hyperparameter configurations often depend on epiphany and serendipity.
In order to harness the power of big data (more data, more computation power, etc.), models should not be designed to reduce the error rate by an epsilon on ImageNet but should be designed to get better with more data.
Baidu Research introduces a power-law exponent that measures the steepness of the learning curve:
$$\epsilon(m) \propto \alpha m^{\beta_g}$$
where $\epsilon(m)$ is the generalization error with $m$ training samples, $\alpha$ a constant related to the problem, and $\beta_g$ the steepness of the learning curve.
$\beta_g$ is said to settle between -0.07 and -0.35.
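As a concrete illustration of how such an exponent can be estimated, here is a minimal Python sketch that fits $\beta_g$ by least squares in log-log space; the (size, error) pairs below are made up for illustration and are not from the paper.

import numpy as np

# hypothetical (training-set size, validation error) measurements
m   = np.array([1e4, 3e4, 1e5, 3e5, 1e6])
eps = np.array([0.30, 0.26, 0.22, 0.19, 0.165])

# epsilon(m) ~ alpha * m**beta_g  =>  log eps = log alpha + beta_g * log m
beta_g, log_alpha = np.polyfit(np.log(m), np.log(eps), 1)
print(f"estimated beta_g = {beta_g:.3f}, alpha = {np.exp(log_alpha):.3f}")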
The Methodology
Baidu tested four domains: machine translation, language modeling, image classification, and speech recognition.
For each domain, a variety of architectures, optimizers, and hyperparameters is tested. To see how models scale with dataset size, Baidu trained models on samples ranging from 0.1% of the original data to the whole data (minus the validation set).
The paper’s authors try to find the smallest model that is able to overfit each sample.
Baidu also removed any regularizations, like weight decay, that might reduce the model’s effective capacity.
Results
In all domains, they found that the model size grows sublinearly with dataset size.
Domain              | Learning curve steepness $\beta_g$
Machine Translation | -0.128
Language Modeling   | [-0.09, -0.06]
Image (top-1)       | -0.309
Image (top-5)       | -0.488
Speech              | -0.299
The first thing that we can conclude from these numbers is that text-based problems (translation and language modeling) scale poorly compared to image problems.
It is worth noting that (current) models seem to scale better depending on the data dimension: Image and speech are of a higher dimensionality than text.
You may also wonder why image has two entries in the table: one for top-1 generalization error, and one for top-5. This is one of the most interesting findings of this paper. Current image classification models improve their top-5 error faster than their top-1 error as data size increases! I wonder why.
Implications
The authors separate the generalization error versus data size into three areas:
The small data region, where models given so little data can only make random guesses. The power-law region, where models follow the power law; however, the learning curve steepness may be improved. The irreducible error, a combination of the Bayes error (beyond which the model cannot be improved) and the dataset defects that may impair generalization.
The authors also underline major implications of the power law:
Given the power law, researchers can train their new architecture on a small dataset, and have a good estimation of how it would scale on a bigger dataset. It may also give a reasonable estimation of the hardware and time requirements to reach a chosen generalization error.
Instead of simply trying to improve a model's accuracy, the authors suggest that beating the power law should be the end goal. Dataset sizes are going to grow each year, and a scalable model would thrive in this situation. The authors advise methods that may help to extract more information from less data:
We suggest that future work more deeply analyze learning curves when using data handling techniques, such as data filtering/augmentation, few-shot learning, experience replay, and generative adversarial networks.
Baidu also recommends searching for ways to push the boundaries of the irreducible error. To do that, we should be able to distinguish between what contributes to the Bayes error and what does not.
Summary
Baidu Research showed that models follow a power-law curve. They empirically determined the power-law exponent, or steepness of the learning curve, for machine translation, language modeling, image classification, and speech recognition.
This power law expresses how much a model can improve given more data. Models for text problems are currently the least scalable.
|
Caesium has a larger size, and the effective nuclear charge that the valence electron experiences will be far less compared to that of lithium, right? But lithium is still considered the strongest reducing agent among all the alkali metals, as evidenced by its large, negative reduction potential. Why is this so?
The trend in the reducing power of the alkali metals is not a simple linear trend, so it would be a little disingenuous of me to talk solely about $\ce{Li}$ and $\ce{Cs}$, implying that data for the metals in the middle can be interpolated.
$$\begin{array}{cc}\hline\ce{M} & E^\circ(\ce{M+}/\ce{M}) \\\hline \ce{Li} & -3.045 \\\ce{Na} & -2.714 \\\ce{K} & -2.925 \\\ce{Rb} & -2.925 \\\ce{Cs} & -2.923 \\\hline\end{array}$$
Source: Chemistry of the Elements 2nd ed., Greenwood & Earnshaw, p 75
However, a full description of the middle three metals is beyond the scope of this question. I just thought it was worth pointing out that the trend is not really straightforward.
The $\ce{M+}/\ce{M}$ standard reduction potential is related to $\Delta_\mathrm{r}G^\circ$ for the reaction
$$\ce{M(s) -> M+(aq) + e-}$$
by the equation
$$E^\circ = \frac{\Delta_\mathrm{r}G^\circ + K}{F}$$
where $K$ is the absolute standard Gibbs free energy for the reaction
$$\ce{H+ + e- -> 1/2 H2}$$
and is a constant (which means we do not need to care about its actual value). Assuming that $\Delta_\mathrm{r} S^\circ$ is approximately independent of the identity of the metal $\ce{M}$, then the variations in $\Delta_\mathrm{r}H^\circ$ will determine the variations in $\Delta_\mathrm{r}G^\circ$ and hence $E^\circ$. We can construct an energy cycle to assess how $\Delta_\mathrm{r}H^\circ$ will vary with the identity of $\ce{M}$.
The standard state symbol will be dropped from now on.
$$\require{AMScd} \begin{CD} \ce{M (s)} @>{\large \Delta_\mathrm{r}H}>> \ce{M+(aq) + e-} \\ @V{\large\Delta_\mathrm{atom}H(\ce{M})}VV @AA{\large\Delta_\mathrm{hyd}H(\ce{M+})}A \\ \ce{M (g)} @>>{\large IE_1(\ce{M})}> \ce{M+ (g) + e-} \end{CD}$$
We can see, as described in Prajjawal's answer, that there are three factors that contribute to $\Delta_\mathrm{r}H$:
$$\Delta_\mathrm{r}H = \Delta_\mathrm{atom}H + IE_1 + \Delta_\mathrm{hyd}H$$
(the atomisation enthalpy being the same as the sublimation enthalpy). You are right in saying that there is a decrease in $IE_1$ going from $\ce{Li}$ to $\ce{Cs}$.
If taken alone, this would mean that $E(\ce{M+}/\ce{M})$ would decrease going from $\ce{Li}$ to $\ce{Cs}$, which would mean that $\ce{Cs}$ is a better reducing agent than $\ce{Li}$.
However, looking at the very first table, this is clearly not true. So, some numbers will be needed. All values are in $\mathrm{kJ~mol^{-1}}$.
$$\begin{array}{ccccc}\hline\ce{M} & \Delta_\mathrm{atom}H & IE_1 & \Delta_\mathrm{hyd}H & \text{Sum} \\\hline\ce{Li} & 161 & 520 & \mathbf{-520} & 161 \\\ce{Cs} & 79 & 376 & \mathbf{-264} & 191 \\\hline\end{array}$$
Source: Inorganic Chemistry 6th ed., Shriver et al., p 160
This is, in fact, an extremely crude analysis. However, it hopefully does serve to show in a more quantitative way why $E(\ce{Cs+}/\ce{Cs}) > E(\ce{Li+}/\ce{Li})$: it's because of the extremely exothermic hydration enthalpy of the small $\ce{Li+}$ ion.
Just as a comparison, the ionic radii of $\ce{Li+}$ and $\ce{Cs+}$ ($\mathrm{CN} = 6$) are $76$ and $167~\mathrm{pm}$ respectively (Greenwood & Earnshaw, p 75).
To decide which metal is the best reducing agent, we should not consider only which one has the lower ionisation energy; the overall process involves three steps:
Metal (solid) to metal (gaseous state): sublimation energy
Metal (gaseous state) to M+ (gaseous state): ionisation energy
M+ (gaseous state) to M+ (aqueous state): hydration energy
Lithium, having a higher charge density, has a higher sublimation energy and ionisation energy than caesium, but its hydration energy is released in such a large amount that it compensates for the sublimation and ionisation energies, whereas caesium's hydration energy is smaller than lithium's. That's why lithium is the better reducing agent.
Well there might be more reasons than these two:
Lithium has a higher reduction potential. If you also look at the electronegativities of just Lithium and Cesium, then you would notice that the shielding effect is more prevalent in Cesium, thereby reducing the electronegativity and affecting the reduction potential. So Lithium, just compared to Cesium, has a higher electronegativity.
I think these are the two main reasons, please correct me If I am wrong.
|
LR Parsing Overview
There are several different kinds of bottom-up parsing. We will discuss an approach called LR parsing, which includes SLR, LALR, and LR parsers. LR means that the input is scanned left-to-right, and that a rightmost derivation, in reverse, is constructed. SLR means "simple" LR, and LALR means "look-ahead" LR.
Every SLR(1) grammar is also LALR(1), and every LALR(1) grammar is also LR(1), so SLR is the most limited of the three, and LR is the most general. In practice, it is pretty easy to write an LALR(1) grammar for most programming languages (i.e., the "power" of an LR parser isn't usually needed). A disadvantage of LR parsers is that their tables can be very large. Therefore, parser generators like Yacc and Java Cup produce LALR(1) parsers.
Let's start by considering the advantages and disadvantages of the LR parsing family:
Advantages
Recall that top-down parsers use a stack. The contents of the stack represent a prediction of what the rest of the input should look like. The symbols on the stack, from top to bottom, should "match" the remaining input, from first to last token. For example, earlier, we looked at the example grammar
Grammar: $S$ $\longrightarrow$ $\varepsilon$ | ( $S$ ) | [ $S$ ]
and parsed the input string
([]). At one point during the parse, after the first parenthesis has been consumed, the stack contains
[ S ] ) EOF
(with the top-of-stack at the left). This is a prediction that the remaining input will start with a [.
Bottom-up parsers also use a stack, but in this case, the stack represents a summary of the input already seen, rather than a prediction about input yet to be seen. For now, we will pretend that the stack symbols are terminals and nonterminals (as they are for predictive parsers). This isn't quite true, but it makes our introduction to bottom-up parsing easier to understand.
A bottom-up parser is also called a "shift-reduce" parser because it performs two kinds of operations, shift operations and reduce operations. A shift operation simply shifts the next input token from the input to the top of the stack. A reduce operation is only possible when the top N symbols on the stack match the right-hand side of a production in the grammar. A reduce operation pops those symbols off the stack and pushes the nonterminal on the left-hand side of the production.
One way to think about LR parsing is that the parse tree for a given input is built, starting at the leaves and working up towards the root. More precisely, a reverse rightmost derivation is constructed.
Recall that a derivation (using a given grammar) is performed as follows:
A rightmost derivation is one in which the rightmost nonterminal is always the one chosen.
CFG
$E$ $\longrightarrow$ $E$ + $T$ | $T$ $T$ $\longrightarrow$ $T$ * $F$ | $F$ $F$ $\longrightarrow$ id | ( $E$ )
Rightmost derivation
Note that both the rightmost derivation and the bottom-up parse have 8 steps. Step 1 of the derivation corresponds to step 8 of the parse; step 2 of the derivation corresponds to step 7 of the parse; etc. Each step of building the parse tree (adding a new nonterminal as the parent of some existing subtrees) is called a reduction (that's where the "reduce" part of "shift-reduce" parsing comes from).
The difference between SLR, LALR, and LR parsers is in the tables that they use. Those tables use different techniques to determine when to do a reduce step and, if there is more than one grammar rule with the same right-hand side, which left-hand-side nonterminal to push.
id + id * id
Stack | Input | Action
  | id + id * id | shift(id)
id | + id * id | reduce by $F$ $\longrightarrow$ id
$F$ | + id * id | reduce by $T$ $\longrightarrow$ $F$
$T$ | + id * id | reduce by $E$ $\longrightarrow$ $T$
$E$ | + id * id | shift(+)
$E$ + | id * id | shift(id)
$E$ + id | * id | reduce by $F$ $\longrightarrow$ id
$E$ + $F$ | * id | reduce by $T$ $\longrightarrow$ $F$
$E$ + $T$ | * id | shift(*)
$E$ + $T$ * | id | shift(id)
$E$ + $T$ * id |  | reduce by $F$ $\longrightarrow$ id
$E$ + $T$ * $F$ |  | reduce by $T$ $\longrightarrow$ $T$ * $F$
$E$ + $T$ |  | reduce by $E$ $\longrightarrow$ $E$ + $T$
$E$ |  | accept
(NOTE: the top of stack is to the right; the reverse rightmost derivation is formed by concatenating the stack with the remaining input at each reduction step)
Parse Tables
As mentioned above, the symbols pushed onto the parser's stack are not actually terminals and nonterminals. Instead, they are states that correspond to a finite-state machine that represents the parsing process (more on this soon).
All LR parsers use two tables: the action table and the goto table. The action table is indexed by the top-of-stack symbol and the current token, and it tells which of the four actions to perform: shift, reduce, accept, or reject. The goto table is used during a reduce action as explained below.
Above we said that a shift action means to push the current token onto the stack. In fact, we actually push a state symbol onto the stack. Each "shift" action in the action table includes the state to be pushed.
Above, we also said that when we reduce using the grammar rule A $\longrightarrow$ $\alpha$, we pop $\alpha$ off of the stack and then push A. In fact, if $\alpha$ contains N symbols, we pop N states off of the stack. We then use the goto table to know what to push: the goto table is indexed by state symbol t and nonterminal A, where t is the state symbol that is on top of the stack after popping N times.
Here's pseudo code for the parser's main loop:
push initial state s0
a = scan()
do forever
    t = top-of-stack (state) symbol
    switch action[t, a] {
        case shift s:
            push(s)
            a = scan()
        case reduce by A → alpha:
            for i = 1 to length(alpha) do pop() end
            t = top-of-stack symbol
            push(goto[t, A])
        case accept:
            return( SUCCESS )
        case error:
            call the error handler
            return( FAILURE )
    }
end do
Remember, all LR parsers use this same basic algorithm. As mentioned above, for all LR parsers, the states that are pushed onto the stack represent the states in an underlying finite-state machine. Each state represents "where we might be" in a parse; each "place we might be" is represented (within the state) by an item. What's different for the different LR parsers is:
SLR Parsing
SLR means simple LR; it is the weakest member of the LR family (i.e., every SLR grammar is also LALR and LR, but not vice versa). To understand SLR parsing we'll use a new example grammar (a very simple grammar for parameter lists):
$PList$ $\longrightarrow$ ( $IDList$ ) $IDList$ $\longrightarrow$ id | $IDList$ id
Building the Action and Goto Tables for an SLR Parser
Definition of an SLR item: an item is a grammar production with a dot at some position in its right-hand side, marking how much of that production has already been matched.
The item "
PList $\longrightarrow$ . lparens IDList rparens" can be thought of as meaning "we may beparsing a PList, but so far we haven't seen anything".
The item "
PList $\longrightarrow$ lparens . IDList rparens" means "we may be parsing a PList, and so far we've seen a lparens".
The item "
PList $\longrightarrow$ lparens IDList . rparens" means "we may be parsing a PList, and so far we've seen a lparens and parsed an IDList.
We need 2 operations on sets of items: Closure and GotoClosure
To compute Closure($I$), where $I$ is a set of items:
1. Initially, set Closure($I$) = $I$.
2. While there exists an item in Closure($I$) of the form X $\longrightarrow$ $\alpha$ . B $\beta$, a production B $\longrightarrow$ $\gamma$, and B $\longrightarrow$ . $\gamma$ is not in Closure($I$): add B $\longrightarrow$ . $\gamma$ to Closure($I$).
The idea is that the item "X $\longrightarrow$ $\alpha$ . B $\beta$" means "we may be trying to parse an X, and so far we've parsed all of $\alpha$, so the next thing we'll parse may be a B". And the item "B $\longrightarrow$ . $\gamma$" also means that the next thing we'll parse may be a B (in particular, a B that derives $\gamma$), but we haven't seen any part of it yet.
Example 1: Closure({ PList $\longrightarrow$ . lparens IDList rparens })
We'll begin by putting the initial item into the Closure (Step 1 above). So far, our set is: { PList $\longrightarrow$ . lparens IDList rparens }
Now, we will do step 2, checking the set we build for productions of the form B $\longrightarrow$ $\gamma$, where the item B $\longrightarrow$ . $\gamma$ is not in the set. There's only one item that we can check, and the symbol to the immediate right of the dot is lparens, which is a terminal symbol. Obviously, there are no productions of the form B $\longrightarrow$ $\gamma$ with a terminal symbol on the left-hand side, so there's nothing else to check.
With Step 2 exhausted, we can return the set we've built up: Closure({ PList $\longrightarrow$ . lparens IDList rparens }) = { PList $\longrightarrow$ . lparens IDList rparens }
Example 2: Closure({ PList $\longrightarrow$ lparens . IDList rparens })
As with the previous example, we put the initial item into the Closure. So far, our set is { PList $\longrightarrow$ lparens . IDList rparens }
For step 2, we begin by selecting the only item in our working set, PList $\longrightarrow$ lparens . IDList rparens. We now look for productions with a left-hand side of IDList, since that's the symbol to the immediate right of the dot. One production of this form is "IDList $\longrightarrow$ id". Since the item IDList $\longrightarrow$ . id is not in the Closure yet, we add it. Our set so far is { PList $\longrightarrow$ lparens . IDList rparens , IDList $\longrightarrow$ . id }
We know that the item that we just added, IDList $\longrightarrow$ . id, will not yield any more items, because the symbol immediately to the right of the dot is a terminal. However, we still haven't captured every production with IDList on the left-hand side, which we need to check because of our initial item. The grammar also has the production IDList $\longrightarrow$ IDList id, so we add the item "IDList $\longrightarrow$ . IDList id" to the closure. At this point, our working set is { PList $\longrightarrow$ lparens . IDList rparens , IDList $\longrightarrow$ . id, IDList $\longrightarrow$ . IDList id }
The new item that we added has IDList to the immediate right of the dot. Fortunately, we've already exhausted every production of the grammar with IDList on the left-hand side. Thus, we can pronounce our working set complete:
Closure({ PList $\longrightarrow$ lparens . IDList rparens }) = { PList $\longrightarrow$ lparens . IDList rparens , IDList $\longrightarrow$ . id, IDList $\longrightarrow$ . IDList id }
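To make the Closure operation concrete, here is a minimal Python sketch for this example grammar; the item representation (lhs, rhs, dot) and the names GRAMMAR and closure are our own choices, not part of the notes.

GRAMMAR = {
    "PList":  [("lparens", "IDList", "rparens")],
    "IDList": [("id",), ("IDList", "id")],
}

def closure(items):
    result = set(items)
    changed = True
    while changed:
        changed = False
        for lhs, rhs, dot in list(result):
            if dot < len(rhs) and rhs[dot] in GRAMMAR:       # a nonterminal B follows the dot
                for gamma in GRAMMAR[rhs[dot]]:
                    new_item = (rhs[dot], gamma, 0)          # add B -> . gamma
                    if new_item not in result:
                        result.add(new_item)
                        changed = True
    return result

start = ("PList", ("lparens", "IDList", "rparens"), 1)       # PList -> lparens . IDList rparens
print(closure({start}))                                      # the three items of Example 2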
Now that we have defined the Closure of a set of items, we can use it to define the Goto operation. The basic idea is that $I$ tells us where we might be in the parse, and Goto($I$, X) tells us where we might be after parsing an X. Here is the definition: Goto($I$, X) = Closure({ A $\longrightarrow$ $\alpha$ X . $\beta$ : A $\longrightarrow$ $\alpha$ . X $\beta$ is in $I$ }).
Example: compute Goto($I_1$, $X_1$), where $I_1$ = { PList $\longrightarrow$ . lparens IDList rparens } and $X_1$ = lparens. Let us begin by defining an intermediate set:
We can now build $\mathcal{W}$ by taking each item from $I$ (of which there is only one) and advancing the dot to the right.
Thus, $\mathcal{W}$ = { PList $\longrightarrow$ lparens . IDList rparens}
With $\mathcal{W}$ in hand, we are ready to perform the Goto operation by computing Closure($\mathcal{W}$) = Closure({ PList $\longrightarrow$ lparens . IDList rparens }). We already computed this closure above as { PList $\longrightarrow$ lparens . IDList rparens , IDList $\longrightarrow$ . id, IDList $\longrightarrow$ . IDList id }, so we are done:
Goto($I_1$, $X_1$) = { PList $\longrightarrow$ lparens . IDList rparens , IDList $\longrightarrow$ . id, IDList $\longrightarrow$ . IDList id }
Next, let $I_2$ = Goto($I_1$, $X_1$) and $X_2$ = IDList, and compute Goto($I_2$, $X_2$).
The inner Goto operation is the result of Example 1, so we can substitute that result directly. Expanded, the problem statement is:
Item in $I$ of the form A $\longrightarrow$ $\alpha$ . X $\beta$ | Item of the form A $\longrightarrow$ $\alpha$ X . $\beta$
PList $\longrightarrow$ lparens . IDList rparens | PList $\longrightarrow$ lparens IDList . rparens
IDList $\longrightarrow$ . IDList id | IDList $\longrightarrow$ IDList . id
Thus, $\mathcal{W}$ = { PList $\longrightarrow$ lparens IDList . rparens, IDList $\longrightarrow$ IDList . id }
We can now take the closure of $\mathcal{W}$ to complete the operation. This turns out to be trivial, since in no element of $\mathcal{W}$ is the dot followed by a nonterminal, and therefore it yields no additional items. Thus, Goto($I_2$, $X_2$) = Closure($\mathcal{W}$) = $\mathcal{W}$ = { PList $\longrightarrow$ lparens IDList . rparens, IDList $\longrightarrow$ IDList . id }
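Continuing the sketch above (this reuses GRAMMAR and closure from the earlier code), Goto can be written as: advance the dot over X in every item that allows it, then take the closure of the result.

def goto(items, X):
    moved = {(lhs, rhs, dot + 1)
             for lhs, rhs, dot in items
             if dot < len(rhs) and rhs[dot] == X}            # advance the dot over X
    return closure(moved)

I1 = closure({("PList", ("lparens", "IDList", "rparens"), 0)})
I2 = goto(I1, "lparens")     # the three items from the first Goto example
print(goto(I2, "IDList"))    # {PList -> lparens IDList . rparens, IDList -> IDList . id}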
Our ultimate goal is to create the Action and Goto tables; to get them, we first build the underlying FSM, whose states are sets of items. To build the FSM:
Example grammar
S' → plist
plist → ( idlist )
idlist → ID
idlist → idlist ID
Corresponding SLR FSM
Given the FSM, here's how to build Action and Goto tables:
Example: FOLLOW(idlist) = { ), ID }; FOLLOW(plist) = { $ }
Not every grammar is SLR(1). If a grammar is not SLR(1), there will be a conflict in the SLR Action table. There is a conflict in the table if there is a table entry with more than 1 rule in it. There are two possible kinds of conflicts:
A shift/reduce conflict means that it is not possible to determine, based only on the top-of-stack state symbol and the current token, whether to shift or to reduce. This kind of conflict arises when one state contains two items of the following form:
A reduce/reduce conflict means that it is not possible to determine, based only on the top-of-stack state symbol and the current token, whether to reduce by one grammar rule or by another grammar rule. This kind of conflict arises when one state contains two items of the form
A non-SLR(1) grammar
This grammar causes a shift/reduce conflict (grammar and the relevant part of the FSM omitted).
This grammar causes a reduce/reduce conflict (grammar, Follow sets, and the relevant part of the FSM omitted).
|
Survivorship bias is one of the most common biases in finance, and it’s easy to fall victim to it. Let’s find out how to remain vigilant and overcome this hurdle.
“History is written by the victors”. – Winston Churchill
A cognitive bias is a consequence of subjective judgement. When it comes to finance, would you say that this famous British politician was right?
We have lots of human biases in finance (or life), and a very common one is the Survivorship bias. It arises when you go back and try to reconstruct history, as in backtesting or during performance studies. More specifically, it will lead to a selection bias error type, which is the error that comes from sub-selecting assets. How so?
We tend to look at what persists in time. Stocks can get acquired and merged with others. Mutual funds can close. Therefore, not taking these dead assets into account can lead to overstated conclusions. Moreover, it affects every agent in the financial market: portfolio managers, analysts, and even investors. Not considering the risks properly may lead to wrong decisions! Please allow me to explain it carefully with an example…
The distortion of reality
Try to answer this simple question: what has been the average return of a stock index’s components? As you may know, a stock index is a weighted average (sometimes other measurements are computed) of the movements in prices of the market constituents. You may also know that the most common criteria for entering into the index is to have a big market capitalisation (at the same time, this depends on the performance of the stock):
\( \text{market cap} = P \times N\)
where \(P\) is the share price and \(N\) is the number of outstanding shares of that company. Let's keep this formula in mind and see the next graph (total return computations were made as in this fantastic post!).
As you can see, the mean return of all stocks that were part of the index in a given year is lower than the mean return of the survivors at the end of that year. This is due to the market capitalisation criterion: as these stocks start performing poorly, their prices fall, and they are removed from the index because of the minimum market capitalisation requirement (regardless of any eventual M&As that could happen). In the previous chart, we saw how the EuroStoxx criteria (which, by the way, combine minimum liquidity and market cap conditions) and the FTSE 100 (the top 100 companies by market cap on the London Stock Exchange) constituents change over time, although the number of constituents remains constant. In effect, different constituents are swapped in and out of the index according to their appropriateness.
Survivorship bias can distort the figures when evaluating a strategy. In our blog we have shown some examples of non-biased strategies. Not treating the survivorship bias effect properly would have pushed those figures much higher, since we would have been taking into account only the "best performer" stocks. It's not the same to select stocks from a smaller subsample (i.e., from the survivors) as from the whole universe (i.e., all the index constituents of that year). The absolute error representation of the phenomenon would take the following form:
\( \epsilon_t = \| M(S_t) - M(A_t) \| \quad ; \forall S_t \subseteq A_t , \forall t \in T \)
where \(S\) is the set of the survivors, \(A\) is the set of all the constituents and \(M\) is a measure.
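To see the distortion in miniature, here is a small Python sketch with made-up returns; the numbers and the 10% drop-out rule are purely hypothetical.

import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(loc=0.05, scale=0.30, size=500)         # yearly returns of 500 index members

# pretend the worst 10% drop out of the index during the year (e.g. market cap too small)
survivors = returns[returns > np.quantile(returns, 0.10)]

print(f"mean return, full universe: {returns.mean():+.2%}")
print(f"mean return, survivors only: {survivors.mean():+.2%}")   # biased upwards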
Keep looking ahead…
Another type of bias related to survivorship bias is the look-ahead bias. This bias arises when an agent, while simulating the path of a strategy, anticipates information that did not actually exist at that point in time. This bias is more common when you try to recreate the fundamentals of a company. To illustrate this, let's see the price-to-earnings ratio of a big cap.
Companies are regulated and therefore have to release their financial statements. The thing is that a company may review and edit past releases, altering the previous statements. But when these statements were first released, you didn't know that they would be amended in the future! An investor may be driven by these releases, so the look-ahead bias is just the unnatural anticipation of information that, again, would lead to wrong decisions. Just look at the figure. The biased investor would be looking at the bigger PER. What if you're a value investor? As a consequence, that investor might not even consider this stock…
Data, data, data
Invest in data. Buy data. Create data. The raw material of the financial markets is information. Expend resources to create an extensive database, with all the ins and outs of the stocks from the index, the birth and death of funds…
What if you don’t have enough resources? Well, data snooping is a beautiful way to create artificial data, so maybe
raising the t-statistic or the confidence interval when hypothesis testing would be a nice approach.
Do you think now that Churchill was right or wrong? Well… it really depends on how you dig into the history 😉 Now we’re ready to make our own strategies. Cheers!
|
As Kimball mentioned, this is all in Godement-Jacquet's monograph "Zeta Functions of Simple Algebras". For the case $n = 1$, this is just Tate's thesis. When the field is $\mathbb{Q}$, a good reference is Goldfeld-Hundley "Automorphic Representations and $L$-Functions for the General Linear Group".
The story is roughly the following. There is a functional equation of the following form:\[\Lambda(s,\pi) = \epsilon(s,\pi) \Lambda(1 - s,\widetilde{\pi}).\]Here $\pi$ is a unitary cuspidal automorphic representation of $\mathrm{GL}_n(\mathbb{A}_F)$, where $F$ is a number field, $\widetilde{\pi}$ denotes the contragredient, and $\Lambda(s,\pi) = \prod_v L_v(s,\pi_v)$ is the completed $L$-function of $\pi$ (including the archimedean factors). The epsilon factor $\epsilon(s,\pi)$ factorises as\[\epsilon(s,\pi) = \epsilon(s,\pi,\psi) = \prod_v \epsilon_v(s,\pi_v,\psi_v),\]where $\psi_v$ is an additive character of $F_v$ that is the local component of an additive character of $\mathbb{A}_F$. Each local epsilon factor may depend on $\psi_v$, but the global epsilon factor is independent of $\psi$.
Let $v$ be a nonarchimedean place of $F$ with associated local field $F_v$ having ring of integers $\mathcal{O}_v$, maximal ideal $\mathfrak{p}_v$, and whose residue field $\mathcal{O}_v / \mathfrak{p}_v$ has cardinality $q_v$. Let $\psi_v$ be an additive character of $F_v$. The conductor of $\psi_v$ is $\mathfrak{p}_v^{c(\psi_v)}$, where $c(\psi_v)$ is the least integer (possibly negative) for which $\psi_v$ is trivial on $\mathfrak{p}_v^{c(\psi_v)}$. The conductor of $\pi_v$ is $\mathfrak{p}_v^{c(\pi_v)}$, where $c(\pi_v)$ is the least integer (necessarily nonnegative) for which $\pi_v$ contains a nonzero vector fixed by the congruence subgroup$$ K_0(\mathfrak{p}^{c(\pi_v)}) = \left\{ \begin{pmatrix} a & b \\\ c & d \end{pmatrix} \in \mathrm{GL}_n(\mathcal{O}_v) : c \in \mathrm{Mat}_{1 \times (n - 1)}(\mathfrak{p}_v^{c(\pi_v)}), \ d - 1 \in \mathfrak{p}_v^{c(\pi_v)} \right\}.$$Then the local epsilon factor is\[\epsilon_v(s,\pi_v,\psi_v) = \epsilon_v\left(\frac{1}{2},\pi_v,\psi_v\right) q_v^{(n c(\psi_v) - c(\pi_v))\left(s - \frac{1}{2}\right)}.\]Suppose that $F_v$ is an extension of $\mathbb{Q}_p$. If we choose $\psi_v$ to be the composition of the standard unramified additive character of $\mathbb{Q}_p$ with the trace map $\mathrm{Tr}_{F_v/\mathbb{Q}_p}$, then $\epsilon_v(1/2,\pi_v,\psi_v)$ is a complex number of absolute value $1$. For example, if $n = 1$, so that $\pi_v$ is a character, $\epsilon_v(1/2,\pi_v,\psi_v)$ is a normalised Gauss sum; in general, it can be quite complicated. Moreover, $\mathfrak{p}_v^{-c(\psi_v)}$ is the different ideal $\mathfrak{D}_{F_v/\mathbb{Q}_p}$ and $q_v^{-c(\psi_v)}$ is the absolute discriminant $\Delta_{F_v/\mathbb{Q}_p} = N_{F_v/\mathbb{Q}_p}(\mathfrak{D}_{F_v/\mathbb{Q}_p})$ of the extension $F_v/\mathbb{Q}_p$.
Note that at all but finitely many places, the additive character $\psi_v$ is unramified ($c(\psi_v) = 0$) and the representation $\pi_v$ is unramified ($c(\pi_v) = 0$).
I haven't yet discussed the archimedean factors. The local $L$-function at an archimedean place $v$ of $F$ will be something of the form\[L_v(s,\pi_v) = \prod_{j = 1}^{n} \zeta_v(s + t_j),\]for some complex numbers $t_j$ (with some restrictions on the possible vertical lines that $t_j$ lies on, since I am assuming that $\pi$ is unitary). Here $\zeta_v(s) = \pi^{-s/2} \Gamma(s/2)$ for a real place $v$ and $\zeta_v(s) = 2(2\pi)^{-s} \Gamma(s)$ for a complex place $v$. Finally, the local epsilon factor at an archimedean place will just be something of the form $\epsilon_v(s,\pi_v,\psi_v) = i^k$ for some integer $k$ (in particular, this is independent of $s$). For more details on the archimedean $L$-functions and epsilon factors, see Knapp's paper "Local Langlands Correspondence: the Archimedean Case".
So the global epsilon factor is\[\prod_v \epsilon_v\left(\frac{1}{2},\pi_v, \psi_v\right) \prod_{v \text{ nonarchimedean}} q_v^{n c(\psi_v) \left(s - \frac{1}{2}\right)} \prod_{v \text{ nonarchimedean}} q_v^{- c(\pi_v) \left(s - \frac{1}{2}\right)}.\]The first term is $\epsilon(1/2,\pi)$, the global root number, which is some complex number of absolute value $1$. The second is\[\Delta_{F/\mathbb{Q}}^{-n\left(s - \frac{1}{2}\right)},\]where $\Delta_{F/\mathbb{Q}}$ is the absolute discriminant of the extension $F/\mathbb{Q}$; this is the norm $N_{F/\mathbb{Q}}(\mathfrak{D}_{F/\mathbb{Q}})$ of the different$$\mathfrak{D}_{F/\mathbb{Q}} = \prod_{v \text{ nonarchimedean}} \mathfrak{p}_v^{-c(\psi_v)}.$$The third is\[N_{F/\mathbb{Q}}(\mathfrak{q})^{-\left(s - \frac{1}{2}\right)},\]where\[\mathfrak{q} = \prod_{v \text{ nonarchimedean}} \mathfrak{p}_v^{c(\pi_v)}\]is the (relative) conductor of $\pi$ and $N_{F/\mathbb{Q}}$ denotes the norm.
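As a sanity check on how the three factors fit together, consider the simplest case $F = \mathbb{Q}$ with the standard additive character, so that $c(\psi_v) = 0$ at every nonarchimedean place and $\Delta_{F/\mathbb{Q}} = 1$ (this is just a specialisation of the formulas above, not an additional claim). The global epsilon factor then collapses to\[\epsilon(s,\pi) = \epsilon\left(\tfrac{1}{2},\pi\right) N^{-\left(s - \frac{1}{2}\right)},\]where $N = \prod_{v \text{ nonarchimedean}} q_v^{c(\pi_v)}$ is the (integer) conductor of $\pi$.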
|
I'm trying to maximize the probability of a particular outcome occurring subject to a constraint. In particular
$$\max \prod_{i \leq n} \left(1 - (1 - x_i)^{y_i}\right) \;\;\; \text{ s.t. } \;\;\; i \in \mathbb{N}^+,\; 0 \leq x_i \leq 1,\; x_1\cdots x_n = z, \forall n \in \mathbb{N}^+$$
where $y_i \in \mathbb{N}^+$ and $0 \leq z \leq 1$. The context really isn't important, I'm interested only in a solution to this problem. I've been able to find and prove a solution for the minimum, but I haven't been able to for the max.
I highly suspect that the maximum is at $x_1 = \cdots = x_n = z^{(1/n)}$ ($y_i$ fixed for all $i$), but I have not been able to come close to proving this. I'm looking for a proof that either the solution that I have proposed is correct, or incorrect. I'm not necessarily looking for a solution, but it would be welcome. I've been trying to prove this for quite some time and haven't had any luck (proving it false or true). Any advice or suggestions would be greatly appreciated.
Note (read all first): I've posted this on math exchange, but despite the number of views, I haven't received any responses. I'm reposting here (shouldn't do this, I know, but in retrospect I think this question is more relevant here) because I'm beginning to wonder if this problem is in fact much more difficult than I originally thought. It seems to me that someone would have looked at a problem similar to this from the research community since this problem, at least to me, seems relatively elementary despite its potential usefulness when calculating outcome probabilities. Barring a solution, is anyone aware of any references that I could take a look at that might lead me to a proof or counter proof?
In addition I have included algebraic geometry as a tag because one of the approaches I have looked at is a reduction of this problem by viewing it as the maximization of the volume of an $n$-dimensional hyperrectangle. That is to say, given a hyperrectangle with $n$-dimensional volume $x_1\cdots x_n = z$, what side lengths will give the largest $n$-dimensional volume if we set each side length to be $1 - (1 - x_i)^{y_i}$ for fixed $y_i$. From this perspective, I would expect the greatest $n$-dimensional volume increase (and thus greatest volume) would occur when all sides ($x_i$) have the same length. I don't have much of a background in geometry though, so I haven't gotten far on this.
Edit.
I mentioned that I was able to prove what the minimum was. My proof was incorrect. Doesn't change this question, but I wanted to make sure the problem description was as accurate as possible.
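For what it is worth, a minimal numerical sketch of the experiment implicit in the conjecture (the values of $n$, $y_i$, $z$ below are arbitrary examples; this only probes the claim, it proves nothing):

```python
import numpy as np

def objective(x, y):
    # product over i of (1 - (1 - x_i)^{y_i})
    return np.prod(1.0 - (1.0 - x) ** y)

rng = np.random.default_rng(0)
n, z = 4, 0.1
y = np.array([2, 3, 1, 5], dtype=float)

symmetric = objective(np.full(n, z ** (1.0 / n)), y)

best_random = 0.0
for _ in range(20000):
    x = rng.uniform(0.05, 1.0, size=n)
    x *= (z / np.prod(x)) ** (1.0 / n)      # rescale so that x_1 * ... * x_n = z
    if np.all(x <= 1.0):
        best_random = max(best_random, objective(x, y))

print("conjectured symmetric point:", symmetric)
print("best random feasible point :", best_random)
```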
|
For my field theory class I am trying to build the Lagrangian for the following system. Consider a 2D square lattice where the nearest and next-nearest neighbor interactions are modeled by springs with spring constant $C$. Let $u_1(\mathbf{x},t)$ and $u_2(\mathbf{x},t)$ be the displacement fields in the $x_1$ and $x_2$ direction respectively.
I need to show that the Lagrangian density for this system is given by
$$\mathscr{L}=\frac{1}{2}\left\{\sum_i\rho\left(\frac{\partial u_i}{\partial t}\right)^2-\lambda\left(\sum_i\frac{\partial u_i}{\partial x_i}\right)^2\right\}$$
The first term is easy, and the terms containing $(\frac{\partial u_i}{\partial x_i})^2$ I can also get by evaluating the added energy by the horizontal and vertical springs and approximating their change of length $\Delta l_i=a\frac{\partial u_i}{\partial x_i}$ and taking the continuum limit which gets rid of the $a$. But I am unable to show that the change in length of the diagonal spring is proportional to $2a^2\frac{\partial u_1}{\partial x_1}\frac{\partial u_2}{\partial x_2}$.
My best attempt so far is using the fact that an expansion of the potential $V(x_1,x_2)$ for small changes $\delta x_i$ gives (with reference potential being 0)
$$V\approx \frac{1}{2}\frac{\partial^2 V}{\partial x_1^2}\delta x_1^2+\frac{1}{2}\frac{\partial^2 V}{\partial x_2^2}\delta x_2^2+\frac{\partial^2 V}{\partial x_1 \partial x_2}\delta x_1 \delta x_2$$
and setting all the derivatives of the potential equal to $C$ if I want this potential to represent the combined energy of one vertical, horizontal and diagonal spring, but I am not satisfied with this solution, since I just assume that all the double derivatives equal $C$. I feel that my calculation is rather sloppy and cutting corners, even though it might lead to the correct solution. Is there a more illustrative way to build this lagrangian?
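For what it is worth, here is a hedged sketch of a more geometric route (it uses the same first-order, small-displacement assumptions as above, and it does not resolve the extra shear-type terms, which the quoted Lagrangian evidently drops or treats separately). For a spring connecting the site at $\mathbf{x}$ to the site at $\mathbf{x}+\mathbf{d}$, with unit vector $\hat{n}=\mathbf{d}/|\mathbf{d}|$, the change in length to first order is the projection of the relative displacement on the spring direction,
$$\Delta l \approx \hat{n}\cdot\big(\mathbf{u}(\mathbf{x}+\mathbf{d})-\mathbf{u}(\mathbf{x})\big) \approx |\mathbf{d}|\,\hat{n}_i \hat{n}_j \frac{\partial u_i}{\partial x_j}.$$
For the diagonal $\mathbf{d}=(a,a)$ one has $|\mathbf{d}|=\sqrt{2}\,a$ and $\hat{n}=(1,1)/\sqrt2$, so
$$\Delta l \approx \frac{a}{\sqrt2}\Big(\frac{\partial u_1}{\partial x_1}+\frac{\partial u_2}{\partial x_2}+\frac{\partial u_1}{\partial x_2}+\frac{\partial u_2}{\partial x_1}\Big),$$
and $\frac{1}{2}C\,\Delta l^2$ contains, among other terms, the cross term $\frac{1}{2}C\,a^2\,\frac{\partial u_1}{\partial x_1}\frac{\partial u_2}{\partial x_2}$. The spring along the other diagonal $(a,-a)$ contributes an equal cross term, so the two diagonal springs together produce $\frac{1}{2}C\,(2a^2)\,\frac{\partial u_1}{\partial x_1}\frac{\partial u_2}{\partial x_2}$ in the potential energy, which is the $2a^2\,\frac{\partial u_1}{\partial x_1}\frac{\partial u_2}{\partial x_2}$ combination mentioned above.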
|
Current through an LR circuit is given by$$i=i_0 (1-e^{\frac{-tR}{L}})..1$$From here I can observe that as I take $R$ towards $0$ no current flows through the circuit, and the current will be $0$ when the resistance in the circuit actually becomes $0$.
However, for an A.C. LR circuit I have seen my book present a derivation where there is a sinusoidally varying emf source and an inductor in the circuit, and nothing else, yet an expression for the current, of finite magnitude, is still derived. Now I might be wrong, but I am taking the liberty of focusing my observation on the AC circuit at a particular moment of time. I assume here that my 'moment' is small enough to avoid any appreciable change in the magnitude of the current or its direction, so I am treating the source as a DC source for this moment of time (which I don't know is valid or not). Given these assumptions, and now comparing my momentary AC circuit to the one which is actually DC, how is the momentary AC circuit working when it has no resistance, with reference to equation 1?
The time constant of an resistor, $R$ and inductor, $L$, circuit is $\tau= \frac LR$.
As $R$ gets smaller and smaller the time constant gets larger and larger.
The initial charging current is $I_0 = \frac {\mathcal E}{R}$ where $\mathcal E$ is the emf of the voltage source.
The term $e^{- \frac {t}{\tau}}= e^{- \frac {Rt}{L}} $ can be expanded as far as the second term $1-\frac {Rt}{L}$ to a better and better approximation as $R$ becomes smaller and smaller.
So now your equation for the current is $I(t) = \frac {\mathcal E}{R}\left ( 1-e^{- \frac {t}{\tau}}\right )\approx \frac {\mathcal E\,t}{L} $ and the approximation gets better and better as the resistance gets smaller and smaller.
Indeed if $R=0$ then $I(t) = \frac {\mathcal E\,t}{L} $.
This expression can be found directly by having the voltage source connected an inductor with no resistance.
Then Kirchhoff’s voltage law for such a circuit is $\mathcal E - L \frac{dI}{dt}=0$ and integration gives the same equation for the current, $I(t) = \frac {\mathcal E\,t}{L} $.
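A quick numerical illustration of this limit (a sketch; the emf, inductance, and time values below are arbitrary choices, not taken from the question):

```python
import numpy as np

E, L, t = 10.0, 0.5, 1e-3             # emf (V), inductance (H), time (s) -- arbitrary values
for R in [10.0, 1.0, 0.1, 0.01]:       # resistance in ohms
    i_exact = (E / R) * (1.0 - np.exp(-R * t / L))   # series LR step response
    i_limit = E * t / L                              # the R -> 0 approximation
    print(f"R = {R:6.2f}  i_exact = {i_exact:.6f} A   E*t/L = {i_limit:.6f} A")
```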
I don't know what is meant by "...1" at the end of the equation, but otherwise the equation is for a series LR circuit connected to a dc (e.g., battery) source when a switch is first closed to connect the circuit to the battery and thereafter as time progresses.
The equation describes the dc transient behavior of the circuit between $t=0$ and $t=∞$ and assumes no initial current flowing in the circuit.
The equation does not apply to the series LR behavior in an ac circuit.
The boundary conditions are $i=0$ at time $t=0$ the instant the switch is closed because you can't change the current through an ideal inductor instantaneously. At time $t=∞$ $i=i_0$ where $i_0$ is the final current and equals $\frac{V}{R}$ where $V$ is the battery voltage. This is because an ideal inductor looks like a short circuit to dc after transients have died out.
I should add that if you want to analyze the response of a series LR circuit to an ac source you will have to solve a first order differential equation. The following link may be of help to you in this regard:
Hope this helps.
In the DC case powered by a constant voltage source, the smaller the $R$, the greater the final current. This is because in the end all of the voltage is across the resistor, so it takes a greater current to make $RI$ equal to the voltage of the source.
In both the AC and DC cases the current will flow even if the resistance $R$ is zero.
|
Problem statement
Let $P$ be a probability measure on the positive real line and assume all its
raw moments, $\mu_k = \mathbb{E}[x^k]$, $k=1,2,\dots$ exist and $\mu_k < \infty$ for all $k$. Let $\mu = \mu_1$ be the mean of $P$ and assume $\mu > 0$.
Does the following inequality hold? $$ 4 \frac{\mu_3}{\mu^3} + 6 \frac{\mu_2}{\mu^2} - 9\left(\frac{\mu_2}{\mu^2}\right)^2 - 1 \leq 0 $$
Further remarks
I have tried to find a counterexample experimentally and found the above holds over a wide variety of continuous distributions (i.e. Gamma and Beta distributions) and may be tight in the sense that there are distributions which are just below zero.
Pointers
From
Stieltjes moment problem we know that for any measure on the positive real line we must have that
$$ \det\left(\left[\begin{array}{cc} 1 & \mu\\ \mu & \mu_2 \end{array} \right]\right) > 0, \qquad\textrm{and}\qquad \det\left(\left[\begin{array}{cc} \mu & \mu_2\\ \mu_2 & \mu_3 \end{array}\right]\right) > 0, $$ from which we can infer that $$ \frac{\mu_2}{\mu^2} > 1, \qquad\textrm{and}\qquad \frac{\mu_3}{\mu^3} > \left(\frac{\mu_2}{\mu^2}\right)^2. $$
(But I have not been able to show the above inequality based on this.)
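For what it is worth, here is a minimal version of the numerical experiment described above (a sketch assuming SciPy; the parameter grids are arbitrary examples):

```python
from scipy import stats

def lhs(dist):
    # left-hand side of the conjectured inequality, from the first three raw moments
    m1, m2, m3 = (dist.moment(k) for k in (1, 2, 3))
    r2, r3 = m2 / m1**2, m3 / m1**3
    return 4 * r3 + 6 * r2 - 9 * r2**2 - 1

for a in (0.3, 1.0, 2.0, 5.0):
    print("Gamma(a=%.1f):" % a, lhs(stats.gamma(a)))        # works out to -1/a^2 for the Gamma family
for a, b in ((0.5, 0.5), (2.0, 5.0), (0.2, 3.0)):
    print("Beta(%.1f,%.1f):" % (a, b), lhs(stats.beta(a, b)))
```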
|
Hey there. I hope you are doing well. In my last post, we had set up the required environment. Let us get started with today's post about perceptron.
A perceptron is the fundamental unit of a neural network. It is an algorithm for supervised learning (we say what is right and wrong) for binary classification (either zero or one).
Perceptron is also a basic mathematical model of a neuron. It tries to replicate the work of a neuron. So to implement a perceptron let us study a little bit about neuron.
Neuron
We know neurons are the basic building block of the brain (A massive and complex neural network).
So this was the guy I was talking about. Let us see the functions of this guy on a very high level of abstraction.
He takes in signals from other neurons through the dendrites, processes the signal in the nucleus, and passes the processed information on to other cells via the axon terminals.
So let us try to come up with a model that replicates the neuron's activity to some extent.
Birth of the perceptron
Perceptron
tries to do the above-mentioned activities. Let us try to model one.
So what should it do?
It can take \(n\) inputs, does some computation on them, passes the result through a non-linear function (very important), and produces a single output.
\(x_i\) is the \(i^{th}\) input.
\(W_i\) is the weight value corresponding to the \(i^{th}\) input. \(\sigma\) is the activation function.
We multiply each \(x_i\) with \(W_i\) and sum them to produce a single value, which is then fed into an activation function to produce the output. This is our model of the neuron.
Why Activation?
Activation functions are non-linear, meaning their graph is not a straight line. The usage of non-linearity is helpful in approximating any function given we have a sufficient number of units.
Let us consider one activation function called the step function. Step function
\( S(x) = \begin{cases}
1,& \text{if } x\geq 0 \\ 0,& \text{if } x\lt 0 \end{cases} \)
As we see below.
\( x \) can take values from \( -\infty \) to \( \infty \); \( S(x) \) can take values from \( 0 \) to \( 1 \).
Here the graph is not a straight line, and its slope is different at different points.
Enough of Theory, Let us start coding. We have modelled our perceptron, now let us see what it can do.
Fire up your jupyter notebook by using this command
$ jupyter notebook
Create a new notebook named Exercise-2
Import the following modules. Let us create a perceptron.
Here we model a Perceptron using a class, and it has some important methods which are used to mimic the action of the real neuron to some extent. The complete code is given below for you to try, and a breakdown of the components is also provided.
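A minimal sketch of such a class, consistent with the breakdown that follows (the update rule and any details beyond the described method names are illustrative assumptions, not necessarily identical to the original notebook):

```python
import numpy as np

class Perceptron:
    def __init__(self, n_inputs, learning_rate=0.5):
        # W holds the weights plus the bias (inputs are padded with a 1)
        self.W = np.random.randn(n_inputs + 1)
        self.lr = learning_rate

    def step_function(self, x):
        # binary activation: 1 if x >= 0 else 0
        return 1 if x >= 0 else 0

    def forward(self, x):
        # dot product of the padded input with W, then the non-linear activation
        x = np.append(x, 1.0)            # pad with a 1 so the bias lives inside W
        return self.step_function(np.dot(self.W, x))

    def loss(self, y_pred, y_true):
        # the "critic": the signed error tells us which way the prediction was wrong
        return y_true - y_pred

    def back_propagate(self, x, error):
        # nudge the weights in the direction that reduces the error
        x = np.append(x, 1.0)
        self.W += self.lr * error * x

    def batch_train(self, X, Y, epochs=10):
        for epoch in range(epochs):
            total_loss = 0
            for x, y in zip(X, Y):
                error = self.loss(self.forward(x), y)
                self.back_propagate(x, error)
                total_loss += abs(error)
            print("epoch", epoch + 1, "loss", total_loss)
```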
Let us break down the code.
We have an internal state W for our perceptron, which has to be learned. The size of the state is the number of inputs plus one, because this gives us a way to integrate the bias inside the W matrix itself if we pad the input with a one. This would be an easy implementation of
$$ W*x + b $$
So the perceptron has been initialized. Now we can know more about the other methods in the perceptron.
step_function
This takes a single input and applies the step_function which gives us output in binary.
forward
This takes in an array of input and performs an element-wise multiplication and sums it to give a single output (dot product). Then it is passed to a non-linear activation. This would yield an output, either 1 or 0. This is where the decisions are made about the data.
loss
This function acts as a critic. This guy has both the predicted value and the original label. He compares both of them and teaches the perceptron on the correct and wrong things.
back_propagate
This takes the inputs from the loss function and adjusts the weights of \( W \).
batch_train
This function takes the input and feeds it to the network and trains the perceptron.
Analogy
Let us consider the forward function as a student who is going to a school.
Let \( W \) represent his actions in a test. So forward is the action of taking the test. The loss function is like a teacher who corrects his paper and tells him what is wrong and what is correct. back_propagate are the parents of that student. They take input from the teacher and correct his actions so that he can earn more marks in the next exam.
Now our perceptron is ready. Let us use this to approximate the AND function.
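A sketch of what that training cell could look like (it reuses the Perceptron class sketched above and NumPy; the details are illustrative):

```python
# truth table of the AND gate
X = [np.array([0, 0]), np.array([0, 1]), np.array([1, 0]), np.array([1, 1])]
Y = [0, 0, 0, 1]

and_gate = Perceptron(n_inputs=2, learning_rate=0.5)
and_gate.batch_train(X, Y, epochs=10)
print(and_gate.forward(np.array([1, 1])))   # should print 1 once trained
```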
Here the AND gate takes two inputs and its learning rate is 0.5. You can change the learning rate and see what happens. The last line prints the AND gate's output when both the inputs are 1. Now try it with different inputs.
Do you have some burning questions about this?
Because I had some questions when I first heard about this.
How do we know when the perceptron is ready?
Since we know all the inputs that this perceptron will ever get and luckily it is small, we can use all the data to train the perceptron. So in this case when the loss value goes to zero, we know that our perceptron is ready for action.
We know that after the 5th epoch the perceptron has seen all the possible inputs and correctly classified them.
What is the correct number for learning rate and epochs?
The answer is "we don't know". They are called as hyperparameter. We can use the trial and error method to find the correct numbers.
We know that it works, but how does it work?
This is a classification problem, where we should separate the given data into \( 0 \) or \( 1 \). All the inputs yield either \( 0 \) or \( 1 \), and the perceptron classified them correctly. It somehow knows that inputs with two ones should be classified as one and all others should be zero. So how does it know this? The answer is that it tries to create a boundary line between the data points (they are linearly separable). In this case, it creates a plane which divides the input points that yield zero from those that yield one.
Here three points are separated by the plane. Those three points lie under the plane and one point lies above it. That plane is the decision plane. The inputs to the AND gate are mapped to X and Y in the graph. The XY points below the plane belong to one class and the points above the plane belong to another class.
Okay. How does the perceptron know the plane?
The perceptron starts with a random plane in 3 dimensions and moves it so that it can separate the inputs based on their class. This is where the loss function and backpropagation come into the picture. The loss function evaluates the current plane, finds where it goes wrong, and by means of backpropagation teaches the perceptron.
Where is the code for the graphs and 3D plots? In my next blog... Sneak peek at a Neural Network. The relation between a perceptron and a Neural Network. Training a single layer perceptron network. Evaluation and Testing it.
|
Consider a set of $n$ values $x_i$ for $i = 1, 2, \ldots n$. The arithmetic mean $\mu_n$ of this collection of values is defined as: $$ \mu_n = \displaystyle\frac{1}{n}\sum_{i=1}^n x_i \label{post_c34d06f4f4de2375658ed41f70177d59_mean} $$ The simplicity of equation \eqref{post_c34d06f4f4de2375658ed41f70177d59_mean} hides an important issue: computing the sum of all values $x_i$ numerically is not a good idea since accuracy errors which are inherent in floating point arithmetic might degrade the accuracy of the computed mean value. This is due to the fact that as we sum the values $x_i$, the partial sum can become very large, so adding another value $x_i$ to it might amount to adding a small number to a large number. Given the finite precision involved in each of these additions, each computed partial sum may become less and less accurate; if this is the case, the accuracy of the computed mean value will suffer as well.
Notice, however, that equation \eqref{post_c34d06f4f4de2375658ed41f70177d59_mean} can be written as below: $$ \mu_n = \displaystyle\frac{1}{n}\sum_{i=1}^n x_i = \frac{1}{n}\left(x_n + \sum_{i=1}^{n-1} x_i\right) = \frac{1}{n}\left(x_n + (n-1)\mu_{n-1}\right) $$ where $\mu_{n-1}$ is the arithmetic mean of the first $(n-1)$ values $x_1, x_2, \ldots, x_{n-1}$: $$ \mu_{n-1} = \displaystyle\frac{1}{n-1}\sum_{i=1}^{n-1} x_i $$ Therefore, we have that: $$ \boxed{ \displaystyle\mu_n = \mu_{n-1} + \frac{1}{n}(x_n - \mu_{n-1}) } \label{post_c34d06f4f4de2375658ed41f70177d59_mean_stable} $$ Equation \eqref{post_c34d06f4f4de2375658ed41f70177d59_mean_stable} gives us a recursive formula for computing $\mu_n$ from the values of $\mu_{n-1}$ and $x_n$. This means we will need to compute $\mu_{n-1}$ before computing $\mu_n$. This recursive approach requires us then to compute $\mu_{n-2}$ to compute $\mu_{n-1}$, and so on. Therefore, to compute $\mu_n$, we will need to compute all of $\mu_1, \mu_2, \ldots, \mu_{n-1}$ first. This technique is a bit more expensive than directly using \eqref{post_c34d06f4f4de2375658ed41f70177d59_mean} since we have to perform more arithmetic operations to compute $\mu_n$, but the overall time complexity is still $O(n)$.
Why is the recursive formula \eqref{post_c34d06f4f4de2375658ed41f70177d59_mean_stable} better than the sum formula \eqref{post_c34d06f4f4de2375658ed41f70177d59_mean}? The reason is simple: the recursive formula avoids doing arithmetic operations with large and small numbers. The only issue there is that the factor $1/n$ can make the second term too small compared to the first one if $n$ is very large, but in practice this is much less of a problem than the accuracy issues discussed above.
To exemplify, suppose we throw a dice with six faces $n$ times and compute the mean value of the face which falls upwards (the dice needs not be fair). Assume that each face $k = 1, 2, \ldots, 6$ falls $n_k$ times upwards. For this particular example, the exact mean value of the top face can be computed directly: $$ \mu^e_n = \displaystyle\frac{1}{n}\sum_{i=1}^n x_i = \frac{1}{n}\sum_{k=1}^6 k n_k \label{post_c34d06f4f4de2375658ed41f70177d59_dice_mean} $$ where $x_i$ is the result of the $i$-th throw. Denoting the mean values computed using equations \eqref{post_c34d06f4f4de2375658ed41f70177d59_mean} and \eqref{post_c34d06f4f4de2375658ed41f70177d59_mean_stable} as $\mu_n^s$ and $\mu_n^r$ (for "sum" and "recursive") respectively, we can then see which one is more accurate by comparing their values with the exact value $\mu^e_n$. In what follows, we will use single-precision floating-point numbers to make the effects of finite precision arithmetic more visible, but the results shown below are also true for double-precision numbers even though larger values of $n$ may be necessary for the effects to become significant. Table 1 shows some simulation results obtained for different sets of values $(n_1, n_2, \ldots, n_6)$.
$n\;(\times 10^6)$ | $(n_1, n_2, n_3, n_4, n_5, n_6)\;(\times 10^6)$ | $\mu_n^s$ | $\mu_n^r$ | $\mu_n^e$
$6$ | $(1,1,1,1,1,1)$ | $3.4856$ | $3.4997$ | $3.5000$
$15$ | $(4,2,1,4,1,3)$ | $3.2263$ | $3.2376$ | $3.3333$
$18$ | $(3,3,3,3,3,3)$ | $3.4037$ | $3.5002$ | $3.5000$
$20$ | $(20,0,0,0,0,0)$ | $0.8389$ | $1.0000$ | $1.0000$
Table 1: Mean values $\mu_n^s$ and $\mu_n^r$ computed using equations \eqref{post_c34d06f4f4de2375658ed41f70177d59_mean} and \eqref{post_c34d06f4f4de2375658ed41f70177d59_mean_stable} respectively, and exact mean values $\mu_n^e$ computed using equation \eqref{post_c34d06f4f4de2375658ed41f70177d59_dice_mean}. All values of $n$ and $n_k$ for $k = 1, 2, \ldots, 6$ are shown divided by $10^6$. Notice how the values of $\mu_n^r$ are significantly better than those of $\mu_n^s$.
For completeness, here is the Python (version 3) script used for computing $\mu_n^s$, $\mu_n^r$ and $\mu_n^e$:
import random
import numpy

# dice face values
face = [1,2,3,4,5,6]

# number of times each dice face falls upwards
n_face = [4000000, 2000000, 1000000, 4000000, 1000000, 3000000]

# simulate n dice throws (with counts for each face given by n_face)
values = []
for i in range(0,6):
    values += [face[i]] * n_face[i]
random.seed(0)
random.shuffle(values)

# mean computed using the sum formula
mean = numpy.sum(values, dtype=numpy.float32) / numpy.float32(len(values))
print("mu^s: %.4f" % mean)

# mean computed using the recursive formula
mean = numpy.float32(0.0)
n = 1
for x in values:
    mean += (numpy.float32(x) - mean) / numpy.float32(n)
    n += 1
print("mu^r: %.4f" % mean)

# exact mean value (up to float32 precision)
mean = numpy.float32(numpy.dot(face, n_face) / len(values))
print("mu^e: %.4f" % mean)
|
How has everyone fared on here? What do you think is going to be the cutoffs for these examinations?
Note by Tarun Yadav 3 years, 10 months ago
And don't you all think that the answers of the comprehension on Newton's law of cooling wrong..in the official answer key?
I think in a few questions, yes!
I am from Delhi getting about 141 in NSEP and 140 in NSEC acc. to the ALLEN upload(which I don't completely trust). How about everyone else? When is the official answer key out usually? Will I qualify for INChO and INPhO?
r u in class 11 or 12
Class 12, but I didn't have time for preparation. SO it was like I moved into the test hall without much expecting. Anyway, I found some questions in NSEC to be ambiguous. I don't think my score would make me eligible to write the INOs
@Tarun Yadav – Pretty hectic because I was busy with my college applications and could find only a little time. And besides this, I had a blunder with marking the responses in NSEP. Would have got 168 if I had done it correctly!
also every coaching institute is giving different answers for More than one correct answers in nsep . what do you think about that?
I hate the idea of coaching institutes, but okay here, I think the answer key uploaded by ALLEN has correct solutions for More than One Correct type. The Resonance keys are always absurd. Any other coaching institutes.
i am in class 11. NSEP: 80-90, NSEC: 100-110, NSEA: 175-180
Also it was nightmare for class 11 students because about 45 questions in nsec were from 12th standard syllabus also in nsec about 60% questions were from 12th . so 12th standard students would have done well in it .
I am in 10th. According to Allen answer key I am getting 94 in NSEP.
That is a good score in class 10th, I got around a 100 when I wrote it for the first time in 2012! Pretty cool! Have you taken the NSEA and NSEJS too?
no, next time ! i am not eligible for NSEJS.
@Ch Nikhil – Okay, but do try!
@Tarun Yadav – sure!
@Ch Nikhil – Ask some of your seniors to put their scores here!
@Prakhar Bindal , @CH Nikhil ask your friends to post too!
what would be the approximate cutoff in nse's this year?.I'm scoring 150 in nsep, 170 in nsec 195 in nsea
According to the official answer key?
You are most probably getting through in all three if you are from Delhi. But didn't you find any problems with the answer key?
well.I'm from Rajasthan.And I think I would clear only nsea as my batch mates are getting 180-190 in both nsep and nsec.
@Rahul Agrawal – I know they do. Tough luck being from Rajasthan! But still it is alright, because I think you would go to an IIT. What grade are you in?
@Tarun Yadav – 12th . i wanna go to iisc
@Rahul Agrawal – Oh! Actually I have got a medal in IESO and went to IChOTC but I don't think they get consider that, do they? I mean you would have gotten to IISc with a good KVPY rank. For others they take an admission test! But your score says you will do it. My NSEs went badly because of the confusion this time. Those paper codes and then that experimental shizz I had not prepared for, drowned me.
@Tarun Yadav – Firstly congratulations on your achievements.i did clear kvpy but i screwed the interview and thus my rank is not worth mentioning
@Rahul Agrawal – How much did you scored in kvpy last year and what rank did u got? . i am scoring about 74 in aptitude test in kvpy (sa stream) . what marks are required in interview to enter top 100 ?
@Prakhar Bindal – Well I think,you should try to score around 70-75 in interview and you'll have a great chance to be in the top 50
@Rahul Agrawal – Thanks i will try my best! :)
@Prakhar Bindal – Stay optimistic! Nothing can be said about the ranks as I know a person who is getting 92 in the test. So there may be more such people. Since the paper has been easier this time, focus on the interview as you will surely qualify within the top 100 now. There are people who give amazing interviews and drift their standings. The interview is nothing like a normal interview. Instead of asking about your activity and interests, your bent on science they ask regular subject questions. It is weird and I was shocked at this because I entered and those guys started asking me electro-chemistry and rotational dynamics. One of them asked me about something related to astronomy and though I answered it correctly and knew he was incorrect, I was marked absent because I argued!
@Tarun Yadav – Thanks! . i will surely stay confident ! . I Also know that person . there is like a buzz here in delhi ncr about his score! . i dont know how the news spreads so widely !
@Rahul Agrawal – Try studying hard for their entrance or rather focus on JEE. You will get in! I anyway don't have any plans to continue in India. I'll be going to US next year.
@Tarun Yadav – what kind of exam are you talking about? iisc offers admission through iit/kvpy/aipmt performance And did you appear for act?
@Rahul Agrawal – I did! Haven't got my result as yet!Yeah, I meant the interviews! Are you applying abroad too?
@Tarun Yadav – no.i'm not applying this year.but i appeared for the act in october
@Rahul Agrawal – Has your result come?
@Tarun Yadav – yes
@Rahul Agrawal – How much did you score ?
@Tarun Yadav – please u 2 share ur marks in aptitude+interview and rank so that i can get an idea about how much should i score in interview
@Prakhar Bindal – I had a 68 in the aptitude test. But I think your score was the like in the highest 10 last year. 16-20/25 is a very good score, but I guess the aptitude test this time was pretty easy. SO, it is hard to say anything.
I Think official answer keys are more correct because they are leading to an increase of 20 marks in my score! :P (NSEP)
@Prakhar Bindal – What is your score now?
@Tarun Yadav – about 95-100 in nsep and about 110-120 in nsec and 180 in nsea
@Prakhar Bindal – i just gave it for the sake of giving it . i will prepare for it next year
Hi guys. What is the expected cut off in Maharashtra (NSEP)
I got 150-160 in both NSEP and NSEC. I am from Orissa. Do I have any chance of qualifying?
|
If elementary particles (specifically, those with mass, such as the electron or other leptons) are pointlike particles, wouldn't that mean they are naked singularities?
But these particles have spin- wouldn't that make them naked
ring singularities, thus giving them an observed radius, making them non-pointlike?
If I remember correctly, the radius of a ring singularity is given by $a=\frac{J}{Mc}$. If we assume the intrinsic spin property of a particle is equal to $J$ of the corresponding singularity, we get for the electron:
$$r=\frac{\sqrt{3}\,\hbar/2}{m_e c}\approx 3.3\cdot10^{-13}\ \mathrm{m} \gg 10^{-22}\ \mathrm{m}$$
So this seems utterly nonsensical given the upper bound on the electron radius.
|
I couldn't find a bijective map from $(0,1)$ to $\mathbb{R}$. Is there any example?
Here is a nice one; can you find the equation?
Here is a bijection from $(-\pi/2,\pi/2)$ to $\mathbb{R}$: $$ f(x)=\tan x. $$ You can play with this function and solve your problem.
$g(x)=\frac 1{1+e^x}$ gives a bijection from $\Bbb R$ to $(0,1)$, so take the inverse of this map.
A homeomorphism (continuous bijection with a continuous inverse) would be $f:(0,1)\to\Bbb R$ given by $$f(x)=\frac{2x-1}{x-x^2}.$$
Added: Let me provide some explanation of how I came by this answer, rather than simply leave it as an unmotivated (though effective) formula and claim in perpetuity.
Many moons ago, I was assigned the task of demonstrating that the real interval $(-1,1)$ was in bijection with $\Bbb R.$ Prior experience with rational functions had shown me graphs like this:
The above is a graph of a continuous function from most of $\Bbb R$ onto $\Bbb R.$ This doesn't do the trick on its own, since it certainly isn't injective. However, it occurred to me that if we restrict the function to the open interval between the two vertical asymptotes, we get this graph, instead:
This graph is of a continuous, injective (more precisely, increasing) function from a bounded open interval of $\Bbb R$ onto $\Bbb R.$ This showed that rational functions could do the job. Other options occurred to me, certainly (such as trigonometric functions), but of the ideas I had (and given the results I was allowed to use) at the time, the most straightforward approach turned out to be using rational functions.
Now, given the symmetry of the interval $(-1,1)$ (and, arguably, of $\Bbb R$) about $0,$ the natural choice of the unique zero of the desired function was $x=0.$ In other words, I wanted $x$ to be the unique factor of the desired rational function's numerator that could be made equal to $0$ in the interval $(-1,1)$--meaning that for $\beta\in(-1,1)$ with $\beta\ne0,$ I needed to make sure that $x-\beta$ was not a factor of the numerator. For simplicity, I hoped that I could make $x$ the only factor of the numerator that could be made equal to $0$ at all--that is, I hoped that I could have $\alpha x$ as the numerator of my function for some nonzero real $\alpha.$
In order to get the vertical asymptotic behavior I wanted on the given interval--that is, only at the interval's endpoints--I needed to make sure that $x=\pm1$ gave a denominator of $0$--that is, that $x\mp1$ were factors of the denominator--and that for $-1<\beta<1,$ $\beta$ was
not a zero of the denominator--that is, that $x-\beta$ wasn't a factor of the denominator. For simplicity, I hoped that I could make $x\mp1$ the only factors of the denominator.
Playing to my hopes, I assumed $\alpha$ to be some arbitrary nonzero real, and considered the family of functions $$g_\alpha(x)=\frac{\alpha x}{(x+1)(x-1)}=\frac{\alpha x}{x^2-1},$$ with domain $(-1,1).$ It was readily seen that all such functions are real-valued and onto $\Bbb R.$
I wanted more, though! (I'm demanding of my functions when I can be. What can I say?) I wanted an increasing function. I determined (through experimentation which suggested proof) that $g_\alpha$ would be increasing if and only if $\alpha<0.$ Again, for convenience, I chose $\alpha=-1,$ which gave me the function that satisfied the desired (
and required) properties: $g:(-1,1)\to\Bbb R$ given by $$g(x)=\frac{-x}{x^2-1}=\frac{x}{1-x^2}.$$
Much later, you posted your question, and I realized (again, based on experience) that my earlier result could be adapted to the one you wanted. Playing around a bit with linear interpolation showed that the function $h:(0,1)\to(-1,1)$ given by $h(x)=2x-1$ was a bijection--in fact, an
increasing bijection.
It is readily shown (and I had previously seen) that if $X,Y,$ and $Z$ are ordered sets and if we are given increasing maps $X\to Y$ and $Y\to Z,$ then the composition of those maps is an increasing map $X\to Z.$ Also, it is readily shown (and I had seen previously) that if both such maps are continuous and surjective, then so is their composition. Just from these results, my originally posted map was obtained (though named differently): $$\begin{align}(g\circ h)(x) &= g\bigl(h(x)\bigr)\\ &= \frac{h(x)}{1-\left(h(x)\right)^2}\\ &=\frac{2x-1}{1-(2x-1)^2}\\ &=\frac{2x-1}{1-\left(4x^2-4x+1\right)}\\ &= \frac{2x-1}{4x-4x^2}\\ &=\frac{2x-1}{4(x-x^2)}.\end{align}$$
As lhf astutely pointed out shortly thereafter (and as I should have seen immediately), the factor of $4$ in the denominator serves no particular purpose, hence its later removal to yield the function $f$ that I eventually posted.
The remaining claim that I made (that $f$ has a continuous inverse), I leave to you (the reader). If you're curious how I determined this, try to prove it on your own first. If you're stymied (or if you simply want to run your proof attempt by me), let me know. I will do what I can to get you "unstuck."
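If it helps, the increasing and onto claims for $f$ are quick to confirm with a CAS (a small check, assuming SymPy):

```python
import sympy as sp

x = sp.symbols('x')
f = (2*x - 1) / (x - x**2)

fprime = sp.simplify(sp.diff(f, x))
print(fprime)                      # numerator 2*x**2 - 2*x + 1 is always positive, so f is increasing on (0, 1)
print(sp.limit(f, x, 0, '+'))      # -oo
print(sp.limit(f, x, 1, '-'))      # oo
```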
For a less differentiable example, consider the bijection in the following picture,
In symbols, given $x \in (0,1)$ let $n$ be the largest natural number such that $1-\frac{1}{n}<x$, and define $$y=\frac{x-\left(1-\frac{1}{n}\right)}{\frac{1}{n}-\frac{1}{n+1}}$$ to be the renormalized version of $x$ when the interval $(1-\frac{1}{n},1-\frac{1}{n+1}]$ is rescaled and shifted to map to $(0,1)$. Then we have the following bijection: $$f(x)=\begin{cases}\frac{n-1}{2}+y,& n \text{ odd} \\ -\frac{n-2}{2}-y,& n \text{ even}\end{cases}$$
Yes. let $f(x)=\tan((x-1/2)\pi)$. the domain is $(0,1)$ and range is $\mathbb{R}$
Yes, see above answers. There are even bijective maps between $(0,1)$ and $\mathbb{R}^n$. To see this, note that a bijection $\phi$ between $(0,1)$ and $(0,1)^2$ can be made in this way: Let $x= 0.b_1b_2\ldots$, with $b_j$ being the digits in a decimal expansion. Define $$\phi(x) = (0.b_1b_3b_5\ldots,0.b_2b_4b_6\ldots),$$ i.e., extract even and odd digits. For $\phi^{-1}(x_1,x_2)$, let $x_1 = 0.a_1a_2a_3\ldots$, and $x_2=0.b_1b_2b_3\ldots$. Then, $$ \phi^{-1}(x_1,x_2) = 0.a_1b_1a_2b_2\cdots$$ Some care has to be taken with identification between digital expansions like $0.199999\cdots$ and $0.20000\cdots$, but that is an exercise.
Having the bijection between $(0,1)$ and $(0,1)^2$, we can apply one of the other answers to create a bijection with $\mathbb{R}^2$.
The argument easily generalizes to $\mathbb{R}^n$.
The trigonometric function $\tan x$ is an invertible function from $(-\pi/2,\pi/2)$ to $\mathbb{R}$. Also to find an invertible function from $(0,1)$ to $(-\pi/2,\pi/2)$ find the equation of the straight line joining the points $(0,-\pi/2)$ and $(1,\pi/2)$. Now compose the two functions together. You can likewise find bijections between any two open intervals and any open interval and $\mathbb{R}$.
$x \mapsto \ln (- \ln x)$ with the inverse $y \mapsto e^{-e^{y}}$. It's also a $C^\infty$ diffeomorphism.
By virtue of http://natureofmathematics.wordpress.com/lecture-notes/cantor/, here's another picture. The interval at the bottom is $\mathbb{R}$.
|
In one problem I'm working on, I came across the computation of the reduced matrix element for the component of the rank 2 tensor
$$\langle j'\|[J^{(1)}\times J^{(1)}]^{(2)}_{(1)}\|j\rangle$$
I'm having a bit of trouble computing the respective tensor:
$$[J^{(1)}\times J^{(1)}]^{(2)}$$
I tried the following. Given two vectors, I tried computing the tensor using the well-known formula:
$T_Q^{(K)}=\sum_{q,m}\langle k,q;l,m|K,Q\rangle A_q^{(k)}B_m^{(l)}$
Replacing with the angular momentum operator, we get:
$T_Q^{(2)}=\sum_{q,m}\langle 1,q;1,m|2,Q\rangle A_q^{(1)}B_m^{(1)}$
With $Q=q+m$, so we really have one sum. However, this is the case for the direct product of two vectors, and I'm looking for the vector product, so I introduced an epsilon factor $\epsilon$ according to the definition of vector product:
$T_Q^{(2)}=\sum_{q,m}\langle 1,q;1,m|2,Q\rangle \epsilon_{Qqm} A_q^{(1)}B_m^{(1)}$
But my problem is when I compute the components, I get zero, so I'm not sure if I'm defining correctly the operator. For example, for the component $T_1^{(2)}$:
$T_1^{(2)}=\langle 1,1;1,0|2,1\rangle\epsilon_{1,0,0}J_1^{(1)}J_0^{(1)}+\langle 1,0;1,1|2,1\rangle\epsilon_{1,0,1}J_0^{(1)}J_1^{(1)}+\langle 1,-1;1,2|2,1\rangle\epsilon_{1,-1,2}J_{-1}^{(1)}J_2^{(1)}=0$
Since $\epsilon_{1,0,0}=\epsilon_{1,0,1}=0$ and $J_2^{(1)}=0$ since it doesn't exist.
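For reference, the ordinary Clebsch–Gordan coefficients $\langle 1,q;1,m|2,Q\rangle$ that enter these sums are easy to tabulate with a CAS (a small sketch assuming SymPy; it says nothing about the extra $\epsilon$ factor introduced above):

```python
from sympy import S
from sympy.physics.quantum.cg import CG

Q = 1
for q in (-1, 0, 1):
    m = Q - q
    if abs(m) > 1:
        continue                      # m must be a valid projection of a rank-1 operator
    coeff = CG(S(1), S(q), S(1), S(m), S(2), S(Q)).doit()
    print(f"<1,{q};1,{m}|2,{Q}> = {coeff}")
```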
|
There are special integrals such as the logarithmic integral and exponential integrals. I want to know if there are primitives for such integrals. If not, why not?
This last link should clarify some of the ideas used :
1. The only new term appearing during an integration (i.e. that was not in the integrand) is a linear combination of logarithms (because logarithms alone may disappear during differentiation...).
2. Exponentials $e^f$ had to be in the integrand first (since differentiation doesn't make them disappear) and will reappear as $h\,e^f$ (of course subtle points exist, like considering $\sqrt{x}=e^{\,\large{\ln(x)/2}}\cdots$).
3. Differentiation of an algebraic function $\theta$ (i.e. there is a polynomial $P(\theta)=0$) will give a rational function $\dfrac {d(\theta)}{e(\theta)}$ with $\,d$ and $e\,$ two polynomials.
These 3 ideas will provide logarithmic, exponential and algebraic extensions to the differential algebra (starting for example with the field of rational functions over $\mathbb{Q}$) that will give all the elementary functions.
An excellent tutorial about this is "Symbolic Integration" from Manuel Bronstein.
Geddes, Czapor and Labahn's book "Algorithms for Computer Algebra" is very clear too.
Now let's use these ideas to study $\;\displaystyle\int\frac {e^x}x\,dx$.
From a more precise version of $2.$ a primitive must be of type $\ I(x)=h(x)\,e^x\;$ with $h(x)$ a rational function. Let's suppose this and differentiate $\,I(x)$ : $$(h'(x)+h(x))\,e^x=\frac {e^x}x$$ so that we need : $$h'(x)+h(x)=\frac 1x$$ We supposed $h$ rational so that it may be decomposed in simple elements but $h'(x)$ can't give $\dfrac 1x$ so that $\dfrac 1x$ must be part of $h(x)$. In this case $h'(x)$ will create a term $-\dfrac 1{x^2}$ that must be compensated by a $\dfrac 1{x^2}$ term inside $h(x)$ that will generate a $-\dfrac 2{x^3}$ term... This process clearly doesn't end !
The same method could be used for the sine integral : $\;\displaystyle\int \frac {\sin(x)}x\,dx\,$ simply by writing $\ \sin(x)=\dfrac{e^{ix}-e^{-ix}}{2i}$.
(this method was presented by Matthew P Wiener in an old post at sci.math : recommended reading too !)
Concerning the logarithmic integral we have $\ \operatorname{li}(x)=\operatorname{Ei}(\ln(x))\ $ so that the non-elementary proof for the one should apply for the other as well.
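As a quick empirical companion to the argument above, a computer algebra system returns exactly these special functions when asked for the primitives (a small check, assuming SymPy):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.exp(x) / x, x))    # Ei(x)
print(sp.integrate(sp.sin(x) / x, x))    # Si(x)
print(sp.integrate(1 / sp.log(x), x))    # li(x)
```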
|
Let $X$ be a real random variable and $f$ the density belonging to $X$. Let $a\neq0$, and $b \in \mathbb R$, while $Y:=aX+b$. Show:
i) The density of $Y$ wrt the lebesgue measure exists and is $g(x):=\frac{1}{|a|}f(\frac{x-b}{a}), x\in\mathbb R$
ii) Find the distribution of $Y$ if $X$ ~ $\mathcal{N}(\mu,\sigma^{2})$
My steps:
i) I think I've got a good grasp of it, but still have a few questions. I will just do the case $a>0$ here: Let $c \in \mathbb R$.
$P(Y\leq c)=P(aX+b\leq c)=P(X\leq\frac{c-b}{a})$ and we know the distribution of RV $X$, thus: $P(X\leq\frac{c-b}{a})=\int_{-\infty}^{\frac{c-b}{a}}f(x)d\lambda(x)=\int_{-\infty}^{\frac{c-b}{a}}f(x)dx$
now I set $y=ax + b \Rightarrow dy=adx$
Therefore $\int_{-\infty}^{\frac{c-b}{a}}f(x)dx=\int_{-\infty}^{c}\frac{1}{a}f(\frac{x-b}{a})dx$
Using that case $a < 0$ too, we get $\frac{1}{|a|}f(\frac{x-b}{a}), x\in\mathbb R$
Now I have the proposed density function, I need to show that it is indeed a pdf. On measurability, it is clear since $\int_{-\infty}^{c}\frac{1}{|a|}f(\frac{x-b}{a})dx$ exists that $g(x):=\frac{1}{|a|}f(\frac{x-b}{a})$ is measurable on $(-\infty, c], \forall c \in \mathbb R$. And since $\{ (-\infty, c] \mid c \in \mathbb R\}$ is a generator of $\mathcal{B}(\mathbb R)$, $g$ is therefore Borel measurable. (Is this correct reasoning?)
First Question: I would think that an alternative way of showing that $g$ is Borel-measurable is simply stating that $g$ is the product of the Borel-measurable functions $\frac{1}{|a|}$ and $f(\frac{x-b}{a})$, and is therefore Borel-measurable. I have a feeling that this may however not be so simple, because $f(x)$ being measurable does not imply that $f(\frac{x-b}{a})$ is indeed Borel-measurable. Any notes or clarification on this would be of great help.
ii) I get $Y$ ~ $\mathcal{N}(b+a\mu,(a\sigma)^2)$
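For ii), a worked step that may help: plugging the normal density into the formula from i) gives
$$g(x)=\frac{1}{|a|}f\!\left(\frac{x-b}{a}\right)=\frac{1}{\sqrt{2\pi}\,|a|\sigma}\exp\!\left(-\frac{\left(\frac{x-b}{a}-\mu\right)^2}{2\sigma^2}\right)=\frac{1}{\sqrt{2\pi}\,|a|\sigma}\exp\!\left(-\frac{\big(x-(a\mu+b)\big)^2}{2a^2\sigma^2}\right),$$
which is the density of $\mathcal{N}(a\mu+b,(a\sigma)^2)$, consistent with the answer stated above.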
|
Question:
In the early morning hours of June 14, 2002, the Earth had a remarkably close encounter with an asteroid the size of a small city. The previously unknown asteroid, now designated 2002 MN, remained undetected until three days after it had passed the Earth. At its closest approach, the asteroid was 73,600 miles from the center of the Earth-about a third of the distance to the Moon.
Part A
Find the speed of the asteroid at closest approach, assuming its speed at infinite distance to be zero and considering only its interaction with the Earth.
Part B
Observations indicate the asteroid to have a diameter of about 2.0 km. Estimate the kinetic energy of the asteroid at closest approach, assuming it has an average density of 3.35 g/cm{eq}^{3} {/eq}. (For comparison, a 1-megaton nuclear weapon releases about 5.6 x 10{eq}^{15} {/eq} J of energy.) Express your answer in joules using two significant figures.
Conservation of Energy:
The law of conservation of energy says the total energy of a closed system is conserved; no energy is created nor destroyed. The energy is transferred from one form to another, but the total will remain constant.
Answer and Explanation: Part A
The mechanical energy of the asteroid and the Earth system is conserved from the law of conservation of energy.
Let
i and f subscripts denote the initial (when the asteroid is very far away) and the final (when it is its closest approach) stages, respectively.
Let
KE and PE be the kinetic energy and the potential energy of the system. Then;
{eq}KE_i = 0 {/eq} since the asteroid is at rest at infinity, and {eq}PE_i = 0 {/eq} since it is very far away (at infinity).
Thus,
{eq}\begin{align} PE_i + KE_i &= PE_f + KE_f\\ 0 + 0 &= -\frac{GM_eM_a}{R_f} + \frac{1}{2}M_av_f^2\\ \therefore v_f &= \sqrt{\frac{2GM_e}{R_f}} \end{align} {/eq}
Here,
{eq}G = 6.67\times 10^{-11} \ \rm N\cdot m^2/kg^2 {/eq} is the gravitational constant, {eq}M_e = 5.972 \times 10^{24} \ \textrm{kg} {/eq} is the mass of Earth, {eq}M_a {/eq} is the mass of the asteroid, {eq}R_f = 73600 \ \textrm{mi} = 1.185 \times 10^8 \ \textrm m {/eq} is the distance of the asteroid at the final stage (closest to Earth), {eq}v_f {/eq} is the velocity of the asteroid at this distance.
Thus,
{eq}\begin{align} v_f &= \sqrt{\frac{2GM_e}{R_f}}\\ &= \sqrt{\frac{2\times 6.67\times 10^{-11} \ \rm N\cdot m^2/kg^2 \times 5.972 \times 10^{24} \ \textrm{kg}}{1.185 \times 10^8 \ \textrm m}}\\ &=\boxed{ 2.593 \times 10^3 \ \textrm{m/s}} \end{align} {/eq}
Part B
The kinetic energy of the asteroid is;
{eq}\begin{align} KE_f &= \frac{1}{2}M_av_f^2\\ &= \frac{1}{2}\rho_aV_av_f^2 \end{align} {/eq}
Here,
{eq}\rho_a = 3.35 \ \textrm{g/cm}^3= 3.35 \times 10^{3} \ \textrm{kg/m}^3 {/eq} is the density of the asteroid, {eq}V_a = \frac{4\pi}{3}R_a^3 = \frac{4\pi}{3}\times (1000 \ \textrm m)^3 = 4.19 \times 10^9 \ \textrm m^3 {/eq} is the volume of the asteroid, {eq}R_a = \frac{2.0 \ \textrm{km}}{2} = 1000 \ \textrm m {/eq} is the radius of the asteroid.
Thus,
{eq}\begin{align} KE_f &= \frac{1}{2}\rho_aV_av_f^2\\ &= \frac{1}{2} \times 3.35 \times 10^{3} \ \textrm{kg/m}^3 \times 4.19 \times 10^9 \ \textrm m^3 \times (2.593 \times 10^3 \ \textrm{m/s})^2\\ &= \boxed{4.7 \times 10^{19} \ \textrm J} \end{align} {/eq}
This is ~8400 times more than 1-megaton nuclear weapon releases.
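A short numerical re-check of both parts (a sketch repeating the arithmetic above with the same constants):

```python
import math

G = 6.67e-11          # N m^2 / kg^2
M_e = 5.972e24        # kg
R_f = 1.185e8         # m (73,600 mi)
rho = 3.35e3          # kg/m^3
R_a = 1000.0          # m (half of the 2.0 km diameter)

v_f = math.sqrt(2 * G * M_e / R_f)        # speed at closest approach
V_a = 4.0 / 3.0 * math.pi * R_a**3        # asteroid volume
KE = 0.5 * rho * V_a * v_f**2             # kinetic energy at closest approach

print(f"v_f = {v_f:.3e} m/s")                     # ~2.59e3 m/s
print(f"KE  = {KE:.2e} J")                        # ~4.7e19 J
print(f"~{KE / 5.6e15:.0f} megaton-equivalents")  # ~8400
```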
|
Given the following
\[K = \min_{x} \left\{ c^T{x} + \sum_{k=1}^{M} p_kq_k^Ty_k ~|~ Ax = b; T_kx + W_ky_k = h_k; y_k \geq 0, x \geq 0 \,\,\forall ~k=1,2, \ldots M \right\} \]
where \((T_k, W_k, h_k, y_k)\) occurs with probability \(p_k > 0 ~\forall ~k = 1,2, \ldots, M\)
Now, I want to prove the following claim:
For \(M \geq 2\),
\[K \neq p_1 \min_{x} \left\{c^{T}x + q_1^{T} y_1 ~|~ Ax = b; T_1 x + W_1 y_1 = h_1; y_1 \geq 0, x \geq 0 \right\} + \\ \ldots + p_M min_{x} \left \{c^{T}x + q_M^{T} y_M ~|~ Ax = b; T_M x + W_M y_M = h_M; y_M \geq 0, x \geq 0 \right\}\]
First, since \(\sum_{k=1}^{M}p_k = 1\), I planned to establish
\[min_{x} \left\{c^Tx + \sum_{k=1}^{M} p_kq_k^Ty_k ~|~ Ax = b; T_kx + W_ky_k = h_k; y_k \geq 0, x \geq 0 ~\forall ~k=1,2,\ldots M \right\} \\ > \sum_{k=1}^{M} min_{x} \left\{p_kc^Tx + p_kq_k^Ty_k ~|~ Ax = b; T_kx + W_ky_k = h_k; y_k \geq 0, x \geq 0 \right\}\]
However, such inequality seems not to always hold, as we know nothing about the signs of the coefficients of the objective function, besides the fact that the constraints form a polyhedron as a feasible region. The reverse sign is also not correct for
Define optimal solutions \(x^{*}\) for K, and \(x_{1}^{*},x_{2}^{*}, \ldots, x_{M}^{*}\) for
\(\min_{x} \left\{p_kc^Tx + p_kq_k^Ty_k ~|~ Ax = b; T_kx + W_ky_k = h_k; y_k \geq 0, x \geq 0 \right\} ~\forall ~k=1,2,\ldots, M\)
By the sake of contradiction, assume
\[K = \sum_{k=1}^{M} \min_{x} \left\{p_kc^{T}x + p_kq_k^{T} y_k ~|~ Ax = b; T_{k}x + W_{k} y_k = h_k; y_k \geq 0, x \geq 0 \right\}\]
This is equivalent to: \(c^Tx^{*} = \sum_{k=1}^{M} p_{k} c^{T} x_k^{*}\).
However, note that \(Ax^{*} = Ax_{1}^{*} = Ax_{2}^{*} = \ldots = Ax_{M}^{*}\), and \(T_kx^{*} = T_kx_{1}^{*} = \ldots = T_kx_{M}^{*}\). So, does the first set of equality imply \(x^{*} = x_{1}^{*} = x_{2}^{*} = \ldots = x_{M}^{*}\), if \(A\) is an invertible matrix? But if that's the case, shouldn't this means \(\sum_{k=1}^{M} p_kc^Tx_k^{*} = c^Tx^{*} \sum_{k=1}^{M} p_k = c^Tx^{*}\) (since \(\sum_{k=1}^{M}p_k = 1\))? But this implies the claim above is wrong...
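If a concrete instance helps, here is a tiny two-scenario example (a sketch assuming `scipy.optimize.linprog`; the data are made up, with \(c=0\), \(q_k=1\), \(T_k=W_k=1\), and the \(Ax=b\) block taken to be vacuous) in which the joint problem's value strictly exceeds the scenario-by-scenario sum, so the claimed inequality holds for it:

```python
from scipy.optimize import linprog

p = [0.5, 0.5]          # scenario probabilities
h = [1.0, 2.0]          # right-hand sides h_k

# Joint (here-and-now) problem: variables (x, y1, y2), minimize sum_k p_k * y_k
joint = linprog(c=[0.0, p[0], p[1]],
                A_eq=[[1.0, 1.0, 0.0],    # x + y1 = h1
                      [1.0, 0.0, 1.0]],   # x + y2 = h2
                b_eq=h)

# Scenario-by-scenario (wait-and-see) problems: variables (x, y_k)
separate = sum(p[k] * linprog(c=[0.0, 1.0],
                              A_eq=[[1.0, 1.0]],
                              b_eq=[h[k]]).fun
               for k in range(2))

print("joint value   :", joint.fun)   # 0.5 (a single x must serve both scenarios)
print("separate value:", separate)    # 0.0 (each scenario picks its own x)
```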
|
Given a differentiable function $k(s)$, $s \in I$, show that the parametrized plane curve having $k(s)$ as curvature is given by:
$\alpha(s)=\left(\int \cos(\theta(s))\,ds+a,\ \int \sin(\theta(s))\,ds+b\right)$
where
$\theta(s) = \int k(s)ds + \gamma$
And that the curve is determined up to a translation by the vector $(a,b)$ and a rotation by the angle $\gamma$.
Solution:
$T'(s)=\kappa\,(-\sin(\theta(s)),\cos(\theta(s)))=\kappa N$
Which is easy to see as $\alpha(s)$ is arc length parameterized.
How do I show that this curve is uniquely determined up to a translation by the vector $(a,b)$ and a rotation by the angle $\gamma$?
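A hedged sketch of how the last part can be organised (standard facts only): the parameters enter exactly as a rotation and a translation. Writing $R_\gamma$ for rotation by $\gamma$,
$$\begin{pmatrix}\cos(\theta(s)+\gamma)\\ \sin(\theta(s)+\gamma)\end{pmatrix}=R_\gamma\begin{pmatrix}\cos\theta(s)\\ \sin\theta(s)\end{pmatrix},\qquad R_\gamma=\begin{pmatrix}\cos\gamma & -\sin\gamma\\ \sin\gamma & \cos\gamma\end{pmatrix},$$
so changing $\gamma$ rotates the tangent field and hence rotates the whole curve, while changing $(a,b)$ translates it. Conversely, if $\tilde\alpha$ is another unit-speed curve with the same curvature $k(s)$, its tangent angle satisfies $\tilde\theta'(s)=k(s)=\theta'(s)$, so $\tilde\theta=\theta+\gamma$ for some constant $\gamma$, and integrating the tangent then determines $\tilde\alpha$ up to the constants of integration $(a,b)$.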
|
Motivation : Here is a motivation as to why this problem is so important.
Let $f(t)$ be an audio signal. We can safely assume it to be bandlimited to 0-20kHz as we cannot hear anything above that. Capture this signal in a digital computer with an appropriate sampling frequency and denote it as $f[n]$.
Now take Discrete Hilbert transform of $f[n]$ to get $f_h[n]$, (using the code $f_h$ = imag(hilbert(f)); in Matlab).
Compute the signal $f_{\theta}[n] = f[n]\cos\theta + f_h[n]\sin\theta$ for any value of $\theta$, then listen to the signal with different values for $\theta$.
They all sound
exactly identical.
Similarly our $MI_{\omega_0,\omega_1}(t)$ is same for all $f_{\theta} = f\cos\theta + f_h\sin\theta$, for any value of $\theta$.
Question :
Just try it. $\langle f,f_h\rangle = 0$, then why do they produce the same effect in the listener? Is it some quantum mechanical effect gone wrong?
Added :
Also see this metric space : metric space
I've recently filed a patent using this metric with a slight change: instead of arccos I used sqrt(2(1-cos(theta))), which makes it a Hilbertian metric. I had then embedded this metric space into a Hilbert space isometrically, to model using vectors.
MATLAB code :
[f,fs] = wavread('audio_file.wav');
fh = imag(hilbert(f));
theta = pi/4;
f_tht = f*cos(theta) + fh*sin(theta);
wavplay(f,fs);
wavplay(f_tht,fs);
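An equivalent experiment in Python, in case MATLAB is unavailable (a sketch assuming SciPy; the file names are placeholders and playback is replaced by writing a file):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import hilbert

fs, f = wavfile.read('audio_file.wav')        # placeholder file name
f = f.astype(np.float64)
if f.ndim > 1:
    f = f[:, 0]                               # keep one channel for simplicity

fh = np.imag(hilbert(f))                      # discrete Hilbert transform of f
theta = np.pi / 4
f_tht = f * np.cos(theta) + fh * np.sin(theta)

# normalise and write out so the two versions can be listened to and compared
wavfile.write('audio_file_rotated.wav', fs,
              (f_tht / np.max(np.abs(f_tht))).astype(np.float32))
```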
|
Geometric Meaning of the Geometric Mean
The
geometric mean of two positive numbers $a\;$ and $b\;$ is the (positive) number g whose square equals the product $ab\;$:
$g^{2} = ab\;$.
While it is possible to (at least partially) adapt the definition to handle negative numbers, I do not believe this is ever done.
The geometric mean then answers this question: given a rectangle with sides $a\;$ and $b\;$, find the side of the square whose area equals that of the rectangle. In all likelihood, this particular problem gave that number its commonly used name: the
geometric mean. It appears in a more algebraic setting as the mean proportional $p\;$ between two numbers $a\;$ and $b\;$:
$a : p = p : b.$
Euclid VI.13 gives a geometric construction of the mean proportional:
Draw a semicircle on a diameter of length $a + b\;$ and a perpendicular to the diameter where the two segments join. The length of the perpendicular from the circumference to the diameter is exactly the geometric mean of $a\;$ and $b\;$.
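The reason the construction works is the similar-triangle relation recorded below as the Corollary from Euclid VI.8: the inscribed angle at the point on the circumference is right (Thales), the perpendicular of length $h$ splits the diameter into the segments $a$ and $b$, and the two small right triangles it creates are similar, so
$$\frac{h}{a}=\frac{b}{h}\quad\Longrightarrow\quad h^{2}=ab.$$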
(In passing, there are two more terms in Euclid VI that relate to proportions like the above. The
fourth proportional of the given numbers $a\;$, $b\;$, $c\;$ is the number $x\;$ such that $a/b = c/x.\;$ The third proportional of two numbers $a\;$ and $b\;$ is the number $y\;$ such that $a/b = b/y.\;$ Also, this same construction is used by Euclid II.14 to construct a square of the same area as a given rectangle.)
The above construction of the mean proportional is based on a Corollary from Euclid VI.8: If in a right-angled triangle a perpendicular is drawn from the right angle to the base, then the straight line so drawn is a mean proportional between the segments of the base. However this is not the only appearance of the mean proportional in right triangle. For example, in a right triangle with the altitude to the hypotenuse drawn we may observe three similar triangles: the given one, and the smaller ones cut off by the altitude. The corollary to VI.8 is derived from the similarity of two small triangles. If we pair the big triangle with any of the smaller ones, we'll find that a leg of a right triangle is the mean proportional between its projection on the hypotenuse and the hypotenuse itself.
In fact the geometric mean makes quite frequent appearances in a variety of geometric situations. I'll mention a few.
The length of the common tangent of two circles of diameters $a\;$ and $b\;$ that are tangent externally is the geometric mean of the diameters:
The geometric mean appeared as a tangent to a circle in John Wallis' very first geometric interpretation of complex numbers:
In the framework of the See-Saw Lemma, if a semicircle is inscribed between two perpendiculars to its diameter and a transversal tangent to the semicircle cut segments of lengths $a\;$ and $b\;$ from the two lines, then the radius of the semicircle is the geometric mean of $a\;$ and $b\;$.
In one of the simplest sangaku a square is inscribed into a right triangle. The process of inscribing a square continues for two more steps with the cut off triangles. Inscribe the incircles into three similar and similarly obtained triangles. Let their radii be $a\;$, $p,\;$ $b\;$ in the decreasing order, then $p\;$ is the mean proportional of $a\;$ and $b\;$.
Let $AB\;$ be a chord in a circle and $P\;$ a point on the circle. Draw perpendiculars $PQ,\;$ $PR,\;$ and $PS\;$ from $P\;$ to $AB\;$ and the tangents to the circle at $A\;$ and $B.\;$ Then $PQ^{2} = PR\cdot PS.$
Let points $C\;$ and $D\;$ lie on a semicircle with diameter $AB.\;$ Let $E\;$ be the intersection of $AC\;$ and $BD\;$ and $F\;$ the intersection of $AD$ and $BC.\;$ Let $EF\;$ meet the semicircle in $G\;$ and $AB\;$ in $H.\;$ Then $GH^{2} = EH\cdot FH$.
[W. H. Besant,
Conic Sections Treated Geometrically, George Bell & Sons, London, 1895, p. 28]. If from a point $Q\;$ tangents $QP,\;$ $QP'\;$ be drawn to a parabola, the two triangles $SPQ\;$ and $SQP'\;$ $(S\;$ the focus of the parabola), are similar, and $SQ\;$ is a mean proportional between $SP\;$ and $SP'.$
Produce $PQ\;$ to meet the axis in $T,\;$ and draw $SY,\;$ $SY'\;$ perpendicularly on the tangents. Then $Y\;$ and $Y'\;$ are points on the tangent at $A.\;$
$\begin{align} \angle SPQ &= \angle STY\\ &= \angle SYA\\ &= \angle SQP', \end{align}$
since $S, Y', Y, Q\;$ are points on a circle, and $SYA,\;$ $SQP'\;$ are in the same segment.
Also, since the tangents drawn from any point to a conic subtend equal angles at the focus, $\angle PSQ = \angle QSP';\;$ therefore the triangles $PSQ,\;$ $QSP'\;$ are similar, and
$SP:SQ = SQ:SP'.$
If two isosceles triangles $OTB\;$ and $OAT\;$ are similar, as in the diagram below, we get an easy proportion $OT/BO = AO/OT,\;$ meaning $OT\;$ is the geometric mean of $AO\;$ and $BO.\;$
In case the common base angle equals $72^{\circ}\;$ we have a dissection of the golden triangle; however, the geometric mean stays on even for pedestrian angles.
The configuration of two isosceles triangles has been used for a fast construction of the geometric mean of two line segments.
One consequence of Bui Quang Tuan's Lemma of equal areas is an assertion about the areas of triangles in this configuration:
Namely, $[BDE]^{2} = [ABD][BCE],\;$ where $[X]\;$ denotes the area of shape $X.$
The diagonals of a trapezoid cut it into four triangles:
Two of them have equal areas, say $X,\;$ if the areas of the other two are $M\;$ and $N\;$ then $X=\sqrt{M\cdot N}.$
Have you seen the geometric mean elsewhere? Let me know. Thank you.
|
I find myself needing to compute (or asymptotically estimate) the following sum over the $2^{S-1}$ compositions of $S$. I am hoping an expert in combinatorics (I am a computer scientist) will recognise this summation.
Let $B_{S,k}$ denote the set of compositions that have exactly $k$ parts, and define $T_{k}$ as,
$$T_{k}=\sum_{a \in B_{S,k}}\frac{a_{1}^{a_{2}}a_{2}^{a_{3}}...a_{k-1}^{a_{k}}}{a_{1}!a_{2}!...a_{k}!},\:\: \text{with}\:\: T_{1}=1.$$
The sum that I am interested in is $\sum_{k}T_{k}$.
Does an expression in terms of $S$ (either exact, or asymptotic) exist, or can it be found?
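For experimentation, $\sum_k T_k$ can be evaluated by brute force for small $S$ (a sketch; it enumerates all $2^{S-1}$ compositions, so it is only practical for modest $S$):

```python
from itertools import combinations
from math import factorial

def compositions(S):
    # compositions of S: choose cut points inside {1, ..., S-1}
    for r in range(S):
        for cuts in combinations(range(1, S), r):
            prev, parts = 0, []
            for c in list(cuts) + [S]:
                parts.append(c - prev)
                prev = c
            yield parts

def total(S):
    tot = 0.0
    for a in compositions(S):
        k = len(a)
        if k == 1:
            tot += 1.0                      # T_1 = 1 by convention
            continue
        num = 1.0
        for i in range(k - 1):
            num *= a[i] ** a[i + 1]         # a_1^{a_2} a_2^{a_3} ... a_{k-1}^{a_k}
        den = 1.0
        for ai in a:
            den *= factorial(ai)            # a_1! a_2! ... a_k!
        tot += num / den
    return tot

for S in range(1, 13):
    print(S, total(S))
```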
|
Communications in Mathematical Analysis Commun. Math. Anal. Volume 14, Number 2 (2013), 143-162. Asymptotic Behavior and Stability of Solutions to Barotropic Vorticity Equation on a Sphere Abstract
Orthogonal projectors on the subspace $\mathbf{H}_{n}$ of homogeneous spherical polynomials of degree $n$ and on the subspace $\mathbb{P}^{N}$ of spherical polynomials of degree $n\leq N$ are defined for functions on the unit sphere $S$, and their derivatives $\Lambda^{s}$ of real degree $s$ are introduced by using the multiplier operators. A family of Hilbert spaces $\mathbb{H}^{s}$ of generalized functions having fractional derivatives of real degree $s$ on $S$ is introduced, and some embedding theorems for functions from $\mathbb{H}^{s}$ and Banach spaces $\mathbb{L}^{p}(S)$ and $\mathbb{L}^{p}(0,T;\mathbb{X})$ on $S$ are given. Non-stationary and stationary problems for the barotropic vorticity equation (BVE) describing the vortex dynamics of a viscous incompressible fluid on a rotating sphere $S$ are considered. A theorem on the unique weak solvability of the nonstationary problem and a theorem on the existence of a weak solution to the stationary problem are given, and a condition guaranteeing the uniqueness of such a steady solution is also formulated. The asymptotic behaviour of solutions of the nonstationary BVE as $t\rightarrow \infty$ is studied. Particular forms of the external vorticity source have been found which guarantee the existence of such a bounded set $\mathbf{B}$ in a phase space $\mathbf{X}$ that eventually attracts all solutions to the BVE. It is shown that the asymptotic behaviour of the BVE solutions depends on both the structure and the smoothness of the external vorticity source. Sufficient conditions for the global asymptotic stability of both smooth and weak solutions are also given.
Article information Source Commun. Math. Anal., Volume 14, Number 2 (2013), 143-162. Dates First available in Project Euclid: 20 December 2012 Permanent link to this document https://projecteuclid.org/euclid.cma/1356039038 Mathematical Reviews number (MathSciNet) MR3011526 Zentralblatt MATH identifier 1263.76020 Citation
Skiba , Yuri N. Asymptotic Behavior and Stability of Solutions to Barotropic Vorticity Equation on a Sphere. Commun. Math. Anal. 14 (2013), no. 2, 143--162. https://projecteuclid.org/euclid.cma/1356039038
|
Mahlo cardinal A cardinal $\kappa$ is Mahlo if and only if it is inaccessible and the regular cardinals below $\kappa$ form a stationary subset of $\kappa$. Equivalently, $\kappa$ is Mahlo if it is regular and the inaccessible cardinals below $\kappa$ are stationary. Every Mahlo cardinal $\kappa$ is inaccessible, and indeed hyper-inaccessible and hyper-hyper-inaccessible, up to degree $\kappa$, and a limit of such cardinals. If $\kappa$ is Mahlo, then it is Mahlo in any inner model, since the concept of stationarity is similarly downward absolute.
Mahlo cardinals belong to the oldest large cardinals together with inaccessible and measurable.
Please add more history. Weakly Mahlo
A cardinal $\kappa$ is
weakly Mahlo if it is regular and the set of regular cardinals below $\kappa$ is stationary in $\kappa$. If $\kappa$ is a strong limit and hence also inaccessible, this is equivalent to $\kappa$ being Mahlo, since the strong limit cardinals form a closed unbounded subset in any inaccessible cardinal. In particular, under the GCH, a cardinal is weakly Mahlo if and only if it is Mahlo. But in general, the concepts can differ, since adding an enormous number of Cohen reals will preserve all weakly Mahlo cardinals, but can easily destroy strong limit cardinals. Thus, every Mahlo cardinal can be made weakly Mahlo but not Mahlo in a forcing extension in which the continuum is very large. Nevertheless, every weakly Mahlo cardinal is Mahlo in any inner model of the GCH. Hyper-Mahlo etc.
A cardinal $\kappa$ is
$1$-Mahlo if the set of Mahlo cardinals is stationary in $\kappa$. This is a strictly stronger notion than merely asserting that $\kappa$ is a Mahlo limit of Mahlo cardinals, since in fact every $1$-Mahlo cardinal is a limit of such Mahlo-limits-of-Mahlo cardinals. (So there is an entire hierarchy of limits-of-limits-of-Mahloness between the Mahlo cardinals and the $1$-Mahlo cardinals.) More generally, $\kappa$ is $\alpha$-Mahlo if it is Mahlo and for each $\beta\lt\alpha$ the class of $\beta$-Mahlo cardinals is stationary in $\kappa$. The cardinal $\kappa$ is hyper-Mahlo if it is $\kappa$-Mahlo. One may proceed to define the concepts of $\alpha$-hyper${}^\beta$-Mahlo by iterating this construction, taking stationary limits at each stage. All such levels are swamped by the weakly compact cardinals, which exhibit all the desired degrees of hyper-Mahloness and more:
Meta-ordinal terms are terms like $Ω^α · β + Ω^γ · δ +\cdots+Ω^\epsilon · \zeta + \theta$ where $α, β, \dots$ are ordinals. They are ordered as if $Ω$ were an ordinal greater than all the others. $(Ω · α + β)$-Mahlo denotes $β$-hyper${}^α$-Mahlo, $Ω^2$-Mahlo denotes hyper${}^\kappa$-Mahlo $\kappa$ etc. Every weakly compact cardinal $\kappa$ is $\Omega^α$-Mahlo for all $α<\kappa$ and probably more. A similar hierarchy exists for inaccessible cardinals below Mahlo. All such properties can be killed softly by forcing to make them any weaker properties from this family.[2]
$\Sigma_n$-Mahlo etc.
A regular cardinal $κ$ is $Σ_n$-Mahlo (resp. $Π_n$-Mahlo) if every club in $κ$ that is $Σ_n$-definable (resp. $Π_n$-definable) in $H(κ)$ contains an inaccessible cardinal. A regular cardinal $κ$ is $Σ_ω$-Mahlo if every club subset of $κ$ that is definable (with parameters) in $H(κ)$ contains an inaccessible cardinal.
Every $Π_1$-Mahlo cardinal is an inaccessible limit of inaccessible cardinals. For Mahlo $κ$, the set of $Σ_ω$-Mahlo cardinals is stationary on $κ$.
In [3] it is shown that every $Σ_ω$-weakly compact cardinal is $Σ_ω$-Mahlo and the set of $Σ_ω$-Mahlo cardinals below a $Σ_ω$-w.c. cardinal is $Σ_ω$-stationary, but if κ is $Π_{n+1}$-Mahlo, then the set of $Σ_n$-w.c. cardinals below $κ$ is $Π_{n+1}$-stationary.
These properties are connected with some forms of absoluteness. For example, the existence of a $Σ_ω$-Mahlo cardinal is equiconsistent with the generic absoluteness axiom $\mathcal{A}(L(\mathbb{R}), Σ_ω , Γ ∩ absolutely−ccc)$ where $Γ$ is the class of projective posets.
References
[1] Hamkins, Joel David and Johnstone, Thomas A. Resurrection axioms and uplifting cardinals, 2014.
[2] Carmody, Erin Kathryn. Force to change large cardinal strength, 2015.
[3] Bosch, Roger. Small Definably-large Cardinals. Set Theory: Trends in Mathematics, pp. 55-82, 2006.
[4] Bagaria, Joan and Bosch, Roger. Proper forcing extensions and Solovay models. Archive for Mathematical Logic, 2004.
[5] Bagaria, Joan. Axioms of generic absoluteness. Logic Colloquium 2002, 2006.
|
Hello! I'm struggling with this math problem:
The origin of the coordinate system is moved to the point \(\displaystyle O'(-1,2)\) and then rotated by an angle \(\displaystyle \alpha\) such that \(\displaystyle \tan\alpha = \frac{5}{12}\). Determine the coordinates of the point \(\displaystyle M\) in the old coordinate system if its coordinates in the new one are \(\displaystyle (2,-3)\).
So my proposition is:
First I have to determine \(\displaystyle \sin\alpha\) and \(\displaystyle \cos\alpha\):
\(\displaystyle \sin\alpha = \frac{5}{13}\) and \(\displaystyle \cos\alpha = \frac{12}{13}\).
I use the equation for rotation:
\(\displaystyle x' = x \cos\alpha - y \sin\alpha\)
\(\displaystyle y' = x \sin\alpha + y \cos\alpha\)
So
\(\displaystyle 2 = x \cdot \frac{12}{13} - y \cdot \frac{5}{13} \)
\(\displaystyle -3 = x \cdot \frac{5}{13} + y \cdot \frac{12}{13}\)
I solve for \(\displaystyle x\) and \(\displaystyle y\), the coordinates of the point before the rotation.
That's where I suspect my thinking is wrong, because the results are too complicated.
But after that having \(\displaystyle x\) and \(\displaystyle y\) I use the equation for translation.
\(\displaystyle x' = x + a\)
\(\displaystyle y' = y + b\)
\(\displaystyle x'\) is earlier designated \(\displaystyle x\), as well as \(\displaystyle y'\). And \(\displaystyle a = 2, b = -3\).
I designate \(\displaystyle x\) and \(\displaystyle y\) from equation and thats the old coordinates of the point \(\displaystyle M\).
Could anyone tell me if this solution is correct? I would be very grateful.
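For a quick numerical check, here is a small sketch. It assumes the usual convention for translated-and-rotated axes, namely that the old coordinates are obtained by rotating the new coordinates by \(\alpha\) and then shifting by \(O'\); whether that is the convention intended here is exactly the point in question.

sin_a, cos_a = 5 / 13, 12 / 13
xn, yn = 2, -3                      # coordinates of M in the new system
ax, ay = -1, 2                      # O', the new origin, in the old system
x_old = ax + xn * cos_a - yn * sin_a
y_old = ay + xn * sin_a + yn * cos_a
print(x_old, y_old)                 # prints 2.0 0.0 under this convention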
|
AMC I is just around the corner guys.....
Note by Xuming Liang 5 years, 8 months ago
I am looking forward to it. I did badly last year, but I am taking my revenge this time.
Couldn't have put it better.
By the way are you really 14? I think I saw a picture of you on Facebook and you look nothing like a 14 year old....:) no offence in any way.
@Xuming Liang – lol
@Ryan Soedjak – lol
This was me last year after 114 on both the 10A and 10B two years ago >:(
But it turned out fine last year :D
Oh gosh. Any tips?
Not me personally, last year's was a disaster for me, mostly due to lack of experience and knowledge. This year however, I'm ready for revenge. Speaking of last year's AMC, I discovered brilliant only a few days before the exam...a few paragraphs of how Brilliant have helped me.....
Good luck to everyone!
Last year? Wow. I got something like 102 on 10B. :/
I received scores ranging from 114-126 on the last 3 practice AMC 10s...
UGH STILL NOT ENOUGH TO MAKE AIME T_T
@Michael Diao – Questions that plague me:
What if I don't make AIME? How should I become better?
@Michael Diao – You are only 13 so you still have a long road ahead of you. I, on the other hand, do not....But then again I only do math solely for the enjoyment of problem solving and all the wonderful results behind certain topics, so I shouldn't worry too much.
Regarding the question of becoming better, I think it's observable to yourself. For instance, a few months ago I wasn't able to do problems confidently, and I guessed through a lot of problems without rigorous logic on Brilliant. Now, although still not up to standard, I feel like I've become a more well-rounded problem solver (in a lot of ways).
I will be taking the AMC 12, which have been getting progressively harder each year. during the exam, my plan is to get through at least the first 13 problems as fast as possible, if I could get that far. At that point, I would be satisfied and go on to see if there's any geometry or number theory or counting problems since those are my bests.
@Xuming Liang – You still have over 50 years to study and expand your knowledge. Math doesn't end at college.
@Xuming Liang – Wow! I have no strategy...
@Michael Diao – I took a few timed tests by myself too. For all of them I was able to get the first 15 correct with decent time left, I will take another one tomorrow to see if I'm really ready.
@Michael Diao – I got 132, 126, and 139.5 for the last three AMC10 practices...
be sure to minus your score by about 15-ish to get your real-time score otherwise you'll be overconfident like me last year
@Bob Yang – ahem i meant amc12 (harder!)
@Bob Yang – now i am just doing amc 10's mock because they are easier :(
last year i failed :( 105 on 10A
@Bob Yang – this year 12 is going to be terrible
@Bob Yang – Don't worry man, we will all go through this together :)
@Xuming Liang – thx
This is my first year taking the AMC 12. I bombed the AMC 10 last year, but I've been getting ~120 on the practice AMC 12s, so I think I'm safe.
Making USAMO is another matter, though. I got a 4 on the last practice AIME I took. Ugh.
I sympathize.
If I were to take the AIME instead of the AMC 12, I feel like I would do better in the AIME because we're given more time and the problems are a lot more like the ones on Brilliant here :) For the AMC, however and from personal experience, we only have a 75 min time pressure and you basically have to be "keen" and keep everything clear for each problem you encounter. On last year's AMC I missed a few easy questions(below #10) due to over-complicating some of the problems and that I simply just didn't fully understand what they are asking for.
There's almost always an elegant solution to each AMC problem(not so sure about this), so always look out for any shortcuts(this means thinking about several possible approaches and choose the best one to begin) while doing a problem. This might not make any sense yet but I will elaborate tomorrow...:)
I just took a practice AIME. I got a 7. :(
I took both the AMC 10 and 12 last year and failed. But it was my first time so... I've been doing timed tests and can get around 110 on the more recent tests for 12, so I think I'm pretty safe unless this year's tests are really hard.
Does anyone know where I could find more mock tests? I've nearly finished doing each of the real AMC 12 tests from the last 13 years (starting from 2000) for the second time, and I'm afraid I get my score because I simply memorized. :(
http://www.artofproblemsolving.com/Wiki/index.php/Mock_AMC
Do all of the AHSME on AoPS (starting from 1950). Only 8 of the years are incomplete.
Can anyone tell me what is AMC ?
American Mathematics Competitions
Link here: http://www.maa.org/math-competitions
Horribly!
(the past two weeks have been finals week, up to this friday. whoops.)
ikr
I sucked horribly last year. First time taking the test, I was so nervous and I had been told by so many people that it was ridiculously hard that I flopped 6th place in the school when I could have been 1st or 2nd. :(
|
The following equation, which relates the pH of an aqueous solution of an acid to the acid dissociation constant of the acid, is known as the Henderson-Hasselbalch equation.
\[ pH = pK_a + \log_{10} \dfrac{[\text{conjugate base}]}{[\text{weak acid}]}\]
The Henderson-Hasselbalch equation is derived from the definition of the acid dissociation constant as follows.
Consider the hypothetical compound \(HA\) in water.
\[ HA + H_2O \rightleftharpoons A^- + H_3O^+\]
The acid dissociation constant of \(HA\),
\[ K_a = \dfrac{[A^-][H_3O^+]}{[HA]}\]
\[ K_a \dfrac{[HA]}{[A^-]} = [H_3O^+]\]
Flip the equation around
\[ [H_3O^+] = K_a \dfrac{[HA]}{[A^-]} \]
Take the base-10 logarithm of both sides.
\[\log_{10}[H_3O^+] = \log_{10} K_a + \log_{10} \dfrac{[HA]}{[A^-]}\]
Multiply both sides of the equation by -1.
\[-\log_{10}[H_3O^+] = -\log_{10} K_a - \log_{10} \dfrac{[HA]}{[A^-]}\]
\[ pH = pK_a - \log_{10} \dfrac{[HA]}{[A^-]}\]
or
\[ pH = pK_a + \log_{10} \dfrac{[A^-]}{[HA]} \label{HH}\]
According to the Henderson-Hasselbalch equation, when the concentrations of the acid and the conjugate base are the same, i.e., when the acid is 50% dissociated, the \(pH\) of the solution is equal to the \(pK_a\) of the acid.
That is, when \([HA] = [A^-]\), then
\[\dfrac{[A^-]}{[HA]} = 1\]
via the Henderson-Hasselbalch approximation (Equation \(\ref{HH}\))
\[pH = pK_a + \log_{10} 1\]
\[ pH = pK_a\]
This relationship is used to determine the \(pK_a\) of compounds experimentally.
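For instance, a quick calculation along these lines (a minimal sketch; the \(pK_a\) and concentrations below are illustrative assumptions, not values from this page):

from math import log10

pKa = 4.76                # acetic acid, as an example
conc_base = 0.10          # [A-] in mol/L
conc_acid = 0.05          # [HA] in mol/L
pH = pKa + log10(conc_base / conc_acid)
print(round(pH, 2))       # about 5.06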
Advanced: The Henderson–Hasselbalch Equation is an Approximation
The Henderson-Hasselbalch equation (Equation \(\ref{HH}\)) is an approximation, with a certain region of validity. By its nature, it does not take into account the self-dissociation of water, which becomes increasingly important in dilute solutions. When concentrations reach somewhere around \(10^{-5}\ mol\ L^{-1}\) or lower, the true \(pH\) will deviate significantly from the value predicted by the HH equation. However, this is usually not the dominant source of error.
The reason the HH equation might produce poor predictions when calculating buffer \(pH\) is really because of an oft-made assumption which has nothing to do with the HH equation itself: the weak acid (or weak base) is assumed to be so weak that its ionization contributes almost no conjugate base (or conjugate acid) in comparison to the dissolution of the buffer salt. In other words, we assume the formal concentrations \(C_X\) of species in the buffer are equal to their actual concentrations \([X]\); we replace
\[pH=pK_a+\log_{10}\ \dfrac{[A^{-}]}{[HA]}\]
with the worse approximation
\[pH=pK_a+ \log_{10}\ \dfrac{C_{A^-}}{C_{HA}}.\]
This assumption is incorrect, but it is used to make the \(pH\) calculation much easier at the cost of accuracy. The resulting approximation breaks down in sufficiently dilute solution, and is already quite noticeable far before reaching water self-dissociation issues. It is also a noticeably poor approximation for weak acids/bases with relatively high ionization constants (say, \(K > 10^{-2}\)).
So how can we visualize the effect of diluting a buffer without approximations, and where do the approximations start to break down? Let us consider the simple case of solutions of a weak monoprotic acid \(\ce{HA}\) with acid dissociation constant \(K_a\) where the concentration of acid and conjugate base are formally equal. This can be done by titrating half of the weak acid with a strong monoprotic base (e.g. \(\ce{KOH}\)). This problem can be solved exactly (assuming all activity coefficients are equal to 1, which is generally a good approximation for solutions below about \(1\ mol\ L^{-1}\)). The equation relevant to this problem is the following (see this Module for the derivation of this equation):
\[[H^+]^3+ \left(K_a+\frac{C^o_BV_B}{V_A+V_B}\right)[H^+]^2+\left(\dfrac{C^o_BV_B}{V_A+V_B}K_a-\dfrac{C^o_AV_A}{V_A+V_B}K_a-k_w \right)[H^+]-K_ak_w=0\]
\(V_A\) is the volume of the weak acid solution being titrated and \(C^o_A\) is its formal initial concentration (before mixing and reacting with the base), while \(V_B\) is a variable volume of strong base solution added with formal initial concentration \(C^o_B\) (before mixing and reacting with the acid). The presence of the self-dissociation constant of water \(k_w\) shows that it is being taken into consideration.
We can now substitute values as desired to obtain a polynomial in \([H^+]\). I'll set up the volumes and concentrations arbitrarily so we can reach the acid and conjugate base formal concentrations of \(0.3\ mol\ L^{-1}\) as mentioned in the question. We can start off with \(100\ mL\) of weak acid solution at a concentration of \(0.9\ mol\ L^{-1}\) with a dissociation constant \(K_a=10^{-3}\) (this constant can be changed at will). We can reach the target concentrations of acid and conjugate base by adding \(50\ mL\) of strong base at the same concentration \(0.9\ mol\ L^{-1}\) (neutralizing half the original acid). You can check the resulting concentrations after mixing the solutions, if you wish.
Solving this polynomial, the resulting buffer solution containing formal concentrations \(C_{HA}=0.3\ mol\ L^{-1}\) and \(C_{A^{-}}=0.3\ mol\ L^{-1}\) has \([H^+]=9.93399\times 10^{-4}\ mol\ L^{-1}\). This results in \(pH=3.00288\), which is very close to the value predicted by the approximations made at the start (\(pH=pK_a+log\ 1=pK_a=3\)). For now, they work.
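For reference, a short numerical sketch of this calculation (numpy's polynomial root finder applied to the cubic above, with the same assumed values: \(K_a=10^{-3}\), \(k_w=10^{-14}\), \(100\ mL\) of \(0.9\ mol\ L^{-1}\) acid and \(50\ mL\) of \(0.9\ mol\ L^{-1}\) base):

import numpy as np

Ka, kw = 1e-3, 1e-14
CA, VA = 0.9, 0.100       # formal acid concentration (mol/L) and volume (L)
CB, VB = 0.9, 0.050       # formal base concentration (mol/L) and volume (L)

cb = CB * VB / (VA + VB)  # formal base concentration after mixing
ca = CA * VA / (VA + VB)  # formal acid concentration after mixing

coeffs = [1.0, Ka + cb, cb * Ka - ca * Ka - kw, -Ka * kw]
roots = np.roots(coeffs)
h = max(r.real for r in roots if r.real > 0 and abs(r.imag) < 1e-12)
print(h, -np.log10(h))    # roughly 9.93e-4 and a pH close to 3.003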
After looking at this first case, we shall investigate the effects of dilution by factors of 10. This can be performed simply by diluting both the initial weak acid and strong base concentrations by 10 (i.e., \(100\ mL\) of \(0.09\ mol\ L^{-1}\) \(\ce{HA}\) plus \(50\ mL\) of \(0.09\ mol\ L^{-1}\) \(\ce{KOH}\) result in a buffer with acid/conjugate base concentrations of \(0.03\ mol\ L^{-1}\), and so on). Before going into the calculations, it's quite simple to see an issue with the approximation in the second paragraph. Independent of the level of dilution, so long as the concentrations of acid and conjugate base remain equal, then according to the approximation, \(pH=pK_a=3\). However, consider the case when starting with a solution of \(\ce{HA}\) with initial concentration equal to \(9 \times 10^{-4}\ mol\ L^{-1}\). This is a weak monoprotic acid, so the initial solution must have \([H^+] < 9 \times 10^{-4}\ mol\ L^{-1}\) and thus \(pH > 3\). Clearly it's impossible to add any amount of strong base to this acid and get a buffer solution with \(pH=3\)! The approximations at the start have broken down at this point.
Now, for the calculations. To get the proton concentration, I replaced every value of 0.9 with 0.09, then 0.009, and so on. With the proton concentration determined, the actual concentrations \([HA]\) and \([A^{-}]\) are compared with the formal concentrations \(C_{HA}\) and \(C_{A^{-}}\). Note that they are almost equal at high concentration, but the error increases quite suddenly at lower concentrations. While the ratio \(\frac{C_{A^{-}}}{C_{HA}}\) is always equal to 1, the ratio \(\frac{[A^{-}]}{[HA]}\) rockets away.
\[\begin{array}{c|ccc|cc} \hline C_{HA} = C_{A^{-}} \ \ (mol\ L^{-1}) & [HA] \ \ (mol\ L^{-1}) & [A^{-}] \ \ (mol\ L^{-1}) & \frac{[A^{-}]}{[HA]} & [H^+] \ \ (mol\ L^{-1}) & pH \\ \hline 0.3 & 0.29901 & 0.30099 & 1.00664 & 9.93399\times 10^{-4} & 3.00288 \\ 0.03 & 0.029061 & 0.030939 & 1.06464 & 9.39282 \times 10^{-4} & 3.02720 \\ 0.003 & 0.00235421 & 0.0036458 & 1.54858 & 6.45751 \times 10^{-4} & 3.18993\\ 3\times 10^{-4} & 1\times 10^{-4} & 5\times 10^{-4} & 5 & 2 \times 10^{-4} & 3.69897 \\ 3\times 10^{-5} & 1.6539\times 10^{-6} & 5.83461\times 10^{-5} & 35.2778 & 2.83464 \times 10^{-5} & 4.54750 \\ 3\times 10^{-6} & 1.78596\times 10^{-8} & 5.98214\times 10^{-6} & 334.953 & 2.98549×10^{-6} & 5.52498 \\ 3\times 10^{-7} & 1.97992\times 10^{-10} & 5.99802\times 10^{-7} & 3029.42 & 3.30096×10^{-7} & 6.48136 \\ 3\times 10^{-8} & 6.96609\times 10^{-12} & 5.99939\times 10^{-8} & 8612.15 & 1.16115×10^{-7} & 6.93511 \\ 3\times 10^{-9} & 6.09004\times 10^{-13} & 5.99939\times 10^{-9} & 9851.15 & 1.01511×10^{-7} & 6.99349 \\ \hline \end{array} \nonumber\]
Reference: Henry N. Po and N. M. Senozan, J. Chem. Educ., 78 (11), November 2001.
|
The Formula for Taylor Series
We have computed power series representations for some functions, including the following.
$$\frac{1}{1-x}=\sum_{n=0}^{\infty}x^n, \qquad \frac{1}{1+x}=\sum_{n=0}^{\infty}(-1)^n x^n, \qquad \frac{1}{1+x^2}=\sum_{n=0}^{\infty}(-1)^n x^{2n}$$
All of these have radius of convergence $R=1$, which is a result of their geometric series origins.
Taylor's Theorem states that if $f(x)=\sum_{n=0}^{\infty}c_n(x-a)^n$ on an interval around $a$, then $c_n=\frac{f^{(n)}(a)}{n!}$. This says that if a function can be represented by a power series, its coefficients must be those in Taylor's Theorem. This formula works both ways: if we know the $n$-th derivative evaluated at $a$, we can figure out $c_n$; if we know $c_n$, we can figure out the $n$-th derivative evaluated at $a$. To use this theorem, we have the conventions that $0!=1$ and $f^{(0)}=f$.
Example: We consider the series representation for $\displaystyle f(x)=\frac{1}{1-x}$ (at the top of the page). (Notice that this series is centered around $a=0$.) By Taylor's Theorem, $c_nx^n=x^n$ since $x^n$ are the terms of our series. So $c_n=1$ for all $n$. On the other hand, also by Taylor's Theorem, $c_n=\frac{f^{(n)}(0)}{n!}$, so we must have $f^{(n)}(0)=n!$ for all $n$ here. Let's see if this is true. Since $f(x)=(1-x)^{-1}$, we have $f'(x)=(1-x)^{-2}$, $f''(x)=2(1-x)^{-3}$, and in general $f^{(n)}(x)=n!\,(1-x)^{-(n+1)}$, so indeed $f^{(n)}(0)=n!$ for all $n$. Warning: The coefficients $c_n$ do not contain the variable $x$, since the derivatives in the $c_n$ are evaluated at $a$.
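If you want to check this symbolically, here is a short sympy sketch (my own check, not part of the course page) confirming that the $n$-th derivative of $\frac{1}{1-x}$ at $0$ equals $n!$:

import sympy as sp

# check that the n-th derivative of 1/(1-x) evaluated at 0 equals n!
x = sp.symbols('x')
f = 1 / (1 - x)
for n in range(6):
    print(n, sp.diff(f, x, n).subs(x, 0), sp.factorial(n))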
The video will explain why Taylor's theorem works, in general.
|
Deep Neural Networks (DNNs) are known for requiring less feature engineering than classical machine learning algorithms. For example, convolutional networks learn the right convolution kernels to apply to an image on their own; there is no need for carefully handcrafted kernels.
However, a point common to all kinds of neural networks is the need for normalization. Normalizing is often done on the input, but it can also take place inside the network. In this article I'll try to describe what the literature says about this.
This article is not exhaustive but it tries to cover the major algorithms. If you feel I missed something important, tell me!
Normalizing the input
It is extremely common to normalize the input (lecun-98b), especially for computer vision tasks. Three normalization schemes are often seen:
Normalizing the pixel values between 0 and 1:
img /= 255.
Normalizing the pixel values between -1 and 1 (as Tensorflow does):
img /= 127.5
img -= 1.
Normalizing according to the dataset mean & standard deviation (as Torch does):
img /= 255.
mean = [0.485, 0.456, 0.406]  # Here it's ImageNet statistics
std = [0.229, 0.224, 0.225]
for i in range(3):  # Considering a CHW ordering (channel, height, width)
    img[i, :, :] -= mean[i]
    img[i, :, :] /= std[i]
Why is it recommended? Let’s take a neuron, where:
$$y = w \cdot x$$
The partial derivative of $y$ for $w$ that we use during backpropagation is:
$$\frac{\partial y}{\partial w} = X^T$$
The scale of the data has an effect on the magnitude of the gradient for the weights. If the gradient is big, you should reduce the learning rate. However, you usually have different gradient magnitudes within the same batch. Normalizing the image to smaller pixel values is a cheap price to pay for making it easier to tune an optimal learning rate for input images.
1. Batch Normalization
We’ve seen previously how to normalize the input, now let’s see a normalization inside the network.
(Ioffe & Szegedy, 2015) declared that DNN training was suffering from the internal covariate shift.
The authors describe it as:
[…] the distribution of each layer’s inputs changes during training, as the parameters of the previous layers change.
Their answer to this problem was to apply to the pre-activation a Batch Normalization (BN):
$$BN(x) = \gamma \frac{x - \mu_B}{\sigma_B} + \beta$$
$\mu_B$ and $\sigma_B$ are the mean and the standard deviation of the batch. $\gamma$ and $\beta$ are learned parameters.
The batch statistics are computed for a whole channel:
$\gamma$ and $\beta$ are essential because they enable the BN to represent the identity transform if needed. If it couldn’t, the resulting BN’s transformation (with a mean of 0 and a variance of 1) fed to a sigmoid non-linearity would be constrained to its linear regime.
While during training the mean and standard deviation are computed on the batch, during test time BN uses the whole dataset statistics using a moving average/std.
Batch Normalization has shown a considerable training acceleration for existing architectures and is now an almost de facto layer. Its weakness, however, is that it uses the batch statistics at training time: with small batches or with a non-i.i.d. dataset it shows weak performance. In addition, the difference between the training-time and test-time mean and std can be large, which can lead to a difference in performance between the two modes.
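As a rough numpy sketch of what a BN layer computes on an NCHW tensor (not any particular framework's implementation; gamma and beta are learned, and running_mean/running_var are numpy arrays updated in place for test time):

import numpy as np

def batch_norm(x, gamma, beta, running_mean, running_var,
               training=True, momentum=0.1, eps=1e-5):
    if training:
        mean = x.mean(axis=(0, 2, 3))              # batch statistics, per channel
        var = x.var(axis=(0, 2, 3))
        running_mean[:] = (1 - momentum) * running_mean + momentum * mean
        running_var[:] = (1 - momentum) * running_var + momentum * var
    else:
        mean, var = running_mean, running_var      # moving averages at test time
    x_hat = (x - mean[None, :, None, None]) / np.sqrt(var[None, :, None, None] + eps)
    return gamma[None, :, None, None] * x_hat + beta[None, :, None, None]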
1.1. Batch ReNormalization
(Ioffe, 2017)’s Batch Renormalization (BR) introduces an improvement over Batch Normalization.
BN uses the statistics ($\mu_B$ & $\sigma_B$) of the batch. BR introduces two new parameters $r$ & $d$ aiming to constrain the mean and std of BN, reducing the extreme difference when the batch size is small.
Ideally the normalization should be done with the instance’s statistic:
$$\hat{x} = \frac{x - \mu}{\sigma}$$
By choosing $r = \frac{\sigma_B}{\sigma}$ and $d = \frac{\mu_B - \mu}{\sigma}$:
$$\hat{x} = \frac{x - \mu}{\sigma} = \frac{x - \mu_B}{\sigma_B} \cdot r + d$$
The authors advise constraining the maximum absolute values of $r$ and $d$: at first to 1 and 0, so that it behaves like BN, and then gradually relaxing those bounds.
1.2. Internal Covariate Shift?
Ioffe & Szegedy argued that the changing distribution of the pre-activation hurt the training. While Batch Norm is widely used in SotA research, there is still controversy (Ali Rahimi's Test of Time talk) about what this algorithm is actually solving.
(Santurkar et al, 2018) refuted the Internal Covariate Shift influence. To do so, they compared three models: one baseline, one with BN, and one with random noise added after the normalization.
Because of the random noise, the activation's input is not normalized anymore and its distribution changes at every training step.
As you can see on the following figure, they found that the random shift of distribution didn’t produce extremely different results:
On the other hand, they found that Batch Normalization improved the Lipschitzness of the loss function. In simpler terms, the loss is smoother, and thus so is its gradient.
According to the authors:
Improved Lipschitzness of the gradients gives us confidence that when we take a larger step in a direction of a computed gradient, this gradient direction remains a fairly accurate estimate of the actual gradient direction after taking that step. It thus enables any (gradient–based) training algorithm to take larger steps without the danger of running into a sudden change of the loss landscape such as flat region (corresponding to vanishing gradient) or sharp local minimum (causing exploding gradients).
The authors also found that replacing BN by an $l_1$, $l_2$, or $l_{\infty}$ normalization led to similar results.
2. Computing the mean and variance differently
Algorithms similar to Batch Norm have been developed where the mean & variance are computed differently.
2.1. Layer Normalization
(Ba et al, 2016)'s layer norm (LN) normalizes each image of a batch independently using all the channels. The goal is to have constant performance with a large batch or a single image.
It's used in recurrent neural networks, where the number of time steps can differ between tasks.
While all time steps share the same weights, each should have its own statistic. BN needs previously computed batch statistics, which would be impossible if there are more time steps at test time than training time. LN is time steps independent by simply computing the statistics on the incoming input.
2.2. Instance Normalization
(Ulyanov et al, 2016)'s instance norm (IN) normalizes each channel of each batch's image independently.
The goal is to normalize the contrast of the content image. According to the authors, only the style image's contrast should matter.
2.3. Group Normalization
According to (Wu and He, 2018), convolution filters tend to group in related tasks (frequency, shapes, illumination, textures).
They normalize each image in a batch independently, so the model is batch-size independent. Moreover, they normalize the channels within groups whose membership is defined arbitrarily (usually 32 channels per group). All filters of the same group should specialize in the same task.
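As a rough comparison of where the statistics are computed, here are numpy sketches on an NCHW tensor (learnable scale and shift omitted; group_norm assumes the channel count is divisible by `groups`, and 32 is the usual default mentioned above):

import numpy as np

def layer_norm(x, eps=1e-5):
    mean = x.mean(axis=(1, 2, 3), keepdims=True)   # per image, over all channels
    var = x.var(axis=(1, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    mean = x.mean(axis=(2, 3), keepdims=True)      # per image and per channel
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def group_norm(x, groups=32, eps=1e-5):
    n, c, h, w = x.shape
    g = x.reshape(n, groups, c // groups, h, w)    # per image and per group of channels
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)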
3. Normalization on the network
The previously shown methods normalize the inputs; there are also methods where the normalization happens in the network rather than on the data.
3.1. Weight Normalization
(Salimans and Kingma, 2016) found that decoupling the length of the weight vectors from their direction accelerated the training.
A fully connected layer does the following operation:
$$y = \phi(W \cdot x + b)$$
In weight normalization, the weight vector is expressed in the following way:
$$W = \frac{g}{\Vert V \Vert}V$$
$g$ and $V$ being respectively a learnable scalar and a learnable matrix.
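A sketch of the reparametrization in the scalar form given above (in practice $g$ is usually one scalar per output unit, but the idea is the same):

import numpy as np

def weight_norm_dense(x, V, g, b):
    W = (g / np.linalg.norm(V)) * V   # direction taken from V, length from g
    return W @ x + b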
3.2. Cosine Normalization
(Luo et al, 2017) normalizes both the weights and the input by replacing the classic dot product by a cosine similarity:
$$y = \phi(\frac{W \cdot X}{\Vert W \Vert \Vert X \Vert})$$
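A one-line sketch of the corresponding pre-activation (the small eps guard is my addition, not part of the formula above):

import numpy as np

def cosine_norm(x, w, eps=1e-8):
    return np.dot(w, x) / (np.linalg.norm(w) * np.linalg.norm(x) + eps)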
4. Conclusion
Batch normalization (BN) is still the most represented method among new architectures despite its defect: the dependence on the batch size. Batch renormalization (BR) fixes this problem by adding two new parameters to approximate instance statistics instead of batch statistics.
Layer norm (LN), instance norm (IN), and group norm (GN), are similar to BN. Their difference lie in the way statistics are computed.
LN was conceived for RNNs, IN for style transfer, and GN for CNNs.
Finally, weight norm and cosine norm normalize the network's weights instead of the input data.
|
Equilateral Triangle on Three Lines What is this about? Problem
Given three straight lines (denoted below by two points $AB,$ $CD,$ $EF$).
Construct an equilateral triangle with vertices one per line.
Construction
Choose an arbitrary point on one of the lines, say $X$ on $EF.$ Rotate $CD$ around $X$ by $60^{\circ}$ into $C'D'.$ Let $Y'$ be the intersection of $C'D'$ with $AB$ and $Y$ the point that was mapped onto $Y'$ by the rotation.
Then $\Delta XYY'$ is equilateral and $X\in EF,$ $Y'\in AB,$ and $Y\in CD.$
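As a numerical sanity check of the construction, here is a hedged sketch with arbitrarily chosen lines, representing points by complex numbers:

import numpy as np

def rot(p, center, theta):
    # rotate the point p about center by the angle theta
    return center + (p - center) * np.exp(1j * theta)

def line_intersect(p1, p2, q1, q2):
    # intersection of the lines p1p2 and q1q2
    d1, d2 = p2 - p1, q2 - q1
    M = np.array([[d1.real, -d2.real], [d1.imag, -d2.imag]])
    b = np.array([(q1 - p1).real, (q1 - p1).imag])
    t, s = np.linalg.solve(M, b)
    return p1 + t * d1

A, B = 0 + 0j, 4 + 1j               # arbitrary sample lines AB, CD, EF
C, D = 1 + 3j, 5 + 2j
E, F = -1 - 2j, 3 - 3j

X = E + 0.4 * (F - E)               # an arbitrary point X on EF
C1, D1 = rot(C, X, np.pi / 3), rot(D, X, np.pi / 3)   # rotate CD about X by 60 degrees
Yp = line_intersect(A, B, C1, D1)   # Y' = intersection of the rotated line with AB
Y = rot(Yp, X, -np.pi / 3)          # its pre-image Y lies on CD
print(abs(X - Y), abs(X - Yp), abs(Y - Yp))   # the three side lengths agree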
In general, the solution is not unique, e.g.,
Acknowledgment
|
\(\sum_{i=1}^{n^2} i = a^2\)
\((n,a)\in N\), where \(N\) denotes the natural numbers.
Does there exist only one such \(a\) satisfying the above conditions?
Note by Akash Shukla 3 years, 4 months ago
No, there are actually infinitely many \(a\). The Pell equation \(x^2 - 2y^2 = -1\) has infinitely many solutions, e.g., \((1,1), (7,5), (41, 29)\). So, we can have \(\sum_{i = 1}^{49} i = \frac{49\cdot 50}{2} = 35^2\).
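A quick brute-force check of this claim (the search bound is arbitrary):

from math import isqrt

# for which n is 1 + 2 + ... + n^2 = n^2 (n^2 + 1) / 2 a perfect square?
for n in range(1, 2000):
    total = n * n * (n * n + 1) // 2
    a = isqrt(total)
    if a * a == total:
        print(n, a)     # (1, 1), (7, 35), (41, 1189), ...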
Where \(x=n\) and \(y=\frac{a}{n}\).
Yes, I wanted him to see how the two were related :P
@Ameya Daigavane – Oh. Too bad I spoilt it then. :(
Yes I got the above expression, but couldn't find the other one.
Other one?
@Ameya Daigavane – As there are infinitely many \(a\), I can't find the other values of \(a\).
@Akash Shukla – If you look at Deeparaj's comment, or after some simple manipulations, you'll see \(41 \cdot 29 = 1189\) is another value of \(a\).
@Ameya Daigavane – Yes, I got this. Thank you so much. It has a wonderful connection with the Pell equation. How did you know that the Pell equation and my question are related?
@Akash Shukla – \(\frac{n^2 + 1}{2}\) had to be a perfect square.
@Ameya Daigavane – Oh! Yes. You mean \(\dfrac{x^2 + 1}{2} = y^2\), a perfect square.
@Akash Shukla – Yes, I changed the variables, as shown in the other comments.
|
I'm trying to get the AdS solution to the circular wilson loop. The standard AdS metric is:
$ds^2 = \frac{L^2}{z^2}(\eta_{\mu \nu} dx^{\mu} dx^{\nu} + dz^2)$
If I take the circle of radius R at the x1,x2 plane I can choose polar coordinates:
$x_1 = R \cos(\theta)$, $x_2 = R \sin(\theta)$
$ds^2 = \frac{L^2}{z^2}(-dt^2 + dr^2 + r^2d\theta^2 + dx3^2 + dz^2)$
Now i want to find the area that minimizes the Nambu-Goto action:
$S_{NG} = \int d\sigma d\tau \sqrt{g}$
Where g is the usual pullback: $g_{ab} = G_{\mu \nu} \partial_a X^{\mu} \partial_b X^{\nu}$. Now my fields are $X^{\mu} = (t,r,\theta,x3,z(r))$ and I choose the gauge where: $\sigma = r$, $\tau = \theta$ from where i get:
$S_{NG} = \int dr d\theta \frac{L^2 r}{z^2} \sqrt{1 + z'^2}$
From where I see that the Hamiltonian is conserved and we get:
$H = \frac{-L^2 r}{z^2} \frac{1}{\sqrt{1+z'^2}}$
But the answer is $S_{NG}= \sqrt{\lambda} (\frac{R}{z_0}-1)$ and I don't know where the problem is.
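For what it's worth, a quick symbolic check of the induced-metric determinant used above (a sketch in sympy; $z$ and $z'$ are treated as plain positive symbols purely so the square root simplifies):

import sympy as sp

L, r, theta = sp.symbols('L r theta', positive=True)
z, zp = sp.symbols('z zprime', positive=True)     # z(r) and z'(r) as symbols

X_dr = sp.Matrix([0, 1, 0, 0, zp])                # d/dr of (t, r, theta, x3, z(r))
X_dth = sp.Matrix([0, 0, 1, 0, 0])                # d/dtheta of the embedding
G = (L / z)**2 * sp.diag(-1, 1, r**2, 1, 1)       # AdS metric in these coordinates

g = sp.Matrix([[(X_dr.T * G * X_dr)[0],  (X_dr.T * G * X_dth)[0]],
               [(X_dth.T * G * X_dr)[0], (X_dth.T * G * X_dth)[0]]])
print(sp.sqrt(g.det()).simplify())                # L**2*r*sqrt(zprime**2 + 1)/z**2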
|
Given an isolated $N$-particle system with only two body interaction, that is $$H=\sum_{i=1}^N\frac{\mathbf{p}_i^2}{2m}+\sum_{i<j}V(\mathbf{r}_i-\mathbf{r}_j)$$
In the thermodynamic limit, that is, $N\gg 1$ with $N/V=$ constant, it seems that not every two-body interaction can make the system approach thermal equilibrium automatically. For example, if the interaction is an inverse-square attractive force, we know the system cannot approach thermal equilibrium.
Although there is Boltzmann's H-theorem to derive the second law of thermodynamics, it relies on the Boltzmann equation, which is derived from Liouville's equation under the approximation of low density and short-range interactions.
My question:
Does this mean that any isolated system with low density and short-range interactions approaches thermal equilibrium automatically? If not, what is a counterexample?
For an isolated system with long-range interactions or high density, what are the necessary and sufficient conditions for the system to approach thermal equilibrium automatically? What about the Coulomb interaction in a plasma (i.e., equal numbers of positive and negative charges)?
How does one prove rigorously that a purely self-gravitating system cannot approach equilibrium? I have only heard the hand-waving argument that gravity tends to make matter clump, but I have never seen a rigorous proof.
I know there is the maximal-entropy postulate for the microcanonical ensemble. I just want to find the range of applicability of this postulate of equilibrium statistical mechanics. I have always been curious about the above questions, but I never saw them discussed in any textbook of statistical mechanics. You can also cite literature in which I can find the answer.
|
Astrid the astronaut is floating in a grid. Each time she pushes off she keeps gliding until she collides with a solid wall, marked by a thicker line. From such a wall she can propel herself either parallel or perpendicular to the wall, but always travelling directly \(\leftarrow, \rightarrow, \uparrow, \downarrow\). Floating out of the grid means death.
In this grid, Astrid can reach square Y from square ✔. But if she starts from square ✘ there is no wall to stop her and she will float past Y and out of the grid.
In this grid, from square X Astrid can float to three different squares with one push (each is marked with an *). Push \(\leftarrow\) is not possible from X due to the solid wall to the left. From X it takes three pushes to stop safely at square Y, namely \(\downarrow, \rightarrow, \uparrow\). The sequence \(\uparrow, \rightarrow\) would have Astrid float past Y and out of the grid.
Question:
In the following grid, what is the least number of pushes that Astrid can make to safely travel from X to Y?
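Since the grids themselves are only given as figures, here is just a hedged sketch of how one could compute the answer for any such grid: encode which sides of each cell carry a solid wall (an assumed encoding, with each wall recorded on both cells it touches) and run a breadth-first search in which every edge is one push.

from collections import deque

# walls[r][c] is the set of sides 'L', 'R', 'U', 'D' of cell (r, c) with a solid wall
DIRS = {'L': (0, -1), 'R': (0, 1), 'U': (-1, 0), 'D': (1, 0)}

def glide(pos, d, rows, cols, walls):
    # keep moving in direction d until a wall stops Astrid; None means she floats out
    dr, dc = DIRS[d]
    r, c = pos
    while True:
        if d in walls[r][c]:
            return (r, c)
        nr, nc = r + dr, c + dc
        if not (0 <= nr < rows and 0 <= nc < cols):
            return None
        r, c = nr, nc

def min_pushes(start, goal, rows, cols, walls):
    # breadth-first search over stopping cells; each edge is one push
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        pos, pushes = queue.popleft()
        if pos == goal:
            return pushes
        for d in DIRS:
            nxt = glide(pos, d, rows, cols, walls)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, pushes + 1))
    return None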
|
Global stability for a class of functional differential equations (Application to Nicholson's blowflies and Mackey-Glass models)
Département de Mathématiques, Faculté des Sciences, Université de Tlemcen, Laboratoire d'Analyse Non Linéaire et Mathématiques Appliquées, Tlemcen, BP 119, 13000, Algeria
$ x'(t)=-f(x(t))+\int_{0}^{\tau}h(a)g(x(t-a))\,da.$ Mathematics Subject Classification: Primary: 34K20, 37L15; Secondary: 92B05. Citation: Tarik Mohammed Touaoula. Global stability for a class of functional differential equations (Application to Nicholson's blowflies and Mackey-Glass models). Discrete & Continuous Dynamical Systems - A, 2018, 38 (9): 4391-4419. doi: 10.3934/dcds.2018191
|
I want to count in how many ways three dice can sum to a given number without brute forcing it. In fact, I would like to do it using generating functions and without having to expand out the product.
To do this, I have thought of making a differential equation out of $y=(x+x^2+x^3+x^4+x^5+x^6)^3$ and then solving the equation through the use of power series, or getting the $n$th Taylor coefficient from the equation. What I attempted was: $$y=(x+x^2+x^3+x^4+x^5+x^6)^3$$ $$y'=3y^{2/3}(1+2x+3x^2+4x^3+5x^4+6x^5)$$ Then, by setting $x=0$ we see $y(0)=0$ and $y'(0)=0$, which is what I want; however, when trying to obtain $y''(0)$ by differentiating both sides, I get division by $0$ on the RHS. Am I making a mistake in the setup, or is calculating the expression through the use of a differential equation impossible?
I want to count in how many ways three dice can sum to a given number without brute forcing it. In fact, I would like to do it using generating functions and without having to expand out the product.
Here is an alternate approach. We could at first transform $y(x)$ so that an expansion
after that becomes less cumbersome. Maybe this variant is also useful for your needs.
It is convenient to use the
coefficient of operator $[x^n]$ to denote the coefficient of $x^n$ of a series. We also use Iverson brackets \begin{align*}[[P(x)]]=\begin{cases}1&\qquad P(x) \ \text{ true}\\0&\qquad P(x) \ \text{ false}\end{cases}\end{align*}This way we can treat multiple cases in one expression.
We obtain \begin{align*} [x^n]y(x)&=[x^n](x+x^2+x^3+x^4+x^5+x^6)^3\\ &=[x^n]x^3(1+x+x^2+x^3+x^4+x^5)^3\\ &=[x^{n-3}][[n\geq 3]]\left(\frac{1-x^6}{1-x}\right)^3\tag{1}\\ &=[x^{n-3}][[n\geq 3]](1-3x^6+3x^{12}-x^{18})\sum_{j=0}^\infty\binom{-3}{j}(-x)^j\tag{2}\\ &=\left([x^{n-3}][[n\geq 3]]-3[x^{n-9}][[n\geq 9]]\right.\\ &\qquad\quad\left.+3[x^{n-15}][[n\geq 15]]-[x^{n-21}][[n\geq 21]]\right) \sum_{j=0}^\infty\binom{j+2}{2}x^j\tag{3}\\ &=\binom{n-1}{2}[[n\geq 3]]-3\binom{n-7}{2}[[n\geq 9]]\\ &\qquad\quad+3\binom{n-13}{2}[[n\geq 15]]-\binom{n-19}{2}[[n\geq 21]]\tag{4} \end{align*}
Comment:
In (1) we apply the coefficient-of rule \begin{align*} [x^p]x^qA(x)=[x^{p-q}]A(x) \end{align*} and we use the formula for the finite geometric series. Since there is no contribution to the coefficient of $x^n$ if $n<3$, we respect this by using $[[n\geq 3]]$.
In (2) we expand the binomial and we also expand $\frac{1}{(1-x)^3}$ using the binomial series expansion.
In (3) we use the linearity of the coefficient-of operator and we use the binomial identity \begin{align*} \binom{-p}{q}=\binom{p+q-1}{q}(-1)^q=\binom{p+q-1}{p-1}(-1)^q \end{align*}
In (4) we select the coefficient of $x^{n-k}, k\in\{3,9,15,21\}$.
Note: The usage of the Iverson brackets covers the general case. If we need to calculate a specific case only, the calculation becomes even more straight forward.
Example:$[x^{10}]y(x)$
We obtain \begin{align*} [x^{10}]y(x)&=[x^{10}](x+x^2+x^3+x^4+x^5+x^6)^3\\ &=[x^7]\left(\frac{1-x^6}{1-x}\right)^3\\\ &=[x^7](1-3x^6)\sum_{j=0}^\infty\binom{-3}{j}(-x)^j\\ &=\left([x^7]-3[x]\right)\sum_{j=0}^\infty\binom{j+2}{2}x^j\\ &=\binom{9}{2}-3\binom{3}{2}\\ &=27 \end{align*}
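A small numerical check of the result above by direct polynomial multiplication (no libraries assumed):

die = [0] + [1] * 6                 # coefficients of x + x^2 + ... + x^6

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

y = poly_mul(poly_mul(die, die), die)
print(y[10])                        # 27, the number of ways three dice sum to 10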
|
I'd like to offer a heuristic partial answer to a more general question... which relates to some of the discussion in the comments.
I had to generalize the definition to make my argument work, so that $p$ is a $\textit{survivor of order}$ $m$ if there is a set of primes $\{q_1, \dots, q_m\}$ such that for every $k \le m$, the concatenation of primes $$ q_k \ q_{k-1} \ \dots \ q_2 \ q_1 \ p $$ is prime.
A $\textit{survivor of order}$ $\infty$ is a prime $p$ along with an infinite set of primes $\{q_1, q_2, q_3, \dots \}$ such that for every $m$, the concatenation of primes $$ q_m \ q_{m-1} \ \dots \ q_2 \ q_1 \ p $$ is prime.
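To make the definition concrete, here is a small randomized search (a sketch using sympy; the starting prime 3 and the 4-digit cap on the $q_i$ are arbitrary choices) that builds a chain witnessing a survivor of small order:

import random
from sympy import isprime, primerange

def survivor_chain(p, m, digit_cap=4):
    # repeatedly prepend a prime q so that every partial concatenation stays prime
    chain, current = [], str(p)
    candidates = list(primerange(2, 10**digit_cap))
    for _ in range(m):
        random.shuffle(candidates)
        for q in candidates:
            if isprime(int(str(q) + current)):
                chain.append(q)
                current = str(q) + current
                break
        else:
            return None          # the greedy search got stuck at this level
    return chain, int(current)

print(survivor_chain(3, 5))      # a chain q_1, ..., q_5 witnessing that 3 is a survivor of order 5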
This is the outline of my argument:
Part 1: Heuristically, it is very likely that for any $k \in \mathbb{N}$, and $n \in \mathbb{N}$ with $n > k$, there is an $n$-digit prime which is the concatenation of $k$ many primes.
Part 2: Using the Heuristic Lemma from Part 1, construct a finitely-branching tree $T$ of height $\omega$ (an $\omega$-tree) whose paths correspond to primes which are concatenations of primes. Then use Kőnig's Lemma to show that $T$ must have an infinite path, i.e. a survivor of order $\infty$.
PART 1
A MSE question/answer shows that for every $n \in \mathbb{N}$, there is a prime number with $n$ digits. One of the answers provides more intuition regarding the question:"The number of $n$-digit numbers increases much faster than the density of primes decreases so the number of $n$-digit primes increases rapidly as $n$ increases".
Let $n \in \mathbb{N}$. Define ${Q_n}_1$ to be the number of $n$ digit primes, and define ${Q_n}_2$ to be the number of ways to concatenate primes to make an $n$-digit number.
First, I will estimate ${Q_n}_1$: the number of $n$-digit primes.
Let $n \in \mathbb{N}$. Let $k$ be an $n$-digit number. Then $k < 10^n$, so a conservative estimate of the likelihood that $k$ is prime is $$ \frac{1}{\ln(10^n)}. $$
Since there are $9^n$ many $n$-digit numbers, there are approximately $$ \frac{9^n}{\ln(10^n)} $$
many $n$-digit numbers which are prime.
Thus, the estimate for ${Q_n}_1$ is $\frac{9^n}{\ln(10^n)}$.
Here is a table which gives the estimated values versus the real values of numbers of primes for a given length.
\begin{array}{ | c | c | c | c |}\hline\mbox{ Digits }& \mbox{ Formula } & \mbox{ Estimate of number of primes } & \mbox{ Actual }\\\hline1 & \frac{9}{\ln(10)} & 3.91 & 4 \\\hline2 & \frac{9^2}{\ln(10^2)} & 17.59 & 21 \\\hline3 & \frac{9^3}{\ln(10^3)} & 105.53 & 142 \\\hline4 & \frac{9^4}{\ln(10^4)} & 712.35 & 1061 \\\hline\end{array}
Next, I will estimate the number of ways to write an $n$-digit number as a concatenation of primes.
Let $n \in \mathbb{N}$. I'll assume $n$ is odd to make the calculation here slightly simpler. I'm going to estimate the number of ways to write $n$ as a concatenation of two primes, and thus the estimate for the number of ways to write $n$ as a concatenation of more primes will be greater.
For any $n \in \mathbb{N}$ there are $n-1$ many ways to see an $n$-digit number as a concatenation of two numbers. For example the 5-digit number 75319 can be seen as: 7-5319, 75-319, 753-19, or 7531-9 (this is a random example; in the work below each part of the concatenation is a prime number).
Let me show how to estimate the number of ways to write a 5-digit number as a concatenation of primes, then the reader can see how I got the estimate for an $n$-digit number.
For a 5-digit number, the first concatenation type is a single digit number followed by a 4-digit number.
There are 4 single digit prime numbers, and there are approximately $\frac{9^4}{\ln(10^4)}$ primes which are 4-digits long.
Thus there are approximately $\frac{4 \cdot 9^4}{\ln(10^4)}$ many 5-digit numbers which are of the first concatenation type.
There are just as many of the concatenation type which is a 4-digit number followed by a single digit number.
For the concatenation type of a 5-digit number which is a 2-digit number followed by a 3-digit number, there are approximately $\frac{9^2 \cdot 9^3}{\ln(10^2) \cdot \ln(10^3)}$ many 5-digit numbers of this concatenation type.
There are the same amount of 5-digit numbers which are of the concatenation type which is a 3-digit number followed by a 2-digit number.
Thus, the total estimate for the number of ways to write a 5-digit number as a concatenation of two primes is
$$ 2 \ \Bigg( \frac{4 \cdot 9^4}{\ln(10^4)} + \frac{9^2 \cdot 9^3}{\ln(10^2) \cdot \ln(10^3)} \Bigg) = 9411.$$
Working out the analogous argument gives the following estimate for the number of ways to write an $n$-digit number as a concatenation of two primes: $$ 2 \Bigg( \frac{4 \cdot 9^{n-1}}{\ln (10^{n-1})} + \frac{9^{n-2} \cdot 9^2}{\ln(10^{n-2})\cdot \ln(10^2) } + \dots + \frac{9^{(n+1)/2} \cdot 9^{(n-1)/2}}{\ln(10^{(n+1)/2}) \cdot \ln(10^{(n-1)/2})} \Bigg) .$$
Thus, the estimate for ${Q_n}_2$ must be greater than the above number. Thus heuristically, ${Q_n}_2 > {Q_n}_1$. That is, the number of ways to write an $n$-digit number as a concatenation of $k$ many primes is greater than the number of $n$-digit primes. So, it is likely that for a given $n$ and $k < n$, there is an $n$-digit number which is both prime and a concatenation of primes.
PART 2
If it is true that for every $n \in \mathbb{N}$ and $k < n$, there is an $n$-digit prime which is a concatenation of $k$ many primes, then I can imagine constructing an $\omega$-tree $T$ whose paths correspond to primes which are concatenations of primes. By Kőnig's lemma $T$ must have an infinite path, and thus there must be a survivor of order $\infty$.
I'd like to expand here to describe $T$:
1) $ t \in T$ if $t$ is prime and a concatenation of primes
2) $s < t$ in $T$ if $t = q^{\frown}s$ for some prime $q$, i.e., $t$ is obtained from $s$ by prepending a prime
Let us order $T$ with a lexicographical ordering of the prime "words" created.
First, we can see that $T$ is a tree. Then, we can see that the tree is finitely branching via the ordering since there are finitely many primes for each number of digits. Finally, $T$ has nodes at every finite level by the Heuristic Lemma of Part 1, and thus $T$ has height $\omega$.
If there is no prime survivor of order $\infty$, then $T$ is an $\aleph_0$-Aronszajn tree (an infinite, finitely branching tree of height $\omega$ with no infinite branch), which is impossible by Kőnig's lemma. Thus, there must be a prime survivor of order $\infty$.
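As a purely computational sanity check of the tree picture (not a proof of anything, and my own sketch using sympy), one can greedily extend a starting prime by prepending small primes and testing primality of the concatenation:

```python
from sympy import isprime, primerange

def extensions(word, bound=1000):
    """Primes q < bound whose decimal concatenation q·word is again prime."""
    return [q for q in primerange(2, bound) if isprime(int(str(q) + word))]

word = "3"          # starting prime p
for _ in range(6):  # try to build a survivor of order 6
    cands = extensions(word)
    if not cands:
        print("stuck at", word)
        break
    word = str(cands[0]) + word   # prepend the smallest workable prime
    print(word, isprime(int(word)))
```

If the greedy choice ever gets stuck, one can backtrack to a different candidate or raise the bound on the primes tried.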
|
To get the area between the polar curve $r=f(\theta)$ and the polar curve $r=g(\theta)$, we just subtract the area inside the inner curve from the area inside the outer curve. If $f(\theta) \ge g(\theta)$, this means $$\frac{1}{2}\int_a^b \left[f(\theta)^2 - g(\theta)^2\right]\, d\theta.$$Note that this is NOT $\frac{1}{2}\int_a^b [f(\theta)-g(\theta)]^2 d\theta$!! You first square and then subtract, not the other way around.
As with most "area between two curves" problems, the tricky thing is figuring out the beginning and ending angles. This is typically where $f(\theta)=g(\theta)$.
In the following video, we compute the area inside the cardioid $r=1+\sin(\theta)$ and outside the circle $r=\frac{1}{2}$.
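If you want to check such a computation numerically (a sketch only; it assumes the curves intersect where $1+\sin\theta=\tfrac12$, i.e. at $\theta=-\pi/6$ and $\theta=7\pi/6$), scipy's quadrature does the job:

```python
import numpy as np
from scipy.integrate import quad

f = lambda t: 1 + np.sin(t)   # cardioid
g = lambda t: 0.5             # circle

# The cardioid is the outer curve for theta in (-pi/6, 7*pi/6).
area, _ = quad(lambda t: 0.5 * (f(t)**2 - g(t)**2), -np.pi/6, 7*np.pi/6)
print(area)   # approximately 4.13
```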
|
The Annals of Statistics Ann. Statist. Volume 22, Number 1 (1994), 1-20. The Order of the Remainder in Derivatives of Composition and Inverse Operators for $p$-Variation Norms Abstract
Many statisticians have adopted compact differentiability since Reeds showed in 1976 that it holds (while Fréchet differentiability fails) in the supremum (sup) norm on the real line for the inverse operator and for the composition operator $(F,G) \mapsto F \circ G$ with respect to $F$. However, these operators are Fréchet differentiable with respect to $p$-variation norms, which for $p > 2$ share the good probabilistic properties of the sup norm, uniformly over all distributions on the line. The remainders in these differentiations are of order $\| \cdot \|^\gamma$ for $\gamma > 1$. In a range of cases $p$-variation norms give the largest possible values of $\gamma$ on spaces containing empirical distribution functions, for both the inverse and composition operators. Compact differentiability in the sup norm cannot provide such remainder bounds since, over some compact sets, differentiability holds arbitrarily slowly.
Article information Source Ann. Statist., Volume 22, Number 1 (1994), 1-20. Dates First available in Project Euclid: 11 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aos/1176325354 Digital Object Identifier doi:10.1214/aos/1176325354 Mathematical Reviews number (MathSciNet) MR1272072 Zentralblatt MATH identifier 0816.62039 JSTOR links.jstor.org Subjects Primary: 62G30: Order statistics; empirical distribution functions Secondary: 58C20: Differentiation theory (Gateaux, Fréchet, etc.) [See also 26Exx, 46G05] 26A45: Functions of bounded variation, generalizations 60F17: Functional limit theorems; invariance principles Citation
Dudley, R. M. The Order of the Remainder in Derivatives of Composition and Inverse Operators for $p$-Variation Norms. Ann. Statist. 22 (1994), no. 1, 1--20. doi:10.1214/aos/1176325354. https://projecteuclid.org/euclid.aos/1176325354
|
Proceedings of the Centre for Mathematics and its Applications Proc. Centre Math. Appl. The AMSI-ANU workshop on spectral theory and harmonic analysis. Andrew Hassell, Alan McIntosh and Robert Taggart, eds. Proceedings of the Centre for Mathematics and its Applications, v. 44. (Canberra AUS: Centre for Mathematics and its Applications, Mathematical Sciences Institute, The Australian National University, 2010), 105 - 114 A Maximal Theorem for Holomorphic Semigroups on Vector-Valued Spaces Abstract
Suppose that $1 \lt p \leq \infty$, $(\Omega, \mu)$ is a $\sigma$-finite measure space and $E$ is a closed subspace of a Lebesgue-Bochner space $L^p(\Omega; X)$, consisting of functions on $\Omega$ that take their values in some complex Banach space $X$. Suppose also that $- A$ is injective and generates a bounded holomorphic semigroup $\{T_z\}$ on $E$. If $0 \lt \alpha \lt 1$ and $f$ belongs to the domain of $A^\alpha$ then the maximal function $\sup_z \|T_zf\|_X$, where the supremum is taken over any given sector contained in the sector of holomorphy, belongs to $L^p$. A similar result holds for generators that are not injective. This extends earlier work of Blower and Doust.
Article information Source The AMSI-ANU workshop on spectral theory and harmonic analysis. Andrew Hassell, Alan McIntosh and Robert Taggart, eds. Proceedings of the Centre for Mathematics and its Applications, v. 44. (Canberra AUS: Centre for Mathematics and its Applications, Mathematical Sciences Institute, The Australian National University, 2010), 105-114 Dates First available in Project Euclid: 18 November 2014 Permanent link to this document https://projecteuclid.org/ euclid.pcma/1416320873 Mathematical Reviews number (MathSciNet) MR2655388 Zentralblatt MATH identifier 1235.47040 Citation
Blower, Gordon; Doust, Ian; Taggart, Robert J. A Maximal Theorem for Holomorphic Semigroups on Vector-Valued Spaces. The AMSI–ANU Workshop on Spectral Theory and Harmonic Analysis, 105--114, Centre for Mathematics and its Applications, Mathematical Sciences Institute, The Australian National University, Canberra AUS, 2010. https://projecteuclid.org/euclid.pcma/1416320873
|
On the LUQ decomposition
The algorithm implemented in luq (see reference given below) computes bases for the left/right null spaces of a sparse matrix $A$. Unfortunately, as far as I can tell, there seems to be no thorough discussion of this particular algorithm in the literature. In place of a reference, let us clarify how/why it works and test it a bit.
The luq routine inputs an $m$-by-$n$ matrix $A$ and outputs an $m$-by-$m$ invertible matrix $L$, an $n$-by-$n$ invertible matrix $Q$, and an $m$-by-$n$ upper trapezoidal matrix $U$ such that: (i) $A=LUQ$ and (ii) the pivot-less columns/rows of $U$ are zero vectors. For example, $$\underbrace{\begin{pmatrix} 1 & 1 \\1 & 1 \end{pmatrix}}_A = \underbrace{\begin{pmatrix}1 & 0 \\1 & 1\end{pmatrix}}_L \underbrace{\begin{pmatrix}1 & 0 \\0 & 0\end{pmatrix}}_U \underbrace{\begin{pmatrix}1 & 1 \\0 & 1\end{pmatrix}}_Q$$
Point (ii) allows one to construct bases for the left/right null spaces of $A$.
Bases for Left/Right Null Spaces of $A$
Let $r = \operatorname{Rank}(A)$. Suppose we can compute the exact $LUQ$ decomposition of $A$ as described above. Then,
The $n-r$ columns of $Q^{-1}$ corresponding to the pivotless columns of $U$ are a basis for the null space of $A$. This follows from the fact that $\operatorname{null}(A) = \operatorname{null}(A Q^{-1}) = \operatorname{null}(L U)$ and that the pivotless columns of $U$ are zero vectors by construction. The $m-r$ rows of $L^{-1}$ corresponding to the pivotless rows of $U$ are a basis for the left null space of $A$. This follows from the fact that $\operatorname{null}(A^T) = \operatorname{null}((L^{-1} A)^T) = \operatorname{null}( (U Q)^T)$ and that the pivotless rows of $U$ are zero vectors by construction.
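To make the extraction concrete, here is a tiny numpy sketch (my own illustration, independent of the luq package) that applies the recipe above to the 2-by-2 example, where the second column and second row of $U$ are pivotless:

```python
import numpy as np

A = np.array([[1., 1.], [1., 1.]])
L = np.array([[1., 0.], [1., 1.]])
U = np.array([[1., 0.], [0., 0.]])
Q = np.array([[1., 1.], [0., 1.]])
assert np.allclose(A, L @ U @ Q)

pivotless = [1]                                # index of the zero row/column of U
right_null = np.linalg.inv(Q)[:, pivotless]    # columns of Q^{-1}
left_null  = np.linalg.inv(L)[pivotless, :]    # rows of L^{-1}

print(A @ right_null)   # ~0: spans the (right) null space of A
print(left_null @ A)    # ~0: spans the left null space of A
```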
LUQ Algorithm
Assume that $m \ge n$. (If $m < n$, then the lu command mentioned below outputs a slightly different $PA=LU$ factorization. Otherwise the LUQ decomposition is almost the same, and so, we omit this case.)
Given an $m$-by-$n$ matrix $A$, the LUQ decomposition calls the MATLAB command lu with partial (i.e., just row) pivoting. lu implements a variant of the LU decomposition that inputs $A$ and outputs:
$m$-by-$m$ permutation matrix $P$; $m$-by-$n$ lower trapezoidal matrix $\tilde L$ with ones on the diagonal; and, $n$-by-$n$ upper triangular matrix $\tilde U$
such that $PA = \tilde L \tilde U$. Write:$$\tilde U = \begin{bmatrix} \tilde U_{11} & \tilde U_{12} \\0 & \tilde U_{22} \end{bmatrix}$$ where $\tilde U_{11}$ has nonzero diagonal entries, and hence, is invertible. Also, let $e_i$ denote unit $m$-vectors equal to $1$ in the $i$th component and zero otherwise. The algorithm then builds:$$L = P^T \begin{bmatrix} \tilde L & e_{n+1} & \cdots & e_m \end{bmatrix} $$which is an $m \times m$ invertible matrix, and$$U = \begin{bmatrix} \tilde U_{11} & 0 \\0 & \tilde U_{22} \\0 & 0 \end{bmatrix}$$ which is upper trapezoidal, and$$Q = \begin{bmatrix} I & \tilde U_{11}^{-1} \tilde U_{12} \\0 & I \end{bmatrix}$$which is an $n$-by-$n$ invertible matrix. To summarize, we obtain:$$A = L \begin{bmatrix} \tilde U_{11} & 0 \\0 & \tilde U_{22} \\0 & 0 \end{bmatrix} Q $$ For the most part, that is all the algorithm does. However, if there are any nonzero entries in $\tilde U_{22}$, then the algorithm will call luq again with input matrix containing all of the nonzero entries of $\tilde U_{22}$. This last step introduces more zeros into $U$ and modifies the invertible matrices $L$ and $Q$.
To understand this last step, it helps to consider a simple input to luq like$$A = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}$$The first call to luq with this input trivially gives $U=A$ with $L$ and $Q$ being the $3$-by-$3$ identity matrices. Since $U$ has nonzero entries, a second call is made to luq with input $1$, which outputs $L=U=Q=1$. This second decomposition is incorporated into the first one by making the second column of $L$ the first one and moving all the other columns to the right of it, and similarly, moving the third row of $Q$ to the first row and moving all the other rows below it. This yields,$$A = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$$
To be sure, consider another simple example$$A = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & a & 0 & b \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & c \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}$$ where $a,b,c$ are nonzero reals. In the first pass through luq the algorithm again sets $U=A$ and $L$, $Q$ equal to the $5$-by-$5$ identity matrices. Since $U=\tilde U_{22}$ has nonzero elements, luq is called again with input matrix$$B = \begin{pmatrix} a & b \\0 & c \end{pmatrix}$$ This is incorporated into the first decomposition by permuting $L$ and $Q$ as shown:$$A = \begin{pmatrix} 0 & 0 & 1 & 0 & 0 \\1 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 1 & 0 \\0 & 1 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} a & b & 0 & 0 & 0 \\ 0 & c & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 & 0 & 0 \\0 & 0 & 0 & 0 & 1 \\1 & 0 & 0 & 0 & 0 \\0 & 1 & 0 & 0 & 0 \\0 & 0 & 0 & 1 & 0 \end{pmatrix} $$ In general, the columns of $L$ and the rows of $Q$ are permuted so that the zero columns/rows of $\tilde U_{22}$ are moved to the end of the matrix. An LUQ decomposition is then performed on this nonzero sub-block.
A full explanation would be notation heavy (requiring index sets for the zero/nonzero elements) and not much easier to understand than the code itself.
Simple Test
In reality, the algorithm computes an approximate LUQ decomposition and approximate bases, i.e., with rounding errors. These rounding errors might be significant if some of the nonzero singular values of $A$ are too small for the algorithm to detect.
Here is a MATLAB script file that tests the luq code. The script is a slight modification of the demo file that the software comes with. I modified the original file so that it inputs a sparse, random, rectangular, rank deficient matrix and outputs bases for the left/right null spaces of this input matrix.
Here is a sample output from this demo file.
elapsed time = 0.011993 seconds
Input matrix:
size = 10000x500
true right null space dimension = 23
true left null space dimension = 9523
Output:
estimated right null space dimension = 23
estimated left null space dimension = 9523
error in basis for right null space = 0
error in basis for left null space = 2.2737e-13
"Extreme" Test
This example is adapted from Gotsman and Toledo [2008]. Consider the $(n+1)$-by-$n$ matrix:$$A_1 = \begin{pmatrix} 1 & & & & \\-1 & 1 & & & \\\vdots & -1 & \ddots & & \\\vdots & & \ddots & 1 & \\-1 & -1 & \cdots & -1 & 1 \\0.5 & 0.5 & \cdots & 0.5 & 0.5 \end{pmatrix}$$and in terms of this matrix, define the block diagonal matrix:$$A = \begin{bmatrix} A_1 & 0 \\0 & A_2 \end{bmatrix}$$ where $A_2$ is an $n$-by-$n$ random symmetric positive semidefinite matrix whose eigenvalues are all equal to one except that three are zero and one is $10^{-8}$. With this input matrix and $n=1000$, we obtain the following sample output.
elapsed time = 1.1092
the matrix:
size of A = 2001x2000
true rank of A = 1997
true right null space dimension = 3
true left null space dimension = 4
results:
estimated right null space dimension = 3
estimated left null space dimension = 4
error in basis for right null space = 9.2526e-13
error in basis for left null space = 5.9577e-14
Remark
There is an option in the luq code to use LU factorization with complete (i.e., row and column) pivoting $PAQ=LU$. The resulting $U$ matrix in the $LUQ$ factorization may better reflect the rank of $A$ in more ill-conditioned problems, but there is an added cost to doing column pivoting.
Reference
Kowal, P. [2006]. "Null space of a sparse matrix." https://www.mathworks.com/matlabcentral/fileexchange/11120-null-space-of-a-sparse-matrix
Gotsman, C., and S. Toledo [2008]. "On the computation of null spaces of sparse rectangular matrices." SIAM Journal on Matrix Analysis and Applications, (30)2, 445-463.
|
I am conducting a linear regression: $Y=\alpha+\beta\times X+\epsilon$, $\epsilon\sim N(0,\sigma^2)$. It turned out that the confidence interval for the predicted mean $Y$ was so small (Figure 1) that it was hardly distinguishable from the predicted mean itself.
On the other hand, we can see that the observed $Y$s actually spread quite widely around each $X$. The second figure shows the prediction interval, which is large.
My question is:
What does this model tell me? Does it mean that it is a nearly perfect model for predicting the mean, since the confidence interval in Figure 1 is small? On the other hand, Figure 2 tells us that the random error in this model is high. Is there anything we can do (or is it necessary to do anything) to reduce such random error, although we already have a "perfect" model? For instance, will adding extra useful (assumed) variables help further explain (reduce) the random error?
Instead of asking what the difference between these two types of intervals is (see @whuber's link), I am interested in the case where a small confidence interval and a large prediction interval coexist: what can we say, and what can we do, about such a model? Is such a model already the best, so that we should submit the result? Or can something still be done to further explain the random error? Can someone help me interpret this result?
Thanks
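For what it's worth, the phenomenon is easy to reproduce on synthetic data; here is a minimal sketch (my own example using statsmodels; the variable names and numbers are arbitrary) where a large $n$ gives a tight confidence interval for the mean while a large $\sigma$ keeps the prediction interval wide:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, 5.0, n)   # large residual variance

fit = sm.OLS(y, sm.add_constant(x)).fit()

x_new = np.array([[1.0, 5.0]])              # [const, x] at x = 5
frame = fit.get_prediction(x_new).summary_frame(alpha=0.05)
print(frame[["mean", "mean_ci_lower", "mean_ci_upper"]])   # narrow CI for the mean
print(frame[["obs_ci_lower", "obs_ci_upper"]])             # wide prediction interval
```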
|
I want to learn how to construct spaces of quantum states of systems.
As an exercise, I tried to build the space of states and to find the Hamiltonian spectrum of the quantum system whose Hamiltonian is that of the harmonic oscillator with an additional quadratic term: $\hat{H}=\hat{H}_{0}+\hat{H}_{1}$, where
$\hat{H}_{0}=\hbar\omega\left(\hat{a}^{\dagger}\hat{a}+1/2\right)$, $\hat{H}_{1}=i\gamma\left(\hat{a}^{\dagger}\right)^{2}-i\gamma\,\hat{a}^{2}$; here $\hat{a}$, $\hat{a}^{\dagger}$ are the ladder operators and $\gamma$ is a real parameter.
For this purpose, we should define a complete set of commuting observables (CSCO).
As for the harmonic oscillator, we can define a "number" operator $N=\hat{a}^{\dagger}\hat{a}$.
We can prove the following statement:
Let $a$ and $a^{\dagger}$ be Hermitian conjugate operators with $\left[a,a^{\dagger}\right]=1$. Define the operator $N=a^{\dagger}a$. Then we can prove that $\left[N,a^{p}\right]=-pa^{p}$, $\left[N,a^{\dagger p}\right]=pa^{\dagger p}$ and that
the only algebraic functions of $a$ and $a^{\dagger}$, which commute with $N$, are the functions of $N$. (For example, see Messiah, Quantum Mechanics, exercises after chapter $12$)
Using this statement, we conclude (am I right?) that
the operator $N$ forms a CSCO, so the sequence of eigenvectors of $N$ forms a basis of the space of states. So I've come to the conclusion that the space of states of the described system is the same as the space of states of the harmonic oscillator. But the operators $a$, $a^{\dagger}$ can always be defined (as I think), so this argument would always be valid, and I would conclude that the spaces of states of all systems are the same. After that I realized that I am mistaken.
Would you be so kind as to explain where the mistake in the arguments above is? And can you give some references/articles/books where I can read additional information about constructing spaces of states for different systems?
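(Not an answer, just a practical aside: one can always probe such a Hamiltonian numerically in a truncated Fock basis. The sketch below is my own illustration, with $\hbar\omega=1$ and an arbitrary small $\gamma$; the truncation dimension should be increased until the low eigenvalues stop changing.)

```python
import numpy as np

dim = 60                   # truncated Fock space dimension
w, g = 1.0, 0.2            # hbar*omega and gamma, illustrative values

a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # annihilation operator
ad = a.conj().T                                # creation operator

H = w * (ad @ a + 0.5 * np.eye(dim)) + 1j * g * (ad @ ad) - 1j * g * (a @ a)
assert np.allclose(H, H.conj().T)              # the perturbed H is still Hermitian

print(np.linalg.eigvalsh(H)[:5])               # lowest few eigenvalues (watch truncation)
```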
|
Play around with different values in the matrix to see how the linear transformation it represents affects the image. Notice how the sign of the determinant (positive or negative) reflects the orientation of the image (whether it appears "mirrored" or not). The arrows denote eigenvectors corresponding to eigenvalues of the same color.
Here are some examples of matrix transformations.
Rotation by angle \(\theta\): \(\begin{bmatrix}\cos\theta & -\sin\theta\\\sin\theta & \cos\theta\end{bmatrix}\)
Reflection about the line at angle \(\theta\): \(\begin{bmatrix}\cos2\theta & \sin2\theta\\\sin2\theta & -\cos2\theta\end{bmatrix}\)
Shear parallel to the \(x\)-axis (factor \(k\)): \(\begin{bmatrix}1 & k\\0 & 1\end{bmatrix}\)
Shear parallel to the \(y\)-axis (factor \(k\)): \(\begin{bmatrix}1 & 0\\k & 1\end{bmatrix}\)
Uniform scaling by factor \(c\): \(\begin{bmatrix}c & 0\\0 & c\end{bmatrix}\)
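If you want the same numbers without the interactive widget, a couple of numpy lines reproduce them (the matrix here is an arbitrary rotation, just for illustration):

```python
import numpy as np

theta = 0.5
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation by theta

print(np.linalg.det(M))          # +1: orientation preserved
vals, vecs = np.linalg.eig(M)
print(vals)                      # complex conjugate pair e^{+/- i*theta}
print(vecs)                      # columns are the corresponding eigenvectors
```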
|
Given some vectors, how many dimensions do you need to add (to their span) before you can find some mutually orthogonal vectors that project down to the original ones?
Or, more formally...
Suppose $v_1,v_2,\ldots,v_k \in \mathbb C^m$. Take $n \ge m$ as small as possible such that if we consider $\mathbb C^m$ as a subspace of $\mathbb C^n$ in the natural way, then there is a projection $\pi$ of $\mathbb C^n$ onto $\mathbb C^m$ such that for some mutually orthogonal vectors $\hat v_1,\hat v_2,\ldots,\hat v_k \in \mathbb C^n$, $\pi(\hat v_i) = v_i$ for each $i$.
Intuitively, this $n$ provides a measure of how "far from orthogonal" the original vectors are. My (deliberately open-ended) question is the following. Does anyone recognize this $n$, or does the idea seem familiar to anyone from any other context? I can't identify it with or connect it to anything else I've encountered, but I'm wondering if it might appear in some other guise in linear algebra, or elsewhere.
|
There is the standard argument, using the definition of the inner product, that $\langle f|A|g\rangle =\langle g|A|f\rangle ^{*}$ for a Hermitian operator $A$, given any wave vectors $|f\rangle,~ |g\rangle$.
Also consider the following:
Consider one-dimensional position space, represented as an infinite-dimensional column vector of values of the wave function at discretized points along the $x$-axis.
In the limit of infinitesimal differences between position values, we have $\frac{\mathrm df}{\mathrm dx}\approx \frac{1}{2h} (f(x+h)-f(x-h))$, where $(x+h)$ and $(x-h)$ are the discrete position values just succeeding and preceding the position value $x$, and $h$ is sufficiently small.
Then we might talk of an infinite dimensional matrix representation of ${\rm d/d}x$, where only the two off-diagonal "diagonals" adjacent to the main diagonal have $1/2h$ and $-\:1/2h$ entries; everywhere else we have $0$ entries. This matrix is skew-symmetric.
If we multiply this matrix by $\mathrm i$, the skew-symmetric matrix becomes Hermitian, which makes ${\rm i~ d/d}x$ Hermitian.
Edit: As pointed out by tparker in an answer below, we get $\langle x|\partial|x^\prime\rangle =\frac{\partial}{\partial x}\langle x|x^\prime \rangle =\frac{\partial}{\partial x}\delta(x-x^\prime)$.
Since we are dealing with a discrete set of points in space here, we must have the normalization given by the Kronecker delta $\delta_{x,x^\prime}$.
Informally, we then have $\partial(\delta_{x,x^\prime})|_{x~=~(x^\prime-h)}\approx\frac{1}{2h}(\delta_{x^\prime,x^\prime}-\delta_{x^\prime-2h,x^\prime})=\frac{1}{2h}$, and also $\partial(\delta_{x,x^\prime})|_{x~=~(x^\prime+h)}\approx-\frac{1}{2h}, ~~\partial(\delta_{x,x^\prime})|_{x~=~x^\prime}\approx 0$, which again gives us a skew-symmetric matrix of the form obtained before. Here we scanned the matrix vertically in a given column, whereas in the previous calculation, we scanned a fixed row horizontally.
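The skew-symmetry claim is easy to check numerically; here is a minimal sketch (finite grid, open boundaries, so the matrix is simply truncated at the ends):

```python
import numpy as np

N, h = 200, 0.01
# Central-difference matrix for d/dx: +1/(2h) on the superdiagonal,
# -1/(2h) on the subdiagonal, zeros elsewhere.
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * h)

print(np.allclose(D, -D.T))                     # D is (real) skew-symmetric
print(np.allclose(1j * D, (1j * D).conj().T))   # i*D is Hermitian
```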
|
Research Open Access Published: Boundedness in a quasilinear attraction–repulsion chemotaxis system with nonlinear sensitivity and logistic source Boundary Value Problems volume 2019, Article number: 120 (2019) Article metrics
Abstract
In this paper, we deal with the following quasilinear attraction–repulsion model:
with homogeneous Neumann boundary conditions in a smooth bounded domain \(\varOmega \subset R^{n}\) (\(n\geq 2\)). Let the chemotactic sensitivity \(\chi (v)\) be a positive constant, and let the chemotactic sensitivity \(\xi (w)\) be a nonlinear function. Under some assumptions, we prove that the system has a unique globally bounded classical solution.
Introduction
In this paper, we consider a quasilinear attraction–repulsion chemotaxis system with nonlinear sensitivity and logistic source
where \(\varOmega \subset R^{n}\) (\(n\geq 2\)) is a bounded domain with smooth boundary, and \(\frac{\partial }{\partial \nu }\) denotes the derivative with respect to the outer normal of ∂Ω; α, β, γ, and δ are positive parameters, and \(\chi (v)\) and \(\xi (w)\) represent chemosensitivity. We assume that the functions \(\chi (v)\) and \(\xi (w)\) satisfy the following hypotheses: \((H_{1})\) :
the function \(\chi (v)=\chi _{0}\), which is a positive constant;
\((H_{2})\) :
the function \(\xi (w)=\frac{\xi _{0}}{w}\) for all \(w>0\), where \(\xi _{0}\) is a positive constant.
and there exist constants \(C_{D}>0\) and \(m\geq 1\) such that
The function \(f:[0,\infty )\rightarrow R\) is smooth and satisfies \(f(0)\geq 0\) and
with \(a\geq 0\), \(b>0\), and \(\eta >1\). The initial data comply with
Chemotaxis describes the oriented movement of cells along the concentration gradient of a chemical signal produced by cells. The prototype of the chemotaxis model, known as the Keller–Segel model, was first proposed by Keller and Segel [3] in 1970:
When \(\chi (v)\) is a positive constant, a global solution is studied by Osaki and Yagi [8] for \(n=1\); a global solution is investigated by Nagai et al. [7, 16] for \(n\geq 2\); the blowup solutions are proved by Herrero et al. [2, 12]. For the case where \(\chi (v)\leq \frac{\chi _{0}}{(1+\alpha v)^{k}}\), \(\alpha >0\), and \(k>1\), the global classical solution is asserted by Winkler [17]. For the case \(\chi (v)=\frac{\chi }{v}\) with a positive constant \(\chi <\sqrt{\frac{2}{n}}\), a global classical solution is explored by Winkler [18].
Moreover, when \(D(u)=1\) and \(f(u)=0\), Tao and Wang [11] studied the following chemotaxis model:
The global boundedness of the solutions was obtained in high dimensions, and blowup solutions were identified in \(R^{2}\).
In the case where \(\chi (v)\) and \(\xi (w)\) are positive parameters in (1.7), \(D(u)\) satisfies (1.3), and \(f(u)\) satisfies (1.4), a unique global bounded classical solution was deduced by Wang [15]. When \(f(u)=0\) in (1.7), \(\chi (v)\) and \(\xi (w)\) are positive functions, \(D(u)\) satisfies (1.3), and \(f(u)\) satisfies (1.4), the global classical solutions are asserted by Wu and Wu [19], who obtained an important estimate of \(\int _{\varOmega } \vert \nabla v \vert ^{2}\,dx\). Note that this method is not applicable for the general \(f(u)\) in our paper. For more details about chemotaxis system, we refer the interested readers to [1, 5, 6, 9, 13, 14].
Theorem 1.1
(i) If \(\sigma \in (1,\eta )\), then (1.1) admits a bounded global classical solution.
(ii) If \(\sigma \in (\eta ,m)\), then (1.1) admits a bounded classical solution.
(iii) If \(m>\max \{1,\frac{n\sigma +2-2\sigma }{n+2}, \frac{n\sigma -2}{n}\}\), then (1.1) admits a bounded global classical solution.
Lemma 1.1
([4])
with \(l>n\) and In addition, if \(T_{\max }<+\infty \), then Lemma 1.2 Let \((u,v,w)\) be the solution of system (1.1). Then there exist a constant \(m^{*}\) such that Proof
Integrating the first equation of system (1.1) over Ω, we have
Due to \(\eta >1\) and Young’s inequality, we derive
Combining with (1.12), we have
which yields (1.11). □
Lemma 1.3
(Gagliardo–Nirenberg inequality)
Let \(r\in (0,\alpha )\) and \(\psi \in W^{1,2}(\varOmega )\cap L^{r}(\varOmega )\). Then there exists a constant \(C_{\mathrm{GN}}>0\) such that with Lemma 1.4 Let Ω be a bounded domain in \(R^{n}\) with smooth boundary, and let \(v_{0}\in W^{1,\infty }(\varOmega )\). Suppose that there exists a constant \(C_{1}\) such that For the problem (i) if\(1\leq k< n\), then$$\begin{aligned} \bigl\Vert v(t) \bigr\Vert _{W^{1,j}(\varOmega )}\leq C \quad \textit{for all } j \in \biggl(0,\frac{nk}{n-k} \biggr); \end{aligned}$$(1.16) (ii) if\(k=n\), then(1.16) holds for all\(j\in (0,\infty )\); (iii) if\(k>n\), then(1.16) holds for\(j=\infty \). Lemma 1.5
([20])
For any \(h\in [1,\frac{n}{n-1})\), there exists a constant \(C_{2}>0\) such that Lemma 1.6
([21])
For any \(h\in [1,\frac{n\eta }{(n+2-\eta )^{+}})\), there exists a constant \(C_{3}>0\) such that A priori estimates Lemma 2.1 Suppose where If \(\sigma \in (1,\eta )\), then there exist constants \(E_{1}>0\) and \(E_{2}>0\) such that for sufficiently large k. Proof
Since \(\sigma \in (1,\eta )\), by Young’s inequality we have
and
□
Lemma 2.2 Suppose where If \(\sigma \in (\eta ,m)\), then there exist constants \(E_{3}>0\) and \(E_{4}>0\) such that Proof
By Lemma 1.2 and the Gagliardo–Nirenberg inequality there exists a constant \(C_{8}>0\) such that
where
By Young’s inequality we obtain
Since \(\sigma \in (\eta ,m)\), by Young’s inequality there exist \(C_{11}>0\) and \(C_{12}>0\) such that
Lemma 2.3 Suppose where If \(m> \max \{1,\frac{n\sigma +2-2\sigma }{n+2},\frac{n\sigma -2}{n} \}\), then there exist constants \(E_{5}>0\) and \(E_{6}>0\) such that Proof
By the Gagliardo–Nirenberg inequality there exists \(C_{13}>0\) such that
where
The condition \(m>\max \{1,\frac{n\sigma +2-2\sigma }{n+2}\}\) and sufficiently large k guarantee that
Hence \(\lambda _{1}\in (0,1)\).
Since \(m>\max \{1,\frac{n\sigma -2}{n}\}\), we obtain
By Young’s inequality we derive
Lemma 2.4 Let \(n\geq 2\). Defining and we have (a) if\(\eta \in (1,\frac{n+2}{n}]\), \(s<\frac{m+\eta }{2}- \frac{n-1}{n}\), then for sufficiently large k, there exist\(\beta >2\) and\(h\in [1,\frac{n}{n-1})\) such that$$\begin{aligned} \delta _{i}(k,\beta ;h)\in (0,1) \quad \textit{and} \quad w_{i}(k,\beta ;h)< 2, \quad i=2,3. \end{aligned}$$(2.16) (b) if\(\eta \in (\frac{n+2}{n},n+2)\), \(s<\frac{m}{2}+ \frac{\eta (n+4)}{2(n+2)}-1\), then for sufficiently large k, there exist\(\beta >2\) and\(h\in (\frac{n}{n-1},\frac{n\eta }{n+2-\eta })\) such that(2.16) holds. Proof
By computation we verify that (2.16) is equivalent to
Thus it is sufficient to ensure that
(a) For \(h\in [1,\frac{n}{n-1}]\), by the continuity of h it suffices to prove the case \(h=\frac{n}{n-1}\). To prove (2.17), we need to prove
Since \(s<\frac{m+\eta }{2}-\frac{n-1}{n}\), there exists
such that
(b) We note that \(\eta \in (\frac{n+2}{n},n+2)\) ensures the interval \(h\in (\frac{n}{n-1},\frac{n\eta }{n+2-\eta })\). By the continuity of h, let \(h=\frac{n\eta }{n+2-\eta }\). To prove (2.17), we need to show that
and
Since \(s<\frac{m}{2}+\frac{\eta (n+4)}{2(n+2)}-1\), there exists
such that
Lemma 2.5 For the second equation in (1.1), \(E>0\), and \(\beta >2\) we have for all \(t\in [0,T_{\max })\). Proof
The proof can be found in [18]. □
Lemma 2.6 and let \(S(u)\) and \(F(u)\) satisfy (1.8). If \(\sigma \in (1, \eta )\), there exist sufficiently large k and \(t\in [0,T_{\max })\) such that Proof
Multiplying both sides of the first equation in (1.1) by \((u+1)^{k-1}\), we have
for all \(t\in (0,T_{\max })\). Since \((u+1)^{\eta }\leq 2^{\eta -1}(u ^{\eta }+1)\) for \(\eta >1\), this implies that
Then (2.25) can be rewritten as
where
Similarly, we have
and then
For all \(t\in (0,T_{\max })\) with \(C_{16}>0\), we obtain
with \(\lambda _{i}\), \(\delta _{i}\) as in Lemma 2.4, where \(i=2,3\). Since \(w_{i}=\frac{\delta _{i}\lambda _{i}}{\beta }<2\), by Young’s inequality we have
where \(D_{1}=\frac{2C_{D}k(k-1)}{(k+m-1)^{2}}\) and \(D_{2}=\frac{C_{F} \xi _{0}k(k-1)}{k+\sigma -1}\). By Lemma 2.1 we have
for all \(t\in (0,T_{\max })\). By an ODE comparison argument we obtain (2.24).
For \(\eta \in [n+2,\infty )\), from Lemma 1.6 we have
Remark 2.1 Remark 2.2 Proof of Theorem 1.1
Using the elliptic regularity theory, we have
Then, for a sufficiently large k, by the Sobolev embedding theorem there exists a positive constant \(C_{29}\) such that
By using Lemma A.1 in [10] we conclude that u is uniformly bounded in \(\varOmega \times (0,T_{\max })\). Thus there exists a positive constant \(C_{30}\) such that
that is, \((u,v,w)\) is a global bounded classical solution to (1.1). □
References
1. Cieslak, T., Stinner, C.: New critical exponents in a fully parabolic quasilinear Keller–Segel system and applications to volume filling models. J. Differ. Equ. 258, 2080–2113 (2015)
2. Herrero, M., Velázquez, J.: Singularity patterns in a chemotaxis model. Math. Ann. 306, 583–623 (1996)
3. Keller, E., Segel, L.: Initiation of slime mold aggregation viewed as an instability. J. Theor. Biol. 26, 399–415 (1970)
4. Liu, J., Zheng, J., Wang, Y.: Boundedness in a quasilinear chemotaxis–haptotaxis system with logistic source. Z. Angew. Math. Phys. 67, 1–33 (2016)
5. Magdalena, L., Alexandra, C.R., Leah, E.K., Mogilner, A.: Chemotactic signaling, microglia, and Alzheimer disease senile plaques: is there a connection? Bull. Math. Biol. 65, 693–730 (2003)
6. Masaaki, M., Tomomi, Y.: A unified method for boundedness in fully parabolic chemotaxis systems with signal-dependent sensitivity. Math. Nachr. 290, 2648–2660 (2017)
7. Nagai, T., Senba, T., Yoshida, K.: Application of the Trudinger–Moser inequality to a parabolic system of chemotaxis. Funkc. Ekvacioj 40, 411–433 (1997)
8. Osaki, K., Yagi, A.: Structure of the stationary solution to Keller–Segel equation in one dimension. Surikaisekikenkyusho Kokyuroku 1105, 1–9 (1999)
9. Tao, Y., Wang, Z.: Competing effects of attraction vs. repulsion in chemotaxis. Math. Models Methods Appl. Sci. 23, 1–6 (2013)
10. Tao, Y., Winkler, M.: Boundedness in a quasilinear parabolic–parabolic Keller–Segel system with subcritical sensitivity. J. Differ. Equ. 252, 692–715 (2012)
11. Tao, Y., Winkler, M.: Locally bounded global solutions in a three-dimensional chemotaxis-Stokes system with nonlinear diffusion. Ann. Inst. Henri Poincaré, Anal. Non Linéaire 30, 157–178 (2013)
12. Wang, G.: Blow-up in a chemotaxis model without symmetry assumptions. Eur. J. Appl. Math. 12, 159–177 (2001)
13. Wang, L., Mu, C., Hu, X., Tian, Y.: Boundedness in a quasilinear chemotaxis–haptotaxis system with logistic source. Math. Methods Appl. Sci. 40, 3000–3016 (2017)
14. Wang, Q.: Global solutions of a Keller–Segel system with saturated logarithmic sensitivity function. Commun. Pure Appl. Anal. 14, 383–396 (2015)
15. Wang, Y., Liu, J.: Boundedness in a quasilinear fully parabolic Keller–Segel system with logistic source. Nonlinear Anal., Real World Appl. 38, 113–130 (2017)
16. Winkler, M.: Global diffusive behavior in the higher-dimensional Keller–Segel model. J. Differ. Equ. 248, 2889–2905 (2010)
17. Winkler, M.: Absence of collapse in a parabolic chemotaxis system with signal-dependent sensitivity. Math. Nachr. 283, 1664–1673 (2010)
18. Winkler, M.: Global solutions in a fully parabolic chemotaxis system with singular sensitivity. Math. Methods Appl. Sci. 34, 176–190 (2011)
19. Wu, S., Wu, B.: Global boundedness in a quasilinear attraction repulsion chemotaxis model with nonlinear sensitivity. J. Math. Anal. Appl. 442, 554–582 (2016)
20. Zhang, Q., Li, Y.: Boundedness in a quasilinear fully parabolic Keller–Segel system with logistic source. Nonlinear Anal., Real World Appl. 38, 113–130 (2017)
21. Zheng, J., Wang, Y.: Boundedness and decay behavior in a higher-dimensional quasilinear chemotaxis system with nonlinear logistic source. Comput. Math. Appl. 72, 2604–2619 (2016)
Acknowledgements
We would like to thank the referees for their valuable comments and suggestions to improve our paper.
Availability of data and materials
Data sharing not applicable to this paper as no data sets were generated or analyzed during the current study.
Funding
Project Supported by the National Natural Science Foundation of China (Grant No. 11571093).
Ethics declarations Competing interests
The authors declare that they have no competing interests.
Additional information Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
|
The classical Berry-Esseen theorem asserts that if $f$ and $g$ are the characteristic functions of two distribution functions $F(t)$ and $G(t)$ respectively, then, with $T$ arbitrary, $$ \sup_{t \in \mathbb{R}} |F(t) - G(t)| \ll \frac{1}{T} + \int_{-T}^{T} \bigg | \frac{f(t) - g(t)}{t} \bigg | dt $$ provided that one of $F$ or $G$ is in the Schwartz class (say). Is there a generalization of this inequality for distribution functions in $\mathbb{R}^k$, with $k = 2$ specifically? Precisely, I'm looking for a bound for $$ \sup_{\mathcal{R}} |\mathbb{P}(X \in \mathcal{R}) - \mathbb{P}(Y \in \mathcal{R}) | $$ in terms of the characteristic functions of $X$ and $Y$, with $X,Y$ random variables in $\mathbb{R}^2$, and $\mathcal{R}$ rectangles in $\mathbb{R}^2$.
The sharpest multidimensional Berry--Esseen Theorem I know is due to Bentkus and appears in the paper "A Lyapunov type bound in ${\mathbb R}^d$".
It does not use the characteristic function, though.
There are many results along those lines in Bhattacharya and Rao, Normal Approximation and Asymptotic Expansions.
I was also looking for a higher dimensional analogue of the so-called Berry-Esseen inequality and ran into this unanswered question.
For a two dimensional analogue look at Theorem 1 and its Corollary in the paper On two-dimensional analogues of an inequality of Esseen and their application to the central limit theorem by S.M. Sadikova. It is mentioned here that the proof of the result generalizes to higher dimensions.
For an explicit statement of the general higher dimensional analogue look at Theorem 2 and Corollary 2.2 of the paper Higher dimensional quasi-power theorem and Berry–Esseen inequality by Clemens Heuberger and Sara Kropf.
|
It is essentially impossible to answer the general question of "how does multilinearity come up naturally in physics?" because of the myriad of possible examples that make up the total answer. Instead, let me describe a situation that very loudly cries out for the use of tensor products of two vectors.
Consider the problem of conservation of momentum for a continuous distribution of electric charge and current, which interacts with an electromagnetic field, under the action of no other external force. I will describe it more or less along the lines of Jackson (Classical Electrodynamics, 3rd edition, §6.7) but depart from it towards the end. This will get very electromagneticky for a while, so if you want to skip to the tensors, you can go straight to equation (1).
The rate of change of the total mechanical momentum of the system is the total Lorentz force, given by $$\frac{ d\mathbf{P}_\rm{mech}}{dt}=\int_V(\rho\mathbf{E}+\mathbf{J}\times \mathbf{B})d\mathbf{x}.$$To simplify this, one can take $\rho$ and $\mathbf{J}$ from Maxwell's equations:$$\rho=\epsilon_0\nabla\cdot\mathbf{E}\ \ \ \text{ and }\ \ \ \mathbf{J}=\frac1{\mu_0}\nabla\times \mathbf{B}-\epsilon_0\frac{\partial \mathbf{E}}{\partial t}.$$(In particular, this means that what follows is only valid "on shell": momentum is only conserved if the equations of motion are obeyed. Of course!)
One can then put these expressions back, to a nice vector calculus work-out, and come up with the following relation:$$\begin{align}{}\frac{ d\mathbf{P}_\rm{mech}}{dt}+&\frac{d}{dt}\int_V\epsilon_0\mathbf{E}\times \mathbf{B}d\mathbf{x} \\ &=\epsilon_0\int_V \left[\mathbf{E}(\nabla\cdot \mathbf{E})-\mathbf{E} \times(\nabla \times \mathbf{E}) + c^2 \mathbf{B} (\nabla \cdot \mathbf{B})- c^2 \mathbf{B} \times (\nabla \times \mathbf{B})\right]d\mathbf{x}.\end{align}$$
The integral on the left-hand side can be identified as the total electromagnetic momentum, and differs from the integral of the Poynting vector by a factor of $1/c^2$. To get this in the proper form for a conservation law, though, such as the one for energy in this setting,$$\frac{dE_\rm{mech}}{dt}+\frac{d}{dt}\frac{\epsilon_0}{2}\int_V(\mathbf{E}^2+c^2\mathbf{B}^2)d\mathbf{x}=-\oint_S \mathbf{S}\cdot d\mathbf{a},$$we need to reduce the huge, ugly volume integral into a surface integral.
The way to do this, is, of course, the divergence theorem. However, that theorem is for scalars, and what we have so far is a vector equation. To work further then, we need to (at least temporarily) work in some specific basis $\{\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3\}$, and write $\mathbf{E}=\sum_i E_i \mathbf{e}_i$. Let's work with the electric field term first; after that the results also apply to the magnetic term. Thus, to start with,$$\begin{align}{}\int_V \left[\mathbf{E}(\nabla\cdot \mathbf{E})-\mathbf{E} \times(\nabla \times \mathbf{E})\right]d\mathbf{x}=\sum_i \mathbf{e}_i\int_V \left[E_i(\nabla\cdot \mathbf{E})-\mathbf{e}_i\cdot\left(\mathbf{E} \times(\nabla \times \mathbf{E})\right)\right]d\mathbf{x}.\end{align}$$These terms should be simplified using the vector calculus identities$$E_i(\nabla\cdot \mathbf{E})=\nabla\cdot\left(E_i \mathbf{E}\right) - \mathbf{E}\cdot \nabla E_i$$and$$\mathbf{E} \times(\nabla \times \mathbf{E})=\frac12\nabla(\mathbf{E}\cdot\mathbf{E})-(\mathbf{E}\cdot\nabla)\mathbf{E},$$which mean that the whole combination can be simplified as$$\begin{align}{}\int_V \left[\mathbf{E}(\nabla\cdot \mathbf{E})-\mathbf{E} \times(\nabla \times \mathbf{E})\right]d\mathbf{x}=\sum_i \mathbf{e}_i\int_V \left[\nabla\cdot\left(E_i \mathbf{E}\right) -\mathbf{e}_i\cdot\left(\frac12\nabla(\mathbf{E}\cdot\mathbf{E})\right)\right]d\mathbf{x},\end{align}$$since the terms in $\mathbf{E}\cdot \nabla E_i$ and $\mathbf{e}_i\cdot\left( (\mathbf{E}\cdot\nabla)\mathbf{E}\right)$ cancel. This means we can write the whole integrand as the divergence of some vector field, and use the divergence theorem:$$\begin{align}{}\int_V \left[\mathbf{E}(\nabla\cdot \mathbf{E})-\mathbf{E} \times(\nabla \times \mathbf{E})\right]d\mathbf{x}&=\sum_i \mathbf{e}_i\int_V \nabla\cdot\left[E_i \mathbf{E}-\frac12 \mathbf{e}_i E^2\right]d\mathbf{x}\\ & =\sum_i \mathbf{e}_i\oint_S\left[E_i \mathbf{E}-\frac12 \mathbf{e}_i E^2\right]\cdot d\mathbf{a}. \tag 1\end{align}$$
In terms of conservation law structure, we're essentially done, as we've reduced the rate of change of momentum to a surface term. However, it is crying out for some simplification. In particular, this expression is basis-dependent, but it is so close to being basis independent that it's worth a closer look.
The first term, for instance, is simply crying out for a simplification that would look something like $$\sum_i \mathbf{e}_i\oint_SE_i \mathbf{E}\cdot d\mathbf{a}=\oint_S\mathbf{E}\, \mathbf{E}\cdot d\mathbf{a}$$if we could only make sense of an object like $\mathbf{E}\, \mathbf{E}$. Even better, if we could make sense of such a combination, then it turns out that the seemingly basis-dependent combination that would come up in the second term, $\sum_i \mathbf{e}_i\,\mathbf{e}_i$, turns out to be basis independent: one can prove that for any two orthonormal bases $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ and $\{\mathbf{e}_1', \mathbf{e}_2', \mathbf{e}_3'\}$, those combinations are the same:$$\sum_i \mathbf{e}_i\,\mathbf{e}_i = \sum_i \mathbf{e}_i'\,\mathbf{e}_i'$$as long as the product $\mathbf{u}\,\mathbf{v}$ of two vectors, whatever it ends up being, is linear on each component, which is definitely a reasonable assumption.
So what, then, should this new vector multiplication be? One key to realizing what we really need is noticing the fact that we haven't yet assigned any real physical meaning to the combination $\mathbf{E}\,\mathbf{E}$; instead, we're only ever interacting with it by dotting "one of the vectors of the product" with the surface area element $d\mathbf{a}$, and that leaves a vector $\mathbf{E}\,\mathbf{E}\cdot d\mathbf{a}$ which we can integrate to get a vector, and that requires no new structure.
Let's then write a list of how we want this new product to behave. To keep things clear, let's give it some fancy new symbol like $\otimes$, mostly to avoid unseemly combinations like $\mathbf{u}\,\mathbf{v}$. We want then, a function $\otimes:V\times V\to W$, which takes euclidean vectors in $V=\mathbb R^3$ into some vector space $W$ in which we'll keep our fancy new objects. Combinations of the form $\mathbf{u}\otimes \mathbf{v}$ should be linear in both $\mathbf{u}$ and $\mathbf{v}$. For all vectors $\mathbf{w}$ in $V$, and all combinations $(\mathbf{u},\mathbf{v})\in V\times V$, we want the combination $(\mathbf{u}\otimes \mathbf{v})\cdot\mathbf{w}$ to be a vector in $V$. Even more, we want that to be the vector $(\mathbf{v}\cdot\mathbf{w})\mathbf{u}\in V$.
That last one looks actually pretty strong, but there's evidently room for improvement. For one, it depends on the euclidean structure, which is not actually necessary: we can make an equivalent statement that uses the vector space's dual.
For all $(\mathbf{u},\mathbf{v})\in V\times V$ and all $f\in V^\ast$, we want $f_\to(\mathbf{u}\otimes \mathbf{v})=f(\mathbf{v})\mathbf{u}\in V$ to hold, where $f_\to$ simply means that $f$ acts on the factor on the right.
Finally, if we're doing stuff with the dual, we can reformulate that in a slightly prettier way. Since two vectors $\mathbf{u},\mathbf{v}\in V$ are equal if and only if $f(\mathbf{u})=f(\mathbf{v})$ for all $f\in V^\ast$, we can give another equivalent statement of the same statement:
For all $(\mathbf{u},\mathbf{v})\in V\times V$ and all $f,g\in V^\ast$, we want $g_\leftarrow f_\to(\mathbf{u}\otimes \mathbf{v})=g(\mathbf{u})f(\mathbf{v})$.
[Note, here, that this last rephrasing isn't really that fancy. Essentially, it is saying that the vector equation (1) is really to be interpreted as a component-by-component equality, and that's not really off the mark of how we actually do things.]
I could keep going, but it's clear that this requirement can be rephrased into the universal property of the tensor product, and that rephrasing is a job for the mathematicians. Thus, you can see the story like this: Upon hitting equation (1), we give to the mathematicians this list of requirements. They go off, think for a bit, and come back telling us that such a structure does exist (i.e. there exist rigorous constructions that obey those requirements) and that it is essentially unique, in the sense that multiple such constructions are possible, but they are canonically isomorphic. For a physicist, what that means is that it's OK to write down objects like $\mathbf{u}\otimes \mathbf{v}$ as long as one does keep within the rules of the game.
As far as electromagnetism goes, this means that we can write our conservation law in the form$$\frac{ d\mathbf{P}_\rm{mech}}{dt}+\frac{d}{dt}\int_V\epsilon_0\mathbf{E}\times \mathbf{B}d\mathbf{x} =\oint_A \mathcal T\cdot d\mathbf{a}$$where$$\mathcal T=\epsilon_0\left[\mathbf{E}\otimes\mathbf{E}+c^2\mathbf{B}\otimes\mathbf{B}-\frac12\sum_i\mathbf{e}_i\otimes\mathbf{e}_i\left(E^2+c^2 B^2\right)\right]$$is, of course, the Maxwell stress tensor.
I could go on and on about this, but I think this really captures the essence of how and where it happens in physics that a situation is really begging the use of a tensor product. There are other such situations, of course, but this is the clearest one I know.
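And once the object exists, it is painless to use numerically. Here is a small numpy sketch (field values and units are arbitrary, with $\epsilon_0=c=1$ for brevity) that builds $\mathcal T$ from outer products and contracts it with a surface normal:

```python
import numpy as np

eps0, c = 1.0, 1.0
E = np.array([0.3, 0.0, 1.0])    # electric field at a point (arbitrary values)
B = np.array([0.0, 0.5, 0.0])    # magnetic field at a point (arbitrary values)

# T = eps0 [ E(x)E + c^2 B(x)B - (1/2) (sum_i e_i(x)e_i) (E^2 + c^2 B^2) ]
T = eps0 * (np.outer(E, E) + c**2 * np.outer(B, B)
            - 0.5 * np.eye(3) * (E @ E + c**2 * (B @ B)))

n = np.array([0.0, 0.0, 1.0])    # unit normal of a surface element
print(T @ n)                     # force per unit area transmitted across it
```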
|
Differential and Integral Equations Differential Integral Equations Volume 32, Number 7/8 (2019), 423-454. Structure of conformal metrics on $\mathbb{R}^n$ with constant $Q$-curvature Abstract
In this article, we study the nonlocal equation $$ (-\Delta)^{\frac{n}{2}}u=(n-1)!e^{nu}\quad \text{in } \mathbb R^n, \quad\int_{\mathbb R^n}e^{nu}dx < \infty, $$ which arises in conformal geometry. Inspired by the previous work of C.S. Lin and L. Martinazzi in even dimension and T. Jin, A. Maalaoui, L. Martinazzi, J. Xiong in dimension three, we classify all solutions to the above equation in terms of their behavior at infinity.
Article information Source Differential Integral Equations, Volume 32, Number 7/8 (2019), 423-454. Dates First available in Project Euclid: 2 May 2019 Permanent link to this document https://projecteuclid.org/euclid.die/1556762424 Mathematical Reviews number (MathSciNet) MR3945763 Citation
Hyder, Ali. Structure of conformal metrics on $\mathbb{R}^n$ with constant $Q$-curvature. Differential Integral Equations 32 (2019), no. 7/8, 423--454. https://projecteuclid.org/euclid.die/1556762424
|
I wonder whether 'deepness' is subjective or not. The Compactness Theorem of first-order logic has several proofs.
Theorem. (Gödel-Maltsev) Given a language $L$ and a set $S$ of first-order sentences in that language, if every finite subset of $S$ has a model, then $S$ has a model.
(A language is just a set of relational, function and constant symbols, for example the language of ordered fields is $+,\times,=,\leqslant,0,1$; a first-order sentence in this language is a finite word using this language and $\exists,\forall,\land(and),\lor(or)$ and parentheses for ease of reading only, for instance the sentence $\forall x\exists y(y^2=x)$; a model of a sentence is just a set where the sentence can be interpreted in a 'true' way. For instance $\bf R$ is not a natural model of the above sentence as $-1$ does not have a square root there, but $\bf C$ is a model of it.)
(For one immediate application, one can deduce in a straightforward way that there is an ordered field containing $\bf R$ as well as infinitesimal/infinite elements while having the same first order properties as $\bf R$)
Some Proofs. (I believe there are many more)
(1) Gödel's original proof (for Gödel, in 1929, this Theorem was stated as a 'Remark') is from his Completeness Theorem, stated in the particular case where the language $L$ is countable, in which case the axiom of choice is not needed. Hard for me to say the nature of the proof. Grammatical maybe.
(2) I don't know precisely the nature of Maltsev's proof, published in German in 1936, and extending G.'s result to the case of an arbitrary signature $L$, using the axiom of choice.
(3) Łoś's proof via the 'explicit' construction of the model as an ultraproduct of models of the finite subsets of $S$, using the 'axiom of the ultrafilter', which is weaker than the axiom of choice. I would say this proof is of a topological nature.
In the same (?) vein, Gromov's 'bounded version' of his Theorem stating that a finitely generated group of polynomial growth has a nilpotent subgroup of finite index has several (at least 2) proofs, apparently.
Theorem. (Gromov) For any positive integers $k$, $d$, $n$, there exists a positive integer $m$ such that any $n$-generated group, in which for all $r= 1,\dots,m$ the size of the ball of radius $r$ centered at the identity is at most $kr^d$, has a subgroup of index and nilpotency class at most $m$.
I read that Gromov's original proof is a Compactness argument. Van den Dries and Wilkie gave an alternative proof using Gödel's Compactness Theorem. Belegradek recently provided a third proof using yet another kind of compactness argument, using very little model theory.
Edit: To match the OP's Edit, I'm just adding that the field concerning the first Theorem should be 'model theory', which is currently classified (according to Barwise, if I am not mistaken) as the latest of the 4 branches of 'mathematical logic'. It is central/important in the sense that it more or less gave birth to the 'field', and is used more or less tacitly in pretty much every Theorem of model theory. The second Theorem belongs to the field of 'geometric group theory', and I do not know about its being central. It seems to be important in the sense that many mathematicians seem to be interested in it. It also has a Wikipedia page in 3 major mathematical languages: https://fr.wikipedia.org/wiki/Th%C3%A9or%C3%A8me_de_Gromov_sur_les_groupes_%C3%A0_croissance_polynomiale
|
Let $p$ be a real number greater than $1$. It is well known (see Hall and Heyde's Martingale limit theory and its applications, Theorem 2.10) that there exists a constant $C_p$ such that if $(X_i)_{i=1}^n$ is a real valued martingale difference with respect to the filtration $(\mathcal F_i)_{i=1}^n$ (that is, $(S_j:=\sum_{i=1}^jX_i)_{j=1}^n$ is a martingale with respect to this filtration), then $$\frac 1{C_p}\mathbb E\left(\sum_{i=1}^nX_i^2\right)^{p/2}\leqslant \mathbb E\left|\sum_{i=1}^nX_i\right|^p\leqslant C_p\mathbb E\left(\sum_{i=1}^nX_i^2\right)^{p/2}.$$Hence the $\mathbb L^p$ norm of the partial sum is controlled by those of the quadratic variation.
Now define for a real valued random variable $X$: $$\lVert X\rVert_{p,\infty}:=\left(\sup_{t\geqslant 0}t^p\mu\{|X|\geqslant t\}\right)^{1/p}.$$ This is equivalent to a norm (namely $N(X):=\sup_{\mu(A)>0}\mu(A)^{-1+1/p}\int_A|X|\mathrm d\mu$).
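For intuition, here is a small numerical sketch (not part of the question) of how this quasi-norm can be estimated on a finite sample; the distribution and the function name are purely illustrative.

```python
import numpy as np

def weak_lp_quasi_norm(sample, p):
    """Empirical version of ||X||_{p,infty} = (sup_t t^p * P(|X| >= t))^(1/p).

    On a finite sample the supremum is attained at one of the order
    statistics, so it suffices to scan t = |x|_(k) for each k.
    """
    a = np.sort(np.abs(np.asarray(sample)))[::-1]   # decreasing order
    n = len(a)
    # For t equal to the (k+1)-th largest value a[k], the empirical tail is (k+1)/n.
    tails = (np.arange(n) + 1) / n
    return (np.max(a**p * tails)) ** (1.0 / p)

rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=100_000)   # heavy-tailed: in L^p only for p < 3
for p in (1.5, 2.0, 2.5):
    print(p, weak_lp_quasi_norm(x, p), np.mean(np.abs(x)**p) ** (1 / p))
```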
I would like to know whether there is an inequality similar to Rosenthal's, that is, a control of $N(S_n)$ in terms of $N\left(\sqrt{\sum_{i=1}^nX_i^2}\right)$, plus maybe another term. This seems to be a natural question which has probably been investigated, but I didn't manage to find a reference.
There are weak-$\mathbb L^p$ versions of Rosenthal's inequality for independent random variables, but I would like to see a reference dealing with an extension to martingale differences, namely:
Let $(\Omega,\mathcal F,\mu)$ be a probability space and let $p\gt 2$. Is there a constant $C_p$ such that if $n$ is an integer and $(X_j)_{1\leqslant j\leqslant n}$ is a martingale difference with respect to the filtration $(\mathcal F_j)_{1\leqslant j\leqslant n}$ with $\lVert X_j\rVert_{p,\infty}\lt\infty$, then $$C_p^{-1}\left\lVert \sqrt{\sum_{j=1}^nX_j^2}\right\rVert_{p,\infty}\leqslant \left\lVert \sum_{j=1}^nX_j\right\rVert_{p,\infty}\leqslant C_p\left\lVert \sqrt{\sum_{j=1}^nX_j^2}\right\rVert_{p,\infty}~? $$
|
Global existence and exponential stability for a nonlinear Timoshenko system with delay. Boundary Value Problems, volume 2015, Article number: 206 (2015)
Abstract
This paper is concerned with a nonlinear Timoshenko system modeling clamped thin elastic beams with time delay. The delay is defined on a feedback term associated to the equation for rotation angle. Under suitable assumptions on the data, we establish the well-posedness of the problem with respect to weak solutions. We also establish the exponential stability of the system under the usual equal wave speeds assumption.
Introduction
In this paper, we are concerned with a Timoshenko system with time delay,
where \((x,t)\in(0,1)\times\mathbb{R}^{+}\). When \(\mu_{1}=\mu_{2}=f=0\), this system was proposed by Timoshenko [1] as a model for vibrations of a thin elastic beam of length 1. Here, \(\varphi=\varphi(x,t)\) denotes the transverse displacement of the beam, \(\psi=\psi(x,t)\) denotes the rotation angle of the beam’s filament, and \(\rho_{1}\), \(\rho_{2}\), k, b are positive constants related to physical properties of the beam. In the system, \(\mu_{1}\psi_{t}\) represents a frictional damping and \(f(\psi)\) is a forcing term. The time delay is given by \(\mu_{2} \psi_{t}(x,t-\tau)\), where \(\mu_{1}\), \(\mu_{2}\), τ are positive constants.
To the system we add the initial conditions
where \(f_{0}\) is prescribed, and the Dirichlet boundary conditions
We observe that our problem is set in a context where: (a) the damping is defined only on the equation for the rotation angle; (b) there is a time delay; (c) exponential stability is sought under a nonlinear forcing. In this scenario we briefly comment on some related earlier works.
is exponentially stable if and only if
This assumption, which means that both waves on the system have equal propagation speed, was later extended to several other problems based on Timoshenko systems. We refer the reader to the references [4–13] among others.
On the other hand, the dynamics of delay systems have been a major research subject in differential equations (see, e.g., [14, 15]). It is known that a time delay on the feedback term (internal or at the boundary) in a wave equation can destabilize the system, depending on the weight of each term, as discussed in Datko et al. [16] and Nicaise and Pignotti [17, 18]. Following that context, Said-Houari and Laskri [13] studied the stability of system (1.1) with \(f(\psi)=0\). They proved that, under condition (1.4) and \(\mu_{2} < \mu_{1}\), the system is exponentially stable.
In the present paper our objective is to extend the result of Said-Houari and Laskri [13] to a nonlinear framework by adding a forcing term \(f(\psi)\). The rest of the paper is organized as follows. In Section 2, we present some preliminary remarks and the main results. In Section 3, we prove the well-posedness of system (1.1)-(1.3) by using semigroup theory. In Section 4, we prove the exponential stability of system (1.1)-(1.3) by using energy methods.
Preliminaries and main results
In this paper we use standard Lebesgue and Sobolev spaces
In the case \(q=2\) we write \(\Vert u \Vert\) instead of \(\Vert u \Vert_{2}\).
Now we give some hypotheses on the forcing term \(f(\psi(x,t))\). We assume \(f: \mathbb{R}\rightarrow\mathbb{R}\) satisfying
where \(k_{0}>0\), \(\theta>0\). In addition we assume that
with \(\hat{f}(z)=\int^{z}_{0}f(s)\,ds\).
Then it is easy to verify
Thus, equations (1.1) are transformed to
with \(x\in(0,1)\), \(\rho\in(0,1)\) and \(t>0\), and the initial and boundary conditions are
Before using the semigroup theory, we introduce two new dependent variables \(u=\varphi_{t}\) and \(v=\psi_{t}\), then problem (2.5)-(2.6) is reduced to the following problem for an abstract first-order evolutionary equation:
where \(U=(\varphi,u,\psi,v,z)^{T}\), and
with the domain
where
We define the energy space \(\mathscr{H}\) by
For \(U=(\varphi,u,\psi,v,z)^{T}\), \(\overline{U}=(\overline{\varphi},\overline{u},\overline{\psi },\overline{v},\overline{z})^{T}\), and for ξ a positive constant satisfying
we equip \(\mathscr{H}\) with the inner product
Now we give the result of the well-posedness of solutions to problem (2.7).
Theorem 2.1 (i) If \(U_{0}\in\mathscr{H}\), then problem (2.7) has a unique mild solution \(U\in C([0,\infty),\mathscr{H})\) with \(U(0)=U_{0}\). (ii) If \(U_{1}\) and \(U_{2}\) are two mild solutions of problem (2.7), then there exists a positive constant \(C_{0}=C(U_{1}(0),U_{2}(0))\) such that $$\begin{aligned} \bigl\| U_{1}(t)-U_{2}(t)\bigr\| _{\mathscr{H}}\leq e^{C_{0}T}\bigl\| U_{1}(0)-U_{2}(0)\bigr\| _{\mathscr{H}} \quad \textit{for any } 0\leq t\leq T. \end{aligned}\tag{2.12}$$ (iii) If \(U_{0}\in D(\mathcal{A})\), then the above mild solution is a strong solution.
Below we shall give the stability result.
Theorem 2.2
The well-posedness
Lemma 3.1 The energy \(E(t)\) defined by (2.13) is a nonincreasing function along the solution trajectories, i.e., there exists a positive constant C such that for any \(t\geq0\), and there exist two positive constants \(\delta_{0}\) and \(C_{1}\), independent of initial data in \(\mathscr{H}\), such that for any \(t\geq0\),
Proof
Multiplying the first equation in (2.5) by \(\varphi_{t}\) and the second equation by \(\psi_{t}\), integrating the result over \((0,1)\) with respect to
x and using Young’s inequality, we obtain
We multiply the third equation in (2.5) by \(\frac{\xi}{\tau}z\) and integrate the result over \((0, 1)\times(0,1)\) with respect to
ρ and x, respectively, to get
Lemma 3.2 The operator \(\mathcal{A}\) defined in (2.7) is the infinitesimal generator of a \(C^{0}\)-semigroup in \(\mathscr{H}\).
Proof
It follows from (3.1) that for all \(U(t)\in D(\mathcal{A})\),
which implies that the operator \(\mathcal{A}\) is a dissipative operator.
Next we will prove that the operator \(I-\mathcal{A}\): \(D(\mathcal {A})\rightarrow\mathscr{H}\) is onto, that is, given \(U^{*}=(f_{1},f_{2},f_{3},f_{4},f_{5})^{T}\in\mathscr{H}\), we seek \(U=(\varphi,u,\psi,v,z)^{T}\in D(\mathcal{A})\) that is a solution of \((I-\mathcal{A})U=U^{*}\). We have
Then we can infer that the operator \(\mathcal{A}\) is m-dissipative in \(\mathscr{H}\). Since \(D(\mathcal{A})\) is dense in \(\mathscr{H}\), we can conclude that the operator \(\mathcal{A}\) is the infinitesimal generator of a \(C^{0}\)-semigroup in \(\mathscr{H}\) by the Lumer-Phillips theorem (see, for example, Pazy [19]). The proof is now complete. □
Lemma 3.3 The operator F defined in (2.7) is locally Lipschitz in \(\mathscr{H}\).
Proof
Let \(U_{1}=(\varphi^{1},u^{1},\psi^{1},v^{1},z^{1})\) and \(U_{2}=(\varphi^{2},u^{2},\psi^{2},v^{2},z^{2})\), then we have
By using (2.1), Hölder’s and Poincaré’s inequalities, we can obtain
which gives us
Then the operator F is locally Lipschitz in \(\mathscr{H}\). The proof is hence complete. □
Proof of Theorem 2.1
defined in a maximal interval \((0,t_{\max})\).
If \(t_{\max}<\infty\), then
It is easy to get inequality (2.12) by using (3.4), the local Lipschitz behavior of F and Gronwall’s inequality. Then we can obtain the continuous dependence on the initial data for mild solutions. This proves item (ii) of Theorem 2.1.
Exponential stability
In this section, we shall prove Theorem 2.2, which will be divided into the following lemmas.
Lemma 4.1 satisfies that for any \(\varepsilon>0\), hereafter \(\lambda_{1}>0\) is the first eigenvalue of −Δ in \(H^{1}_{0}(0,1)\).
Proof
A straightforward calculation gives
Using (2.5) and integrating by parts, we see that
It follows from Young’s inequality and Poincaré’s inequality that for any \(\varepsilon>0\),
Lemma 4.2 where g is the solution of Then the functional \(I_{2}\) satisfies, for any \(\eta,\tilde{\eta}>0\),
Proof
We know from (2.5) that
By (4.7), we can get
Using Young’s inequality and Poincaré’s inequality, we have
Now we define the following functional:
Then we may get the following lemma.
Lemma 4.3
Proof
By taking a derivative of (4.13), we arrive at
By using Young’s inequality and Poincaré’s inequality, we know that for any \(\varepsilon>0\),
and
Lemma 4.4
Proof
By the same argument as in [13], we know that for any \(\varepsilon>0\),
By using (2.5), Young’s inequality, integration by parts and the following fact
we see that
Similarly,
In order to handle the term \(z(x,\rho,t)\), we introduce the functional
Then we can find the following result in [13].
Lemma 4.5 where c is a positive constant.
Now we define the following Lyapunov functional \(\mathscr{L}(t)\) by
Then we may obtain the following lemma.
Lemma 4.6 Let \((\varphi,\varphi_{t},\psi,\psi_{t},z)\) be the solution of problem (2.5)-(2.6). For M large enough, there exist two positive constants \(\gamma_{1}\) and \(\gamma_{2}\), depending on M, N and ε, such that for any \(t\geq0\),
Proof
By the same argument as in [13], we can deduce
where the positive constants \(\alpha_{i}\) (\(i=1,2,3,4\)) are determined as in [13].
Performing Young’s inequality and using the fact
we easily get
Then choosing M so large that \(\gamma_{1}:=M-\tilde{C}>0\) and \(\gamma_{2}:=M+\tilde{C}>0\), we complete the proof. □
Proof of Theorem 2.2
First we choose η so small that
and then we choose ε small enough so that
Then we take N so large that
After that, we select η̃ small enough so that
Then we take M so large that there exists a positive constant δ such that
Noting (2.13), we know that there exists a positive constant β such that
which, together with (4.25), yields
Then we can get
Using again (4.25), we find that
which gives us that the exponential stability holds for any \(U_{0}\in D(\mathcal{A})\). Noting that \(D(\mathcal{A})\) is dense in \(\mathscr{H}\), we can extend the energy inequalities to phase space \(\mathscr{H}\). Thus we complete the proof of Theorem 2.2. □
References
1. Timoshenko, SP: On the correction for shear of the differential equation for transverse vibrations of prismatic bars. Philos. Mag. Ser. 6, 41(245), 744-746 (1921)
2. Soufyane, A: Stabilisation de la poutre de Timoshenko. C. R. Math. Acad. Sci. Paris, Sér. I, 328, 731-734 (1999)
3. Soufyane, A, Wehbe, A: Uniform stabilization for the Timoshenko beam by a locally distributed damping. Electron. J. Differ. Equ. 2003, 29 (2003)
4. Almeida Júnior, DS, Muñoz Rivera, JE, Santos, ML: The stability number of the Timoshenko system with second sound. J. Differ. Equ. 253, 2715-2733 (2012)
5. Amar-Khodja, F, Benabdallah, A, Muñoz Rivera, JE, Racke, R: Energy decay for Timoshenko systems of memory type. J. Differ. Equ. 194, 82-115 (2003)
6. Fatori, LH, Monteiro, RN, Fernández Sare, HD: The Timoshenko system with history and Cattaneo law. Appl. Math. Comput. 228, 128-140 (2014)
7. Guesmia, A, Messaoudi, SA: General energy decay estimates of Timoshenko systems with frictional versus viscoelastic damping. Math. Methods Appl. Sci. 32, 2102-2122 (2009)
8. Ma, Z, Zhang, L, Yang, X: Exponential stability for a Timoshenko-type system with history. J. Math. Anal. Appl. 380, 299-312 (2011)
9. Messaoudi, SA, Mustafa, MI: On the stabilization of the Timoshenko system by a weak nonlinear dissipation. Math. Methods Appl. Sci. 32, 454-469 (2009)
10. Messaoudi, SA, Pokojovy, M, Said-Houari, B: Nonlinear damped Timoshenko systems with second sound - global existence and exponential stability. Math. Methods Appl. Sci. 32, 505-534 (2009)
11. Muñoz Rivera, JE, Racke, R: Mildly dissipative nonlinear Timoshenko systems - global existence and exponential stability. J. Math. Anal. Appl. 276, 248-276 (2002)
12. Muñoz Rivera, JE, Racke, R: Timoshenko systems with indefinite damping. J. Math. Anal. Appl. 341, 1068-1083 (2008)
13. Said-Houari, B, Laskri, Y: A stability result of a Timoshenko system with a delay term in the internal feedback. Appl. Math. Comput. 217, 2857-2869 (2010)
14. Fridman, E: Introduction to Time-Delay Systems. Analysis and Control. Birkhäuser, Cham (2014)
15. Hale, JK, Verduyn Lunel, SM: Introduction to Functional-Differential Equations. Springer, New York (1993)
16. Datko, R, Lagnese, J, Polis, MP: An example on the effect of time delays in boundary feedback stabilization of wave equations. SIAM J. Control Optim. 24, 152-156 (1986)
17. Nicaise, S, Pignotti, C: Stability and instability results of the wave equation with a delay term in the boundary or internal feedbacks. SIAM J. Control Optim. 45, 1561-1585 (2006)
18. Nicaise, S, Pignotti, C: Stabilization of the wave equation with boundary or internal distributed delay. Differ. Integral Equ. 21, 935-958 (2008)
19. Pazy, A: Semigroups of Linear Operators and Applications to Partial Differential Equations. Springer, New York (1983)
20. Liu, Z, Zheng, S: Semigroups Associated with Dissipative Systems. Chapman & Hall/CRC Research Notes in Mathematics. Chapman & Hall/CRC, Boca Raton (1999)
Acknowledgements
This work was supported by the Fundamental Research Funds for the Central Universities with contract No. JBK150128.
Additional information Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
The authors read and approved the final manuscript.
|
In a previous post, I described how one can compute the point ${\bf S}$ on a line $r$ which is closest to a given point ${\bf X}$. There I assumed the line $r$ passed through two distinct points ${\bf P}$ and ${\bf Q}$ (see figure 1). In this post I will describe how one can use that technique to compute the closest point on a segment ${\bf PQ}$ to a point ${\bf X}$.
Before we consider the segment case, let's consider again the problem with a line. For a line, the coordinates of ${\bf S}$ can be computed using the equation below: $$ {\bf S} = {\bf P} + \lambda_{\bf S}({\bf Q} - {\bf P}) \label{post_ec3d5dfdfc0b6a0d147a656f0af332bd_eq_S} $$ where: $$ \displaystyle \lambda_{\bf S} = \frac{({\bf X} - {\bf P})\cdot ({\bf Q} - {\bf P})}{({\bf Q}-{\bf P})\cdot ({\bf Q}-{\bf P})} \label{post_ec3d5dfdfc0b6a0d147a656f0af332bd_lambda_closest_point_line_to_point} $$
Fig. 1: ${\bf S}$ is the point on the line $r$ which is closest to ${\bf X}$.
There are three cases which we need to consider in order to solve the problem with a segment instead of a line: $0 \lt \lambda_{\bf S} \lt 1$, $\lambda_{\bf S} \leq 0$ and $\lambda_{\bf S} \geq 1$. To understand why this is so, notice first that equation \eqref{post_ec3d5dfdfc0b6a0d147a656f0af332bd_eq_S} can be written as follows: $$ {\bf S} = {\bf P} + \lambda_{\bf S}({\bf Q} - {\bf P}) = {\bf P} + \lambda_{\bf S}{\bf v} \label{post_ec3d5dfdfc0b6a0d147a656f0af332bd_eq_S_v} $$ where ${\bf v} = {\bf Q} - {\bf P}$ is the vector which connects (and is directed from) ${\bf P}$ to ${\bf Q}$ (as shown in figure 1). Hence, since ${\bf S}$ is equal to the sum of ${\bf P}$ and $\lambda_{\bf S}{\bf v}$, it must be in between ${\bf P}$ and ${\bf Q}$ if $0 \lt \lambda_{\bf S} \lt 1$. If $\lambda_{\bf S} \geq 1$, ${\bf S}$ will be either directly at or "after" ${\bf Q}$ (indeed, for $\lambda_{\bf S}=1$, ${\bf S} = {\bf P} + ({\bf Q} - {\bf P}) = {\bf Q})$. When $\lambda_{\bf S} \leq 0$, ${\bf S}$ will be either "before" or directly at ${\bf P}$ since $\lambda_{\bf S}{\bf v}$ will then point away from ${\bf Q}$ or be the zero vector (if $\lambda_{\bf S} = 0$).
Since we want to compute the point ${\bf S}$ on the segment ${\bf PQ}$ which is closest to ${\bf X}$, we cannot have the point ${\bf S}$ lying outside of ${\bf PQ}$. This means whenever ${\bf S}$ would lie "before" ${\bf P}$ on the line $r$, the closest point from ${\bf PQ}$ to ${\bf X}$ is then ${\bf P}$. Similarly, if ${\bf S}$ would lie "after" ${\bf Q}$, the closest point from ${\bf PQ}$ to ${\bf X}$ is ${\bf Q}$. Figure 2 illustrates these facts.
Fig. 2: ${\bf S}$ is the point on the segment ${\bf PQ}$ which is closest to ${\bf X}$. The figure shows the three possible cases of interest. From left to right, the corresponding $\lambda_{\bf S}$ is such that $0 \lt \lambda_{\bf S} \lt 1$, $\lambda_{\bf S} \leq 0$ and $\lambda_{\bf S} \geq 1$.
To summarize, here is the algorithm which determines the point ${\bf S}$ on a segment ${\bf PQ}$ which is closest to a point ${\bf X}$ (it also works if ${\bf X}$ lies on ${\bf PQ}$):
1. compute $\lambda_{\bf S}$ using equation \eqref{post_ec3d5dfdfc0b6a0d147a656f0af332bd_lambda_closest_point_line_to_point} 2. if $\lambda_{\bf S} \leq 0$, then ${\bf S} = {\bf P}$ 3. if $\lambda_{\bf S} \geq 1$, then ${\bf S} = {\bf Q}$ 4. if $0 \lt \lambda_{\bf S} \lt 1$, then ${\bf S} = {\bf P} + \lambda_{\bf S}({\bf Q} - {\bf P})$
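As an illustration, here is a minimal sketch of this algorithm in Python (assuming NumPy arrays for the points; the function name is my own, not from the post):

```python
import numpy as np

def closest_point_on_segment(P, Q, X):
    """Return the point S on segment PQ closest to X (also works if X lies on PQ)."""
    P, Q, X = map(np.asarray, (P, Q, X))
    v = Q - P
    # lambda_S = (X - P).(Q - P) / (Q - P).(Q - P); guard the degenerate case P == Q
    denom = np.dot(v, v)
    if denom == 0.0:
        return P
    lam = np.dot(X - P, v) / denom
    if lam <= 0.0:
        return P
    if lam >= 1.0:
        return Q
    return P + lam * v

print(closest_point_on_segment((0, 0), (4, 0), (1, 2)))   # -> [1. 0.]
print(closest_point_on_segment((0, 0), (4, 0), (-3, 1)))  # -> [0 0] (clamped to P)
```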
|
I'm trying to find the expectation value of the spin in the $y$ direction after applying a $\pi/2$-$\pi/2$ sequence of pulses. In Slichter's Principles of Magnetic Resonance, he does a $\pi/2$-$\pi$ pulse which makes sense because that will get you the largest free induction signal you could get. I understand everything about the derivation for $\langle I_y\rangle$. What was nice about that derivation was that the rotation about the $X$-axis operator simply inverts $I_y$ (and $I_z$) such that the time evolution operators T still have the initial Hamiltonian in the z direction: $T(t,h_0) = e^{i\gamma h_0tI_z}$. So just like what Slichter did, you end up with terms $T^{-1}(t-\tau,h_0)\cdot T(\tau,h_0)$ resulting in 1 at the time of the echo $2\tau$. Then the rest is applying the $X$ operator to $I_y$. But, when it comes to the $\pi/2$-$\pi/2$ sequence, the integral looks like (ignoring integral over all spins):
$$\int\psi^*(0)X^{-1}(\pi/2)T^{-1}(\tau,h_0)e^{i\gamma h_0(t-\tau)I_y}\cdot I_z\cdot e^{-i\gamma h_0(t-\tau)I_y}T(\tau,h_0)X(\pi/2)\cdot \psi(0)d\tau_I$$
Reducing the time evolution operators to $1$ for $t=2\tau$ is no longer trivial. So I tried making the analogy that the coordinate system has been rotated by $\pi/2$ about the $y$-axis, as well as the Hamiltonian. From that, you'd have:
$$I_{x^{'}} = e^{-iI_y\phi} I_x e^{iI_y\phi} = I_x\cos\phi+I_z\sin\phi$$
$$I_{z^{'}} = e^{-iI_y\phi} I_z e^{iI_y\phi} = -I_x\sin\phi+I_z\cos\phi$$
$$I_{y^{'}} = I_y, \quad \phi = \gamma h_0(t-\tau)$$
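As a sanity check on conjugation formulas of this kind, for spin 1/2 one can work directly with the Pauli matrices ($I_j=\sigma_j/2$) and read off the coefficients numerically. The sketch below (my own, using SciPy) just prints them, since the signs of the $\cos$/$\sin$ terms depend on the rotation convention one adopts.

```python
import numpy as np
from scipy.linalg import expm

# Spin-1/2 operators I_j = sigma_j / 2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Ix, Iy, Iz = sx / 2, sy / 2, sz / 2

def rotated(op, phi):
    """Compute e^{-i I_y phi} op e^{+i I_y phi}."""
    return expm(-1j * phi * Iy) @ op @ expm(1j * phi * Iy)

phi = 0.7
for name, op in (("I_x", Ix), ("I_z", Iz)):
    R = rotated(op, phi)
    # Expand R in the basis {I_x, I_y, I_z}: since Tr(I_i I_j) = delta_ij / 2,
    # the coefficient on I_j is 2 Tr(R I_j).
    coeffs = [2 * np.trace(R @ J).real for J in (Ix, Iy, Iz)]
    print(name, "->", np.round(coeffs, 6),
          "(cos phi =", round(np.cos(phi), 6), ", sin phi =", round(np.sin(phi), 6), ")")
```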
Then, replacing $I_z$ from the integral gives two terms because of the $\cos$ and $\sin$. I then made the assumption that $h_0 \approx 0$ since we're trying to get as close to resonance as possible but small inhomogeneities are preventing this. So $\sin\phi \approx 0 ,\cos\phi \approx 1$ reducing the integral to just one term with:
$$e^{-i\gamma h_0\tau I_z}I_ze^{i\gamma h_0\tau I_z} = I_z$$
$$\langle I_{y,tot}(t)\rangle = -N\int p(h_0)dh_0\int\psi^*(0)X^{-1}(\pi/2)I_zX(\pi/2)d\tau = -N\langle I_y(0)\rangle$$
I interpret this as saying there was not a perfect $z$ magnetization before the first pulse, due to inhomogeneities in $H_0$. Any help would be greatly appreciated.
|
Sodium has a volume expansion coefficient of $15 \times 10^{-5}\ \mathrm{K}^{-1}$. Calculate the percentage change in the Fermi energy as the temperature is raised from $T = 0\,\mathrm{K}$ to $T = 300\,\mathrm{K}$.
My attempt at the solution is below:
The volume expansion coefficient is given by $\alpha _v = \frac{1}{V}\left( \frac{\partial V}{\partial T} \right)$. Then, just using the definition of the Fermi energy, I get:
$$E_f = \frac{ \hbar ^2 k_f^2}{2m} = \frac{ \hbar ^2 \left( 3 \pi^2 \frac{N}{V} \right)^{2/3}}{2m}$$
where $N$ is the number of electrons that can contribute to conduction (they are in the conduction band, I think, but their energy must be less than $E_f$ because they are inside the Fermi surface, so I am unsure about that)
And $V$ is some volume containing the electrons. Then it follows that:
$$dE_f = \left( \frac{\partial E_f}{\partial N} \right) dN + \left( \frac{\partial E_f}{\partial V} \right) dV$$
$$=\frac{2}{3}\,\frac{ \hbar ^2 \left( 3 \pi^2 \frac{1}{V} \right)^{2/3}}{2m}N^{-1/3}\,dN - \frac{2}{3}\,\frac{ \hbar ^2 \left( 3 \pi^2 N \right)^{2/3}}{2m}V^{-5/3}\,dV$$
Now $dV$ I know from the thermal expansion coefficient, but am I supposed to evaluate this new $dN$ term as well? I had the idea that each Na atom contributes 1 conduction electron, but from there I don't know how many atoms or what else. Also I find it strange that I am asked to evaluate the Fermi energy at a non-zero temperature.
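For what it's worth, under the common textbook assumption that $N$ is held fixed (one conduction electron per Na atom), $dN=0$ and $E_F\propto V^{-2/3}$, so the estimate reduces to a one-liner. The sketch below makes that assumption explicit and neglects the small Sommerfeld correction at 300 K.

```python
# Minimal estimate assuming N is fixed, so E_F ~ V^(-2/3) and
# dE_F/E_F = -(2/3) * dV/V = -(2/3) * alpha_V * dT   (dN = 0).
alpha_V = 15e-5        # volume expansion coefficient of Na, 1/K
dT = 300.0             # temperature change, K

dV_over_V = alpha_V * dT
dEf_over_Ef = -(2.0 / 3.0) * dV_over_V
print(f"dV/V     = {dV_over_V:.3%}")
print(f"dE_F/E_F = {dEf_over_Ef:.3%}")   # about -3 %
```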
|
o.m.'s conclusions are correct (if you like my answer you should up vote his too).
Theoretically Yes. Realistically Possibly.
Plugging in realistic efficiencies shows that certain sized craft could do this in theory. However, solar power isn't very potent and we don't have a good mechanism for turning solar power into propulsive power. So although physics doesn't forbid it, we don't have anything engineered that could do it now.
Assumptions, Constraints, and other trivia
I'm not deriving much of this from first principles; I'm starting from my basic background and education. If you need the justification for some of these, please read the references.
Derivation
Use the 4 equations stated above:
$T = W \cdot \frac{C_{drag}}{C_{lift}}$ $W = \left(\sqrt{A}\right)^3$ $\frac{C_{lift}}{C_{drag}} = \frac{4 \cdot \left(M + 3\right)}{M} \rightarrow \frac{C_{drag}}{C_{lift}} = \frac{M}{4 \cdot \left(M + 3\right)}$ $P = T \cdot v$
Substitute and simplify:
$$P = W \cdot \frac{C_{drag}}{C_{lift}} \cdot v = \left(\sqrt{A}\right)^3 \cdot \frac{M}{4 \cdot \left(M + 3\right)} \cdot v$$
Then solve for various flight regimes.
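A small sketch of that substitution (my own, with illustrative Mach number, speed and solar flux rather than the spreadsheet's values, and 100 % conversion efficiency as in the first chart):

```python
import math

def required_thrust_power(area_m2, mach, speed_ms):
    """P = W * (C_drag / C_lift) * v with W = (sqrt(A))^3 and
    C_lift/C_drag = 4*(M+3)/M, following the relations stated above."""
    weight = math.sqrt(area_m2) ** 3
    drag_over_lift = mach / (4.0 * (mach + 3.0))
    return weight * drag_over_lift * speed_ms

# Illustrative numbers only (not taken from the spreadsheet):
solar_flux = 1000.0          # W/m^2, rough sea-level insolation
for area in (0.5, 1.0, 2.0, 4.0):
    P_needed = required_thrust_power(area, mach=0.3, speed_ms=100.0)
    P_solar = solar_flux * area            # 100 % efficient collection
    print(f"A = {area:4.1f} m^2: need {P_needed:7.1f} W, collect {P_solar:7.1f} W, "
          f"coverage {100 * P_solar / P_needed:6.1f} %")
```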
Solution
I plugged this into a spreadsheet (along with calculations for speed of sound, mach number, etc.) and this is what I got:
I was off by a factor of 1000. I've updated this chart. Green cells show possible size/altitude combinations given 100% efficiency in solar power collection, engine thermal efficiency, propulsive efficiency, etc. Edit1 about 3 hours after initial answer
I neglected to multiply the solar power by the planform area. I've updated this chart. Green cells show possible size/altitude combinations given 100% efficiency in solar power collection, engine thermal efficiency, propulsive efficiency, etc. Edit2 about 3 hours after last edit
I have not changed the answer substantively. Instead for the realistic scenario I change the $\frac{C_l}{C_d}$ to 50% of the maximum value. Then I converted the charts to read what % of propulsive power the solar power could provide. So now the answer is in % rather than Watts. Edit3 about 2 days after last edit
Required Thrust Power (W) - for 100% efficient components:
The spreadsheet shows the minimum thrust power required for straight and level flight at the required velocity at the altitude shown on the left, for an aircraft planform area shown across the top.
What this says is that your aircraft must remain quite small (smaller than $2 m^2$). This vehicle could never be manned (but that's not a requirement, so it's OK).
If you plug in realistic efficiencies for these parameters (overall optimistic engine efficiency could be as high as 40% and very optimistically solar efficiency of 30%***), you would get a chart that looked more like this:
Required Thrust Power (W) - for realistically efficient components:
So it looks like a well-designed and small enough aircraft could be made. Granted, I don't know of a good mechanism for turning solar power into thrust, so there would still be quite a bit of engineering work left to do, but the physics doesn't prohibit such a craft.
And another thing
Although someone might be able to someday build such a craft, remember that anything with moving parts will require lubricants. Even if you meet its power requirements to keep it going, eventually it'll run out of these lubricants and/or its moving parts will break.
Many US aircraft have the ability to perform mid-air refueling. From a propulsive perspective, they can stay airborne forever. Practically speaking they have flight duration limitations based upon things like provisions for the crew, lubrication for moving parts, etc.
**Aircraft engine flight performance by type and flight regime:
***Solar cell efficiencies:
|
Some Properties of Common Tangents What is this about? Problem
Let $HE$ and $FG$ be two internal tangents of non-overlapping circles $(A)$ and $(C),$ as shown below.
Let $FH$ intersect $(A)$ a second time in $I$ and $(C)$ a second time in $J.$ Define $L$ to be the intersection of $EH$ and $FG,$ and $K$ the intersection of $EI$ and $GJ.$ Then
$FI=HJ,$
$\angle KIJ = \angle KJI,$
Points $E,L,G,K$ are concyclic.
Hint Solution
By the Power of a Point theorem, $FG^{2}=FH\cdot FJ$ and also $HE^{2}=HF\cdot HI.$ Then $HE=FG$ implies $FI=HJ.$
Since angles $ELF$ and $GLH$ are vertical and, hence, equal, the arcs subtended by chords $EF$ in $(A)$ and $GH$ in $(C)$ have the same angular measure, making the inscribed angles $GJH$ and $EIF$ equal, so that $\angle KIJ = \angle KJI.$
Finally, since $FG$ and $EH$ are tangent to $(C),$ they are orthogonal to the radii of $(C)$ at the points of tangency. It follows that
\( \begin{align} \angle ELG &= 180^{\circ} -\angle GLH \\ &= \angle GCH \\ &= 2\angle GJH = 2\angle KJI. \end{align} \)
On the other hand, $\angle IKJ=180^{\circ}-\angle KIJ-\angle KJI=180^{\circ}-2\angle KJI,$ and $\angle EKG=\angle IKJ.$ Therefore, $\angle EKG + \angle ELG=180^{\circ},$ making quadrilateral $ELGK$ cyclic.
Also, the two external tangents have similar properties:
Acknowledgment
The problem with solution has been posted by Emmanuel Antonio José García at the CutTheKnotMath facebook page.
|
Why does there exist a non-split sequence with the condition that $\mathrm{pd} M = \infty$?
Remarks. I am reading
wherein on p. 476 there is the
Theorem. Let $\Lambda$ be a left artinian ring and assume that each indecomposable finitely generated left $\Lambda$-module has finite projective dimension or finite injective dimension. Then $\Lambda$ has finite left global dimension.
Proof of the Theorem. Assume that $\Lambda$ is a left artinian ring of infinite global dimension where each indecomposable module has finite projective dimension or finite injective dimension. We shall show that this leads to a contradiction. Clearly we may assume that no simple module has both infinite projective dimension and infinite injective dimension. Let $S$ be a simple module of infinite projective dimension. Recall that if $$\cdots \longrightarrow P_{n} \longrightarrow P_{n-1} \longrightarrow \cdots \longrightarrow P_{1}\longrightarrow P_{0} \longrightarrow S\longrightarrow 0$$ is a minimal projective resolution of $S$ and $T$ is a simple $\Lambda$-module, then $\mathrm{Ext}_{\Lambda}^{n}(S,T)\neq0$ if and only if the projective cover of $T$ is a direct summand of $P_{n}$. Since $S$ has infinite projective dimension and $\sup\{{\rm id}\ Y \mid Y$ is simple and of finite injective dimension$\}$ is finite, say equal to $n$, each simple module $T$ with $\mathrm{Ext}_{\Lambda}^{n+1}(S,T)\neq0$ is of infinite injective dimension. Now since $S$ has infinite projective dimension, there exists a direct summand $M$ of $\Omega^{n}(S)$ and a nonsplit exact sequence $0 \longrightarrow T \longrightarrow E\longrightarrow M \longrightarrow 0$ with $\mathrm{pd} M=\infty$.
Here $\mathrm{pd}\, M$ denotes the projective dimension of $M$, ${\rm id}\, Y$ denotes the injective dimension of $Y$, and $\Omega^{n}(S)$ is the $n$th syzygy of $S$.
I cannot understand that why there exists a direct summand $M$ of $ \Omega^{n}(S)$ and a nonsplit exact sequence $0 \longrightarrow T \longrightarrow E\longrightarrow M \longrightarrow 0$ with $\mathrm{pd} M=\infty$.
In fact, since $\mathrm{Ext}_{\Lambda}^{1}(\Omega^{n}(S),T)=\mathrm{Ext}_{\Lambda}^{n+1}(S,T)\neq0$, we know that there exists a direct summand $M$ of $\Omega^{n}(S)$ such that $\mathrm{Ext}_{\Lambda}^{1}(M,T)\neq0$. But why is $\mathrm{pd} M=\infty$?
|
Most electronic components dissipate heat whenever a current flows through them. The amount of heat depends on the power, device characteristics, and circuit design. Besides the components, the resistance of the electrical connections, copper traces, and vias contribute to some heat, and power losses.
To avoid failures or circuit malfunctions, designers should aim at producing PCBs that operate and remain within safe temperature limits. While some circuits will work without additional cooling, there are situations where adding heat sinks, cooling fans, or a combination of several mechanisms is inevitable.
This article will discuss design practices that ensure better thermal management, including some common methods for removing excess heat from a PCB.
A themal image of a LattePanda single-board computer. Image by Gareth Halfacree. [CC BY-SA 2.0]
Good PCB Design Practices
Major issues to consider during design are:
Performance data and dimensions of the components
Major heat-dissipating components
Size of the PCB
PCB material, layout, and component placement
Mounting peripherals
Temperature of the application environment
Amount of heat dissipated
Appropriate cooling methods, i.e., cooling fans, heat sinks, etc.
A best practice is to manage the temperature at the component and system level while considering the operating environment. Factors to consider when deciding on a cooling mechanism include the package properties of the semiconductor, heat dissipation properties, etc. This information is usually available from the manufacturer’s datasheet.
Natural convection cooling is adequate for PCBs with small amounts of heat dissipation. However, PCBs with excess heat require heat sinks, heat pipes, fans, thick copper or a combination of several cooling techniques.
Reduce Thermal Resistance
A low thermal resistance ensures that the heat is transferred through the material much faster. This resistance is directly proportional to the length of the thermal path and inversely proportional to the cross-sectional area and thermal conductivity of the thermal path.
Thermal resistance \[\theta = \frac {t}{A \times K}\]
Where
t is the thickness of the material
K is the thermal conductivity factor
A is the cross-sectional area
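As a quick illustration of the formula (with typical, not article-supplied, conductivity values), compare a bare FR-4 cross-section with a copper pour of the same footprint:

```python
def thermal_resistance(thickness_m, area_m2, k_w_per_mk):
    """theta = t / (A * K), in kelvin per watt."""
    return thickness_m / (area_m2 * k_w_per_mk)

# Typical (illustrative) conductivities: copper ~ 385 W/mK, FR-4 ~ 0.3 W/mK.
area = 10e-3 * 10e-3                 # 10 mm x 10 mm footprint
print("1.6 mm FR-4 :", thermal_resistance(1.6e-3, area, 0.3), "K/W")
print("35 um copper:", thermal_resistance(35e-6, area, 385.0), "K/W")
```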
Designers often reduce thermal resistance by:
Using a thinner PCB to reduce the thermal path
Adding thermal vias for vertical heat conduction
Using copper foil and thick tracks for horizontal heat conduction
An overview of thermal conduction by thermal vias. Image used courtesy of ROHM Semiconductor
Identify Components with the Potential to Dissipate More Heat
It is important to understand which components generate the most heat and decide on the best removal mechanism. Using the manufacturer's datasheet, a designer should find out the thermal rating and characteristics of the device. Most often, the manufacturers will provide guidelines on how to remove excess heat.
Consider Component Placement, Orientation, and Organization
Components that dissipate more power should be located in areas providing the best heat removal. This should not be at the corners or edges of the PCB unless there is a heat sink. Placing components somewhere near the middle ensures heat dissipation around the device; however, there should be enough space for adequate air circulation.
Although it may be difficult to ensure an even temperature distribution, it is important to avoid concentrating the high power components together. Distributing them evenly prevents hot spots.
Another good practice is to place sensitive components such as small ICs, transistors, and electrolytic capacitors in low-temperature areas. In circuits that rely on convection cooling, arranging components such as ICs in long horizontal or vertical rows helps with thermal management.
How to Identify Thermal Problems with Your PCB
Designers can use a wide range of techniques to identify potential problems. Popular approaches include the use of thermal analysis tools, visual inspections, and infrared cameras.
Conduct Thermal Analysis
Performing a thermal analysis establishes how the components and PCB will behave at different temperatures and conditions. The analysis provides designers with an idea of the heat generation and transfer within the circuit.
Designers can then use analysis results and simulations to come up with techniques that will help them better manage the heat.
PCB thermal analysis. Image Courtesy of PADS
Visually Inspect the Board without Power
Visual inspection is an easy way to find signs of overheating, burnt or partly damaged components, dry joints, arcing, etc. Some of the visible signs include bulging components, burnt components, and discolored spots on the PCB. In addition to the visual analysis, a smell from the board can also point to heating issues.
Use Infrared Cameras
Test engineers can use IR cameras to evaluate powered prototype boards for heat problems and identify issues invisible to the naked eye. In addition to showing areas where there is excess heat, the cameras can sometimes identify counterfeit or defective parts whose thermal signatures differ from those of genuine components.
Thermal imaging cameras can also detect the location where the tracks have insufficient solder, hence higher resistance and more heat dissipation.
How to Remove Heat from Circuit Boards
There are several techniques that designers can use to remove heat from components and PCBs. The common mechanisms include heat sinks, cooling fans, heat pipes, and thick copper. Most often, circuits generating more heat require more than one technology. For example, cooling a laptop processor and display chips requires a heat sink, heat pipe, and a fan.
Heat Sinks and Cooling Fans
A heat sink is a thermally conductive metallic part with a large surface area, usually attached to components such as power transistors and switching devices. A heat sink allows the component to dissipate its heat over a larger area and transfer that heat to the surroundings. In some cases, such as high current power supplies, adding a cooling fan aids in faster and better heat removal.
Heat Pipes
Heat pipes are suitable for compact devices with limited space. The pipes provide a reliable and cost-effective passive heat transfer. Benefits include a vibration-free operation, good thermal conductivity, low maintenance, and quiet operation since they have no moving parts.
A typical pipe contains small amounts of nitrogen, water, acetone, or ammonia. The working fluid absorbs heat and evaporates, and the vapor travels along the pipe. The pipe has a condenser where, as the vapor passes through, it condenses back to its liquid form and the cycle begins again.
Thermal Via Arrays
Thermal vias increase the mass and area of the copper, reducing the thermal resistance and improving heat dissipation from the critical components through conduction. As such, better performance is achieved when the vias are placed closer to the heat source.
In some applications, heat from a device, such as a thermally optimized IC, is conducted away through the combination of a thermal via array and pad. This eliminates the need for a heat sink while improving the heat dissipation through the PCB.
Thermal via array. Image used courtesy of Texas Instruments
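A rough sketch of why via arrays help (my own model, not from the article: each plated via is treated as a copper annulus spanning the board thickness, with n vias conducting in parallel; the geometry numbers are illustrative):

```python
import math

def via_thermal_resistance(board_thickness_m, drill_d_m, plating_m, k_cu=385.0):
    """Single plated-through via: copper annulus cross-section, theta = t / (A * K)."""
    r_outer = drill_d_m / 2.0
    r_inner = r_outer - plating_m
    area = math.pi * (r_outer**2 - r_inner**2)
    return board_thickness_m / (area * k_cu)

theta_one = via_thermal_resistance(1.6e-3, 0.3e-3, 25e-6)   # 0.3 mm drill, 25 um plating
for n in (1, 4, 9, 16):
    print(f"{n:2d} vias: {theta_one / n:6.1f} K/W")   # n vias conduct in parallel
```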
Thick Copper Traces
Using more copper provides a larger surface area that helps in heat distribution and dissipation. Such PCBs are suitable for high power applications.
Conclusion
PCB thermal management techniques depend on a number of factors including the amount of heat the components and circuit dissipate, the environment, the overall design, and the enclosure. If heat generation is low, the circuit can work without additional cooling. However, if the circuit generates higher amounts of heat, there should be a cooling mechanism to take away the heat.
To provide thermally optimized PCBs, designers should consider everything that influences temperature right from the concept stage and throughout the design and manufacturing stages.
|
Cardinality
Cardinality is the bare notion of size: we measure "how many elements" are in a given set. Finite sets are simple that way: a set $A$ is finite if there is a natural number $k$ such that $A=\{a_1,\ldots,a_k\}$. That is to say, there exists a bijection between $A$ and $\{0,\ldots,k-1\}$. We generalize this idea and say that two sets are of the same cardinality if there exists a bijection between them (often we say "equinumerous" or "equipotent").
This gives, in turn, an equivalence relation defined on all sets. However, in ZF (and similar set theories) the equivalence classes are not sets; for example, the collection of all singletons is not a set. In order to formulate the notion of cardinality within a model of set theory we would have to make each class into a set.
Under the axiom of choice every set can be well ordered. This means that we can simply choose the least ordinal bijectible with the set $A$, and define that as the cardinality of $A$, often denoted by $|A|$. Not assuming the axiom of choice we may have sets which cannot be well ordered, these sets are not in bijection with any ordinal at all. We define the cardinality of $A$ as follows:
If $A$ can be well ordered, let $|A|$ be the least ordinal $\alpha$ for which there is a bijection with (equivalently, a surjection onto) $A$. If $A$ cannot be well ordered, let $\alpha$ be the least ordinal for which there exists $B\in V_\alpha$ such that $B$ is in bijection with $A$, and let $$|A|=\{B\mid B\in V_\alpha\land \exists f:A\to B\text{ bijection}\}$$
This may seem a bit cluttered, however this is a clever use of the axiom of regularity which allows us to shrink cardinality equivalence classes into sets. Now we can say that $A$ and $B$ has the same cardinality if and only if $|A|=|B|$ if and only if there is a bijection between $A$ and $B$.
Ordering of cardinals
We want to be able and say "the set $A$ is bigger than the set $B$", which in turn means that we would like to have an order defined on cardinals. This order should obey several properties:
If $A\subseteq B$ then $|A|\le|B|$;
if $|A|\le|B|$ and $|B|\le|C|$ then $|A|\le|C|$;
every set is not bigger than itself, so $|A|=|A|$;
if $|A|\le|B|$ and $|B|\le|A|$, so that neither set is bigger than the other, we would like $|A|=|B|$.
This can be done by defining $|A|\le|B|$ if and only if there exists $f:A\to B$ which is injective. We will show that these properties indeed hold:
If $A\subseteq B$ then $f(a)=a$ is an injective function showing $|A|\le|B|$.
If $|A|\le|B|$ and $|B|\le|C|$ then there are injective $f:A\to B$ and $g:B\to C$, and $g\circ f:A\to C$ is injective, as wanted.
Indeed $f(a)=a$ is a bijection from $A$ onto itself, so $|A|=|A|$.
Lastly, the Cantor–Schröder–Bernstein theorem ensures (without using the axiom of choice) the last condition: $|A|\le|B|$ and $|B|\le|A|$ together imply $|A|=|B|$.
Using injective functions works fine, but what about surjective functions? For finite sets the pigeonhole principle ensures that if there is an injection but no bijection, then there is no surjection. We can also define the surjective relation:
$$|A|\leq^\ast|B|\iff\exists f:B\to A\text{ surjective}$$
In this ordering, we indeed have that if $A\subseteq B$ then $|A|\leq^\ast|B|$; as well $|A|=^\ast|A|$, we even have $|A|\leq^\ast|B|$ and $|B|\leq^\ast|C|$ implying $|A|\leq^\ast|C|$. However we do not have a dual theorem to the Cantor-Schroder-Bernstein theorem, namely it is consistent that $|A|<|B|$ but $|B|\leq^\ast|A|$.
Well orderable cardinals
Finite sets can be well ordered, of course, but infinite sets can be well ordered too. Countably infinite sets are by definition in bijection with $\omega$. Sets of ordinals have a natural order which is a well order; using Hartogs numbers we can deduce that if $\aleph_\alpha$ exists then $\aleph_\alpha^+$ exists, and if we iterate this set-many times then the increasing union gives us a new cardinal. The result is a proper class of well orderable cardinals.
If $\alpha$ is an ordinal, we say that it is an initial ordinal if no $\beta<\alpha$ is in bijection with $\alpha$. We can see that the initial ordinals are exactly the ordinals which represent well orderable cardinalities; these are the ones we use for the $\aleph$-numbers.
Non-well orderable cardinals
Assuming the negation of the axiom of choice, some sets cannot be well ordered, and as a result some cardinals are non-$\aleph$ ones. Examples include infinite Dedekind-finite cardinals; in some models of ZF the real numbers cannot be well ordered, and their cardinality then also forms a non-$\aleph$ cardinal.
|
This question is related to this Helicity states.
Suppose we have $k=[\omega,0,0,\omega]$. In Weinberg's book The Quantum Theory of Fields: Volume I he defines the state$|k,\sigma\rangle$ as an eigenstate of the operator $J_{3}$ that is
\begin{equation} J_{3}|k,\sigma\rangle=\sigma|k,\sigma\rangle \end{equation} where ${\mathbf{J}}=(J_1,J_2,J_3)$ are the rotation generators. Since the 3-momentum and the 3-component of the angular momentum are pointing in the same direction, this is a state of helicity $\sigma$.
Then he was able to show that under a Lorentz transformation, a massless particle state should transform like this: $$U(\Lambda)|k,\sigma\rangle=e^{i\theta\sigma}| \Lambda k,\sigma\rangle.$$ Now before the Lorentz transformation we had $$J_3(|k\rangle\otimes |\sigma \rangle)=|k\rangle\otimes J_3|\sigma \rangle=\sigma(|k\rangle\otimes |\sigma \rangle)=\sigma|k,\sigma\rangle$$
Now, after the Lorentz transformation, since the parameter $\sigma$ does not change, shouldn't we have $$J_3(|\Lambda k\rangle\otimes |\sigma \rangle)=|\Lambda k\rangle\otimes J_3|\sigma \rangle=\sigma(|\Lambda k\rangle\otimes |\sigma \rangle)=\sigma|\Lambda k,\sigma\rangle$$ ?
My main problem is this: if before the Lorentz transformation we had $ | k\rangle\ \otimes J_3|\sigma \rangle= |k\rangle\ \otimes \sigma|\sigma \rangle$, and since under a Lorentz transformation only the momentum part of the direct product state changes, $$\Lambda(| k\rangle\otimes |\sigma \rangle)=e^{i\theta\sigma}|\Lambda k\rangle\otimes |\sigma \rangle$$
and since $J_3$ acts only on the spin part, why $$J_3(|\Lambda k\rangle\otimes |\sigma \rangle)=|\Lambda k\rangle\otimes J_3|\sigma \rangle\neq \sigma(|\Lambda k\rangle\otimes |\sigma \rangle)$$ ?
Isn't this like saying that $J_3|\sigma \rangle=\sigma|\sigma \rangle$ and that $J_3|\sigma \rangle \neq \sigma|\sigma \rangle$?
Can anyone give me a mathematical proof why $J_3|\Lambda k,\sigma\rangle \neq \sigma|\Lambda k,\sigma\rangle$?
|
This library is a customizable 2D math rendering tool for calculators. It can be used to render 2D formulae, either from an existing structure or TeX syntax.
\frac{x^7 \left[X,Y\right] + 3\left|\frac{A}{B}\right>} {\left\{\frac{a_k+b_k}{k!}\right\}^5}+ \int_a^b \frac{\left(b-t\right)^{n+1}}{n!} dt+ \left(\begin{matrix} \frac{1}{2} & 5 \\ -1 & a+b \end{matrix}\right)
List of currently supported elements:
Fractions (\frac)
Subscripts and superscripts (_ and ^)
Automatically-sized delimiters (\left and \right)
Big operators (\sum, \prod and \int)
Vectors (\vec) and limits (\lim)
Roots (\sqrt)
Matrices (\begin{matrix} ... \end{matrix})
Features that are partially implemented (and what is left to finish them):
See the TODO.md file for more features to come.
First specify the platform you want to use :
cli is for command-line tests, with no visualization (PC)
sdl2 is an SDL interface with visualization (PC)
fx9860g builds the library for fx-9860G targets (calculator)
fxcg50 builds the library for fx-CG 50 targets (calculator)
For calculator platforms, you can use --toolchain to specify a different toolchain than the default sh3eb and sh4eb. The install directory of the library is guessed by asking the compiler; you can override it with --prefix.
Example for an SDL setup:
% ./configure --platform=sdl2
Then you can make the program, and if it’s a calculator library, install it. You can later delete Makefile.cfg to reset the configuration, or just reconfigure as needed.
% make
% make install   # fx9860g and fxcg50 only
Before using the library in a program, a configuration step is needed. The library does not have drawing functions and instead requires that you provide some, namely:
Drawing a pixel (TeX_intf_pixel)
Drawing a line (TeX_intf_line)
Measuring text (TeX_intf_size)
Drawing text (TeX_intf_text)
The three rendering functions are available in fxlib; for monospaced fonts the fourth can be implemented trivially. In gint, the four can be defined as wrappers for dpixel(), dline(), dsize() and dtext().
The type of formulae is TeX_Env. To parse and compute the size of a formula, use the TeX_parse() function, which returns a new formula object (or NULL if a critical error occurs). The second parameter display is set to non-zero to use display mode (similar to \[ .. \] in LaTeX) or zero to use inline mode (similar to $ .. $ in LaTeX).
char *code = "\\frac{x_7}{\\left\\{\\frac{\\frac{2}{3}}{27}\\right\\}^2}";
struct TeX_Env *formula = TeX_parse(code, 1);
The size of the formula can be queried through formula->width and formula->height. To render, specify the location of the top-left corner and the drawing color (which will be passed to all primitives):
TeX_draw(formula, 0, 0, BLACK);
The same formula can be drawn several times. When it is no longer needed, free it with TeX_free():
TeX_free(formula);
|
Suppose I have data points $(x_1^1, x_2^1), (x_1^2, x_2^2), (x_1^3, x_2^3), \ldots$ in $\mathbf{R}^2$ that fall in one of two classes, $y^i=0$ or $y^i=1$. I can find a linear separator for these points using logistic regression by finding $b_0, b_1, b_2$ that maximize the likelihood $$P\left(y=1\;\middle|\; (x_1, x_2)\right) = \frac{1}{1+e^{-(b_0 + b_1x_1 + b_2x_2)}}$$
based on the data. We can do this by gradient ascent, for example. The line that separates the points is given by the set of points $(x_1', x_2')$ that satisfy $$0.5 = \frac{1}{1+e^{-(b_0 + b_1x_1' + b_2x_2')}},$$ which yields $$b_0 + b_1x_1' + b_2x_2' = 0.$$
Right?
Another way to look at this problem is as a regression problem: We can find parameters $\beta_0, \beta_1, \beta_2$ that minimize $$\sum_d (\beta_0 + \beta_1x_1^d + \beta_2x_2^d-y^d)^2.$$ We can do this using the normal equations or gradient descent... So now $y=\beta_0 + \beta_1x_1 + \beta_2x_2$ can be viewed in $3d$ as the plane that "best" fits the data? I think the set of points $(x_1', x_2')$ that satisfy $0.5=\beta_0 + \beta_1x_1' + \beta_2x_2'$ will be the same set of points that were found in the first method above, but I'm having trouble seeing why. Thinking about the data as two big clusters in $3d$ seems to be throwing me off. Can someone shed some light on this?
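A quick numerical comparison (assuming scikit-learn and 0/1 labels; in general the two boundary lines are close but need not coincide exactly, so this sketch simply prints both sets of coefficients for inspection):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 500
X0 = rng.normal([-1.0, -1.0], 1.0, size=(n, 2))   # class y = 0
X1 = rng.normal([+1.0, +1.0], 1.0, size=(n, 2))   # class y = 1
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]

# Logistic regression: boundary is b0 + b1*x1 + b2*x2 = 0 (C large ~ unpenalized)
logit = LogisticRegression(C=1e6).fit(X, y)
print("logistic:", logit.intercept_[0], logit.coef_[0])

# Least squares on the 0/1 labels: boundary is beta0 + beta1*x1 + beta2*x2 = 0.5
lin = LinearRegression().fit(X, y)
print("least squares (intercept shifted by 0.5):", lin.intercept_ - 0.5, lin.coef_)

# Compare the boundary slopes -b1/b2 in the (x1, x2) plane
print("slope logistic :", -logit.coef_[0][0] / logit.coef_[0][1])
print("slope least sq.:", -lin.coef_[0] / lin.coef_[1])
```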
|
, and to attain this field in specific regions of the brain, the electric current should pass through different head layers via skin, fat, skull, meninges, and cortex (part of the brain). In order to model the brain, different layers should be considered, including gray and white matters.The meninges, three layers of protective tissue, cover the outer surface of the central nervous system (brain and spinal cord) and comprise three connective tissue layers viz. (from the innermost to the outermost layer) the pia mater, arachnoid and the dura mater. The meninges also
Kathrin Badstübner, Marco Stubbe, Thomas Kröger, Eilhard Mix and Jan Gimsa
Education and Research (BMBF, FKZ 01EZ0911). The custom-made stimulator system was developed in cooperation with the Steinbeis company (STZ1050, Rostock, Germany) and Dr. R. Arndt (Rückmann & Arndt, Berlin, Germany).
Lisa Röthlingshöfer, Mark Ulbrich, Sebastian Hahne and Steffen Leonhardt
magnetic field strength (h) are assigned to the edges. Hence, a system of equations, called the Maxwell-Grid Equations, has to be solved for the whole calculation domain, where each cell is described by:
$$C\overrightarrow{e}=-\frac{\partial \overrightarrow{b}}{\partial t},\qquad \tilde{C}\overrightarrow{h}=-\frac{\partial \overrightarrow{d}}{\partial t}+\overrightarrow{j}\tag{1}$$
$$\tilde{S}\overrightarrow{d}=q,\qquad S\overrightarrow{b}=0\tag{2}$$
electrodes are near-to constant because of the high resistance to current of the stratum corneum in the considered frequency range [3]. This allows us to rewrite the boundary conditions, Eqs. 5-7, between the probe and the uppermost skin layer n, stratum corneum, as (we drop the subindex 'eff' for notational convenience in the analysis)
$$-\sigma_{n}\frac{\partial\Phi(r,\mathcal{H}_{n})}{\partial z}=\sum_{j=1}^{m}\frac{I_{j}}{A_{j}}\left[U(R_{2j-1}-r)-U(R_{2j-2}-r)\right],$$
|
If we describe spacetime with a Lorentzian manifold, it is always possible to choose a coordinate system such that at any particular point $x^\alpha$, the components of the metric are: $$ g_{\mu\nu}(x^\alpha) = \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 &-1 & 0 & 0 \\ 0 & 0 &-1 & 0 \\ 0 & 0 & 0 &-1 \end{array} \right) $$
But our freedom is much greater than this. In curved spacetime, the equivalence principle suggests we can choose coordinates such that the metric is of that form at every point along a chosen time-like geodesic. And in flat spacetime, we can see an explicit example that it doesn't even need to be a geodesic, for the Rindler metric has that form at every point of a particular worldline with constant proper acceleration. I have a feeling this is possible for any time-like worldline.
So my question is:
Given a coordinate system and metric for a Lorentzian manifold and a time-like worldline on this spacetime, is it always possible to find a coordinate transformation such that
for every point on the world line the components of the metric in this coordinate system are just (1,-1,-1,-1) on the diagonal?
I realize that even for simplified cases (say a geodesic on the Schwarzschild background), such a coordinate system could be incredibly complicated. So if someone creates an incredibly complicated explicit construction, please also show that the solution to the equations you setup exist with some kind of existence proof.
This originally started from trying to find such a local coordinate chart for a free falling observer towards a blackhole, but realized I didn't know a good mathematical way to represent such coordinate freedom to get me started. Eventually I ended up pondering this current question. So even if you can't give a full answer, but can suggest some mathematical tools or where to read up on them, any help would be appreciated.
|