I'm trying to teach some secondary school students how to complete the square. The goal is to rewrite: $$y = ax^2 + bx + c \ \ \Rightarrow \ \ y = a(x-h)^2 + k$$ The first thing I did was to ask them to confirm, via FOIL: $$\left( x + \tfrac{b}{2} \right) \left( x + \tfrac{b}{2} \right) \ \ = \ \ x^2 + bx + \tfrac{b^2}{4}$$ With this as a guide, they were able to rewrite something like: $$y = x^2 + 6x - 10 \ \ \Rightarrow \ \ y = (x + 3)^2 -19$$ Problems occur when the leading coefficient does not equal one. So far, I am not able to explain why I need to reduce the leading coefficient to $1$ before completing the square. (Moreover, some of them are having problems with distributing.) For example: \begin{align} y \ \ & = \ \ 2x^2 + 8x - 9 \\ & = \ \ 2(x^2 + 4x - 4.5) \\ & = \ \ 2(x^2 + 4x + 4 - 8.5) \\ & = \ \ 2(x^2 + 4x + 4) - 17 \ \ = \ \ 2(x + 2)^2 - 17 \end{align}
This may not really answer your question, and it may be inappropriate to the level of your class, but I find the idea of looking at an algebraic manipulation algebraically is at times helpful. The whole point of doing algebra is to engage in the kind of thinking I give below, so I think there is something to gain from taking this sort of "abstract" approach. The idea here is to discover how arbitrary $h$ and $k$ and indeed $A$ must relate to $a,b,c$ in order that $$ y = ax^2+bx+c = A(x-h)^2+k. $$ This is itself an algebra problem. Do algebra to do algebra.
Consider $$ ax^2+bx+c = A(x^2-2xh+h^2)+k $$ or $$ ax^2+bx+c = Ax^2-2hAx+Ah^2+k $$ and require that these be the same polynomial. Hence, equating coefficients, $$ \begin{array}{cc} x^2: & a=A \\ x: & b=-2Ah \\ 1: & c = Ah^2+k \end{array} $$ So the $h$ and $k$ we seek are given by: $$ A = a, \ \ h = -\frac{b}{2a}, \ \ k = c - \frac{b^2}{4a}. $$ Therefore, $$ ax^2+bx+c = a\left(x+\frac{b}{2a}\right)^2+c-\frac{b^2}{4a}. $$ Then give numeric examples galore until it sinks in.
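To reinforce the result, here is a quick numeric check (a sketch using Python's exact rational arithmetic, with the numbers taken from the question's example $y = 2x^2+8x-9$):

```python
# Spot-check the equate-coefficients result A = a, h = -b/(2a),
# k = c - b^2/(4a), using exact rational arithmetic.
from fractions import Fraction

a, b, c = Fraction(2), Fraction(8), Fraction(-9)   # example from the question
A = a
h = -b / (2 * a)
k = c - b**2 / (4 * a)

# Both forms should agree at every x; check a few sample points.
for x in [Fraction(-3), Fraction(0), Fraction(1), Fraction(7, 2)]:
    assert a*x**2 + b*x + c == A*(x - h)**2 + k

print(A, h, k)  # 2 -2 -17
```

This recovers the $2(x+2)^2 - 17$ form from the question.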
It's likely as or more important to explain why it is important to complete the square. Of course, the danger of my method (and it is a real danger) is that students may black-box this solution. Heaven forbid teachers just say: well, you set $h = -\frac{b}{2a}$ and $k = c-\frac{b^2}{4a}$ and that's it. The universe in which I present this to a class is the same universe in which the students are also held responsible for reproducing the abstract result, in addition to the expectation that they do numerically specific examples with ease. Ease born of practice.
The goals of my approach are:
- Emphasize that completing the square is a process of writing something that is equal to the original expression.
- Emphasize that there are choices to make, differentiating between a choice being useful versus it being true (avoiding the words "correct" or "right", which conflate useful and true).
- Avoid unnecessary algebra and quantifiers; my students have a hard enough time with $x$; adding in $b$ and $c$ obscures my previous two goals.
Complete the square: $x^2 + 8x + 19$
First ask yourself: "Is it a perfect square?" If it were, you would be able to find two numbers that add up to 8, multiply to 19, and are the same number.
If not, ask yourself what the two numbers would have to be to add to 8 and be the same number. Then recognize that the constant 19 is the wrong constant. Ask yourself: what is the constant you want it to be? Then don't let your dreams be dreams, just write it:
$\begin{align*} & x^2 + 8x + 19 \\ = & x^2 + 8x + 16 \end{align*}$
Notice that this is useful (it factors now) but not true (they aren't equal). Fix what you've written so that it is also true.
$\begin{align*} & x^2 + 8x + 19 \\ = & x^2 + 8x + 16 & + 3\end{align*}$
Then factor the part of the result that you have engineered to work out.
$\begin{align*} & x^2 + 8x + 19 & \\ = & x^2 + 8x + 16 & + 3 \\ = &(x + 4)^2 & + 3\end{align*}$
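The "true and useful" endpoint can be spot-checked numerically; a tiny sketch:

```python
# Verify the worked example: (x + 4)^2 + 3 equals x^2 + 8x + 19 at every x.
for x in range(-10, 11):
    assert (x + 4)**2 + 3 == x**2 + 8*x + 19
print("equal at all sampled points")
```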
I have tried a variety of approaches, and I like the abstract skills that this approach allows students to practice. For example, when a student does this:
$\begin{align*} & x^2 + 8x + 19 \\ = & x^2 + 8x + 4 + 15 \end{align*}$
I can ask students if that was a good idea, and tease out the distinction between something being true and something being useful. Since one of the primary goals of algebra courses is convincing students that they have the freedom to do whatever they want as long as it is true (sure, multiply both sides by 10 if you want, man; worst case you can undo it later), this synergizes well with my approach to the rest of algebra.
I don't know if the following will be of much use in your specific case (our audiences are quite different---I teach this to college freshmen), but here is my usual approach:
First, we typically start the quarter by getting a good handle on geometric transformations of graphs of functions. Given some function $f:\mathbb{R}\to\mathbb{R}$ with a known graph, the graph of the function defined by $$ x \mapsto B f\left( \frac{x-h}{A} \right) + k $$ is the same as the graph of $f$ shifted right $h$ units (where $h<0$ is a shift to the negative right, i.e. left), scaled horizontally by a factor of $A$, reflected across the $y$-axis if $A < 0$, scaled vertically by a factor of $B$, reflected across the $x$-axis if $B < 0$, and shifted up $k$ units (again, down is negative up).
We can then consider the humble parabola defined by $x \mapsto x^2$. This lovely function has a zero at zero, goes through the points $(1,1)$ and $(-1,1)$, and has inverses on the domains $[0,\infty)$ and $(-\infty,0]$ given by the positive and negative square roots, respectively.
We might first note that if we transform the graph of $x\mapsto x^2$, the two multiplicative constants can be combined: \begin{equation} B\left( \frac{x-h}{A} \right)^2 + k = \left( \frac{B}{A^2} \right)(x-h)^2 + k, \end{equation} so we may fairly conclude that any transformation of the graph will be given by a function of the form \begin{equation*} x \mapsto C(x-h)^2 + k, \end{equation*} where $C$ represents some kind of scaling (we may as well understand it as a vertical scaling, but it combines both the horizontal and vertical scalings into one gooey mess), and $h$ and $k$ are translations, as usual. In particular, the vertex of this graph is the point $(h,k)$, and the vertex will represent a minimum or maximum for the function, depending on whether $C > 0$ or $C < 0$.
We can also expand this mess in order to obtain \begin{align} C\left( x-h \right)^2 + k &= C (x^2 - 2hx + h^2) + k \\ &= \left(C\right)x^2 + \left( -2hC \right)x + \left( h^2C + k\right) \\ &= ax^2 + bx + c, \end{align} which is a more familiar expression to most of my students, as they have mostly seen the material before in high school. The next "obvious" (for certain values of "obvious") question is: can we go the other way?
That is, if we are told that $$ f(x) = ax^2 + bx + c,$$ can we understand the graph of $f$ as the graph of our basic parabola subject to some elementary transformations? That is, given arbitrary $a,b,c$ (with $a\ne0$), are there $C,h,k$ such that $$ f(x) = ax^2 + bx + c = C(x-h)^2+k?$$ The answer is "Yes!", with the details as provided by James S. Cook's answer to this question.
In this exposition, the idea is that the geometry motivates exploration of the problem. We want to know how to "complete the square" so that we can find the vertex of the transformed parabola, or so that we can determine an inverse on some domain (i.e. solve a quadratic equation).
I present completing the square as a recipe.
Given $x^2 + bx$, add $\big(\frac{b}{2}\big)^2$. You obtain a different expression, but your new expression can be written neatly as a perfect square:
\begin{align*} x^2 + bx &\xrightarrow{\text{add }\left(\frac{b}{2}\right)^2} x^2 + bx + \big(\frac{b}{2}\big)^2 = \big(x + \frac{b}{2}\big)^2 \end{align*}
At some point, I emphasize how important it is that we started with $x^2$ without a (non-one) coefficient: the recipe produces a new expression of the form $(x + \text{something})^2$, which, when expanded, will never produce anything whose $x^2$ term has a non-one coefficient.
So, maybe you can really play up that it's a "faithful, but simple" method that only works for expressions of the form $x^2 + bx$: the recipe deals exclusively with quadratics whose leading coefficient is $1$, end of story.
Note: You face similar problems getting students to factor $2x^2 - 3x + 5$ using the "ac method" or whatever you prefer to call it. Students are used to the process "find a pair of numbers that multiply to this, and add to that" from the "easy" quadratic case. But, unlike the "easy" case, here we don't just get to jump right to a factorization. Why? Because $(x + \text{thing1})(x + \text{thing2})$ will never produce a quadratic whose leading coefficient isn't $1$, and we finally need that. So, we have to add another step or two to the old method.
So I will make a bit of a big deal about the jump from expressions like $x^2 + 6x + 1$ to $2x^2 + 6x + 1$: our version of completing the square requires us to have just $x^2$. Is all hope lost; do we need a brand new version of completing the square?
This is where I take the approach suggested by DRF's comment: We reduce a superficially new problem to a known, already solved problem. By adding the minor modification of factoring out the leading coefficient, we get the much vaunted $x^2 + [\text{something}]x$ on which our recipe relies.
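That reduction can be sketched as a small helper (illustrative only; the function name is mine, not from any of the answers above):

```python
from fractions import Fraction

def complete_square(a, b, c):
    """Return (a, h, k) with a*x^2 + b*x + c == a*(x - h)^2 + k.

    Step 1: factor out the leading coefficient a, leaving x^2 + (b/a)x inside.
    Step 2: apply the x^2 + [something]x recipe: h = -(b/a)/2 = -b/(2a).
    """
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    h = -b / (2 * a)
    k = c - a * h**2
    return a, h, k

a, h, k = complete_square(2, 8, -9)
print(a, h, k)  # 2 -2 -17
```

Running it on the question's example reproduces $2(x+2)^2 - 17$.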
I'll add that there is a super slick, more general version of completing the square that can handle non-one leading coefficients without factoring/division, given here by André Nicolas (and I think the linked answer is the first place I saw it). It's more appropriate for solving equations by completing the square (instead of rewriting quadratic expressions), but it's so nice that I think it deserves to be more widely known.
The main idea is that $(2ax + b)^2 = 4a^2x^2 + 4abx + b^2$, so given an expression like $ax^2 + bx$, you'll multiply by $4a$ then add $b^2$, at which point you can write your new expression as a perfect square.
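Since the trick rests on a single polynomial identity, it is easy to sanity-check over a grid of integers; a minimal sketch:

```python
# Check the identity behind the "multiply by 4a, add b^2" trick:
# 4a*(a*x^2 + b*x) + b^2 == (2*a*x + b)^2 for all a, b, x.
for a in range(-3, 4):
    for b in range(-3, 4):
        for x in range(-3, 4):
            assert 4*a*(a*x**2 + b*x) + b**2 == (2*a*x + b)**2
print("identity verified on the sample grid")
```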
I have yet to mention this for a remedial algebra / precalculus class, but occasionally I'll get a math ed student and show them this method in addition to the usual approach.
My Approach: \begin{align} y \ \ & = \ \ 2x^2 + 8x - 9 \\ & y+9 = \ \ 2(x^2 + 4x + \underline{\quad}\ ) \\ & y+9+8 = \ \ 2(x^2 + 4x+4) \\ & y+17 = \ \ 2(x^2 + 4x + 4) \\ & y+17 = \ \ 2(x + 2)^2 \\ \end{align}
Notes: Moving the constant to the left in the first step avoids the errors some students might make dividing this number. Now, when you add the '4' to complete the square, you ask, "What did we add to the right side?" "4." "Don't forget the multiplier out front, here '2', so we actually added 8 to the right, and we'll add 8 to the left."
Last, the format of that last line is the one I prefer. It's the "Vertex Form" and students can quickly see (-2,-17) is the vertex.
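For completeness, reading the vertex off the vertex form can be sketched in a couple of lines (names are mine, purely illustrative):

```python
# Vertex of y = a x^2 + b x + c, read from vertex form:
# y + (b^2/(4a) - c) = a*(x + b/(2a))^2, so vertex = (-b/(2a), c - b^2/(4a)).
def vertex(a, b, c):
    return (-b / (2 * a), c - b * b / (4 * a))

print(vertex(2, 8, -9))  # (-2.0, -17.0)
```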
I think it is easier to practice doing this without the $y$ (with the equation set to zero). Of course it is the same thing, but for someone just learning, doing it as a single equation is easier and concentrates the mind on roots.
I would also try to do it very mechanically, writing out each step (like when you are in pre-algebra and first learn how to sort things out with first order equations in X only.)
Just some thoughts.
Motivation:
It is a well known fact that the gravitational field (in General Relativity and direct generalizations of it) has no local energy-momentum density.
Usually there are two reasons stated, one is heuristic, one is mathematical.
Heuristic reasoning: The gravitational field can be "turned off" at any one point $x\in M$ by a proper choice of reference frame, which implies there are problems defining a local density. Mathematical reasoning: The (Einstein-Hilbert) stress-energy (SEM) tensor of a matter field $\psi$ is defined as $$ T_{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta S_m}{\delta g^{\mu\nu}}, $$ where $S_m$ is the matter action. It is, in some sense, the response of the matter field action to changes in the metric. On the other hand, the response of the gravitational action to a change in the metric is $$ \frac{1}{16\pi G}G_{\mu\nu}=\frac{\delta S_{EH}}{\delta g^{\mu\nu}}, $$ which is basically the vacuum EoM of GR; clearly this is not to be interpreted as a SEM tensor. So the gravitational field has no SEM tensor.
Moreover, the covariant conservation law $$ \nabla_\mu T^{\mu\nu}=0 $$ is usually interpreted not as a genuine conservation law, but rather the exchange of energy-momentum between the matter field and the gravitational field.
Background:
Let $\psi$ be a matter field, whose target space carries a representation of a Lie group $G$, with action $$ S_m [\psi]=\int d^4x\ \mathcal L_m(\psi,\partial\psi)$$ such that $G$ is a group of symmetries of $S_m$. From Noether's theorem, one may obtain $k$ conserved currents (where $k=\dim G$) satisfying $$ \partial_\mu\mathcal J^\mu _a=0. $$
Let us turn the global symmetry into a gauge symmetry, introduce the gauge connection/gauge field $\mathcal A_\mu=A^a_\mu T_a$ ($T_a$ are a set of generators for $\mathfrak g$) through the covariant derivative $$ D_\mu\psi=\partial_\mu \psi + \mathcal A_\mu\psi, $$ and introduce a self-action $$ S_{GF}[A]=\int d^4 x\ \mathcal L_{GF}(A,F) $$ for the gauge field.
One may show the following:
The current $$ j^\mu_a=\frac{\delta S_m}{\delta A^a_\mu} $$ reduces to a current equivalent to the Noether current $\mathcal J^\mu _a$ in the limit $A\rightarrow 0$; in particular, the conservation law $\partial_\mu j^\mu_a=0$ holds there.
With $A_\mu^a\neq 0$, the current satisfies the identity $$ D_\mu j^\mu_a=\partial_\mu j^\mu_a-A_\mu^cC_{c\ a}^{b}j^\mu_b=0, $$ where $ D_\mu $ is the covariant derivative in the coadjoint bundle.
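One way to see where this identity comes from (a sketch; the sign convention I use for the gauge variation of $A$ is an assumption and may differ from your text): gauge invariance of $S_m$, on-shell for $\psi$, gives

```latex
\delta_\varepsilon A^a_\mu = \partial_\mu\varepsilon^a + C^{a}_{\ bc}\,A^b_\mu\,\varepsilon^c,
\qquad
0 = \delta_\varepsilon S_m
\overset{\text{on-shell}}{=} \int d^4x\ j^\mu_a\,\delta_\varepsilon A^a_\mu
= \int d^4x\ \varepsilon^a\!\left(-\partial_\mu j^\mu_a + A^c_\mu C^{b}_{\ c\,a}\, j^\mu_b\right),
```

and since $\varepsilon^a$ is arbitrary, the bracket vanishes, which is exactly $D_\mu j^\mu_a=0$ (after integrating by parts and relabeling indices).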
Question:
The situation here is clearly analogous to the situation with GR but with $T^{\mu\nu}$ replaced with $j^\mu_a$. In particular:
The conservation law $D_\mu j^\mu_a=0$ cannot be interpreted as a genuine conservation law; it expresses the fact that the matter field can exchange charge with the gauge field.
It does not make sense to define a gauge current for the gauge field $A$, only for the matter field $\psi$.
Does that mean that in a nonabelian gauge theory, the local charge current density of the gauge field is ill-defined?
If so, I would find it odd, since it is not something that's usually stated. Moreover, unlike gravity, where you can "turn off" the "field strength" $\Gamma^\rho_{\mu\nu}$ at a point by a choice of reference frame, the gauge curvature $\mathcal F=d\mathcal A+\frac{1}{2}[\mathcal A,\mathcal A]$ cannot be eliminated at points via gauge transformations.
Eigenvalues for a nonlocal pseudo $p-$Laplacian
CONICET and Departamento de Matemática, FCEyN, Universidad de Buenos Aires, Pabellon I, Ciudad Universitaria, 1428 Buenos Aires, Argentina
Mathematics Subject Classification: Primary: 35P30, 35J92, 35R1. Citation: Leandro M. Del Pezzo, Julio D. Rossi. Eigenvalues for a nonlocal pseudo $p-$Laplacian. Discrete & Continuous Dynamical Systems - A, 2016, 36 (12): 6737-6765. doi: 10.3934/dcds.2016093
Hello guys! I was wondering if you knew some books/articles that have a good introduction to convexity in the context of variational calculus (functional analysis). I was reading Young's "calculus of variations and optimal control theory" but I'm not that far into the book and I don't know if skipping chapters is a good idea.
I don't know of a good reference, but I'm pretty sure that just means that second derivatives have consistent signs over the region of interest. (That is certainly a sufficient condition for Legendre transforms.)
@dm__ yes have studied bells thm at length ~2 decades now. it might seem airtight and has stood the test of time over ½ century, but yet there is some fineprint/ loopholes that even phd physicists/ experts/ specialists are not all aware of. those who fervently believe like Bohm that no new physics will ever supercede QM are likely to be disappointed/ dashed, now or later...
oops lol typo bohm bohr
btw what is not widely appreciated either is that nonlocality can be an emergent property of a fairly simple classical system, it seems almost nobody has expanded this at length/ pushed it to its deepest extent. hint: harmonic oscillators + wave medium + coupling etc
But I have seen that convexity is associated to minimizers/maximizers of the functional, whereas the sign of the second variation is not a sufficient condition for that. That kind of makes me think that those concepts are not equivalent in the case of functionals...
@dm__ generally think sampling "bias" is not completely ruled out by existing experiments. some of this goes back to CHSH 1969. there is unquestioned reliance on this papers formulation by most subsequent experiments. am not saying its wrong, think only that theres very subtle loophole(s) in it that havent yet been widely discovered. there are many other refs to look into for someone extremely motivated/ ambitious (such individuals are rare). en.wikipedia.org/wiki/CHSH_inequality
@dm__ it stands as a math proof ("based on certain assumptions"), have no objections. but its a thm aimed at physical reality. the translation into experiment requires extraordinary finesse, and the complex analysis starts with CHSH 1969. etc
While it's not something usual, I've noticed that sometimes people edit my question or answer with a more complex notation or incorrect information/formulas. While I don't think this is done with malicious intent, it has sometimes confused people when I'm either asking or explaining something, as...
@vzn what do you make of the most recent (2015) experiments? "In 2015 the first three significant-loophole-free Bell-tests were published within three months by independent groups in Delft, Vienna and Boulder.
All three tests simultaneously addressed the detection loophole, the locality loophole, and the memory loophole. This makes them “loophole-free” in the sense that all remaining conceivable loopholes like superdeterminism require truly exotic hypotheses that might never get closed experimentally."
@dm__ yes blogged on those. they are more airtight than previous experiments. but still seem based on CHSH. urge you to think deeply about CHSH in a way that physicists are not paying attention. ah, voila even wikipedia spells it out! amazing
> The CHSH paper lists many preconditions (or "reasonable and/or presumable assumptions") to derive the simplified theorem and formula. For example, for the method to be valid, it has to be assumed that the detected pairs are a fair sample of those emitted. In actual experiments, detectors are never 100% efficient, so that only a sample of the emitted pairs are detected. > A subtle, related requirement is that the hidden variables do not influence or determine detection probability in a way that would lead to different samples at each arm of the experiment.
↑ suspect entire general LHV theory of QM lurks in these loophole(s)! there has been very little attn focused in this area... :o
how about this for a radical idea? the hidden variables determine the probability of detection...! :o o_O
@vzn honest question, would there ever be an experiment that would fundamentally rule out nonlocality to you? and if so, what would that be? what would fundamentally show, in your opinion, that the universe is inherently local?
@dm__ my feeling is that something more can be milked out of bell experiments that has not been revealed so far. suppose that one could experimentally control the degree of violation, wouldnt that be extraordinary? and theoretically problematic? my feeling/ suspicion is that must be the case. it seems to relate to detector efficiency maybe. but anyway, do believe that nonlocality can be found in classical systems as an emergent property as stated...
if we go into detector efficiency, there is no end to that hole. and my beliefs have no weight. my suspicion is screaming absolutely not, as the classical is emergent from the quantum, not the other way around
@vzn have remained civil, but you are being quite immature and condescending. I'd urge you to put aside the human perspective and not insist that physical reality align with what you expect it to be. all the best
@dm__ ?!? no condescension intended...? am striving to be accurate with my words... you say your "beliefs have no weight," but your beliefs are essentially perfectly aligned with the establishment view...
Last night dream, introduced a strange reference frame based disease called Forced motion blindness. It is a strange eye disease where the lens is such that to the patient, anything stationary wrt the floor is moving forward in a certain direction, causing them have to keep walking to catch up with them. At the same time, the normal person think they are stationary wrt to floor. The result of this discrepancy is the patient kept bumping to the normal person. In order to not bump, the person has to walk at the apparent velocity as seen by the patient. The only known way to cure it is to remo…
And to make things even more confusing:
Such disease is never possible in real life, for it involves two incompatible realities to coexist and coinfluence in a pluralistic fashion. In particular, as seen by those not having the disease, the patient kept ran into the back of the normal person, but to the patient, he never ran into him and is walking normally
It seems my mind has gone f88888 up enough to envision two realities that with fundamentally incompatible observations, influencing each other in a consistent fashion
It seems my mind is getting more and more comfortable with dialetheia now
@vzn There's blatant nonlocality in Newtonian mechanics: gravity acts instantaneously. Eg, the force vector attracting the Earth to the Sun points to where the Sun is now, not where it was 500 seconds ago.
@Blue ASCII is a 7 bit encoding, so it can encode a maximum of 128 characters, but 32 of those codes are control codes, like line feed, carriage return, tab, etc. OTOH, there are various 8 bit encodings known as "extended ASCII", that have more characters. There are quite a few 8 bit encodings that are supersets of ASCII, so I'm wary of any encoding touted to be "the" extended ASCII.
If we have a system and we know all the degrees of freedom, we can find the Lagrangian of the dynamical system. What happens if we apply some non-conservative forces in the system? I mean how to deal with the Lagrangian, if we get any external non-conservative forces perturbs the system?Exampl...
@Blue I think now I probably know what you mean. Encoding is the way to store information in digital form; I think I have heard the professor talking about that in my undergraduate computer course, but I thought that is not very important in actually using a computer, so I didn't study that much. What I meant by use above is what you need to know to be able to use a computer, like you need to know LaTeX commands to type them.
@AvnishKabaj I have never had any of these symptoms after studying too much. When I have intensive studies, like preparing for an exam, after the exam, I feel a great wish to relax and don't want to study at all and just want to go somehwere to play crazily.
@bolbteppa the (quanta) article summary is nearly popsci writing by a nonexpert. specialists will understand the link to LHV theory re quoted section. havent read the scientific articles yet but think its likely they have further ref.
@PM2Ring yes so called "instantaneous action/ force at a distance" pondered as highly questionable bordering on suspicious by deep thinkers at the time. newtonian mechanics was/ is not entirely wrong. btw re gravity there are a lot of new ideas circulating wrt emergent theories that also seem to tie into GR + QM unification.
@Slereah No idea. I've never done Lagrangian mechanics for a living. When I've seen it used to describe nonconservative dynamics I have indeed generally thought that it looked pretty silly, but I can see how it could be useful. I don't know enough about the possible alternatives to tell whether there are "good" ways to do it. And I'm not sure there's a reasonable definition of "non-stupid way" out there.
← lol went to metaphysical fair sat, spent $20 for palm reading, enthusiastic response on my leadership + teaching + public speaking abilities, brought small tear to my eye... or maybe was just fighting infection o_O :P
How can I move a chat back to comments? In complying with the automated admonition to move comments to chat, I discovered that MathJax was no longer rendered. This is unacceptable in this particular discussion. I therefore need to undo my action and move the chat back to comments.
hmmm... actually the reduced mass comes out of using the transformation to the center of mass and relative coordinates, which have nothing to do with Lagrangian... but I'll try to find a Newtonian reference.
One example is a spring of initial length $r_0$ with two masses $m_1$ and $m_2$ on the ends such that $r = r_2 - r_1$ is its length at a given time $t$ - the force laws for the two ends are $m_1 \ddot{r}_1 = k (r - r_0)$ and $m_2 \ddot{r}_2 = - k (r - r_0)$ but since $r = r_2 - r_1$ it's more natural to subtract one from the other to get $\ddot{r} = - k (\frac{1}{m_1} + \frac{1}{m_2})(r - r_0)$ which makes it natural to define $\frac{1}{\mu} = \frac{1}{m_1} + \frac{1}{m_2}$ as a mass
since $\mu$ has the dimensions of mass and since then $\mu \ddot{r} = - k (r - r_0)$ is just like $F = ma$ for a single variable $r$ i.e. an spring with just one mass
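That subtraction argument can be sanity-checked numerically; a sketch, assuming simple Euler steps applied identically to the two-body system and to the reduced-mass equation (parameter values are arbitrary):

```python
# Integrate both the two-body spring and the reduced one-body equation
# with the same Euler steps, and compare r = r2 - r1 against r.
m1, m2, k, r0, dt = 1.0, 3.0, 2.0, 1.5, 1e-3
mu = 1.0 / (1.0/m1 + 1.0/m2)              # reduced mass

r1, r2, v1, v2 = 0.0, 2.0, 0.0, 0.0       # two-body state
r, v = r2 - r1, 0.0                       # reduced one-body state

for _ in range(5000):
    f = k * ((r2 - r1) - r0)              # force on mass 1; -f on mass 2
    r1, r2 = r1 + dt*v1, r2 + dt*v2
    v1, v2 = v1 + dt*f/m1, v2 - dt*f/m2
    r, v = r + dt*v, v + dt*(-k/mu)*(r - r0)

assert abs((r2 - r1) - r) < 1e-6
print("reduced-mass equation reproduces r2 - r1")
```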
@vzn It will be interesting if a de-scarring followed by a re scarring can be done in some way in a small region. Imagine being able to shift the wavefunction of a lab setup from one state to another thus undo the measurement, it could potentially give interesting results. Perhaps, more radically, the shifting between quantum universes may then become possible
You can still use Fermi to compute transition probabilities for the perturbation (if you can actually solve for the eigenstates of the interacting system, which I don't know if you can), but there's no simple human-readable interpretation of these states anymore
@Secret when you say that, it reminds me of the no-cloning thm, which I have always been somewhat dubious/suspicious of. it seems like they've already experimentally disproved the no-cloning thm in some sense. |
My favorite connection in mathematics (and an interesting application to physics) is a simple corollary from Hodge's decomposition theorem, which states:
On a (compact and smooth) riemannian manifold $M$ with its Hodge-deRham-Laplace operator $\Delta,$ the space of $p$-forms $\Omega^p$ can be written as the orthogonal sum (relative to the $L^2$ product) $$\Omega^p = \Delta \Omega^p \oplus \cal H^p = d \Omega^{p-1} \oplus \delta \Omega^{p+1} \oplus \cal H^p,$$ where $\cal H^p$ are the harmonic $p$-forms, and $\delta$ is the adjoint of the exterior derivative $d$ (i.e. $\delta = \text{(some sign)} \star d\star$ and $\star$ is the Hodge star operator). (The theorem follows from the fact, that $\Delta$ is a self-adjoint, elliptic differential operator of second order, and so it is Fredholm with index $0$.)
From this it is now easy to prove that every nontrivial de Rham cohomology class $[\omega] \in H^p$ has a unique harmonic representative $\gamma \in \cal H^p$ with $[\omega] = [\gamma]$. Please note the equivalence $$\Delta \gamma = 0 \Leftrightarrow d \gamma = 0 \wedge \delta \gamma = 0.$$
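A one-line argument for the uniqueness (using that $\delta$ is the genuine $L^2$-adjoint of $d$ on a compact manifold): if $\gamma_1, \gamma_2 \in \cal H^p$ are harmonic with $[\gamma_1] = [\gamma_2]$, then $\gamma_1 - \gamma_2 = d\alpha$ for some $\alpha \in \Omega^{p-1}$, and since harmonic forms are $\delta$-closed, $$\|d\alpha\|_{L^2}^2 = \langle d\alpha, d\alpha\rangle = \langle \alpha, \delta d\alpha \rangle = \langle \alpha, \delta(\gamma_1-\gamma_2)\rangle = 0,$$ so $\gamma_1 = \gamma_2$.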
Besides implying easy proofs of Poincaré duality and the like, this statement motivates an interesting viewpoint on electrodynamics:
Please be aware that from now on we consider the Lorentzian manifold $M = \mathbb{R}^4$ equipped with the Minkowski metric (so $M$ is neither compact nor Riemannian!). We are going to interpret $\mathbb{R}^4 = \mathbb{R} \times \mathbb{R}^3$ as a foliation of spacelike slices and the first coordinate as a time function $t$. So every point $(t,p)$ is a position $p$ in space $\mathbb{R}^3$ at the time $t \in \mathbb{R}$. Consider the lifeline (worldline) $L \simeq \mathbb{R}$ of an electron in spacetime. Because the electron occupies a position which can't be occupied by anything else, we can remove $L$ from the spacetime $M$.
Though the theorem of Hodge does not hold for Lorentzian manifolds in general, it holds for $M \setminus L \simeq \mathbb{R}^4 \setminus \mathbb{R}$. The only non-vanishing cohomology space is $H^2$, with dimension $1$ (this statement has nothing to do with the metric on this space, it's pure topology - we just cut out the lifeline of the electron!). And there is a harmonic generator $F \in \Omega^2$ of $H^2$ that solves $$\Delta F = 0 \Leftrightarrow dF = 0 \wedge \delta F = 0.$$ But we can write every $2$-form $F$ as a unique decomposition $$F = E + B \wedge dt.$$ If we interpret $E$ as the classical electric field and $B$ as the magnetic field, then $d F = 0$ is equivalent to the first two Maxwell equations and $\delta F = 0$ to the last two.
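For readers who prefer index notation, the claim can be checked in coordinates (a standard textbook dictionary; the signs and the roles of $E$ and $B$ in the splitting depend on conventions and may differ from the decomposition used above): writing $F = \frac{1}{2}F_{\mu\nu}\,dx^\mu \wedge dx^\nu$ with $F_{0i} = E_i$ and $F_{ij} = \epsilon_{ijk}B_k$, the source-free Maxwell equations become $$dF = 0 \Leftrightarrow \nabla\cdot\vec B = 0,\ \ \nabla\times\vec E + \partial_t \vec B = 0,$$ $$\delta F = 0 \Leftrightarrow \nabla\cdot\vec E = 0,\ \ \nabla\times\vec B - \partial_t\vec E = 0.$$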
So cutting out the lifeline of an electron gives you automagically the electro-magnetic field of the electron as a generator of the non-vanishing cohomology class. |
I want to find the correlation between two variables: a categorical vegetation variable (with categories 1-4) and a continuous erosion variable (with values anywhere between 0.2 and 9). I think I would be required to use a nonparametric test to find the correlation, but I'm a bit of a newbie to stats, so could anyone suggest the best test to use? I also want to know how sample size can affect these tests, i.e. with a sample size of 100 or 1000?
I agree with @John's answer but would also suggest simply plotting boxplots of erosion for each vegetation category. If the vegetation variable actually has an ordinal interpretation then we could draw some exploratory conclusions about correlations between vegetation and erosion. For example, consider the following fictional data and corresponding box plot.
So as you can see from the box plots, there would appear to be a positive correlation (or association if you prefer that jargon) between erosion and vegetation level. (Of course this argument requires vegetation categories have an ordinal meaning)
Here is the code to generate this in R, although I know you didn't ask for it and clearly one of your tags is for SPSS.
#Pseudo Data
N = 10000
erosion = runif(N, 0, 10)
vegetation = rep(1, N)
vegetation[erosion > 2.5 & erosion <= 5] = 2
vegetation[erosion > 5 & erosion <= 7.5] = 3
vegetation[erosion > 7.5 & erosion <= 10] = 4
boxplot(erosion ~ vegetation, col = c("blue", "red", "green", "orange"),
        xlab = "Vegetation", ylab = "Erosion")
You could calculate a $\chi^2$ and the $\Phi$ coefficient measure as a substitute for correlation. $\Phi$ would have a similar interpretation to a Pearson correlation coefficient but should really only be used in 2x2 designs. In your case you should probably go with Cramer's V, which is standardized but doesn't quite have the correlation-style interpretation. There is no comparable nonparametric linear correlation because one of your variables is purely categorical. Those numbers 1-4 labelling your categories could be arbitrarily reassigned to different categories and it wouldn't affect the true relationship, but it would affect something like a Spearman correlation coefficient.
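For illustration, Cramer's V can be computed by hand from the contingency table's $\chi^2$ (a sketch in Python using only the standard library; the function name and the toy tables are mine, not from the question):

```python
def cramers_v(table):
    """Cramer's V from a contingency table (list of rows of observed counts):
    V = sqrt(chi2 / (n * (min(rows, cols) - 1)))."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_tot[i] * col_tot[j] / n
            chi2 += (obs - expected) ** 2 / expected
    k = min(len(table), len(table[0]))
    return (chi2 / (n * (k - 1))) ** 0.5

# Perfect association gives V = 1; independence gives V = 0.
print(cramers_v([[10, 0], [0, 10]]), cramers_v([[5, 5], [5, 5]]))
```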
Unfortunately, the above tactic wouldn't really treat your continuous variable fairly. Another way to get a correlation(ish) value is to perform an ANOVA with your continuous variable as response and categorical as predictor and then calculate $\eta^2$ (eta-squared) effect size measure. That effect size has an interpretation that's similar to $R^2$.
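A minimal sketch of the $\eta^2$ computation from the one-way ANOVA sums of squares (toy data, standard library only; in practice the ANOVA output of R or SPSS can report an effect size of this kind for you):

```python
def eta_squared(groups):
    """eta^2 = SS_between / SS_total for a one-way layout,
    where `groups` is a list of lists of responses per category."""
    values = [x for g in groups for x in g]
    grand = sum(values) / len(values)
    ss_total = sum((x - grand) ** 2 for x in values)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    return ss_between / ss_total

# Continuous responses grouped by a 2-level categorical predictor (made up):
print(eta_squared([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]))
```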
Your second query about sample size has a fixed answer for all tests and estimates. The larger the sample, the more likely the test is to be significant. Increasing sample size increases the accuracy of your effect estimates ($\Phi$ or $\eta^2$, or any parameter estimates you're making). |
Difference between revisions of "Probability Seminar"
(→March 7, TBA)
</div>
== March 7, ==
== March 14, TBA ==
== March 21, Spring Break, No seminar ==
Revision as of 20:51, 4 February 2019

Spring 2019

Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title: Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 901 , Li-Cheng Tsai, Columbia University
Title: When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title: Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$.
February 14, TBA
February 21, TBA
Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue
March 7, Shamgar Gurevitch UW-Madison
Title: Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the {\it character ratio}:
$$ trace(\rho(g))/dim(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant {\it rank}. This talk will discuss the notion of rank for GLn over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M). |
Homework Statement

The electric field outside and an infinitesimal distance away from a uniformly charged spherical shell, with radius R and surface charge density σ, is given by Eq. (1.42) as ##\sigma/\epsilon_0##. Derive this in the following way. (a) Slice the shell into rings (symmetrically located with respect to the point in question), and then integrate the field contributions from all the rings. You should obtain the incorrect result of ##\frac{\sigma}{2\epsilon_0}##. (b) Why isn't the result correct? Explain how to modify it to obtain the correct result of ##\frac{\sigma}{\epsilon_0}##. Hint: You could very well have performed the above integral in an effort to obtain the electric field an infinitesimal distance inside the shell, where we know the field is zero. Does the above integration provide a good description of what's going on for points on the shell that are very close to the point in question?

Homework Equations

Coulomb's Law
Hi! I need help with this problem. I tried to do it the way you can see in the picture. I then have this:
##dE_z=dE\cdot \cos\theta## thus ##dE_z=\frac{\sigma dA}{4\pi\epsilon_0}\cos\theta=\frac{\sigma 2\pi L^2\sin\theta d\theta}{4\pi\epsilon_0 L^2}\cos\theta##.
Then I integrated and ended up with ##E=\frac{\sigma}{2\epsilon_0}\int \sin\theta\cos\theta d\theta##. The problem is that I don't know what the limits of integration are; I first tried with ##\pi##, but I got 0. What am I doing wrong? |
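Not the pen-and-paper route the problem intends, but the ring decomposition can be sanity-checked numerically (a sketch; the discretization and names are mine): summing the exact Coulomb field of every ring for a point at distance ##d > R## from the center reproduces the known exterior result ##\sigma R^2/(\epsilon_0 d^2)##, which tends to ##\sigma/\epsilon_0## as ##d \to R##.

```python
from math import sin, cos, pi, sqrt

def field_outside_shell(d, R=1.0, sigma=1.0, eps0=1.0, n=200_000):
    """Radial E-field at distance d > R from the center of a uniformly
    charged shell, summing ring contributions with a midpoint rule.
    The ring at polar angle theta carries charge sigma*2*pi*R^2*sin(theta)*dtheta
    at distance s from the field point; (d - R*cos(theta))/s projects out the
    component along the symmetry axis."""
    h = pi / n
    total = 0.0
    for i in range(n):
        th = (i + 0.5) * h
        s2 = R * R + d * d - 2.0 * R * d * cos(th)   # squared ring-to-point distance
        dq = sigma * 2.0 * pi * R * R * sin(th) * h
        total += dq * (d - R * cos(th)) / (4.0 * pi * eps0 * s2 * sqrt(s2))
    return total

# Compare with the exact exterior field sigma*R^2/(eps0*d^2):
for d in (1.5, 1.01):
    print(d, field_outside_shell(d), 1.0 / d**2)
```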
Let \(\alpha\) be an irrational number, and consider the sequence \(\alpha , 2\alpha , 3\alpha ,\dots\) modulo one. It is likely that the reader knows that the sequence is dense in the unit interval \((0,1)\), which is a classical result often attributed to Kronecker. Some readers probably know the stronger statement proven by Bohl, Sierpinski, and Weyl that the sequence is also uniformly distributed on \((0,1)\). However, it turns out that much more can be said about the regularity of this sequence. For example, there is the following result, which this reviewer found very surprising:
Three Distance Theorem: Consider the first \(n\) terms of the sequence \(\alpha , 2\alpha , 3\alpha ,...\) modulo one and label them in increasing order \( 0 < \gamma_1 < \gamma_2 < ... < \gamma_n < 1\) with the additional definitions that \(\gamma_0=0\) and \(\gamma_{n+1}=1\). Then the set \(\{\gamma_{i+1} - \gamma_i\}\) consists of at most three values.
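The theorem is easy to check experimentally. Here is a short numerical illustration (a sketch in Python, not taken from the book under review):

```python
from math import sqrt

def distinct_gap_count(alpha, n, tol=1e-9):
    """Count the distinct gap lengths left in [0,1] by the fractional parts
    of alpha, 2*alpha, ..., n*alpha, with 0 and 1 appended as in the theorem.
    Gap lengths closer than tol are treated as equal."""
    pts = sorted((i * alpha) % 1.0 for i in range(1, n + 1))
    pts = [0.0] + pts + [1.0]
    gaps = sorted(b - a for a, b in zip(pts, pts[1:]))
    count = 1
    for g0, g1 in zip(gaps, gaps[1:]):
        if g1 - g0 > tol:
            count += 1
    return count

# The Three Distance Theorem predicts at most three distinct gaps for every n.
for n in (5, 20, 100, 999):
    print(n, distinct_gap_count(sqrt(2), n))
```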
Even more can be said if \(\alpha\) is a quadratic irrational number, using the fact that the continued fraction expansion of \(\alpha\) will have nice properties. In particular, it turns out that under these hypotheses the sequence will exhibit a central limit theorem that looks very analogous to the types of things that one would see in a book on probability theory. It is for this reason that Jozsef Beck almost gave his new book,
Probabilistic Diophantine Approximation, the subtitle "Randomness of \(\sqrt{2}\)." As he discusses in the introduction, he felt that subtitle would be misleading and ultimately decided on "Randomness in Lattice Point Counting," which gives a hint as to some of the techniques that he uses throughout the book. Beck's book brings together probability theory, algebraic number theory, and combinatorics to forge new ground.
The bulk of the book is dedicated to proving several results, each of which can be described as a Central Limit Theorem (and each of which is too technical to reproduce in this review). The first half of Beck's book is dedicated to the global aspects ("Randomness of the Irrational Rotation") and the second half dedicated to local aspects ("Inhomogeneous Pell Inequalities") of the situation. The book is quite dense in the amount of technical mathematics it covers, but throughout the book, Beck gives motivating examples, describes connections with various areas of mathematics, and helps give outlines of proofs before diving into the details. (In fact, my biggest complaint with the book is that I often lost track of what the author had actually proved and what he had simply told us he was going to prove.)
Beck had previously written a dozen or so research papers on these types of results, and much of the book is dedicated to recapping and expanding on those results, while also filling in the background for readers who may not be experts in all of these different branches of mathematics (in other words, the vast majority of readers). As he writes in his introduction, "'Algebraists' and 'probabilists' are in fact very different kinds of mathematicians with totally different taste and different intuitions," and Beck is trying to appeal to both tastes. His goal was to write a book that would be accessible to beginning graduate students in all areas of mathematics, and while this may be overly optimistic given the level of the material, I do think the book is accessible to a wide range of mathematicians. In my own experience, while there were a number of occasions where I had to pull out other references to remind/teach myself various pieces of background material, Beck did a nice job of motivating the questions and explaining the answers.
Darren Glass is an Associate Professor of Mathematics at Gettysburg College whose primary mathematical interests include Number Theory, Algebraic Geometry, and Graph Theory. He can be reached at dglass@gettysburg.edu. |
On the Independent Double Roman Domination in Graphs

Abstract
An independent double Roman dominating function (IDRDF) on a graph \(G=(V,E)\) is a function \(f{:}V(G)\rightarrow \{0,1,2,3\}\) having the property that if \(f(v)=0\), then the vertex
v has at least two neighbors assigned 2 under f or one neighbor w assigned 3 under f, and if \(f(v)=1\), then there exists \(w\in N(v)\) with \(f(w)\ge 2\), such that the set of vertices with positive weight is independent. The weight of an IDRDF is the value \(\sum _{u\in V}f(u)\). The independent double Roman domination number \(i_\mathrm{dR}(G)\) of a graph G is the minimum weight of an IDRDF on G. We continue the study of the independent double Roman domination and show its relationships to both the independent domination number (IDN) and the independent Roman \(\{2\}\)-domination number (IR2DN). We present several sharp bounds on the IDRDN of a graph G in terms of the order of G, the maximum degree, and the minimum size of an edge cover. Finally, we show that any ordered pair (a, b) is realizable as the IDN and IDRDN of some non-trivial tree if and only if \(2a + 1 \le b \le 3a\).

Keywords: Independent double Roman domination; Independent Roman \(\{2\}\)-domination; Independent domination; Graphs

Mathematics Subject Classification: 05C69; 05C5

Acknowledgements
The authors sincerely thank the referees for their careful review of this paper and some useful comments and valuable suggestions.
|
Overview
To find the gravitational force exerted by a sphere of mass \(M\) on a particle of mass \(m\), we must first subdivide that sphere into many very skinny shells and find the gravitational force exerted by any one of those shells on \(m\). We'll see, however, that finding the gravitational force exerted by such a shell is in and of itself a somewhat tedious exercise. In the end, we'll see that the gravitational force exerted by a sphere of mass \(M\) on a particle of mass \(m\) outside of the sphere (where \(D\) is the center-to-center separation distance between the sphere and particle) is completely identical to the gravitational force exerted by a particle of mass \(M\) on the mass \(m\) such that \(D\) is their separation distance.
Finding Gravitational Force Exerted by Shell and Sphere
In this lesson, we'll use Newton's law of gravity and the concept of a definite integral to calculate the gravitational force exerted by a solid sphere of uniform mass density \(ρ\) on a particle of mass \(m\) at the point \(P\) (see Figure 1), where the particle is outside of the sphere. To solve this problem, we must subdivide the sphere into many very thin shells. By finding the gravitational pull exerted on \(m\) by any one of these shells, we can find the total gravitational force exerted on \(m\) by the entire sphere. Finding the gravitational force on \(m\) due to a spherical shell is, in and of itself, a fairly tedious problem. To find the force on \(m\) due to a spherical shell, we must subdivide the shell into many very thin rings. Summing the contributions to the total gravitational tug on \(m\) by every ring will give the total gravitational force exerted on \(m\) by the entire shell.
In Figure 1, the solid \(QRR_1Q_1\) is one of these rings. We can subdivide this ring into many tiny pieces of volume \(dV\). Since the mass density throughout the sphere is constant, this mass density is given by the equation
$$ρ=\frac{\text{Mass inside of volume}}{\text{Volume}}.$$
Using this equation, we can determine that the mass of one of the tiny pieces comprising the ring is given by
$$dm=ρdV.$$
Since each dimension of \(dV\) is infinitesimally small, we can regard the entire mass \(dm\) as being concentrated into a single point. This is good news. Newton's law of gravity only applies to particles and since both the mass element \(dm\) in the ring and the mass \(m\) at \(P\) are particles, we can use Newton's law of gravity to find the gravitational force by \(dm\) on \(m\). Doing so, we have
$$f=Gm\frac{ρ}{QP^2}dV.\tag{1}$$
Equation (1) represents the gravitational force exerted by any mass element \(dm\) in the ring on the mass \(m\) at \(P\). To find the total gravitational force exerted on \(m\) by the entire ring, we must add up all the forces acting on \(m\) due to every mass element \(dm\) comprising the ring. Since every mass \(dm\) is an equal distance \(r\) away from \(m\), each mass \(dm\) exerts an equal force on \(m\). Furthermore, since the \(x\)-component of force exerted by \(dm\) is given by
$$f_x=f\sin(OPQ),$$
and since the angle \(OPQ\) is the same for every mass element in the ring, it follows that every mass \(dm\) exerts the same \(x\)-component of force, \(f_x\), on \(m\). When we add up the forces acting on \(m\) due to each mass element \(dm\), for any mass element \(Q_1Q\) on the ring which exerts a horizontal force \(\vec{f}_x\) on \(m\) there is another mass element \(R_1R\) on the ring which exerts \(-\vec{f}_x\) on \(m\). Thus, we only need to add up the \(y\)-components of force (which we'll represent by \(f_y\)), which are given by
$$f_y=Gm\frac{ρ}{QP^2}dV\cos(OPQ).\tag{2}$$
If we add up each force \(f\) exerted on \(m\) by each \(dm\), all of the \(x\)-components of \(f\) cancel leaving us with just the infinite sum of \(f_y\):
$$f_{ring}=\int{f_y}=\frac{Gmρ\cos(OPQ)}{(QP)^2}\int{dV}.\tag{3}$$
Notice that since every term in Equation (2) is constant for every \(dm\), we were able to pull all of those terms outside of the integral as we did in Equation (3). The integral, \(\int{dV}\), is just the volume of the ring. Thus, Equation (3) becomes
$$f_{ring}=\frac{Gmρ\cos(OPQ)}{(QP)^2}\biggl(\text{Volume of ring}\biggr).\tag{4}$$
The volume of the ring is given by the product of the ring's circumference \(2π(QS)\) and the arc length \(Q_1Q\). Thus,
$$\text{Volume of ring}=2π(QS)(Q_1Q).$$
Using the relationships \(Q_1Q=adθ\) and \(QS=asinθ\), the above equation becomes
$$\text{Volume of ring}=2π(asinθ)(adθ).\tag{5}$$
Substituting Equation (5) into (4), we have
$$f_{ring}=\frac{Gmρcos(OPQ)}{(QP)^2}·2π(asinθ)(adθ).\tag{6}$$
You might be asking why we substituted Equation (5) into (4). As I mentioned earlier, to find the total force exerted by the shell on \(m\), we must add up all the forces due to every ring. In other words, we have to be able to calculate the integral, \(\int{f_{ring}}\). To be able to calculate this integral, we must do something similar to what we have been doing in so many previous lessons concerning the applications of definite integrals; namely, we want to represent \(\int{f_{ring}}\) in the same form as \(\int_a^bf(x)dx\). To do this, we need to represent everything in Equation (6) in terms of a single variable. It is all too easy to get lost in the math and lose track of what we're doing; but everything we have done since deriving Equation (4) and everything that we'll continue to do until we finally take the integral of \(f_{ring}\) will involve altering Equation (4) until it is represented in terms of a single variable.
That tangent aside, let's see if there is anything that we can do to Equation (6) to come closer to reaching our goal. As you can see from Figure 1,
$$\cos(OPQ)=\frac{SP}{r}=\frac{D-OS}{r}=\frac{D-a\cosθ}{r}.$$
Substituting this result in Equation (6), we have
$$f_{ring}=\frac{Gmρ}{r^2}\biggl(\frac{D-a\cosθ}{r}\biggr)(2πa^2\sinθ\,dθ).\tag{7}$$
To make Equation (7) of the same form as \(f(x)dx\), we can either express everything in Equation (7) in terms of \(θ\) or everything terms of \(r\). Doing either would work and would allow us to calculate the integral \(\int{f_{ring}}\). Let's represent everything in Equation (7) in terms of \(r\). We can do this by making the appropriate substitutions to eliminate all of the \(θ\) terms. Let's apply the law of cosines to the triangle \(OQP\) in Figure 1 to get
$$r^2=a^2+D^2-2aD\cosθ.\tag{8}$$
Let's take the derivative on both sides of Equation (8) with respect to \(θ\) to get
$$2r\biggl(\frac{dr}{dθ}\biggr)=2aD\sinθ.\tag{9}$$
Making some algebraic simplifications, Equation (9) becomes
$$\frac{r\,dr}{D}=a\sinθ\,dθ.\tag{10}$$
Also, doing some algebraic manipulations on Equation (8), we have
$$r^2-a^2+D^2=2D^2-2aD\cosθ.$$
This equation can be further simplified to
$$r^2-a^2+D^2=2D(D-a\cosθ)$$
or
$$\frac{r^2-a^2+D^2}{2D}=D-a\cosθ.\tag{11}$$
It is perfectly natural at this point to be asking yourself why we went through the trouble of doing all that. Well, the reason is that we can substitute Equations (10) and (11) into Equation (7) to eliminate all of the \(θ\) terms and to represent everything in Equation (7) in terms of \(r\). Making these substitutions, Equation (7) becomes
$$f_{ring}=\frac{Gmρ}{r^2}\frac{\biggl(\frac{r^2-a^2+D^2}{2D}\biggr)}{r}(2πa)\frac{rdr}{D}$$
or
$$f_{ring}=\frac{Gmρπa}{D^2}\biggl(\frac{r^2+D^2-a^2}{r^2}\biggr)dr.\tag{12}$$
All of the messy math that we did in the steps between Equations (4) and (12) was to represent \(f_{ring}\) in the same form as \(f(x)dx\). As you can see, Equation (12) is of the form \(f(r)dr\). We could've taken the integral, \(\int{f_{ring}}\), a long time ago to get the gravitational force exerted by the entire shell; but not until now (after having derived Equation (12)) could we write down
$$f_{shell}=\int{f_{ring}}=\int_{?_1}^{?_2}f(r)dr$$
and actually calculate this force. Taking the integral on both sides of Equation (12), we have
$$f_{shell}=\frac{Gmρπa}{D^2}\int_{?_1}^{?_2}\frac{r^2+D^2-a^2}{r^2}dr.\tag{13}$$
As you can see from Figure 1, the lower and upper-limits of integration for the integral in Equation (13) are \(D-a\) and \(D+a\), respectively. Thus,
$$f_{shell}=\frac{Gmρπa}{D^2}\int_{D-a}^{D+a}\biggl(1+\frac{D^2-a^2}{r^2}\biggr)dr.\tag{14}$$
To keep things from getting too messy, let's ignore the \(Gmρπa/D^2\) term in Equation (14) for just a moment and focus on calculating the definite integral in Equation (14). Doing so, we have
$$\int_{D-a}^{D+a}1+\frac{D^2-a^2}{r^2}dr=\int_{D-a}^{D+a}(1)dr+\int_{D-a}^{D+a}\frac{D^2-a^2}{r^2}dr$$
$$=\biggl[r\biggr]_{D-a}^{D+a}+\biggl[\frac{-1}{r}\biggr]_{D-a}^{D+a}(D^2-a^2)$$
$$=D+a-(D-a)+\biggl(\frac{1}{D-a}-\frac{1}{D+a}\biggr)(D^2-a^2).$$
If we multiply the two terms, \(1/(D-a)\) and \(-1/(D+a)\), by \((D+a)/(D+a)\) and \((D-a)/(D-a)\), respectively, we have
$$D+a-(D-a)+\biggl(\frac{D+a}{(D-a)(D+a)}-\frac{D-a}{(D+a)(D-a)}\biggr)(D^2-a^2)=2a+\biggl(\frac{2a}{D^2-a^2}\biggr)(D^2-a^2)=4a.$$
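As a quick numerical cross-check of this evaluation (a sketch with arbitrary sample values \(D > a > 0\); not part of the lesson itself), a midpoint-rule quadrature reproduces \(4a\):

```python
def shell_integral(D, a, n=100_000):
    """Midpoint-rule approximation of the integral of
    1 + (D^2 - a^2)/r^2 for r from D - a to D + a,
    which should equal 4*a."""
    lo, hi = D - a, D + a
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        r = lo + (i + 0.5) * h
        total += (1.0 + (D * D - a * a) / (r * r)) * h
    return total

print(shell_integral(3.0, 1.0), 4 * 1.0)
print(shell_integral(5.0, 2.0), 4 * 2.0)
```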
Substituting this result for the integral in Equation (14), we have
$$f_{shell}=\frac{Gmρπa}{D^2}(4a)t=\frac{Gmρ(4πa^2t)}{D^2},\tag{15}$$
where \(t\) is the small thickness of the shell, carried along implicitly in the volume of each ring.
The gravitational force exerted by a shell on a mass \(m\) outside of the shell is given by Equation (15). Since \(ρ\) is the mass density of the shell and \((4πa^2)t\) is its volume, we see that Equation (15) can also be written as
$$f_{shell}=G\frac{mM_{shell}}{D^2}.\tag{15}$$
What Equation (15) means is that a thin shell exerts a force on a particle outside of the shell as if all of the shell's mass were concentrated at a single point at the center of the shell. To find the gravitational force exerted by a solid sphere on a particle outside of the sphere, we must add up the forces on \(m\) due to infinitely many, infinitesimally thin shells. Since any such shell is infinitesimally thin, let's replace \(t\) with \(dr\). Taking the integral of both sides of Equation (15), we have
$$f_{\text{Solid sphere}}=\frac{4πGmρ}{D^2}\int_0^Rr^2dr\tag{16}$$
where \(R\) is the radius of the sphere. Calculating the integral in Equation (16), Equation (16) simplifies to
$$f_{\text{Solid sphere}}=\frac{4πGmρ}{D^2}\biggl(\frac{R^3}{3}\biggr)$$
or
$$f_{\text{Solid sphere}}=G\frac{m\biggl(ρ\frac{4}{3}πR^3\biggr)}{D^2}.\tag{17}$$
Since \(ρ\frac{4}{3}πR^3\) is just the mass of the solid sphere, Equation (17) simplifies to
$$f_{\text{Solid sphere}}=G\frac{mM_{sphere}}{D^2}.\tag{18}$$
Equation (18) tells us that a sphere of mass \(M\) and radius \(R\) exerts the same gravitational force on a point-mass \(m\) outside of the sphere as would a particle of mass \(M\) located at the sphere's center, with \(D\) their separation distance. When the Earth exerts a gravitational force on an object, if that object is very small compared to the Earth, then that object can be approximated as a point-mass and the Earth can be approximated as a sphere with uniform mass density. The gravitational force exerted on such objects is given by Equation (18).
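Equation (18) can also be verified numerically, by summing the ring contributions of Equation (7) over every ring of every thin shell (a sketch with \(G = m = ρ = 1\) and arbitrary \(R\) and \(D\); the discretization is mine, not part of the lesson):

```python
from math import sin, cos, pi, sqrt

def sphere_force(D, R=1.0, n_a=200, n_th=400):
    """Gravitational pull of a uniform solid sphere (radius R, density 1)
    on a unit point-mass at distance D > R, with G = 1, by summing ring
    contributions over the polar angle and over the shell radius."""
    total = 0.0
    da, dth = R / n_a, pi / n_th
    for i in range(n_a):
        a = (i + 0.5) * da                               # radius of the thin shell
        for j in range(n_th):
            th = (j + 0.5) * dth                         # polar angle locating the ring
            s2 = a * a + D * D - 2.0 * a * D * cos(th)   # (QP)^2, law of cosines
            dV = 2.0 * pi * (a * sin(th)) * (a * dth) * da   # ring volume
            total += dV * (D - a * cos(th)) / (s2 * sqrt(s2))
    return total

D, R = 2.0, 1.0
exact = (4.0 / 3.0) * pi * R**3 / D**2   # G*m*M_sphere/D^2 with G = m = rho = 1
print(sphere_force(D, R), exact)
```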
This article is licensed under a CC BY-NC-SA 4.0 license.
References
1. Kline, Morris. "Some Physical Applications of the Definite Integral." Calculus: An Intuitive and Physical Approach. Mineola, NY: Dover Publications, 1998. 502. Print.
Difference between revisions of "Probability Seminar"
(→March 7, Shamgar Gurevitch UW-Madison)
(→March 28, TBA)
== March 21, Spring Break, No seminar ==
== March 28, ==
== April 4, TBA ==
== April 11, [https://sites.google.com/site/ebprocaccia/ Eviatar Procaccia], [http://www.math.tamu.edu/index.html Texas A&M] ==
Revision as of 20:53, 4 February 2019

Spring 2019

Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title: Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 901 , Li-Cheng Tsai, Columbia University
Title: When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title: Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$.
February 14, TBA
February 21, TBA
Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue
March 7, TBA
March 14, TBA
March 21, Spring Break, No seminar
March 28, Shamgar Gurevitch UW-Madison
Title: Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the {\it character ratio}:
$$ trace(\rho(g))/dim(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant {\it rank}. This talk will discuss the notion of rank for GLn over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M). |
I was trying to understand formula (2.21) in Witten's paper "Quantum Field Theory and the Jones Polynomial" (link: https://projecteuclid.org/euclid.cmp/1104178138) (page 360).
There, it was mentioned, the gravitational Chern Simons action:
$$I(g)=\frac{1}{4\pi} \int_M Tr(\omega d \omega + \frac{2}{3} \omega^3)$$
depends on the choice of framing, i.e. trivialization of tangent bundle of M, in the way that $I(g)\rightarrow I(g)+2\pi s$, where s is how many "units" you twisted the framing.
My question is: how to imagine the twist of framing on this 3-manifold, and further, how to compute the number $s$ for two given trivializations of the tangent bundle?
A related, and might be more interesting question is about (2.25) of the same paper:
$$Z\rightarrow Z \exp(2\pi i s \frac{c}{24})$$
which says: if you twist the framing by s units, the total contribution to the partition function of the Chern-Simons theory with gauge group G at level k will be a phase proportional to $c/24$, where c is the central charge of the corresponding current algebra.
Again, how to understand this formula? E.g., is there a CFT derivation of this phase shift?
This post imported from StackExchange Physics at 2015-06-26 10:40 (UTC), posted by SE-user Yingfei Gu |
Difference between revisions of "Probability Seminar"
== April 11, [https://sites.google.com/site/ebprocaccia/ Eviatar Procaccia], [http://www.math.tamu.edu/index.html Texas A&M] ==
Revision as of 13:18, 29 March 2019
Spring 2019
Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title:
Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University
Title:
When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title:
Fluctuations of the KPZ equation in d\geq 2 in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in d\geq 2.
February 14, Timo Seppäläinen, UW-Madison
Title:
Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH
Title:
On the centered maximum of the Sine beta process
Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.
Title: Quantitative homogenization in a balanced random environment
Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d. random environment and the corresponding process, a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).
Wednesday, February 27 at 1:10pm Jon Peterson, Purdue
Title:
Functional Limit Laws for Recurrent Excited Random Walks
Abstract:
Excited random walks (also called cookie random walks) are models for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 21, Spring Break, No seminar
March 28, Shamgar Gurevitch, UW-Madison
Title:
Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
April 4, TBA
Title:
Outliers in the spectrum for products of independent random matrices
Abstract: For fixed positive integers m, we consider the product of m independent n by n random matrices with iid entries in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke. |
Dear Uncle Colin, I'm trying to sew a traditional football in the form of a truncated icosahedron. If I want a radius of 15cm, how big do the polygons need to be? -- Plugging In Euler Characteristic's Excessive Hello, PIECE, and thank you for your message! Getting an exact answer…
"What are the ch..." "About 11.7%," said the Mathematical Ninja. "Assuming $X$ is drawn from a Poisson distribution with a mean of 9 and we want the probability that $X=7$." "That's a fair assumption, sensei," pointed out the student, "given that that's what the sodding question says." A wiser student…
Dear Uncle Colin, I have a pair of parametric equations giving $x$ and $y$ each as a function of $t$. I'm happy with the first derivative being $\diff{y}{t} \div \diff{x}{t}$, but I struggle to find the second derivative. How would I do that? - Can't Handle An Infinitesimal Nuance Hi,…
Eagle-eyed friend of the blog @robjlow spotted an error in Uncle Colin's last answer. As I'm forever telling my students, making errors is how you learn; Rob has graciously delivered a lesson for us all. Thanks for keeping me honest! Recently, Uncle Colin gave a couple of ways to see…
Dear Uncle Colin, What is $\lim_{x \to \infty} \left\{ \sqrt{x^2 + 3x} - x\right\}$? - Raging Over Obnoxious Terseness Hi, ROOT, and thanks for your very brief question. My approach would be to split up the square root and use either a binomial expansion or completing the square, as follows:…
In this month's installment of Wrong, But Useful, our special guest co-host is @mathsjem (Jo Morgan in real life) from the indispensable resourceaholic.com. We start by talking about resourceaholic.com and how Jo manages to fit such a punishing blog schedule around being a nearly-full-time maths teacher. Colin wonders how writing…
"Arr, that be a scurvy-lookin' expression!" said the Mathematical Pirate. "A quartic on the top and a quadratic on the bottom. That Ninja would probably try to factorise and do it all elegant-like." "Is that not the point?" "When you got something as 'orrible as that, it's like puttin' lipstick…
Dear Uncle Colin, Help! My calculator is broken and I need to solve - or at least approximate - $0.1 = \frac{x}{e^x - 1}$! How would you do it? -- Every $x$ Produces Outrageous Numbers, Exploring New Techniques Hi, ExPONENT, and thanks for your message! That's a bit of a…
"Sensei! I have a problem!" The Mathematical Ninja nodded. "Bring it on." "There's a challenge! Someone has picked a five-digit integer and cubed it to get 6,996,364,932,376. I know it ends with a six, and I could probably get the penultimate digit with a bit of work... I just wondered… |
I was told that we would use a list if the graph is sparse and a matrix if the graph is dense. For me, it's just a raw definition. I don't see much beyond it. Can you clarify when each would be the natural choice to make?
Thanks in advance!
First of all, note that sparse means that you have very few edges, and dense means many edges, or an almost complete graph. In a complete graph you have $n(n-1)/2$ edges, where $n$ is the number of nodes.
Now, when we use matrix representation we allocate $n\times n$ matrix to store node-connectivity information, e.g., $M[i][j] = 1$ if there is edge between nodes $i$ and $j$, otherwise $M[i][j] = 0$.
But if we use adjacency list then we have an array of nodes and each node points to its adjacency list containing ONLY its neighboring nodes.
Now if a graph is sparse and we use matrix representation then most of the matrix cells remain unused which leads to the waste of memory. Thus we usually don't use matrix representation for sparse graphs. We prefer adjacency list.
But if the graph is dense then the number of edges is close to (the complete) $n(n-1)/2$, or to $n^2$ if the graph is directed with self-loops. Then there is no advantage of using adjacency list over matrix.
In terms of space complexity
Adjacency matrix: $O(n^2)$
Adjacency list: $O(n + m)$
where $n$ is the number of nodes and $m$ is the number of edges.
When the graph is an undirected tree, then
Adjacency matrix: $O(n^2)$
Adjacency list: $O(n + n)$, which is $O(n)$ (better than $n^2$)
When the graph is directed, complete, with self-loops then
Adjacency matrix: $O(n^2)$
Adjacency list: $O(n + n^2)$, which is $O(n^2)$ (no difference)
And finally, when you implement using a matrix, checking if there is an edge between two nodes takes $O(1)$ time, while with an adjacency list it may take time linear in $n$.
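As a small illustration (a Python sketch; the graph and node labels are made up for the example), here are both representations of the same sparse undirected graph, with the $O(1)$ matrix lookup versus the neighbour scan of the list:

```python
from collections import defaultdict

n = 5
edges = [(0, 1), (0, 4), (1, 2), (3, 4)]   # a sparse undirected graph

# Adjacency matrix: n*n cells regardless of how few edges there are
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = matrix[v][u] = 1

# Adjacency list: storage proportional to n + m
adj = defaultdict(list)
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

# Edge query: O(1) with the matrix ...
print(matrix[0][4] == 1)   # True
# ... but a scan of up to deg(u) neighbours with the list
print(4 in adj[0])         # True
```

For this graph the matrix stores 25 cells to record 4 edges, which is exactly the waste the answer describes.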
To answer by providing a simple analogy.. If you had to store 6oz of water, would you (generally speaking) do so with a 5 gallon container, or an 8oz cup?
Now, coming back to your question: if the majority of your matrix is empty, then why use it? Just list each value instead. However, if your list is really long, why not just use a matrix to condense it?
The reasoning behind list vs matrix really is that simple in this case.
P.S. a list is really just a single column matrix!!! (trying to show you just how arbitrary of a decision/scenario this is)
Consider a graph with $N$ nodes and $E$ edges. Ignoring low-order terms, a bit matrix for a graph uses $N^2$ bits no matter how many edges there are.
How many bits do you actually need, though?
Assuming that edges are independent, the number of graphs with $N$ nodes and $E$ edges is ${N^2 \choose E}$. The minimum number of bits required to store this subset is $\log_2 {N^2 \choose E}$.
We will assume without loss of generality that $E \le \frac{N^2}{2}$, that is, that half or fewer of the edges are present. If this is not the case, we can store the set of "non-edges" instead.
If $E = \frac{N^2}{2}$, $\log_2{N^2 \choose E} = N^2 + o(N^2)$, so the matrix representation is asymptotically optimal. If $E \ll N^2$, using Stirling's approximation and a little arithmetic, we find:
$$\log_2 {N^2 \choose E}$$ $$= \log_2 \frac {(N^2)!} {E! (N^2 - E)!}$$ $$= 2E \log_2 N + O(\hbox{low order terms})$$
If you consider that $\log_2 N$ is the size of an integer which can represent a node index, the optimal representation is an array of $2E$ node ids, that is, an array of pairs of node indexes.
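To make the comparison concrete, here is a small Python sketch (the values of $N$ and $E$ are arbitrary sample choices of mine) computing the three quantities discussed above:

```python
from math import comb, log2

N, E = 1000, 5000                     # sparse: E is far below N^2

matrix_bits = N * N                   # bit-matrix cost, independent of E
optimal_bits = log2(comb(N * N, E))   # log2 of the number of such graphs
pair_list_bits = 2 * E * log2(N)      # array of (u, v) node-index pairs

# the pair list is within a constant factor of optimal; the matrix is far off
print(round(optimal_bits), round(pair_list_bits), matrix_bits)
```

For these sample values the pair list costs roughly twice the information-theoretic minimum, while the bit matrix costs over twenty times as much.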
Having said that, a good measure of sparsity is the entropy, which is also the number of bits per edge of the optimal representation. If $p = \frac{E}{N^2}$ is the probability that an edge is present, the entropy is $- \log_2{p(1-p)}$. For $p \approx \frac{1}{2}$, the entropy is 2 (i.e. two bits per edge in the optimal representation), and the graph is dense. If the entropy is significantly greater than 2, and in particular if it's close to the size of a pointer, the graph is sparse. |
I'm no expert on Alan Turing at all, and the following will not directly answer your question, but it might give some context. Following the link provided, I also found the following page, which might shed some more light on things:
It says the following (my apologies for any errors in transcribing, I'm grateful for any corrections):
A formal expression $$ f(x) = \sum_{i=1}^n \alpha_i x^i $$ involving
the `indeterminate' (or variable) $x$, whose coefficients $\alpha_i$
are numbers in a field $K$, is called a ($K$-)polynomial of formal
degree $n$.
The idea of an `indeterminate' is distinctly subtle, I would almost
say too subtle. It is not (at any rate as van der Waerden [link added by me] sees it) the
same as variable. Polynomials in an indeterminate $x$, $f_1(x)$ and
$f_2(x)$, would not be considered identical if $f_1(x)=f_2(x)$ [for]
all $x$ in $K$, but the coefficients differed. They are in effect the
array of coefficients, with rules for multiplication and addition suggested
by their form
I am inclined to the view that this is too subtle and makes an inconvenient
definition. I prefer the indeterminate $x$ [?] be just the variable.
I think one thing to keep in mind here is that at this time anything relating to `computability' was not as clear as today. After all, Turing (and Church & Co.) were just discovering the essential notions.
In particular questions of intensionality vs. extensionality could have been an issue. It might be that Turing was pondering on the difference between functions (and also operations on functions) from a purely mathematical point of view (i.e., functions as extensional objects) vs. a computational point of view (i.e., functions as some form of formal description of a calculation process, which a priori can not be looked at in an extensional way).
All of this can be still seen in the context of the foundational crisis of mathematics (or at least strong echoes thereof). Related to this, are of course, questions of rigour, formalism, and denotation. This, in turn, is where your quote comes in. As others have outlined, Turing might have asked the question, what $\frac{\mathrm{d}y}{\mathrm{d}x}$ is (from a formal point of view), but also what is denoted by it, and (to his frustration) found that the answer to his question was not as clear as he wanted it to be. |
This is mainly a question about the remainder term of power series for elementary functions.
I'm very interested in aspects of calculating or computing elementary operations and functions, by which I mean:
trigonometric: $\sin$, $\cos$, $\tan$
inverse trig.: $\sin^{-1}$, $\cos^{-1}$, $\tan^{-1}$
log and exponential: $\ln$, $\exp$
hyperbolic: $\sinh$, $\cosh$, $\tanh$
inverse hyp.: $\sinh^{-1}$, $\cosh^{-1}$, $\tanh^{-1}$
powers, reciprocation, $\sqrt{\ \ \ }$
perhaps also:
gamma function: $\Gamma$ and a few other important functions
There are many contexts (of calculation). For example:
real versus complex arguments
known, fixed precision versus variable precision
numerical versus symbolic
There are many approaches and techniques available too. For example:
power series expansions and polynomial approximations
use of relationships between the functions
use of periodic or similar properties to shrink the domain
lookup tables and interpolation
CORDIC (used within some hand calculators I believe)
exact methods
interval or other error-tracking methods
Some good references to certain aspects include:
Digital Library of Mathematical Functions: Elementary Functions
Chee-Keng Yap, Fundamental problems of algorithmic algebra
Behrooz Parhami, Computer Arithmetic: Algorithms and Hardware Designs
The main gap in my knowledge is in finding bounds for the error or remainder term in partial power series expansions of certain of the above functions. Some are fairly simple to determine, whilst others seem to be awkward. Any pointers on this matter would be much appreciated.
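As one worked example of the kind of remainder bound asked about, here is a Python sketch for the exponential series (the helper name and the restriction $|x| \le 1$ are my choices), using the tail estimate $\sum_{k \ge n} |x|^k/k! \le e\,|x|^n/n! < 3\,|x|^n/n!$:

```python
from math import exp

def exp_taylor(x, eps):
    """Partial sums of the exponential series with an explicit tail bound.

    For |x| <= 1 the remainder after the terms 0..n-1 satisfies
    sum_{k>=n} |x|^k / k!  <=  e * |x|^n / n!  <  3 * |x|^n / n!.
    """
    assert abs(x) <= 1
    total, term, n = 0.0, 1.0, 0       # term holds x^n / n!
    while 3 * abs(term) > eps:         # stop once the tail bound drops below eps
        total += term
        n += 1
        term *= x / n
    return total

print(abs(exp_taylor(0.5, 1e-12) - exp(0.5)) < 1e-12)   # True
```

The same pattern (bound the tail by a geometric or factorial comparison, then stop when the bound is small enough) carries over to $\sin$, $\cos$, and the hyperbolics; the inverse functions are the awkward ones, since their series converge slowly near the edge of the domain.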
Likewise for any further references on any other aspects of or techniques for calculating elementary functions. |
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass rather the class
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the ground lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied to a given vector $(x,y)$, and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
Generally, speaking, given. $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
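To illustrate that shortcut concretely, here is a small Python sketch (exact rationals; `DELTA` and the sample triples are arbitrary choices of mine): encode $\alpha = a + b\sqrt{\delta}$ as a coefficient pair, implement the stated multiplication rule, and spot-check associativity:

```python
from fractions import Fraction as F

DELTA = F(2)   # any fixed non-square rational delta

def mul(a, b):
    """(a0 + a1*sqrt(DELTA)) * (b0 + b1*sqrt(DELTA)), as coefficient pairs."""
    a0, a1 = a
    b0, b1 = b
    # the rule: alpha x beta = (ac + bd*delta) + (bc + ad) sqrt(delta)
    return (a0 * b0 + a1 * b1 * DELTA, a1 * b0 + a0 * b1)

# spot-check associativity on a few rational triples
samples = [((F(1), F(2)), (F(3), F(-1)), (F(-2), F(5))),
           ((F(0), F(1)), (F(0), F(1)), (F(7), F(3)))]
for x, y, z in samples:
    assert mul(mul(x, y), z) == mul(x, mul(y, z))
print("associativity holds on the samples")
```

A numeric spot-check is of course no proof, but it does catch transcription slips in the long hand-expanded expressions before you commit them to LaTeX.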
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extensions is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest possible field
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder whether one can show that is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put it in another way, an equivalent formulation of that (possibly open) problem is:
> Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = c \prod_{k=1}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic
Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows:
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^n}.$$
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test
therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual, thus transcendental numbers can be constructed in a finitist framework
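A quick numeric sketch of those partial sums (Python with exact rationals; $b = 10$ and the cutoff $M = 5$ are arbitrary illustrative choices of mine):

```python
from fractions import Fraction
from math import factorial

b, M = 10, 5
partials = []
s = Fraction(0)
for k in range(1, M + 1):
    s += Fraction(1, b ** factorial(k))   # add 1 / b^(k!)
    partials.append(s)

# monotonically increasing, with super-exponentially shrinking increments
assert all(p < q for p, q in zip(partials, partials[1:]))
print(float(partials[-1]))   # the first digits of Liouville's constant
```

Each increment is $b^{-k!}$, so the gap between successive partial sums shrinks far faster than any geometric rate, which is what makes the limit so well approximated by rationals.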
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. For anything else, I need to finish that book before commenting
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
> are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion |
The question is whether someone apart from the TC can find the factorization of $m$.
Yes; however, I will first address the easier-to-answer question of "could the holder of private key $a$ decrypt traffic to the holder of private key $b$?"
The answer to that is "yes"; Alice, the holder of the private key $a$ has $e_a$, $d_a$, and $e_b$ (Bob's public key).
We know that $e_a d_a \equiv 1 \pmod {\lambda(n)}$ (where $\lambda(n) = \text{lcm}(p_1 - 1, p_2 - 1, ..., p_k-1)$); that is, $e_a d_a - 1 = m\lambda(n)$ for some integer $m$. So, what Alice can do is compute $d'_b = e_b^{-1} \bmod{ (e_a d_a - 1)} = e_b^{-1} \bmod{ m\lambda(n)}$; it is easy to see that $e_b d'_b \equiv 1 \pmod{ \lambda(n)}$, that is, $d'_b$ will work as a decryption key for traffic to Bob. It is likely to be larger than Bob's key $d_b$; however, it is not so large that Alice can't practically use it.
However, to address the question "could Alice factor $n$?" (even though the above argument shows she doesn't need to), the answer is also "yes"; the same probabilistic method that works against two-prime RSA moduli also works in this case (it just may take more iterations to complete the factorization).
To review: Alice computes $e_a d_a - 1 = 2^k z$ for $k$ large enough to make $z$ odd. Then, she selects a random value $g$ and computes:
\begin{align*} h_0 &= g^z \bmod n \\ h_1 &= h_0^2 \bmod n \\ h_2 &= h_1^2 \bmod n \\ &\vdots \\ h_k &= h_{k-1}^2 \bmod n\end{align*}
It should be clear that, unless the original $g$ happened not to be relatively prime to $n$, the final value $h_k = 1$; we look at the largest $i$ where $h_i \ne 1$. If $i > 0$ and $h_i \not\equiv -1 \pmod n$, that gives us a nontrivial factor $\gcd(n, h_i - 1)$.
It can be shown that at least half the possible initial $g$ values will yield a factor, and that this factor is effectively random over the possible factorizations, hence rerunning this test using different $g$ values will quickly reveal all the factors. |
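A sketch of that procedure in Python, on toy parameters of my own choosing ($n = 61\cdot 53\cdot 71$, $e = 11$, $d = 3971$, so that $ed \equiv 1 \pmod{\lambda(n)}$ with $\lambda(n) = \operatorname{lcm}(60,52,70) = 5460$); real keys are of course vastly larger:

```python
import math
import random

def is_prime(m):
    """Trial-division primality test; fine for toy sizes."""
    if m < 2:
        return False
    return all(m % d for d in range(2, math.isqrt(m) + 1))

def factor_from_key(n, e, d, tries=200, seed=1):
    """Recover the prime factors of a (possibly multi-prime) modulus n
    from e*d - 1, which is a multiple of lambda(n)."""
    z = e * d - 1
    k = 0
    while z % 2 == 0:            # write e*d - 1 = 2^k * z with z odd
        z //= 2
        k += 1
    rng = random.Random(seed)
    factors = {n}
    for _ in range(tries):
        if all(is_prime(f) for f in factors):
            break
        g = rng.randrange(2, n - 1)
        h = pow(g, z, n)
        chain = []               # h_0 = g^z, then repeated squarings
        for _ in range(k + 1):
            chain.append(h)
            h = pow(h, 2, n)
        for f in [f for f in factors if not is_prime(f)]:
            split = None
            for h_i in chain:
                for t in (math.gcd(f, h_i - 1), math.gcd(f, h_i + 1)):
                    if 1 < t < f:        # nontrivial split of f
                        split = t
                        break
                if split:
                    break
            if split:
                factors.discard(f)
                factors.update((split, f // split))
    return sorted(factors)

# Toy example: n = 61 * 53 * 71, e = 11, d = 3971 (11 * 3971 = 8 * 5460 + 1)
print(factor_from_key(61 * 53 * 71, 11, 3971))
```

Each random $g$ splits a composite factor with probability at least one half, so a few hundred tries recover the full factorization with overwhelming probability.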
I was watching the MIT video lecture on Prim's algorithm for minimum spanning trees. Why do we need the swap step when proving the theorem that, if we choose a set $A$ of vertices of the minimum spanning tree of $G(V,E)$, with $A\subset V$, then the edge with the least weight connecting $A$ to $V-A$ will always be in the minimum spanning tree? The professor does the swap step at 59:07 in the video.
Charles Leiserson is presenting a proof of what is known as the
blue rule in Tarjan's "Data Structures and Network Algorithms" book.
The proof goes by contradiction: you assume that the edge $(u,v)$ is not in the MST $T$. Don't be confused by the board; it has to be $(u,v)\not\in T$. Let $(a,b)$ be the edge that we swap with $(u,v)$. Both edges are cut edges between $A$ and $V\setminus A$. The tree $T'$ after the swap has weight $$w(T')=w(T)+w((u,v))-w((a,b)).$$ Since $(u,v)$ is the lightest edge crossing the cut, $w((u,v))<w((a,b))$, so $w(T')<w(T)$, which is a contradiction, and therefore the assumption $(u,v)\not\in T$ was wrong.
Notice that this proof uses the assumption that all edge weights are distinct, however it is easy to modify it for general edge weights. |
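For intuition, here is a brute-force check of the cut property on a toy graph of my own choosing (distinct weights, so the MST is unique). It enumerates every spanning tree and confirms that the lightest edge crossing a cut belongs to the minimum one:

```python
from itertools import combinations

def spanning_trees(n, edges):
    """Yield every spanning tree of an n-vertex graph (brute force)."""
    for combo in combinations(edges, n - 1):
        parent = list(range(n))          # union-find forest
        def find(v):
            while parent[v] != v:
                v = parent[v]
            return v
        acyclic = True
        for u, v, w in combo:
            ru, rv = find(u), find(v)
            if ru == rv:                 # edge would close a cycle
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:                      # n-1 acyclic edges => spanning tree
            yield combo

edges = [(0, 1, 1), (0, 2, 4), (1, 2, 2), (1, 3, 5), (2, 3, 3)]
mst = min(spanning_trees(4, edges), key=lambda t: sum(w for _, _, w in t))

A = {0}  # pick a cut; the lightest crossing edge must be in the MST
crossing = [e for e in edges if (e[0] in A) != (e[1] in A)]
lightest = min(crossing, key=lambda e: e[2])
print(lightest in mst)   # -> True
```

Repeating the check over every nonempty proper subset $A$ gives the same answer, which is exactly the blue rule.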
Is $\exists x (P(x) \land Q(x)) \rightarrow \exists x P(x) \land \exists x Q(x)$ logically valid?
I can't find an interpretation in which the formula is false.
Yes, indeed it is valid. No counterexample to be found.
If there exists an $x$ for which both ($P(x)$ and $Q(x)$) hold, then there certainly exists an $x$ for which $P(x)$ holds, and there exists an $x$ for which $Q(x)$ holds.
The
converse implication is not valid, however. If there exists an $x$ that's a pumpkin and there exists an $x$ that is green, it does not follow that there exists an $x$ that is a green pumpkin.
Yes, it's valid. If there is something which is both $P$ and $Q$, then there is something which is $P$ AND there is something which is $Q$ (viz. the same thing which was both $P$ and $Q$). |
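This can even be checked mechanically on a small domain. The sketch below tries every interpretation of $P$ and $Q$ over a 3-element domain and also exhibits the pumpkin-style counterexample to the converse:

```python
from itertools import product

domain = range(3)

def forward(P, Q):
    """Ex(P(x) & Q(x)) -> (Ex P(x)) & (Ex Q(x))"""
    ante = any(P[x] and Q[x] for x in domain)
    cons = any(P.values()) and any(Q.values())
    return (not ante) or cons

interps = [dict(zip(domain, vals)) for vals in product([False, True], repeat=3)]
print(all(forward(P, Q) for P in interps for Q in interps))   # -> True

# Converse counterexample: x=0 is a pumpkin, x=1 is green, nothing is both.
P = {0: True, 1: False, 2: False}
Q = {0: False, 1: True, 2: False}
print(any(P.values()) and any(Q.values()))         # antecedent of converse
print(any(P[x] and Q[x] for x in domain))          # consequent fails
```

A finite check is of course not a proof of validity over all domains, but the semantic argument above is, and the brute force makes the asymmetry between the two directions concrete.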
I'm teaching an introductory analysis course, and I am seeking some feedback on how proofs should be written on the board in class in order to maximize learning. I realize that there is an opinion-based component to this question, but would appreciate any citations of research on the subject (not necessary) in addition to arguments on how student learning might be impacted.
My question is regarding the actual style in which the proof is written on the board, including but not limited to the following points: Should theorems and proofs always be written on the board in complete English sentences, or is it OK to use abbreviations and symbolic shorthand? Is it OK to use slightly different notation than in the textbook? Is it OK to use slightly (or even completely) different arguments than what is used in the textbook?
I'm including a couple theorems from Wade's introduction to analysis as examples. I'll type up the theorem statement and proof exactly as in the text and then how I might write it on the board in class.
Here is the theorem and proof, nearly word for word as they appear in the book:

2.12 Theorem. Suppose that $\{x_n\}$ and $\{y_n\}$ are real sequences and are convergent. Then $$\lim_{n\rightarrow\infty}(x_n+y_n)=\lim_{n\rightarrow\infty}x_n+\lim_{n\rightarrow\infty}y_n.$$

Proof. Suppose $x_n\rightarrow x$ and $y_n\rightarrow y$ as $n\rightarrow\infty.$ Let $\epsilon>0$ and choose $N\in\mathbb N$ such that $n\geq N$ implies $|x_n-x|<\epsilon/2$ and $|y_n-y|<\epsilon/2.$ Thus $n\geq N$ implies $$|(x_n+y_n)-(x+y)|\leq |x_n-x| + |y_n-y|<\frac\epsilon2+\frac\epsilon2=\epsilon. \quad \blacksquare$$
Now here is how I might actually write it on the board in class:
Thm 2.12 $\quad x_n\rightarrow x, y_n\rightarrow y \quad \Longrightarrow \quad x_n+y_n\rightarrow x+y.$

Proof. Given $\epsilon>0$, choose $N$ s.t. $|x_n-x|<\epsilon/2$, $|y_n-y|<\epsilon/2$ $\ \forall n>N.$ $$\begin{aligned}|(x_n+y_n)-(x+y)|&\leq |x_n-x| + |y_n-y|\\&<\frac\epsilon2+\frac\epsilon2=\epsilon. \qquad\qquad \blacksquare\end{aligned}$$
The main difference is that the text is more "wordy", while I write everything out more "symbolically". Note that I do stay consistent with the numerical labeling of theorems (e.g. 2.12) in the book so that students can more easily reference the text to compare/study.
I also sometimes use slightly different arguments than in the text, add in details that are left out of the book or leave out details that are explained in the book.
E.g.:
2.8 Theorem. Every convergent sequence is bounded.

Proof. Assume $x_n\rightarrow a.$ Given $\epsilon=1,$ there is an $N\in\mathbb N$ such that $n\geq N$ implies $|x_n-a|<1.$ Hence by the triangle inequality, $|x_n|<1+|a|.$ On the other hand, if $1\leq n \leq N,$ then $$|x_n|\leq M:=\max\{|x_1|,|x_2|,\ldots,|x_N|\}.$$ Therefore, $\{x_n\}$ is dominated by $\max\{M,1+|a|\}. \quad \blacksquare$
Now here is how I might actually write it on the board in class:
Thm 2.8 $\quad x_n\rightarrow x \quad \Longrightarrow \quad x_n$ bdd.

Proof. Given $\epsilon>0$ we can find $N$ s.t. $\forall n>N,$ $|x_n-x|<\epsilon.$
So $|x_n|\leq |x|+\epsilon$ when $n>N.$
Let $M=\max\{|x_k|; k\leq N\}.$
Therefore $|x_n|\leq\max\{M,|x|+\epsilon\}$ for any $n. \qquad \blacksquare$
Of course, I also verbally explain each step of the work and will sometimes draw diagrams to help them informally/intuitively understand the argument, etc.
My justification is that there is no need to copy it exactly as it is in the text as the students can simply read that, and that it might be beneficial for them to see it written up slightly differently so that they can become accustomed to different styles of mathematical writing. |
August 2nd, 2016, 09:13 AM
# 1
Member
Joined: Aug 2016
From: afghanistan
Posts: 55
Thanks: 1
numerical method questions ...?
i had this subject called "computer oriented numerical methods in c language" ...
Code:
#include<stdio.h>
#include<conio.h>
#include<math.h>

/* Lagrange interpolation: given n points (x_i, f(x_i)), estimate f(p). */
void main()
{
    float x[10], y[10], temp = 1, f[10], sum = 0, p;  /* sum must start at 0 */
    int i, n, j, k = 0;

    clrscr();
    printf("\nhow many record you will be enter: ");
    scanf("%d", &n);
    for (i = 0; i < n; i++) {
        printf("\n\nenter the value of x%d: ", i);
        scanf("%f", &x[i]);
        printf("\n\nenter the value of f(x%d): ", i);
        scanf("%f", &y[i]);
    }
    printf("\n\nEnter X for finding f(x): ");
    scanf("%f", &p);

    /* Build each Lagrange basis term L_k(p), scaled by y[k]. */
    for (i = 0; i < n; i++) {
        temp = 1;
        k = i;
        for (j = 0; j < n; j++) {
            if (k == j)
                continue;
            temp = temp * ((p - x[j]) / (x[k] - x[j]));
        }
        f[i] = y[i] * temp;
    }
    for (i = 0; i < n; i++)
        sum = sum + f[i];

    printf("\n\n f(%.1f) = %f ", p, sum);
    getch();
}
Code:
/* ______________________________________
   OUTPUT
   ______________________________________
how many record you will be enter: 4

enter the value of x0: 0
enter the value of f(x0): 0
enter the value of x1: 1
enter the value of f(x1): 2
enter the value of x2: 2
enter the value of f(x2): 8
enter the value of x3: 3
enter the value of f(x3): 27

Enter X for finding f(x): 2.5

 f(2.5) = 15.312500 */
i have few doubts from it ..
in certain equations why do we change the unit of x to delta x , and the unit of y to delta y ??
August 2nd, 2016, 09:35 AM
# 2
Senior Member
Joined: Jun 2015
From: England
Posts: 915
Thanks: 271
I couldn't see where the delta x and delta y appeared; can you point them out in some way, please?

Apart from that, isn't delta x just another name for differences such as (x1 - x2) and (x0 - x1)?
August 2nd, 2016, 09:57 AM
# 3
studiot ,
thanks for the reply ...
i thought solving certain differential equations with numerical methods always involved ...changing the unit of x to delta x , and the unit of y to delta y ...
in here for example ...
why are we changing it like that ??
August 2nd, 2016, 12:34 PM
# 4
You don't 'change x to delta x...... etc'
Delta x is a difference between the values of x at two points.
It is never the value of x at any point.
However, many equations contain derivatives, may be solved by differentiating, or may already be a differential equation.
In that case a numerical approximation of the value of the derivative may be very useful.
So in this case we can replace $\displaystyle \frac{{dy}}{{dx}}$ by $\displaystyle \frac{{\Delta y}}{{\Delta x}}$
Numerically we can calculate delta y / delta x as
$\displaystyle \frac{{{y_2} - {y_1}}}{{{x_2} - {x_1}}}$
for two points 1 and 2.
Then the derivative reduces to a number we can put instead of the derivative in the equation so reducing it to an algebraic one.
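As a tiny illustration of that replacement (my own example, in Python for brevity): take $y = x^2$ near $x = 1$, where the true derivative is $2$:

```python
def f(x):
    return x * x

x1, x2 = 1.0, 1.001                    # two nearby points
slope = (f(x2) - f(x1)) / (x2 - x1)    # delta y / delta x
print(slope)                           # close to the true derivative dy/dx = 2
```

Shrinking the gap between the two points drives the difference quotient toward the derivative, which is exactly what the numerical methods exploit.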
But we don't have to do this, we can treat the derivative as a D operator and the equation as a polynomial in D to be solved numerically.
It is true that differences (deltas) can be used to solve polynomial equations, but again this is not the only way.
For example consider the polynomial
$\displaystyle 5{x^4} - 3{x^3} + 25{x^2} - 14 = 0$
This can be rearranged to give
$\displaystyle x = \frac{1}{5}\sqrt {14 + 3{x^3} - 5{x^4}} $
In this form the polynomial can be readily and quickly solved with a few iterations, starting from a trial guess of x = 1 and feeding the output back into the equation.
No differences or deltas are required.
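That feedback loop is a few lines in any language; a Python sketch of exactly the iteration described:

```python
from math import sqrt

x = 1.0                                       # trial guess
for i in range(30):
    x = sqrt(14 + 3 * x**3 - 5 * x**4) / 5    # feed the output back in
print(x)                                      # settles on a root
print(5 * x**4 - 3 * x**3 + 25 * x**2 - 14)   # residual, near zero
```

The iterates converge quickly here (to roughly $x \approx 0.7407$) because the rearranged right-hand side changes slowly near the root; other rearrangements of the same polynomial may diverge.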
August 2nd, 2016, 11:23 PM
# 5
studiot ,
thanks a lot ...
i guess i will just focus on one question at a time ..
all these requires some time to learn ... we had all these packed into one semester ...
just 6 months to learn a lot of mathematics and the c programming language associated with it ...
i have also been following this book , which i found online ...
http://ins.sjtu.edu.cn/people/mtang/textbook.pdf
Euler's Method for example ...
Our goal is to find a numerical solution, i.e. we must find a set of points which lie along the initial value problem's solution
Numerical Methods--Euler's Method
$y_{n+1} = y_n + h\,f(x_n, y_n)$
11. Euler's Method - a numerical solution for Differential Equations
We'll finish with a set of points that represent the solution, numerically.
is this like function behavior at certain points??
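The update rule $y_{n+1} = y_n + h\,f(x_n, y_n)$ pasted above can be tried directly. A minimal sketch (my own, in Python rather than C for brevity) for $y' = y - x$, $y(0) = 1/2$, whose exact solution is $y = x + 1 - e^x/2$:

```python
import math

def euler(f, x0, y0, h, x_end):
    """March y' = f(x, y) from (x0, y0) to x_end in steps of h."""
    x, y = x0, y0
    steps = round((x_end - x0) / h)   # count steps; avoids float != pitfalls
    for _ in range(steps):
        y += h * f(x, y)              # y_{n+1} = y_n + h f(x_n, y_n)
        x += h
    return y

approx = euler(lambda x, y: y - x, 0.0, 0.5, 0.001, 1.0)
exact = 1.0 + 1.0 - math.e / 2        # y = x + 1 - e^x/2 at x = 1
print(approx, exact)
```

The answer is indeed "a set of points": each step produces one $(x_n, y_n)$ pair lying approximately on the solution curve, and halving $h$ roughly halves the error.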
August 3rd, 2016, 08:26 AM
# 6
The solution to an algebraic equation is a single number or a handful of numbers.
If you like it is a single point or a few points.
The solution to an ordinary differential equation is a specific function, which has lots of points plus one or more arbitrary constants of integration.
The solution to partial differential equation is again a specific function but this time plus one or more arbitrary functions.
Numerical methods attempt to reproduce these functions by 'calculating' the graph in steps rather than deducing an algebraic expression for the graph.
For the purposes of this post I am taking the word graph as the same as the function it represents.
August 3rd, 2016, 12:03 PM
# 7
studiot ,
thanks a lot for the answers ...
i don't know if i am wording it right ...
usually for differential equation questions ... isn't it a bit like this ??
find the function , that has this instantaneous rate of change ...
and the answer is a function that has the property of ,this instantaneous rate of change ...
so with numerical methods the answer is a set of points that has certain underlying qualitative property of , this instantaneous rate of change ... ??
August 3rd, 2016, 01:07 PM
# 8
Don't forget that only the first derivative represents the instantaneous rate of change.
Differential equations can contain higher derivatives.
When you are learning about methods or testing them, it is handy to have an analytical solution to calibrate or compare against.
In engineering many differential equations are non linear and/or have no known analytic solutions.
So in that case the only approach available is a numeric one.
Often engineers simplify (they love it) so in finite elements difficult equations are often replaced by simpler ones that ease computer resources.
Gauss / Green / Divergence theorems are used to replace interior elements with boundary elements, which also reduces calculation effort.
September 26th, 2016, 11:28 PM
# 10
i have few more doubts , i need a little bit more clarity .
sorry for messing up the post like this .but i still have some doubts
let me try this again ...
this was supposed to start with a program for a polynomial factorization first ...
example ...
this is an instantaneous rate of change
dy/dx gives you the change in 'y' with respect to change in 'x'
find the function that gives you this rate of change in 'y' with respect to change in 'x' ...
and the answer is a function that has the property of ,this instantaneous rate of change in 'y' with respect to change in 'x' ... ...
example ...
this is an instantaneous rate of change ...
find the function ,the solution , the answer which is a set of points that has certain underlying qualitative property which produced, this instantaneous rate of change ...
Code:
#include<stdio.h>
#include<stdlib.h>
#include<math.h>

int main(void)
{
    float x;        /* defining variables */
    float y;
    float h;
    float targetx;

    puts("This program will solve the differential equation y' = y - x \nusing Euler's Method with y(0)=1/2 \n\n");
    puts("Please enter the desired constant step size. (h-value)\n\n");
    scanf("%f", &h);                    /* defining step size */
    puts("\n\nNow enter the desired x-value to solve for y.\n\n");
    scanf("%f", &targetx);

    y = 0.5f;
    x = 0.0f;
    puts("\n\nX          Y");
    /* Comparing floats with != can loop forever due to rounding;
       stop once x has reached the target instead. */
    while (x < targetx - h / 2) {
        printf("\n\n%f %f", x, y);
        y = y + ((y - x) * h);          /* Euler step: y += h * f(x, y) */
        x = x + h;
    }
    printf("\n\n%f %f\n", x, y);
    printf("\nThe value of y at the given x is %f.\n\n", y);
    system("pause");
    return 0;
}
in the case of an analytic solution :
this is an instantaneous rate of change
dy/dx gives you the change in 'y' with respect to change in 'x'
find the function that gives you this rate of change in 'y' with respect to change in 'x' ...
in the case of an numerical solution :
this is an instantaneous rate of change
dy/dx gives you the change in 'y' with respect to change in 'x'
find the function that gives you this rate of change in 'y' with respect to change in 'x' ...
??
Regarding potential and kinetic energy: when a pendulum swings, we do not count the force from the string because it doesn't affect the energy. But why? If we split the force from the string into its components, it does affect the force of gravity, which in turn affects the acceleration of the pendulum, which in turn affects the kinetic energy. Similarly for something like skating down a ramp: the normal force can be split into its components, thereby affecting the gravitational force.
When we calculate potential and kinetic energy of a pendulum, we can ignore the action of the string, because the string (if it does not stretch and contract) does not produce or consume any energy.
We can predict that because there is no source or storage of energy behind it.
We can also notice that its force is always normal to the trajectory of the mass, and therefore the Force × Distance product is always zero as well.
The work-kinetic energy theorem states that $W_{net}=\Delta KE$. But we also know that $W=\vec{F}\cdot\vec{d}=Fd\cos(\theta)$. In the case of the ramp, the normal force is perpendicular to the ramp, but the displacement is along the ramp. This means the displacement is perpendicular to the normal force and $W=\vec{F_N} \cdot \vec{d}=F_N d\cos(90^\circ)=0$, and there is no contribution to the change in the kinetic energy.
Same idea for the pendulum. The tension is always pointed toward the center of the object's circular path. The tension vector is always perpendicular to the circle the object is traveling on. Again, making $W=\vec{F_T}\cdot\vec{d}=F_T d\cos(90^\circ)=0$.
In these systems, if they are closed (and there is no friction or air resistance), $\Delta KE=-\Delta PE$ so the same idea applies to potential energy.
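Both claims are easy to confirm numerically. A minimal sketch (assumed toy values of my choosing: unit mass, $L = 1$ m, $g = 9.8\ \mathrm{m/s^2}$) integrates a pendulum with a semi-implicit Euler step and checks that total mechanical energy stays constant, i.e. the string contributes no work, and that the tension direction is perpendicular to the velocity at every step:

```python
import math

g, L = 9.8, 1.0                  # assumed toy values; unit mass
theta, omega = 0.5, 0.0          # released from rest at 0.5 rad
dt = 1e-4

def energy(theta, omega):
    # kinetic + potential, measured from the lowest point
    return 0.5 * (L * omega) ** 2 + g * L * (1 - math.cos(theta))

E0 = energy(theta, omega)
max_drift = 0.0
max_dot = 0.0
for _ in range(50000):                        # about 5 s of motion
    omega -= (g / L) * math.sin(theta) * dt   # semi-implicit Euler step
    theta += omega * dt
    max_drift = max(max_drift, abs(energy(theta, omega) - E0))
    # tension direction (toward the pivot) vs. velocity direction
    t_dir = (-math.sin(theta), math.cos(theta))
    v = (L * omega * math.cos(theta), L * omega * math.sin(theta))
    max_dot = max(max_dot, abs(t_dir[0] * v[0] + t_dir[1] * v[1]))

print(max_drift)   # stays tiny: the string neither adds nor removes energy
print(max_dot)     # ~0: tension is perpendicular to the motion at every step
```

The tiny residuals come only from time discretization and floating-point rounding; the underlying physics makes both quantities exactly zero.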
The diagram you drew is misleading. We are not taking apart the components of normal force. Instead, we are taking apart the components of gravitational force. Imagine a box sliding down a ramp. The ramp has an angle of $\theta$. Firstly, you must assign the x and y directions. Let x be parallel to the ramp and y be perpendicular to it. We must split the gravitational force $F_g$ on the box along x and y. The force $F_g$ is split into $F_{gx}$ and $F_{gy}$. We use trigonometry to determine the magnitude of the forces $$F_{gx}=F_g\sin{\theta}$$ $$F_{gy}=F_g\cos{\theta}$$The normal force is perpendicular to the slope and its magnitude is $$F_N=F_{gy}=F_g\cos{\theta}$$ The normal force vector is facing the opposite of the gravitational force vector.$$F_n=-F_{gy} \Rightarrow F_N+F_{gy} = 0$$ Therefore, the net force along y cancels out. Along x, the net force is $F_{gx} $. The same can be done on a pendulum. We split the gravitational force into its components. If we were to split the pendulum force into its components just like you did in your diagram, the pendulum force vector facing in the opposite direction of the gravitational force vector would be smaller than the gravitational force. $$F_{p1}<F_g$$ In fact, at almost all points on the pendulum, the total pendulum force is less than the gravitational force.
Notice that your $F_{p1}$ and $F_{p2}$ components point toward opposite sides of the string. That means one points in the direction of motion and the other points against the direction of motion, so one will contribute positive work and the other will contribute negative work.
We might leave it as an exercise for the student to show that the contributions cancel out, but this is an opportunity to talk about the importance of the mathematical properties of vector products.
In particular we know that the dot (inner; scalar) product of vectors distributes over addition $$ \vec{A} \cdot \left(\vec{B} + \vec{C}\right) = \vec{A} \cdot \vec{B} + \vec{A} \cdot \vec{C} \;,$$ so we conclude immediately that if $\vec{F}_\text{pend} = \vec{F}_{p1} + \vec{F}_{p2}$ and $\vec{F}_\text{pend} \cdot \mathrm{d}\vec{s} = 0$, then $$ \vec{F}_{p1}\cdot \mathrm{d}\vec{s} + \vec{F}_{p2}\cdot \mathrm{d}\vec{s} = 0\;,$$ and that your proposed treatment has the same result.
|
Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE
(Elsevier, 2017-11)
The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as a function of event multiplicity. The interesting relative increase ...
Multiplicity dependence of jet-like two-particle correlations in pp collisions at $\sqrt s$ =7 and 13 TeV with ALICE
(Elsevier, 2017-11)
Two-particle correlations in relative azimuthal angle ($\Delta\phi$) and pseudorapidity ($\Delta\eta$) have been used to study heavy-ion collision dynamics, including medium-induced jet modification. Further investigations also showed the ...
The new Inner Tracking System of the ALICE experiment
(Elsevier, 2017-11)
The ALICE experiment will undergo a major upgrade during the next LHC Long Shutdown scheduled in 2019–20 that will enable a detailed study of the properties of the QGP, exploiting the increased Pb-Pb luminosity ...
Azimuthally differential pion femtoscopy relative to the second and third harmonic in Pb-Pb 2.76 TeV collisions from ALICE
(Elsevier, 2017-11)
Azimuthally differential femtoscopic measurements, being sensitive to spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze-out, provide very important information on the ...
Charmonium production in Pb–Pb and p–Pb collisions at forward rapidity measured with ALICE
(Elsevier, 2017-11)
The ALICE collaboration has measured the inclusive charmonium production at forward rapidity in Pb–Pb and p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV and $\sqrt{s_{\rm NN}}=8.16$ TeV, respectively. In Pb–Pb collisions, the J/$\psi$ and $\psi$(2S) nuclear ...
Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions
(Elsevier, 2017-11)
Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...
Jet-hadron correlations relative to the event plane at the LHC with ALICE
(Elsevier, 2017-11)
In ultra-relativistic heavy-ion collisions at the Large Hadron Collider (LHC), conditions are met to produce a hot, dense and strongly interacting medium known as the Quark Gluon Plasma (QGP). Quarks and gluons from incoming ...
Measurements of the nuclear modification factor and elliptic flow of leptons from heavy-flavour hadron decays in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 and 5.02 TeV with ALICE
(Elsevier, 2017-11)
We present the ALICE results on the nuclear modification factor and elliptic flow of electrons and muons from open heavy-flavour hadron decays at mid-rapidity and forward rapidity in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
I have a problem evaluating this limit. When I try it, I get $0$, which is not right. Can you help me with how to start on this limit?
$$ \lim_{x\to \infty^{-}}{\frac{x}{3}}{\left |\arctan{\frac{9}{x}}\right |} $$
When $x$ is positive, $\frac 9x$ will be positive too, so $\arctan\frac9x$ is positive and the $|\cdot|$ is a no-op. Therefore $$ \lim_{x\to\infty} \frac x3 \left|\arctan\frac 9x\right| = \lim_{x\to\infty} \frac x3 \arctan \frac 9x $$
Now switch variables to $y=\frac3x$ to get $$ \lim_{y\to 0^+} \frac{\arctan(3y)}{y} $$ where you can either apply L'Hospital's rule, or (which amounts to the same thing) recognize it as the definition of a derivative taken at $0$. This derivative can then be computed symbolically.
Consider that as $x \to \infty$, $\arctan{\frac{9}{x}} > 0$, so the original limit can be re-written as: $$ \lim_{x \to \infty}\frac{x}{3} \left| \arctan{\frac{9}{x}} \right| = \lim_{x \to \infty}\frac{x}{3} \arctan{\frac{9}{x}} $$ Now, we factor out $\frac{1}{3}$ and re-write the expression to apply L'Hospital's Rule: $$ \lim_{x \to \infty}\frac{x}{3} \arctan{\frac{9}{x}} = \frac{1}{3}\lim_{x \to \infty}x \arctan{\frac{9}{x}} = \frac{1}{3}\lim_{x \to \infty}\frac{ \arctan{\frac{9}{x}}}{\frac{1}{x}} $$ Applying L'Hospital's Rule we have: $$ \frac{1}{3}\lim_{x \to \infty} \frac{\frac{1}{1+ \left( \frac{9}{x} \right)^{2}} \cdot \left(- \frac{9}{x^{2}}\right)}{-\frac{1}{x^{2}}} $$ Simplifying the factors gives us: $$ \frac{1}{3}\lim_{x \to \infty}\frac{9}{1+ \left( \frac{9}{x} \right)^{2}} $$ And finally, $$ \frac{1}{3}\lim_{x \to \infty}\frac{9}{1+ \left( \frac{9}{x} \right)^{2}} = \frac{1}{3} \cdot 9 = 3 $$ |
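As a quick numerical sanity check (my addition, not part of either answer), one can evaluate the expression for growing $x$ and watch it approach $3$ from below:

```python
import math

# Numerically check that (x/3)*|arctan(9/x)| approaches 3 as x grows.
def f(x):
    return (x / 3.0) * abs(math.atan(9.0 / x))

values = [f(10.0 ** k) for k in range(2, 7)]
# The values increase monotonically toward the limit 3:
assert all(v < w for v, w in zip(values, values[1:]))
assert abs(values[-1] - 3.0) < 1e-6
```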
Let $(E,D)$ be a probabilistic encryption scheme with $n$-length keys (given a key $k$, we denote the corresponding encryption function by $E_k$) and $n+10$-length messages. Then, show that there exist two messages $x_0, x_1 \in \{0,1\}^{n+10}$ and a function $A$ such that
$$\mathrm{Pr}_{b \in \{0,1\}, k \in \{0,1\}^n}[A(E_k(x_b)) = b ] \geq \frac{9}{10}.$$
(This is problem 9.4 from Arora/Barak Computational Complexity.)
My gut intuition says that the same idea from the proof in the deterministic case should carry over. WLOG let $x_0 = 0^{n+10}$, and denote by $S$ the support of $E_{U_n}(0^{n+10})$. We will take $A$ to output $0$ if the input is in $S$. Then, assuming the condition stated in the problem fails to hold for all $x \in \{0,1\}^{n+10}$, we conclude that $\mathrm{Pr}[E_{U_n}(x) \in S] \geq 2/10$ for all $x$. This implies that there exists some key $k$ for which $E_k$ maps at least a $2/10$ fraction of the messages $x$ into $S$ (the analogue of this statement in the deterministic case suffices to derive a contradiction), but now I don't really see how to continue. Is my choice of $A$ here correct, or should I be using a different approach? |
My favorite connection in mathematics (and an interesting application to physics) is a simple corollary from Hodge's decomposition theorem, which states:
On a (compact and smooth) Riemannian manifold $M$ with its Hodge–de Rham–Laplace operator $\Delta$, the space of $p$-forms $\Omega^p$ can be written as the orthogonal sum (relative to the $L^2$ product) $$\Omega^p = \Delta \Omega^p \oplus \cal H^p = d \Omega^{p-1} \oplus \delta \Omega^{p+1} \oplus \cal H^p,$$ where $\cal H^p$ denotes the harmonic $p$-forms and $\delta$ is the adjoint of the exterior derivative $d$ (i.e. $\delta = \text{(some sign)} \star d\star$, with $\star$ the Hodge star operator). (The theorem follows from the fact that $\Delta$ is a self-adjoint, elliptic differential operator of second order, and so it is Fredholm with index $0$.)
From this it is now easy to prove that every nontrivial de Rham cohomology class $[\omega] \in H^p$ has a unique harmonic representative $\gamma \in \cal H^p$ with $[\omega] = [\gamma]$. Please note the equivalence $$\Delta \gamma = 0 \Leftrightarrow d \gamma = 0 \wedge \delta \gamma = 0.$$
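For completeness, the direction "$\Delta \gamma = 0 \Rightarrow d\gamma = 0 \wedge \delta\gamma = 0$" is a one-line integration-by-parts argument on a compact manifold (a standard computation, added here as a reminder):

```latex
\langle \Delta\gamma, \gamma \rangle
  = \langle (d\delta + \delta d)\,\gamma, \gamma \rangle
  = \langle \delta\gamma, \delta\gamma \rangle + \langle d\gamma, d\gamma \rangle
  = \|\delta\gamma\|^{2} + \|d\gamma\|^{2}
```

so $\Delta\gamma = 0$ forces both $d\gamma = 0$ and $\delta\gamma = 0$; the converse is immediate from $\Delta = d\delta + \delta d$.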
Besides that this statement implies easy proofs for Poincaré duality and what not, it motivates an interesting viewpoint on electro-dynamics:
Please be aware that from now on we consider the Lorentzian manifold $M = \mathbb{R}^4$ equipped with the Minkowski metric (so $M$ is neither compact nor Riemannian!). We are going to interpret $\mathbb{R}^4 = \mathbb{R} \times \mathbb{R}^3$ as a foliation of spacelike slices and the first coordinate as a time function $t$. So every point $(t,p)$ is a position $p$ in space $\mathbb{R}^3$ at the time $t \in \mathbb{R}$. Consider the lifeline (worldline) $L \simeq \mathbb{R}$ of an electron in spacetime. Because the electron occupies a position which can't be occupied by anything else, we can remove $L$ from the spacetime $M$.
Though the theorem of Hodge does not hold for Lorentzian manifolds in general, it holds for $M \setminus L \simeq \mathbb{R}^4 \setminus \mathbb{R}$. The only non-vanishing cohomology space is $H^2$, with dimension $1$ (this statement has nothing to do with the metric on this space, it's pure topology - we just cut out the lifeline of the electron!). And there is a harmonic generator $F \in \Omega^2$ of $H^2$, which solves $$\Delta F = 0 \Leftrightarrow dF = 0 \wedge \delta F = 0.$$ But we can write every $2$-form $F$ as a unique decomposition $$F = E + B \wedge dt.$$ If we interpret $E$ as the classical electric field and $B$ as the magnetic field, then $dF = 0$ is equivalent to the first two Maxwell equations and $\delta F = 0$ to the last two.
So cutting out the lifeline of an electron gives you automagically the electro-magnetic field of the electron as a generator of the non-vanishing cohomology class. |
Here at the Flying Colours Maths Blog, we're never afraid to answer the questions on everyone's lips - such as, why is $\left(1 + 9^{-4^{7\times 6}}\right)^{3^{2^{85}}}$ practically the same as $e$? When I say ‘practically the same’, I mean… well. 20-odd decimal places of $\pi$ are enough to get the…
Gale surveyed the destruction with a face somewhere between disgust and admiration. Tunnock’s Caramel Wafer wrappers strewn across the room. A smell of haggis in the air. Bottles of whisky, half-drunk. Constable Beveridge… well, you wouldn’t say half-drunk. “You were up watching the referendum results last night, weren’t you?” Beveridge…
The Mathematical Ninja didn't bother with a warning. The Mathematical Ninja didn't even do that impressive whirry thing he does with a sword in each hand. No. The Mathematical Ninja conjured up a pistol and pulled the trigger - BANG! It was a blank, of course, but the student wasn't…
Every few weeks, this bit of motivational excrement does the rounds on twitter (I saw it here, but it comes around from all sorts of sources). For a start, what's with the per cents? Per cents of what? You can't just take a number that's close to a hundred and…
A reader asks: There are some confusing questions in my maths textbooks. There is one question asking me to estimate the answers to the following maths problems but it doesn't say whether we need to round the numbers to decimal places, or significant figures. So, I'd like to ask for…
An interview special, featuring our favourite Abel Prize nominee, @samuel_hansen! Sam is the brain behind Relatively Prime - which I consider some of the greatest maths radio journalism ever made - and respectfully requests your donations towards it. It's the only thing he's ever done respectfully, so pay attention. You can…
Let’s suppose, for the moment, you’re interested in the function $f(x) = \frac{\pi\sin(x)}{x}$. It’s a perfectly respectable function, defined everywhere except for $x = 0$, where the bottom is 0. The top is also zero there (because $\sin(0) = 0$), so its value is, strictly speaking, indeterminate - $\frac{0}{0}$ could,…
Oh, for heaven's sake. The Standards & Testing Agency has just released their new sample materials for Key Stage 2 (upper primary). Among other things in Mr Gove's poisonous legacy is the insistence on everyone using formal methods of arithmetic, whether appropriate or not. The examples given in the test are:…
Ours is not to reason why; just invert and multiply. - Anonymous Rule number one of Fractions club is: do NOT let the Mathematical Ninja hear you talking like that, otherwise you’re not going to have ears to hear rule number two. I mean - that is a way to… |
This question already has an answer here:
Iteration can replace Recursion? (5 answers)
Is it possible to solve every problem (solvable with a Turing machine) using only recursion? If yes, which principles or theories guarantee this? Thanks
A set $A$ is computable (like in Turing machines) iff its characteristic function $$\chi_A(x) = \begin{cases}1, & x \in A\\ 0, & x \notin A\end{cases}$$ is recursive. The class of recursive functions (sometimes referred to as $\mu$-recursive functions) is the smallest class that contains all constant functions, successor function, projection, and is closed under substitution, primitive recursion and minimization.
So yes, every computable function is recursive with respect to the definition above.
Can you define what it means to solve a problem "only" with recursion?
Would your definition allow an algorithm like this:
int foo(int n) {
    if (n == 1) return 1;
    while (someCondition) {
        // do something
        foo(n - 1);
    }
}
If your definition would allow this, then it would be easy to transform every algorithm into such a recursive form (you could do all your calculations in the outermost function call and do nothing in the recursive calls). |
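To make the standard loop-to-recursion transformation concrete, here is a small sketch (mine, not the answerer's): any while loop can be rewritten as a recursive helper that carries the loop state in its arguments.

```python
# Iterative version: sum 1 + 2 + ... + n with a while loop.
def sum_to_iter(n):
    total = 0
    while n > 0:
        total += n
        n -= 1
    return total

# Same computation with the loop replaced by recursion: the loop state
# (n, total) becomes the arguments of a recursive call.
def sum_to_rec(n, total=0):
    if n <= 0:                            # loop condition false: stop
        return total
    return sum_to_rec(n - 1, total + n)   # one loop iteration per call

assert sum_to_iter(100) == sum_to_rec(100) == 5050
```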
There is a notation for an element to equal a specific value using indicator functions. You could thus define the proportion equal to $x$ as follows: $$p_x=\frac{1}{N} \sum_{s \,\in\, S}I(s=x)$$ Here $N$ is the number of elements in $S$ and $I$ is the indicator function that equals $1$ when $s=x$ and zero otherwise. There should be no problem using your ...
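In code the same formula is a one-liner; the following sketch (with an illustrative multiset of my own choosing) spells out the indicator explicitly:

```python
# Proportion of elements of S equal to x, written with an explicit indicator.
S = [1, 2, 2, 3, 2, 4]   # illustrative multiset
x = 2

N = len(S)
p_x = sum(1 if s == x else 0 for s in S) / N   # (1/N) * sum of I(s == x)

assert p_x == 3 / 6
```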
Short answer: the solution provided is wrong. Linearity should not be characterized by "the value of power to which $x$ is raised". A precise definition depends on the context - elementary school, algebra I, linear algebra in college. Your argument about lines in the plane is the right one for grade 6. With any reasonable definition, the sum of linear ...
Yes, $\mathbb{R^n}$ with the $p$-norm is a special $L^p$ space with the counting measure on $\{1,...,n\}$: Let $x \in \mathbb{R}^n$. Then we have that $$\int |x|^p \mathrm{d}\mu=\sum_{i=1}^n |x(i)|^p$$ because we can identify every element of $\mathbb{R}^n$ as a function from $\{1,...,n\}$ to $\mathbb{R}$.
You calculate the expected value of a random variable, so the domain of the expectation operator is the set of random variables. These are measurable functions on the probability space. The codomain is the set where the random variable takes its values. Your example concerns vector-valued random variables, so the expectation will be a vector.
$n!!$ is known as a "double factorial," or "semifactorial," and it typically represents the product of all positive integers less than or equal to $n$ with the same parity (odd or even) as $n$. For example, $8!! = 2 \cdot 4 \cdot 6 \cdot 8$ and $7!! = 1 \cdot 3 \cdot 5 \cdot 7$.
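A direct implementation of this definition (a sketch of my own, stepping down by 2 from $n$):

```python
# Double factorial: product of the positive integers <= n that share n's parity.
def double_factorial(n):
    result = 1
    for k in range(n, 0, -2):
        result *= k
    return result

assert double_factorial(8) == 2 * 4 * 6 * 8 == 384
assert double_factorial(7) == 1 * 3 * 5 * 7 == 105
```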
While there is no ''standard'' notation, as you describe, that is analogous to the sigma and pi notations, you may use Knuth's up-arrow notation instead. This notation uses $\uparrow$ to denote exponentiation, $\uparrow\uparrow$ to denote tetration, $\uparrow\uparrow\uparrow$ to denote pentation, and so forth. For instance, $${\displaystyle 2\uparrow 4=2\...
A useful resource for "speaking" mathematics is The Handbook for Spoken Mathematics by Lawrence A. Chang. This book suggests that we should use a description of the font or script, plus the name of the letter (see pages 3–5). Hence $\mathcal{E}$ might be read as "calligraphic capital $E$". In most contexts, this would likely be overkill and require ...
A common error committed by novice students is to confuse the name of a function with the values taken by that function. When we write $$ f : \mathbb{R}\to \mathbb{R} : x \mapsto x^2, $$ then: $f$ is the function itself. At a very basic level, $f$ is a set; specifically, a subset of the Cartesian product $\mathbb{R}\times\mathbb{R}$. Even more ...
I think the most straightforward way to regard it is just as a function composition, which is formally defined as $(f \circ g)(x) = f(g(x))$ (taking note of well-definedness in the respective domains). So in your specific example, let $g : \mathbb{R} \to \mathbb{R} : x \mapsto x - 1$; then $f(x - 1) = (f \circ g)(x)$. And yes, to add to your additional ...
Set $z:=e^{i\theta}$ for $0\leqslant\theta\leqslant 2\pi$; then your integral around the contour you describe is equivalent to \begin{align}\int_{|z|=1}|z-1||\,dz|=\int_0^{2\pi}|e^{i\theta}-1||ie^{i\theta}\,d\theta|&=\int^{2\pi}_0\sqrt{(1-\cos\theta)^2+\sin^2\theta}\,d\theta\\&=\int^{2\pi}_0\sqrt{2-2\cos\theta}\,d\theta\end{align} Using double angle ...
Parametrize with the usual $z=e^{it}$: $$\int_0^{2\pi} |e^{it}-1||ie^{it}\,dt| = \int_0^{2\pi} \sqrt{(1-\cos t)^2+\sin^2 t} \hspace{4 pt}dt$$ From here simplify and use trig identities. (Hint: if you get $0$ you did something wrong, perhaps related to the simplification of the square root. The answer should be $8$.)
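The value $8$ is easy to confirm numerically; the sketch below (my addition) applies a midpoint Riemann sum to $\int_0^{2\pi}\sqrt{2-2\cos t}\,dt$:

```python
import math

# Midpoint Riemann sum for the integral over [0, 2*pi] of sqrt(2 - 2 cos t),
# which should come out to 8.
n = 20000
h = 2.0 * math.pi / n
total = sum(math.sqrt(2.0 - 2.0 * math.cos((k + 0.5) * h)) * h
            for k in range(n))
assert abs(total - 8.0) < 1e-6
```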
No, there's no such thing as a point infinitely close to another point. I've never seen the notation $\int_{a^+}^{b^-}$. If I did see that notation I can't imagine what it would mean other than $\int_a^b$. The notation $a^{\pm}$ comes up in limits, not integrals! $\lim_{x\to a^+}f(x)$ means the right-hand limit of $f$ at $a$. The notation can be ...
It's probably because the notation could make it clear that the function is $f^2=f\cdot f$. That is, if $g=f^2=f\cdot f,$ then we'd write $g(x)=f^2(x).$ Writing $f(x)^2$ kind of hides the functional relationship. Although, as noted in the comments, writing $f^2(x)$ could be confused with composition. It's probably best to clarify what you mean when you write ...
If $\Gamma$ is normal in $SL_2(\Bbb{Z})$, then for a fixed $\delta\in SL_2(\Bbb{Z})$ $$ \delta\Gamma=\Gamma \delta = \bigcup_l P g_l \qquad \text{(disjoint union)} $$ where $P = \Gamma \cap \langle p \rangle=\langle p^N \rangle$ with $p(z)=z+N$. $$E_{\Gamma,\delta,k}(z)=\sum_l (Pg_l)'(z)^{k/2} \qquad \text{is well-defined:}\ \ (p^m g_l)'(z)=g_l'(z)$$ For all $\beta \in \...
This is the collection of sets $$\{n+p\mathbb Z\mid n\in\mathbb Z\}$$ which forms a group $(G,+_G)$ with the group operation “$+_G$” defined by $$(n+p\mathbb Z)+_G (m+p\mathbb Z)=(n+m)+p\mathbb Z$$ Recall that $$p\mathbb Z=\{\ldots, -2p,-p,0,p,2p,\ldots\}$$ and that $$n+p\mathbb Z=\{\ldots, n-2p,n-p,n,n+p,n+2p,\ldots\}$$ For a given positive integer $p$, ...
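A small sketch (my own, with the illustrative choice $p=5$) of working with these cosets through canonical representatives:

```python
# Working with the cosets n + pZ through canonical representatives n % p
# (p = 5 is an illustrative choice).
p = 5

def rep(n):
    # Canonical representative of the coset n + pZ, taken in {0, ..., p-1}.
    return n % p

# Two integers name the same coset iff their representatives agree:
assert rep(2) == rep(2 + 3 * p) == rep(2 - 10 * p) == 2

# The coset sum (n + pZ) +_G (m + pZ) = (n + m) + pZ is well defined:
# shifting n or m by multiples of p does not change the resulting coset.
n, m = 2, 4
assert rep(n + m) == rep((n + 7 * p) + (m - 3 * p)) == 1
```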
In addition to this answer, the notation $A^*$ comes to mind. I've seen this in a few contexts:In symbolic dynamics, an alphabet $A$ is a (finite) set. A word is a finite sequence in which each term comes from $A$. You will often see the notations$$ A^0 := \varnothing,\qquadA^n := \{ (a_1, a_2, \dotsc, a_n) : a_j \in A \}, \qquadA^* := \bigcup_{j=0}^{...
In more familiar terms, by the formula for the square of a sum, $$\frac1n\sum_{i=1}^n x_i^2-\left(\frac1n\sum_{i=1}^n x_i\right)^2=\frac1n\sum_{i=1}^n x_i^2-\frac1{n^2}\sum_{i=1}^n x_i^2-\frac2{n^2}\sum_{i=1}^n\sum_{j=i+1}^n x_ix_j.$$ The notation $S(x_1x_2)$ is questionable, because it doesn't clearly show that the summation indexes do not cover the ...
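The identity is easy to verify numerically; in the sketch below (with sample data of my own choosing) both sides agree to rounding error:

```python
# Check the square-of-a-sum identity on illustrative sample data.
x = [1.5, 2.0, -0.5, 3.0]
n = len(x)

sum_sq = sum(v * v for v in x)
cross = sum(x[i] * x[j] for i in range(n) for j in range(i + 1, n))

lhs = sum_sq / n - (sum(x) / n) ** 2
rhs = sum_sq / n - sum_sq / n**2 - 2.0 * cross / n**2

assert abs(lhs - rhs) < 1e-12
```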
For reflexive, it must contain all the pairs $xRx$, so your last example is correct. It can contain any other pairs desired, so $\{(1,1),(2,2),(3,3),(2,3)\}$ is reflexive as well.For symmetric, note that it says $aRb \implies bRa$. If $aRb$ is not true it does not impose a requirement. Both your examples are symmetric, as is the empty relation. ...
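The definitions are mechanical to check in code; this sketch (mine, using the example relations above on $X=\{1,2,3\}$) does exactly that:

```python
# Reflexivity and symmetry checks on the example relations, with X = {1, 2, 3}.
X = {1, 2, 3}

def is_reflexive(R):
    return all((x, x) in R for x in X)

def is_symmetric(R):
    return all((b, a) in R for (a, b) in R)

R1 = {(1, 1), (2, 2), (3, 3)}
R2 = {(1, 1), (2, 2), (3, 3), (2, 3)}   # reflexive, but (3, 2) is missing

assert is_reflexive(R1) and is_reflexive(R2)
assert is_symmetric(R1) and not is_symmetric(R2)
# The empty relation is vacuously symmetric but not reflexive on X:
assert is_symmetric(set()) and not is_reflexive(set())
```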
After having spent some time searching, I cannot find an obvious dupe target for this question. I am, frankly, quite surprised by this. However, it is not an unreasonable question, and I think it deserves to be answered on MSE. I am going to answer a slightly more general question: What does it mean to "extend a function"? Suppose that $\Omega \...
As explained by @kccu, using $e_{i,j}$ would be a rather poor choice. This often denotes the edges between vertices $i$ and $j$.I would use a capital "blackboard bold" P : $\mathbb{P}$, and hence $\mathbb{P}_{i,j}$
I post my attempt here. It would be great if someone could help me verify it ^o^ Since $G$ is a closed linear subspace of the Hilbert space $H$, $G$ is also a Hilbert space. By Riesz's theorem, there is a unique $x_G \in G$ such that $g(x) = \langle x_G,x \rangle$ for all $x \in G$ and that $\|g\| = \|x_G\|$. We define $f:H \to \mathbb R$ by $f(x) = \...
I think you grouped the words in exactly the wrong way. The statement is that you have a map that is linear and defined on the whole Hilbert space $H$ (not just $G$), but its norm stays the same - you can say it is preserved, i.e. $\|g\| = \|f\|$ - and $f\bigg|_G = g$.
Yes, that is exactly what this symbol means here, bidirectional assignment. In the flow of the statement of the theorem, one has to interpret the first use as declaring the map from $a$ to the remainder tuple, later comes the claim of the theorem that this map is bijective, justifying the double arrow.The only irritating part may be that the bidirectional ...
No, this is not possible. Whenever you use any symbol, it should be clearly stated what that symbol represents and it should refer to one object only. It might be fixed number, or it may be some variable, but it can't change meaning before you are done with whatever you are trying to do.If you want to write something similar to allow different values on ...
I don't think that it's a standard notation. Nevertheless one can see in some papers and books the notation $$\frac{\partial^{n+m}f(x,y)}{\partial^n x\:\partial^m y}\equiv f^{(n,m)}(x,y) $$ which is also used by WolframAlpha. From the context, this is consistent with $$ f^{(1,0)}(x,y) \equiv \frac{\partial f(x,y)}{\partial x}$$
Your problem is thinking about "the" subspace of $\mathbb{R}^3$ that is (isomorphic to) $\mathbb{R}$. There are infinitely many. You can single out the three coordinate axes as special since you are representing vectors as ordered triples, but the $x$-axis is no more special than the other two axes and does not have another common name. Even calling the $x$-...
It is indeed reasonable to use arbitrary sets as the index sets for matrices, rather than specifically sets of the form $\{ 1, 2, \ldots, n \}$.Note that even infinite sets are reasonable here, giving you infinite dimensional matrices! This introduces some extra complications that you have to deal with; e.g. matrix multiplication would involve an infinite ...
Short Answer: This is just repeated iterations of the Hockey-stick Identity. Quick Overview of the Hockey-stick Identity: Here's my attempt at a more approachable description: Notice that for each of the colors, the sum of all of the numbers in the diagonal is the last number on the bottom that goes off in the opposite direction. For instance, if we look ...
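The hockey-stick identity itself, $\sum_{i=r}^{n}\binom{i}{r}=\binom{n+1}{r+1}$, is cheap to verify exhaustively for small parameters (a sketch of my own, using `math.comb`, available in Python 3.8+):

```python
import math

# Hockey-stick identity: sum_{i=r}^{n} C(i, r) = C(n+1, r+1).
def hockey_stick(n, r):
    return sum(math.comb(i, r) for i in range(r, n + 1))

# Spot check one diagonal, then a small exhaustive sweep:
assert hockey_stick(8, 2) == math.comb(9, 3) == 84
assert all(hockey_stick(n, r) == math.comb(n + 1, r + 1)
           for n in range(12) for r in range(n + 1))
```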
As you say, $(i,j)\in A \times B$ refers to the elements, not to the indices, sets have no notion of indices. So if you want to be really strict and rigorous with your notation, you have to use ordered sets $A=\langle a_1, a_2, ... \rangle$ and $B=\langle b_1, b_2, ... \rangle$ (or some kind of similar construction). However, what does $c_{ij}$ means here? ...
Originally I skimmed through most of your question and mainly answered the question in bold. I have since reread the question and decided to give some more comments. Ordinal vs cardinal indices: You claim that the ordinal notation in indices of cardinals makes the computation unwieldy, however there is a very distinct reason for using ordinals and not ...
In algebraic geometry, a variety is defined to be the solution set of a system of polynomials over an algebraically closed field. More precisely, if $k$ is an algebraically closed field and $f_1,\dots,f_m$ are polynomials in $n$ variables with coefficients in $k$, then $$V(f_1,\dots,f_m) = \{(x_1,\dots,x_n) \in k^n \mid f_1(x_1,\dots x_n)=\cdots = f_m(x_1,\...
I am not aware of any special name for an operator or matrix of the form $\lambda I - T$ (or $cI - A$) in either linear algebra or functional analysis. However, some possibilities are an inverse of the resolvent, a pseudoinverse of the resolvent, or a pre-resolvent. All three of these are meant to capture the notion that $cI - A$ has a relation to the ...
It is the integration-by-parts formula, although demonstrated in a quite confusing way. Especially, it is unclear which function $\int$ is applied to. Using parentheses to emphasize the scope of $\int$ for each instance, we may instead write$$\int(fg) = \left(\int f\right) g - \int \left(\left( \int f \right) g'\right). $$It is still an unconventional ...
For me the braces $\{X_n\}$ would suggest that the order is irrelevant (and so we're not taking a limit, say), while brackets $(X_n)_{n \ge 1}$ is much more suggestive that the order matters. "Family" normally suggests a set (so no order). Just my 2 cts.
The answer is -1, please let me show You why.In binary there is one integer with one digit, two integers with two digits, four integers with three digits and so on, so there always are $2^(n-1)$ integers with n digits.So there must be $2^(0-1)$ = 1/2 integers with 0 digits, $2^(-1-1)$ = 1/4 integers with -1 digits and so on. 1/2 + 1/4 + 1/8 + ... add up ...
This is the indicator function: $$f(x)=1_{[-1,1]}(x)=\begin{cases}1&x\in[-1,1]\\0&x\not\in[-1,1]\end{cases}$$ To compute its Fourier transform is simple: you integrate this function over $(-\infty,\infty)$ as usual, but since the function is zero outside $[-1,1]$, the integral limits become $\int_{-1}^1$.
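Under the convention $\hat f(\omega)=\int f(x)e^{-i\omega x}\,dx$ (an assumption on my part - the answer does not fix a convention), the transform of this indicator is $2\sin(\omega)/\omega$, which the following numeric sketch confirms:

```python
import cmath
import math

# Midpoint Riemann sum for F(w) = integral over [-1, 1] of e^{-iwx} dx; the
# indicator vanishes elsewhere, so the range shrinks to [-1, 1].
def ft_indicator(w, n=20000):
    h = 2.0 / n
    return sum(cmath.exp(-1j * w * (-1.0 + (k + 0.5) * h)) * h
               for k in range(n))

# Compare against the closed form 2 sin(w) / w at a few sample frequencies:
for w in (0.5, 1.0, 3.0):
    assert abs(ft_indicator(w) - 2.0 * math.sin(w) / w) < 1e-6
```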
The line segment between $[0,B]$ taken $n$ times, i.e. for $n$ different real numbers. A subset of $\mathbb R^n$ where each real number is between $[0,B]$. So a function takes $n$ real numbers in (each between $0$ and $B$) and gives one value between $0$ and $B$ out. |
The evidence says no. The research I'm aware of all points to how giving students any overall data about their own performance is actively harmful to further learning. They learn considerably more from instruction about what to do differently in the absence of a grade or numerical score. To repeat: individual scores discourage learning! (Explanations for ...
The student changed something which was indeterminate ($\infty-\infty$) into something which was not ($\infty\cdot \infty$). How does that not merit a perfect score? Changing indeterminate expressions into determinate ones is, generally speaking, the point. If the professor had some other solution in mind, then they made a mistake. They should have chosen a ...
Your student should get full marks. In fact, I would say that even a more complicated example, like $$\int 2x\cos(x^2)\, dx = \sin(x^2) + C,$$ should be awarded full points as long as the student justifies this by differentiating $\sin(x^2)$. In fact, this solution demonstrates deeper understanding of the meaning of these symbols than the variable ...
Students are used to other people being the source of truth. Even in an algebra class, they will do something (incorrect, at least in the common context) like this: $(x + 3)^2 = x^2 + 9$, and then ask me if it is correct, or if it actually goes a different way. The implication is that I know the truth and they cannot know it without me. My goal then ...
If this is calc I, that deserves a 5/5. If this is analysis, it depends on what you taught them. Don't you set up a grading rubric ahead of time? What do the 5 point answers look like? What do other not-so-great answers look like?
I agree with the other post that you should give full credit unless there were clear directions saying what would and wouldn't be acceptable approaches.I think it would be very, very hard to convey to students why using the integral of tan, which they happen to know, is off limits, while using other integrals they know is allowed, other than as a detailed ...
Instead of arguing with other people's answers in the comments I thought it might be more productive to present my own point of view. I find myself completely unable to understand why anyone would take off points for this student's answer. Just to be clear, this isn't because I'm being somehow lax or generous as a grader. My opinion is that this is a ...
The part of this question that raises red flags for me is the line: "However, many times it is good to have a wide spread of grades." Why? This seems backward reasoning to me wherein you know the distribution of grades that you want to give and are figuring out how to design the test to fit that distribution. Your assessment should be criterion ...
One technique which is fairly obvious, but (at least for some of us) surprisingly difficult to implement consistently, is to just model for them in class what you expect them to write on their own. When I solve a problem in class, I try to show the same work and write the same explanations that I expect them to show. I also try to talk about it as I do it, ...
I'll try to make this answer a little more general than just telling how many points I would give for this particular error (if interested: I'd give 5/10 at most, most likely less).For that, let's discuss three different kinds of computation errors (i.e. not including logical errors, wrong proofs, etc.). The given error in your image falls in the third ...
You wrote:My problem is that I ... end up giving too good of grades at the beginning, and then face the task of either trying to lower their grades by being much tougher or not fulfilling departmental expectations.My advice is that, as a practical matter, things turn out much better when one is tougher in the beginning of a course, easing off if ...
Since you remark that your question is "deliberately non-specific," here is a (necessarily) incomplete response: First are two links to documents about assessment that might be of interest, and then two grading schemes that I have encountered in mathematics courses. Documents: As far as the philosophy of creating examinations, early work on this was done ...
A bit too long for a legit comment... I think a very important question in the background here is that of whether we want to teach students that "correct" math involves primarily adherence to somewhat-arbitrary, even if clear, rules set down by the teacher, or whether there is an underlying reality, and best-practices, etc. Some of the "rule" nonsense can be ...
Let me echo Benjamin's comment that any proactive step that you take should be done with the instructor's permission.At a practical level, I think there are ways to address issues (a) and (b). For (a), make a rubric (either in advance or a running one as you go) which lays out the criteria for awarding points. This allows you to be consistent with how ...
A question that occurs with a project like this (broader than one department, as you put it) would be: Who is qualified to make those assessments? Probably not any other department at a particular college, certainly -- the one department is, by definition, where all the experts in that subject work.To some degree this actually is done in places, in the ...
Make clear that the "make sure the proof is correct" is part of the work to be done in the homework. If it is my proof, or yours, or from <famous textbook> that is wrong, the answer is wrong.(Yes, need to emphasize that even the above cited authorities get it wrong sometimes).
In my grading scheme, any answer that is wrong, checkable, and not checked gets $1/4$ off the partial credit. (I give partial credit for work shown, depending on how much knowledge is shown, the pettiness of the mistake, and so on.)If the student gets a wrong answer, checks it and sees that it is wrong, and notes this in his answer or work, I do not remove ...
For multiple choice questions it is much better to ask in a slightly different way, namely: Are the following numbers even?

        yes  no
  1.)    O   O   17
  2.)    O   O   22
  3.)    O   O   33
  4.)    O   O   42
  5.)    O   O   57
  6.)    O   O   61
  7.)    O   O   49
  8.)    O   O   99
  9.)    O   O   13
 10.)    O   O   30

The advantage is that you explicitly open the ...
In France we barely have intro-to-proof courses, but we ask for proofs in other courses. Usually, each proof has little granularity in the grading, and I tend to avoid giving half the points, which sends a mixed signal, so most of the time small proofs get either zero, one-third, two-thirds or full credit. Basically I try to give one-third credit when the ...
Quoted from the first part of my answer on https://academia.stackexchange.com/questions/80898/should-a-student-be-penalized-for-using-a-theorem-outside-of-the-curriculum/: ... the point of an exam is to assess mastery of basic knowledge covered in the course. If one uses a more powerful outside theorem, then the steps that they've skipped likely ...
Calculus classes are taught at an 18th century standard of rigor, and analysis classes at a 20th century standard of rigor. It doesn't make much sense to try to invent some arbitrary combination of the two. So if you aren't expecting proofs written in sentences with all proofs eventually going back to epsilon-delta definitions, then you should accept ...
Why do so few professors assign extra credit? In my experience, the attitude towards extra credit is consistent throughout the department. Nearly every professor in the education department at my university puts extra credit questions on the test, but only a few in the math department do. After chatting with other students and professors, this seems to be ...
By recommending some book you implicitly acknowledge all of its contents (unless specified otherwise). In fact it would be similar if the student used a solution you presented, which happens to be wrong, but you only discovered the flaw when grading the exam. What would you do? First, I would not penalize the student for copying the solution from the book, ...
The answer seems ok to me, in that it shows that the student understands what the limiting behavior of $x - \sqrt{x}$ is as $x \to \infty$. If the questioner wants to see a formal justification of that, then the word "find" should be replaced by some more precise indication. The interpretive problem is that the exercise could be better posed. A better ...
Set up a grading schema, awarding points for grammar, spelling, and correct use of the technical terms. Central should be points for a correct, logical overall structure, and enough explanation to lead the reader from one step to the next. You can publish the grading schema for their guidance, and perhaps write up an example of what their work should look like. ...
There's a compromise between "correctness" and "completion" called "Standards-based grading". Here's a few links about it with various people who have tried using it for Calculus: http://blogs.cofc.edu/owensks/2014/01/08/sbg-calculus2/, http://alwaysformative.blogspot.com/p/standards-based-grading-implementation.html, http://speced.fivetowns.net/lcs/content/...
Students tend to only pay attention to feedback that affects their grade. In your example, where the student calculates the height of the door to be 0.0001 inches, showing no sign of realizing that the answer is impossible, I would give a zero on the problem. If the student writes, "Hmm...obviously this is wrong, but I've run out of time to track down the ...
There is at least one way to make this question more reconcilable with reality, if the implicit question is partly about how to make the students less stressed: by more clearly delineating what their grade(s) will be at a given level of effort. (On the whole, it's a fine impulse to want to implement this, I think.) However, I very strongly suspect that it ...
This is an opinion, but perhaps worth sharing: When I assign work that I expect students (particularly lower division students, such as those in calculus or precalculus classes) to do on their own time, I assume that they are going to collaborate, ask the Google, and otherwise use all of the resources that are available to them. Demanding that they do the work ...
I am studying Peskin and Schroeder's textbook of
quantum field theory.
I have proceeded to Ward-Takahashi identity and have one question.
Eq. (7.66) and Eq. (7.67) are the two cases involved. Then the textbook proceeds to discuss the Ward-Takahashi identity by summing all possibilities for different Feynman diagrams (with the external photon line stripped) as well as different ways of inserting the photon, where $M^{\mu}$ is the correlation function for $n$ incoming electrons and $n$ outgoing electrons. $$k_{\mu}M^{\mu}(k;p_1...p_n;q_1...q_n)=-e\sum_i[M_0(k;p_1...p_n;q_1...(q_i-k)...q_n)-M_0(k;p_1...(p_i+k)...p_n;q_1...q_n)].$$
It is understood that if $M^{\mu}$ has all its external electrons on-shell, then the amplitudes on the right-hand side of this identity each have one external particle off-shell, and therefore they do not contribute to S-matrix elements. As a result, the corresponding S-matrix element is zero if all the external electrons on the left-hand side are on shell.
I understood that the discussions related to Eq.(7.66) only involve the singularity factor due to the on-shell external electron lines, and therefore applies to any individual Feynman diagram which possesses such external electron line structure (while enumerating all different ways of inserting the additional photon line). On the other hand, Eq.(7.67) gives zero by itself.
However, the textbook mentioned on P.238:
The identity is generally not true for individual Feynman diagrams; we must sum over the diagrams for $M(k)$ at any given order.
My questions are: (1) Why does one still need the summation over all possible Feynman diagrams? It seems an individual Feynman diagram, with all possible ways of inserting photon lines, does the trick. (2) On p. 242, in the first plot, the incoming and outgoing fermion lines are paired ($p_i$ to $q_i$). Why does one not consider the case where $p_i$ is paired with $q_j$ ($i\ne j$)? It seems to me that this also contributes to the same physical process. Many thanks for any answer or comment!
PS: This question is a follow-up of a question by
Brioschi. |
We’ve discussed before how powerful isomorphisms can be, when we find them. Finding isomorphisms “from scratch” can be quite a challenge. Thankfully, Arthur Cayley proved one of the classic theorems of modern algebra that can help make our lives a bit easier. We’ll explore this theorem and note the power of groups of permutations.
For the introduction to isomorphisms, check out this post. Cayley’s powerful theorem states the following:
Theorem (Cayley): Every group is isomorphic to a group of permutations.
First, I’d like to highlight the beauty and simplicity of the statement. One sentence, no conditions.
Every group. Not every finite group, not every commutative group. Every. Single. Group. That’s an extremely strong statement. Every mathematician hopes to write and prove a theorem this strong; I know I would die happy if I managed to do it.
For this article, I actually want to spend time going through the proof of Cayley’s Theorem. It will be useful when we want to develop the notion of regular representations later.
Permutations: what exactly are they?
We’ve discussed permutations before, but hadn’t formally defined them. We will here. Permutations are generally viewed as rearrangements of objects (typically integers). In other words, the permutation \begin{pmatrix}1&2&3\\2&3&1\end{pmatrix} mixes up the numbers 1,2,3 and gives 2,3,1. Each number gets used, and mapped to a different number. 1 maps to 2, 2 maps to 3, and 3 maps to 1. In other words, if we started with the set A=\{1,2,3\}, a permutation maps the elements of A to other elements in A.
Permutations are also reversible. We can always take any permutation and reverse it to get our original order back. Formally, we then define a permutation of a set A as a bijective function (or mapping) from the set A to itself. The mapping is one-to-one, meaning that if two input values are different, then their function values are different. It’s also surjective (also called onto), meaning that every element in A has something that maps to it.
Since we aren’t moving from one set to another, but rather staying in the same set, this is just a fancy way of saying that a permutation maps every element in our given set to only one other element in the given set, and that all set elements are used up. No one gets left out or not mapped somewhere (even if it maps to itself).
We’ll need this more generic definition of a permutation as a function or mapping in the proof of Cayley’s Theorem.
Proof of Cayley’s Theorem
To prove this, we take some generic group G.[1]
OK, we have our group G. Now we have to show it’s isomorphic to a group of permutations. What group of permutations on what set? This proof is constructive, so we will construct a group of permutations and show that this constructed group is isomorphic to G. In other words, we’re going to
(1) Construct the group of permutations that G will be isomorphic to, then
(2) Construct the isomorphism from G to the group of permutations we constructed in (1).
The only set we have to work with is G itself, so let’s see what we can do with this.
(1) Construct the group of permutations
Let’s grab a random element in G and call it a. Now, we’ll define a function we’ll call \pi_{a}:G\to G that maps[2] elements of G to other elements of G by \pi_{a}(x) = ax. For any element x\in G, the function \pi_{a} just multiplies a on the left[3] of x.
To show this function \pi_{a} is a permutation according to our above definition, we need to show that the function is bijective and maps stuff in G to stuff in G. We already have that \pi_{a} maps stuff in G to other stuff in G, because we defined it that way. That problem is solved. Now we’ll show \pi_{a} is a bijective function by showing it’s injective (one-to-one) and surjective (onto):
(1) Injectivity of \pi_{a}: To show injectivity, we need to show that if \pi_{a}(x_{1}) = \pi_{a}(x_{2}), then x_{1} = x_{2}. In other words, we’re showing that two different elements cannot map to the same thing.
Now, in our specific case, if \pi_{a}(x_{1}) = \pi_{a}(x_{2}), then ax_{1} = ax_{2}. But we’re allowed to cancel those a’s by multiplying by the inverse of a on the left on both sides (just like in your high school algebra class)[4], leaving x_{1} = x_{2}. So our \pi_{a} is absolutely injective.
(2) Surjectivity of \pi_{a}: To show surjectivity, we take some y\in G, and we need to find the element x\in G that maps to this y. (This is formally known as finding the inverse image of y.) We know that y = ey, where e is our identity element[5]. Now, any element multiplied by its inverse (in any order) gives us the identity element (because that’s the definition of an inverse), so let’s grab a and its inverse a^{-1}: y = (aa^{-1})y = a(a^{-1}y). By using the associativity laws we get for free with the knowledge that G is a group, we can multiply a^{-1}y first, then multiply that result by a. But the element a^{-1}y yields y when multiplied by a, so we found our inverse image of y. Since y was generic, this reasoning is true for all elements of G, so we have that \pi_{a} is surjective.
With all that, we now can say that \pi_{a} is a permutation, because it fits the definition.
More Permutations!
The \pi_{a} was just a particular mapping defined by multiplying on the left by a fixed element a. We can define one of these permutations for every other element in G as well. So \pi_{b} multiplies stuff on the left by whatever b is, and so forth.
Let H = \{\pi_{n}:n\in G\} denote the set of all the permutations of G that involve multiplying on the left by some n \in G. Obviously, there are more possible permutations of the elements of G than what’s in this set H. The set of all possible permutations of a group (denoted S_{G}) is itself a group[6], so if we just show that H is a subgroup (a subset of a group that is itself a group), then we’ve built a group of permutations from the group G.
To show that H is a subgroup of S_{G}, we have to show that the composition of two permutations in H is also an element of H, and that the inverse of every permutation in H is also in H. We’ll skip the formal proof here, since it doesn’t really add much to the discussion, other than to formally verify that H is a group and indeed a possible candidate for a group that our G could be isomorphic to.
(2) Construct the isomorphism from G to H
With all the work we spent constructing H, we sure hope that we constructed something for G to be isomorphic to. To show these two groups are isomorphic, we need to construct a bijective function that maps elements of G to elements of H and preserves the group operation.
(A note: this isomorphism f:G\to H takes elements in G and maps them to a function in H. We’re allowed to have functions that output functions.)
The most obvious mapping is to take an element a\in G and map it to its permutation \pi_{a} \in H. In other words, f(a) = \pi_{a}.
Our f maps an element a to a function that says “multiply on the left by a”. So we defined a function whose output is itself a function with the rule “multiply on the left by the input element”.
We already discussed how to verify that a function f is an isomorphism in the last post, so we won’t go through the details in this case. It’s quite straightforward, and doesn’t serve to illuminate anything further. Try it yourself as a short exercise to finish the formal proof.
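To see the whole construction at once, here is a small sketch (my own toy example, not from the post) carrying out the proof for the cyclic group Z_4 under addition mod 4; the names pi, op, and compose are mine:

```python
# Cayley's theorem made concrete for G = Z_4 = {0,1,2,3} under addition mod 4.
# Each pi_a is "multiply (here: add) a on the left", stored as the tuple
# (pi_a(0), pi_a(1), pi_a(2), pi_a(3)).

G = range(4)
op = lambda a, b: (a + b) % 4                  # the group operation

def pi(a):
    return tuple(op(a, x) for x in G)          # the permutation x -> a*x

H = {a: pi(a) for a in G}                      # H = {pi_n : n in G}

# Each pi_a really is a permutation of G (a bijection from G to G):
assert all(sorted(H[a]) == list(G) for a in G)

# f(a) = pi_a is injective, so f : G -> H is a bijection:
assert len(set(H.values())) == len(G)

# f is a homomorphism: pi_{ab} equals pi_a composed with pi_b.
compose = lambda p, q: tuple(p[q[x]] for x in G)
assert all(H[op(a, b)] == compose(H[a], H[b]) for a in G for b in G)

print("Z_4 is isomorphic to the group of permutations {pi_a : a in Z_4}")
```

The same three assertions are exactly the injectivity, surjectivity, and homomorphism checks from the proof, just run on a finite example.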
Conclusion
We took a group G and constructed another group H that consists of permutations of G and then showed those two groups were isomorphic. Since G was generic in every way, and H was constructed from that generic G, we just proved Cayley’s Theorem–that every group is isomorphic to some group of permutations.
What’s this for? This seemed so utterly abstract as to be simply a curiosity (albeit a powerful one). We can use Cayley’s Theorem to develop a notion called the
regular representation of a group, which in turn has applications in quantum chemistry and physics, particularly in the study of symmetries of a molecule, which in turn controls its vibrational spectrum. This molecular vibrational spectrum can be determined by IR spectroscopy, which then allows a chemist to determine the symmetries of a molecule.
Groups also tend to arise in applications as
actions on other things (in fact, a permutation is an action of sorts), so representations and their theory help us understand how these groups and their actions affect anything from differential equations to molecules.
(This article details some of these applications in more detail, though the level is quite advanced.)
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Footnotes

1. This is important. We have to take a generic group with no specifications at all, or we aren’t really proving the theorem. I don’t know what’s in this group, what its operation is, or even if the group is finite, countably infinite, or uncountable.
2. That’s what the arrow means in the notation. The function maps elements of G to elements in the target space G. It just so happens our target space is the same as our “input space”, or domain, so we don’t actually leave G.
3. The order of multiplication does matter here. We are not guaranteed commutativity, so we have to be very specific about which side we multiply a on.
4. In high school algebra, we’d say “divide by a”. Division is really just “multiplying by a multiplicative inverse”.
5. Since we’re dealing with a generic group, we use the letter e to denote the identity element under the general operation. In addition on the real numbers, e=0. In multiplication of integers, e=1. But here, we don’t even know what’s in this group, so we have to just give it a generic name.
6. We’ll skip the proof. It’s just a straightforward verification of the definition of a group.
Prologue: The big $O$ notation is a classic example of the power and ambiguity of some notations, as part of language, loved by the human mind. No matter how much confusion it has caused, it remains the notation of choice to convey ideas that we can easily identify and agree on efficiently.
I totally understand what big $O$ notation means. My issue is when we say $T(n)=O(f(n))$ , where $T(n)$ is running time of an algorithm on input of size $n$.
Sorry, but you do not have an issue if you understand the meaning of big $O$ notation.
I understand its semantics. But $T(n)$ and $O(f(n))$ are two different things. $T(n)$ is an exact number, but $O(f(n))$ is not a function that spits out a number, so technically we can't say $T(n)$ equals $O(f(n))$. If one asks you what's the value of $O(f(n))$, what would be your answer? There is no answer.
What is important is the semantics. What is important is (how) people can agree easily on (one of) its precise interpretations that will describe the asymptotic behavior of the time or space complexity we are interested in. The default precise interpretation/definition of $T(n)=O(f(n))$ is, as translated from Wikipedia,
$T$ is a real or complex valued function and $f$ is a real valued function, both defined on some unbounded subset of the real positive numbers, such that $f(n)$ is strictly positive for all large enough values of $n$. For all sufficiently large values of $n$, the absolute value of $T(n)$ is at most a positive constant multiple of $f(n)$. That is, there exists a positive real number $M$ and a real number $n_0$ such that
$$|T(n)|\leq M f(n) \quad \text{ for all } n\geq n_{0}.$$
Please note this interpretation is considered
the definition. All other interpretations and understandings, which may help you greatly in various ways, are secondary and corollary. Everyone (well, at least every answerer here) agrees to this interpretation/definition/semantics. As long as you can apply this interpretation, you are probably good most of the time. Relax and be comfortable. You do not want to think too much, just as you do not think too much about some of the irregularities of English or French or most natural languages. Just use the notation by that definition.
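To make the definition concrete, here is a small sketch; the functions $T$, $f$ and the witnesses $M$, $n_0$ below are my own illustrative choices, not taken from the text:

```python
# Numerical sanity check of the definition of T(n) = O(f(n)):
# exhibit one concrete pair (M, n0) and verify |T(n)| <= M*f(n) for n >= n0.

def T(n):
    return 3 * n**2 + 10 * n + 7   # a "running time" T(n)

def f(n):
    return n**2                    # claimed asymptotic bound: T(n) = O(n^2)

M, n0 = 4, 11                      # witnesses; any valid pair would do

assert all(abs(T(n)) <= M * f(n) for n in range(n0, 10_000))
print(f"T(n) = O(f(n)) holds on the tested range with M = {M}, n0 = {n0}")
```

Of course a finite check is not a proof, but it shows exactly what role $M$ and $n_0$ play in the definition.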
$T(n)$ is an exact number, but $O(f(n))$ is not a function that spits out a number, so technically we can't say $T(n)$ equals $O(f(n))$. If one asks you what's the value of $O(f(n))$, what would be your answer? There is no answer.
Indeed, there could be no answer, since the question is ill-posed. $T(n)$ does not mean an exact number. It is meant to stand for a function whose name is $T$ and whose formal parameter is $n$ (which is sort of bound to the $n$ in $f(n)$). It is just as correct, and even more so, if we write $T=O(f)$. If $T$ is the function that maps $n$ to $n^2$ and $f$ is the function that maps $n$ to $n^3$, it is also conventional to write $T(n)=O(n^3)$ or $n^2=O(n^3)$. Please also note that the definition does not say whether $O$ is a function or not. It does not say the left hand side is supposed to be equal to the right hand side at all! You are right to suspect that the equal sign does not mean equality in its ordinary sense, where you can switch both sides of the equality and it should be backed by an equivalence relation. (Another even more famous example of abuse of the equal sign is the usage of the equal sign to mean assignment in most programming languages, instead of the more cumbersome
:= as in some languages.)
If we are only concerned about that one equality (I am starting to abuse language as well. It
is not an equality; however, it is an equality since there is an equal sign in the notation or it could be construed as some kind of equality), $T(n)=O(f(n))$, this answer is done.
However, the question actually goes on. What does it mean by, for example, $f(n)=3n+O(\log n)$? This equality is not covered by the definition above. We would like to introduce another convention,
the placeholder convention. Here is the full statement of placeholder convention as stated in Wikipedia.
In more complicated usage, $O(\cdots)$ can appear in different places in an equation, even several times on each side. For example, the following are true for $n\to \infty$.
$(n+1)^{2}=n^{2}+O(n)$
$(n+O(n^{1/2}))(n+O(\log n))^{2}=n^{3}+O(n^{5/2})$
$n^{O(1)}=O(e^{n})$
The meaning of such statements is as follows: for any functions which satisfy each $O(\cdots)$ on the left side, there are some functions satisfying each $O(\cdots)$ on the right side, such that substituting all these functions into the equation makes the two sides equal. For example, the third equation above means: "For any function $f(n) = O(1)$, there is some function $g(n) = O(e^n)$ such that $n^{f(n)} = g(n)$."
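For instance, the first example above, $(n+1)^{2}=n^{2}+O(n)$, asserts that the concrete function $(n+1)^2 - n^2$ satisfies the $O(n)$ on the right. A small sketch (the witnesses $M$, $n_0$ are my own choices):

```python
# (n+1)^2 = n^2 + O(n): the "O(n)" is a placeholder for the concrete
# function g(n) = (n+1)^2 - n^2 = 2n + 1, which is indeed O(n).

def g(n):
    return (n + 1)**2 - n**2       # = 2n + 1

M, n0 = 3, 1                       # 2n + 1 <= 3n whenever n >= 1
assert all(g(n) <= M * n for n in range(n0, 10_000))
print("g(n) = 2n + 1 satisfies g(n) <= 3n for all tested n >= 1")
```

This is exactly the "substitute a concrete function for each $O(\cdots)$" reading described above.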
You may want to check here for another example of placeholder convention in action.
You might have noticed by now that I have not used the set-theoretic explanation of the big $O$-notation. All I have done is just to show even without that set-theoretic explanation such as "$O(f(n))$ is a set of functions", we can still understand big $O$-notation fully and perfectly. If you find that set-theoretic explanation useful, please go ahead anyway.
You can check the section in "asymptotic notation" of CLRS for a more detailed analysis and usage pattern for the family of notations for asymptotic behavior, such as big $\Theta$, $\Omega$, small $o$, small $\omega$, multivariable usage and more. The Wikipedia entry is also a pretty good reference.
Lastly, there is some inherent ambiguity/controversy with big $O$ notation with multiple variables; see 1 and 2. You might want to think twice when you are using those.
I am interested in isolating which NHL players shoot the puck well, and which NHL goaltenders do a good job at preventing shots from becoming goals. To that end I have fit a regression model which replicates some of the simple features of shooting and saving. Throughout this article, when I say "shot" I will mean "unblocked shot", that is, goals, saves, and misses (including shots that hit the post or the crossbar). Furthermore, when I talk of shooting talent, I mean the ability to score more than one would expect given the shot location, so a player may well take a lot of shots from great scoring locations and still be "a bad shooter" in some sense. Generating many such shots is obviously desirable and surely can be done more often by talented players, but I do not consider any such talents to be part of shooting talent, which is (half of) the subject of this article.
Throughout, I'll be using only 5v5 shots, since I think the hockey assumptions underlying the model are only valid for a single score state. However, one could presumably fit such a model (with perhaps slightly different tuning parameters) for 5v4 and even for 5v3, and then obtain aggregate estimates for players by combining their estimates from the various different models.
Once a shot is taken by a given player from a certain spot against a specific goaltender, I estimate the probability that the shot will be a goal. This process is modelled with a generalized ridge logistic regression; for a detailed exposition please see Section 3. Briefly: I use a design matrix in which every row is a shot with the following columns:
I make a slightly unusual modification to shot distances; namely, shots which are recorded as coming from closer than ten feet are assigned a distance of 10ft. This is to stop small variations in shot location from having outsize effects on the regression, and also because it is close to the threshold of minimum human reaction time for goaltenders given typical NHL wrist shot speeds.
The observation is 1 for goals and 0 for saves or misses. The model is fit by maximizing the likelihood of the model, that is, for a given model, form the product of the predicted probabilities for all of the events that did happen (90% chance of a save here times 15% of that goal there, etc.). Large products are awkward, so we solve the mathematically equivalent problem of maximizing the logarithm of the likelihood, and before we do so we add a term of the form \(-\beta^T\Lambda\beta\), where we use \(\Lambda\) to encode our prior knowledge, as described below.
Simple formulas for the \(\beta\) which maximizes this likelihood do not seem to exist, but we can still find it by iteratively computing: $$ \beta_{n+1} = ( X^TX + \Lambda )^{-1} X^T ( X \beta_n + Y - f(X,\beta_n) ) $$ where \(f(X,\beta)\) is the vector function whose entry at position i is \((1 + \exp(-X_i\beta))^{-1}\), where \(X_i\) is the i'th row of \(X\) (this choice of \(f\) is what makes the regression logistic). By starting with \(\beta_0\) as the zero vector and iterating until convergence, I obtain estimates of shooter ability and goaltending ability, with suitable modifications for shot location and type.
This model is zero-biased, which is to say that we consider deviations from average ability to be on-their-face unlikely and bias our results towards average. Another way of saying the same thing is to say that we are beginning with an assumption (of a certain strength) that all players are of league average ability and then letting the observed data slowly update our knowledge, instead of beginning with an assumption that we know nothing about the shooters and goaltenders at all. The bias is controlled by the matrix \(\Lambda\), which must be positive definite for the above formula to be the well-defined solution which makes \(\beta\) the one which minimizes the total error. As in my 5v5 shot rate model, I use a diagonal matrix, where the entries corresponding to goaltenders and shooters are \(\lambda = 100\) and those corresponding to all other columns are 0.001, that is, very close to zero. As for that model, the non-trivial \(\lambda\) values were chosen by varying \(\lambda\) and choosing a value where player estimates have stabilized.
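The iteration above can be sketched in NumPy on synthetic data; the design matrix, responses, and penalty values below are illustrative placeholders, not the article's actual shot data:

```python
import numpy as np

# Fixed-point iteration from the text:
#   beta_{n+1} = (X^T X + Lambda)^{-1} X^T (X beta_n + Y - f(X, beta_n))
# where f(X, beta)_i = 1 / (1 + exp(-X_i beta)).

rng = np.random.default_rng(0)
n_shots, n_cols = 500, 4
X = rng.normal(size=(n_shots, n_cols))                 # toy design matrix
true_beta = np.array([1.0, -0.5, 0.25, 0.0])
Y = (rng.random(n_shots) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

Lam = np.diag([0.001] * n_cols)                        # toy penalty matrix

beta = np.zeros(n_cols)                                # beta_0 = 0
A = np.linalg.inv(X.T @ X + Lam)                       # constant across iterations
for _ in range(1000):
    p = 1 / (1 + np.exp(-X @ beta))                    # f(X, beta)
    beta_new = A @ (X.T @ (X @ beta + Y - p))
    if np.max(np.abs(beta_new - beta)) < 1e-10:        # converged
        beta = beta_new
        break
    beta = beta_new

print("estimated beta:", np.round(beta, 2))
```

At the fixed point, \(\Lambda\beta = X^T(Y - f(X,\beta))\), i.e. the gradient of the penalized log-likelihood vanishes, which gives a convenient convergence check.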
In the future, I will publish results for all seasons, but for now, I record the results of fitting this model on all of the 5v5 shots in the 2016-2018 regular seasons. First, the non-player covariates are:
Covariate        Value
Constant         -2.55
Slapshot         +0.0836
Tip/Deflection   -0.222
Backhand         -0.175
Wraparound       -0.300
Rush             +0.228
Rebound          +0.754
Distance         -2.86
Visible Net      +1.15
Logistic regression coefficient values can be difficult to interpret, but negative values always mean "less likely to become a goal" and positive values mean "more likely to become a goal". To compute the probability that a shot with a given description will become a goal, add up all of the model covariates to obtain a number, and then apply the logistic function to it, that is, $$ x \mapsto \frac{1}{1 + \exp(-x)}$$ This function (after which the regression type is named) is very convenient for modelling probabilities, since it monotonically takes the midpoint of the number line (that is, zero) to 50% while taking large negative numbers to positive numbers close to zero and very large positive numbers to positive numbers close to one.
Thus, for instance, we might want to compute the goal probability of a wrist shot from 30 feet out (just below the tops of the circles), on the split line, neither on the rush nor a rebound. To do this, begin with the constant value -2.55. We have encoded distances by dividing by 89, so we multiply 30/89 times the distance coefficient of -2.86 to obtain -0.964. From the split line, the visible net is 1, so we add +1.15. Wrist and snap shots are taken as the base category, so no shot type term needs to be added. Since the shot is neither a rush shot nor a rebound, we have all the terms we need; adding them together gives -2.364. Applying the logistic function gives 8.6%, close to the historical percentage of six to eight percent from this area.
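The arithmetic of this worked example can be checked directly; a small sketch using the coefficient values from the table above:

```python
import math

# Goal probability for a 30 ft wrist shot on the split line,
# neither a rush shot nor a rebound, using the fitted coefficients.
constant    = -2.55
distance    = -2.86 * (30 / 89)   # distances are encoded as a fraction of 89 ft
visible_net = +1.15 * 1           # full net visible from the split line

x = constant + distance + visible_net          # about -2.364
p = 1 / (1 + math.exp(-x))                     # the logistic function
print(f"goal probability: {p:.1%}")            # about 8.6%
```

Adding the Rush or Rebound coefficients to x would give the corresponding probabilities for those shot contexts.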
The overall features of the model are more or less as expected---shots from farther away are less likely to go in, seeing more of the net is good, rush shots are good, rebound shots are even better. The (very slight) positive value for slapshots and the negative value for tips and deflections may seem surprising at first; after all, slapshots are scored only rarely and tips score often. However, slapshots are systematically taken far from the net, and tips and deflections almost always come from close to the net. After accounting for shot location there is almost no difference between wrist and slap shots, and tips are, after all, somewhat less precise than wrist shots, since the player tipping the prior shot generally isn't looking at the net.
As the above example shows, the model can already be used without specifying shooters or goaltenders. However, this is perhaps a little boring. Below are the values for all the goaltenders who faced at least one shot in the 2016-2018 regular seasons. I've inverted the scale so that the better performances are at the top.
The scale is the same units as for the non-player covariates above, so even the best or worst performances are smaller than the effect of a shot being a rush shot, for instance, consistent with goaltending performances being broadly similar across the league.
Let $(N \subset M)$ be an irreducible finite index depth $n$ subfactor. Let $P = P(N \subset M)$ its planar algebra.
Let $(B_i)$ be the finite sequence of $N$-$N$-bimodules appearing in the principal graph. Let $2m = n$ if $n$ even, else $2m=n+1$. Let $p_i \in P_{2m,+}$ be the minimal central projection related to the $N$-$N$-bimodule $B_i$. Question: Is there a planar tangle $T: P_{2m,+} \otimes P_{2m,+} \to P_{2m,+}$ such that $T(p_i \otimes p_j) = \sum_{k} n_{ij}^k p_k $ with $B_i \boxtimes B_j = \bigoplus_k M_{ij}^k \otimes B_k$ and $dim(M_{ij}^k)= n_{ij}^k$ (the fusion coefficients)?
Else, is there such a $T$ if we only consider the range support? the central support?
Remark: If $n = 2$, such a $T$ exists, it's the coproduct (see here). Then, a generalization of the coproduct on $P_{2m,+}$ could do the job. |
Xavier Blanc (LJLL et Matherials) Thursday, October 17th, 10h — Salle de séminaire du CERMICS
Asymptotic-preserving schemes on conical meshes
We consider a linear hyperbolic system called the hyperbolic heat equation. This system is a widely used approximation of the radiative transfer equation, in particular in inertial confinement fusion experiments (Laser Mégajoule). In the limit of strong interaction between radiation and matter, this equation degenerates into a diffusion equation. We will present finite volume schemes that respect this limit, first on polygonal meshes, then on meshes whose edges are defined by conics. This is joint work with V. Delmas and P. Hoch (CEA). Pierre Bellec (Rutgers University) Thursday, October 17th, 15h — Salle de séminaire du CERMICS
First order expansion of convex regularized estimators
We consider first order expansions of convex penalized estimators in
high-dimensional regression problems with random designs. Our setting includes linear regression and logistic regression as special cases. For a given penalty function $h$ and the corresponding penalized estimator $\hat\beta$, we construct a quantity $\eta$, the first order expansion of $\hat\beta$, such that the distance between $\hat\beta$ and $\eta$ is an order of magnitude smaller than the estimation error $\|\hat{\beta} - \beta^*\|$. In this sense, the first order expansion $\eta$ can be thought of as a generalization of influence functions from the mathematical statistics literature to regularized estimators in high dimensions. Such a first order expansion implies that the risk of $\hat{\beta}$ is asymptotically the same as the risk of $\eta$, which leads to a precise characterization of the MSE of $\hat\beta$; this characterization takes a particularly simple form for isotropic design. Such a first order expansion also leads to confidence intervals based on $\hat{\beta}$. We provide sufficient conditions for the existence of such a first order expansion for three regularizers: the Lasso in its constrained form, the Lasso in its penalized form, and the Group-Lasso. The results apply to general loss functions under some conditions, and those conditions are satisfied for the squared loss in linear regression and for the logistic loss in the logistic model.
Joint work with Arun K Kuchibhotla (UPenn)
Horia Cornean (Aalborg) Tuesday, October 29, 10h — Salle de séminaire du CERMICS
Parseval frames of exponentially localized magnetic Wannier functions
Abstract: Motivated by the analysis of gapped periodic quantum systems in the presence of a uniform magnetic field in dimension $d \le 3$, we study the possibility to construct spanning sets of exponentially localized (generalized) Wannier functions for the space of occupied states. When the magnetic flux per unit cell satisfies a certain rationality condition, by going to the momentum-space description one can model $m$ occupied energy bands by a real-analytic and $\mathbb{Z}^d$-periodic family $\{P(\mathbf{k})\}_{\mathbf{k}\in\mathbb{R}^d}$ of orthogonal projections of rank $m$. A moving orthonormal basis of $\mathrm{Ran}\,P(\mathbf{k})$ consisting of real-analytic and $\mathbb{Z}^d$-periodic Bloch vectors can be constructed if and only if the first Chern number(s) of $P$ vanish(es). Here we are mainly interested in the topologically obstructed case. First, by dropping the generating condition, we show how to algorithmically construct a collection of $m-1$ orthonormal, real-analytic, and $\mathbb{Z}^d$-periodic Bloch vectors. Second, by dropping the linear independence condition, we construct a Parseval frame of $m+1$ real-analytic and $\mathbb{Z}^d$-periodic Bloch vectors which generate $\mathrm{Ran}\,P(\mathbf{k})$. Both algorithms are based on a two-step logarithm method which produces a moving orthonormal basis in the topologically trivial case. A moving Parseval frame of analytic, $\mathbb{Z}^d$-periodic Bloch vectors corresponds to a Parseval frame of exponentially localized composite Wannier functions. We extend this construction to the case of magnetic Hamiltonians with an irrational magnetic flux per unit cell and show how to produce Parseval frames of exponentially localized generalized Wannier functions also in this setting. Our results are illustrated in crystalline insulators modelled by 2d discrete Hofstadter-like Hamiltonians, but apply to certain continuous models of magnetic Schrödinger operators as well.
This is joint work with Domenico Monaco and Massimo Moscolari. Sonia Fliss (ENSTA) Thursday, November 7th, 10h — Salle de séminaire du CERMICS
TBA
David Herzog (Iowa State University) Thursday, November 21st, 10h — Salle de séminaire du CERMICS
TBA
Denis Talay (Inria Sophia-Antipolis) Wednesday, November 27th, 14h30 — Salle F106
TBA
Laure Dumaz (CEREMADE) Thursday, December 12th, 10h — Salle de séminaire du CERMICS
Localization of the continuous Anderson hamiltonian in 1-d and its transition
We consider the continuous Schrödinger operator $-d^2/dx^2 + B'(x)$ on the interval $[0,L]$, where the potential $B'$ is a white noise. We study the spectrum of this operator in the large $L$ limit. We show the convergence of the smallest eigenvalues, as well as of the eigenvalues in the bulk, towards a Poisson point process, and the localization of the associated eigenvectors in a precise sense. We also find that the transition towards delocalization holds for large eigenvalues of order $L$, where the limiting law of the point process corresponds to $\mathrm{Sch}_\tau$, a process introduced by Kritchevski, Valkó and Virág for discrete Schrödinger operators. In this case, the eigenvectors behave like the exponential of a Brownian motion plus a drift, which proves a conjecture of Rifkind and Virág. Joint works with Cyril Labbé.
Archive of past seminars: here |
Quantum teleportation is a process by which quantum information (e.g. the exact state of an atom or photon) can be transmitted (exactly, in principle) from one location to another, with the help of classical communication and previously shared quantum entanglement between the sending and receiving location. Because it depends on classical communication, which can proceed no faster than the speed of light, it cannot be used for superluminal transport or communication of classical bits. It also cannot be used to make copies of a system, as this violates the no-cloning theorem. Although the name is inspired by the teleportation commonly used in fiction, current technology provides no possibility of anything resembling the fictional form of teleportation. While it is possible to teleport one or more qubits of information between two (entangled) atoms,[1][2][3] this has not yet been achieved between molecules or anything larger. One may think of teleportation either as a kind of transportation, or as a kind of communication; it provides a way of transporting a qubit from one location to another, without having to move a physical particle along with it.
The seminal paper[4] first expounding the idea was published by C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres and W. K. Wootters in 1993.[5] Since then, quantum teleportation has been realized in various physical systems. Presently, the record distance for quantum teleportation is 143 km (89 mi) with photons,[6] and 21 m with material systems.[7] In August 2013, the achievement of "fully deterministic" quantum teleportation, using a hybrid technique, was reported.[8] On 29 May 2014, scientists announced a reliable way of transferring data by quantum teleportation. Quantum teleportation of data had been done before but with highly unreliable methods.[9][10]
It is known, from axiomatizations of quantum mechanics (such as categorical quantum mechanics), that the universe is fundamentally composed of two things: bits and qubits.[11][12] Bits are units of information, and are commonly represented using zero or one, true or false. These bits are sometimes called "classical" bits, to distinguish them from quantum bits, or qubits. Qubits also encode a type of information, called quantum information, which differs sharply from "classical" information. For example, a qubit cannot be used to encode a classical bit (this is the content of the no-communication theorem). Conversely, classical bits cannot be used to encode qubits: the two are quite distinct, and not inter-convertible. Qubits differ from classical bits in dramatic ways: they cannot be copied (the no-cloning theorem) and they cannot be destroyed (the no-deleting theorem).
Quantum teleportation provides a mechanism of moving a qubit from one location to another, without having to physically transport the underlying particle that a qubit is normally attached to. Much like the invention of the telegraph allowed classical bits to be transported at high speed across continents, quantum teleportation holds the promise that one day, qubits could be moved likewise. However, as of 2013, only photons and single atoms have been teleported; molecules have not, nor does this even seem likely in the upcoming years, as the technology remains daunting. Specific distance and quantity records are stated below.
The movement of qubits does require the movement of "things"; in particular, the actual teleportation protocol requires that an entangled quantum state or Bell state be created, and its two parts shared between two locations (the source and destination, or Alice and Bob). In essence, a certain kind of "quantum channel" between two sites must be established first, before a qubit can be moved. Teleportation also requires a classical information link to be established, as two classical bits must be transmitted to accompany each qubit. The need for such links may, at first, seem disappointing; however, this is not unlike ordinary communications, which requires wires, radios or lasers. What's more, Bell states are most easily shared using photons from lasers, and so teleportation could be done, in principle, through open space.
The quantum states of single atoms have been teleported.[1][2][3] An atom consists of several parts: the qubits in the electronic state or electron shells surrounding the atomic nucleus, the qubits in the nucleus itself, and, finally, the electrons, protons and neutrons making up the atom. Physicists have teleported the qubits encoded in the electronic state of atoms; they have not teleported the nuclear state, nor the nucleus itself. It is therefore false to say "an atom has been teleported". It has not. The quantum state of an atom has. Thus, performing this kind of teleportation requires a stock of atoms at the receiving site, available for having qubits imprinted on them. The importance of teleporting nuclear state is unclear: nuclear state does affect the atom, e.g. in hyperfine splitting, but whether such state would need to be teleported in some futuristic "practical" application is debatable.
The quantum world is strange and unusual; so, aside from no-cloning and no-deleting, there are other oddities. For example, quantum correlations arising from Bell states seem to be instantaneous (the Alain Aspect experiments), whereas classical bits can only be transmitted slower than the speed of light (quantum correlations cannot be used to transmit classical bits; again, this is the no-communication theorem). Thus, teleportation, as a whole, can never be superluminal, as a qubit cannot be reconstructed until the accompanying classical bits arrive.
The proper description of quantum teleportation requires a basic mathematical toolset, which, although complex, is not out of reach of advanced high-school students, and indeed becomes accessible to college students with a good grounding in finite-dimensional linear algebra. In particular, the theory of Hilbert spaces and projection matrices is heavily used. A qubit is described using a two-dimensional complex-valued vector space (a Hilbert space); the formal manipulations given below do not make use of anything much more than that. Strictly speaking, a working knowledge of quantum mechanics is not required to understand the mathematics of quantum teleportation, although without such acquaintance, the deeper meaning of the equations may remain quite mysterious.
The prerequisites for quantum teleportation are a qubit that is to be teleported, a conventional communication channel capable of transmitting two classical bits (i.e., one of four states), and means of generating an entangled EPR pair of qubits, transporting each of these to two different locations, A and B, performing a Bell measurement on one of the EPR pair qubits, and manipulating the quantum state of the other of the pair. The protocol is then as follows:
Work in 1998 verified the initial predictions,[13] and the distance of teleportation was increased in August 2004 to 600 meters, using optical fiber.[14] The longest distance yet claimed to be achieved for quantum teleportation is 143 km (89 mi), performed in May 2012, between the two Canary Islands of La Palma and Tenerife off the Atlantic coast of north Africa.[15] In April 2011, experimenters reported that they had demonstrated teleportation of wave packets of light up to a bandwidth of 10 MHz while preserving strongly nonclassical superposition states.[16][17]
Researchers at the Niels Bohr Institute successfully used quantum teleportation to transmit information between clouds of gas atoms, notable because the clouds of gas are macroscopic atomic ensembles.[18][19]
There are a variety of ways in which the teleportation protocol can be written mathematically. Some are very compact but abstract, and some are verbose but straightforward and concrete. The presentation below is of the latter form: verbose, but has the benefit of showing each quantum state simply and directly. Later sections review more compact notations.
The teleportation protocol begins with a quantum state or qubit |\psi\rangle, in Alice's possession, that she wants to convey to Bob. This qubit can be written generally, in bra–ket notation, as:
The subscript C above is used only to distinguish this state from A and B, below. The protocol requires that Alice and Bob share a maximally entangled state beforehand. This state is chosen beforehand, by mutual agreement between Alice and Bob, and will be one of the four Bell states
Alice obtains one of the qubits in the pair, with the other going to Bob. The subscripts A and B in the entangled state refer to Alice's or Bob's particle. In the following, assume that Alice and Bob shared the entangled state |\Phi^+\rangle_{AB}.
At this point, Alice has two particles (C, the one she wants to teleport, and A, one of the entangled pair), and Bob has one particle, B. In the total system, the state of these three particles is given by
Alice will then make a partial measurement in the Bell basis on the two qubits in her possession. To make the result of her measurement clear, it is best to write the state of Alice's two qubits as superpositions of the Bell basis. This is done by using the following general identities, which are easily verified:
and
The total three particle state, of A, B and C together, thus becomes the following four-term superposition:
The above is just a change of basis on Alice's part of the system. No operation has been performed and the three particles are still in the same total state. The actual teleportation occurs when Alice measures her two qubits in the Bell basis. Experimentally, this measurement may be achieved via a series of laser pulses directed at the two particles. Given the above expression, evidently the result of Alice's (local) measurement is that the three-particle state would collapse to one of the following four states (with equal probability of obtaining each):
Alice's two particles are now entangled to each other, in one of the four Bell states, and the entanglement originally shared between Alice's and Bob's particles is now broken. Bob's particle takes on one of the four superposition states shown above. Note how Bob's qubit is now in a state that resembles the state to be teleported. The four possible states for Bob's qubit are unitary images of the state to be teleported.
The result of Alice's Bell measurement tells her which of the above four states the system is in. She can now send her result to Bob through a classical channel. Two classical bits can communicate which of the four results she obtained.
After Bob receives the message from Alice, he will know which of the four states his particle is in. Using this information, he performs a unitary operation on his particle to transform it to the desired state \alpha |0\rangle_B + \beta|1\rangle_B:
to recover the state.
to his qubit.
Teleportation is thus achieved. The above-mentioned three gates correspond to rotations of π radians (180°) about appropriate axes (X, Y and Z).
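The four-term superposition and the correction table above can be checked numerically. Below is a small simulation sketch (numpy; not from the article, and the variable names are ours) that teleports a random qubit through the shared |\Phi^+\rangle channel and verifies that each Bell outcome occurs with probability 1/4 and that Bob's corrected state matches |\psi\rangle up to a global phase:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(7)

# Random qubit |psi> = alpha|0> + beta|1> that Alice wants to teleport
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# Shared Bell pair |Phi+> = (|00> + |11>)/sqrt(2) on qubits A and B
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Three-qubit state, tensor order C (state to teleport), A (Alice), B (Bob)
state = np.kron(psi, phi_plus)

# Bell basis on (C, A): rows are Phi+, Phi-, Psi+, Psi-
bell = np.array([[1, 0, 0, 1],
                 [1, 0, 0, -1],
                 [0, 1, 1, 0],
                 [0, 1, -1, 0]], dtype=complex) / np.sqrt(2)
# Bob's correction for each outcome (up to an irrelevant global phase)
corrections = [I2, Z, X, X @ Z]

amps = state.reshape(4, 2)          # rows: (C,A) basis states, cols: B
probs, fidelities = [], []
for k in range(4):
    bob = bell[k].conj() @ amps     # unnormalized post-measurement state of B
    p = float(np.linalg.norm(bob) ** 2)
    recovered = corrections[k] @ (bob / np.linalg.norm(bob))
    probs.append(p)                                   # each outcome: prob 1/4
    fidelities.append(abs(np.vdot(psi, recovered)))   # 1 up to rounding
print(probs, fidelities)
```

Note that the blank measurement outcomes are equiprobable regardless of |\psi\rangle, which is exactly why Alice's two classical bits carry no information about the teleported state.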
Some remarks:
There are a variety of different notations in use that describe the teleportation protocol. One common one is by using the notation of quantum gates. In the above derivation, the unitary transformation that is the change of basis (from the standard product basis into the Bell basis) can be written using quantum gates. Direct calculation shows that this gate is given by
where H is the one qubit Walsh-Hadamard gate and C_N is the Controlled NOT gate.
Teleportation can be applied not just to pure states, but also mixed states, that can be regarded as the state of a single subsystem of an entangled pair. The so-called entanglement swapping is a simple and illustrative example.
If Alice has a particle which is entangled with a particle owned by Bob, and Bob teleports it to Carol, then afterwards, Alice's particle is entangled with Carol's.
A more symmetric way to describe the situation is the following: Alice has one particle, Bob two, and Carol one. Alice's particle and Bob's first particle are entangled, and so are Bob's second and Carol's particle:
Alice -:-:-:-:-:- Bob1 -:- Bob2 -:-:-:-:-:- Carol
Now, if Bob performs a projective measurement on his two particles in the Bell state basis and communicates the results to Carol, as per the teleportation scheme described above, the state of Bob's first particle can be teleported to Carol's. Although Alice and Carol never interacted with each other, their particles are now entangled.
A detailed diagrammatic derivation of entanglement swapping has been given by Bob Coecke,[21] presented in terms of categorical quantum mechanics.
One can imagine how the teleportation scheme given above might be extended to N-state particles, i.e. particles whose states lie in the N dimensional Hilbert space. The combined system of the three particles now has an N^3 dimensional state space. To teleport, Alice makes a partial measurement on the two particles in her possession in some entangled basis on the N^2 dimensional subsystem. This measurement has N^2 equally probable outcomes, which are then communicated to Bob classically. Bob recovers the desired state by sending his particle through an appropriate unitary gate.
In general, mixed states ρ may be transported, and a linear transformation ω applied during teleportation, thus allowing data processing of quantum information. This is one of the foundational building blocks of quantum information processing. This is demonstrated below.
A general teleportation scheme can be described as follows. Three quantum systems are involved. System 1 is the (unknown) state ρ to be teleported by Alice. Systems 2 and 3 are in a maximally entangled state ω that are distributed to Alice and Bob, respectively. The total system is then in the state
A successful teleportation process is a LOCC quantum channel Φ that satisfies
where Tr12 is the partial trace operation with respect systems 1 and 2, and \circ denotes the composition of maps. This describes the channel in the Schrödinger picture.
Taking adjoint maps in the Heisenberg picture, the success condition becomes
for all observable O on Bob's system. The tensor factor in I \otimes O is 12 \otimes 3 while that of \rho \otimes \omega is 1 \otimes 23.
The proposed channel Φ can be described more explicitly. To begin teleportation, Alice performs a local measurement on the two subsystems (1 and 2) in her possession. Assume the local measurement have effects
If the measurement registers the i-th outcome, the overall state collapses to
The tensor factor in (M_i \otimes I) is 12 \otimes 3 while that of \rho \otimes \omega is 1 \otimes 23. Bob then applies a corresponding local operation Ψi on system 3. On the combined system, this is described by
where Id is the identity map on the composite system 1 \otimes 2.
Therefore the channel Φ is defined by
Notice Φ satisfies the definition of LOCC. As stated above, the teleportation is said to be successful if, for all observable O on Bob's system, the equality
holds. The left hand side of the equation is:
where Ψi* is the adjoint of Ψi in the Heisenberg picture. Assuming all objects are finite dimensional, this becomes
The success criterion for teleportation has the expression
A local explanation of quantum teleportation is put forward by David Deutsch and Patrick Hayden, with respect to the many-worlds interpretation of Quantum mechanics. Their paper asserts that the two bits that Alice sends Bob contain "locally inaccessible information" resulting in the teleportation of the quantum state. "The ability of quantum information to flow through a classical channel ..., surviving decoherence, is ... the basis of quantum teleportation."[22]
The aim of Kernel Density Estimation (KDE) is:
Given a set of \(N\) samples from a random variable \(\mathbf{X}\), possibly multivariate and continuous, estimate the random variable's probability density function (pdf).
The univariate case
To get a rough idea of how one can think about the problem, we start out with a set of samples, \(X=[x_1,x_2,...,x_N]\), of a continuous, one-dimensional (univariate) random variable on \(\mathbb{R}\). To get an estimate, we assume that the pdf is constant on small intervals \(dx\); this means our pdf is now piecewise constant and discretized. The amount of the total probability mass falling into each small interval can be written as,
\( P_{dx} \approx f(x)dx = \frac{N_{dx}}{N}\),
where \(N_{dx}\) is the number of samples falling into a particular interval. What we have done, in other words, is to estimate the pdf by the normalized histogram of the samples from the random variable. In the animated gif below you can see such an estimate of a normal distributed random variable.
This estimate of \(f(x)\) is quite blunt because of the discretization, and thus depends on the size of \(dx\) and the number of samples we have. We can see this in the gif above, where we start out with a small set of samples and large bins \(dx\). As we go along, we get more samples and can make \(dx\) finer, which results in an approximation that is quite close to the original normal distribution.
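As a concrete sketch of the estimate above (our own illustration, not part of the post), the normalized histogram \(f(x) \approx N_{dx}/(N\,dx)\) can be computed with numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=5000)        # N samples from a standard normal

dx = 0.25                              # interval (bin) width
edges = np.arange(-4.0, 4.0 + dx, dx)
counts, _ = np.histogram(samples, bins=edges)

# Piecewise-constant pdf estimate: f(x) = N_dx / (N * dx) on each interval
f_hat = counts / (samples.size * dx)

# Total probability mass of the estimate (close to 1; a little mass
# may fall outside the [-4, 4] range)
total_mass = float(np.sum(f_hat * dx))
print(total_mass)
```

Dividing the counts by \(N\,dx\) rather than \(N\) is what turns the histogram into a density: each bar then has area \(N_{dx}/N\), so the bars sum to (approximately) one.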
The multivariate case
In the multivariate case we simply replace \(dx\) by a volume \(dV\) and the estimate above becomes
\( P_{dV} \approx f(x)dV = \frac{N_{dV}}{N}. \)
We can write out the equation in units / words to get a better understanding of it:
\( \mathrm{probability\; mass} \approx (\mathrm{probability\; mass}/\mathrm{volume})* volume = \\ (\mathrm{probability\; mass}/ \mathrm{sample}) * \mathrm{samples} \)
As we move from univariate to multivariate most of the volumes, \(dV\), will contain few or zero samples, especially if \(dV\) is small. This will make our discretized estimate quite jumpy.
The KDE approach
KDE approaches the discontinuity problem by introducing uncertainty (or noise) over the sample locations. If there is uncertainty in the position of a sample, the probability mass assigned to it spreads out, influencing other \(dV\)'s and giving a better, smoother estimate. Furthermore, since the samples are now spread out, the grid (or volume) estimate turns into a sliding-window estimate.
To show how this can be done, we start out by specifying a kernel, \(K(u)\), at each sample point. The kernel is equivalent to spreading out the count of a sample over the whole space. For the input to the kernel, centered at sample \(x_i\), we write \( \frac{x-x_i}{dx} \).
The input to the kernel reads as the ratio between the to-sample distance and the discretization size. We are thus feeding the kernel a value that is a linear function of the to-sample distance and the grid size.
The kernel is formulated such that the following holds,
\( \int K(u)\,du = 1 ,\; K(u) \geq 0 \).
This looks suspiciously like a pdf, and that is essentially what it is.
Choosing the right kernel is more of a data problem than a theory problem, but starting with a Gaussian kernel is always a safe bet.
\(K(\frac{x-x_i}{dx}) = \frac{1}{(2\pi dx^2)^{D/2}} e^{-\frac{1}{2}(\frac{||x-x_i||}{dx})^2}\)
Here \(dx\) becomes the standard deviation of the kernel and restricts the sample-count spread to be concentrated within the volume. Essentially, \(dx\) controls the smoothness of the estimate, and it is thus only natural that it should affect the concentration of probability mass. We can now pin down the pdf as a sum,
\( f(x) = \frac{1}{N} \sum\limits_{i=1}^{N} K (\frac{x-x_i}{dx}) \),
where \(K\) is any kernel that fulfills the requirements stated above. Below are animated gifs of the behaviour of the KDE as a function of the number of samples, one with a coarse and one with a fine \(dx\).
The blue dotted lines are the individual kernel values (not divided by \(N\)), red is the estimated density, and magenta the actual density. The first image has a coarse grid value, requiring only a few samples, while the lower image has a finer grid requiring many more samples. This is one of the weaknesses of the approach, since \(dx\) is the same over the whole space and cannot be made finer where there is a lot of variation and coarser where there is not. If we choose a bad average value of \(dx\), we might end up with the situation that arises in image two. Luckily there are ways to fix this, but that is for another post.
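To make the formulas concrete, here is a minimal one-dimensional implementation of \( f(x) = \frac{1}{N} \sum_i K(\frac{x-x_i}{dx}) \) with the Gaussian kernel above (our own numpy sketch; the names are illustrative):

```python
import numpy as np

def kde(x_grid, samples, dx):
    """Gaussian-kernel density estimate on x_grid with bandwidth dx (D = 1)."""
    u = (x_grid[:, None] - samples[None, :]) / dx        # pairwise (x - x_i)/dx
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi * dx ** 2)
    return K.mean(axis=1)                                # (1/N) sum over samples

rng = np.random.default_rng(0)
samples = rng.normal(size=2000)
x = np.linspace(-4.0, 4.0, 801)
f = kde(x, samples, dx=0.3)

# The estimate is itself (approximately) a pdf: nonnegative with unit mass
area = float(f.sum() * (x[1] - x[0]))
print(area)
```

Since each kernel term integrates to one in \(x\), their average does too, which is why the Riemann sum over the grid comes out close to one.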
A footnote: one can interpret the sum over the kernels as a mixture model where the mixture weights are equal, \(1/N\), and the components have the same local support. If one is familiar with the EM or variational approach to Gaussian mixture models (GMMs), one can easily see the shortcomings of this static approach to density estimation, as well as its simplicity and ease of implementation compared to the GMM approach to density modeling.
Summary: The 15 Puzzle consists of 15 squares numbered from 1 to 15 that are placed in a 4 by 4 box with one empty position. The objective of the puzzle is to reposition the squares by sliding them one at a time into a configuration with the numbers in order.
The 15 Puzzle is a sliding puzzle that consists of a 4 by 4 frame of numbered square tiles in an arbitrary ordering with one space. The objective of the puzzle is to place the tiles in order, as shown in the figure below, by making sliding moves that use the empty space.
Solve the Puzzle
Solve the 15 Puzzle yourself using the interactive demo.
Here we investigate an optimization approach to solving the 15 Puzzle. In particular, we formulate an integer programming model in which the objective is to minimize the number of moves that the empty space makes in the process of positioning the numbered tiles in their respective positions. The constraints in the model define the allowable moves and track the positions of the empty space and the numbered tiles. The empty space (or blank tile) is allowed to move in four directions -- up, down, left, and right -- unless it is along an edge. When the blank tile is along an edge, one or two of the directions will be prohibited.
Sets
\(I = \{i0, i1, i2, \cdots, i15\}\) is the set of tiles, in which the blank tile is \(i0\)
\(T\) is the time horizon (set of moves)
Parameters
initial(i) = the starting position of tile \(i\), for all tiles \(i \in I\)
final(i) = the final (target) position of tile \(i\), for all tiles \(i \in I\)
Variables
\(p(i,t)\) = position of tile \(i\) at time \(t\), integer
\(x(t)\) = \(x\)-coordinate of the blank tile at time \(t\), integer
\(y(t)\) = \(y\)-coordinate of the blank tile at time \(t\), integer
\(z(i,t) = \left\{ \begin{array}{ll} 1 & \mbox{if tile \(i\) moves at time \(t\)} \\ 0 & \mbox{otherwise} \end{array} \right.\) \(\forall i \in I, \forall t \in T\)
\(u(t) = \left\{ \begin{array}{ll} 1 & \mbox{if the blank tile moves up at time \(t\)} \\ 0 & \mbox{otherwise} \end{array} \right.\) \(\forall t \in T\)
\(d(t) = \left\{ \begin{array}{ll} 1 & \mbox{if the blank tile moves down at time \(t\)} \\ 0 & \mbox{otherwise} \end{array} \right.\) \(\forall t \in T\)
\(l(t) = \left\{ \begin{array}{ll} 1 & \mbox{if the blank tile moves left at time \(t\)} \\ 0 & \mbox{otherwise} \end{array} \right.\) \(\forall t \in T\)
\(r(t) = \left\{ \begin{array}{ll} 1 & \mbox{if the blank tile moves right at time \(t\)} \\ 0 & \mbox{otherwise} \end{array} \right.\) \(\forall t \in T\)
15 Puzzle Integer Programming Formulation
Minimize \( \sum_{t \in T} z(i0,t) \)
subject to: blank tile can make only one move at a time
\( u(t) + d(t) + l(t) + r(t) = z(i0,t) \quad \forall t \in T\)
subject to: update the position of the blank tile after each move
\( p(i0,t+1) = p(i0,t) - 4u(t) + 4d(t) - l(t) + r(t) \quad \forall t \in T\)
subject to: update the position of the numbered tile if it moved
\( p(i,t+1) - p(i0,t) + 16 z(i,t) \leq 16, \quad \forall i \in I, \forall t \in T\) \( p(i,t+1) - p(i0,t) - 16 z(i,t) \geq -16, \quad \forall i \in I, \forall t \in T\)
subject to: define the x and y coordinates of the blank tile
\( p(i0,t) = x(t) + 4y(t), \quad \forall t \in T\)
subject to: restrict the movement of the blank tile at the top edge
\( 5u(t) \le p(i0,t), \quad \forall t \in T\)
subject to: restrict the movement of the blank tile at the bottom edge
\( 4d(t) \le 16 - p(i0,t), \quad \forall t \in T\)
subject to: restrict the movement of the blank tile at the left edge
\( l(t) \le x(t) - 1, \quad \forall t \in T \)
subject to: restrict the movement of the blank tile at the right edge
\( r(t) \le 4 - x(t), \quad \forall t \in T \)
subject to: ensure that every tile is assigned a position at every time
\( \sum_{i \in I} p(i,t) = 1 + 2 + ... + 16, \quad \forall t \in T \)
subject to: ensure that one of the numbered tiles moves if the blank tile moves
\( z(i0,t) = \sum_{i \ne i0 } z(i,t), \quad \forall t \in T\)
subject to: maintain the position of the numbered tiles if the blank tile does not move
\( p(i,t) - p(i,t+1) + 4 z(i,t) \geq 0, \quad \forall i \in I, t \in T\) \( p(i,t) - p(i,t+1) - 4 z(i,t) \leq 0, \quad \forall i \in I, t \in T\)
subject to: bound restrictions on the variables
\( 1 \le p(i,t) \le 16, \quad \forall i \in I, \forall t \in T\) \( 1 \le x(t) \le 4, \quad \forall t \in T\) \( 0 \le y(t) \le 3, \quad \forall t \in T\)
subject to: initial positions
\( p(i,1) = initial(i), \quad \forall i \in I\)
subject to: final positions
\( p(i,|T|) = final(i), \quad \forall i \in I\)
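The coordinate encoding \(p = x + 4y\) and the four edge constraints can be sanity-checked with a short script (our own Python sketch; the model itself is in GAMS below):

```python
# Positions are 1..16 with p = x + 4y, x in 1..4 (column), y in 0..3 (row).
def coords(p):
    x = (p - 1) % 4 + 1
    y = (p - 1) // 4
    return x, y

def legal_moves(p):
    """Blank-tile moves allowed at position p, mirroring the edge constraints:
    up needs p >= 5, down needs p <= 12, left needs x >= 2, right needs x <= 3."""
    x, y = coords(p)
    moves = {}
    if p >= 5:   moves['up'] = p - 4      # 5u <= p
    if p <= 12:  moves['down'] = p + 4    # 4d <= 16 - p
    if x >= 2:   moves['left'] = p - 1    # l <= x - 1
    if x <= 3:   moves['right'] = p + 1   # r <= 4 - x
    return moves

# Corner position 1 (top-left) allows only down and right
print(sorted(legal_moves(1)))    # ['down', 'right']
# Interior position 6 allows all four moves
print(sorted(legal_moves(6)))    # ['down', 'left', 'right', 'up']
```

Each move changes \(p\) by \(\pm 4\) (up/down) or \(\pm 1\) (left/right), which is exactly the blank-tile update constraint \(p(i0,t+1) = p(i0,t) - 4u(t) + 4d(t) - l(t) + r(t)\).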
Different instances of the model can be created by specifying different values in the vector
initial(i). To solve an instance, we can use one of the NEOS Solvers in the Mixed Integer Linear Programming category.
In order to solve an instance of the model, we need to specify an upper bound on \(|T|\), the number of steps required to solve the puzzle. Clearly, we want the upper bound to be as small as possible. Brüngger et al. (1999) proved computationally that the hardest initial configurations require 80 steps to solve. Therefore, in the general case, we can define \(T = \{1, \cdots, 80\}\), but the computation time required to solve those hardest initial configurations is extensive. For the Java applet here, we make two simplifications in order to provide a user-friendly experience. First, we limit the set of available starting positions to those that can be solved within 60 seconds. Second, we have the solver return the first integer solution that it finds. This solution may not be optimal in the sense that there may be a shorter sequence of steps; however, this approach eliminates the (potentially lengthy) time spent by the solver proving the optimality of the solution.
Here is a GAMS model for an instance in which the initial configuration is the following:
(i5, i1, i2, i3, i9, i6, i7, i4, i13, i10, i11, i8, i14, i15, i0, i12).
sets
i tile number /i0*i15/ t move number /0*80/
parameter
*Optimal solution is 11 moves
initial(i) /i1 2, i2 3, i3 4, i4 8, i5 1, i6 6, i7 7, i8 12, i9 5, i10 10, i11 11, i12 16, i13 9, i14 13, i15 14, i0 15/
final(i) /i1 1, i2 2, i3 3, i4 4, i5 5, i6 6, i7 7, i8 8, i9 9, i10 10, i11 11, i12 12, i13 13, i14 14, i15 15, i0 16/;
integer variables
p(i,t), x(t), y(t);
binary variables
z(i,t), u(t), d(t), l(t), r(t);
variable
obj;
equations
c_blank(t)
c_move(t)
c_move2_1(i,t)
c_move2_2(i,t)
c_pos(t)
c_up(t)
c_down(t)
c_left(t)
c_right(t)
c_no_repeat(t)
c1(t)
c2_1(i,t)
c2_2(i,t)
calc_obj;
c_blank(t).. u(t) + d(t) + l(t) + r(t) =E= z('i0',t);
c_move(t)$(not sameas(t,'80')).. p('i0',t+1) =E= p('i0',t) - 4*u(t) + 4*d(t) - l(t) + r(t);
c_move2_1(i,t)$(not sameas(i,'i0')).. p(i,t+1) - p('i0',t) + 16*z(i,t) =L= 16;
c_move2_2(i,t)$(not sameas(i,'i0')).. p(i,t+1) - p('i0',t) - 16*z(i,t) =G= -16;
c_up(t).. 5*u(t) =L= p('i0',t);
c_down(t).. 4*d(t) =L= 16 - p('i0',t);
c_left(t).. l(t) =L= x(t) - 1;
c_right(t).. r(t) =L= 4 - x(t);
c_pos(t).. p('i0',t) =E= 4*y(t) + x(t);
c_no_repeat(t).. sum(i, p(i,t)) =E= sum(i, ord(i));
c1(t).. z('i0',t) =E= sum(i$(not sameas(i,'i0')), z(i,t));
c2_1(i,t)$(not sameas(t,'80')).. -4*z(i,t) =L= p(i,t) - p(i,t+1);
c2_2(i,t)$(not sameas(t,'80')).. p(i,t) - p(i,t+1) =L= 4*z(i,t);
calc_obj.. obj =E= sum(t, z('i0',t));
p.lo(i,t) = 1; p.up(i,t) = 16;
x.lo(t) = 1; x.up(t) = 4; y.lo(t) = 0; y.up(t) = 3;
p.fx(i,'1') = initial(i);
p.fx(i,'80') = final(i);
model puzzle4 /all/;
solve puzzle4 using mip minimizing obj;
Wai Kit Ong and Yachao Dong contributed the original version of this case study, which was completed as part of CS/ISyE 635, a course taught by Professor Jeff Linderoth at UW-Madison in Spring 2012.
Brüngger, A., Marzetta, A., Fukuda, K., and Nievergelt, J. 1999. The Parallel Search Bench ZRAM and Its Applications. Annals of Operations Research 90, 45-63.

Johnson, W., and Story, W. 1879. Notes on the "15" Puzzle. American Journal of Mathematics, 397-404.

Korf, R., and Schultze, P. 2005. Large-scale parallel breadth-first search. In Proceedings of the 20th National Conference on Artificial Intelligence (AAAI-05), 1380-1385.
Bob has a black-box, with the label "V-Wade", which he has been promised prepares a qubit which he would like to know the state of. He asks Alice, who happens also to be an experimental physicist, to determine the state of his qubit. Alice reports $\sigma$ but Bob would like to know her honest opinion $\rho$ for the state of the qubit. To ensure her honesty, Bob performs a measurement $\{E_i\}$ and will pay Alice $R(q_i)$ if he obtains outcome $E_i$, where $q_i={\rm Tr}(\sigma E_i)$. Denote the honest probabilities of Alice by $p_i={\rm Tr}(\rho E_i)$. Then her honesty is ensured if $$ \sum_i p_i R(p_i)\leq\sum_i p_i R(q_i). $$ The Aczel-Pfanzagl theorem holds and thus $R(p)=C\log p + B$. Thus, Alice's expected loss is (up to a constant $C$) $$ \sum_i p_i(\log{p_i}-\log{q_i})=\sum_i {\rm Tr}(\rho E_i)\log\left[\frac{{\rm Tr}(\rho E_i)}{{\rm Tr}(\sigma E_i)}\right]. $$ Blume-Kohout and Hayden showed that if Bob performs a measurement in the diagonal basis of Alice's reported state, then her expected loss is the quantum relative entropy $$ D(\rho\|\sigma)={\rm Tr}(\rho\log\rho)-{\rm Tr}(\rho\log\sigma). $$ Clearly in this example Alice is constrained to be honest since the minimum of $D(\rho\|\sigma)$ is uniquely obtained at $\sigma$. This is not true for any measurement Bob can make (take the trivial measurement for example). So, we naturally have the question: which measurements can Bob make to ensure Alice's honesty? That is, which measurement schemes are characterized by $$ \sum_i {\rm Tr}(\rho E_i)\log\left[\frac{{\rm Tr}(\rho E_i)}{{\rm Tr}(\sigma E_i)}\right]=0\Leftrightarrow \sigma=\rho? $$
Note that $\{E_i\}$ can depend on $\sigma$ (what Alice reports) but not $\rho$ (her true beliefs).
Partial answer: Projective measurement in the eigenbasis of $\sigma$ $\Rightarrow$ yes, Blume-Kohout/Hayden showed that this is the unique scheme constraining Alice to be honest for a projective measurement.
Informationally complete $\Rightarrow$ yes, clearly this constrains Alice to be honest since the measurement will uniquely specify the state (moreover, the measurement can be chosen independent of $\sigma$).
Trivial measurement $\Rightarrow$ no, Alice can say whatever she wants with impunity.
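For concreteness, the quantum relative entropy appearing above is easy to evaluate numerically. A minimal sketch (the helper names are ours; states are assumed full rank so the matrix logarithm exists):

```python
import numpy as np

def logm_hermitian(A):
    """Matrix logarithm of a full-rank Hermitian matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.log(w)) @ V.conj().T

def relative_entropy(rho, sigma):
    """Quantum relative entropy D(rho||sigma) = Tr[rho log rho] - Tr[rho log sigma]."""
    diff = logm_hermitian(rho) - logm_hermitian(sigma)
    return float(np.real(np.trace(rho @ diff)))

rho   = np.array([[0.7, 0.0], [0.0, 0.3]])
sigma = np.array([[0.6, 0.1], [0.1, 0.4]])

print(relative_entropy(rho, rho))    # 0: the minimum is attained at sigma = rho
print(relative_entropy(rho, sigma))  # strictly positive for rho != sigma
```

When $\rho$ and $\sigma$ commute, this reduces to the classical relative entropy of their eigenvalue distributions, matching the expected-loss expression for a measurement in the common eigenbasis.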
You say you cannot use a Poisson regression on the count variable deaths directly, because there might be some influence of exposure (the number of characters that year) on the number of deaths, presumably because with too many characters the film company needs to get rid of more of them (?).
But I think we can take care of that within the GLM framework. I will first discuss a Poisson regression (with log link). Let $Y_t$ be the number of deaths in year $t$, the exposure $E_t$ the total number of characters, and possibly $x_t$ other covariates (in intensive form, see Goodness of fit and which model to choose linear regression or Poisson). Then $Y_t \sim \cal{P}(\lambda_t)$ with $\lambda_t = \exp(\beta_0+1\cdot \log E_t + \beta^T x_t)$, where the log exposure $\log E_t$ is used as an offset (see Difference between offset and exposure in Poisson Regression). This is a baseline model which does not model any interaction between exposure and the number of deaths. Then, to see if there is any such interaction, extend the model to $\lambda_t = \exp(\beta_0+1\cdot \log E_t + s(\log E_t) + \beta^T x_t)$, where $s(\cdot)$ represents a smooth term, maybe a spline, or more simply a linear term. The estimate of the new term then indicates whether there is such an interaction. If there is overdispersion, all of this can be done within a negative binomial model.
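To make the role of the offset concrete, here is a minimal pure-Python sketch of the intercept-only baseline model $\lambda_t = E_t e^{\beta_0}$ (the simulated data and all names are ours, purely for illustration); with the offset coefficient fixed at 1, the MLE has the closed form $\hat\beta_0 = \log(\sum_t Y_t / \sum_t E_t)$:

```python
import math
import random

rng = random.Random(0)

def poisson_sample(lam, rng):
    """Knuth's simple Poisson sampler (adequate for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

# Simulate: in year t there are E_t characters, and deaths in that year
# follow Poisson(E_t * exp(beta0)), i.e. a constant per-character death rate.
beta0 = -2.0
exposures = [rng.randint(10, 30) for _ in range(200)]
deaths = [poisson_sample(E * math.exp(beta0), rng) for E in exposures]

# Intercept-only Poisson regression with log E_t as offset:
# the MLE is simply the log of total deaths over total exposure.
beta0_hat = math.log(sum(deaths) / sum(exposures))
print(beta0_hat)  # close to the true value -2.0
```

Adding covariates or the smooth term $s(\log E_t)$ loses the closed form, but any GLM routine that accepts an offset fits the same model by maximum likelihood.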
One could argue (since the number of deaths $Y_t$ is bounded above by the exposure) that a binomial likelihood is more natural. That would have to be used with a log link; see for instance Log binomial regression with a case-control sample.
I’d have written it as $r = 1 - \sin(\theta)$, myself, but even then it’s not much of a heart. However, that’s pretty much my biggest gripe about this episode, the penultimate in series 2 of Samuel Hansen’s one-of-a-kind mathematics podcast, Relatively Prime.
Episode 7 is subtitled “Dating in the mathematical domain” and looks at the maths involved in dating and relationships. It begins with some of the comments Sam’s dating profile received from non-mathematicians. Now, denizens of the dating world: Samuel has
many flaws and failings; picking on the fact that he’s a mathematician seems a little arbitrary and unfair, like deciding not to vote for Donald Trump because you don’t like his tie. I have this unfamiliar sensation. Could it be… surely not? It appears that I feel a little sorry for Samuel. Don’t tell him, ok?
The first segment is an interview with Andrea Silenzi of the Why Oh Why podcast, on how she took Tim Harford’s advice on how to filter out potential partners using Skype dates. While it seems like good advice, I’d have liked to hear a little more about the mathematical or economic reasoning behind it.
Second up is Sam Yagan, founder of OKCupid, who
does go into the numbers, explaining why the service is set up the way it is, and how the blog decides on which experiments to run. (I wanted to fit a ‘big dater’ joke in here, but I’m groggy with the shingles and lack the mental fortitude to make it work.) A good interview, I thought: a good insight into someone who’s used mathematical modelling to completely disrupt an entire industry.
Who’s this popping up in the third segment? Why, it’s a man who appears to have changed his name to “Matt Parker, author of
Things To Make And Do In The Fourth Dimension”. He used to write around here, you know. Anyway, in the interests of doing some actual maths, Matt is here to discuss the Optimal Stopping Problem, one possible way of deciding who to settle down with (under certain, somewhat unilateral, assumptions). Apparently there’s some recent research suggesting the $\frac{1}{e}$ rule isn’t necessarily the best approach if you don’t want to die alone.
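For the curious: the classical version is the secretary problem, where you reject the first $n/e$ candidates and then commit to the first one better than everyone seen so far, which picks the single best candidate about 37% of the time. A quick Monte Carlo sketch (all parameters ours):

```python
import math
import random

def one_over_e_rule(ranks, k):
    """Reject the first k candidates, then commit to the first one better
    than all of them; return True if that choice is the overall best."""
    best_seen = max(ranks[:k])
    for r in ranks[k:]:
        if r > best_seen:
            return r == max(ranks)
    return False  # the best was among the first k, so we lose

n, trials = 50, 20000
k = round(n / math.e)
rng = random.Random(1)
wins = sum(one_over_e_rule(rng.sample(range(n), n), k) for _ in range(trials))
print(wins / trials)  # about 0.37
```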
The last guest is John Gottman of the Gottman Institute, who has developed mathematical models for use
within relationships — for example, for predicting which behaviours and attitudes to conflict are most likely to lead to couples splitting up.
The episode ends by returning to Andrea Silenzi for a quick discussion about what a date quality prediction formula would look like. Don’t tell the Daily Mail.
This is possibly my favourite episode of the series so far; it didn’t even particularly annoy me. Samuel’s interventions are well-judged between segments, and conversational within them. An excellent topic that got me wondering about what I’d do differently or take further; and left me a little sad that next week’s is the last of the series.
But wait! There’s (potentially) more! As announced on Wrong, But Useful a few weeks ago, Samuel is running a Kickstarter to fund Season 3, which will be on a more sedate release schedule (and worth supporting for that reason alone).
More information: Listen to Relatively Prime: $f(\theta)=1-\sin(\theta)$ at relprime.com. While you’re there, catch up on Season 1. Colin was given early access to Season 2 of Relatively Prime in return for writing reviews of each episode. He also met his partner through OKCupid. Furthermore, Samuel is Aperiodipal numero uno, and most of us chipped some money into the Relatively Prime Kickstarter, too. Just so you know.
Probability Seminar
Revision as of 11:31, 18 February 2019

Spring 2019

Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title: Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911 , Li-Cheng Tsai, Columbia University
Title: When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title: Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime

Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$.
February 14, Timo Seppäläinen, UW-Madison
Title: Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH

Title: On the centered maximum of the Sine beta process

Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.
Xiaoqin Guo, UW-Madison
Title: Quantitative homogenization in a balanced random environment
Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).
Wednesday, February 27 at 1:10pm Jon Peterson, Purdue
Title: Functional Limit Laws for Recurrent Excited Random Walks
Abstract:
Excited random walks (also called cookie random walks) are models for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 7, TBA

March 14, TBA

March 21, Spring Break, No seminar

March 28, Shamgar Gurevitch, UW-Madison
Title: Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
Reduced-gradient algorithms avoid the use of penalty parameters by searching along curves that stay near the feasible set. Essentially, these methods take the second version of the nonlinear programming formulation and use the equality constraints to eliminate a subset of the variables, thereby reducing the original problem to a bound-constrained problem in the space of the remaining variables. If \(x_B\) is the vector of eliminated or basic variables, and \(x_N\) is the vector of nonbasic variables, then
\[x_B = h(x_N),\]
where the mapping \(h\) is defined implicitly by the equation
\[c[h(x_N), x_N] = 0.\]
We have assumed that the components of \(x\) have been arranged so that the basic variables come first. In practice,
\[x_B = h(x_N)\] can be recalculated using Newton's method whenever \(x_N\) changes. Each Newton iteration has the form \[x_B \leftarrow x_B - \partial_Bc(x_B, x_N)^{-1}c(x_B, x_N),\] where \(\partial_Bc\) is the Jacobian matrix of \(c\) with respect to the basic variables. The original constrained problem is now transformed into the bound-constrained problem \[\min \{ f(h(x_N), x_N) : l_N \leq x_N \leq u_N \}.\]
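As an illustration of this Newton recalculation, here is a small Python sketch on a toy constraint set (the constraint functions, starting point, and names are our own invented example): two basic variables \(x_B=(b_1,b_2)\), one nonbasic variable \(x_N\), and \(c(x_B,x_N) = (b_1+b_2-x_N,\; b_1 b_2 - 1)\).

```python
import numpy as np

def c(xB, xN):
    """Toy equality constraints c(x_B, x_N) = 0."""
    b1, b2 = xB
    return np.array([b1 + b2 - xN, b1 * b2 - 1.0])

def jac_B(xB):
    """Jacobian of c with respect to the basic variables."""
    b1, b2 = xB
    return np.array([[1.0, 1.0],
                     [b2, b1]])

# Recompute x_B = h(x_N) by Newton's method after x_N changes to 2.5:
# each iteration applies  x_B <- x_B - [d_B c]^{-1} c(x_B, x_N).
xN = 2.5
xB = np.array([2.2, 0.4])  # warm start from a previous iterate
for _ in range(20):
    xB = xB - np.linalg.solve(jac_B(xB), c(xB, xN))

print(xB)  # converges to (2.0, 0.5), which satisfies both constraints
```

In production codes the linear solve is done via the \(LU\) factors of \(\partial_B c\) rather than by forming an inverse, as discussed below.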
Algorithms for this reduced subproblem subdivide the nonbasic variables into two categories. These are the fixed variables \(x_F\), which usually include most of the variables that are at either their upper or lower bounds and that are to be held constant on the current iteration, and the superbasic variables \(x_S\), which are free to move on this iteration. The standard reduced-gradient algorithm, implemented in CONOPT, searches along the steepest-descent direction in the superbasic variables. The generalized reduced-gradient codes GRG2 and LSGRG2 use more sophisticated approaches. They either maintain a dense BFGS approximation of the Hessian of \(f\) with respect to \(x_S\) or use limited-memory conjugate gradient techniques. MINOS also uses a dense approximation to the superbasic Hessian matrix. The main difference between MINOS and the other three codes is that MINOS does not apply the reduced-gradient algorithm directly to the problem but rather uses it to solve a linearly constrained subproblem to find the next step. The overall technique is known as a
projected augmented Lagrangian algorithm.
Operations involving the inverse of \(\partial_Bc(x_B, x_N)\) are frequently required in reduced-gradient algorithms. These operations are facilitated by an \(LU\) factorization of the matrix. GRG2 performs a dense factorization, while CONOPT, MINOS, and LSGRG2 use sparse factorization techniques, making them more suitable for large-scale problems.
When some of the components of the constraint functions are linear, most algorithms aim to retain feasibility of all iterates with respect to these constraints. The optimization problem becomes easier in the sense that there is no curvature term corresponding to these constraints that must be accounted for and, because of feasibility, these constraints make no contribution to the merit function. Numerous codes, such as NPSOL, MINOS and some routines from the NAG library, are able to take advantage of linearity in the constraint set. Other codes, such as those in the IMSL, PORT 3, and PROC NLP libraries, are specifically designed for linearly constrained problems. The IMSL codes are based on a sequential quadratic programming algorithm that combines features of the EQP and IQP variants. At each iteration, this algorithm determines a set \(N_k\) of near-active indices defined by
\[N_k = \{i \in I : c_i(x_k) \geq -r_i\}, \,\]
where the tolerances \(r_i\) tend to decrease on later iterations. The step \(d_k\) is obtained by solving a quadratic programming subproblem that minimizes \(q_k(d)\) subject to linearizations of the constraints indexed by \(N_k\), where
\[q_k(d) = \nabla f(x_k)^T d + \frac{1}{2} d^TB_kd,\] and \(B_k\) is a BFGS approximation to \(\nabla^2f(x_k)\). This algorithm is designed to avoid the short steps that EQP methods sometimes produce, without taking many unnecessary constraints into account, as IQP methods do.
Since random variables simply assign values to outcomes in a sample space, and we have defined probability measures on sample spaces, we can also talk about probabilities for random variables. Specifically, we can compute the probability that a discrete random variable equals a specific value (probability mass function) and the probability that a random variable is less than or equal to a specific value (cumulative distribution function).
Probability Mass Functions (PMFs)
In the following example, we compute the probability that a discrete random variable equals a specific value.
Example \(\PageIndex{1}\)
Continuing in the context of Example 3.1.1, we compute the
probability that the random variable \(X\) equals \(1\). There are two outcomes that lead to \(X\) taking the value 1, namely \(ht\) and \(th\). So, the probability that \(X=1\) is given by the probability of the event \(\{ht, th\}\), which is \(0.5\):
$$P(X=1) = P(\{ht, th\}) = \frac{\text{# outcomes in}\ \{ht, th\}}{\text{# outcomes in}\ S} = \frac{2}{4} = 0.5$$
In Example 3.2.1, the probability that the random variable \(X\) equals 1, \(P(X=1)\), is referred to as the
probability mass function of \(X\) evaluated at 1. In other words, the specific value 1 of the random variable \(X\) is associated with the probability that \(X\) equals that value, which we found to be 0.5. The process of assigning probabilities to specific values of a discrete random variable is what the probability mass function is and the following definition formalizes this.
Definition \(\PageIndex{1}\)
The frequency function (or probability mass function (pmf)) of a discrete random variable \(X\) assigns probabilities to the possible values of the random variable. More specifically, if \(x_1, x_2, \ldots\) denote the possible values of a random variable \(X\), then the probability mass function is denoted as \(p\) and we write
$$p(x_i) = P(X=x_i) = P(\underbrace{\{s\in S\ |\ X(s) = x_i\}}_{\text{set of outcomes resulting in}\ X=x_i}).\label{pmf}$$
Note that, in Equation \ref{pmf}, \(p(x_i)\) is
shorthand for \(P(X = x_i)\), which represents the probability of the event that the random variable \(X\) equals \(x_i\).
As we can see in Definition 3.2.1, the probability mass function of a random variable \(X\) depends on the probability measure of the underlying sample space \(S\). Thus, pmf's inherit some properties from the axioms of probability (Definition 1.2.1). In fact, in order for a function to be a
valid pmf it must satisfy the following properties.
Properties of Probability Mass Functions
Let \(X\) be a discrete random variable with possible values denoted \(x_1, x_2, \ldots, x_i, \ldots\). The probability mass function of \(X\), denoted \(p\), must satisfy the following:
1. \(\displaystyle{\sum_{x_i} p(x_i)} = p(x_1) + p(x_2) + \cdots = 1\)
2. \(p(x_i) \geq 0\), for all \(x_i\)
Furthermore, if \(A\) is a subset of the possible values of \(X\), then the probability that \(X\) takes a value in \(A\) is given by
$$P(X\in A) = \sum_{x_i\in A} p(x_i).\label{3rdprop}$$
Note that the first property of pmf's stated above follows from the first axiom of probability, namely that the probability of the sample space equals \(1\): \(P(S) = 1\). The second property of pmf's follows from the second axiom of probability, which states that all probabilities are non-negative.
We now apply the formal definition of a pmf and verify the properties in a specific context.
Example \(\PageIndex{2}\)
Returning to Example 3.2.1, now using the notation of Definition 3.2.1, we found that the pmf for \(X\) at \(1\) is given by
$$p(1) = P(X=1) = P(\{ht, th\}) = 0.5.\notag$$ Similarly, we find the pmf for \(X\) at the other possible values of the random variable: \begin{align*} p(0) &= P(X=0) = P(\{tt\}) = 0.25 \\ p(2) &= P(X=2) = P(\{hh\}) = 0.25 \end{align*} Note that all the values of \(p\) are positive (second property of pmf's) and \(p(0) + p(1) + p(2) = 1\) (first property of pmf's). Also, we can demonstrate the third property of pmf's (Equation \ref{3rdprop}) by computing the probability that there is at least one heads, i.e., \(X\geq 1\), which we could represent by setting \(A = \{1,2\}\) so that we want the probability that \(X\) takes a value in \(A\):
$$P(X\geq1) = P(X\in A) = \sum_{x_i\in A}p(x_i) = p(1) + p(2) = 0.5 + 0.25 = 0.75\notag$$
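The computations in this example are easy to reproduce in a few lines of Python (a sketch with our own variable names; the outcomes are the four equally likely results of two coin flips):

```python
# Sample space for two fair coin flips; X counts the heads.
S = ["hh", "ht", "th", "tt"]
X = {s: s.count("h") for s in S}

def pmf(x):
    """p(x) = P(X = x) = (# outcomes with X = x) / (# outcomes in S)."""
    return sum(1 for s in S if X[s] == x) / len(S)

print(pmf(0), pmf(1), pmf(2))  # 0.25 0.5 0.25

# Third property: P(X in A) = sum of pmf over A; with A = {1, 2} this is P(X >= 1).
print(sum(pmf(x) for x in {1, 2}))  # 0.75
```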
We can represent probability mass functions
numerically with a table, graphically with a histogram, or analytically with a formula. The following example demonstrates the numerical and graphical representations. In the next three sections, we will see examples of pmf's defined analytically with a formula.
Example \(\PageIndex{3}\)
We represent the pmf we found in Example 3.2.2 in two ways below, numerically with a table on the left and graphically with a histogram on the right.
In the histogram in Figure 1, note that we represent probabilities as
areas of rectangles. More specifically, each rectangle in the histogram has width \(1\) and height equal to the probability of the value of the random variable \(X\) that the rectangle is centered over. For example, the leftmost rectangle in the histogram is centered at \(0\) and has height equal to \(p(0) = 0.25\), which is also the area of the rectangle since the width is equal to \(1\). In this way, histograms provides a visualization of the distribution of the probabilities assigned to the possible values of the random variable \(X\). This helps to explain where the common terminology of "probability distribution" comes from when talking about random variables.
Cumulative Distribution Functions (CDFs)
There is one more important function related to random variables that we define next. This function is again related to the probabilities of the random variable equalling specific values. It provides a shortcut for calculating many probabilities at once.
Definition \(\PageIndex{2}\)
The cumulative distribution function (cdf) of a random variable \(X\) is a function on the real numbers that is denoted as \(F\) and is given by
$$F(x) = P(X\leq x),\quad \text{for any}\ x\in\mathbb{R}. \label{cdf}$$
Before looking at an example of a cdf, we note a few things about the definition.
First of all, note that we did not specify the random variable \(X\) to be discrete. CDFs are also defined for continuous random variables (see Chapter 4) in exactly the same way.
Second, the cdf of a random variable is defined for
all real numbers, unlike the pmf of a discrete random variable, which we only define for the possible values of the random variable. Implicit in the definition of a pmf is the assumption that it equals 0 for all real numbers that are not possible values of the discrete random variable, which should make sense since the random variable will never equal that value. However, cdf's, for both discrete and continuous random variables, are defined for all real numbers. In looking more closely at Equation \ref{cdf}, we see that a cdf \(F\) considers an upper bound, \(x\in\mathbb{R}\), on the random variable \(X\), and assigns that value \(x\) to the probability that the random variable \(X\) is less than or equal to that upper bound \(x\). This type of probability is referred to as a cumulative probability, since it could be thought of as the probability accumulated by the random variable up to the specified upper bound. With this interpretation, we can represent Equation \ref{cdf} as follows:
$$F: \underbrace{\mathbb{R}}_{\text{upper bounds on RV}\ X} \longrightarrow \underbrace{\mathbb{R}}_{\text{cumulative probabilities}}\label{function}$$
In the case that \(X\) is a discrete random variable, with possible values denoted \(x_1, x_2, \ldots, x_i, \ldots\), the cdf of \(X\) can be calculated using the third property of pmf's (Equation \ref{3rdprop}), since, for a fixed \(x\in\mathbb{R}\), if we let the set \(A\) contain the possible values of \(X\) that are less than or equal to \(x\), i.e., \(A = \{x_i\ |\ x_i\leq x\}\), then the cdf of \(X\) evaluated at \(x\) is given by
$$F(x) = P(X\leq x) = P(X\in A) = \sum_{x_i\leq x} p(x_i).$$
Example \(\PageIndex{4}\)
Continuing with Examples 3.2.2 and 3.2.3, we find the cdf for \(X\). First, we find \(F(x)\) for the possible values of the random variable, \(x=0,1,2\):
\begin{align*} F(0) &= P(X\leq0) = P(X=0) = 0.25 \\ F(1) &= P(X\leq1) = P(X=0\ \text{or}\ 1) = p(0) + p(1) = 0.75 \\ F(2) &= P(X\leq2) = P(X=0\ \text{or}\ 1\ \text{or}\ 2) = p(0) + p(1) + p(2) = 1 \end{align*} Now, if \(x<0\), then the cdf \(F(x) = 0\), since the random variable \(X\) will never be negative.
If \(0<x<1\), then the cdf \(F(x) = 0.25\), since the only value of the random variable \(X\) that is less than or equal to such a value \(x\) is \(0\). For example, consider \(x=0.5\). The probability that \(X\) is less than or equal to \(0.5\) is the same as the probability that \(X=0\), since \(0\) is the only possible value of \(X\) less than \(0.5\):
$$F(0.5) = P(X\leq0.5) = P(X=0) = 0.25.\notag$$
Similarly, we have the following:
\begin{align*} F(x) &= F(1) = 0.75,\quad\text{for}\ 1<x<2 \\ F(x) &= F(2) = 1,\quad\text{for}\ x>2 \end{align*}
Exercise \(\PageIndex{1}\)
For this random variable \(X\), compute the following values of the cdf:
1. \(F(-3)\)
2. \(F(0.1)\)
3. \(F(0.9)\)
4. \(F(1.4)\)
5. \(F(2.3)\)
6. \(F(18)\)

Answer

1. \(F(-3) = P(X\leq -3) = 0\)
2. \(F(0.1) = P(X\leq 0.1) = P(X=0) = 0.25\)
3. \(F(0.9) = P(X\leq 0.9) = P(X=0) = 0.25\)
4. \(F(1.4) = P(X\leq 1.4) = \displaystyle{\sum_{x_i\leq1.4}}p(x_i) = p(0) + p(1) = 0.25 + 0.5 = 0.75\)
5. \(F(2.3) = P(X\leq 2.3) = \displaystyle{\sum_{x_i\leq2.3}}p(x_i) = p(0) + p(1) + p(2) = 0.25 + 0.5 + 0.25 = 1\)
6. \(F(18) = P(X\leq18) = P(X\leq 2) = 1\)
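A short Python sketch of this cdf (the function name is ours), computed directly from the pmf via \(F(x)=\sum_{x_i\leq x} p(x_i)\), reproduces the exercise answers:

```python
pmf = {0: 0.25, 1: 0.5, 2: 0.25}  # pmf of X from Example 3.2.2

def cdf(x):
    """F(x) = P(X <= x): accumulate the pmf over possible values x_i <= x."""
    return sum(p for xi, p in pmf.items() if xi <= x)

for x in [-3, 0.1, 0.9, 1.4, 2.3, 18]:
    print(x, cdf(x))
```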
To summarize Example 3.2.4, we write the cdf \(F\) as a piecewise function, and Figure 2 gives its graph: $$F(x) = \left\{\begin{array}{l l} 0, & \text{for}\ x<0 \\ 0.25, & \text{for}\ 0\leq x <1 \\ 0.75, & \text{for}\ 1\leq x <2 \\ 1, & \text{for}\ x\geq 2. \end{array}\right.\notag$$
Figure 2: Graph of cdf in Example 3.2.4
Note that the cdf we found in Example 3.2.4 is a "step function", since its graph resembles a series of steps. This is the case for
all discrete random variables. Additionally, the value of the cdf for a discrete random variable will always "jump" at the possible values of the random variable, and the size of the "jump" is given by the value of the pmf at that possible value of the random variable. For example, the graph in Figure 2 "jumps" from \(0.25\) to \(0.75\) at \(x=1\), so the size of the "jump" is \(0.75-0.25= 0.5\) and note that \(p(1) = P(X=1) = 0.5\). The pmf for any discrete random variable can be obtained from the cdf in this manner.
We end this section with a statement of the properties of cdf's. The reader is encouraged to verify these properties hold for the cdf derived in Example 3.2.4 and to provide an intuitive explanation (or formal explanation using the axioms of probability and the properties of pmf's) for why these properties hold for cdf's in general.
Properties of Cumulative Distribution Functions
Let \(X\) be a random variable with cdf \(F\). Then \(F\) satisfies the following:
\(F\) is non-decreasing, i.e., \(F\) may be constant on intervals, but it never decreases.
\(\displaystyle{\lim_{x\to-\infty} F(x) = 0}\) and \(\displaystyle{\lim_{x\to\infty} F(x) = 1}\)
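These properties can be checked numerically for the cdf derived in Example 3.2.4; a minimal sketch, with the limits approximated by evaluating \(F\) at very large \(|x|\):

```python
# cdf of Example 3.2.4, built from its pmf
pmf = {0: 0.25, 1: 0.5, 2: 0.25}

def F(x):
    """cdf: F(x) = P(X <= x)."""
    return sum(p for xi, p in pmf.items() if xi <= x)

# non-decreasing: F(x1) <= F(x2) whenever x1 <= x2, here on an increasing grid
xs = [x / 10 for x in range(-50, 51)]
vals = [F(x) for x in xs]
assert all(a <= b for a, b in zip(vals, vals[1:]))

# limiting values: F(x) -> 0 as x -> -infinity and F(x) -> 1 as x -> +infinity
assert F(-1e9) == 0.0 and F(1e9) == 1.0
```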
I know that this question has been submitted several times (see especially How are anyons possible?), even as a byproduct of other questions. Since I did not find any completely satisfactory answer, I submit here another version of the question, stated in a very precise form using only very elementary general assumptions of quantum physics. In particular I will not use any operator (indicated by $P$ in other versions) representing the swap of particles.
Assume we deal with a system of a pair of identical particles, each moving in $R^2$. Neglecting for the moment the fact that the particles are indistinguishable, we start from the Hilbert space $L^2(R^2)\otimes L^2(R^2)$, which is isomorphic to $L^2(R^2\times R^2)$. Now I divide the rest of my issue into several elementary steps.
(1) Every element $\psi \in L^2(R^2\times R^2)$ with $||\psi||=1$ defines a state of the system, where $|| \cdot||$ is the $L^2$ norm.
(2) Each element of the class $\{e^{i\alpha}\psi\:|\; \alpha \in R\}$ for $\psi \in L^2(R^2\times R^2)$ with $||\psi||=1$ defines the same state, and a state is such a set of vectors.
(3) Each $\psi$ as above can be seen as a complex valued function defined, up to zero (Lebesgue) measure sets, on $R^2\times R^2$.
(4) Now consider the "swapped state" defined (due to (1)) by the element $\psi' \in L^2(R^2\times R^2)$ given by the function (up to a zero measure set):
$$\psi'(x,y) := \psi(y,x)\:,\quad (x,y) \in R^2\times R^2$$
(5) The physical meaning of the state represented by $\psi'$ is that of a state obtained from $\psi$ with the roles of the two particles interchanged.
(6) As the particles are identical, the state represented by $\psi'$ must be the same as that represented by $\psi$.
(7) In view of (1) and (2) it must be: $$\psi' = e^{i a} \psi\quad \mbox{for some constant $a\in R$.}$$
Here physics stops. I will use only mathematics henceforth.
(8) In view of (3) one can equivalently re-write the identity above as
$$\psi(y,x) = e^{ia}\psi(x,y) \quad \mbox{almost everywhere for $(x,y)\in R^2\times R^2$}\quad [1]\:.$$
(9) Since $(x,y)$ in [1] is every pair of points up to a zero-measure set, I am allowed to change their names obtaining
$$\psi(x,y) = e^{ia}\psi(y,x) \quad \mbox{almost everywhere for $(x,y)\in R^2\times R^2$}\quad [2]$$
(Notice that the zero-measure set where the identity fails remains a zero-measure set under the reflection $(x,y) \mapsto (y,x)$, since it is an isometry of $R^4$ and Lebesgue measure is invariant under isometries.)
(10) Since, again, [2] holds almost everywhere for every pair $(x,y)$, I am allowed to use again [1] in the right-hand side of [2] obtaining:
$$\psi(x,y) = e^{ia}e^{ia}\psi(x,y) \quad \mbox{almost everywhere for $(x,y)\in R^2\times R^2$}\:.$$
(This certainly holds true outside the union of the zero-measure set $A$ where [1] fails and the set obtained from $A$ itself by the reflection $(x,y) \mapsto (y,x)$.)
(11) Conclusion:
$$[e^{2ia} -1] \psi(x,y)=0 \qquad\mbox{almost everywhere for $(x,y)\in R^2\times R^2$}\quad [3]$$
Since $||\psi|| \neq 0$, $\psi$ cannot vanish almost everywhere on $R^2\times R^2$. If $\psi(x_0,y_0) \neq 0$, then $[e^{2ia} -1] \psi(x_0,y_0)=0$ implies $e^{2ia} =1$ and so:
$$e^{ia} = \pm 1\:.$$
And thus, apparently, anyons are not permitted.
Where is the mistake?
ADDED REMARK. (10) is a completely mathematical result. Here is another way to obtain it. (8) can be written down as $\psi(a,b) = e^{ic} \psi(b,a)$ for some fixed $c \in R$ and all $(a,b) \in R^2 \times R^2$ (I disregard the issue of negligible sets). Choosing first $(a,b)=(x,y)$ and then $(a,b)=(y,x)$ we obtain resp. $\psi(x,y) = e^{ic} \psi(y,x)$ and $\psi(y,x) = e^{ic} \psi(x,y)$. They immediately produce [3] $\psi(x,y) = e^{i2c} \psi(x,y)$.
So the physical argument (4)-(7), namely that we have permuted the particles again and thus a further new phase may appear, does not apply here.
2nd ADDED REMARK. It is clear that as soon as one is allowed to write
$\psi(x,y) = \lambda \psi(y,x)$ for a
constant $\lambda\in U(1)$ and all $(x,y) \in R^2\times R^2$
the game is over:
$\lambda$ turns out to be $\pm 1$ and anyons are forbidden. This is just mathematics, however. My guess for a way out is that the true configuration space is not $R^2\times R^2$ but some other space of which $R^2 \times R^2$ is the universal covering.
An idea (quite rough) could be the following. One should assume that particles are indistinguishable from scratch, already in defining the configuration space, which is something like $Q := R^2\times R^2/\sim$ where $(x',y')\sim (x,y)$ iff $x'=y$ and $y'=x$. Or perhaps subtracting the set $\{(z,z)\:|\: z \in R^2\}$ from $R^2\times R^2$ before taking the quotient, to say that particles cannot stay at the same place. Assume the former case for the sake of simplicity. There is a (double?) covering map $\pi : R^2 \times R^2 \to Q$. My guess is the following. If one defines wavefunctions $\Psi$ on $R^2 \times R^2$, one automatically defines many-valued wavefunctions on $Q$. I mean $\psi:= \Psi \circ \pi^{-1}$. The problem of many values physically does not matter if the difference of the two values (assuming the covering is a double one) is just a phase, and this could be written, in view of the identification $\sim$ used to construct $Q$ out of $R^2 \times R^2$: $$\psi(x,y)= e^{ia}\psi(y,x)\:.$$ Notice that the identity cannot be interpreted literally because $(x,y)$ and $(y,x)$ are the same point in $Q$, so my trick for proving $e^{ia}=\pm 1$ cannot be implemented. The situation is similar to that of QM on $S^1$ inducing many-valued wavefunctions from its universal covering $R$. In that case one writes $\psi(\theta)= e^{ia}\psi(\theta + 2\pi)$.
3rd ADDED REMARK I think I solved the problem I posted, focusing on the model of a couple of anyons discussed on p. 225 of this paper matwbn.icm.edu.pl/ksiazki/bcp/bcp42/bcp42116.pdf suggested by Trimok. The model is simply this one: $$\psi(x,y):= e^{i\alpha \theta(x,y)} \varphi(x,y)$$ where $\alpha \in R$ is a constant, $\varphi(x,y)= \varphi(y,x)$, $(x,y) \in R^2 \times R^2$ and $\theta(x,y)$ is the angle with respect to some fixed axis of the segment $xy$. One can pass to coordinates $(X,r)$, where $X$ describes the center of mass and $r:= y-x$. Swapping the particles means $r\to -r$. Without paying attention to mathematical details, one sees that, in fact: $$\psi(X,-r)= e^{i \alpha \pi} \psi(X,r)\quad \mbox{i.e.,}\quad \psi(x,y)= e^{i \alpha \pi} \psi(y,x)\quad (A)$$ for an anticlockwise rotation. (For clockwise rotations a sign $-$ appears in the phase, describing the other element of the braid group $Z_2$. Also notice that, for $\alpha \pi \neq 0, 2\pi$, the function vanishes for $r=0$, namely $x=y$, and this corresponds to the fact that we removed the set $C$ of coincidence points $x=y$ from the space of configurations.)
However a closer scrutiny shows that the situation is more complicated: the angle $\theta(r)$ is not well defined without fixing a reference axis where $\theta =0$. Afterwards one may assume, for instance, $\theta \in (0,2\pi)$, otherwise $\psi$ must be considered multi-valued. With the choice $\theta(r) \in (0,2\pi)$, (A) does not hold everywhere. Consider an anticlockwise rotation of $r$. If $\theta(r) \in (0,\pi)$ then (A) holds in the form $$\psi(X,-r)= e^{+ i \alpha \pi} \psi(X,r)\quad \mbox{i.e.,}\quad \psi(x,y)= e^{+ i \alpha \pi} \psi(y,x)\quad (A1)$$ but for $\theta(r) \in (\pi, 2\pi)$, and always for an anticlockwise rotation, one finds $$\psi(X,-r)= e^{-i \alpha \pi} \psi(X,r)\quad \mbox{i.e.,}\quad \psi(x,y)= e^{- i \alpha \pi} \psi(y,x)\quad (A2)\:.$$ Different results arise with different conventions. In any case it is evident that the phase due to the swap process is a function of $(x,y)$ (even if locally constant) and not a constant. This invalidates my "no-go proof", but also proves that the notion of anyon statistics is deeply different from the standard one based on the groups of permutations, where the phase due to the swap of particles is constant in $(x,y)$. As a consequence the swapped state is different from the initial one, differently from what happens for bosons or fermions and against the idea that anyons are indistinguishable particles. [Notice also that, in the considered model, swapping the initial pair of bosons means $\varphi(x,y) \to \varphi(y,x)= \varphi(x,y)$, that is $\psi(x,y)\to \psi(x,y)$. That is, swapping anyons does not mean swapping the associated bosons, and this is correct, as it is another physical operation on different physical subjects.]
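Under the branch convention $\theta \in (0,2\pi)$, the branch-dependent phase in (A1)/(A2) can be checked numerically. A minimal sketch, where the value $\alpha = 0.3$ and the constant symmetric factor $\varphi \equiv 1$ are arbitrary choices for illustration:

```python
import cmath
import math

alpha = 0.3  # hypothetical anyon parameter, any non-integer value works

def theta(r):
    """Angle of the relative vector r = y - x, fixed to the branch [0, 2*pi)."""
    return math.atan2(r[1], r[0]) % (2 * math.pi)

def psi(X, r, phi=1.0):
    """Anyon wavefunction psi = exp(i*alpha*theta(r)) * phi, with phi symmetric."""
    return cmath.exp(1j * alpha * theta(r)) * phi

# theta(r) in (0, pi): the swap r -> -r picks up the phase exp(+i*alpha*pi)  (A1)
r1 = (1.0, 1.0)                                   # theta = pi/4
ratio1 = psi(0, (-r1[0], -r1[1])) / psi(0, r1)
assert abs(ratio1 - cmath.exp(1j * alpha * math.pi)) < 1e-12

# theta(r) in (pi, 2*pi): the same swap picks up exp(-i*alpha*pi) instead    (A2)
r2 = (1.0, -1.0)                                  # theta = 7*pi/4
ratio2 = psi(0, (-r2[0], -r2[1])) / psi(0, r2)
assert abs(ratio2 - cmath.exp(-1j * alpha * math.pi)) < 1e-12
```

The two assertions make the point of the remark concrete: the swap phase depends on which half-plane $r$ lies in, so it is a locally constant function of $(x,y)$ rather than a single constant.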
Alternatively one may think of the anyon wavefunction $\psi(x,y)$ as a
multi-valued one, again differently from what I assumed in my "no-go proof" and differently from the standard assumptions in QM. This produces a truly constant phase in (A). However, it is not clear to me if, with this interpretation, the swapped state of anyons is the same as the initial one, since I never seriously considered things like (if any) Hilbert spaces of multi-valued functions and I do not understand what happens to the ray representation of states. This picture is physically convenient, however, since it leads to a tenable interpretation of (A) and the action of the braid group turns out to be explicit and natural.
Actually a last possibility appears. One could deal with (standard complex valued) wavefunctions defined on $(R^2 \times R^2 - C)/\sim$ as we know (see above, $C$ is the set of pairs $(x,y)$ with $x=y$) and we define the swap operation in terms of phases only (so that my "no-go proof" cannot be applied and the transformations do not change the states):
$$\psi([(x,y)]) \to e^{g i\alpha \pi}\psi([(x,y)])$$
where $g \in Z_2$. This can be extended to many particles passing to the braid group of many particles. Maybe it is convenient mathematically but is not very physically expressive.
In the model discussed in the paper I mentioned, it is however evident that, up to a unitary transformation, the Hilbert space of the theory is nothing but a standard bosonic Hilbert space, since the considered wavefunctions are obtained from those of that space by means of a unitary map associated with a singular gauge transformation, and just that singularity gives rise to all the interesting structure! However, in the initial bosonic system the singularity was pre-existent: the magnetic field was a sum of Dirac deltas. I do not know if it makes sense to think of anyons independently from their dynamics. And I do not know if this result is general. I guess that moving the singularity from the statistics to the interaction and vice versa is just what happens in the path integral formulation when moving the external phase to the internal action; see Tengen's answer.
This post imported from StackExchange Physics at 2014-04-11 15:20 (UCT), posted by SE-user V. Moretti
@HarryGindi So the $n$-simplices of $N(D^{op})$ are $Hom_{sCat}(\mathfrak{C}[n],D^{op})$. Are you using the fact that the whole simplicial set is the mapping simplicial object between cosimplicial simplicial categories, and taking the constant cosimplicial simplicial category in the right coordinate?
I guess I'm just very confused about how you're saying anything about the entire simplicial set if you're not producing it, in one go, as the mapping space between two cosimplicial objects. But whatever, I dunno. I'm having a very bad day with this junk lol.
It just seems like this argument is all about the sets of n-simplices. Which is the trivial part.
lol no i mean, i'm following it by context actually
so for the record i really do think that the simplicial set you're getting can be written as coming from the simplicial enrichment on cosimplicial objects, where you take a constant cosimplicial simplicial category on one side
@user1732 haha thanks! we had no idea if that'd actually find its way to the internet...
@JonathanBeardsley any quillen equivalence determines an adjoint equivalence of quasicategories. (and any equivalence can be upgraded to an adjoint (equivalence)). i'm not sure what you mean by "Quillen equivalences induce equivalences after (co)fibrant replacement" though, i feel like that statement is mixing category-levels
@JonathanBeardsley if nothing else, this follows from the fact that \frakC is a left quillen equivalence so creates weak equivalences among cofibrant objects (and all objects are cofibrant, in particular quasicategories are). i guess also you need to know the fact (proved in HTT) that the three definitions of "hom-sset" introduced in chapter 1 are all weakly equivalent to the one you get via \frakC
@IlaRossi i would imagine that this is in goerss--jardine? ultimately, this is just coming from the fact that homotopy groups are defined to be maps in (from spheres), and you only are "supposed" to map into things that are fibrant -- which in this case means kan complexes
@JonathanBeardsley earlier than this, i'm pretty sure it was proved by dwyer--kan in one of their papers around '80 and '81
@HarryGindi i don't know if i would say that "most" relative categories are fibrant. it was proved by lennart meier that model categories are Barwick--Kan fibrant (iirc without any further adjectives necessary)
@JonathanBeardsley what?! i really liked that picture! i wonder why they removed it
@HarryGindi i don't know about general PDEs, but certainly D-modules are relevant in the homotopical world
@HarryGindi oh interesting, thomason-fibrancy of W is a necessary condition for BK-fibrancy of (R,W)?
i also find the thomason model structure mysterious. i set up a less mysterious (and pretty straightforward) analog for $\infty$-categories in the appendix here: arxiv.org/pdf/1510.03525.pdf
as for the grothendieck construction computing hocolims, i think the more fundamental thing is that the grothendieck construction itself is a lax colimit. combining this with the fact that ($\infty$-)groupoid completion is a left adjoint, you immediately get that $|Gr(F)|$ is the colimit of $B \xrightarrow{F} Cat \xrightarrow{|-|} Spaces$
@JonathanBeardsley If you want to go that route, I guess you still have to prove that ^op_s and ^op_Delta both lie in the unique nonidentity component of Aut(N(Qcat)) and Aut(N(sCat)) whatever nerve you mean in this particular case (the B-K relative nerve has the advantage here bc sCat is not a simplicial model cat)
I think the direct proof has a lot of advantages here, since it gives a point-set on-the-nose isomorphism
Yeah, definitely, but I'd like to stay and work with Cisinski on the Ph.D if possible, but I'm trying to keep options open
not put all my eggs in one basket, as it were
I mean, I'm open to coming back to the US too, but I don't have any ideas for advisors here who are interested in higher straightening/higher Yoneda, which I am convinced is the big open problem for infinity, n-cats
Gaitsgory and Rozenblyum, I guess, but I think they're more interested in applications of those ideas vs actually getting a hold of them in full generality
@JonathanBeardsley Don't sweat it. As it was mentioned I have now mod superpowers, so s/he can do very little to upset me. Since you're the room owner, let me know if I can be of any assistance here with the moderation (moderators on SE have network-wide chat moderating powers, but this is not my turf, so to speak).
There are two "opposite" functors:$$ op_\Delta\colon sSet\to sSet$$and$$op_s\colon sCat\to sCat.$$The first takes a simplicial set to its opposite simplicial set by precomposing with the opposite of a functor $\Delta\to \Delta$ which is the identity on objects and takes a morphism $\langle k...
@JonathanBeardsley Yeah, I worked out a little proof sketch of the lemma on a notepad
It's enough to show everything works for generating cofaces and codegeneracies
the codegeneracies are free, the 0 and nth cofaces are free
all of those can be done treating frak{C} as a black box
the only slightly complicated thing is keeping track of the inner generated cofaces, but if you use my description of frak{C} or the one Joyal uses in the quasicategories vs simplicial categories paper, the combinatorics are completely explicit for codimension 1 face inclusions
the maps on vertices are obvious, and the maps on homs are just appropriate inclusions of cubes on the {0} face of the cube wrt the axis corresponding to the omitted inner vertex
In general, each Δ[1] factor in Hom(i,j) corresponds exactly to a vertex k with i<k<j, so omitting k gives inclusion onto the 'bottom' face wrt that axis, i.e. Δ[1]^{k-i-1} x {0} x Δ[j-k-1] (I'd call this the top, but I seem to draw my cubical diagrams in the reversed orientation).
> Thus, using appropriate tags one can increase ones chances that users competent to answer the question, or just interested in it, will notice the question in the first place. Conversely, using only very specialized tags (which likely almost nobody specifically favorited, subscribed to, etc) or worse just newly created tags, one might miss a chance to give visibility to ones question.
I am not sure to which extent this effect is noticeable on smaller sites (such as MathOverflow) but probably it's good to follow the recommendations given in the FAQ. (And MO is likely to grow a bit more in the future, so then it can become more important.) And also some smaller tags have enough followers.
You are asking posts far away from areas I am familiar with, so I am not really sure which top-level tags would be a good fit for your questions - otherwise I would edit/retag the posts myself. (Other than possibility to ping you somewhere in chat, the reason why I posted this in this room is that users of this room are likely more familiar with the topics you're interested in and probably they would be able to suggest suitable tags.)
I just wanted to mention this, in case it helps you when asking question here. (Although it seems that you're doing fine.)
@MartinSleziak even I was not sure what other tags are appropriate to add.. I will see other questions similar to this, see what tags they have added and will add if I get to see any relevant tags.. thanks for your suggestion.. it is very reasonable,.
You don't need to put only one tag, you can put up to five. In general it is recommended to put a very general tag (usually an "arxiv" tag) to indicate broadly which sector of math your question is in, and then more specific tags
I would say that the topics of the US Talbot, as with the European Talbot, are heavily influenced by the organizers. If you look at who the organizers were/are for the US Talbot I think you will find many homotopy theorists among them.
Probability Seminar
Spring 2019
Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title:
Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911 , Li-Cheng Tsai, Columbia University
Title:
When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title:
Fluctuations of the KPZ equation in d\geq 2 in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in d\geq 2.
February 14, Timo Seppäläinen, UW-Madison
Title:
Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH
Title:
On the centered maximum of the Sine beta process
Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.
Probability related talk in PDE Geometric Analysis seminar: Monday, 3:30pm to 4:30pm, Van Vleck 901, Xiaoqin Guo, UW-Madison
Title: Quantitative homogenization in a balanced random environment
Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).
Wednesday, February 27 at 1:10pm Jon Peterson, Purdue
Title:
Functional Limit Laws for Recurrent Excited Random Walks
Abstract:
Excited random walks (also called cookie random walks) are models for self-interacting random motion where the transition probabilities depend on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 21, Spring Break, No seminar
March 28, Shamgar Gurevitch, UW-Madison
Title:
Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the
character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant
rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
April 4, Philip Matchett Wood, UW-Madison
Title:
Outliers in the spectrum for products of independent random matrices
Abstract: For fixed positive integers m, we consider the product of m independent n by n random matrices with iid entries in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke.
April 11, Eviatar Procaccia, Texas A&M
Title: Stabilization of Diffusion Limited Aggregation in a Wedge.
Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows one to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems.
April 18, Andrea Agazzi, Duke
Title:
Large Deviations Theory for Chemical Reaction Networks
Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework to have a straightforward application of large deviation theory. This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz, nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence these results also allow for the estimation of transition times between metastable states of this class of processes.
April 25, Kavita Ramanan, Brown
Title:
Beyond Mean-Field Limits: Local Dynamics on Sparse Graphs
Abstract: Many applications can be modeled as a large system of homogeneous interacting particle systems on a graph in which the infinitesimal evolution of each particle depends on its own state and the empirical distribution of the states of neighboring particles. When the graph is a clique, it is well known that the dynamics of a typical particle converges in the limit, as the number of vertices goes to infinity, to a nonlinear Markov process, often referred to as the McKean-Vlasov or mean-field limit. In this talk, we focus on the complementary case of scaling limits of dynamics on certain sequences of sparse graphs, including regular trees and sparse Erdos-Renyi graphs, and obtain a novel characterization of the dynamics of the neighborhood of a typical particle. This is based on various joint works with Ankan Ganguly, Dan Lacker and Ruoyu Wu.
Friday, April 26, Colloquium, Van Vleck 911 from 4pm to 5pm, Kavita Ramanan, Brown
Title:
Tales of Random Projections
Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research. Classical theorems in probability theory such as the central limit theorem and Cramer’s theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems. |
== Probability related talk in PDE Geometric Analysis seminar: Monday, 3:30pm to 4:30pm, Van Vleck 901, Xiaoqin Guo, UW-Madison ==
Revision as of 11:32, 18 February 2019
Spring 2019
Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title:
Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911 , Li-Cheng Tsai, Columbia University
Title:
When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title:
Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$.
February 14, Timo Seppäläinen, UW-Madison
Title:
Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH
Title:
On the centered maximum of the Sine beta process
Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.
Title: Quantitative homogenization in a balanced random environment
Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).
Wednesday, February 27 at 1:10pm Jon Peterson, Purdue
Title:
Functional Limit Laws for Recurrent Excited Random Walks
Abstract:
Excited random walks (also called cookie random walks) are model for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 7, TBA
March 14, TBA
March 21, Spring Break, No seminar
March 28, Shamgar Gurevitch, UW-Madison
Title:
Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the
character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant
rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M). |
Oscillation criteria for difference equations with non-monotone arguments
Advances in Difference Equations, volume 2017, Article number: 62 (2017). Research, Open Access.
Abstract
This paper is concerned with the oscillatory behavior of the first-order retarded [advanced] difference equation of the form
$$\Delta x(n)+p(n)x\bigl(\tau (n)\bigr)=0,\quad n\in \mathbb{N}_{0} \qquad \bigl[\nabla x(n)-q(n)x\bigl(\sigma (n)\bigr)=0,\quad n\in \mathbb{N}\bigr],$$
where \((p(n))_{n\geq 0}\) \([(q(n))_{n\geq 1}]\) is a sequence of nonnegative real numbers and \(\tau (n)\) \([\sigma (n)]\) is a non-monotone sequence of integers such that \(\tau (n)\leq n-1\) for \(n\in \mathbb{N}_{0}\) and \(\lim_{n\rightarrow \infty }\tau (n)=\infty \) \([\sigma (n)\geq n+1,\mbox{ for }n\in \mathbb{N}]\). Sufficient conditions, involving limsup, which guarantee the oscillation of all solutions are established. These conditions improve all previous well-known results in the literature. Examples, produced with MATLAB algorithms, illustrate the significance of the results.
Introduction
The paper deals with the difference equation with a single variable retarded argument of the form
$$\Delta x(n)+p(n)x\bigl(\tau (n)\bigr)=0,\quad n\in \mathbb{N}_{0}, \tag{E}$$
and the (dual) difference equation with a single variable advanced argument of the form
$$\nabla x(n)-q(n)x\bigl(\sigma (n)\bigr)=0,\quad n\in \mathbb{N}, \tag{E'}$$
where \(\mathbb{N}_{0}\) and \(\mathbb{N}\) are the sets of nonnegative integers and positive integers, respectively.
Equations (E) and (E′) are studied under the following assumptions: everywhere \((p(n))_{n\geq 0}\) and \((q(n))_{n\geq 1}\) are sequences of nonnegative real numbers, \((\tau (n))_{n\geq 0}\) is a sequence of integers such that
$$\tau (n)\leq n-1 \ \text{ for } n\in \mathbb{N}_{0} \quad \text{and}\quad \lim_{n\rightarrow \infty }\tau (n)=\infty, \tag{1.1}$$
and \((\sigma (n))_{n\geq 1}\) is a sequence of integers such that
$$\sigma (n)\geq n+1 \ \text{ for } n\in \mathbb{N}. \tag{1.1'}$$
Here, Δ denotes the forward difference operator \(\Delta x(n)=x(n+1)-x(n)\) and ∇ corresponds to the backward difference operator \(\nabla x(n)=x(n)-x(n-1)\).
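These two operators can be sketched directly; the sample sequence \(x(n)=n^{2}\) below is illustrative only.

```python
# The forward and backward difference operators defined in the text,
# applied to the illustrative sequence x(n) = n**2.
def forward_difference(x, n):
    """Delta x(n) = x(n+1) - x(n)."""
    return x(n + 1) - x(n)

def backward_difference(x, n):
    """Nabla x(n) = x(n) - x(n-1)."""
    return x(n) - x(n - 1)

x = lambda n: n ** 2
print(forward_difference(x, 3))   # x(4) - x(3) = 16 - 9 = 7
print(backward_difference(x, 3))  # x(3) - x(2) = 9 - 4 = 5
```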
Set
$$w=-\min_{n\geq 0}\tau (n).$$
Clearly,
w is a finite positive integer if (1.1) holds.
By a
solution of (E), we mean a sequence of real numbers \((x(n))_{n\geq -w}\) which satisfies (E) for all \(n\geq 0\). It is clear that, for each choice of real numbers \(c_{-w},c_{-w+1},\ldots, c_{-1}, c_{0}\), there exists a unique solution \((x(n))_{n\geq -w}\) of (E) which satisfies the initial conditions \(x(-w)=c_{-w}, x(-w+1)=c_{-w+1},\ldots,x(-1)=c_{-1}, x(0)=c_{0}\).
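As a sketch of this uniqueness-by-recursion argument (assuming (E) has the retarded form \(\Delta x(n)+p(n)x(\tau (n))=0\) appearing in the inequality of Theorem 3, and with illustrative \(p\), \(\tau\), and initial data), the solution can be computed forward step by step:

```python
# Sketch: the unique solution of (E) follows from the initial data, since
# Delta x(n) + p(n) x(tau(n)) = 0  gives  x(n+1) = x(n) - p(n) * x(tau(n)).
def solve_retarded(p, tau, init, n_max):
    """init maps -w..0 to the initial values c_{-w}, ..., c_0."""
    x = dict(init)
    for n in range(0, n_max):
        x[n + 1] = x[n] - p(n) * x[tau(n)]
    return x

# Illustrative data: constant p(n) = 0.5 and one-step delay tau(n) = n - 1.
x = solve_retarded(p=lambda n: 0.5, tau=lambda n: n - 1,
                   init={-1: 1.0, 0: 1.0}, n_max=5)
print(x[1], x[2])  # 0.5 0.0
```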
By a solution of (E′), we mean a sequence of real numbers \(( x(n) ) _{n\geq 0}\) which satisfies (E′) for all \(n\geq 1\).
A solution \((x(n))_{n\geq -w}\) (or \(( x(n) ) _{n\geq 0}\)) of (E) (or (E′)) is called
oscillatory, if the terms \(x(n)\) of the sequence are neither eventually positive nor eventually negative. Otherwise, the solution is said to be nonoscillatory. An equation is oscillatory if all its solutions oscillate.
In the last few decades, the oscillatory behavior and the existence of positive solutions of difference equations with deviating arguments have been extensively studied; see, for example, papers [1–19] and the references cited therein. Most of these papers concern the special case where the arguments are nondecreasing, while a small number deal with the general case where the arguments are non-monotone; see, for example, [1–3, 11, 15] and the references cited therein. Apart from its pure mathematical interest, the consideration of non-monotone arguments better approximates the natural phenomena described by equations of type (E) or (E′): there are always natural disturbances (e.g. noise in communication systems) that affect all the parameters of the equation, so arguments that are monotone in a fair mathematical idealization almost always become non-monotone.
Retarded difference equations
where \(h(n)=\max_{0\leq s\leq n}\tau (s)\), or
then all solutions of (E) oscillate.
does not exist. How to fill this gap is an interesting problem which has been investigated by several authors. For example, in 2009, Chatzarakis, Philos and Stavroulakis [6] proved that if
where \(a=\liminf_{n\rightarrow \infty }\sum_{j=\tau (n)}^{n-1}p(j)\), then all solutions of (E) oscillate.
In 2011, Braverman and Karpuz [3] proved that if
In 2015, Braverman, Chatzarakis, and Stavroulakis [2] proved that if for some \(r\in {\mathbb{N}}\)
or
where
then all solutions of (E) oscillate.
Recently, Asteris and Chatzarakis [1] proved that if for some \(\ell \in\mathbb{N}\)
where
with \(p_{0}(n)=p(n)\), then all solutions of (E) oscillate.
Advanced difference equations
In 2012, Chatzarakis and Stavroulakis [7] proved that, if
or
where \(\rho (n)=\min_{s\geq n}\sigma (s)\) and \(b=\liminf_{n\rightarrow \infty }\sum_{i=n+1}^{\sigma (n)}q ( i ) \), then all solutions of (E′) oscillate.
In 2015, Braverman, Chatzarakis, and Stavroulakis [2] proved that if for some \(r\in {\mathbb{N}}\)
or
where
and \(b=\liminf_{n\rightarrow \infty }\sum_{i=n+1}^{\sigma (n)}q ( i ) \), then all solutions of (E′) oscillate.
Recently, Asteris and Chatzarakis [1] proved that if for some \(\ell \in \mathbb{N} \)
where
with \(q_{0}(n)=q(n)\), then all solutions of (E′) oscillate.
In this paper we study further (E) and (E′) and derive new sufficient oscillation conditions. Examples illustrate cases when the results of the present paper imply oscillation while previously known results fail.
Main results Retarded difference equations
We study further (E) and derive a new sufficient oscillation condition, involving limsup, which essentially improves all the previous results.
Let
$$h(n)=\max_{0\leq s\leq n}\tau (s). \tag{2.1}$$
Clearly, the sequence \(h(n)\) is nondecreasing and \(\tau (n)\leq h(n)\leq n-1\) for all \(n\geq 0\).
The proof of our main result is essentially based on the following lemmas.
Lemma 1
Assume that (1.1) holds and
Then we have
where \(h(n)\) is defined by (2.1).
Proof
Since \(h(n)\) is nondecreasing and \(\tau (n)\leq h(n)\leq n-1\) for all \(n\geq 0\), we have
Therefore
If (2.2) does not hold, then there exist \(a^{\prime }>0\) and a subsequence \(( \theta (n) ) \) such that \(\theta (n)\rightarrow \infty \) as \(n\rightarrow \infty \) and
But \(h(\theta (n))=\max_{0\leq s\leq \theta (n)}\tau (s)\), hence there exists \(\theta ^{\prime }(n)\leq \theta (n)\), \(\theta ^{\prime }(n)\in \mathbb{N}_{0}\) such that \(h(\theta (n))=\tau (\theta ^{\prime }(n))\), and consequently
It follows that \(( \sum_{j=\tau (\theta ^{\prime }(n))}^{\theta ^{\prime }(n)-1}p(j) ) _{n=1}^{\infty }\) is a bounded sequence having a convergent subsequence, say
which implies that
This contradicts (2.2).
The proof of the lemma is complete. □
Lemma 2
[6], Lemma 2.1
and \(x(n)\) is an eventually positive solution of (E). Then
Theorem 1
If for some \(\ell \in \mathbb{N} \)
where \(p_{\ell }(n)\) is defined by (1.11), then all solutions of (E) oscillate.
Proof
Assume, for the sake of contradiction, that \(( x(n) ) _{n\geq -w}\) is a nonoscillatory solution of (E). Then it is either eventually positive or eventually negative. As \(( -x(n) ) _{n\geq -w}\) is also a solution of (E), we may restrict ourselves only to the case where \(x(n)>0\) for all large
n. Let \(n_{1}\geq -w\) be an integer such that \(x(n)>0\) for all \(n\geq n_{1}\). Then, there exists \(n_{2}\geq n_{1}\) such that \(x(\tau (n))>0\), \(\forall n\geq n_{2}\). In view of this, equation (E) becomes
which means that the sequence \((x(n))\) is eventually decreasing.
Therefore, since \(\tau (n)< n\), (E) implies
Applying the discrete Grönwall inequality, we obtain
Summing up (E) from \(\tau (n)\) to \(n-1\), we have
Multiplying the last inequality by \(p(n)\), we get
which, in view of (E), becomes
Since \(h(n)< n\), the last inequality gives
or
Therefore
where
Repeating the above argument leads to a new estimate,
where
Continuing by induction, for sufficiently large
n we get
where
Clearly, by the Grönwall inequality, we have
Summing up (E) from \(h(n)\) to
n, we have
or
Therefore
which contradicts (2.5). The proof of the theorem is complete. □
Example 1
Consider the retarded difference equation
with (see Figure 1(a))
Observe that the function \(F: \mathbb{N} _{0}\rightarrow \mathbb{R} _{+}\) defined as
attains its maximum at \(n=5\mu +4\), \(\mu \in \mathbb{N} _{0}\), for every \(\ell \in \mathbb{N} \). Specifically, by using an algorithm of Matlab software, we obtain
Thus
Since
we have
Observe, however, that
Notation
It is worth noting that the improvement of condition (2.5) over the corresponding condition (1.4) is significant, approximately 38.45%, if we compare the values on the left-hand side of these conditions. Also, the improvement compared to condition (1.5) (or (1.6) or (1.7) or (1.8)) is very satisfactory, around 17.8%. Also, observe that conditions (1.7), (1.8), and (1.10) do not lead to oscillation for the first iteration. On the contrary, condition (2.5) is satisfied from the first iteration. This means that our condition is better and much faster than (1.7), (1.8), and (1.10).
Advanced difference equations
A similar oscillation theorem for the (dual) advanced difference equation (E′) can be derived easily. The proof of this theorem is omitted, since it is quite similar to the proof for a retarded equation.
Let
$$\rho (n)=\min_{s\geq n}\sigma (s). \tag{2.13}$$
Clearly, the sequence \(\rho (n)\) is nondecreasing and \(\sigma (n)\geq \rho (n)\geq n+1\) for all \(n\geq 1\).
Theorem 2
Assume that (1.1′) holds and \(\rho (n)\) is defined by (2.13). If for some \(\ell \in \mathbb{N} \)
where \(q_{\ell }(n)\) is defined by (1.18) and \(0< b=\liminf_{n\rightarrow \infty }\sum_{i=n+1}^{\sigma (n)}q ( i ) \leq 1/e\), then all solutions of (E′) oscillate.
Example 2
Consider the advanced difference equation
with (see Figure 3(a))
Observe that the function \(F:\mathbb{N}_{0}\rightarrow \mathbb{R}_{+}\) defined as
attains its maximum at \(n=5\mu +1\), \(\mu \in \mathbb{N} _{0}\), for every \(\ell \in\mathbb{N}\). Specifically, by using an algorithm of Matlab software, we obtain
Thus
Since
we have
Observe, however, that
Notation
It is worth noting that the improvement of condition (2.14) over the corresponding condition (1.12) is significant, approximately 93%, if we compare the values on the left-hand side of these conditions. Also, the improvement compared to condition (1.14) (or (1.15)) is very satisfactory, around 46.3%. Also, observe that conditions (1.14) and (1.15) do not lead to oscillation for the first iteration. On the contrary, condition (2.14) is satisfied from the first iteration. This means that our condition is better and much faster than (1.14) and (1.15).
Deviating difference inequalities
Theorem 3
Assume that all conditions of Theorem 1 hold. Then
(i) the retarded difference inequality $$ \Delta x(n)+p(n)x\bigl(\tau (n)\bigr)\leq 0, \quad n\in \mathbb{N}_{0}, $$ has no eventually positive solutions;
(ii) the retarded difference inequality $$ \Delta x(n)+p(n)x\bigl(\tau (n)\bigr)\geq 0, \quad n\in \mathbb{N} _{0}, $$ has no eventually negative solutions.
Theorem 4
Assume that all conditions of Theorem 2 hold. Then
(i) the advanced difference inequality $$ \nabla x(n)-q(n)x\bigl(\sigma (n)\bigr)\geq 0, \quad n\in \mathbb{N}, $$ has no eventually positive solutions;
(ii) the advanced difference inequality $$ \nabla x(n)-q(n)x\bigl(\sigma (n)\bigr)\leq 0, \quad n\in \mathbb{N}, $$ has no eventually negative solutions.
References
1. Asteris, PG, Chatzarakis, GE: New oscillation tests for difference equations with non-monotone arguments (to appear)
2. Braverman, E, Chatzarakis, GE, Stavroulakis, IP: Iterative oscillation tests for difference equations with several non-monotone arguments. J. Difference Equ. Appl. 21(9), 854-874 (2015)
3. Braverman, E, Karpuz, B: On oscillation of differential and difference equations with non-monotone delays. Appl. Math. Comput. 218, 3880-3887 (2011)
4. Chatzarakis, GE, Koplatadze, R, Stavroulakis, IP: Oscillation criteria of first order linear difference equations with delay argument. Nonlinear Anal. 68, 994-1005 (2008)
5. Chatzarakis, GE, Koplatadze, R, Stavroulakis, IP: Optimal oscillation criteria for first order difference equations with delay argument. Pacific J. Math. 235, 15-33 (2008)
6. Chatzarakis, GE, Philos, ChG, Stavroulakis, IP: Oscillations of first order linear difference equations with general delay argument. Portugal. Math. 66, 513-533 (2009)
7. Chatzarakis, GE, Stavroulakis, IP: Oscillations of difference equations with general advanced argument. Cent. Eur. J. Math. 10, 807-823 (2012)
8. Chen, M-P, Yu, JS: Oscillations of delay difference equations with variable coefficients. In: Proceedings of the First International Conference on Difference Equations, pp. 105-114. Gordon and Breach, London (1994)
9. Erbe, LH, Zhang, BG: Oscillation of discrete analogues of delay equations. Differential Integral Equations 2, 300-309 (1989)
10. Györi, I, Ladas, G: Linearized oscillations for equations with piecewise constant arguments. Differential Integral Equations 2, 123-131 (1989)
11. Koplatadze, RG, Kvinikadze, G: On the oscillation of solutions of first order delay differential inequalities and equations. Georgian Math. J. 3, 675-685 (1994)
12. Ladas, G, Philos, ChG, Sficas, YG: Sharp conditions for the oscillation of delay difference equations. J. Appl. Math. Simul. 2, 101-111 (1989)
13. Ladas, G: Explicit conditions for the oscillation of difference equations. J. Math. Anal. Appl. 153, 276-287 (1990)
14. Li, X, Zhu, D: Oscillation of advanced difference equations with variable coefficients. Ann. Differ. Equ. 18, 254-263 (2002)
15. Stavroulakis, IP: Oscillation criteria for delay and difference equations with non-monotone arguments. Appl. Math. Comput. 226, 661-672 (2014)
16. Tang, XH, Yu, JS: Oscillation of delay difference equations. Comput. Math. Appl. 37, 11-20 (1999)
17. Tang, XH, Zhang, RY: New oscillation criteria for delay difference equations. Comput. Math. Appl. 42, 1319-1330 (2001)
18. Yan, W, Meng, Q, Yan, J: Oscillation criteria for difference equation of variable delays. DCDIS Proceedings 3, 641-647 (2005)
19. Zhang, BG, Tian, CJ: Nonexistence and existence of positive solutions for difference equations with unbounded delay. Comput. Math. Appl. 36, 1-8 (1998)
Acknowledgements
The first author was supported by the Special Account for Research of ASPETE through the funding program ‘Strengthening research of ASPETE faculty members’.
Additional information Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
The authors declare that they have made equal contributions to the paper. |
I have this "R" code (I don't know how to use R):
repeat { s = s+log(random); c = c+1; print(s/c);}
It should show a string of numbers. Where does it tend to?
I have tried to translate this into math, so I THINK, I should try to compute this limit:
$$ \lim_{n \to \infty} \frac{\log(x_1 x_2 \cdots x_n)}{n} $$ where $x_1, \ldots, x_n$ are the random variables.
Is this limit correct? How can I compute this limit?
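One way to probe the limit is to simulate it. Assuming `random` in the R snippet stands for i.i.d. Uniform(0, 1) draws (e.g. `runif(1)`), the printed values are running averages of $\log x_i$, which by the strong law of large numbers tend to $E[\log X] = \int_0^1 \ln x\,dx = -1$. A quick Python sketch:

```python
import math
import random

# Assumption: "random" in the R snippet draws i.i.d. Uniform(0, 1) numbers.
# By the strong law of large numbers, (1/n) * sum(log(x_i)) -> E[log X],
# and for X ~ Uniform(0, 1): E[log X] = integral_0^1 ln(x) dx = -1.
def running_average_of_logs(n, seed=0):
    rng = random.Random(seed)
    s = 0.0
    for _ in range(n):
        s += math.log(rng.random())
    return s / n

print(running_average_of_logs(200_000))  # close to -1
```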
Thank you very much!!! |
Once we have organized and summarized our sample data, the next step is to identify the underlying distribution of our random variable. Computing probabilities for continuous random variables is complicated by the fact that there are infinitely many possible values our random variable can take on, so the probability of observing any particular value is zero. Therefore, to find the probabilities associated with a continuous random variable, we use a probability density function (PDF).
A PDF is an equation used to find probabilities for continuous random variables. The PDF must satisfy the following two rules:
The area under the curve must equal one (over all possible values of the random variable). The probabilities must be equal to or greater than zero for all possible values of the random variable.
The area under the curve of the probability density function over some interval represents the probability of observing those values of the random variable in that interval.
The Normal Distribution
Many continuous random variables have a bell-shaped or somewhat symmetric distribution. This is a normal distribution. In other words, the probability distribution of its relative frequency histogram follows a normal curve. The curve is bell-shaped, symmetric about the mean, and defined by µ and σ (the mean and standard deviation).
Figure 9. A normal distribution.
There are normal curves for every combination of µ and σ. The mean (µ) shifts the curve to the left or right. The standard deviation (σ) alters the spread of the curve. The first pair of curves have different means but the same standard deviation. The second pair of curves share the same mean (µ) but have different standard deviations. The pink curve has a smaller standard deviation. It is narrower and taller, and the probability is spread over a smaller range of values. The blue curve has a larger standard deviation. The curve is flatter and the tails are thicker. The probability is spread over a larger range of values.
Figure 10. A comparison of normal curves.
Properties of the normal curve:
The mean is the center of this distribution and its highest point. The curve is symmetric about the mean. (The area to the left of the mean equals the area to the right of the mean.) The total area under the curve is equal to one. As x increases or decreases, the curve approaches zero but never touches the axis. The PDF of a normal curve is $$ y= \frac {1}{\sqrt {2\pi} \sigma} e^{\frac {-(x-\mu)^2}{2\sigma^2}}$$ A normal curve can be used to estimate probabilities. A normal curve can be used to estimate proportions of a population that have certain x-values.
The Standard Normal Distribution
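The first PDF rule (total area equal to one) can be checked numerically for the normal PDF; the trapezoidal integration below is only a sketch, and the step count and integration range are arbitrary choices.

```python
import math

# Normal PDF from the text: y = 1/(sqrt(2*pi)*sigma) * exp(-(x-mu)^2 / (2*sigma^2))
def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

# Rule 1: the area under the curve must equal one.
# Trapezoidal sum over mu +/- 8 sigma (the tails beyond are negligible).
def area_under_curve(mu, sigma, lo, hi, steps=100_000):
    h = (hi - lo) / steps
    total = 0.5 * (normal_pdf(lo, mu, sigma) + normal_pdf(hi, mu, sigma))
    total += sum(normal_pdf(lo + i * h, mu, sigma) for i in range(1, steps))
    return total * h

mu, sigma = 110, 29.7  # the deer-weight parameters used later in the text
print(round(area_under_curve(mu, sigma, mu - 8 * sigma, mu + 8 * sigma), 4))  # expect ~1.0
```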
There are millions of possible combinations of means and standard deviations for continuous random variables. Finding probabilities associated with these variables would require us to integrate the PDF over the range of values we are interested in. To avoid this, we can rely on the standard normal distribution. The standard normal distribution is a special normal distribution with µ = 0 and σ = 1.
We can use the Z-score to standardize any normal random variable, converting the x-values to Z-scores, thus allowing us to use probabilities from the standard normal table. So how do we find the area under the curve associated with a Z-score?
Standard Normal Table
The standard normal table gives probabilities associated with specific Z-scores. The table we use is cumulative from the left. The negative side is for all Z-scores less than zero (all values less than the mean). The positive side is for all Z-scores greater than zero (all values greater than the mean). Not all standard normal tables work the same way.
Example \(\PageIndex{1}\):
What is the area associated with the Z-score 1.62?
Figure 11. The standard normal table and associated area for z = 1.62. Answer
The area is 0.9474.
Reading the Standard Normal Table
Read down the Z-column to get the first part of the Z-score (1.6). Read across the top row to get the second decimal place in the Z-score (0.02). The intersection of this row and column gives the area under the curve to the left of the Z-score.
Finding Z-scores for a Given Area
What if we have an area and we want to find the Z-score associated with that area? Instead of Z-score → area, we want area → Z-score. We can use the standard normal table to find the area in the body of values and read backwards to find the associated Z-score. Using the table, search the probabilities to find an area that is closest to the probability you are interested in.
Example \(\PageIndex{2}\):
To find a Z-score for which the area to the right is 5%:
Since the table is cumulative from the left, you must use the complement of 5%.
$$1.0000 - 0.0500 = 0.9500$$
Figure 12. The upper 5% of the area under a normal curve.
Find the Z-score for the area of 0.9500. Look at the probabilities and find a value as close to 0.9500 as possible.
Figure 13. The standard normal table. Answer
The Z-score for the 95th percentile is 1.64.
Area in between Two Z-scores
Example \(\PageIndex{3}\):
To find Z-scores that limit the middle 95%:
Figure 14. The middle 95% of the area under a normal curve. Solutions
The middle 95% has 2.5% on the right and 2.5% on the left. Use the symmetry of the curve. Look at your standard normal table. Since the table is cumulative from the left, it is easier to find the area to the left first. Find the area of 0.025 on the negative side of the table. The Z-score for the area to the left is -1.96. Since the curve is symmetric, the Z-score for the area to the right is 1.96.
Common Z-scores
There are many commonly used Z-scores:
\(Z_{.05}\) = 1.645 and the area between -1.645 and 1.645 is 90%
\(Z_{.025}\) = 1.96 and the area between -1.96 and 1.96 is 95%
\(Z_{.005}\) = 2.575 and the area between -2.575 and 2.575 is 99%
Applications of the Normal Distribution
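These critical values can be recovered without a printed table: Python's standard-library `statistics.NormalDist` exposes the inverse CDF, so the common Z-scores follow from the upper-tail areas (note the table value 2.575 is a rounding of 2.576).

```python
from statistics import NormalDist

# The standard normal inverse CDF (Python 3.8+) replaces reading the
# table backwards: feed it the cumulative area to the left.
std_normal = NormalDist()  # mu = 0, sigma = 1

z_05  = std_normal.inv_cdf(1 - 0.05)    # upper 5%   -> ~1.645
z_025 = std_normal.inv_cdf(1 - 0.025)   # upper 2.5% -> ~1.960
z_005 = std_normal.inv_cdf(1 - 0.005)   # upper 0.5% -> ~2.576
print(round(z_05, 3), round(z_025, 3), round(z_005, 3))
```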
Typically, our normally distributed data do not have μ = 0 and σ = 1, but we can relate any normal distribution to the standard normal distribution using the Z-score. We can transform values of x to values of z.
$$z=\frac {x-\mu}{\sigma}$$
For example, if a normally distributed random variable has μ = 6 and σ = 2, then a value of x = 7 corresponds to a Z-score of 0.5.
$$Z=\frac{7-6}{2}=0.5$$
This tells you that 7 is one-half a standard deviation above its mean. We can use this relationship to find probabilities for any normal random variable.
Figure 15. A normal and standard normal curve.
To find the area for values of X, a normal random variable, draw a picture of the area of interest, convert the x-values to Z-scores using the Z-score formula, and then use the standard normal table to find areas to the left, to the right, or in between.
$$z=\frac {x-\mu}{\sigma}$$
Example \(\PageIndex{4}\):
Adult deer population weights are normally distributed with µ = 110 lb. and σ = 29.7 lb. As a biologist you determine that a weight less than 82 lb. is unhealthy and you want to know what proportion of your population is unhealthy.
P(x<82)
Figure 16. The area under a normal curve for P(x<82).
Convert 82 to a Z-score
$$z=\frac{82-110}{29.7} \approx -0.94$$
The
x value of 82 is 0.94 standard deviations below the mean.
Figure 17. Area under a standard normal curve for P(z<-0.94).
Go to the standard normal table (negative side) and find the area associated with a Z-score of -0.94.
This is an “area to the left” problem so you can read directly from the table to get the probability.
$$P(x<82) = 0.1736$$
Approximately 17.36% of the population of adult deer is underweight, OR one deer chosen at random will have a 17.36% chance of weighing less than 82 lb.
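The table lookup in this example can be reproduced with the closed-form standard normal CDF, \(\Phi(z)=\tfrac12\bigl(1+\operatorname{erf}(z/\sqrt{2})\bigr)\); the sketch below rounds z to two decimals, as one would when reading the table.

```python
import math

# Standard normal CDF via the error function:
# Phi(z) = (1 + erf(z / sqrt(2))) / 2
def phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma = 110.0, 29.7   # deer weights from the example
x = 82.0
z = round((x - mu) / sigma, 2)   # -0.94, rounded as when using the table
print(round(phi(z), 4))  # area to the left of z = -0.94 -> 0.1736
```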
Example \(\PageIndex{5}\):
Statistics from the Midwest Regional Climate Center indicate that Jones City, which has a large wildlife refuge, gets an average of 36.7 in. of rain each year with a standard deviation of 5.1 in. The amount of rain is normally distributed. During what percent of the years does Jones City get more than 40 in. of rain?
$$P(x > 40)$$
Figure 18. Area under a normal curve for P(x>40). Solution
$$z=\frac {40-36.7}{5.1}=0.65$$
$$ P(x>40) = (1-0.7422) = 0.2578$$
For approximately 25.78% of the years, Jones City will get more than 40 in. of rain.
Assessing Normality
If the distribution is unknown and the sample size is not greater than 30 (Central Limit Theorem), we have to assess the assumption of normality. Our primary method is the normal probability plot. This plot graphs the observed data, ranked in ascending order, against the “expected” Z-score of that rank. If the sample data were taken from a normally distributed random variable, then the plot would be approximately linear.
Examine the following probability plot. The center line is the relationship we would expect to see if the data were drawn from a perfectly normal distribution. Notice how the observed data (red dots) loosely follow this linear relationship. Minitab also computes an Anderson-Darling test to assess normality. The null hypothesis for this test is that the sample data have been drawn from a normally distributed population. A p-value greater than 0.05 supports the assumption of normality.
Figure 19. A normal probability plot generated using Minitab 16.
Compare the histogram and the normal probability plot in this next example. The histogram indicates a skewed right distribution.
Figure 20. Histogram and normal probability plot for skewed right data.
The observed data do not follow a linear pattern and the p-value for the A-D test is less than 0.005 indicating a non-normal population distribution.
Normality cannot be assumed. You must always verify this assumption. Remember, the probabilities we are finding come from the standard NORMAL table. If our data are NOT normally distributed, then these probabilities DO NOT APPLY.
Do you know if the population is normally distributed? Do you have a large enough sample size (n≥30)? Remember the Central Limit Theorem? Did you construct a normal probability plot? |
Why does reconstruction formula have the same formula as the projection of a vector onto a subspace? $$v = \sum_{i=1}^{n} {\langle v | e_i\rangle\, e_i}$$ where ${e_1, ..., e_n}$ is an orthonormal basis and v is in an inner product space$(V,\langle\,\cdot\,|\,\cdot\,\rangle)$. $V$ can be any vector space such as $\mathbb R^n$
Let $U\subseteq V$ be a subspace of a vector space $V$. To define a projection onto $U$, one needs to choose a complement $W$. Here a complement is a second subspace $W\subseteq V$ such that $V=U\oplus W$, which means $V=U+W$ and $U\cap W=\{0\}$. In this situation, every $v\in V$ has a unique decomposition $v=u+w$ with $u\in U$ and $w\in W$. Hence, we can define the projection onto $U$ along $W$ by $p(u+w)=u$.
Now we consider the case of a finite dimensional Euclidean space and orthogonal projections. For a subspace $U\subseteq V$, the orthogonal projection is the projection onto $U$ along $U^\perp$, the orthogonal complement of $U$. This works since $V=U\oplus U^\perp$. Now, to obtain the formula you are referring to, let $(u_1,\dots,u_k)$ be an orthonormal basis of $U$ and $(w_1,\dots,w_l)$ be an orthonormal basis of $U^\perp$. Together these form an orthonormal basis $(u_1,\dots,u_k,w_1,\dots,w_l)$ of the whole space $V$.
Given any vector $v\in V$ we can write it uniquely as a linear combination $$v = \sum_{i=1}^k \lambda_i u_i + \sum_{i=1}^l \mu_i w_i.$$ Furthermore, since we have an orthonormal basis, the coefficients are given by $\lambda_i = \langle v, u_i\rangle$ and $\mu_i = \langle v, w_i\rangle$. So we have the unique decomposition $$ v = \underbrace{\sum_{i=1}^k \langle v, u_i\rangle\, u_i}_{\in U} + \underbrace{\sum_{i=1}^l \langle v,w_i\rangle w_i}_{\in U^\perp}. $$ But this means that the projection of $V$ onto $U$ along $U^\perp$ is given by $$ p(v) = p\left(\underbrace{\sum_{i=1}^k \langle v, u_i\rangle\, u_i}_{\in U} + \underbrace{\sum_{i=1}^l \langle v,w_i\rangle w_i}_{\in U^\perp}\right) = \sum_{i=1}^k \langle v, u_i\rangle\, u_i. $$ |
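As a concrete sanity check of this decomposition, here is a small numerical sketch (NumPy, with a randomly chosen subspace $U\subseteq\mathbb{R}^4$; the variable names are this sketch's own):

```python
# Numeric check of the decomposition above: project a vector onto a
# 2-dimensional subspace U of R^4 spanned by an orthonormal basis.
import numpy as np

rng = np.random.default_rng(1)
# Orthonormalize two random vectors to get an orthonormal basis (columns of U).
U, _ = np.linalg.qr(rng.standard_normal((4, 2)))

v = rng.standard_normal(4)

# p(v) = sum_i <v, u_i> u_i  -- the projection formula from the answer.
p_v = sum((v @ U[:, i]) * U[:, i] for i in range(U.shape[1]))

# The residual v - p(v) lies in the orthogonal complement:
# it is orthogonal to every u_i.
residual = v - p_v
print(np.allclose(U.T @ residual, 0))  # True
```

The same check works in any dimension; only the shapes change.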
Case Study Contents
In economics, a production function relates the output of a production process to the inputs. The Cobb–Douglas (C-D) production function, a particular functional form of the production function, is used widely to represent the technological relationship between the amounts of two or more inputs, physical capital and labor, and the amount of output that can be produced by those inputs. The C-D production function is a special case of the Constant Elasticity of Substitution (CES) production function. Production functions with concave functional form such as CES functions are popular in economic modeling because they exhibit diminishing returns: the decrease in the marginal (incremental) output of a production process as the amount of a single factor of production is incrementally increased, while the amounts of all other factors of production stay constant.
Here, we estimate the Cobb-Douglas function in its most standard form of a single good with two factors, labor and capital, using Mizon's 1977 data set. In his 1977 paper, Mizon estimated a variety of specifications for production functions including the C-D and CES where they were allowed to have additive as well as multiplicative error terms. (Example 2 will cover the CES models). In his production function estimations, Mizon used U.K. data on capital, labor use, and a common output measure for 24 industries covering the years 1954, 1957 and 1960.
To estimate a vector of unknowns of the Cobb-Douglas function, we are going to minimize the Sum of Squared Errors (SSE) in a nonlinear least squares model. We also will report some standard regression statistics such as standard errors, t-statistic (also called T value), and p-value at the estimation point.
In a typical nonlinear least squares problem, we estimate a vector of unknowns $\theta$ by solving a constrained optimization problem. To be more specific, we search for estimators of $\theta$, i.e., $\hat{\theta}$, that minimize the SSE subject to some constraints:
$$\hat{\theta} = \arg\min_{\theta}\sum_{t=1}^{n}\mu_t^2,$$
$$\mbox{s.t.} \quad q_t = f_t(x_t,\theta) + \mu_t.$$
Based on the data set from Mizon (1977), we observe exogenous data $x_t = (k_t, l_t)$, where $k_t$ is capital used at time t and $l_t$ is labor employed at time t, together with dependent variable $q_t$ (quantity of output). We are interested in estimating an unknown vector of $\theta = (\theta_{i0}, \theta_{i1}, \theta_{i2})'$ in which $\theta_{i0}$ is a scale factor and $\theta_{i1}$ and $\theta_{i2}$ are output elasticity of input factors. The constraints are:
$$ {q_t} = {\theta_{i0}} {k_t^{\theta_{i1}}} {l_t^{\theta_{i2}}} + {\mu_t}.$$
In more general cases, with more than two inputs, we can denote the inputs as $V_{1}, V_{2}, \dots, V_{m}$, $m > 2$. Then the fitting constraints are:
$$ {q_t} = {\theta_{i0}} {(V_{1t})^{\theta_{i1}}} {(V_{2t})^{\theta_{i2}}} \dots {(V_{mt})^{\theta_{im}}} + {\mu_t},$$
and we need to estimate an unknown vector $\theta = (\theta_{i0}, \theta_{i1}, \theta_{i2},\dots, \theta_{im})'$ with $m + 1$ elements. Both the standard and general forms of the Cobb-Douglas production function are included in the demo of example 1.
To report standard errors at the estimated point, we need to estimate the co-variance matrix of the coefficients $\hat{V}_{\hat{\theta}}$. In standard econometrics references such as Greene (2011),
\begin{equation} \hat{V}_{\hat{\theta}} = \frac{\sum_t(q_t - f_t(x_t, \hat{\theta}))^2}{n-m}{(J^TJ)^{-1}}. \end{equation} Here $\frac{\sum_t(q_t - f_t(x_t, \hat{\theta}))^2}{n-m}$ is the estimated variance of the residuals, where $n-m$ is the degrees of freedom, with $n$ = number of observations and $m$ = number of unknowns. The Jacobian matrix $J$ is an $n \times m$ matrix with its $(t,i)^{th}$ element defined as $J_{t,i} = \frac{\partial{\mu_t(\theta)}}{\partial{\theta_i}}$, where $t=1,2,\dots, n$ and $i=1,2, \dots, m$. GAMS provides a mechanism to generate the Jacobian matrix $J$ at the solution point $\hat{\theta}$. As we can see from this nonlinear least squares example in GAMS, cd_gdx.gms (Cobb-Douglas model with gdx input), we rely on the convertd solver with options DictMap and Jacobian for generating a dictionary map from the solver to GAMS and the Jacobian matrix at the solution point. We save them individually in the data files dictmap.gdx and jacobian.gdx. Combining the information from these two files provides us with the Jacobian matrix $J$ at the solution point $\hat{\theta}$.
Once we have the estimators and the corresponding standard errors of these estimators, we can address the common question of how good the estimators are. To test whether the estimators are significantly different from zero, we can compute a t-statistic for a given null hypothesis. Following Greene (2011), let $\hat{\beta}$ be an estimator of true value $\beta$; then the t-statistic for $\beta$ is defined as
$$t_{\hat{\beta}} = \frac{\hat{\beta}-\beta_0}{s.e.(\hat{\beta})}.$$
Note that we usually set $\beta_0 = 0$ when testing the significance of a given estimator. In general, when a t-statistic is large enough (larger than the critical value for some significance level $\alpha$), we reject the null hypothesis that the estimator is not significantly different from zero. The p-value is defined as the probability of obtaining a result equal to or "more extreme" than what was actually observed, assuming that the hypothesis under consideration is true. In this case, we can take the p-value as the probability of observing the current data under the assumption that our estimator is not significant. If the p-value is less than the significance level $\alpha$, then it is "unlikely" that the null hypothesis is true. Note that a large t-statistic corresponds to a small p-value for the same statistical test.
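To make the estimation and testing steps concrete, here is an illustrative NumPy sketch on synthetic data. The data, starting values, and the Gauss-Newton/OLS-start approach are choices of this sketch, not Mizon's data or the GAMS/convertd workflow of the demo:

```python
# Fit q = theta0 * k^theta1 * l^theta2 + mu by nonlinear least squares
# and compute standard errors from s^2 (J'J)^{-1}, as in the text.
import numpy as np

rng = np.random.default_rng(0)
n = 72                                    # e.g. 24 industries x 3 years
k = rng.uniform(1.0, 10.0, n)             # capital input (synthetic)
l = rng.uniform(1.0, 10.0, n)             # labor input (synthetic)
theta_true = np.array([2.0, 0.3, 0.6])
q = theta_true[0] * k**theta_true[1] * l**theta_true[2] \
    + rng.normal(0.0, 0.05, n)

def resid(th):                            # mu_t = q_t - f_t(x_t, theta)
    return q - th[0] * k**th[1] * l**th[2]

def jac(th):                              # J[t, i] = d mu_t / d theta_i
    f = th[0] * k**th[1] * l**th[2]
    return -np.column_stack([f / th[0], f * np.log(k), f * np.log(l)])

# Start from the log-linear OLS fit, then polish with Gauss-Newton steps.
B = np.column_stack([np.ones(n), np.log(k), np.log(l)])
c0, *_ = np.linalg.lstsq(B, np.log(q), rcond=None)
th = np.array([np.exp(c0[0]), c0[1], c0[2]])
for _ in range(20):
    J, r = jac(th), resid(th)
    th -= np.linalg.lstsq(J, r, rcond=None)[0]

m = th.size                               # number of unknowns
s2 = resid(th) @ resid(th) / (n - m)      # estimated residual variance
cov = s2 * np.linalg.inv(jac(th).T @ jac(th))
se = np.sqrt(np.diag(cov))
t_stats = th / se                         # t-statistics for H0: theta_i = 0
print(np.round(th, 2))                    # close to [2.0, 0.3, 0.6]
```

The covariance line mirrors the formula above; with small noise the t-statistics are large and all three coefficients test as significant.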
Check out the demo of example 1 to experiment with a nonlinear least squares model for estimating and statistically testing the Cobb-Douglas model.
Mizon, Grayham E. 1977. Inferential Procedures in Nonlinear Models: An Application in a UK Industrial Cross Section Study of Factor Substitution and Returns to Scale. Econometrica 45(5), 1221-1242. Kalvelagen, Erwin. 2007. Least Squares Calculations with GAMS. Available for download at http://www.amsterdamoptimization.com/pdf/ols.pdf. Greene, William. 2011. Econometric Analysis, 7th ed. Prentice Hall, Upper Saddle River, NJ. |
Q1) Write down the Lagrangian of the system in terms of $y(t)$. Q2) Obtain the equation of motion. Q3) Using the Lagrange multiplier method, find the forces of constraint. 1) We have a constraint such that $$f=y-R\theta=0$$
And the lagrangian is
$$L=1/2m[\dot{y}^2+\frac{R^2\dot{\theta}^2}{2}]+mgy$$
from here I have to get rid of the $\theta$ by using constraint. Then I get
$$L=1/2m[\dot{y}^2+\frac{\dot{y}^2}{2}]+mgy$$
$$\ddot{y}=2g/3,$$ where $g$ is the gravitational acceleration.
Now the question asks for the constraint forces, so I don't use the constraint in the Lagrangian; instead I just write the multiplier equation, which is
$$\frac{\mathrm d}{\mathrm dt} \frac{\partial L}{\partial \dot{q}_j} - \frac{\partial L}{\partial q_j} = \sum_k {\lambda_k}\frac{\partial {f_k}}{\partial q_j}$$
From here I get two equations
$$m\ddot{y}-mg=\lambda$$ $$\frac{mR^2}{2}\ddot{\theta}=-\lambda R$$
But I don't know how to proceed to find the force of constraint, thanks |
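For reference, a sketch of how one might proceed from here (this completion is mine, not part of the original post): substitute the constraint in the form $\ddot y = R\ddot\theta$ into the two multiplier equations, keeping the factor $\tfrac12$ that comes from $\partial L/\partial\dot\theta = \tfrac12 mR^2\dot\theta$:

$$\frac{mR^2}{2}\ddot\theta = -\lambda R \;\Longrightarrow\; \frac{m}{2}\ddot y = -\lambda,$$
$$m\ddot y - mg = \lambda = -\frac{m}{2}\ddot y \;\Longrightarrow\; \ddot y = \frac{2g}{3}, \qquad \lambda = -\frac{mg}{3}.$$

The generalized constraint force on the $y$ coordinate is then $\lambda\,\partial f/\partial y = \lambda = -mg/3$, i.e. a tension of magnitude $mg/3$, consistent with the $\ddot y = 2g/3$ obtained earlier.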
Let \( A \) be any real square matrix (not necessarily symmetric). Prove that: $$ (x'A x)^2 \leq (x'A A'x)(x'x) $$
The key point in proving this inequality is to recognize that \( x'A A'x \) is the squared norm of the vector \( A'x \).
Proof:
If \( x=0 \), then the inequality is trivial.
Suppose \( x \neq 0 \).
\( \frac{x'A x}{x'x}
= \frac{(A'x)'x}{\| x \|^2} = (A'\frac{x}{\| x \|})'\frac{x}{\| x \|} \)
Because \( \frac{x}{\| x \|} \) is a unit vector, write \( \alpha = \left\| A'\frac{x}{\| x \|} \right\| \geq 0 \) for the norm of its image under \( A' \). Then \( \left(A'\frac{x}{\| x \|}\right)'\frac{x}{\| x \|}=\alpha \cos\beta \), where \( \beta \) is the angle between \( \frac{x}{\| x \|} \) and \( A'\frac{x}{\| x \|} \).
Now:
\( ( \frac{x'A x}{x'x} )^2 \)
\(= ( (A'\frac{x}{\| x \|})'\frac{x}{\| x \|} )^2 \)
\( =\alpha^2 \cos^2\beta \)
\( \leq \alpha^2 \)
\(= (A'\frac{x}{\| x \|})'A'\frac{x}{\| x \|} \)
\(= \frac{(A'x)'A'x}{\| x \|^2} \)
\(= \frac{x'A A'x}{x'x} \)
Finally, multiplying both sides by \( (x'x)^2 \) completes the proof. |
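A quick numerical sanity check of the inequality (a random-matrix sketch, not part of the proof):

```python
# Check (x'Ax)^2 <= (x'AA'x)(x'x) for many random A and x.
import numpy as np

rng = np.random.default_rng(42)
for _ in range(1000):
    A = rng.standard_normal((4, 4))      # arbitrary, not symmetric
    x = rng.standard_normal(4)
    lhs = (x @ A @ x) ** 2
    rhs = (x @ A @ A.T @ x) * (x @ x)
    assert lhs <= rhs + 1e-9             # tolerance for float rounding
print("ok")
```

Equality would require $A'x$ to be a scalar multiple of $x$, which almost never happens for random inputs.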
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r , \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2−b^2}\, \sin\sigma, \qquad r\cos\theta=z=R\cos\sigma$$ I am having a tough time visualising what this is?
Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$. The point $z = 0$ is: (a) a removable singularity, (b) a pole, (c) an essential singularity, (d) a non-isolated singularity. Since $\cos(1/z) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \dots = (1-y)$, where $y=\frac{1}{2z^2}+\frac{1}{4!...
I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand, it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $...
No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA...
The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why?
mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function on it
Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. Then are the following true:
(1) If $x=y$ then $x\sim y$.
(2) If $x=y$ then $y\sim x$.
(3) If $x=y$ and $y=z$ then $x\sim z$.
Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$, proving (2). (3) will follow similarly.
This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$.
I don't know whether this question is too trivial, but I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$."
That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems...
(comment on many many posts above)
In other news:
> C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999
probably the weirdest bunch of data I've ever seen, with so many 000000s and 999999s
But I think that to prove the implication for transitivity, an inference rule and the use of MP seem to be necessary. But that would mean that for logics in which MP fails we wouldn't be able to prove the result. Also, in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case because I am trying to understand whether a proof of the statement which I mentioned at the outset depends really on the equality axioms or the FOL axioms (without equality axioms).
This would allow in some cases to define an "equality like" relation for set theories for which we don't have the Axiom of Extensionality.
Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$. The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it.
@schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$.
@GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course.
Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdot\cdot\cdot+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}.$I tried it asAs $|f(z)|\leq 1$ for $|z|\leq 1$ we must have coefficient $a_{0},a_{1}\cdot\cdot\cdot a_{n}$ to be zero because by triangul...
@GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0?
Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$. |
So I'm given the Taylor series expansion of the sine function; I've been asked to prove it (done) and then to work through the following from my lecturer:
Explain why the Taylor series containing $N$ terms is: $$\sin x = \sum_{k=0}^{N-1} \frac{(-1)^k}{(2k+1)!}x^{2k+1}+r_{2N-1}(x)$$ with a remainder $r_{2N-1}(x)$ that satisfies: $$|r_{2N-1}(x)| \leqslant \frac{|x|^{2N}}{(2N)!}$$
How many terms of the series do you need to include if you want to compute $\sin x$ with an error of at most $10^{-3}$ for all $x \in [-\pi/2, \pi/2]$?
Compute $\sin(\pi/2)$ from the Taylor series. How large is the actual error?
I'm fairly certain I can do the third part, but the first and second have completely thrown me. Can anybody help, or at least point me in the right direction?
Cheers. |
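For the second part, the quoted remainder bound can be checked numerically; a minimal sketch (taking the worst case $x = \pi/2$, where $|x|$ is largest on the interval):

```python
# Find the smallest N with (pi/2)^(2N) / (2N)! <= 1e-3, i.e. the number
# of series terms guaranteeing error at most 1e-3 on [-pi/2, pi/2],
# using the remainder bound quoted above.
import math

N = 1
while (math.pi / 2) ** (2 * N) / math.factorial(2 * N) > 1e-3:
    N += 1
print(N)  # 4
```

So four terms (up to $x^7/7!$) suffice by this bound; the actual error at $x=\pi/2$ is smaller still.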
Given an arbitrary closed set A of $\mathbf{R}^{n}$, we establish the relation between the eigenvalues of the approximate differential of the spherical image map of A and the principal curvatures of A introduced by Hug-Last-Weil, thus extending a well known relation for sets of positive reach by Federer and Zaehle. Then we provide for every $ m = 1, \ldots , n-1 $ an integral representation for the support measure $ \mu_{m} $ of A with respect to the m dimensional Hausdorff measure. Moreover a notion of second fundamental form $Q_{A} $ for an arbitrary closed set A is introduced so that the finite principal curvatures of A correspond to the eigenvalues of $ Q_{A} $. We prove that the approximate differential of order 2, introduced in a previous work of the author, equals in a certain sense the absolutely continuous part of $ Q_{A} $, thus providing a natural generalization to higher order differentiability of the classical result of Calderon and Zygmund on the approximate differentiability of functions of bounded variation.
In this paper, we endow the space of continuous translation invariant valuations on convex sets generated by mixed volumes coupled with a suitable Radon measure on tuples of convex bodies with two appropriate norms. This enables us to construct a continuous extension of the convolution operator on smooth valuations to non-smooth valuations, which are in the completion of the spaces of valuations with respect to these norms. The novelty of our approach lies in the fact that our proof does not rely on the general theory of wave fronts, but on geometric inequalities deduced from optimal transport methods. We apply this result to prove a variant of Minkowski's existence theorem, and generalize a theorem of Favre-Wulcan and Lin in complex dynamics over toric varieties by studying the linear actions on the Banach spaces of valuations and by studying their corresponding eigenspaces.
We develop an inversive geometry for anisotropic quadratic spaces, in analogy with the classical inversive geometry of a Euclidean plane.
We classify the singular loci of surfaces in the 3-sphere that are the pointwise Euclidean sum or Hamiltonian product of circles. Such surfaces are the union of circles in at least two ways. As an application we classify surfaces that are covered by both great circles and little circles up to homeomorphism.
The objective of this series is to study metric geometric properties of (coarse) disjoint unions of amenable Cayley graphs. We employ the Cayley topology and observe connections between large scale structure of metric spaces and group properties of Cayley accumulation points. In this Part I, we prove that a disjoint union has property A of G. Yu if and only if all groups appearing as Cayley accumulation points in the space of marked groups are amenable. As an application, we construct two disjoint unions of finite special linear groups (and unimodular linear groups) with respect to two systems of generators that look similar such that one has property A and the other does not admit (fibred) coarse embeddings into any Banach space with non-trivial type (for instance, any uniformly convex Banach space).
We show that the Grothendieck group associated to integral polytopes in $\mathbb{R}^n$ is free-abelian by providing an explicit basis. Moreover, we identify the involution on this polytope group given by reflection about the origin as a sum of Euler characteristic type. We also compute the kernel of the norm map sending a polytope to its induced seminorm on the dual of $\mathbb{R}^n$.
We give a conjectural classification of virtually cocompactly cubulated Artin-Tits groups (i.e. having a finite index subgroup acting geometrically on a CAT(0) cube complex), which we prove for all Artin-Tits groups of spherical type, FC type or two-dimensional type. A particular case is that for $n \geq 4$, the $n$-strand braid group is not virtually cocompactly cubulated.
We have discovered a "little" gap in our proof of the sharp conjecture that in $\mathbb{R}^n$ with volume and perimeter densities $r^m$ and $r^k$, balls about the origin are uniquely isoperimetric if $0 < m \leq k - k/(n+k-1)$, that is, if they are stable (and $m > 0$). The implicit unjustified assumption is that the generating curve is convex.
The classical bi-Lipschitz and quasisymmetric Schoenflies theorems in the plane by Tukia, Beurling and Ahlfors are generalized in this paper for all planar uniform domains. Specifically, we show that if $U\subset \mathbb{R}^2$ is a uniform domain then it has the following two extension properties: (1) every bi-Lipschitz map $f:\partial U \to \mathbb{R}^2$ that can be extended homeomorphically to $\mathbb{R}^2$ can also be extended bi-Lipschitz to $\mathbb{R}^2$ and (2) if $\partial U$ is relatively connected then every quasisymmetric map $f:\partial U \to \mathbb{R}^2$ that can be extended homeomorphically to $\mathbb{R}^2$ can also be extended quasisymmetrically to $\mathbb{R}^2$. In higher dimensions, we show that if $U$ is the exterior of a uniformly disconnected set in $\mathbb{R}^n$ then every bi-Lipschitz embedding $f:\partial U \to \mathbb{R}^n$ extends to a bi-Lipschitz homeomorphism of $\mathbb{R}^n$. The same is also true for quasisymmetric embeddings under the additional assumption that $\partial U$ is relatively connected.
An N-tiling of triangle ABC by triangle T is a way of writing ABC as a union of N triangles congruent to T, overlapping only at their boundaries. The triangle T is the "tile". The tile may or may not be similar to ABC. In this paper we study the case of isosceles (but not equilateral) ABC. We study three possible forms of the tile: right-angled, or with one angle double another, or with a 120 degree angle. In the case of a right-angled tile, we give a complete characterization of the tilings, for N even, but leave open whether N can be odd. In the latter two cases we prove the ratios of the sides of the tile are rational, and give a necessary condition for the existence of an N-tiling. For the case when the tile has one angle double another, we prove N cannot be prime or twice a prime.
As shown by McMullen in 1983, the coefficients of the Ehrhart polynomial of a lattice polytope can be written as a weighted sum of facial volumes. The weights in such a local formula depend only on the outer normal cones of faces, but are far from being unique. In this paper, we develop an infinite class of such local formulas. These are based on choices of fundamental domains in sublattices and obtained by polyhedral volume computations. We hereby also give a kind of geometric interpretation for the Ehrhart coefficients. Since our construction gives us a great variety of possible local formulas, these can, for instance, be chosen to fit well with a given polyhedral symmetry group. In contrast to other constructions of local formulas, ours does not rely on triangulations of rational cones into simplicial or even unimodular ones.
A pseudo-edge graph of a convex polyhedron K is a 3-connected embedded graph in K whose vertices coincide with those of K, whose edges are distance minimizing geodesics, and whose faces are convex. We construct a convex polyhedron K in Euclidean 3-space with a pseudo-edge graph with respect to which K is not unfoldable. The proof is based on a result of Pogorelov on convex caps with prescribed curvature, and an unfoldability obstruction for almost flat convex caps due to Tarasov. Our example, which has 340 vertices, significantly simplifies an earlier construction by Tarasov, and confirms that Durer's conjecture does not hold for pseudo-edge unfoldings.
Internal diffusion-limited aggregation (IDLA) is a stochastic growth model on a graph $G$ which describes the formation of a random set of vertices growing from the origin (some fixed vertex) of $G$. Particles start at the origin and perform simple random walks; each particle moves until it lands on a site which was not previously visited by other particles. This random set of occupied sites in $G$ is called the IDLA cluster. In this paper we consider IDLA on Sierpinski gasket graphs, and show that the IDLA cluster fills balls (in the graph metric) with probability 1.
Using the authors' 2014 "constraints method," we give a short proof for a 2015 result of Dobbins on representations of a point in a polytope as the barycenter of points in a skeleton, and show that the "r-fold Whitney trick" of Mabillard and Wagner (2014/2015) implies that the Topological Tverberg Conjecture for r-fold intersections fails dramatically for all r that are not prime powers.
Sectors at centre of affine quadrics with point symmetry are investigated over arbitrary fields of characteristic different from two. As an application we demonstrate nice formulas for the area and the volume of such planar and spatial sectors, respectively, in Euclidean space. It seems that up to now there has been at most little research in this field up to very special cases.
It is well-known that a complete Riemannian manifold M which is locally isometric to a symmetric space is covered by a symmetric space. Here we prove that a discrete version of this property (called local to global rigidity) holds for a large class of vertex-transitive graphs, including Cayley graphs of torsion-free lattices in simple Lie groups, and Cayley graphs of torsion-free virtually nilpotent groups. By contrast, we exhibit various examples of Cayley graphs of finitely presented groups (e.g. SL(4,Z)) which fail to have this property, answering a question of Benjamini, Ellis, and Georgakopoulos. Answering a question of Cornulier, we also construct a continuum of pairwise non-isometric large-scale simply connected locally finite vertex-transitive graphs. This question was motivated by the fact that large-scale simply connected Cayley graphs are precisely Cayley graphs of finitely presented groups and therefore have countably many isometric classes.
We combine aspects of the notions of finite decomposition complexity and asymptotic property C into a notion that we call finite APC-decomposition complexity. Any space with finite decomposition complexity has finite APC-decomposition complexity and any space with asymptotic property C has finite APC-decomposition complexity. Moreover, finite APC-decomposition complexity implies property A for metric spaces. We also show that finite APC-decomposition complexity is preserved by direct products of groups and spaces, amalgamated products of groups, and group extensions, among other constructions.
An $N$-tiling of triangle $ABC$ by triangle $T$ (the `tile') is a way of writing $ABC$ as a union of $N$ copies of $T$ overlapping only at their boundaries. Let the tile $T$ have angles $(\alpha,\beta,\gamma)$, and sides $(a,b,c)$. This paper takes up the case when $3\alpha + 2\beta = \pi$. Then there are (as was already known) exactly five possible shapes of $ABC$: either $ABC$ is isosceles with base angles $\alpha$, $\beta$, or $\alpha+\beta$, or the angles of $ABC$ are $(2\alpha,\beta,\alpha+\beta)$, or the angles of $ABC$ are $(2\alpha, \alpha, 2\beta)$. In each of these cases, we have discovered, and here exhibit, a family of previously unknown tilings. These are tilings that, as far as we know, have never been seen before. We also discovered, in each of the cases, a Diophantine equation involving $N$ and the (necessarily rational) number $s = a/c$ that has solutions if there is a tiling using tile $T$ of some $ABC$ not similar to $T$. By means of these Diophantine equations, some conclusions about the possible values of $N$ are drawn; in particular there are no tilings possible for values of $N$ of certain forms. We prove, for example, that there is no $N$-tiling with $N$ prime when $3\alpha + 2\beta = \pi$. These equations also imply that for each $N$, there is a finite set of possibilities for the tile $(a,b,c)$ and the triangle $ABC$. (Usually, but not always, there is just one possible tile.) These equations provide necessary, and in three of the five cases sufficient, conditions for the existence of $N$-tilings.
An elastic graph is a graph with an elasticity associated to each edge. It may be viewed as a network made out of ideal rubber bands. If the rubber bands are stretched on a target space there is an elastic energy. We characterize when a homotopy class of maps from one elastic graph to another is loosening, i.e., decreases this elastic energy for all possible targets. This fits into a more general framework of energies for maps between graphs.
We prove the following conjecture of Furstenberg (1969): if $A,B\subset [0,1]$ are closed and invariant under $\times p \mod 1$ and $\times q \mod 1$, respectively, and if $\log p/\log q\notin \mathbb{Q}$, then for all real numbers $u$ and $v$, $$\dim_{\rm H}((uA+v)\cap B)\le \max\{0,\dim_{\rm H}A+\dim_{\rm H}B-1\}.$$ We obtain this result as a consequence of our study on the intersections of incommensurable self-similar sets on $\mathbb{R}$. Our methods also allow us to give upper bounds for dimensions of arbitrary slices of planar self-similar sets satisfying SSC and certain natural irreducible conditions.
Persistence diagrams are common objects in the field of Topological Data Analysis. They are topological summaries that capture both topological and geometric structure within data. Recently there has been a surge of interest in developing tools to statistically analyse populations of persistence diagrams, a process hampered by the complicated geometry of the space of persistence diagrams. In this paper we study the median of a set of diagrams, defined as the minimizer of an appropriate cost function analogous to the sum of distances used for samples of real numbers. We then characterize the local minima of this cost function and in doing so characterize the median. We also do some comparative analysis of the properties of the median and the mean.
Under certain assumptions on CAT(0) spaces, we show that the geodesic flow is topologically mixing. In particular, the Bowen-Margulis measure finiteness assumption used in recent work of Ricks is removed. We also construct examples of CAT(0) spaces which do not admit finite Bowen-Margulis measure.
Two hexagons in the space are said to intersect badly if the intersection of their convex hulls consists of at least one common vertex as well as an interior point. We are going to show that the number of hexagons on n points in 3-space without bad intersections is o(n^2), under the assumption that the hexagons are "fat".
We show that the Cheeger constant for $n$-dimensional isotropic logconcave measures is $O(n^{1/4})$, improving on the previous best bound of $O(n^{1/3}\sqrt{\log n})$. As corollaries we obtain the same improved bound on the thin-shell estimate, Poincar\'{e} constant and Lipschitz concentration constant and an alternative proof of this bound for the isotropic (slicing) constant; it also follows that the ball walk for sampling from an isotropic logconcave density in ${\bf R}^{n}$ converges in $O^{*}(n^{2.5})$ steps from a warm start. The proof is based on gradually transforming any logconcave density to one that has a significant Gaussian factor via a Martingale process. Extending this proof technique, we prove that the log-Sobolev constant of any isotropic logconcave density in ${\bf R}^{n}$ with support of diameter $D$ is $\Omega(1/D)$, resolving a question posed by Frieze and Kannan in 1997. This is asymptotically the best possible estimate and improves on the previous bound of $\Omega(1/D^{2})$ by Kannan-Lov\'{a}sz-Montenegro. It follows that for any isotropic logconcave density, the ball walk with step size $\delta=\Theta(1/\sqrt{n})$ mixes in $O\left(n^{2}D\right)$ proper steps from \emph{any} starting point. This improves on the previous best bound of $O(n^{2}D^{2})$ and is also asymptotically tight. The new bound leads to the following large deviation inequality for an $L$-Lipschitz function $g$ over an isotropic logconcave density $p$: for any $t>0$, \[ Pr_{x\sim p}\left(\left|g(x)-\bar{g}\right|\geq L\cdot t\right)\leq\exp(-\frac{c\cdot t^{2}}{t+\sqrt{n}}) \] where $\bar{g}$ is the median or mean of $g$ for $x\sim p$; this generalizes and improves on previous bounds by Paouris and by Guedon-Milman. The technique also bounds the ``small ball'' probability in terms of the Cheeger constant, and recovers the current best bound.
This paper studies sheaf cohomology on coarse spaces. |
Lauds to our colleague Vizag for his elegant demonstration that
$\lambda \; \text{an eigenvalue of} \; T \Longrightarrow \bar \lambda \; \text{an eigenvalue of} \; T^\dagger; \tag 1$
however, his work on this subject leaves unaddressed the title question, that is,
$\text{"Why is} \; T^\dagger - \bar \lambda I \; \text{not one-to-one?"} \tag 2$
I wish to take up this specific topic here, and provide a sort of "classic" answer; specifically, I wish to demonstrate the essential and well-known result,
"A linear map $S:V \to V$ from a finite dimensional vector space to itself is one-to-one if and only if it is onto."
Note: in what follows we allow $V$ to be a vector space over any base field $\Bbb F$.
The argument is based upon elementary notions of basis and linear independence. Proof:
We first assume $S:V \to V$ is onto. Then we let $w_1, w_2, \dots, w_n$ be a basis for $V$ over the field $\Bbb F$, and we see, since $S$ is surjective, that there must be a set of vectors $v_i$, $1 \le i \le n$, with
$Sv_i = w_i, \; 1 \le i \le n; \tag 3$
I claim the set $\{ v_i \mid 1 \le i \le n \}$ is linearly independent over $\Bbb F$; for if not, there would exist $\alpha_i \in \Bbb F$, not all zero, with
$\displaystyle \sum_1^n \alpha_i v_i = 0; \tag 4$
then
$\displaystyle \sum_1^n \alpha_i w_i = \sum_1^n \alpha_i Sv_i = S \left (\sum_1^n \alpha_i v_i \right ) = S(0) = 0; \tag 5$
but this contradicts the linear independence of the $w_i$ unless
$\alpha_i = 0, \; 1 \le i \le n; \tag 6$
but condition (6) is precluded by our assumption that not all the $\alpha_i = 0$; therefore the $v_i$ are linearly independent over $\Bbb F$ and hence form a basis for $V$; then any $x \in V$ may be written
$x = \displaystyle \sum_1^n x_i v_i, \; x_i \in \Bbb F; \tag 7$
now suppose $S$ were
not injective. Then we could find distinct $x_1, x_2 \in V$ with
$Sx_1 = Sx_2; \tag 8$
if, in accord with (7) we set
$x_1 = \displaystyle \sum_1^n \alpha_i v_i, \tag 9$
$x_2 = \displaystyle \sum_1^n \beta_i v_i, \tag{10}$
then from (8)-(10),
$\displaystyle \sum_1^n \alpha_i w_i = \sum_1^n \alpha_i Sv_i = S \left (\sum_1^n \alpha_i v_i \right ) = S \left (\sum_1^n \beta_i v_i \right) = \sum_1^n \beta_i Sv_i = \sum_1^n \beta_i w_i, \tag{11}$
whence
$\displaystyle \sum_1^n (\alpha_i - \beta_i) w_i = 0; \tag{12}$
now the linear independence of the $w_i$ forces
$\alpha_i = \beta_i, \; 1 \le i \le n, \tag{13}$
whence again
via (9)-(10)
$x_1 = x_2, \tag{14}$
contradicting the assumption that $x_1 \ne x_2$; and so we see that $S$ is injective.
Going the other way, we now suppose $S$ is injective; and let the set $\{v_i \mid 1 \le i \le n \}$ form a basis for $V$. I claim that the vectors $Sv_1, Sv_2, \ldots, Sv_n$ also form a basis; for if not, they must be linearly dependent and we may find $\alpha_i \in \Bbb F$, not all zero, such that
$\displaystyle S \left ( \sum_1^n \alpha_i v_i \right ) = \sum_1^n \alpha_i Sv_i = 0; \tag{15}$
now with $S$ injective this forces
$\displaystyle \sum_1^n \alpha_i v_i = 0, \tag{16}$
impossible by the assumed linear independence of the $v_i$; thus the $Sv_i$
do form a basis and hence any $y \in V$ may be written
$y = \displaystyle \sum_1^n \beta_i Sv_i = S \left ( \sum_1^n \beta_i v_i \right ); \tag{17}$
thus every $y \in V$ lies in the image of $S$, which is at last seen to be onto.
End of Proof.
If we apply this result to $T^\dagger - \bar \lambda I$ as in the body of the question, we see that, having shown that $T^\dagger - \bar \lambda I$ is
not onto, we may conclude it is also not injective by the preceding basic demonstration; but not injective implies the null space is not $\{ 0 \}$, since if $x_1 \ne x_2$ but $Sx_1 = Sx_2$, we have $x_1 - x_2 \ne 0$ but
$S(x_1 - x_2) = Sx_1 - Sx_2 = 0, \tag{18}$
whence $0 \ne x_1 - x_2 \in \ker S \ne \{ 0 \}$. |
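As a concrete numerical illustration (an added sketch, not part of the original argument), the theorem can be phrased for matrices: a square matrix over $\Bbb R$ represents an injective map exactly when it represents a surjective one, and both conditions amount to full rank. A minimal check with NumPy (function names are mine):

```python
import numpy as np

# For a square matrix S : R^n -> R^n, injectivity (trivial kernel) and
# surjectivity (full image) are both equivalent to full rank.
def is_injective(S):
    return np.linalg.matrix_rank(S) == S.shape[1]

def is_surjective(S):
    return np.linalg.matrix_rank(S) == S.shape[0]

S_invertible = np.array([[2.0, 1.0],
                         [0.0, 3.0]])
S_singular = np.array([[1.0, 2.0],
                       [2.0, 4.0]])  # second row is twice the first

assert is_injective(S_invertible) and is_surjective(S_invertible)
assert not is_injective(S_singular) and not is_surjective(S_singular)
```

Of course, this equivalence fails in infinite dimensions, which is why the finite dimensionality of $V$ is essential above.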
ACMS Abstracts: Spring 2015
Irene Kyza (U Dundee) Adaptivity and blowup detection for semilinear evolution convection-diffusion equations based on a posteriori error control
We discuss recent results on the a posteriori error control and adaptivity for an evolution semilinear convection-diffusion model problem with possible blowup in finite time. This belongs to the broad class of partial differential equations describing e.g., tumor growth, chemotaxis and cell modelling. In particular, we derive a posteriori error estimates that are conditional (estimates which are valid under conditions of a posteriori type) for an interior penalty discontinuous Galerkin (dG) implicit-explicit (IMEX) method using a continuation argument. Compared to a previous work, the obtained conditions are more localised and allow for efficient error control near the blowup time. Utilising the conditional a posteriori estimator we are able to propose an adaptive algorithm that appears to perform satisfactorily. In particular, it leads to good approximation of the blowup time and of the exact solution close to the blowup. Numerical experiments illustrate and complement our theoretical results. This is joint work with A. Cangiani, E.H. Georgoulis, and S. Metcalfe from the University of Leicester.
Daniel Vimont (UW) Linear Inverse Modeling of Central and East Pacific El Niño / Southern Oscillation (ENSO) Events
Research on the structure and evolution of individual El Niño / Southern Oscillation (ENSO) events has identified two categories of ENSO event characteristics that can be defined by maximum equatorial SST anomalies centered in the Central Pacific (around the dateline to 150 deg. W; CP events) or in the Eastern Pacific (east of about 150 deg. W; EP events). The distinction between these two events is not just academic: both types of event evolve differently, implying different predictability; the events tend to have different maximum amplitude; and the global teleconnection differs between each type of event.
In this presentation I will (i) describe the Linear Inverse Modeling (LIM) technique, (ii) apply LIM to determine an empirical dynamical operator that governs the evolution of tropical Pacific climate variability, (iii) define norms under which initial conditions can be derived that optimally lead to growth of CP or EP ENSO events, and (iv) identify patterns of stochastic forcing that are responsible for exciting each type of event.
Saverio Spagnolie (UW) Sedimentation in viscous fluids: flexible filaments and boundary effects
The deformation and transport of elastic filaments in viscous fluids play central roles in many biological and technological processes. Compared with the well-studied case of sedimenting rigid rods, the introduction of filament compliance may cause a significant alteration in the long-time sedimentation orientation and filament geometry. In the weakly flexible regime, a multiple-scale asymptotic expansion is used to obtain expressions for filament translations, rotations and shapes which match excellently with full numerical simulations. In the highly flexible regime we show that a filament sedimenting along its long axis is susceptible to a buckling instability. Embedding the analytical results for a single filament into a mean-field theory, we show how flexibility affects a well established concentration instability in a sedimenting suspension.
Another problem of classical interest in fluid mechanics involves the sedimentation of a rigid particle near a wall, but most studies have been numerical or experimental in nature. We have derived ordinary differential equations describing the sedimentation of arbitrarily oriented prolate and oblate spheroids near a vertical or inclined plane wall which may be solved analytically for many important special cases. Full trajectories are predicted which compare favorably with complete numerical simulations performed using a novel double layer boundary integral formulation, a Method of Stresslet Images. Several trajectory-types emerge, termed tumbling, glancing, reversing, and sliding, along with their fully three-dimensional analogues.
Jonathan Freund (UIUC) Adjoint-based optimization for understanding and reducing flow noise
Advanced simulation tools, particularly large-eddy simulation techniques, are becoming capable of making quality predictions of jet noise for realistic nozzle geometries and at engineering relevant flow conditions. Increasing computer resources will be a key factor in improving these predictions still further. Quality prediction, however, is only a necessary condition for the use of such simulations in design optimization. Predictions do not of themselves lead to quieter designs. They must be interpreted or harnessed in some way that leads to design improvements. As yet, such simulations have not yielded any simplifying principles that offer general design guidance. The turbulence mechanisms leading to jet noise remain poorly described in their complexity. In this light, we have implemented and demonstrated an aeroacoustic adjoint-based optimization technique that automatically calculates gradients that point the direction in which to adjust controls in order to improve designs. This is done with only a single flow solution and a solution of an adjoint system, which is solved at computational cost comparable to that for the flow. Optimization requires iterations, but having the gradient information provided via the adjoint accelerates convergence in a manner that is insensitive to the number of parameters to be optimized. The talk will review the formulation of the adjoint of the compressible flow equations for optimizing noise-reducing controls and present examples of its use. We will particularly focus on some mechanisms of flow noise that have been revealed via this approach.
Markos Katsoulakis (U Mass Amherst) Information Theory methods for parameter sensitivity and coarse-graining of high-dimensional stochastic dynamics
In this talk we discuss path-space information theory-based sensitivity analysis and parameter identification methods for complex high-dimensional dynamics, as well as information-theoretic tools for parameterized coarse-graining of non-equilibrium extended systems. Furthermore, we establish their connections with goal-oriented methods in terms of new, sharp, uncertainty quantification inequalities. The combination of proposed methodologies is able to (a) handle molecular-level models with a very large number of parameters, (b) address and mitigate the high variance in statistical estimators, e.g. for sensitivity analysis, in spatially distributed
Kinetic Monte Carlo (KMC), (c) tackle non-equilibrium processes, typically associated with coupled physicochemical mechanisms, boundary conditions, etc. (such as reaction-diffusion systems), and where even steady states are unknown altogether, e.g. do not have a Gibbs structure. Finally, the path-wise information theory tools, (d) yield a surprisingly simple, tractable and easy-to-implement approach to quantify and rank parameter sensitivities, as well as (e) provide reliable molecular model parameterizations for coarse-grained molecular systems and their dynamics, based on fine-scale data and rational model selection methods through suitable path-space (dynamics-based) information criteria. The proposed methods are tested against a wide range of high-dimensional stochastic processes, ranging from complex biochemical reaction networks with hundreds of parameters, to spatially extended Kinetic Monte Carlo models in catalysis and Langevin dynamics of interacting molecules with internal degrees of freedom.
Tao Zhou (Chinese Academy of Sciences) The Christoffel function weighted least-squares for stochastic collocation approximations: applications to Uncertainty Quantification
We shall consider the multivariate stochastic collocation methods on unstructured grids. The motivation for such a study is the applications in parametric Uncertainty Quantification (UQ). We will first give a general framework of stochastic collocation methods, which include approaches such as compressed sensing, least-squares, and interpolation. Particular attention will be then given to the least-squares approach, and we will review recent progresses in this topic.
Elaine Spiller (Marquette)
TBA
Murad Banaji (Portsmouth)
"Nonexpansivity in chemical reaction networks"
This work is motivated by the observation that quite often systems of differential equations describing chemical reaction networks (CRNs) display simple global behaviour such as convergence of all orbits to a unique equilibrium under only weak and physically reasonable assumptions on the reaction rates (kinetics). We are led to wonder if the structure of a CRN may sometimes force some distance between solutions to decrease (or at least not increase) with time. If so, how can we find this nonincreasing quantity? We explore different ways in which CRNs can define nonexpansive semiflows (recall that a semiflow $(\phi_t)_{t \geq 0}$ on some Banach space $(X, |\cdot|)$ is nonexpansive if $|\phi_t(x)-\phi_t(y)| \leq |x-y|$ for all $x,y \in X$ and all $t \geq 0$). It turns out that in CRNs the natural evolution of chemical concentrations may be nonexpansive; or a nonexpansive semiflow may be obtained from the evolution of the so-called "extents" of reactions. In both cases we may be able to draw global conclusions about convergence of chemical concentrations. In each case the challenge is to find the correct norm to get nonexpansivity for arbitrary kinetics. To construct such norms and show nonexpansivity we appeal to the theory of monotone dynamical systems. Families of CRNs which can be analysed in this way are presented; however characterising fully the class of CRNs to which this theory applies remains an open - and undoubtedly difficult - task.
This is joint work with Bas Lemmens (University of Kent) and Pete Donnell (University of Portsmouth). |
It’s soon exam time, so I’m practicing proofs in complex analysis. Right now that means Cauchy’s integral formula for \(n\)’th derivatives.
Let \(G\) be a domain of the complex numbers and \(f: G\to \mathbb{C}\) a holomorphic function. We first want to show that \(f\) can be expressed as a power series around a point \(a\in G\), $$f(z)=\sum^\infty_{n=0} a_n(z-a)^n.$$ Let \(B_\rho(a)\) be the largest open ball centred at \(a\) contained in \(G\), and fix a radius \(r\) with \(0<r<\rho\). We claim that $$a_n = \frac{1}{2\pi i} \oint\frac{f(z)}{(z-a)^{n+1}}dz,$$ where the integral is taken over \(\partial B_r(a)\). By the Cauchy integral formula we have, for a fixed \(z_0\in B_r(a)\), $$f(z_0)=\frac{1}{2\pi i} \oint \frac{f(z)}{z-z_0}dz,$$ and by elementary calculations we can, for \(z\in \partial B_r(a)\), write $$\frac{1}{z-z_0} = \frac{1}{z-a} \frac{1}{1-\frac{z_0 -a}{z-a}}=\frac{1}{z-a}\sum^\infty_{n=0} \left(\frac{z_0-a}{z-a}\right)^n,$$ the series converging since \(|z_0-a|<|z-a|=r\). From the above we then have $$\begin{split} f(z_0)& = \frac{1}{2\pi i} \oint \frac{f(z)}{z-z_0}dz \\ &=\frac{1}{2\pi i}\oint\sum^\infty_{n=0} \frac{f(z)(z_0-a)^n}{(z-a)^{n+1}}dz\\ &=\frac{1}{2\pi i}\sum^\infty_{n=0}\oint \frac{f(z)(z_0-a)^n}{(z-a)^{n+1}}dz\\&=\frac{1}{2\pi i}\sum^\infty_{n=0}\oint \frac{f(z)}{(z-a)^{n+1}}dz\,(z_0-a)^n\\ &=\sum^\infty_{n=0}a_n (z_0-a)^n,\end{split}$$ as wanted. We see that \(f\) is given by a power series and is thus infinitely complex differentiable; since term-by-term differentiation of a power series gives \(a_n = f^{(n)}(a)/n!\), the derivatives are $$f^{(n)}(a)=\frac{n!}{2\pi i}\oint\frac{f(z)}{(z-a)^{n+1}}dz,$$ as desired.
The observant reader will have noticed that I didn’t check that the sums were uniformly convergent, which is needed in order to switch the sum and integral signs, but this is an easy application of the Weierstraß \(M\)-test.
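As a numerical aside (not part of the original proof), the coefficient formula can be sanity-checked by discretizing the contour integral with the trapezoid rule, which converges extremely fast for analytic integrands; for \(f=\exp\) and \(a=0\) the computed \(a_3\) should match \(1/3!\). The function name below is my own:

```python
import cmath
import math

def contour_coeff(f, a, n, r=1.0, N=256):
    """Approximate a_n = (1/2*pi*i) * integral of f(z)/(z-a)^(n+1) dz
    over the circle |z - a| = r, using the trapezoid rule."""
    total = 0j
    for k in range(N):
        theta = 2 * math.pi * k / N
        z = a + r * cmath.exp(1j * theta)
        dz = 1j * r * cmath.exp(1j * theta) * (2 * math.pi / N)
        total += f(z) / (z - a) ** (n + 1) * dz
    return total / (2j * math.pi)

# Taylor coefficients of exp at a = 0 are 1/n!
a3 = contour_coeff(cmath.exp, 0.0, 3)
assert abs(a3 - 1 / math.factorial(3)) < 1e-9
```

The trapezoid rule on a periodic integrand is spectrally accurate, which is why so few sample points suffice here.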
References
Complex analysis. Copenhagen: Department of Mathematical Sciences, University of Copenhagen. |
Suppose you have an interaction term in your hamiltonian that looks like
\begin{equation} H=\sum_{ijkl}U_{ijkl}c^\dagger_ic^\dagger_jc_kc_l \end{equation}
where $U$ is the coupling and $c$, $c^\dagger$ are fermion operators. The question is: What can you do to figure out the properties of the tensor $U_{ijkl}$ under index exchange? I first thought that I can demand that $H$ is hermitian, but then I run into the problem of not knowing what is the adjoint of a tensor with more than two indices. If it is what I think it is, then I'd get
\begin{equation} H^\dagger=\sum_{ijkl}U^*_{lkji}c^\dagger_lc^\dagger_kc_jc_i \end{equation}
But then only demanding that $U$ is real seems enough. However, I've seen in many places that $U$ needs to have a specific symmetry under index exchange. My other approach was to use the anticommutation rules. For example, switching the first two operators we get
\begin{equation} \begin{aligned} H&=\sum_{ijkl}U_{ijkl}c^\dagger_ic^\dagger_jc_kc_l \\ &=-\sum_{ijkl}U_{ijkl}c^\dagger_jc^\dagger_ic_kc_l \\ &=-\sum_{ijkl}U_{jikl}c^\dagger_ic^\dagger_jc_kc_l \end{aligned} \end{equation} which would mean that the tensor is antisymmetric under exchange of $i$ and $j$. Is this reasoning correct? |
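One way to sanity-check this reasoning numerically (an added sketch, not part of the question) is to build explicit fermion operators via the Jordan-Wigner construction and compare the Hamiltonian built from a random coupling $U$ with the one built from its part antisymmetrized in $(i,j)$ and $(k,l)$; the anticommutation relations guarantee the two operators agree, consistent with taking $U$ antisymmetric without loss of generality:

```python
import numpy as np
from itertools import product

# Jordan-Wigner representation of M fermionic modes (names are mine)
M = 3
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])  # lowers the occupied state

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# c_i = Z x ... x Z x sm x I x ... x I, which enforces anticommutation
c = [kron_all([Z] * i + [sm] + [I2] * (M - i - 1)) for i in range(M)]

def H(U):
    """Interaction sum_{ijkl} U_{ijkl} c+_i c+_j c_k c_l as a matrix."""
    h = np.zeros((2 ** M, 2 ** M))
    for i, j, k, l in product(range(M), repeat=4):
        h += U[i, j, k, l] * c[i].T @ c[j].T @ c[k] @ c[l]
    return h

rng = np.random.default_rng(0)
U = rng.normal(size=(M, M, M, M))
# Part of U antisymmetric under i<->j and under k<->l
UA = 0.25 * (U - U.transpose(1, 0, 2, 3)
               - U.transpose(0, 1, 3, 2) + U.transpose(1, 0, 3, 2))
assert np.allclose(H(U), H(UA))  # only the antisymmetric part contributes
```

This supports the conclusion drawn in the question: the operator $c^\dagger_i c^\dagger_j c_k c_l$ is itself antisymmetric under $i \leftrightarrow j$ and $k \leftrightarrow l$, so only that part of $U$ enters $H$.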
The introduction to coding theory in this post will now allow us to explore some more interesting topics in coding theory, courtesy of Pinter’s
A Book of Abstract Algebra. We’ll introduce the notion of a code, information positions, and parity check equations. Most communication channels are noisy to some extent, which means that a transmitted codeword may have one or more errors. If the recipient of a codeword attempts to decode a message, he needs to be able to determine first whether there was a possible error in the code, and then correct it. This will lead us to a method called maximum likelihood decoding.
Recall that we denoted \mathbb{B}^{n} as the set of all binary words of length n. We will call a subset of \mathbb{B}^{n} a code. The first code we’re going to work with is a subset of \mathbb{B}^{5}. We’ll call this code C_{1}, consisting of the following binary words of length 5:

C_{1} = \left\{\begin{array}{lr}00000 & 10011\\00111 & 10100\\01001 & 11010\\01110 & 11101\end{array}\right\}
This is only a subset of \mathbb{B}^{5}. Remember that the full cardinality of \mathbb{B}^{5} = 2^{5} = 32. The 8 words in C_{1} are
codewords. Anything not in C_{1} is not a codeword. When we design a code, only these codewords are to be transmitted. That means there is an error if any word not in C_{1} was received.
If we designed the code well, then it should be very unlikely that an error in transmission results in another codeword. (Think if we transmitted 10011 but received 11010. Since 11010 is a codeword, we wouldn’t actually know that’s not what was meant, and the transmission error would not be detected.) This means we need a good code that makes it easy on us to detect and correct transmission errors. To do that, we’re going to need some notions of weight, distance, and redundancy.
Weight and Distance of Codewords
Definition: weight of a codeword: The weight of a binary word is the number of 1s in the word. Alternatively, we could add the bits (using regular addition).
For example, the weight of 00111 is 3, and the weight of 01001 is 2. The position of the 1s is irrelevant; we’re just counting them here.
Definition: distance between codewords: The distance between two binary words is the number of positions in which they differ.
The distance between 00111 and 01001 is 3. (The two words differ in positions 2, 3, and 4.)[1]
We may also define the minimum distance of a code to be the smallest distance between any two words in the code. Looking at the code C_{1}, we can make pairwise comparisons[2] to see that the minimum distance of this code is 2.
What does this mean? When we receive a transmission over a potentially noisy channel, we will notice that we received a word with errors only if the number of errors in the transmitted word is less than this minimum distance; otherwise it is possible to receive a valid codeword without knowing it is not the codeword that was meant.
For instance, in C_{1}, we will be able to detect that we received a bad transmission if the number of bits that are in error is 1 or 0. It takes at least 2 errors in transmission before we could possibly be receiving another code word than what was sent.
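The pairwise check can be automated; here is a small Python sketch (the names are my own) that computes the minimum distance of C_{1}:

```python
from itertools import combinations

# The code C1 from above
C1 = ["00000", "10011", "00111", "10100",
      "01001", "11010", "01110", "11101"]

def distance(a, b):
    """Number of positions in which two binary words differ."""
    return sum(x != y for x, y in zip(a, b))

# All 28 pairwise comparisons
min_distance = min(distance(a, b) for a, b in combinations(C1, 2))
print(min_distance)  # 2, so a single bit error is always detectable
```
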
This gives us a way to design good codes. The more errors needed to change a codeword into another, the better the code is. Even more desirable would be a code with a minimum distance of 3, since 1 or 2 errors will always be detected, and 3 errors are highly unlikely.
Constructing a Code
In every codeword, certain positions are called
information positions, and the remaining are redundancy positions, sometimes called parity bits. In C_{1}, the first three bits are information positions. Looking at the first three bits of each of the eight code words, we can see that every possible 3-bit binary sequence is there: 000, 001, 010, 011, 100, 101, 110, and 111.
So what’s up with the last two bits? How do we decide what those are? For that, we use
parity check equations. The simplest case is single-bit parity. For odd single-bit parity, if the number of 1s in the word is even, then the redundancy position is set to a 1, giving a binary word (with parity) that has an odd number of ones. Otherwise, the parity bit is a 0. We reverse that for even single-bit parity.
So if the word is 1100, under odd single-bit parity, the word with redundancy would be 11001. Under even single-bit parity, the word with redundancy is 11000.
We can also define parity check equations when we want more than one bit of redundancy. For example, in C_{1}, the parity check equations are given bya_{4} = a_{1} + a_{3} \bmod 2 \qquad a_{5} = a_{1} + a_{2} + a_{3} \bmod 2
Check C_{1} for yourself. The redundancy positions a_{4} and a_{5} for 110 are given bya_{4} = 1 \oplus 0 = 1 \text{ and } a_{5} = 1\oplus 1 \oplus 0 = 0
giving the word 11010 from the code above.[3]
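The two parity check equations pin down the whole code; a short Python sketch (function names are my own) regenerates C_{1} from the eight 3-bit information words:

```python
from itertools import product

def encode(info):
    """Append the redundancy bits a4, a5 given by the parity check equations."""
    a1, a2, a3 = info
    a4 = (a1 + a3) % 2
    a5 = (a1 + a2 + a3) % 2
    return (a1, a2, a3, a4, a5)

code = {"".join(map(str, encode(w))) for w in product((0, 1), repeat=3)}
# Reproduces the eight codewords of C1 exactly
assert code == {"00000", "10011", "00111", "10100",
                "01001", "11010", "01110", "11101"}
```
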
We could create an entirely new code in \mathbb{B}^{5} using the same information positions from \mathbb{B}^{3} and different parity check equations.
Decoding Noisy Communication: Maximum Likelihood Decoding
Let’s make some shorthand for the distance between two binary words. We’ll denote d(\mathbf{a},\mathbf{b}) to be the distance between two binary words.
If we receive some transmission, we know it has to be made of words from our given code (in this case, we’re working with C_{1}). We
decode a received word by finding the codeword closest to our received word. That is, if we receive \mathbf{x}, we minimize d(\mathbf{a},\mathbf{x}) over all the possible codewords \mathbf{a} in our code.
This is
maximum likelihood decoding.
If we receive
11111, then we need to find the codeword in C_{1} closest. That is, the one that differs in the fewest positions. Likely you have already figured out that this word isn’t a codeword, and thus is an erroneous transmission, and that the closest codeword is 11101. You did this quickly by calculating the distance between 11111 and each of the codewords.
Since 11111 is not a codeword, at least one error occurred in transmission; and indeed the nearest codeword, 11101, differs from 11111 in exactly one position.
Taking another example, let’s assume we received
10111. This time, when you go through all the words, we actually find two codewords with a distance of 1 away: 10011 and 00111. That is, either a mistake could have been made in the 3rd position or the first position, but we actually can’t tell which it is if we just receive the word (assuming no other context).
This actually makes C_{1} a less desirable code. The codewords are close enough together that we cannot ensure that every possible word from \mathbb{B}^{5} we might receive can be uniquely decoded to a single word in C_{1}.
How do we find conditions to put on a code design to guarantee unique maximum-likelihood decoding? [Fair warning here, we’ll get a little more theoretical and math heavy. But here’s where things get interesting.]
Now it’s time to get general. Let m be the minimum distance for some general code C.[4]
First, let’s formally state what we’ve already implicitly played with: as long as there are m-1 errors or fewer, we can detect an error. For instance, in C_{1}, m=2, and thus we can detect an error in one position, but not necessarily 2, since an error in two positions may give us another codeword that wasn’t intended. But since we would decode it as a codeword, we wouldn’t know that word wasn’t the one our partner meant to send.
The goal of this section is to figure out how to separate the codewords in any code to guarantee that any word received will be “closest” to only one word. In other words, the problem with the last code C_{1} is that we found a word in \mathbb{B}^{5} that was equidistant from two codewords. We want to design a code that keeps that from happening.
[Figure: a and b are codewords in our general code C; x, y, z, and w are other words of the same length as the codewords in C. k is the radius of the sphere drawn around each codeword. Here you can see that x is in the sphere for a, and y is in the sphere for b; z and w are outside of both. The spheres overlap, so in theory it’s possible a word lies in both spheres.]
As shown in the figure above, we’re going to define a
sphere of radius k around a codeword \mathbf{a} as the set of all words in \mathbb{B}^{n}[5] whose distance from \mathbf{a} is k or smaller. We’ll call this set S_{k}(\mathbf{a}). Mathematically, we write this as S_{k}(\mathbf{a}) = \{\mathbf{x} \in \mathbb{B}^{n} : d(\mathbf{x}, \mathbf{a}) \leq k\}.
[You can see here why mathematicians like our symbolic shorthand.]
If we want to design a good code, then we want to make sure that the spheres for all codewords are completely disjoint (not overlapping). So how big should we make these spheres in terms of the minimum distance of the code?
If we let t = \frac{1}{2}(m-1), then we can show that any two spheres of radius t around distinct codewords \mathbf{a} and \mathbf{b} in a generic code C have no words in common.
To show this, we’ll argue by contradiction.[6] We’ll assume that these two generic spheres have a word in common, say \mathbf{x}, and show that this assumption contradicts the notion of the minimum distance m.
If \mathbf{x} is in both S_{t}(\mathbf{a}) and S_{t}(\mathbf{b}), then it must be at most t = \frac{1}{2}(m-1) away from both \mathbf{a} and \mathbf{b}. But the distance between binary words obeys the triangle inequality, so d(\mathbf{a},\mathbf{b}) \leq d(\mathbf{a},\mathbf{x}) + d(\mathbf{x},\mathbf{b}) \leq 2t = m-1. In other words, if \mathbf{x} lay in the spheres of both codewords, then \mathbf{a} and \mathbf{b} would be at most m-1 apart, contradicting the minimum distance m between the two codewords.
So what does this mean? This means that if we have t or fewer errors in transmission, we will be able to uniquely decode the received word, and won’t run into the same problem we did with C_{1}.
Conclusion
Designing a code really relies on the parity check equations. The same information positions with different parity check equations yield totally different codes. We learned that some codes are better than others. Namely, we want a lot of “separation” between codewords, so we don’t run into the issue of accidentally decoding the word incorrectly, but being unable to tell we decoded incorrectly.
We want the minimum distance of the code to be as high as reasonable (3+ is considered reasonable, unless your transmission line is really bad). We also used a little geometry to view the notion of separation between codewords as spheres, and we want those spheres to not overlap each other. Overlapping spheres means that we can possibly decode a word incorrectly.
Designing codes using parity check equations isn’t an abstract exercise. We use it in data transmission all the time, but it requires a journey into the abstract a little bit to understand what makes some codes better than others. Score another point for algebra.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Footnotes
[1] The astute observer will notice that we can identify the positions in which the two words differ by using the XOR operation from the previous discussion.
[2] There are 28 such comparisons to make. My suggestion: use the XOR operation from our previous discussion and then add bitwise to get the distance. A simple script can be written in your language of choice for good coding practice.
[3] Feel free to test the other words as well.
[4] Doesn’t matter the length of the binary words, just any code made of binary words.
[5] where n is the length of the codewords.
[6] For all the mathematicians out there, bear with me. My goal is to impart intuition and a geometric reasoning, so I’m going to make a less formal proof here. The argument would center around looking at each position, via the definition of the minimum distance, but I want to relax formality here for clarity. The argument is no less correct, just less scary for non mathematicians. |
No wonder mathematicians find numbers to be the passion of a lifetime. Deceptively simple things can lead to such amazing complexity, to intriguing links between seemingly unconnected concepts.
Take the simple concept of the factorial. The factorial of a positive integer $n$ is defined as the product of all the integers between 1 and $n$ inclusive. Thus, for instance, the factorial of 5 is 5! = 1×2×3×4×5, or 120. Because $n!=n\times(n-1)!$, and conversely, $(n-1)!=n!/n$, it is possible to define 0! as 1!/1, or 1.
If you plot the factorial of integers on a graph, you'll notice that the points you plotted will be along what appears to be some kind of an exponential curve. In any case, it looks very much possible to connect the points with a smooth line. This begs the question: might it be possible to extend the definition of the factorial and create a smooth function that has a value for all real numbers, not just integers, but for integers, its value is equal to that of the factorial itself?
The answer is a resounding yes; such a function indeed exists. In fact, the so-called Gamma function extends the definition of the factorial to the entire complex plane. In other words, while the factorial is defined only for non-negative integers, the Gamma function is defined for any complex number z as long as z is not zero or a negative integer.
Definition
The Gamma function is defined by the following integral:
\[\Gamma(z)=\int\limits_0^\infty t^{z-1}e^{-t}~dt.\]
(The integral converges for ${\rm Re}~z\gt 0$; elsewhere, the function is defined by analytic continuation.)
For all complex numbers $z$ where the function is defined, the following recurrence relation is true:
\[\Gamma(z+1)=z\Gamma(z).\]
Consequently, for positive integers:
\[n!=\Gamma(n+1).\]
Some very helpful relationships exist between the Gamma function values for various arguments. For instance, the following relationship makes it possible to compute the Gamma function for a negative argument easily:
\[\Gamma(-z)=\frac{-\pi}{z\Gamma(z)\sin\pi z}.\]
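As a quick worked check of this relationship (an added example, using the known value $\Gamma(\tfrac12)=\sqrt{\pi}$), take $z=\tfrac12$:

\[\Gamma(-\tfrac12)=\frac{-\pi}{\tfrac12\,\Gamma(\tfrac12)\sin\tfrac{\pi}{2}}=\frac{-\pi}{\tfrac12\sqrt{\pi}}=-2\sqrt{\pi}\approx-3.5449,\]

which agrees with the standard value of $\Gamma(-\tfrac12)$.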
But how do you actually
compute the Gamma function? The definition integral is not very useful; to produce an accurate result, an insanely high number of terms would have to be added during some numerical integration procedure. There must be some other way; indeed there are several other ways, each yielding useful algorithms for handheld calculators.
The Numerical Recipes Solution
Back in 1979, when I was possessed with the desire to squeeze a Gamma function implementation into the 72 programming steps of my tiny calculator, I could not find any ready reference on numerical approximations of this function. Eventually, I came across a 1955 Hungarian book on differential and integral calculus, which provided a usable approximation formula. Being theoretical in nature, the book not only offered no hints on efficient computation of the function, it didn't even provide numerical approximations of the coefficients in the formula; that was all left to the reader to work out (or to not work out, as might be the case, since theoretical mathematicians rarely sink to the level of actually working out a numeric result). Work it out I did, making my little calculator run for
several days from its wall charger, while it calculated some of those coefficients. Eventually I had my result: a 71-step program that provided 8 or more digits of precision for real $z$'s in the range of $0.5\lt z\lt 1.5$. From these values, it is possible to compute the Gamma function of any real number using the recurrence relation.
As it turns out, while my approximation is not a bad one, there's a much better method for calculating the Gamma function.
Numerical Recipes in C (2nd ed., Cambridge University Press, 1992) demonstrates a much more efficient algorithm, the so-called Lanczos approximation, for computing the Gamma function for any positive argument (indeed, for any complex argument with a nonnegative real part) to a high level of accuracy. Here is their magic formula, computing the Gamma function with an error $|\epsilon|\lt 2\times 10^{-10}$ for any complex $z$ for which ${\rm Re}~z\gt 0$:
\[\Gamma(z)=\left[\frac{\sqrt{2\pi}}{z}\left(p_0+\sum\limits_{n=1..6}\frac{p_n}{z+n}\right)\right](z+5.5)^{z+0.5}e^{-(z+5.5)},\]
where
\begin{align}p_0&=1.000000000190015,\\
p_1&=76.18009172947146,\\ p_2&=-86.50532032941677,\\ p_3&=24.01409824083091,\\ p_4&=-1.231739572450155,\\ p_5&=1.208650973866179\times 10^{-3},\\ p_6&=-5.395239384953\times 10^{-6}.\end{align}
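The formula translates almost directly into code; here is a minimal Python transcription (using `math.gamma` only to check the result):

```python
import math

# Lanczos coefficients p_1 .. p_6 from the Numerical Recipes formula above.
NR_P = [76.18009172947146, -86.50532032941677, 24.01409824083091,
        -1.231739572450155, 1.208650973866179e-3, -5.395239384953e-6]

def gamma_lanczos(z):
    """Gamma(z) for Re z > 0, error below ~2e-10."""
    s = 1.000000000190015          # p_0
    for n, p in enumerate(NR_P, start=1):
        s += p / (z + n)
    return (math.sqrt(2.0 * math.pi) / z) * s \
        * (z + 5.5) ** (z + 0.5) * math.exp(-(z + 5.5))
```

For instance, `gamma_lanczos(6)` returns $\Gamma(6)=120$ to about ten digits.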
This formula can be rearranged in a form more suitable for the limited program capacity of programmable calculators using the following identity:
\[p_0+\sum\limits_{n=1..6}\frac{p_n}{z+n}=\frac{p_0(z+1)(z+2)(z+3)(z+4)(z+5)(z+6)+\sum\limits_{n=1..6}\frac{p_n(z+1)(z+2)(z+3)(z+4)(z+5)(z+6)}{z+n}}{(z+1)(z+2)(z+3)(z+4)(z+5)(z+6)}.\]
This way, after like terms are collected, the numerator becomes a simple polynomial, and the denominator can be calculated using a simple loop. The calculation can be further simplified by multiplying each constant by \(\sqrt{2\pi}\). The result is the following formula:
\[\Gamma(z)=\frac{\sum\limits_{n=0..6}q_nz^n}{\prod\limits_{n=0..6}(z+n)}(z+5.5)^{z+0.5}e^{-(z+5.5)},\]
\begin{align}q_0&=75122.6331530,\\
q_1&=80916.6278952,\\ q_2&=36308.2951477,\\ q_3&=8687.24529705,\\ q_4&=1168.92649479,\\ q_5&=83.8676043424,\\ q_6&=2.50662827511.\end{align}
The Lanczos Approximation
Numerical Recipes neglects to mention an important fact: the Lanczos approximation (C. Lanczos, A Precision Approximation of the Gamma Function, J. SIAM Numer. Anal. Ser. B, Vol. 1, 1964, pp. 86–96) can be used with other choices of values. It is possible to derive simpler versions with fewer coefficients; it is also possible to develop extremely accurate versions. As a matter of fact, the Lanczos approximation can be used to compute the Gamma function to arbitrary precision. I am grateful to Paul Godfrey for providing me with the details that are presented here.
The Lanczos approximation can, in essence, be reduced to a simple vector inner product and then some additional elementary computations:
\[\ln\Gamma(z+1)=\ln({\bf\mathrm{Z}}\cdot{\bf\mathrm{P}})+(z+0.5)\ln(z+g+0.5)-(z+g+0.5),\]
where ${\bf\mathrm{Z}}$ is an $n$-dimensional row vector constructed from the function argument, $z$:
\[{\bf\mathrm{Z}}=\left[1~~\frac{1}{z+1}~~\frac{1}{z+2}~...~\frac{1}{z+n-1}\right],\]
and ${\bf\mathrm{P}}$ is an $n$-dimensional column vector constructed as the product of several $n\times n$ matrices and the $n$-dimensional column vector ${\bf\mathrm{F}}$:
\[{\bf\mathrm{P}}={\bf\mathrm{D}}\cdot{\bf\mathrm{B}}\cdot{\bf\mathrm{C}}\cdot{\bf\mathrm{F}},\]
where
\[{\bf\mathrm{B}}_{ij}=\left\{\begin{matrix}1&{\rm if}~i=0,\\(-1)^{j-i}\begin{pmatrix}i+j-1\\j-i\end{pmatrix}&{\rm if}~i>0,~j\ge i,\\0&{\rm otherwise}.\end{matrix}\right.\]
${\bf\mathrm{C}}$ is a matrix containing coefficients from Chebyshev polynomials:
\[{\bf\mathrm{C}}_{ij}=\left\{\begin{matrix}1/2&{\rm if}~i=j=0,\\0&{\rm if}~j>i,\\(-1)^{i-j}\sum\limits_{k=0}^{i}\begin{pmatrix}2i\\2k\end{pmatrix}\begin{pmatrix}k\\k+j-i\end{pmatrix}&{\rm otherwise},\end{matrix}\right.\]
and
\[{\bf\mathrm{D}}_{ij}=\left\{\begin{matrix}0&{\rm if}~i\ne j,\\1&{\rm if}~i=j=0,\\-1&{\rm if}~i=j=1,\\{\bf\mathrm{D}}_{i-1,j-1}\frac{2(2i-1)}{i-1}&{\rm otherwise},\end{matrix}\right.\]
\[{\bf\mathrm{F}}_{i}=\frac{(2i)!e^{i+g+0.5}}{i!2^{2i-1}(i+g+0.5)^{i+0.5}},\]
where $g$ is an arbitrary real parameter and $n$ is a positive integer. For suitable choices of $g$ and $n$, very good precision can be obtained. The typical error can be computed as follows:
\[{\bf\mathrm{E}}={\bf\mathrm{C}}\cdot{\bf\mathrm{F}},\]
\[|\epsilon|=\left|\frac{\pi}{2\sqrt{2e}}\left(e^g\sqrt{\pi}-\sum\limits_{i=0}^{n-1}(-1)^{i}{\bf\mathrm{E}}_i\right)\right|.\]
I endeavored to follow up on Godfrey's work and wrote an arbitrary precision implementation of the
Lanczos approximation in the C++ language, utilizing the GMP multi-precision library and my custom C++ wrapper classes for the functions contained therein. This arbitrary precision implementation enabled me to compute, for instance, the Gamma function of 101 (i.e., 100!) to full integral precision, i.e., to a precision better than 158 significant digits.
On a more mundane level, my implementation also allowed me to experiment with various sets of coefficients, trying to find the most accurate and economical version of the Lanczos approximation for pocket calculators. I indeed found a set of only 4 coefficients that has a computed precision of $|\epsilon|\lt 2\times 10^{-7}$, but in practice, appears to yield more than 8 significant digits for all positive real arguments. The precision is further increased when 5 or 6 coefficients are used:
\[\begin{array}{lccc}
 & n=4 & n=5 & n=6 \\
g & 3.65 & 4.35 & 5.15 \\
p_0 & 2.50662846436560184574 & 2.50662828350136765681 & 2.50662827563479526904 \\
p_1 & 41.4174045302370911317 & 92.2070484521121938211 & 225.525584619175212544 \\
p_2 & -27.0638924937115168658 & -83.1776370828788963029 & -268.295973841304927459 \\
p_3 & 2.23931796330266601246 & 14.8028319307817071942 & 80.9030806934622512966 \\
p_4 & {\rm N/A} & -0.220849707953311479372 & -5.00757863970517583837 \\
p_5 & {\rm N/A} & {\rm N/A} & 0.0114684895434781459556 \\
|\epsilon| & 2\times 10^{-7} & 1\times 10^{-8} & 3\times 10^{-11}
\end{array}\]
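The four-coefficient column can be sketched in a few lines. One assumption in the sketch below: the coefficients are used in the same structure as the Numerical Recipes formula, with $\sqrt{2\pi}$ folded into the constants (as suggested by $p_0\approx 2.5066\approx\sqrt{2\pi}$):

```python
import math

# n = 4, g = 3.65 column of the table above; assumes the Numerical Recipes
# structure Gamma(z) = (1/z) * (p0 + sum p_n/(z+n)) * (z+g+0.5)^(z+0.5) * e^-(z+g+0.5)
# with sqrt(2*pi) already folded into the p's.
G4 = 3.65
P4 = [2.50662846436560184574, 41.4174045302370911317,
      -27.0638924937115168658, 2.23931796330266601246]

def gamma4(z):
    s = P4[0] + sum(p / (z + n) for n, p in enumerate(P4[1:], start=1))
    return s / z * (z + G4 + 0.5) ** (z + 0.5) * math.exp(-(z + G4 + 0.5))
```

Despite its size, this version yields roughly eight significant digits for positive real arguments.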
The simplicity of the formula for $n=4$ makes it suitable for use with most programmable calculators, including some of the simplest models, among them my old OEM Commodore machine. That marks the end of my quest of more than 20 years: I finally have a "perfect" method of computing the Gamma function, to the maximum precision possible, on my first programmable calculator.
Stirling's Formula
When great accuracy isn't required, a much simpler approximation exists for the generalized factorial: Stirling's formula. In its original form, this is what this formula looks like:
\[x!\simeq x^xe^{-x}\sqrt{2\pi x}.\]
Unfortunately, this formula isn't particularly accurate. (A slightly more accurate version, often called Burnside's formula, gets rid of the $x$ under the square root and replaces all remaining $x$'s with $x+1/2$ in the right-hand expression: \(x!\simeq(x+1/2)^{x+1/2}e^{-x-1/2}\sqrt{2\pi}\).)
A simple modification, however, can make Stirling's formula accurate to three or four significant digits for most arguments, making it suitable for many applications:
\[x!\simeq x^xe^{-x}\sqrt{2\pi x}\left(1+\frac{1}{12x}\right).\]
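The corrected formula is a one-liner; a quick sketch, checked against $10!=3628800$:

```python
import math

# Stirling's approximation with the first 1/(12x) correction term, as above.
def stirling(x):
    return x ** x * math.exp(-x) * math.sqrt(2.0 * math.pi * x) \
        * (1.0 + 1.0 / (12.0 * x))
```

At $x=10$ the relative error is already only a few parts in $10^5$.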
These variants may make you wonder: are there more accurate versions of Stirling's formula? Indeed there are. Here is a generalized version for the logarithm of the Gamma function:
\[\ln\Gamma(z)=\left(z-\frac{1}{2}\right)\ln z-z+\frac{\ln 2\pi}{2}+\frac{B_2}{1\cdot 2}\frac{1}{z}+\frac{B_4}{3\cdot 4}\frac{1}{z^3}+...+\frac{B_{2k}}{(2k-1)2k}\frac{1}{z^{2k-1}}+...\]
In this formula, the coefficients $B_n$ are the so-called Bernoulli numbers. They can easily be calculated from the following equations (in which you'll no doubt recognize those good old ordinary binomial coefficients):
\begin{align}1+2B_1&=0,\\ 1+3B_1+3B_2&=0,\\ 1+4B_1+6B_2+4B_3&=0,\\ 1+5B_1+10B_2+10B_3+5B_4&=0,\end{align}
and so on. The first few $B_k$'s are: $B_1=-1/2$, $B_2=1/6$, $B_3=0$, $B_4=-1/30$, $B_5=0$, $B_6=1/42$, ...
Stirling's formula is an odd bird. What you see above may suggest that the absolute values of the $B$'s get ever smaller as the index increases, and that therefore, Stirling's formula converges. This is not so; after $B_6$, in fact, the absolute values of these $B$'s begin to increase (e.g., $B_8=-1/30$). After a while, the $B$'s will get very large, and subsequent sums in the approximation will swing around ever more wildly. For instance, $B_{98}$ is approximately $1.1318\times 10^{76}$! Still, Stirling's formula can be used to approximate the Gamma function quite nicely if you don't sum too many terms of the series. In fact, the higher the real part of $z$ is, the more accurate the formula becomes... so one trick in the case when ${\rm Re}~z$ is small is to evaluate Stirling's formula for $z+n$, and then divide the result by $z(z+1)(z+2)...(z+n-1)$.
One particular set of coefficients is easy to remember, produces a relatively compact version of Stirling's formula, and yields very good accuracy for arguments over 5:
\[\ln\Gamma(z)\simeq z\ln z-z+\ln\sqrt{\frac{2\pi}{z}}+\frac{1}{1188z^9}-\frac{1}{1680z^7}+\frac{1}{1260z^5}-\frac{1}{360z^3}+\frac{1}{12z}.\]
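This series combines naturally with the argument-shifting trick mentioned earlier: for small arguments, use $\ln\Gamma(z)=\ln\Gamma(z+1)-\ln z$ repeatedly until the series is accurate. A sketch (checked against `math.lgamma`):

```python
import math

# Compact Stirling-series approximation of ln Gamma(z), with the shift trick:
# for z below 8, recurse via ln Gamma(z) = ln Gamma(z+1) - ln z.
def ln_gamma(z):
    if z < 8.0:
        return ln_gamma(z + 1.0) - math.log(z)
    return (z * math.log(z) - z + 0.5 * math.log(2.0 * math.pi / z)
            + 1.0/(12.0*z) - 1.0/(360.0*z**3) + 1.0/(1260.0*z**5)
            - 1.0/(1680.0*z**7) + 1.0/(1188.0*z**9))
```

The cutoff of 8 is a judgment call; the text only promises very good accuracy for arguments over 5.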
While the
Lanczos approximation may be somewhat more accurate and compact, it requires several floating-point constants that may be difficult to remember and that take up precious (program or register) memory on your calculator. So in many cases, using Stirling's formula may be preferable. However, it should not be forgotten that even with corrections, it remains inaccurate for small arguments.
A Curious Result
In November 2002, I received an e-mail from Robert H. Windschitl who noticed a curious coincidence. He wrote up the extended Stirling's formula as this:
\[\ln\Gamma(x)-\left[\frac{\ln 2\pi}{2}+\left(x-\frac{1}{2}\right)\ln x-x\right]=\frac{1}{12x}-\frac{1}{360x^3}+\frac{1}{1260x^5}-\frac{1}{1680x^7}+...,\]
which he then rewrote as follows:
\[\ln\left[\left(\frac{\Gamma(x)}{\sqrt{\frac{2\pi}{x}}}\right)^{1/x}\frac{e}{x}\right]=\frac{1}{12x^2}-\frac{1}{360x^4}+\frac{1}{1260x^6}-\frac{1}{1680x^8}+...=\ln M(x).\]
Then, through a power series expansion, he obtained:
\[e^{2\ln M(x)}=1+\frac{1}{6x^2}+\frac{1}{120x^4}+\frac{13}{9072x^6}+...\]
At this point, he noticed that this expansion is very similar to a known function expansion:
\[x\sinh \frac{1}{x}=1+\frac{1}{6x^2}+\frac{1}{120x^4}+\frac{1}{5040x^6}+...\]
This led to an approximation formula for the Gamma function that is fairly accurate, and requires no stored constants except for \(\sqrt{2\pi}\) (easily computed on most calculators) and, optionally, the integer value 810 (the part below that's within square brackets can be omitted at a small cost in terms of precision loss):
\[\left(\frac{x}{e}\sqrt{x\sinh\frac{1}{x}\left[+\frac{1}{810x^6}\right]}\right)^x\sqrt{\frac{2\pi}{x}}\simeq\Gamma(x).\]
For values greater than 8, this approximation formula yields 8+ digits of precision even without the correction term that is enclosed in square brackets. On calculators with limited program or register memory, this method may make it possible to implement the Gamma function even when other methods fail.
I have not previously seen this approximation in literature, although Windschitl modestly suggests that it's only a question of searching in the right places.
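Windschitl's formula is short enough to sketch directly, here with the optional correction term included:

```python
import math

# Windschitl's approximation, including the optional 1/(810 x^6) term
# inside the square root.
def gamma_windschitl(x):
    s = x * math.sinh(1.0 / x) + 1.0 / (810.0 * x ** 6)
    return (x / math.e * math.sqrt(s)) ** x * math.sqrt(2.0 * math.pi / x)
```

Note that it needs no stored constants beyond $\sqrt{2\pi}$ and 810, just as the text says.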
And Another!
In early July, 2006, I received an e-mail from Hungary, from Gergő Nemes, who sent me several formulas for the Gamma function that he developed. Two of these caught my attention especially, because they were surprisingly accurate yet simple. Then, a few days later, Gergő wrote to me again, this time with a formula that approximates the Gamma function with an error term proportional to $n^{-8}$, which is quite remarkable! Without further ado, here are his more interesting formulae (updated November 16, 2006):
\[n!\simeq e^{-n}\sqrt{2\pi n}\left(n+\frac{1}{12n}+\frac{1}{1440n^3}+\frac{239}{362880n^5}\right)^n,\]
\[n!=n^n\sqrt{2\pi n}\,\exp\left(\frac{1}{12n+\frac{2}{5n}}-n\right)\left(1+{\cal O}(n^{-5})\right),\]
\[n!=n^n\sqrt{2\pi n}\,\exp\left(\frac{1}{12n+\frac{2}{5n+\frac{53}{42n}}}-n\right)\left(1+{\cal O}(n^{-8})\right).\]
Gergő says that he was able to prove that his formulae are correct. I have not seen his proofs, but I did check the formulae for selected values and confirmed that they do indeed appear to behave as promised. (Since our initial exchange of e-mails, Gergő has informed me that he has identified the second and third of these three formulae as the initial approximations of the Gamma function using continued fractions.)
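The same spot checks are easy to repeat; here is the third (continued-fraction) formula, tested against $10!$:

```python
import math

# Nemes's continued-fraction variant with relative error O(n^-8).
def factorial_nemes(n):
    lam = 1.0 / (12.0 * n + 2.0 / (5.0 * n + 53.0 / (42.0 * n)))
    return n ** n * math.sqrt(2.0 * math.pi * n) * math.exp(lam - n)
```

Expanding the continued fraction reproduces the Stirling series $\frac{1}{12n}-\frac{1}{360n^3}+\frac{1}{1260n^5}-\dots$ term by term, which is why the formula is so accurate.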
The Incomplete Gamma Function
A close relative to the Gamma function is the incomplete Gamma function. Its name is due to the fact that it is defined with the same integral expression as the Gamma function, but the infinite integration limit is replaced by a finite number:
\[\gamma(a,x)=\int\limits_0^x t^{a-1}e^{-t}dt.\]
The incomplete Gamma function can be approximated fairly rapidly using an iterative method:
\[\gamma(a,x)=x^ae^{-x}\sum\limits_{n=0}^\infty\frac{x^n}{a(a+1)...(a+n)}.\]
The magnitude of successive terms quickly diminishes as $n$ increases; 8-12 digit accuracy is achieved for most values of $x$ and $a$ after no more than a few hundred terms (often less than a hundred) are added.
The incomplete Gamma function can also be used to approximate the value of the Gamma function itself, by selecting a suitably high integration limit $x$. For modest values of $a$ (say $a\lt 5$) and $x=30$, 8-digit accuracy is achieved; larger arguments require a correspondingly larger integration limit.
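The series lends itself to a simple loop that stops once the terms become negligible; a sketch, with a check of both the series itself and the Gamma-function approximation just described:

```python
import math

# Lower incomplete Gamma via the series
#   gamma(a, x) = x^a e^-x * sum_{n>=0} x^n / (a (a+1) ... (a+n)),
# summed until the terms stop mattering.
def lower_gamma(a, x, tol=1e-15):
    term = 1.0 / a                 # n = 0 term of the sum
    total = term
    n = 1
    while abs(term) > tol * abs(total):
        term *= x / (a + n)        # ratio of consecutive terms
        total += term
        n += 1
    return x ** a * math.exp(-x) * total
```

For $a=1$ the result must equal $1-e^{-x}$, which makes a convenient exact test.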
I am trying to compare asymptotic runtime bounds of a few algorithms presented in this research paper, A quasi-polynomial algorithm for discrete logarithm in finite fields of small characteristic. The functions are,
$L(\alpha) = \exp(O(\log(n)^{\alpha}(\log\log(n))^{1-\alpha}))$,
which I'd like to plot for $\alpha = 1/3, 1/4+O(1)$, and $1/4 + O(n)$,
and $O(n^{\log n})$,
where $n$ is the bit-size of the input. ... Such a complexity is smaller than $L(\epsilon)$ for any $\epsilon > 0$.
I would like to plot each of these functions together to compare their growth. The problem is that the second function grows much faster than the first, which implies I am misinterpreting something.
So what is the correct way to interpret and compare these functions?
Contents
Equality-Constrained Quadratic Programs
Inequality-Constrained Quadratic Programs
Linear Least-Squares Problem
Equality-constrained quadratic programs are QPs where only equality constraints are present. They arise both in applications (e.g., structural analysis) and as subproblems in active set methods for solving the general QPs. Consider the equality-constrained quadratic program: \[ \begin{array}{lll} EQP: & \min_x & \frac{1}{2} x^T Q x + c^T x \\ & \mbox{s.t.} & A x = b \end{array} \]
The first-order necessary conditions for \(x^*\) to be a solution of EQP state that there is a vector \(\lambda^*\) such that the following system of equations, the KKT system, is satisfied:
\[\begin{bmatrix} Q & -A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} x^*\\ \lambda^* \end{bmatrix} = \begin{bmatrix} -c \\ b \end{bmatrix} \]
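For a small dense problem, the KKT system can be assembled and solved directly. A minimal sketch with a made-up example (minimize \(x_1^2+x_2^2-2x_1-5x_2\) subject to \(x_1+x_2=3\)):

```python
import numpy as np

# Assemble and solve the KKT system [Q -A^T; A 0][x; lam] = [-c; b].
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -5.0])
A = np.array([[1.0, 1.0]])
b = np.array([3.0])

n, m = Q.shape[0], A.shape[0]
K = np.block([[Q, -A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([-c, b]))
x_star, lam_star = sol[:n], sol[n:]
```

Here the unique solution is \(x^*=(0.75, 2.25)\) with multiplier \(\lambda^*=-0.5\).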
If we express \(x^*\) as \(x^* = x + p\) where \(x\) is an estimate of the solution and \(p\) is a step, we obtain an alternative form:
\[ \begin{bmatrix} Q & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} -p \\ \lambda^* \end{bmatrix} = \begin{bmatrix} c + Qx \\ Ax - b \end{bmatrix} \] The matrix is called the KKT matrix. Let \(Z\) denote the \(n \times (n-m)\) matrix whose columns are a basis for the null space of \(A\). When \(A\) has full row rank and the reduced-Hessian matrix \(Z^T Q Z\) is positive definite, there is a unique vector pair \((x^*, \lambda^*)\) that satisfies the KKT system. There are several methods for solving the KKT system. Range-space methods can be used when \(Q\) is positive definite and easy to invert (for example, diagonal or block-diagonal). Multiplying the first equation in the alternative form above by \(A Q^{-1}\) and subtracting the second equation, we obtain a linear system in the vector \(\lambda^*\): \[ (A Q^{-1} A^T)\lambda^* = (A Q^{-1} c + b)\] Then, we recover \(p\) by solving \[Qp = A^T \lambda^* - (c + Qx).\] Null-space methods require a null-space basis matrix \(Z\). This matrix can be computed with orthogonal factorizations or, for sparse problems, by LU factorization of a submatrix of \(A\). Given a feasible vector \(x_0\), we can express any other feasible vector \(x\) in the form \(x = x_0 + Z w\) for some \(w \in R^{n-m}\). Direct computation shows that the equality-constrained subproblem EQP is equivalent to the unconstrained subproblem
\[ \min_w \; \frac{1}{2} w^T (Z^T Q Z) w + (Q x_0 + c)^T Z w.\]
If the reduced Hessian matrix \(Z^T Q Z \) is positive definite, then the unique solution \(w^* \,\) of this subproblem can be obtained by solving the linear system
\[ (Z^T Q Z) w = - Z^T (Q x_0 + c).\]
The solution \(x^*\) of the equality-constrained subproblem EQP is then recovered by using \(x = x_0 + Z w\). Lagrange multipliers can be computed from \(x^*\) by noting that the first-order condition for optimality in EQP is that there exists a multiplier vector \(\lambda^* \,\) such that \(Q x^* + c + A^T \lambda^* = 0\). If \(A\) has full row rank, then \(\lambda^* = - (A A^T)^{-1} A (Q x^* + c)\) is the unique set of multipliers. Most traditional codes use null-space methods.
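The null-space recipe just described can be sketched in a few lines of NumPy, computing \(Z\) from the SVD of \(A\) (one of the orthogonal factorizations mentioned above):

```python
import numpy as np

def solve_eqp(Q, c, A, b):
    """Null-space method for min 1/2 x^T Q x + c^T x subject to A x = b."""
    x0 = np.linalg.lstsq(A, b, rcond=None)[0]      # a particular feasible point
    _, s, Vt = np.linalg.svd(A)
    Z = Vt[(s > 1e-12).sum():].T                   # columns span null(A)
    # reduced system (Z^T Q Z) w = -Z^T (Q x0 + c)
    w = np.linalg.solve(Z.T @ Q @ Z, -Z.T @ (Q @ x0 + c))
    x = x0 + Z @ w
    # multipliers from Q x + c + A^T lam = 0, solved in the least-squares sense
    lam = np.linalg.lstsq(A.T, -(Q @ x + c), rcond=None)[0]
    return x, lam

# example: min 1/2 ||x||^2 subject to x1 + x2 + x3 = 3 and x1 - x3 = 0
Q3, c3 = np.eye(3), np.zeros(3)
A3 = np.array([[1.0, 1.0, 1.0], [1.0, 0.0, -1.0]])
b3 = np.array([3.0, 0.0])
x_ns, lam_ns = solve_eqp(Q3, c3, A3, b3)
```

For this example the minimizer is \(x^*=(1,1,1)\).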
Inequality-constrained quadratic programs are QPs that contain inequality constraints and possibly equality constraints. Active set methods can be applied to both convex and nonconvex problems. Gradient projection methods, which allow rapid changes in the active set, are most effective for QPs with only bound constraints. Interior point methods work well for large convex QPs. Active set methods start by finding a feasible point during an initial phase and then search for a solution along the edges and faces of the feasible set by solving a sequence of equality-constrained QPs. Active set methods differ from the simplex method for linear programming in that neither the iterates nor the solution need to be vertices of the feasible set. When the quadratic programming problem is nonconvex, these methods usually find a local minimizer. Finding a global minimizer is a more difficult task.
The
active set \(\mathcal{A}(x^*)\) at an optimal point \(x^*\) is defined as the indices of the constraints at which equality holds: \[ \mathcal{A}(x^*) = \{i \in \mathcal{E} \cup \mathcal{I}: a_i^T x^* = b_i\}.\] If \(\mathcal{A}(x^*)\) were known, the solution could be found by solving an equality-constrained QP of the form: \[ \begin{array}{ll} \min_x & q(x) = \frac{1}{2} x^T Q x + c^T x \\ \mbox{s.t.} & a_i^T x = b_i \quad \forall i \in \mathcal{A}(x^*) \end{array}\] Therefore, the main challenge in solving inequality-constrained QPs is determining this set.
Given a feasible \(x_k\), these methods find a direction \(d_k\) by solving the subproblem
\[ \begin{array}{lll} \mbox{EQP}_k & \min & q(x_k + d) \\ & \mbox{s.t.} & a_i^T(x_k + d) = b_i\qquad i \in \mathcal{W}_k \end{array} \] where \(q\) is the objective function \(q(x) = \frac{1}{2} x^T Q x + c^T x\) and \(\mathcal{W}_k \) is a working set of constraints. In all cases, \(\mathcal{W}_k\) is a subset of \(\mathcal{A}(x_k) = \{ i \in \mathcal{I} : a_i^T x_k = b_i \} \cup \mathcal{E}\), the set of constraints that are active at \(x_k\). Typically, \(\mathcal{W}_k\) either is equal to \(\mathcal{A}(x_k)\) or else has one fewer index than \(\mathcal{A}(x_k)\).
The working set \(\mathcal{W}_k\) is updated at each iteration with the aim of determining the set \(\mathcal{A}^*\) of active constraints at a solution \(x^*\). When \(\mathcal{W}_k\) is equal to \(\mathcal{A}^*\), a local minimizer of the original problem can be obtained as a solution of the equality-constrained subproblem \(\mbox{EQP}_k\). The updating of \(\mathcal{W}_k\) depends on the solution of the direction-finding subproblem.
Subproblem \(\mbox{EQP}_k\) has a solution if the reduced Hessian matrix \(Z_k^T Q Z_k\) is positive definite. This is always the case if \(Q\) is positive definite. If subproblem \(\mbox{EQP}_k\) has a solution \(d_k\), we compute the largest possible step
\[\mu_k = \min\left\{ \frac{b_i - a_i^T x_k}{a_i^T d_k}: \; a_i^T d_k \lt 0, \; i \not \in \mathcal{W}_k \right\}\] that does not violate any constraints (for constraints \(a_i^T x \geq b_i\), only indices with \(a_i^T d_k \lt 0\) can block the step), and we set \(x_{k+1} = x_k + \alpha_k d_k\), where \(\alpha_k = \min\{ 1 , \mu_k \}\).
The step \(\alpha_k = 1\) would take us to the minimizer of the objective function on the subspace defined by the current working set, but it may be necessary to truncate this step if a new constraint is encountered. The working set is updated by including in \(\mathcal{W}_{k+1}\) all constraints active at \(x_{k+1}\).
If the solution to subproblem \(\mbox{EQP}_k\) is \(d_k=0\), then \(x_k\) is the minimizer of the objective function on the subspace defined by \(\mathcal{W}_k\). First-order optimality conditions for subproblem \(\mbox{EQP}_k\) imply that there are multipliers \(\lambda_i^{(k)}\) such that \(Q x_k + c + \sum_{i \in \mathcal{W}_k} \lambda_i^{(k)} a_i = 0\).
If \(\lambda_i^{(k)} \geq 0\) for all \(i \in \mathcal{W}_k\), then \(x_k\) is a local minimizer of problem QP. Otherwise, we obtain \(\mathcal{W}_{k+1}\) by deleting one of the indices \(i\) for which \(\lambda_i^{(k)} \lt 0\). As in the case of linear programming, various pricing schemes for making this choice can be implemented.
If the reduced Hessian matrix \(Z_k^T Q Z_k\) is indefinite, then subproblem \(\mbox{EQP}_k\) is unbounded below. In this case we need to determine a direction \(d_k\) such that \(q(x_k + \alpha d_k)\) is unbounded below, using techniques based on factorizations of the reduced Hessian matrix. Given \(d_k\), we compute \(\mu_k\) as before, and define \(x_{k+1} = x_k + \mu_k d_k\). The new working set \(\mathcal{W}_{k+1}\) is obtained by adding to \(\mathcal{W}_k\) all constraints active at \(x_{k+1}\).
A key to the efficient implementation of active set methods is the reuse of information from solving the equality-constrained subproblem at the next iteration. The only difference between consecutive subproblems is that the working set grows or shrinks by a single component. Efficient codes perform updates of the matrix factorizations obtained at the previous iteration, rather than calculating them from scratch each time.
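The whole active-set loop for a convex QP with constraints \(a_i^T x \geq b_i\) can be sketched compactly (without the factorization updates just mentioned; each subproblem is re-solved from scratch). The example data is the classic two-variable problem from Nocedal and Wright's Numerical Optimization:

```python
import numpy as np

def active_set_qp(Q, c, A, b, x, W, tol=1e-9, max_iter=50):
    """Primal active-set method for min 1/2 x^T Q x + c^T x s.t. A x >= b.
    x must be feasible and W a working set (list of row indices) at x."""
    W = set(W)
    for _ in range(max_iter):
        Wl = sorted(W)
        g = Q @ x + c
        if Wl:
            Aw, m = A[Wl], len(Wl)
            K = np.block([[Q, Aw.T], [Aw, np.zeros((m, m))]])
            sol = np.linalg.solve(K, np.concatenate([-g, np.zeros(m)]))
            d, lam = sol[:len(x)], -sol[len(x):]   # lam: multipliers of a_i^T x >= b_i
        else:
            d, lam = np.linalg.solve(Q, -g), np.array([])
        if np.linalg.norm(d) < tol:
            if lam.size == 0 or lam.min() >= -tol:
                return x, dict(zip(Wl, lam))       # KKT satisfied: done
            W.remove(Wl[int(np.argmin(lam))])      # drop most negative multiplier
            continue
        alpha, block = 1.0, None                   # ratio test over inactive rows
        for i in range(A.shape[0]):
            ad = A[i] @ d
            if i not in W and ad < -tol:           # only these can block the step
                step = (b[i] - A[i] @ x) / ad
                if step < alpha:
                    alpha, block = step, i
        x = x + alpha * d
        if block is not None:
            W.add(block)
    raise RuntimeError("no convergence")

# example: min (x1-1)^2 + (x2-2.5)^2 subject to
# x1-2x2 >= -2, -x1-2x2 >= -6, -x1+2x2 >= -2, x1 >= 0, x2 >= 0
Q = 2.0 * np.eye(2)
c = np.array([-2.0, -5.0])
A = np.array([[1.0, -2.0], [-1.0, -2.0], [-1.0, 2.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([-2.0, -6.0, -2.0, 0.0, 0.0])
x_opt, mult = active_set_qp(Q, c, A, b, x=np.array([2.0, 0.0]), W=[2, 4])
```

Starting from the vertex \((2,0)\), the iteration drops both starting constraints and terminates at \(x^*=(1.4, 1.7)\) with a nonnegative multiplier on the single remaining active constraint.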
Path-following methods (also known as trajectory-following, barrier or interior-point methods) offer a good alternative to the earlier active-set methods. Although path-following methods may be applied to the general problem QP, it is easier to describe them for problems of the form \[ \begin{array}{lll} QP2: & \min & \frac{1}{2} x^T Q x + c^T x\\ & \mbox{s.t.} & A x = b \\ & & x \geq 0 \end{array} \] for which first-order optimality conditions are that \[\begin{array}{lll} A x^* & = & b \\ Qx^* + c & = & A^T y^* + z^*\\ x_i^* z_i^* & = & 0 \; \forall i = \{1,\dots,n\} \end{array}\] for optimal primal variables \(x^* \geq 0\), Lagrange multipliers \(y_{}^*\) and dual variables \(z^* \geq 0\).
In their simplest form, interior-point methods trace the central path that is defined as the solution \(v_{}^{}(t) = (x(t),y(t),z(t))\) to the parametric nonlinear system \(A x(t) = b, \;\; Qx(t) + c = A^T y(t) + z(t)\) and \(x_i^{}(t) z_i^{}(t) = t\) for \(i = \{1,\dots,n\}\) with \((x_{}^{}(t),z(t)) > 0\) as the scalar \(t_{}\) decreases to 0. Notice that all points on the central path are primal and dual feasible, and that complementary slackness is achieved in the limit as \(t\) approaches 0. A disadvantage of this simple idea is that a point \(v_{}^{}(t_0)\) must be available for some \(t_0 > 0\), but such a point may be found as a first-order critical point of the logarithmic-barrier function \(\frac{1}{2} x^T Q x + c^T x - t_0 \sum_{i=1}^n \log x_i\) within the region \(A_{}^{} x = b\); indeed, early path-following methods were based on a sequential minimization of the logarithmic barrier function.
To cope with this potential deficiency, infeasible interior point methods start from any \(v_{}^s = (x_{}^{s},y_{}^{s},z_{}^{s})\) for which \((x_{}^{s},z_{}^{s}) > 0\) and follow instead the trajectory \(v_{}^{}(t)\) that satisfies the homotopy \(A x_{}^{}(t) - b = \theta(t) [ A x_{}^s - b ]\) , \(Q x^{}_{}(t) + c - A^T y(t) - z(t) = \theta(t) [ Qx_{}^s + c - A^T y^s - z^s ]\), and
\(x_i^{}(t) z_i^{}(t) = \theta(t) x_i^s z_i^s\) for \(i = \{1,\dots,n\}\) as \(t_{}\) decreases from 1 to 0. The scalar function \(\theta_{}^{}(t)\) may be any increasing function for which \(\theta^{}_{}(0) = 0\) and \(\theta_{}^{}(1) = 1\). The simplest choice \(\theta_{}^{}(t) = t\) is popular, but there are theoretical advantages in using \(\theta_{}^{}(t) = t^2\) since then the unknown trajectory \(v_{}^{}(t)\) is analytic for convex problems at \(t_{} = 0\).
In practice, it is sometimes advantageous for numerical reasons to aim for a small value of the complementarity instead of zero, and in this case the complementary slackness part of the homotopy may be replaced by
\[ x_i^{}(t) z_i^{}(t) = \theta(t) x_i^s z_i^s + [1-\theta(t)] \sigma \;\; \forall i = \{1,\dots,n\}\] and some small centering parameter \(\sigma_{}^{} > 0\).
Notice that all of these homotopies define their trajectories \(v_{}^{}(t)\) implicitly; all that is known is the starting point \(v_{}^s\). Many path-following methods replace the true but unknown \(v_{}^{}(t)\) by a Taylor series approximation \(v_{}^{s}(t)\) evaluated about \(v_{}^s\) and trace this approximation instead. Clearly, as \(v_{}^{s}(t)\) is simply an approximation, it will most likely diverge from \(v_{}^{}(t)\) as \(t_{}\) decreases from 1 towards 0. To cope with this, sophisticated safeguarding rules are used to decide how far \(t_{}\) may decrease while giving an adequate approximation, and if \(t^l_{}\) is this best value, \(v_{}^s\) is replaced by \(v_{}^{s}(t^l)\) and the process repeated. The resulting iteration defines a typical path-following method.
The centering parameter is sometimes computed after an initial predictor step (a first-order Taylor approximation with \(\sigma = 0\)) is used to compute an estimate of the solution. Once \(\sigma > 0\) is known, the Taylor approximation to the revised homotopy gives the corrector step.
The Taylor series coefficients are found by repeated differentiation of the homotopy equations with respect to \(t_{}\), and the \(k\) th order coefficients \((x^{(k)}, y^{(k)}, z^{(k)})\) may be obtained by solving the linear system
\[\begin{pmatrix} A & 0 & 0 \\ Q & - A^T & - I\\ Z^s & 0 & X^s \end{pmatrix} \begin{pmatrix} x^{(k)} \\ y^{(k)} \\ z^{(k)} \end{pmatrix} = \begin{pmatrix} r_p^{(k)} \\ r_d^{(k)} \\ r_c^{(k)} \end{pmatrix} \doteq r^{(k)},\] where \(X^s_{}\) and \(Z^s_{}\) are the diagonal matrices whose diagonal entries are \(x^s_{}\) and \(z^s_{}\) respectively, and the right-hand side \(r^{(k)}\) depends on the values of previously-calculated lower-order coefficients. Since the coefficient matrix is the same for each order of coefficients, a single factorization enables us to find increasingly accurate Taylor approximations at a gradually increasing but reasonable cost. Block elimination of the system results in the smaller, symmetric system \[\begin{pmatrix} Q + (X^s)^{-1} Z^s & A^T \\ A & 0 \end{pmatrix} \begin{pmatrix} x^{(k)} \\ - y^{(k)} \end{pmatrix} = \begin{pmatrix} r_d^{(k)} + (X^s_{})^{-1}r_c^{(k)} \\ r_p^{(k)} \end{pmatrix},\] and this is usually exploited in practice; the variables \(z^{(k)}\) may be recovered as \((X^s)^{-1}[r_c^{(k)} - Z^s_{} x_{}^{(k)}]\). Some algorithms seek to avoid possible numerical difficulties by regularizing these defining systems. For example, the coefficient matrix above is replaced by \[ \begin{pmatrix} Q + (X^s)^{-1} Z^s + D_p & A^T \\ A & - D_d \end{pmatrix} \] where \(D_p\) and \(D_d\) are small, positive-definite diagonal perturbations. Other algorithms try to avoid these difficulties by pre-processing the data to remove singularities. Both techniques appear to work well in practice.
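A bare-bones version of one such iteration, restricted to first-order (Newton) steps on the perturbed optimality conditions of QP2 with a fixed centering parameter, can be sketched as follows. This is an illustration of the structure of the linear systems, not production code; the step-length rule is the simplest fraction-to-the-boundary safeguard:

```python
import numpy as np

def qp_ipm(Q, c, A, b, iters=60, sigma=0.1):
    """Toy primal-dual path-following method for
    min 1/2 x^T Q x + c^T x  s.t.  A x = b, x >= 0 (Q positive definite)."""
    n, m = Q.shape[0], A.shape[0]
    x, y, z = np.ones(n), np.zeros(m), np.ones(n)
    for _ in range(iters):
        mu = x @ z / n                             # complementarity measure
        r = np.concatenate([A @ x - b, Q @ x + c - A.T @ y - z])
        if mu < 1e-12 and np.linalg.norm(r) < 1e-10:
            break                                  # converged
        rhs = -np.concatenate([r, x * z - sigma * mu])
        K = np.block([[A, np.zeros((m, m)), np.zeros((m, n))],
                      [Q, -A.T, -np.eye(n)],
                      [np.diag(z), np.zeros((n, m)), np.diag(x)]])
        step = np.linalg.solve(K, rhs)
        dx, dy, dz = step[:n], step[n:n + m], step[n + m:]
        alpha = 1.0                                # keep x, z strictly positive
        for v, dv in ((x, dx), (z, dz)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.99 * float((v[neg] / -dv[neg]).min()))
        x, y, z = x + alpha * dx, y + alpha * dy, z + alpha * dz
    return x

# example: min 1/2 (x1^2 + x2^2) - 3 x1  subject to  x1 + x2 = 2, x >= 0
x_ip = qp_ipm(np.eye(2), np.array([-3.0, 0.0]),
              np.array([[1.0, 1.0]]), np.array([2.0]))
```

For this example the bound on \(x_2\) is active at the solution \(x^*=(2,0)\).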
If the problem is convex, iterations of the form described can be shown to converge very fast and in a polynomially-bounded number of iterations. For non-convex problems, most methods prefer instead to approximately minimize the logarithmic barrier function for a decreasing sequence of values of \(t_0\) using a globally-convergent (linesearch or trust-region) method typically used for linearly-constrained optimization; many of the details - such as the structure of the vital linear systems - are effectively the same as in the convex case.
Linear least-squares problems are special instances of convex quadratic programs that arise frequently in data-fitting applications. The linear least-squares problem is
\[ \begin{array}{lllll} LLS: & \min & \frac{1}{2} \| C x - d \|_2^2 & & \\ & \mbox{s.t.} & a_i^T x & = & b_i \; \forall i \in \mathcal{E}\\ & & a_i^T x & \geq & b_i \; \forall i \in \mathcal{I} \end{array} \] where \(C \in R^{m\times n}\) and \(d \in R^m\). LLS is a special case of problem QP, which can be seen by replacing \(Q\) by \(C^T C\) and \(c\) by \(-C^T d\) in problem QP (the constant term \(\frac{1}{2} d^T d\) does not affect the minimizer). In general, it is preferable to solve a least squares problem with a code that takes advantage of the special structure of the least squares problem. LSSOL is a Fortran package for constrained linear least-squares problems and convex QPs. See Lawson and Hanson (1974) for more information. Lawson, C. L. and Hanson, R. J. 1974. Solving Least Squares Problems, Prentice-Hall, Englewood Cliffs, NJ.
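The reduction is easy to verify numerically: with \(Q = C^T C\) and \(c = -C^T d\) (note the minus sign, which comes from expanding \(\frac{1}{2}\|Cx-d\|_2^2\)), the unconstrained QP minimizer coincides with the least-squares solution:

```python
import numpy as np

# Check the QP/least-squares equivalence on random data.
rng = np.random.default_rng(0)
C = rng.normal(size=(6, 3))
d = rng.normal(size=6)
Q, c = C.T @ C, -(C.T @ d)
x_qp = np.linalg.solve(Q, -c)                  # minimizer of 1/2 x^T Q x + c^T x
x_ls = np.linalg.lstsq(C, d, rcond=None)[0]    # least-squares solution of C x ~ d
```

Both computations return the same vector (here \(C\) has full column rank, so the QP is strictly convex).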
Answer
The given statement $$\cos140^\circ=\cos60^\circ\cos80^\circ-\sin60^\circ\sin80^\circ$$ is true.
Work Step by Step
$$\cos140^\circ=\cos60^\circ\cos80^\circ-\sin60^\circ\sin80^\circ$$ As $140^\circ=60^\circ+80^\circ$, $$\cos140^\circ=\cos(60^\circ+80^\circ)$$ Now we use the cosine sum identity $$\cos(A+B)=\cos A\cos B-\sin A\sin B$$ (be extremely careful about the sign in the middle) to expand $\cos(60^\circ+80^\circ)$: $$\cos140^\circ=\cos60^\circ\cos80^\circ-\sin60^\circ\sin80^\circ$$ This means that the statement $$\cos140^\circ=\cos60^\circ\cos80^\circ-\sin60^\circ\sin80^\circ$$ is true.
Department of Mathematics
University of California Los Angeles
Los Angeles, CA 90095-1555
Office: Math Sciences 6334
Phone: (310) 794 5317
Fax: (310) 206 6673
Email:
Forcing with ultrafilters. Logic Colloquium 2009, Sofia, Bulgaria, plenary talk.
Steel forcing in reverse mathematics. CIRM workshop on Set Theory, Luminy, France, 2008, and Steel VIG, UCLA, 2009.
Determinacy and large cardinals. International Congress of Mathematicians, Logic and Foundations of Mathematics section, Madrid, Spain, 2006.
Set theory, infinite games, and strong axioms. Wissenschaftskolleg zu Berlin, Germany, 2005, general audience talk.
The determinacy of long games. De Gruyter Series in Logic and its Applications, Volume 7, Walter de Gruyter and Co., Berlin, November 2004.
Thomas Gilton and Itay Neeman, Abraham-Rubin-Shelah open colorings and a large continuum.
James Cummings, Yair Hayut, Menachem Magidor, Itay Neeman, Dima Sinapova, and Spencer Unger, The ineffable tree property and failure of the singular cardinals hypothesis.
James Cummings, Yair Hayut, Menachem Magidor, Itay Neeman, Dima Sinapova, and Spencer Unger, The tree property at the two immediate successors of a singular cardinal.
Itay Neeman and John Susice, Chang's Conjecture with \(\Box_{\omega_1,2}\) from an \(\omega_1\)-Erdos Cardinal.
Itay Neeman and Zach Norwood, Coding along trees and generic absoluteness.
Omer Ben-Neria, Moti Gitik, Itay Neeman, and Spencer Unger, On the powerset of singular cardinals in HOD.
William Chen and Itay Neeman, On the relationship between mutual and tight stationarity.
Thomas Gilton and Itay Neeman, Side conditions and iteration theorems.
Itay Neeman and Zach Norwood, Happy and MAD families in \(L({\mathbb R})\), J. of Symbolic Logic, vol. 83 (2018), pp.572–597.
Itay Neeman, Two applications of finite side conditions at \(\omega_2\), Archive for Mathematical Logic, vol. 56 (2017), special issue dedicated to the memory of James Baumgartner, pp. 983–1036.
Itay Neeman and John Steel, Equiconsistencies at subcompact cardinals, Archive for Mathematical Logic, vol. 55 (2016), special issue dedicated to the memory of Richard Laver, pp. 207–238.
Itay Neeman, An inner models proof of the Kechris–Martin Theorem, in Ordinal definability and recursion theory, the Cabal Seminar Vol. III, Kechris, Loewe, Steel editors, pp. 220–242, Cambridge University Press, 2016.
William Chen and Itay Neeman, Square principles with tail-end agreement, Archive for Mathematical Logic, vol. 54 (2015), pp. 439–452.
Itay Neeman, The tree property up to \(\aleph_{\omega+1}\), J. Symbolic Logic, vol. 79 (2014), pp. 429–459.
Itay Neeman, Forcing with sequences of models of two types, Notre Dame J. Formal Logic, vol. 55 (2014), pp. 265–298.
Itay Neeman and Spencer Unger, Aronszajn trees and the SCH. In
Appalachian Set Theory 2006–2012 (Cummings, Schimmerling, eds.), pp. 187–206, LMS Lecture Notes Series 406, 2013.
Itay Neeman, Necessary use of \(\Sigma^1_1\) induction in a reversal. J. Symbolic Logic, vol. 76 (2011), pp. 561–574.
Andreas Blass, Yuri Gurevich, Michal Moskal, and Itay Neeman, Evidential Authorization. In
The future of software engineering (Nanz, ed.), pp. 73–99, Springer, 2011.
Itay Neeman, Ultrafilters and large cardinals. In
Ultrafilters across Mathematics (Bergelson, Blass, Di Nasso, Jin, eds.), Contemporary Mathematics Vol. 530, pp. 181–200, AMS, 2010.
Paul Larson, Itay Neeman, and Saharon Shelah, Universally measurable sets in generic extensions. Fund. Math., vol. 208 (2010), pp. 173–192.
Itay Neeman, Aronszajn trees and failure of the singular cardinal hypothesis. J. of Mathematical Logic, vol. 9 (2009, published 2010) 139–157.
Gunter Fuchs, Itay Neeman, and Ralf Schindler, A criterion for coarse iterability. Archive for Mathematical Logic, vol. 49 (2010), pp. 447–467.
Itay Neeman, Determinacy in \(L({\mathbb R})\). In
Handbook of Set Theory (Foreman, Kanamori, eds.), pp. 1887–1950, Springer, 2010.
Yuri Gurevich and Itay Neeman, The logic of infons. Bull. Eur. Assoc. Theor. Comput. Sci. EATCS, No. 98 (2009), pp. 150–178, and ACM Trans. Comput. Log. 12 (2011) no. 2, Art. 9. See also technical report MSR-TR-2011-90.
Itay Neeman, Monadic theories of wellorders. In
Logic, Methodology and Philosophy of Science, Proceedings of the Thirteenth International Congress (Glymour, Wei, Westerstahl, eds.), pp. 108–121, College Publications, 2009.
Itay Neeman, The strength of Jullien's indecomposability theorem. J. of Mathematical Logic, vol. 8 (2008, published June 2009), pp. 93–119.
Yuri Gurevich and Itay Neeman, DKAL: Distributed Knowledge Authorization Language.
Proceedings of the 21st IEEE Computer Security Foundations Symposium, pp. 149–162, IEEE Computer Society, 2008. See also technical report MSR-TR-2008-09.
Itay Neeman, Propagation of the scale property using games. In
Games, scales, and Suslin cardinals, the Cabal seminar vol. I (Kechris, Loewe, Steel, eds.), pp. 75–89, Lecture Notes in Logic 31, 2008.
Itay Neeman, Monadic definability of ordinals. In
Computational Prospects of Infinity, Part II, Presented Talks, pp. 193–205, Lect. Notes Ser. Inst. Math. Sci. Natl. Univ. Singap. 15, World Scientific Publishing, 2008.
Itay Neeman, Hierarchies of forcing axioms II. J. of Symbolic Logic, vol. 73 (2008), pp. 522–542.
Itay Neeman, Finite state automata and monadic definability of singular cardinals. J. of Symbolic Logic, vol. 73 (2008), pp. 412–438.
Itay Neeman and Ernest Schimmerling, Hierarchies of forcing axioms I. J. of Symbolic Logic, vol. 73 (2008), pp. 343–362.
Itay Neeman, Games of length \(\omega_1\). J. of Mathematical Logic, vol. 7 (2007), pp. 83–124.
Alex Andretta, Greg Hjorth, and Itay Neeman, Effective cardinals of boldface pointclasses. J. of Mathematical Logic, vol. 7 (2007), pp. 35–82.
Moti Gitik, Itay Neeman, and Dima Sinapova, A cardinal preserving extension making the set of points of countable \(V\) cofinality nonstationary. Archive for Mathematical Logic, vol. 46 (2007), pp. 451–456.
Itay Neeman, Inner models and ultrafilters in \(L({\mathbb R})\). Bull. of Symbolic Logic, vol. 13 (2007), pp. 31–53.
Itay Neeman, Determinacy and Large Cardinals.
Proceedings of the International Congress of Mathematicians, vol. II, Madrid 2006, pp. 27–43, European Math. Society Publishing House, 2007.
Itay Neeman and John Steel, Counterexamples to the unique and cofinal branches hypotheses. J. of Symbolic Logic, vol. 71 (2006) pp. 977–988.
Itay Neeman, Determinacy for games ending at the first admissible relative to the play. J. of Symbolic Logic, vol. 71 (2006), pp. 425–459.
Itay Neeman, Unraveling \(\Pi^1_1\) sets, revisited. Israel J. of Math., vol. 152 (2006), pp. 181–203.
Itay Neeman, An introduction to proofs of determinacy of long games.
Logic Colloquium ’01, pp. 43–86, Lecture Notes in Logic No. 20, Association for Symbolic Logic, Urbana, IL, 2005.
Itay Neeman, The Mitchell order below rank-to-rank. J. of Symbolic Logic, vol. 69 (2004), pp. 1143–1162.
Donald A. Martin, Itay Neeman, and Marco Vervoort, The strength of Blackwell determinacy. J. of Symb. Logic, vol. 68 (2003), pp. 615–636.
Itay Neeman, Optimal proofs of determinacy II. J. of Math. Logic, vol. 2 (2002), pp. 227–258.
Itay Neeman, Inner models in the region of a Woodin limit of Woodin cardinals. Ann. of Pure and Applied Logic, vol. 116 (2002), pp. 67–155.
Alex Andretta, Itay Neeman, and John Steel, The domestic levels of \(K^c\) are iterable. Israel J. of Math., vol. 125 (2001), pp. 157–201.
Itay Neeman and Jindrich Zapletal, Proper forcing and \(L({\mathbb R})\). J. of Symb. Logic, vol. 66 (2001), pp. 801–810.
Itay Neeman, Unraveling \(\Pi^1_1\) sets. Ann. of Pure and Applied Logic, vol. 106 (2000), pp. 151–205.
Itay Neeman and John Steel, A weak Dodd–Jensen lemma. J. of Symb. Logic, vol. 64 (1999), pp. 1285–1294.
Itay Neeman, Games of countable length. In
Sets and Proofs, LMS Lecture Note Series 258, pp. 159–196, Cambridge University Press, 1999.
Itay Neeman and Jindrich Zapletal, Proper forcing and absoluteness in \(L({\mathbb R})\). Commentationes Math. Uni. Carolinea, vol. 39 (1998), pp. 281–301.
Itay Neeman, Optimal proofs of determinacy. Bull. of Symb. Logic, vol. 1 (1995), pp. 327–339. |
This question actually has a very easy and rigorous answer. Having which-path information "available" is just a crude way of saying that the system is correlated with
anything else. Usually, this is because the system has been decohered in whatever basis corresponds to the possible paths, which is usually the position basis. In your case, the photon is actually never put into a coherent local superposition, and so interference will not be seen. Instead, the SPDC process essentially creates a Bell state where one photon is thrown away. Schematically, the situation you describe is as follows. The splitting process is
$\vert S \rangle \otimes \vert I \rangle \to \frac{1}{\sqrt{2}} \Big [ \vert S_L \rangle \otimes \vert I_L \rangle + \vert S_R \rangle \otimes \vert I_R \rangle \Big ] = \vert \psi \rangle \qquad \qquad \qquad (1)$
where $S$ and $I$ stand for the signal and idler photons, respectively, and $L$ and $R$ stand for the left and right paths. The reduced state of the signal photon is
$\rho^{(S)}=\mathrm{Tr}_I\Big[\vert \psi \rangle \langle \psi \vert \Big]$
(If you don't know what $\mathrm{Tr}$ means, or what a density matrix is, you absolutely must go learn about them. It doesn't take that long, and is crucial for understanding this question.) The measurement performed by the apparatus is essentially a measurement in the basis $\{ \vert \pm \rangle = (\vert S_L \rangle \pm \vert S_R \rangle)/\sqrt{2} \} $. Here, getting a "plus" result in the laboratory means seeing the photon near a peak on the screen, and a "minus" result is seeing it in a trough.
You can check that measuring $\rho^{(S)}$ in the $\{ \vert \pm \rangle \}$ basis (or, in fact, any basis at all) gives equal probability of either outcome. This means no interference pattern, since photons are evenly spread over peaks and troughs. In particular, this is true no matter what happens to the idler photon; it could be carefully measured, or thrown away.
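These two claims (equal probabilities from the Bell state, certainty from a lone coherent superposition) can be checked with a few lines of linear algebra. Here is a minimal sketch using numpy, with the path states encoded as 2-component basis vectors:

```python
import numpy as np

# Path states |S_L>, |S_R> of the signal and |I_L>, |I_R> of the idler
SL, SR = np.array([1.0, 0.0]), np.array([0.0, 1.0])
IL, IR = SL, SR

# Bell state of equation (1): (|S_L I_L> + |S_R I_R>)/sqrt(2)
psi = (np.kron(SL, IL) + np.kron(SR, IR)) / np.sqrt(2)
rho = np.outer(psi, psi)

# Partial trace over the idler gives the reduced state of the signal
rho_S = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# Measure in the basis |+-> = (|S_L> +- |S_R>)/sqrt(2)
plus = (SL + SR) / np.sqrt(2)
minus = (SL - SR) / np.sqrt(2)
p_plus, p_minus = plus @ rho_S @ plus, minus @ rho_S @ minus
print(p_plus, p_minus)          # both are ~0.5 -> no interference

# Contrast: a lone coherent superposition (|S_L> + |S_R>)/sqrt(2)
psi_coh = (SL + SR) / np.sqrt(2)
print(abs(plus @ psi_coh) ** 2)  # ~1.0 -> all photons land on peaks
```

The partial trace here is just `np.trace` over the idler's two axes; the reduced state comes out as the maximally mixed state $I/2$, which gives the same probabilities in every basis.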
On the other hand, if you simply send the photon into a double slit experiment by sending it through a small hole and allowing the photon to enter either slit without being correlated with anything else, the evolution looks like
$\vert S \rangle \to \frac{1}{\sqrt{2}} \Big [ \vert S_L \rangle + \vert S_R \rangle \Big ] \quad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (2)$
which doesn't involve a second photon that "knows" anything. In this case, a measurement in the $\{ \vert \pm \rangle \}$ basis gives "plus" with certainty (or near certainty), meaning we see an interference pattern because all (or most) of the photons only land at the peaks.
Finally, suppose we place a second particle like a spin-up electron in front of the right slit such that the electron's spin flips if and only if the photon brushes by it on the way through the right slit. In this case we'd get
$\frac{1}{\sqrt{2}} \Big [ \vert S_L \rangle + \vert S_R \rangle \Big ] \otimes \vert e_\uparrow \rangle \to \frac{1}{\sqrt{2}} \Big [ \vert S_L \rangle \otimes \vert e_\uparrow \rangle + \vert S_R \rangle \otimes \vert e_\downarrow \rangle \Big ] \qquad \qquad (3)$
Now, although nothing has really happened to the signal photon as it passed through the right slit--it doesn't, say, get slowed down or deflected--the electron now knows where the photon is. In fact, this state is identical to the first one we considered, except with the electron in place of the idler photon. If we make a measurement on the signal photon, we now get either outcome with equal likelihood, meaning the interference pattern is lost.
The process of the electron getting entangled with the photon is known as
decoherence. (Note that we only use that word when the electron is lost, as it usually is. If the electron were still accessible and could potentially be brought back to interact again with the photon, we'd just say they had become entangled.) Decoherence is the key process, and plays a fundamental role in understanding how "classicality" arises in a fundamentally quantum world.

Edit:
Make sure not to confuse two possible situations. The first is where the momenta of the idler and signal photons are correlated, and the slits are positioned to simply select for one of two possible outcomes, corresponding to equation (1) above:
The second is where the signal photon's spread over $L$ and $R$ is not caused by an initial event correlating it with the idler photon, but simply by its own coherent spreading when it is restricted to pass through a small hole, corresponding to equation (2):
Note here that there is no violation of conservation of momentum, a subtle (for beginners) consequence of the infinite-dimensional aspect of the photon's Hilbert space. (The fact that the two-slit experiment is the canonical example for introducing quantum weirdness is unfortunate because of these complications.) When the photon is confined to a small initial slit, it necessarily has a wide transverse momentum spread.
It might be helpful to concatenate these two cases:
Here, the idler photon is initially entangled with the signal photon, but the wall with the single slit destroys the signal photon for the $X/R_1$ outcome. When $Y/L_1$ happens, the signal photon can now be sent through two slits to produce an interference pattern. The idler photon's direction $X$ vs. $Y$ was correlated with the signal photon's $L_1$ vs. $R_1$, but it is never correlated with $L_2$ vs. $R_2$.
What is Calculus? Calculus is all about the comparison of quantities which vary in a non-linear way. It has significant applications in Science and Engineering. Many of the topics that we study, like acceleration, velocity and current in a circuit, do not behave in a linear fashion. If quantities are changing continually, we need calculus to study them. Calculus is the branch of mathematics that deals with continuous change.
Table of Contents: Calculus Topics, Definitions of Important Topics in Calculus, Applications of Calculus, Examples, Basic Calculus.

List of some of the important calculus topics: Limits and Continuity, Differential Calculus, Differentiation and Integration, Limits, Derivatives, Integral Calculus, Continuity, Derivative of a Function, Integration, Continuity and Differentiability, Quotient Rule, Methods of Integration, Mean Value Theorem, Chain Rule, Definite Integral, Second Derivative Test, Anti-derivative Formula, Indefinite Integral, Applications of Integration, Integration by Parts, Derivatives of Inverse Trigonometric Functions, Discontinuity, Cauchy-Riemann Equations, Integrating Factor, Partial Differential Equations, Exact Differential Equations, Riemann Sums, Applications of Derivatives, L'Hospital's Rule, Simpson's Rule, Trapezoidal Rule, Line Integral, Surface Integral.

Let us discuss definitions for some of the important calculus topics.

Differential Calculus Basics

Limits
The degree of closeness to any value, or the approaching term. A limit is normally expressed using the limit formula as: \(\lim_{x \rightarrow c}f(x)=A\)
It is read as “the limit of f of x as x approaches c equals A”.
Derivatives
The instantaneous rate of change of a quantity with respect to another. The derivative of a function is represented as: \(f'(x) = \lim_{h \to 0}\frac{f(x+h)-f(x)}{h}\)
Continuity
A function is said to be continuous at a particular point if the following three conditions are satisfied:

1. f(a) is defined
2. \(\lim_{x \to a}f(x)\) exists
3. \(\lim_{x \to a^-}f(x) = \lim_{x \to a^+}f(x) = f(a)\)

Continuity and Differentiability
A function is always continuous at a point where it is differentiable, whereas the converse is not always true.
Quotient Rule
The Quotient rule is a method for determining the derivative (differentiation) of a function which is in fractional form.
Chain Rule
The rule applied for finding the derivative of the composition of a function is basically known as the chain rule.
Integral Calculus Basics
The study of integrals and their properties. It is mostly useful for the following two purposes:

1. To calculate f from f′ (i.e. from its derivative). If a function f is differentiable in the interval of consideration, then f′ is defined in that interval.
2. To calculate the area under a curve.

Integration
Integration is the inverse of differentiation. As differentiation can be understood as dividing a whole into many small parts, integration can be understood as collecting small parts to form a whole. It is generally used for calculating areas.
Definite Integral
A definite integral has a specific boundary within which the function needs to be calculated. When the lower limit and upper limit of the independent variable of a function are specified, its integration is described using a definite integral. A definite integral is denoted as: \(\int_a^b f(x)\,dx = F(b) - F(a)\)
Indefinite Integral
An indefinite integral does not have a specific boundary, i.e. no upper or lower limit is defined. Thus the integration value is always accompanied by a constant value (C). It is denoted as: \(\int f(x)\,dx = F(x) + C\)
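The connection between the two notions can be checked numerically. A small sketch in Python (the function x² and the interval [0, 1] are just illustrative choices):

```python
# f(x) = x^2 has antiderivative F(x) = x^3/3, so the definite integral
# on [a, b] is F(b) - F(a), independent of the constant C.
F = lambda x: x**3 / 3

a, b = 0.0, 1.0
exact = F(b) - F(a)            # 1/3

# The same value from a midpoint Riemann sum with n subintervals
n = 100_000
h = (b - a) / n
riemann = sum((a + (i + 0.5) * h)**2 for i in range(n)) * h
print(abs(riemann - exact) < 1e-6)  # True
```

The constant C cancels in F(b) − F(a), which is why a definite integral is a number while an indefinite integral is a family of functions.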
Calculus Applications
In real life, concepts of calculus play a major role, whether in finding the area of complicated shapes, evaluating the safety of vehicles, analysing survey data for business planning, tracking credit card payment records, or finding how the changing conditions of a system affect us. Calculus is often used by physicians, economists, biologists, architects and statisticians. For example, architects and engineers use concepts of calculus to determine the size and shape of the curves when designing bridges, roads, tunnels, etc.
Calculus Problems
Below are some examples based on calculus topics:
Problem 1: Let f(y) = e^y and g(y) = 10y. Use the chain rule to calculate h′(y) where h(y) = f(g(y)).

Solution: Given,

f(y) = e^y and g(y) = 10y

The first derivatives of the above functions are

f′(y) = e^y and g′(y) = 10

To find: h′(y)

Now, h(y) = f(g(y))

h′(y) = f′(g(y)) g′(y)

h′(y) = f′(10y) × 10

By substituting the values,

h′(y) = e^(10y) × 10

or h′(y) = 10 e^(10y)
Problem 2: Integrate sin 3x + 2x with respect to x.

Solution: The given instruction can be written as:

∫ (sin 3x + 2x) dx

Use the sum rule, which implies

∫ sin 3x dx + ∫ 2x dx ……… Equation 1

Solve ∫ sin 3x dx first, using the substitution method:

let 3x = u => 3 dx = du (after differentiation)

or dx = (1/3) du

=> ∫ sin 3x dx becomes ∫ sin u × (1/3) du

or (1/3) ∫ sin u du

which is (1/3)(−cos u) + C, where C = constant of integration

Substituting back, we get

∫ sin 3x dx = −cos(3x)/3 + C ……… Equation 2

Now solve ∫ 2x dx:

∫ 2x dx = 2 ∫ x dx = 2 · x²/2 + C = x² + C ……… Equation 3

Equation (1) => ∫ sin 3x dx + ∫ 2x dx

= −cos(3x)/3 + x² + C
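Both answers can be sanity-checked numerically with central differences. A quick sketch (the evaluation points y₀ = 0.1 and x₀ = 0.7 are arbitrary choices):

```python
import math

# Problem 1 check: h(y) = f(g(y)) with f(y) = e^y and g(y) = 10y,
# so h(y) = e^(10y), and the chain rule gives h'(y) = 10 e^(10y).
h = lambda y: math.exp(10 * y)
h_prime = lambda y: 10 * math.exp(10 * y)

y0, eps = 0.1, 1e-6
numeric = (h(y0 + eps) - h(y0 - eps)) / (2 * eps)   # central difference
print(abs(numeric - h_prime(y0)) < 1e-3)            # True

# Problem 2 check: d/dx [ -cos(3x)/3 + x^2 ] should equal sin(3x) + 2x
F = lambda x: -math.cos(3 * x) / 3 + x**2
x0 = 0.7
numeric_F = (F(x0 + eps) - F(x0 - eps)) / (2 * eps)
print(abs(numeric_F - (math.sin(3 * x0) + 2 * x0)) < 1e-6)  # True
```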
Practice more questions on calculus topics such as limits and derivatives, integration using different methods, and continuity.
Notations :
AUT$(X)$ (where $X$ is a graph) is the group of automorphisms of the graph $X$
$G=\langle A \rangle $ means group $G$ is generated by set $A$.
$G_{\{\Delta\}}$ is the set-wise stabiliser of $\Delta$
STAB
Input : $A \subseteq \text{Sym}(\Omega)$ and $\Delta \subseteq \Omega$
Find : Generators of $\langle A \rangle_{\{\Delta\}} = \{g \in \langle A \rangle \mid \Delta ^ g = \Delta\}$

ISO
Given : $X_1 = (V,E_1)$ and $X_2 = (V,E_2)$
Find : Is $X_1$ isomorphic to $X_2$?

Claim : ISO $\le_{P}$ STAB (polynomial time reduction)
Proof : Take the disjoint union $X=X_1 \cup X_2$ and note that AUT$(X) \le \text{Sym}(V) =G$.
$G$ acts on the set $\binom{V}{2}$, i.e. the set of all unordered pairs of vertices. Clearly $E \subseteq \binom{V}{2}$, and AUT$(X) = G_{\{E\}}$ under the above group action.
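On a tiny example, the identity AUT$(X) = G_{\{E\}}$ can be verified by brute force over all of $\text{Sym}(V)$. A sketch in Python (the 4-vertex graph consisting of two disjoint edges is just an illustration):

```python
from itertools import permutations

# Brute-force sanity check on a tiny graph: AUT(X) equals the set-wise
# stabiliser G_{E} of the edge set E under Sym(V) acting on unordered pairs.
V = range(4)
E = {frozenset({0, 1}), frozenset({2, 3})}   # two disjoint edges

def image_of_edges(g):
    # g is a tuple: vertex v maps to g[v]
    return {frozenset(g[v] for v in e) for e in E}

aut = [g for g in permutations(V) if image_of_edges(g) == E]
print(len(aut))  # 8 = 2 (swap the two edges) * 2 * 2 (flip within each edge)
```

Of course, the whole point of the reduction is that STAB avoids this exponential enumeration; the brute force is only there to make the equality AUT$(X) = G_{\{E\}}$ concrete.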
Question : Is this a polynomial time reduction from a decision problem to a non-decision problem? Please note that it is possible that I have misunderstood the reduction.
Reference : Poly time computation in groups by E. M. Luks, see page 3.
A Q-pid is a necklace bearing a series of tag-engraved physics equations, containing all of the necessary security and operations protocols to integrate a terminal location into the Chiral Network. All established strand points are accessible to those bearing the required equations.
Sam is given a Q-pid upon undertaking his expedition across America.
List of equations
Inscribed equations are processed in the following order, and include:
$ \partial_t q = \underline{\underline{D}} \,\nabla^2 q + R(q) $ Reaction-diffusion equation, represented in the general form.

$ i \gamma^\mu \partial_\mu \psi(x) - m \psi(x) = 0 $ Dirac equation, a part of quantum mechanics relating to Einstein's special relativity. It predicted the existence of antimatter.

$ \phi = \begin{pmatrix} \phi_1 \\ \phi_2 \end{pmatrix} $ Higgs field equation (the Higgs doublet), in a simplified form.

$ G_{\mu\nu} + \Lambda g_{\mu\nu} = \kappa T_{\mu\nu} $ Einstein field equation, which describes gravitation, and notably uses Einstein's gravitational constant $ \kappa $.

Quantum entanglement state.

$ r_g = \frac{2 G M}{c^2} $ Schwarzschild radius, the radius to which an object would have to be compressed (i.e., increasing its density without losing any of its mass) in order to turn it into a black hole.
Let $A$ be a set. I define the cardinality of $A$ as $|A| := \{B \mid B \sim A\}$. But is $|A|$ a set?
Thanks in advance!!
If $A$ is non-empty then the answer is no.
To see this, note that you can fix $a\in A$ and pick any $x\notin A$ and consider $A_x=(A\setminus\{a\})\cup\{x\}$. Show that this is an injection from $V\setminus A=\{x\mid x\notin A\}$ into $|A|$.
Is $V\setminus A$ a set?
No, it is not, unless $A$ is empty. If it were, then (for example) taking a singleton $A$, we would have that $|A|$ is the set of
all singletons, which is equinumerous to the set-theoretic universe in question, and then we've encountered Russell's paradox.
There are several work-arounds for this. Typically, if $A$ is well-orderable, then $|A|$ is taken to be the least ordinal equinumerous to $A$; otherwise, $|A|$ is taken to be the set of sets of least rank that are equinumerous to $A$. |
Heat transfer by conduction is one of four types of heat transfer: mixing, conduction, convection and radiation. It is the second most efficient form of heat transfer after mixing. It can be abbreviated as \(Q_c\). It is the flow of heat between two objects at different temperatures that are in contact with each other.
Heat Conduction Formula
(Eq. 1) \(\large{ Q_c = \frac { k \; A \; \Delta T \; t } {d} }\)
(Eq. 2) \(\large{ Q_c = \frac { k \; A \; \left( T_h \;-\; T_c \right) \; t } {d} }\)
Where:
\(\large{ Q_c }\) = heat transfer by conduction
\(\large{ A }\) = cross-sectional area
\(\large{ T_c }\) = cooler temperature
\(\large{ T_h }\) = higher temperature
\(\large{ \Delta T }\) = temperature differential
\(\large{ t }\) = time taken
\(\large{ k }\) or \(\large{ \lambda }\) (Greek symbol lambda) = thermal conductivity

\(\large{ d }\) = thickness of the material
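A quick numeric sketch of Eq. 1. All the material values below are illustrative assumptions, with k roughly that of copper:

```python
# Conductive heat flow, Eq. 1:  Q_c = k * A * dT * t / d
k = 385.0    # thermal conductivity, W/(m*K)  (assumed: roughly copper)
A = 0.5      # cross-sectional area, m^2
dT = 30.0    # temperature differential T_h - T_c, K
t = 60.0     # time taken, s
d = 0.01     # thickness of the material, m

Q_c = k * A * dT * t / d
print(Q_c)   # ~3.465e7 J conducted in one minute
```

Note how the result scales: doubling the area or the temperature difference doubles the heat flow, while doubling the thickness halves it.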
Chi Li, Xiaowei Wang, and Chenyang Xu proved the quasi-projectivity of the moduli space of smooth Kähler-Einstein Fano manifolds. My question concerns the case when the central fibre $X_0$ of a Kähler-Einstein Fano fibration $X\to B$ has log-terminal singularities. Is the moduli space of singular Kähler-Einstein Fano fibres then quasi-projective (an old conjecture due to Gang Tian)?
In fact, if we prove that the Lelong number of the singular hermitian metric $(L_{CM},h_{WP})$ corresponding to the Weil-Petersson current on the virtual CM-bundle vanishes, then $L_{CM}$ is nef; and if we know the nefness of $L_{CM}$, then due to a result of E. Viehweg we can prove that the moduli space of Kähler-Einstein Fano manifolds $\mathcal M$ is quasi-projective and its compactification $\bar{\mathcal M}$ is projective.
(We know that if $(L,h)$ is a positive, singular hermitian line bundle whose Lelong numbers vanish everywhere, then $L$ is nef.) But Gang Tian recently proved that the CM-bundle $L_{CM}$ is positive.
But by the same Theorem 2 and Proposition 6 of the paper: Georg Schumacher and Hajime Tsuji, Quasi-projectivity of moduli spaces of polarized varieties, Annals of Mathematics, 159 (2004), 597–639,
by assuming the diameter of the fibers $X_t$ is uniformly bounded (which is my question), we can show that the Weil-Petersson current on the moduli space of Fano Kähler-Einstein varieties has zero Lelong number (in the smooth case it is easy).
Or Theorem 3.4 of http://www.mathematik.uni-marburg.de/~schumac/doubar.pdf
A theorem of Donaldson-Sun states that if the $X_t$ are Kähler-Einstein metrics with negative Ricci curvature and uniform diameter bound, then the central fiber is normal with at worst klt singularities. In view of the moduli theory of canonically polarized varieties, the limit of the fibers should have canonical singularities.
When the fibres are polarized Calabi-Yau varieties, the diameter of the fibres is uniformly bounded iff the central fibre $X_0$ has canonical singularities:
$$diam(X_t,\omega_t)\leq 2+C\int_{X_t}\Omega_t\wedge\bar\Omega_t$$
My question is: if the $X_t$ are Kähler-Einstein metrics with positive Ricci curvature and uniform diameter bound, is the central fiber normal with at worst log-terminal singularities, and vice versa? In fact, by knowing this fact we can construct a canonical Weil-Petersson metric on the moduli space of Kähler-Einstein Fano varieties with mild singularities.
A canonical pulsar can be described as a ball of mass $M \approx 1.44 \, M_{\odot}$ and radius $R \approx 10 \, \mathrm{km}$, rotating with a period of about $P \approx 5 \, \mathrm{ms}$. It also has a typical magnetic field of around $B_{\text{pole}} \sim 10^{6} \, \mathrm{tesla} = 10^{10} \, \mathrm{gauss}$ (roughly). The field can be approximated as a
dipolar magnetic field. Because of the emission of dipolar electromagnetic radiation, the pulsar loses some energy, thus reducing its angular velocity $\omega \equiv 2 \pi / P$ (and maybe its polar magnetic field) :\begin{equation}\tag{1}\frac{dE_{\text{rad}}}{dt} = -\, \frac{\mu_0 \, \mu^2 \, \omega^4}{6 \pi c^3} \, \sin^2 {\alpha},\end{equation}where $\mu$ is the magnetic moment of the star, and $\alpha$ is the tilt angle relative to the rotation axis. The magnetic field at the poles has this intensity :\begin{equation}\tag{2}B_{\text{pole}} = \frac{\mu_0 \, \mu}{2 \pi R^3},\end{equation}where $R$ is the radius (assumed to be a constant) of the star. The rotation kinetic energy and the magnetic energy stored into the dipolar magnetic field (assuming that the internal field is uniform) can be added together :\begin{equation}\tag{3}K_{\text{rot}} + U_{\text{magn}} = \frac{1}{2} \, I \, \omega^2 + \frac{\mu_0 \, \mu^2}{4 \pi R^3},\end{equation}where $I \approx \frac{2}{5} \, M R^2$ is the moment of inertia of the star.
The time derivative of (3) should be equal to the power lost (1) : \begin{equation}\tag{4} \frac{dE}{dt} = I \, \omega \, \dot{\omega} + \frac{\mu_0 \, \mu \, \dot{\mu}}{2 \pi R^3} = -\, \frac{\mu_0 \, \mu^2 \, \omega^4}{6 \pi c^3} \, \sin^2 {\alpha}. \end{equation}
Now the problem is the following. It is usually assumed that the star will slow down by the electromagnetic emission, so $\dot{\omega} \ne 0$. In all textbooks and lectures I have seen, the magnetic energy is not added in (3)-(4). But yet it is known that the magnetic field intensity may also be evolving (i.e. decaying) with time. If I neglect the rotation frequency decrease (i.e. consider $\omega = \text{constant}$), I get this from (4) : \begin{equation}\tag{5} \dot{\mu} = -\, \Big( \frac{\omega^4 \, R^3}{3 c^3} \, \sin^2 {\alpha} \Big) \mu \equiv -\, \lambda \, \mu. \end{equation} This is a linear differential equation, of solution $\mu(t) = \mu(0) \, e^{- \lambda \, t}$. Thus, the polar magnetic field (2) is exponentially decaying with time. For our canonical pulsar, this gives a half-life of about $22.5 \, \mathrm{s}$ for the field decay, if $\alpha = 90^{\circ}$.
How can we justify that this decay mode is negligible relative to the rotation decay ? I.e how can we justify that $\dot{\mu} \approx 0$ while $\dot{\omega} \ne 0$ ?
EDIT 1 : If we assume $\dot{\mu} = 0$, equation (4) gives another differential equation for $\omega(t)$. It gives this solution, which is not exponential :\begin{equation}\tag{6}P(t) = P_0 \sqrt{1 + \kappa \, t},\end{equation}where $\kappa$ is a complicated constant :\begin{equation}\tag{7}\kappa = \frac{4 \pi \mu_0 \, \mu^2 \sin^2 {\alpha}}{3 I c^3 \, P_0^2} = \frac{5 (2 \pi)^3}{3 \mu_0 \, c^3} \, \frac{B_{\text{pole}}^2 \, R^4}{M P_0^2} \sin^2 {\alpha}.\end{equation}According to (6), the constant $\tau = \kappa^{-1}$ is the characteristic time of the period evolution. For our canonical pulsar defined at the beginning, with $\alpha = 90^{\circ}$ and $B_{\text{pole}} \approx 10^6 \, \mathrm{tesla}$, (7) gives this time length :\begin{equation}\tau \approx 5.87 \times 10^{14} \, \mathrm{s} \sim \text{18.6 million years}.\end{equation}This is the model usually considered for a braking pulsar. But I think that the scenario (5) is also valid in its own right and should be considered as a possibility.
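The quoted timescale can be reproduced numerically from the second form of (7). A quick sketch, with the constants rounded to the usual textbook values:

```python
import math

# Braking timescale tau = 1/kappa from Eq. (7), second form,
# for the canonical pulsar defined above with alpha = 90 degrees.
mu0 = 4 * math.pi * 1e-7      # vacuum permeability, T*m/A
c = 2.998e8                   # speed of light, m/s
M_sun = 1.989e30              # solar mass, kg

B = 1e6                       # polar field B_pole, tesla
R = 1e4                       # stellar radius, m
M = 1.44 * M_sun              # stellar mass, kg
P0 = 5e-3                     # initial period, s

kappa = (5 * (2 * math.pi)**3 / (3 * mu0 * c**3)) * B**2 * R**4 / (M * P0**2)
tau = 1 / kappa
print(tau)   # ~5.9e14 s, i.e. roughly 19 million years, matching the text
```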
Skills to Develop
To use Spearman rank correlation to test the association between two ranked variables, or one ranked variable and one measurement variable. You can also use Spearman rank correlation instead of linear regression/correlation for two measurement variables if you're worried about non-normality, but this is not usually necessary.

When to use it
Use Spearman rank correlation when you have two ranked variables, and you want to see whether the two variables covary; whether, as one variable increases, the other variable tends to increase or decrease. You also use Spearman rank correlation if you have one measurement variable and one ranked variable; in this case, you convert the measurement variable to ranks and use Spearman rank correlation on the two sets of ranks.
For example, Melfi and Poyser (2007) observed the behavior of \(6\) male colobus monkeys (Colobus guereza) in a zoo. By seeing which monkeys pushed other monkeys out of their way, they were able to rank the monkeys in a dominance hierarchy, from most dominant to least dominant. This is a ranked variable; while the researchers know that Erroll is dominant over Milo because Erroll pushes Milo out of his way, and Milo is dominant over Fraiser, they don't know whether the difference in dominance between Erroll and Milo is larger or smaller than the difference in dominance between Milo and Fraiser. After determining the dominance rankings, Melfi and Poyser (2007) counted eggs of Trichuris nematodes per gram of monkey feces, a measurement variable. They wanted to know whether social dominance was associated with the number of nematode eggs, so they converted eggs per gram of feces to ranks and used Spearman rank correlation.
Monkey name | Dominance rank | Eggs per gram | Eggs per gram (rank)
Erroll | 1 | 5777 | 1
Milo | 2 | 4225 | 2
Fraiser | 3 | 2674 | 3
Fergus | 4 | 1249 | 4
Kabul | 5 | 749 | 6
Hope | 6 | 870 | 5
Some people use Spearman rank correlation as a non-parametric alternative to linear regression and correlation when they have two measurement variables and one or both of them may not be normally distributed; this requires converting both measurements to ranks. Linear regression and correlation assume that the data are normally distributed, while Spearman rank correlation does not make this assumption, so people think that Spearman correlation is better. In fact, numerous simulation studies have shown that linear regression and correlation are not sensitive to non-normality; one or both measurement variables can be very non-normal, and the probability of a false positive (\(P<0.05\), when the null hypothesis is true) is still about \(0.05\) (Edgell and Noon 1984, and references therein). It's not incorrect to use Spearman rank correlation for two measurement variables, but linear regression and correlation are much more commonly used and are familiar to more people, so I recommend using linear regression and correlation any time you have two measurement variables, even if they look non-normal.
Null hypothesis
The null hypothesis is that the Spearman correlation coefficient, \(\rho \) ("rho"), is \(0\). A \(\rho \) of \(0\) means that the ranks of one variable do not covary with the ranks of the other variable; in other words, as the ranks of one variable increase, the ranks of the other variable do not increase (or decrease).
Assumption
When you use Spearman rank correlation on one or two measurement variables converted to ranks, it does not assume that the measurements are normal or homoscedastic. It also doesn't assume the relationship is linear; you can use Spearman rank correlation even if the association between the variables is curved, as long as the underlying relationship is monotonic (as \(X\) gets larger, \(Y\) keeps getting larger, or keeps getting smaller). If you have a non-monotonic relationship (as \(X\) gets larger, \(Y\) gets larger and then gets smaller, or \(Y\) gets smaller and then gets larger, or something more complicated), you shouldn't use Spearman rank correlation.
Like linear regression and correlation, Spearman rank correlation assumes that the observations are independent.
How the test works
Spearman rank correlation starts by converting each measurement variable to ranks.
When you use linear regression and correlation on the ranks, the Pearson correlation coefficient (\(r\)) is now the Spearman correlation coefficient, \(\rho \), and you can use it as a measure of the strength of the association. For \(11\) or more observations, you calculate the test statistic using the same equation as for linear regression and correlation, substituting \(\rho \) for \(r\): \(t_s=\frac{\sqrt{d.f.}\times \rho}{\sqrt{1-\rho ^2}}\). If the null hypothesis (that \(\rho =0\)) is true, \(t_s\) is \(t\)-distributed with \(n-2\) degrees of freedom.
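As a sketch of the computation (with invented data; NumPy and SciPy assumed available): Spearman's \(\rho\) is just Pearson's \(r\) applied to the ranks, and the \(t\) approximation can be cross-checked against `scipy.stats.spearmanr`.

```python
import numpy as np
from scipy.stats import pearsonr, rankdata, spearmanr, t

# Invented data, n = 11 observations.
x = np.arange(1, 12)
y = np.array([2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 11])

# Spearman's rho is Pearson's r computed on the ranks.
rho, _ = pearsonr(rankdata(x), rankdata(y))

# Test statistic t_s = sqrt(d.f.) * rho / sqrt(1 - rho^2), with d.f. = n - 2;
# under the null hypothesis it is t-distributed.
df = len(x) - 2
t_s = np.sqrt(df) * rho / np.sqrt(1 - rho**2)
p = 2 * t.sf(abs(t_s), df)

# scipy's built-in agrees on rho and uses the same t approximation for P.
rho_check, p_check = spearmanr(x, y)
```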
If you have \(10\) or fewer observations, the \(P\) value calculated from the \(t\)-distribution is somewhat inaccurate. In that case, you should look up the \(P\) value in a table of Spearman t-statistics for your sample size. My Spearman spreadsheet does this for you.
You will almost never use a regression line for either description or prediction when you do Spearman rank correlation, so don't calculate the equivalent of a regression line.
For the Colobus monkey example, Spearman's \(\rho \) is \(0.943\), and the \(P\) value from the table is less than \(0.025\), so the association between social dominance and nematode eggs is significant.
Example
Fig. 5.2.1 Magnificent frigatebird, Fregata magnificens.
Volume (cm\(^3\))   Frequency (Hz)
1760    529
2040    566
2440    473
2550    461
2730    465
2740    532
3010    484
3080    527
3370    488
3740    485
4910    478
5090    434
5090    468
5380    449
5850    425
6730    389
6990    421
7960    416
Males of the magnificent frigatebird (Fregata magnificens) have a large red throat pouch. They visually display this pouch and use it to make a drumming sound when seeking mates. Madsen et al. (2004) wanted to know whether females, who presumably choose mates based on their pouch size, could use the pitch of the drumming sound as an indicator of pouch size. The authors estimated the volume of the pouch and the fundamental frequency of the drumming sound in \(18\) males.
There are two measurement variables, pouch size and pitch. The authors analyzed the data using Spearman rank correlation, which converts the measurement variables to ranks, and the relationship between the variables is significant (Spearman's \(\rho =-0.76,\; 16 d.f.,\; P=0.0002\)). The authors do not explain why they used Spearman rank correlation; if they had used regular correlation, they would have obtained \(r=-0.82,\; P=0.00003\).
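The published value can be reproduced from the data table above (a sketch assuming NumPy and SciPy are available):

```python
import numpy as np
from scipy.stats import spearmanr

# Pouch volume (cm^3) and fundamental frequency (Hz) for the 18 males,
# from the table above (Madsen et al. 2004).
volume = np.array([1760, 2040, 2440, 2550, 2730, 2740, 3010, 3080, 3370,
                   3740, 4910, 5090, 5090, 5380, 5850, 6730, 6990, 7960])
frequency = np.array([529, 566, 473, 461, 465, 532, 484, 527, 488,
                      485, 478, 434, 468, 449, 425, 389, 421, 416])

rho, p = spearmanr(volume, frequency)
print(rho, p)  # rho ≈ -0.76, P ≈ 0.0002: bigger pouches drum at lower pitch
```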
Graphing the results
You can graph Spearman rank correlation data the same way you would for a linear regression or correlation. Don't put a regression line on the graph, however; it would be misleading to put a linear regression line on a graph when you've analyzed it with rank correlation.
How to do the test
Spreadsheet
I've put together a spreadsheet, spearman.xls, that will perform a Spearman rank correlation on up to \(1000\) observations. With small numbers of observations (\(10\) or fewer), the spreadsheet looks up the \(P\) value in a table of critical values.
Web page
This web page will do Spearman rank correlation.
R
Salvatore Mangiafico's \(R\) Companion has a sample R program for Spearman rank correlation.
SAS
Use PROC CORR with the SPEARMAN option to do Spearman rank correlation. Here is an example using the bird data from the correlation and regression web page:
PROC CORR DATA=birds SPEARMAN;
  VAR species latitude;
RUN;
The results include the Spearman correlation coefficient \(\rho \), analogous to the \(r\) value of a regular correlation, and the \(P\) value:
Spearman Correlation Coefficients, \(N = 17\)
Prob > |r| under H0: Rho=0

            species      latitude
species     1.00000      -0.36263   (Spearman correlation coefficient)
                          0.1526    (P value)
latitude    -0.36263      1.00000
             0.1526

References
Edgell, S.E., and S.M. Noon. 1984. Effect of violation of normality on the t-test of the correlation coefficient. Psychological Bulletin 95: 576-583.
Madsen, V., T.J.S. Balsby, T. Dabelsteen, and J.L. Osorno. 2004. Bimodal signaling of a sexually selected trait: gular pouch drumming in the magnificent frigatebird. Condor 106: 156-160.
Melfi, V., and F. Poyser. 2007. Trichuris burdens in zoo-housed Colobus guereza. International Journal of Primatology 28: 1449-1456.
Contributor
John H. McDonald (University of Delaware)
ISSN: 1531-3492 | eISSN: 1553-524X
Discrete & Continuous Dynamical Systems - B
July 2016 , Volume 21 , Issue 5
Special issue dedicated to Lishang Jiang on his 80th birthday
Abstract:
We dedicate this volume of the Journal of Discrete and Continuous Dynamical Systems-B to Professor Lishang Jiang on his 80th birthday. Professor Lishang Jiang was born in Shanghai in 1935. His family had migrated there from Suzhou. He graduated from the Department of Mathematics, Peking University, in 1954. After teaching at Beijing Aviation College, in 1957 he returned to Peking University as a graduate student of partial differential equations under the supervision of Professor Yulin Zhou. Later, as a professor, a researcher and an administrator, he worked at Peking University, Suzhou University and Tongji University at different points of his career. From 1989 to 1996, Professor Jiang was the President of Suzhou University. From 2001 to 2005, he was the Chairman of the Shanghai Mathematical Society.
Abstract:
This paper investigates positive solutions of a second order linear elliptic equation in an unbounded cylinder with zero boundary condition. We prove that there exist two special positive solutions with exponential growth at one end and exponential decay at the other, and that all positive solutions are linear combinations of these two.
Abstract:
In this paper we discuss the optimal liquidation over a finite time horizon until the exit time. The drift and diffusion terms of the asset price are general functions depending on all variables including control and market regime. There is also a local nonlinear transaction cost associated to the liquidation. The model deals with both the permanent impact and the temporary impact in a regime switching framework. The problem can be solved with the dynamic programming principle. The optimal value function is the unique continuous viscosity solution to the HJB equation and can be computed with the finite difference method.
Abstract:
The following type of parabolic Barenblatt equations
$$\min\left\{\partial_t V - \mathcal{L}_1 V,\; \partial_t V - \mathcal{L}_2 V\right\} = 0$$
is studied, where $\mathcal{L}_1$ and $\mathcal{L}_2$ are different elliptic operators of second order. The (unknown) free boundary of the problem is a divisional curve, which is the optimal insured boundary in our stochastic control problem. It will be proved that the free boundary is a differentiable curve.
To the best of our knowledge, this is the first result on free boundary for Barenblatt Equation. We will establish the model and verification theorem by the use of stochastic analysis. The existence of classical solution to the HJB equation and the differentiability of free boundary are obtained by PDE techniques.
Abstract:
Based on the optimal estimate of convergence rate $O(\Delta x)$ of the value function of an explicit finite difference scheme for the American put option problem in [6], an $O(\sqrt{\Delta x})$ rate of convergence of the free boundary resulting from a general compatible numerical scheme to the true free boundary is proven. A new criterion for the compatibility of a generic numerical scheme to the PDE problem is presented. A numerical example is also included.
Abstract:
In this note, we remove the technical assumption $\gamma>0$ imposed by Dai et al. [SIAM J. Control Optim., 48 (2009), pp. 1134-1154], who consider the optimal investment and consumption decision of a CRRA investor facing proportional transaction costs and a finite time horizon. Moreover, we present an estimate on the resulting optimal consumption.
Abstract:
Recent years have seen a dramatic increase in the number and variety of new mathematical models describing biological processes. Some of these models are formulated as free boundary problems for systems of PDEs. Relevant biological questions give rise to interesting mathematical questions regarding properties of the solutions. In this review we focus on models whose formulation includes Stokes equations. They arise in describing the evolution of tumors, both at the macroscopic and molecular levels, in wound healing of cutaneous wounds, and in biofilms. We state recent results and formulate some open problems.
Abstract:
To capture the impact of spatial heterogeneity of the environment and the available resources of the public health system on the persistence and extinction of an infectious disease, a simplified spatial SIS reaction-diffusion model with allocation and use efficiency of medical resources is proposed. A nonlinear, space-dependent recovery rate is introduced to model the impact of available public health resources on the transmission dynamics of the disease. The basic reproduction numbers associated with the disease in the spatial setting are defined, and the low, moderate and high risks of the environment are then classified. Our results show that complicated dynamical behaviors of the system are induced by variation in the use efficiency of medical resources, which suggests that maintaining an appropriate level of public health resources and managing them well are important for controlling and preventing the temporal-spatial spread of the infectious disease. Numerical simulations are presented to illustrate the impact of the use efficiency of medical resources on control of the spread of the infectious disease.
Abstract:
This paper introduces a new class of optimal switching problems, where the player is allowed to switch at a sequence of exogenous Poisson arrival times, and the underlying switching system is governed by an infinite horizon backward stochastic differential equation system. The value function and the optimal switching strategy are characterized by the solution of the underlying switching system. In a Markovian setting, the paper gives a complete description of the structure of switching regions by means of the comparison principle.
Abstract:
This paper is concerned with a coupled Navier-Stokes/Allen-Cahn system describing a diffuse interface model for two-phase flow of viscous incompressible fluids with different densities in a bounded domain $\Omega\subset\mathbb R^N$ ($N=2,3$). We establish a criterion for possible breakdown of such solutions at finite time in terms of the temporal integral of both the maximum norm of the deformation tensor of the velocity gradient and the square of the maximum norm of the gradient of the phase field variable in 2D. In 3D, the temporal integral of the square of the maximum norm of the velocity is also needed. Here, we suppose the initial density function $\rho_0$ has a positive lower bound.
Abstract:
We show that solutions of equations of the form \[ -u_t+D_{11}u+(x^1)D_{22}u = f \] (and also more general equations in any number of dimensions) satisfy simple Hölder estimates involving their derivatives. We also examine some pointwise properties for these solutions. Our results generalize those of Daskalopoulos and Lee, and Hong and Huang.
Abstract:
In this paper we consider the following equation $$ u_t=(u^m)_{xx}+(u^n)_x, \ \ (x, t)\in \mathbb{R}\times(0, \infty) $$ with a Dirac measure as initial data, i.e., $u(x, 0)=\delta(x)$. The solution of this Cauchy problem is known as a source-type solution. In the recent work [11] the author studied the existence and uniqueness of such singular solutions and proved that there exists a number $n_0=m+2$ such that the equation has a unique source-type solution when $0 \leq n < n_0$. Here our attention is focused on nonexistence and on the asymptotic behavior near the origin for short times. We prove that $n_0$ is also a critical number in that no source-type solution exists when $n \geq n_0$, and we describe the short-time asymptotic behavior of the source-type solution when $0 \leq n < n_0$. Our result shows that, in the case of existence and for short times, the source-type solution behaves like the fundamental solution of the standard porous medium equation when $0 \leq n < m+1$; a unique self-similar source-type solution exists when $n = m+1$; and the solution behaves like the nonnegative fundamental entropy solution of the conservation law when $m+1 < n < n_0$. In the case of nonexistence, $n \geq n_0$, the singularity gradually disappears: the mass cannot concentrate for short times and no such singular solution exists. The results of the previous work [11] and of this paper together give a complete answer to this question.
Abstract:
In this paper we will introduce for a convex domain $K$ in the Euclidean plane a function $\Omega_{n}(K, \theta)$ which is called by us the biwidth of $K$, and then try to find out the least area convex domain with constant biwidth $\Lambda$ among all convex domains with the same constant biwidth. When $n$ is an odd integer, it is proved that our problem is just that of Blaschke-Lebesgue, and when $n$ is an even number, we give a lower bound of the area of such constant biwidth domains.
Abstract:
We apply the general theory of pricing in incomplete markets, due to the author, to the problem of pricing bonds for the Hull-White stochastic interest rate model. As pricing in incomplete markets involves more market parameters than the classical theory, and as the derived risk premium is time-dependent, the proposed methodology might offer a better way of replicating different shapes of the empirically observed yield curves. For example, the so-called humped yield curve can be obtained from a normal yield curve by only increasing the investor's risk aversion.
Abstract:
In this paper, we consider the compressible magnetohydrodynamic equations with nonnegative thermal conductivity and electric conductivity. The coefficients of the viscosity, heat conductivity and magnetic diffusivity depend on density and temperature. Inspired by the framework of [11], [13] and [15], we use the maximal regularity and contraction mapping argument to prove the existence and uniqueness of local strong solutions with positive initial density in the bounded domain for any dimension.
Abstract:
In this paper we present a new proof for the interior $C^{1,\alpha}$ regularity of weak solutions for a class of quasilinear elliptic equations, whose prototype is the $p$-Laplace equation.
Abstract:
An efficient parallelization method for numerically solving Lagrangian radiation hydrodynamic problems with three-temperature modeling on structural quadrilateral grids is presented. The three-temperature heat conduction equations are discretized by implicit scheme, and their computational cost are very expensive. Thus a parallel iterative method for three-temperature system of equations is constructed, which is based on domain decomposition for physical space, and combined with fixed point (Picard) nonlinear iteration to solve sub-domain problems. It can avoid global communication and can be naturally implemented on massive parallel computers. The space discretization of heat conduction equations uses the well-known local support operator method (LSOM). Numerical experiments show that the parallel iterative method preserves the same accuracy as the fully implicit scheme, and has high parallel efficiency and good stability, so it provides an effective solution procedure for numerical simulation of the radiation hydrodynamic problems on parallel computers.
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling
@heather well, there's a spectrum
so, there's things like New Journal of Physics and Physical Review X
which are the open-access branch of existing academic-society publishers
As far as the intensity of a single-photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density$$u=\frac{\hbar\omega}{V}$$is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di...
Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." — tparker3 mins ago
> A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service”
for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty
> for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing)
@0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals.
@BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work...
@BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions.
Alternative Plan: 1. Read Vol 1 of Hormander. 2. Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley.
I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea.
@EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results...
Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry.Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town...
@EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit. |
The question says it all... I know the second of them is mass * velocity, but what is the first one for, and when is it used? Also, what are its units, and is there a symbol for it?
Given a system of particles, the impulse exerted on the system during a time interval $[t_1, t_2]$ is defined as $$ \mathbf J(t_1, t_2) = \int_{t_1}^{t_2} dt\,\mathbf F(t) $$ where $\mathbf F$ is the net external force on the system. Since one can show that the net external force on a system is given by Newton's second law by $$ \mathbf F(t) = \dot{\mathbf P}(t) $$ where $\mathbf P$ is the total momentum of the system, one has $$ \mathbf J(t_1, t_2) = \mathbf P(t_2) - \mathbf P(t_1) $$ In other words, the impulse is equal to the change in momentum of the system. The dimensions of these quantities are the same, namely mass times velocity.
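Here is a quick numerical sanity check of $\mathbf J = \Delta \mathbf P$ in one dimension, with invented numbers (a force $F(t)=3t^2\,\mathrm{N}$ acting for two seconds on a $2\,\mathrm{kg}$ mass initially at rest; NumPy assumed available):

```python
import numpy as np

# Invented example: F(t) = 3 t^2 N on t in [0, 2] s, mass m = 2 kg, at rest at t = 0.
m = 2.0
t = np.linspace(0.0, 2.0, 100_001)
F = 3.0 * t**2

# Impulse J = integral of F dt, here via the trapezoidal rule.
J = np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(t))

# Analytically: J = t^3 evaluated from 0 to 2 = 8 N·s, so the final velocity
# is v = J/m = 4 m/s and the momentum change is Delta p = m*v - 0 = 8 kg·m/s.
delta_p = m * 4.0

print(J, delta_p)  # both ≈ 8.0: the impulse equals the change in momentum
```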
You can think of impulse as kind of the "net effect" that a force has in changing the state of motion of a system. Here is an example to illustrate what I mean.
Imagine you're pushing a shopping cart. Let's say you push the cart with a constant force for a short period of time versus a long period of time. When you push it for a short period of time, then the integral of the force with respect to time will be smaller than when you push it for a long period of time, and the result will be that the cart's momentum will not change as much. However, if you were to push the cart for a short period of time, but if you were to push it very hard, then you could make up for the short period of time for which the force acts and still get the cart going fast.
The Upshot: The impulse takes into consideration both the effect of the
force on the system, and the duration of time for which the force acts.
Impulse, represented by the symbol $J$, equals the change in momentum (the impulse-momentum theorem). The units of impulse are newton-seconds, which are algebraically the same units as momentum (kilogram metre per second). As its units hint, impulse is calculated by multiplying force by time (when the force is constant). Impulse is useful whenever there is a change in velocity.
More rigorously, impulse and momentum are related by the impulse-momentum theorem: $$J = \Delta p$$
The change in momentum of a system, $\Delta p$, between two points in time, $t_0$ and $t_f$, can be calculated with an integral, $$\int_{t_0}^{t_f} F(t)dt$$
where $F(t)$ is the net force on the system as a function of time. While algebraically momentum and impulse have the same units, it is helpful to distinguish the latter by using newton-seconds because impulse is, conceptually, not the same thing as momentum.
Impulse is a change in momentum. Naturally it has the same units. It's useful for whacks and thuds and bounces and other more or less sudden changes in motion.
Think of a golf ball struck by a club. Zero momentum initially. Whatever momentum it has after, as it begins to fly through the air. We might not know, or want to bother with, the exact instantaneous changing amount of force of the club on the ball. It happens quickly, and it's just the integral $\int^{after}_{before}F(t){dt}$ that matters to knowing how fast the ball is launched. We call this impulse, and it is equal to $p_{after}-p_{before}$.
One of my favourite quotes is from Stefan Banach: "A good mathematician sees analogies between theorems. A great mathematician sees analogies between analogies." This post is clearly in the former camp. I'm fairly sure it's a trivial thing, but it's not something I'd noticed before. One of the first seriousRead More →
Dear Uncle Colin, When I have an angle in the second quadrant, I can find it just fine using $\cos^{-1}$ - but using $\sin^{-1}$ or $\tan^{-1}$ gives me an angle in the fourth quadrant. I don't understand why this is! -- I Need Verbose Explanations; Radians Seem Excellent Hi, INVERSE,Read More →
Every so often, my muggle side and mathematical side conflict, and this clip from @marksettle shows one of them. My toddler's train track is freaking me out right now. What is going on here?! pic.twitter.com/9o8bVWF5KO — marc blank-settle (@MarcSettle) April 6, 2016 My muggle side says "wait, what, how canRead More →
Dear Uncle Colin, I'm struggling a bit with my C4 vectors. Most of it is fine, except when I have to find a point $P$ on a given line such that $\vec{AP}$ is perpendicular to the line, for some known $A$. How do I figure that out? -- Any VectorRead More →
This month on Wrong, But Useful, @reflectivemaths and @icecolbeveridge are joined by @evelynjlamb, who is Evelyn Lamb in real life. She writes the Roots Of Unity column for Scientific American. We discuss: How Evelyn got into maths, into writing and into France Evelyn picks the numbers of the podcast: 339,613Read More →
A nice puzzle this week, via NRICH's magnificent @ajk44: a semicircle is inscribed in a 3-4-5 triangle as shown. Find $X$. I think it's a nice puzzle because Alison's way of doing it was entirely different to mine, but thankfully got the same answer. You might like to try itRead More →
Dear Uncle Colin, I'm struggling to understand why, if you know a triangle has two sides the same, the base angles must be the same. Can you explain? -- I'm Struggling Over Some Coherent Explanation Leveraging Equal Sides Hi, ISOSCELES, and thanks for your message! There are several good proofsRead More →
"Did you know," asked a student at third-hand1, "that the in-circle of a 3-4-5 triangle has a radius of 1?" That's the kind of thing I'd normally just fire up GeoGebra to check, but I was in the middle of a podcast! The best I could do was check toRead More →
Dear Uncle Colin, I need to find an angle! ABC is a triangle with median AD, while angles BAD and CAD are 110º and 20º, respectively. What's angle ACB? -- Angle Being Evasive, LOL Hi, ABEL, and thanks for your question! Even if you've used degrees. For heaven's sake, getRead More → |
Now showing items 1-5 of 5
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum (p T) dependence of the nuclear modification factor R AA and the centrality dependence of the average transverse momentum 〈p T〉 for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ... |
For every positive, continuous and homogeneous function $f$ on the space of currents on a compact surface $\overline{\Sigma}$, and for every compactly supported filling current $\alpha$, we compute, as $L \to \infty$, the number of mapping classes $\phi$ so that $f(\phi(\alpha))\leq L$. As an application, when the surface in question is closed, we prove a lattice counting theorem for Teichm\"uller space equipped with the Thurston metric.
We construct an infinite discrete subgroup of the isometry group of $\mathbb{H}^3$ with no finite quotients other than the trivial group.
We prove that the group of diffeomorphisms of the interval $[0,1]$ contains surface groups whose action on $(0,1)$ has no global fixed point, is topologically transitive, and such that only countably many points of the interval $(0,1)$ have non-trivial stabiliser.
We construct a geometric decomposition for the convex core of a thick hyperbolic 3-manifold M with bounded rank. Corollaries include upper bounds, in terms of rank and injectivity radius, on the Heegaard genus of M and on the radius of any embedded ball in the convex core of M.
We show that if the monodromy of a 3-manifold M that fibers over the circle has large translation distance in the curve complex, then the rank of the fundamental group of M is 2g+1, where g is the genus of the fiber.
We prove that there are finite area flat surfaces whose Veech group is an infinite cyclic group consisting of hyperbolic elements.
Let $\gamma_0$ be a curve on a surface $\Sigma$ of genus $g$ and with $r$ boundary components, and let $\pi_1(\Sigma)\curvearrowright X$ be a discrete and cocompact action on some metric space. We study the asymptotic behavior of the number of curves $\gamma$ of type $\gamma_0$ with translation length at most $L$ on $X$. For example, as an application, we derive that for any finite generating set $S$ of $\pi_1(\Sigma)$ the limit $$\lim_{L\to\infty}\frac1{L^{6g-6+2r}}\#\{\gamma\text{ of type }\gamma_0\text{ with }S\text{-translation length}\le L\}$$ exists and is positive. The main new technical tool is that the function which associates to each curve its stable length with respect to the action on $X$ extends to a (unique) continuous and homogeneous function on the space of currents. We prove that this is indeed the case for any action of a torsion free hyperbolic group.
This note is about a type of quantitative density of closed geodesics on closed hyperbolic surfaces. The main results are upper bounds on the length of the shortest closed geodesic that $\varepsilon$-fills the surface.
We give a necessary condition for a closed subset of $\mathbb{R}^3$ to be the set of critical points of some smooth function. In particular we obtain that, for example, neither the Whitehead continuum nor the p-adic solenoid is such a critical set.
Let $S$ be a closed orientable hyperbolic surface, and let $\mathcal{O}(K,S)$denote the number of mapping class group orbits of curves on $S$ with at most$K$ self-intersections. Building on work of Sapir [16], we give upper and lowerbounds for $\mathcal{O}(K,S)$ which are both exponential in $\sqrt{K}$.
The geometric dimension for proper actions $\underline{\mathrm{gd}}(G)$ of agroup $G$ is the minimal dimension of a classifying space for proper actions$\underline{E}G$. We construct for every integer $r\geq 1$, an example of avirtually torsion-free Gromov-hyperbolic group $G$ such that for every group$\Gamma$ which contains $G$ as a finite index normal subgroup, the virtualcohomological dimension $\mathrm{vcd}(\Gamma)$ of $\Gamma $ equals$\underline{\mathrm{gd}}(\Gamma)$ but such that the outer automorphism group$\mathrm{Out}(G)$ is virtually torsion-free, admits a cocompact model for$\underline E\mathrm{Out}(G)$ but nonetheless has$\mathrm{vcd}(\mathrm{Out}(G))\le\underline{\mathrm{gd}}(\mathrm{Out}(G))-r$.
Suppose that $\Sigma$ is a hyperbolic surface and $f:\mathbb R_+\to\mathbbR_+$ a monotonic function. We study the closure in the projective tangentbundle $PT\Sigma$ of the set of all geodesics $\gamma$ satisfying$I(\gamma,\gamma)\leq f(\ell_\Sigma(\gamma))$. For instance we prove that if$f$ is unbounded and sublinear then this set has Hausdorff dimension strictlybounded between 1 and 3.
Let $\Sigma$ be a hyperbolic surface. We study the set of curves on $\Sigma$of a given type, i.e. in the mapping class group orbit of some fixed butotherwise arbitrary $\gamma_0$. For example, in the particular case that$\Sigma$ is a once-punctured torus, we prove that the cardinality of the set ofcurves of type $\gamma_0$ and of at most length $L$ is asymptotic to $L^2$times a constant.
In this note we show that a bounded degree planar triangulation is recurrent if and only if the set of accumulation points of some/any circle packing of it is polar (that is, planar Brownian motion avoids it with probability 1). This generalizes a theorem of He and Schramm [6] who proved it when the set of accumulation points is either empty or a Jordan curve, in which case the graph has one end. We also show that this statement holds for any straight-line embedding with angles uniformly bounded away from 0.
We prove that Kleinian groups whose limit sets are Cantor sets of Hausdorff dimension $<1$ are free. On the other hand we construct for any $\epsilon>0$ examples of non-free purely hyperbolic Kleinian groups whose limit set is a Cantor set of Hausdorff dimension $<1+\epsilon$.
We prove that if $\Gamma$ is a lattice in a classical simple Lie group $G$, then the symmetric space of $G$ is $\Gamma$-equivariantly homotopy equivalent to a proper cocompact $\Gamma$-CW complex of dimension the virtual cohomological dimension of $\Gamma$.
Bounded-type 3-manifolds arise as combinatorially bounded gluings of irreducible 3-manifolds chosen from a finite list. We prove effective hyperbolization and effective rigidity for a broad class of 3-manifolds of bounded type and large gluing heights. Specifically, we show the existence and uniqueness of hyperbolic metrics on 3-manifolds of bounded type and large heights, and prove existence of a bilipschitz diffeomorphism to a combinatorial model described explicitly in terms of the list of irreducible manifolds, the topology of the identification, and the combinatorics of the gluing maps.
Benjamini and Schramm introduced the notion of distributional limit of a sequence of graphs with uniformly bounded valence and studied such limits in the case that the involved graphs are planar. We investigate distributional limits of sequences of Riemannian manifolds with bounded curvature which satisfy a certain condition of quasi-conformal nature. We then apply our results to somewhat improve Benjamini's and Schramm's original result on the recurrence of the simple random walk on limits of planar graphs. For instance, as an application we give a proof of the fact that for graphs in an expander family, the genus of each graph is bounded from below by a linear function of the number of vertices.
The main purpose of this article is to demonstrate three techniques for proving algebraicity statements about circle packings. We give proofs of three related theorems: (1) that every finite simple planar graph is the contact graph of a circle packing on the Riemann sphere, equivalently in the complex plane, all of whose tangency points, centers, and radii are algebraic, (2) that every flat conformal torus which admits a circle packing whose contact graph triangulates the torus has algebraic modulus, and (3) that if R is a compact Riemann surface of genus at least 2, having constant curvature -1, which admits a circle packing whose contact graph triangulates R, then R is isomorphic to the quotient of the hyperbolic plane by a subgroup of PSL_2(real algebraic numbers). The statement (1) is original, while (2) and (3) have been previously proved in the Ph.D. thesis of McCaughan. Our first proof technique is to apply Tarski's Theorem, a result from model theory, which says that if an elementary statement in the theory of real-closed fields is true over one real-closed field, then it is true over any real-closed field. This technique works to prove (1) and (2). Our second proof technique is via an algebraicity result of Thurston on finite co-volume discrete subgroups of the orientation-preserving-isometry group of hyperbolic 3-space. This technique works to prove (1). Our first and second techniques had not previously been applied in this area. Our third and final technique is via a lemma in real algebraic geometry, and was previously used by McCaughan to prove (2) and (3). We show that in fact it may be used to prove (1) as well.
Let S be a closed surface of genus g >= 2 and z in S a marked point. We prove that the subgroup of the mapping class group Map(S,z) corresponding to the fundamental group pi_1(S,z) of the closed surface does not lift to the group of diffeomorphisms of S fixing z. As a corollary, we show that the Atiyah-Kodaira surface bundles admit no invariant flat connection, and obtain another proof of Morita's non-lifting theorem.
We construct a Cantor set in S^3 whose complement admits a complete hyperbolic metric.
We prove that the spectral gap of a finite planar graph $X$ is bounded by $\lambda_1(X)\le C\left(\frac{\log(\mathrm{diam}\,X)}{\mathrm{diam}\,X}\right)^2$ where $C$ depends only on the degree of $X$. We then give a sequence of such graphs showing that the above estimate cannot be improved. This yields a negative answer to a question of Benjamini and Curien on the mixing times of the simple random walk on planar graphs.
We show that, for any (symmetric) finite generating set of the Torelli group of a closed surface, the probability that a random word is not pseudo-Anosov decays exponentially in terms of the length of the word.
Suppose that $X$ and $Y$ are surfaces of finite topological type, where $X$ has genus $g\geq 6$ and $Y$ has genus at most $2g-1$; in addition, suppose that $Y$ is not closed if it has genus $2g-1$. Our main result asserts that every non-trivial homomorphism $\Map(X) \to \Map(Y)$ is induced by an {\em embedding}, i.e. a combination of forgetting punctures, deleting boundary components and subsurface embeddings. In particular, if $X$ has no boundary then every non-trivial endomorphism $\Map(X)\to\Map(X)$ is in fact an isomorphism. As an application of our main theorem we obtain that, under the same hypotheses on genus, if $X$ and $Y$ have finite analytic type then every non-constant holomorphic map $\CM(X)\to\CM(Y)$ between the corresponding moduli spaces is a forgetful map. In particular, there are no such holomorphic maps unless $X$ and $Y$ have the same genus and $Y$ has at most as many marked points as $X$.
Anderson and Canary have shown that if the algebraic limit of a sequence of discrete, faithful representations of a finitely generated group into PSL(2,C) does not contain parabolics, then it is also the sequence's geometric limit. We construct examples that demonstrate the failure of this theorem for certain sequences of unfaithful representations, and offer a suitable replacement.
Linear Programming
The announcement by Karmarkar in 1984 that he had developed a fast algorithm that generated iterates that lie in the interior of the feasible set (rather than on the boundary, as simplex methods do) opened up exciting new avenues for research in both the computational complexity and mathematical programming communities. Since then, there has been intense research into a variety of methods that maintain strict feasibility of all iterates, at least with respect to the inequality constraints. Although dwarfed in volume by simplex-based packages, interior-point products have emerged and have proven to be competitive with, and often superior to, the best simplex packages, especially on large problems.
The NEOS Server offers several solvers that implement interior-point methods, including bpmpd, MOSEK, and OOQP.
Here, we discuss only primal-dual interior-point algorithms, which are effective from a computational perspective and are also the most amenable to theoretical analysis. To begin, the dual of the standard form linear programming problem can be written as \[\max\left\{b^T y \; : \; s = c - A^T y \geq 0 \right\}.\] Then, the optimality conditions for \((x,y,s)\) to be a primal-dual solution triplet are that
\[A x = b, \qquad A^T y + s = c, \qquad X S e = 0, \qquad x \geq 0, \; s \geq 0,\]
where \(X = \mathrm{diag}(x_1,\ldots,x_n)\), \(S = \mathrm{diag}(s_1,\ldots,s_n)\), and \(e\) is the vector of all ones.
Interior-point algorithms generate iterates \((x_k,y_k,s_k)\) such that \(x_k > 0\) and \(s_k > 0\). As \(k \to \infty\), the equality-constraint violations \(\|A x_k - b \|\) and \(\| A^T y_k + s_k - c\|\) and the duality gap \(x_k^T s_k\) are driven to zero, yielding a limiting point that solves the primal and dual linear programs.
Primal-dual methods can be thought of as a variant of Newton's method applied to the system of equations formed by the first three optimality conditions. Given the current iterate \((x_k,y_k,s_k)\) and the damping parameter \(\sigma_k \in [0,1]\), the search direction \((w_k,z_k,t_k)\) is generated by solving the linear system
\[\begin{bmatrix} A & 0 & 0 \\ 0 & A^T & I \\ S_k & 0 & X_k \end{bmatrix} \begin{bmatrix} w_k \\ z_k \\ t_k \end{bmatrix} = -\begin{bmatrix} A x_k - b \\ A^T y_k + s_k - c \\ X_k S_k e - \sigma_k \mu_k e \end{bmatrix},\]
where \(X_k = \mathrm{diag}(x_k)\), \(S_k = \mathrm{diag}(s_k)\), and
\[\mu_k = x_k^T s_k / n\]
is the duality measure. The new point is then obtained by setting
\[x_{k+1} = x_k + \alpha_k^P w_k, \qquad (y_{k+1}, s_{k+1}) = (y_k, s_k) + \alpha_k^D (z_k, t_k),\]
where \(\alpha_k^D\) and \(\alpha_k^P\) are chosen to ensure that \(x_{k+1} > 0\) and \(s_{k+1} > 0\).
When \(\sigma_k=0\), the search direction is the pure Newton search direction for the nonlinear system
\(A x = b, \; A^T y + s = c, \; S X e = 0\), and the resulting method is an "affine scaling" algorithm. The effect of choosing positive values of \(\sigma_k\) is to orient the step away from the boundary of the nonnegative orthant defined by \(x \geq 0, \; s \geq 0\), thus allowing longer step lengths \(\alpha_k^P, \; \alpha_k^D\) to be taken. "Path-following" methods require \(\alpha_k^P, \; \alpha_k^D\) to be chosen so that \(x_k\) and \(s_k\) are not merely positive but also satisfy the centrality condition \[(x_k,s_k) \in C_k,\] where \[C_k = \left\{ (x,s): x_i s_i \geq \gamma \mu_k, \; i=1,\ldots,n \right\}\] for some \(\gamma \in (0,1)\). The other requirement on \(\alpha_k^P\) and \(\alpha_k^D\) is that the decrease in \(\mu_k\) should not outpace the improvement in feasibility (that is, the decrease in \(\| A x_k - b \|\) and \(\| A^T y_k + s_k - c \|\)): greater priority is placed on attaining feasibility than on closing the duality gap. It is possible to design path-following algorithms satisfying these requirements for which the sequence \(\{ \mu_k \}\) converges to zero at a linear rate. Further, the number of iterates required for convergence is a polynomial function of the size of the problem (typically, order \(n\) or order \(n^{3/2}\)). By allowing the damping parameter \(\sigma_k\) to become small as the solution is approached, the method behaves more and more like a pure Newton method, and superlinear convergence can be achieved.
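The iteration described above can be made concrete in a few lines of NumPy. The sketch below is purely illustrative (the function name, the fixed damping parameter, and the dense formation of the Newton system are all choices made here for clarity, not how production interior-point codes work):

```python
import numpy as np

def primal_dual_lp(A, b, c, sigma=0.1, tol=1e-8, max_iter=100):
    """Toy primal-dual path-following iteration for
    min c^T x  s.t.  A x = b, x >= 0   (dual: max b^T y, A^T y + s = c, s >= 0)."""
    m, n = A.shape
    x, y, s = np.ones(n), np.zeros(m), np.ones(n)   # strictly positive start

    def step_len(v, dv):
        # largest alpha in (0, 1] keeping v + alpha*dv strictly positive
        neg = dv < 0
        return min(1.0, 0.99 * np.min(-v[neg] / dv[neg])) if neg.any() else 1.0

    for _ in range(max_iter):
        mu = x @ s / n                                # duality measure
        if mu < tol and np.linalg.norm(A @ x - b) < tol:
            break
        # Newton system for the damped optimality conditions
        J = np.block([
            [A,                np.zeros((m, m)), np.zeros((m, n))],
            [np.zeros((n, n)), A.T,              np.eye(n)],
            [np.diag(s),       np.zeros((n, m)), np.diag(x)],
        ])
        rhs = -np.concatenate([A @ x - b,
                               A.T @ y + s - c,
                               x * s - sigma * mu])
        d = np.linalg.solve(J, rhs)
        w, z, t = d[:n], d[n:n + m], d[n + m:]
        aP, aD = step_len(x, w), step_len(s, t)       # damped step lengths
        x = x + aP * w
        y, s = y + aD * z, s + aD * t
    return x, y, s

# minimize x1 + 2*x2 subject to x1 + x2 = 1, x >= 0; the optimum is x = (1, 0)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
x, y, s = primal_dual_lp(A, b, c)
print(np.round(x, 4))   # approaches [1, 0]
```

Practical solvers never form this block system densely; they eliminate the \(w_k\) and \(t_k\) blocks and solve a much smaller symmetric positive definite system in \(z_k\) instead.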
Orthogonal vectors:
Vectors can be easily represented using the co-ordinate system in three dimensions. Before getting into the representation of vectors, let us understand what an orthogonal representation is.
In terms of coordinate geometry, an orthogonal representation means parameters that are at right angles to each other. In an orthogonal three-dimensional system, we have three mutually perpendicular axes: the x, y and z axes.
Unit vectors: vectors which have a magnitude of unit length.
\(\hat{x} = \frac{\overrightarrow{x}}{|\overrightarrow{x}|}\)
Here, \(\hat{x}\) represents a unit vector, \(\overrightarrow{x}\) represents the vector and \(|\overrightarrow{x}|\) represents the magnitude of the vector.
In orthonormal or orthogonal systems, we can have three different unit vectors, one in each direction. They can be represented as follows:
The point X(1, 1, 1) can be represented using the three mutually perpendicular axes as the points A(1, 0, 0), B(0, 1, 0) and C(0, 0, 1) on the x, y and z axes respectively.
The magnitude of the vector \(\overrightarrow{OA}\) along the x-axis is 1. Similarly, that of the vectors \(\overrightarrow{OB}\) and \(\overrightarrow{OC}\) is also 1 along the y and z axes respectively. These vectors are the unit vectors along the x, y and z axes and are represented by \(\hat{i}\), \(\hat{j}\) and \(\hat{k}\) respectively.
Now, with the help of unit vectors we can represent any vector in the three dimensional co-ordinate system.
To represent a vector in space, we resolve the vector along the three mutually perpendicular axes as shown below.
The vector OM can be resolved along the three axes as shown. With OM as the diagonal, a parallelepiped is constructed whose edges OA, OB and OC lie along the three perpendicular axes.
From the above figure, we can say that
\(\overrightarrow{OA}\) =\( x\hat{i}\)
\(\overrightarrow{OB}\) =\( y\hat{j}\)
\(\overrightarrow{OC}\) =\( z\hat{k}\)
The vector can be represented as
\(r\) = \(\overrightarrow{OM} = x\hat{i} + y \hat{j} + z \hat{k}\)
This is known as the component form of a vector.
Thus, the vector r can be resolved in the directions i, j and k respectively. This represents the position of given vectors in terms of the three co-ordinate axes.
If a vector is given in the form shown above, then its magnitude can be found using the Pythagorean theorem:
r =\(\overrightarrow {OM} \)= \(x\hat{i} + y\hat{j} + z\hat{k}\)
\( \Rightarrow |r|\) = \(\sqrt{(x^2 + y^2 + z^2)}\)
The sum of two vectors a =\( a_1 \hat{i} + a_2 \hat{j} + a_3 \hat{k} \)and b =\( b_1 \hat{i} + b_2 \hat{j} + b_3 \hat{k} \) is given by adding the components of the three axes separately.
i.e. a + b = \( a_1 \hat{i} + a_2\hat{j} + a_3\hat{k} + b_1 \hat{i} + b_2\hat{j} + b_3\hat{k} \)
\(\Rightarrow a + b\) = \((a_1 + b_1)\hat{i} + (a_2 + b_2)\hat{j} + (a_3 + b_3)\hat{k}\)
Similarly, the difference can be given as:
a – b = \((a_1 – b_1)\hat{i} + (a_2 – b_2)\hat{j} + (a_3 – b_3)\hat{k}\).
We can perform a number of mathematical operations on vectors easily using this system of representation. To make our understanding clearer, let us take an example.
Example: Two vectors are given by a = \(5\hat{i} – 3\hat{j} + 4\hat{k}\) and b = \(2\hat{i} – \hat{j} + \hat{k}\). Find the unit vectors and the sum and difference of both the vectors.
Solution: The unit vector is given by \(\hat{x}\) =\( \frac{\overrightarrow{x}}{|\overrightarrow{x}|}\)
The magnitude of both the vectors can be given as:
|a| = \(\sqrt{5^2 + (-3)^2 + 4^2}\)
\(\Rightarrow \)|a| = \(\sqrt{50}\)
|b| = \(\sqrt{2^2 + (-1)^2 + 1^2}\)
\(\Rightarrow\) |b| = \(\sqrt{6}\)
Now, the unit vectors can be given as:
\(\hat{a}\) = \(\frac{5\hat{i} – 3\hat{j} + 4\hat{k}}{\sqrt{50}}\)
\(\hat{b}\) = \(\frac{2\hat{i} – \hat{j} + \hat{k}}{\sqrt{6}}\)
The sum can be given by:
a + b = \(5\hat{i} – 3\hat{j} + 4 \hat{k} + 2\hat{i} – \hat{j} + \hat{k}\)
a + b = \(7\hat{i} – 4\hat{j} + 5\hat{k}\)
The difference is given by:
a – b = \(3\hat{i} – 2\hat{j} + 3\hat{k}\)
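The worked example is easy to check numerically. Here is a short NumPy sketch, with each vector stored as its (x, y, z) components along \(\hat{i}\), \(\hat{j}\), \(\hat{k}\):

```python
import numpy as np

# Components of the vectors from the example above
a = np.array([5.0, -3.0, 4.0])      # a = 5i - 3j + 4k
b = np.array([2.0, -1.0, 1.0])      # b = 2i -  j +  k

mag_a = np.linalg.norm(a)           # sqrt(5^2 + (-3)^2 + 4^2) = sqrt(50)
mag_b = np.linalg.norm(b)           # sqrt(6)

a_hat = a / mag_a                   # unit vectors: dividing by the magnitude
b_hat = b / mag_b                   # gives a vector of norm 1

print(a + b)                        # components of 7i - 4j + 5k
print(a - b)                        # components of 3i - 2j + 3k
```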
Let $F$ be a finite set equipped with the discrete topology. Let $X = F \times F \times ...$ be the countably infinite product space equipped with the product topology. Let $\mathcal A$ be any field of subsets of $X$ that contains the open subsets of $X$. Let $\mu$ be a finitely additive, finite measure with domain $\mathcal A$.
Suppose that $\mu$ has the following "clopen approximation property": For any $\epsilon > 0$ and any open subset $G$ of $X$, there is a clopen subset $C$ of $X$ such that $\mu(G \triangle C)< \epsilon$.
Is the clopen approximation property equivalent to the following "inner regularity property"? For every open subset $G$ of $X$, $\mu(G) = \sup\{\mu(C): C \subseteq G, C \text{ clopen}\}$.
Clearly inner regularity implies clopen approximation, but I am unable to see that the converse is true.
If $G$ is open, then it can be written as a countable union $G = C_1 \cup C_2 \cup ...$ of pairwise disjoint clopen sets. It seems reasonable to expect that if $G$ is approximable by some sequence of clopen sets, then it should be approximable from within by finite unions of its constituent clopen sets, i.e. $\mu(\cup_{i=1}^n C_i) \to \mu(G)$ as $n \to \infty$. But, again, I am unable to see that this is the case.
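To make the disjoint decomposition concrete in the simplest case $F = \{0,1\}$: basic clopen sets are cylinders $[p]$ determined by finite prefixes $p$, and discarding prefixes that extend shorter ones turns any union of cylinders into a disjoint union with the same total set. The helper below is an illustrative sketch with hypothetical names:

```python
def disjointify(prefixes):
    """Keep only the minimal prefixes.  The surviving cylinders are
    pairwise disjoint and have the same union as the input family."""
    ps = sorted(set(prefixes), key=lambda p: (len(p), p))  # shortest first
    minimal = []
    for p in ps:
        # drop p if an already-kept shorter prefix q covers it ([p] subset of [q])
        if not any(p.startswith(q) for q in minimal):
            minimal.append(p)
    return minimal

# [0] u [01] u [10] u [101] u [11] equals the disjoint clopen union [0] u [10] u [11]
print(disjointify(["0", "01", "10", "101", "11"]))  # ['0', '10', '11']
```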
If $G$ is approximable by some clopen sequence $B_n$, so that $\mu(G \triangle B_n) \to 0$, then $\mu(G - B_n) \to 0$ and $G - B_n$ is an
open subset of $G$. If we could replace this sequence of open subsets of $G$ with a "similar" sequence of clopen subsets of $G$, maybe we could prove the result. I played around with sequences like $(C_1 \cup ... \cup C_n) - B_n$ (recall that $G = C_1 \cup C_2 \cup ...$), but didn't get anywhere.
The motivation for this has to do with finding countably additive extensions of finitely additive measures. The clopen approximation property can be used to characterize extreme points in the convex set of extensions from the clopen field to another, larger field. And, since the clopen sets form a compact class, inner regularity provides a sufficient condition for countable additivity. If the result in question holds, then I could say that the extreme points of the set of extensions from the clopen field to the field generated by open sets are countably additive on the latter.
Tanimura, Yuka ; Nishimoto, Takaaki ; Bannai, Hideo ; Inenaga, Shunsuke ; Takeda, Masayuki
Small-Space LCE Data Structure with Constant-Time Queries
Abstract: The longest common extension (LCE) problem is to preprocess a given string w of length n so that the length of the longest common prefix between suffixes of w that start at any two given positions can be answered quickly. In this paper, we present a data structure of O(z \tau^2 + \frac{n}{\tau}) words of space which answers LCE queries in O(1) time and can be built in O(n \log \sigma) time, where 1 \leq \tau \leq \sqrt{n} is a parameter, z is the size of the Lempel-Ziv 77 factorization of w and \sigma is the alphabet size. The proposed LCE data structure does not access the input string w when answering queries, and thus w can be deleted after preprocessing. On top of this main result, we obtain further results using (variants of) our LCE data structure, which include the following:
- For highly repetitive strings where the z\tau^2 term is dominated by \frac{n}{\tau}, we obtain a constant-time and sub-linear space LCE query data structure.
- Even when the input string is not well compressible via Lempel-Ziv 77 factorization, we can still obtain a constant-time and sub-linear space LCE data structure for suitable \tau and for \sigma \leq 2^{o(\log n)}.
- The time-space trade-off lower bounds for the LCE problem by Bille et al. [J. Discrete Algorithms, 25:42-50, 2014] and by Kosolobov [CoRR, abs/1611.02891, 2016] do not apply in some cases with our LCE data structure.
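For contrast with the compressed data structure above, the naive baseline answers an LCE query by direct character comparison with no preprocessing, in O(n) worst-case time per query (a simple sketch, not the paper's method):

```python
def lce_naive(w: str, i: int, j: int) -> int:
    """Longest common extension of suffixes w[i:] and w[j:] by direct
    comparison: O(n) time per query, but no preprocessing or extra space."""
    k = 0
    while i + k < len(w) and j + k < len(w) and w[i + k] == w[j + k]:
        k += 1
    return k

w = "abacabacab"
print(lce_naive(w, 0, 4))   # suffixes "abacabacab" and "abacab" agree on 6 chars
```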
BibTeX - Entry
@InProceedings{tanimura_et_al:LIPIcs:2017:8102,
author = {Yuka Tanimura and Takaaki Nishimoto and Hideo Bannai and Shunsuke Inenaga and Masayuki Takeda},
title = {{Small-Space LCE Data Structure with Constant-Time Queries}},
booktitle = {42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
pages = {10:1--10:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-046-0},
ISSN = {1868-8969},
year = {2017},
volume = {83},
editor = {Kim G. Larsen and Hans L. Bodlaender and Jean-Francois Raskin},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2017/8102},
URN = {urn:nbn:de:0030-drops-81021},
doi = {10.4230/LIPIcs.MFCS.2017.10},
annote = {Keywords: longest common extension, truncated suffix trees, t-covers}
}
Keywords: longest common extension, truncated suffix trees, t-covers. Seminar: 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Issue date: 2017. Date of publication: 2017.
Asymptotic properties of standing waves for mass subcritical nonlinear Schrödinger equations
Department of Mathematics, School of Sciences, Wuhan University of Technology, Wuhan 430070, China
This paper studies the constrained minimization problem ${d_{{a_q}}}(q): = \mathop {\inf }\limits_{\{ \int {_{{\mathbb{R}^2}}|u{|^2}dx = 1} \} } {E_{q,{a_q}}}(u),$ where the energy functional $E_{q,a_q}(·)$ is given by ${{E}_{q,{{a}_{q}}}}(u):=\int_{{{\mathbb{R}}^{2}}}{(|\nabla u(x){{|}^{2}}+V(x)|u(x){{|}^{2}})}dx-\frac{2{{a}_{q}}}{q+2}\int_{{{\mathbb{R}}^{2}}}{|}u(x){{|}^{q+2}}dx.$ Here $a_q>0, \ q∈(0,2)$, $V(x)$ is a given potential, and $a^*:= \|Q\|_2^2$, where $Q$ is the unique positive solution of $Δ u-u+u^3=0$ in $\mathbb{R}^2$. Under the assumption $\lim_{q\nearrow2}a_q=a<a^*$, the paper analyzes the asymptotic behavior of $d_{a_q}(q)$ and its minimizers as $q\nearrow2$, complementing the case $\lim_{q\nearrow2}a_q=a≥ a^*$ treated in [256, (2014), 2079-2100].

Keywords: Constrained variational method, energy estimates, blow-up, standing waves, nonlinear Schrödinger equation. Mathematics Subject Classification: 35J20, 35J60.

Citation: Xiaoyu Zeng. Asymptotic properties of standing waves for mass subcritical nonlinear Schrödinger equations. Discrete & Continuous Dynamical Systems - A, 2017, 37 (3) : 1749-1762. doi: 10.3934/dcds.2017073
References:
[1] [2]
T. Bartsch and Z.-Q. Wang,
Existence and multiplicity results for some superlinear elliptic problems on $\mathbb{R}^N$,
[3] [4] [5]
T. Cazenave,
[6] [7]
B. Gidas, W. M. Ni and L. Nirenberg, Symmetry of positive solutions of nonlinear elliptic equations in $\mathbb{R}^n$, in
[8]
Y. J. Guo and R. Seiringer,
On the mass concentration for Bose-Einstein condensates with attractive interactions,
[9] [10]
Y. J. Guo, X. Y. Zeng and H. S. Zhou,
Concentration behavior of standing waves for almost mass critical nonlinear Schrödinger equations,
[11]
Y. J. Guo, X. Y. Zeng and H. S. Zhou,
Energy estimates and symmetry breaking in attractive Bose-Einstein condensates with ring-shaped potentials,
[12]
Q. Han and F. H. Lin,
[13] [14]
Y. Li and W.-M. Ni,
Radial symmetry of positive solutions of nonlinear elliptic equations in $\mathbb{R}^n$,
[15]
E. H. Lieb, R. Seiringer and J. Yngvason, Bosons in a trap: A rigorous derivation of the Gross-Pitaevskii energy functional,
[16] [17] [18] [19] [20]
M. Reed and B. Simon,
[21] [22] [23] [24] [25] [26] [27]
Hi, if cos 12° = h, find sin 12° in terms of h... subsequently, write down the value of cot 12°... please help
By the Pythagorean Identity,
\(\sin^2(12^\circ)+\cos^2(12^\circ)\ =\ 1\)
We are given that cos(12°) = h so we can substitute h in for cos(12°)
\(\sin^2(12^\circ)+h^2\ =\ 1\)
Subtract \(h^2\) from both sides of the equation.
\(\sin^2(12^\circ)\ =\ 1-h^2\)
Because 12° is in Quadrant I, sin(12°) is positive. So take the positive square root of both sides.
\(\sin(12^\circ)\ =\ \sqrt{1-h^2}\)
By definition of cotangent,
\(\cot(12^\circ)\ =\ \frac{\cos(12^\circ)}{\sin(12^\circ)}\)
Substitute \(h\) in for cos(12°) and substitute \(\sqrt{1-h^2}\) in for sin(12°)
\(\cot(12^\circ)\ =\ \frac{h}{\sqrt{1-h^2}}\)
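A quick numerical sanity check of the identities above (one possible check in Python):

```python
import math

h = math.cos(math.radians(12))             # the given value: cos 12 deg = h

sin12 = math.sqrt(1 - h ** 2)              # derived: sin 12 deg = sqrt(1 - h^2)
cot12 = h / math.sqrt(1 - h ** 2)          # derived: cot 12 deg = h / sqrt(1 - h^2)

# Both agree with direct evaluation of sin and cot to floating-point precision
print(abs(sin12 - math.sin(math.radians(12))))
print(abs(cot12 - 1 / math.tan(math.radians(12))))
```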
Hectictar,
thank you very much!!.. may I ask one more thing please, how would we approach finding sin 78°?.. Thank you very much for your time..
Probability Seminar Spring 2019

Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title:
Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University
Title:
When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title:
Fluctuations of the KPZ equation in d\geq 2 in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in d\geq 2.
February 14, Timo Seppäläinen, UW-Madison
Title:
Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH
Title:
On the centered maximum of the Sine beta process

Abstract: There has been a great deal of recent work on the asymptotics of the maxima of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.
Title: Quantitative homogenization in a balanced random environment
Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).
Wednesday, February 27 at 1:10pm Jon Peterson, Purdue
Title:
Functional Limit Laws for Recurrent Excited Random Walks
Abstract:
Excited random walks (also called cookie random walks) are a model for self-interacting random motion where the transition probabilities depend on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 21, Spring Break, No seminar

March 28, Shamgar Gurevitch, UW-Madison
Title:
Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the
character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant
rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas AM). April 4, Philip Matchett Wood, UW-Madison
Title:
Outliers in the spectrum for products of independent random matrices
Abstract: For fixed positive integers m, we consider the product of m independent n by n random matrices with iid entries as in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke.
April 11, Eviatar Procaccia, Texas A&M Title: Stabilization of Diffusion Limited Aggregation in a Wedge.
Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems.
April 18, Andrea Agazzi, Duke
Title:
Large Deviations Theory for Chemical Reaction Networks
Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework to have a straightforward application of large deviation theory. This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz, nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass this challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence these results also allow for the estimation of transitions times between metastable states of this class of processes.
April 25, Kavita Ramanan, Brown
Title:
Beyond Mean-Field Limits: Local Dynamics on Sparse Graphs
Abstract: Many applications can be modeled as a large system of homogeneous interacting particle systems on a graph in which the infinitesimal evolution of each particle depends on its own state and the empirical distribution of the states of neighboring particles. When the graph is a clique, it is well known that the dynamics of a typical particle converges in the limit, as the number of vertices goes to infinity, to a nonlinear Markov process, often referred to as the McKean-Vlasov or mean-field limit. In this talk, we focus on the complementary case of scaling limits of dynamics on certain sequences of sparse graphs, including regular trees and sparse Erdos-Renyi graphs, and obtain a novel characterization of the dynamics of the neighborhood of a typical particle. This is based on various joint works with Ankan Ganguly, Dan Lacker and Ruoyu Wu.
Friday, April 26, Colloquium, Van Vleck 911 from 4pm to 5pm, Kavita Ramanan, Brown
Title:
Tales of Random Projections
Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research. Classical theorems in probability theory such as the central limit theorem and Cramer’s theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems.
Tuesday , May 7, Van Vleck 901, 2:25pm,, Duncan Dauvergne (Toronto) |
Now showing items 1-10 of 27
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb-Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
Probability Seminar Spring 2019
Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title:
Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it survives for exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University
Title:
When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title:
Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$.
February 14, Timo Seppäläinen, UW-Madison
Title:
Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
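Not part of the talk itself, but the corner growth model described above is easy to experiment with: with i.i.d. Exp(1) weights $w(i,j)$ on the lattice, the last-passage time satisfies the recursion $G(i,j) = w(i,j) + \max(G(i-1,j), G(i,j-1))$. A minimal sketch (function and variable names are illustrative, not from the talk):

```python
import random

def last_passage_times(n, rng):
    """Last-passage times G on the n x n grid with iid Exp(1) weights,
    via the recursion G(i,j) = w(i,j) + max(G(i-1,j), G(i,j-1))."""
    G = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            w = rng.expovariate(1.0)  # Exp(1) weight at site (i, j)
            best = max(G[i - 1][j] if i > 0 else 0.0,
                       G[i][j - 1] if j > 0 else 0.0)
            G[i][j] = w + best
    return G

rng = random.Random(0)
n = 400
G = last_passage_times(n, rng)
# For Exp(1) weights the shape theorem gives G(n,n)/n -> 4 along the
# diagonal, so G[n-1][n-1]/n should be close to 4 for moderate n.
diagonal_ratio = G[n - 1][n - 1] / n
```

The same dynamic-programming sweep also yields the geodesic (the maximizing path) by backtracking from the corner, which is how the competition interfaces mentioned in the abstract are typically visualized.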
February 21, Diane Holcomb, KTH
Title:
On the centered maximum of the Sine beta process
Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.
Title: Quantitative homogenization in a balanced random environment
Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d. random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice $\mathbb{Z}^d$. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).
Wednesday, February 27 at 1:10pm Jon Peterson, Purdue
Title:
Functional Limit Laws for Recurrent Excited Random Walks
Abstract:
Excited random walks (also called cookie random walks) are models for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 21, Spring Break, No seminar
March 28, Shamgar Gurevitch, UW-Madison
Title:
Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the
character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant
rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
April 4, Philip Matchett Wood, UW-Madison
Title:
Outliers in the spectrum for products of independent random matrices
Abstract: For a fixed positive integer m, we consider the product of m independent n by n random matrices with iid entries in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke.
April 11, Eviatar Procaccia, Texas A&M
Title:
Stabilization of Diffusion Limited Aggregation in a Wedge
Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows us to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems.
April 18, Andrea Agazzi, Duke
Title:
Large Deviations Theory for Chemical Reaction Networks
Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework to have a straightforward application of large deviation theory. This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz, nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence these results also allow for the estimation of transition times between metastable states of this class of processes.
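As background for the jump Markov processes mentioned above: such dynamics are commonly simulated with the Gillespie stochastic simulation algorithm. A minimal sketch for a toy birth-death network (all names and rate constants are illustrative, not from the talk):

```python
import random

def gillespie_birth_death(k_birth, k_death, n0, t_max, rng):
    """Gillespie simulation of the jump Markov process for the network
    0 -> A (rate k_birth) and A -> 0 (rate k_death * n),
    where n is the current copy number of species A."""
    t, n = 0.0, n0
    path = [(t, n)]
    while t < t_max:
        rates = [k_birth, k_death * n]   # propensities of the two reactions
        total = sum(rates)
        if total == 0.0:
            break
        t += rng.expovariate(total)      # exponential waiting time to next jump
        if rng.random() * total < rates[0]:
            n += 1                       # birth reaction fires
        else:
            n -= 1                       # death reaction fires
        path.append((t, n))
    return path

rng = random.Random(1)
path = gillespie_birth_death(k_birth=10.0, k_death=1.0, n0=0, t_max=200.0, rng=rng)
# The stationary distribution of this chain is Poisson(k_birth / k_death),
# so the long-run time average of n should hover near 10 here.
```

Note that the death propensity vanishes at n = 0, illustrating the "rates not bounded away from zero" issue the abstract refers to: near the boundary the large-deviation machinery cannot be applied off the shelf.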
April 25, Kavita Ramanan, Brown
Title:
Beyond Mean-Field Limits: Local Dynamics on Sparse Graphs
Abstract: Many applications can be modeled as a large system of homogeneous interacting particles on a graph in which the infinitesimal evolution of each particle depends on its own state and the empirical distribution of the states of neighboring particles. When the graph is a clique, it is well known that the dynamics of a typical particle converges in the limit, as the number of vertices goes to infinity, to a nonlinear Markov process, often referred to as the McKean-Vlasov or mean-field limit. In this talk, we focus on the complementary case of scaling limits of dynamics on certain sequences of sparse graphs, including regular trees and sparse Erdos-Renyi graphs, and obtain a novel characterization of the dynamics of the neighborhood of a typical particle. This is based on various joint works with Ankan Ganguly, Dan Lacker and Ruoyu Wu.
Friday, April 26, Colloquium, Van Vleck 911 from 4pm to 5pm, Kavita Ramanan, Brown
Title:
Tales of Random Projections
Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research. Classical theorems in probability theory such as the central limit theorem and Cramer’s theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems.
Tuesday, May 7, Van Vleck 901, 2:25pm, Duncan Dauvergne (Toronto)
Could someone please give me some hint on how can I show that the integral $\int_0^1 (1 - t^2)^{-3/4} dt$ is bounded? Thank you very much!
Note here that the integrand $(1-t^2)^{-3/4}$ can be factored as $(1-t)^{-3/4}(1+t)^{-3/4}$, the second part of which is bounded in the defining interval $(0,1)$.
Thus it suffices to show that the integral $\int_0^1(1-t)^{-3/4}\,dt$ is bounded.
Then, after the linear change of variables $t \mapsto 1-x$, we have $\int_0^1(1-t)^{-3/4}\,dt=\int_0^1x^{-3/4}\,dx$, which is bounded since the primitive function of $x^{-3/4}$ is $4x^{1/4}$, bounded on $(0,1)$.
$$\int_{0}^{1}(1-t^2)^{-3/4}\,dt =\frac{1}{2}\int_{0}^{1}u^{-1/2}(1-u)^{-3/4}\,du = \frac{\Gamma\left(\frac{1}{2}\right)\,\Gamma\left(\frac{1}{4}\right)}{2\,\Gamma\left(\frac{3}{4}\right)}=\color{red}{\frac{\Gamma\left(\frac{1}{4}\right)^2}{2\sqrt{2\pi}}}.$$
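As a numerical sanity check of this closed form (a sketch, not part of the original answer): the substitution $t = 1 - s^4$ removes the singularity at $t = 1$, since $\int_0^1 (1-t^2)^{-3/4}\,dt = 4\int_0^1 (2-s^4)^{-3/4}\,ds$ has a smooth integrand, which a simple midpoint rule handles well.

```python
import math

# Closed form from the answer above: Gamma(1/4)^2 / (2*sqrt(2*pi))
closed_form = math.gamma(0.25) ** 2 / (2 * math.sqrt(2 * math.pi))

# After substituting t = 1 - s^4 the integral becomes
# 4 * int_0^1 (2 - s^4)^(-3/4) ds, with no singularity on [0, 1].
def integrand(s):
    return 4.0 * (2.0 - s ** 4) ** (-0.75)

n = 100_000
h = 1.0 / n
numeric = h * sum(integrand((k + 0.5) * h) for k in range(n))  # midpoint rule

print(numeric, closed_form)  # both are approximately 2.62206
```

The two values agree to many digits, confirming both the boundedness argument and the Beta-function evaluation.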
Now showing items 1-10 of 21
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Centrality dependence of particle production in p-Pb collisions at $\sqrt{s_{\rm NN} }$= 5.02 TeV
(American Physical Society, 2015-06)
We report measurements of the primary charged particle pseudorapidity density and transverse momentum distributions in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV, and investigate their correlation with experimental ...