I found the following exercise in Introduction to Metric and Topological Spaces by Sutherland (Chapter 10 Question 20).
Prove that the topology on a space X is discrete iff the diagonal $\Delta=\{ (x,x) \mid x\in X\}$ is open in the topological product $X \times X$.
I believe I could prove the $\implies$ direction. It is the converse that got me stuck.
So, I would want to prove that if the diagonal is open in the topological product, then the topology on $X$ must be discrete.
I was thinking that I could achieve the result if I could show that every singleton $\{ x\}$ is open in $X$, but I couldn't think of a way to establish this. I tried considering $X-\{ x\}$ and showing that it is closed, but I couldn't get anywhere fruitful. I also thought I could use the projection maps, but there I am stumped as well.
Any help/hint is appreciated. Thanks in advance.
|
INFN
- LABORATORI NAZIONALI DI FRASCATI
SEMINAR
Aula
Fisica del Nucleo, building 22
Monte Carlo Simulation for the background of the $\gamma d \rightarrow \Theta^+ \Lambda$ reaction using the g10 data.
The $\gamma d \rightarrow \Theta^+ \Lambda$ reaction is one of the reactions under study in the g10 experiment at CLAS, with the goal of searching for evidence of a possible narrow exotic resonance. A Monte Carlo study is being
developed to see if known resonances, in combination with simple kinematical cuts, could give rise to an excess of events in the mass region of the $K^+ n$ system around $1.5$ GeV.
|
I'm trying to solve exercise 2614.1 from Demidovich's famous book of exercises on Analysis. Having solved the bit about convergence, I'm now stuck in trying to prove that a series of terms $a_n\geq 0$ is divergent if there exists $N$ such that $(1-\sqrt[n]{a_n})\frac{n}{\log{n}} \leq 1$ for all $n>N$. I've been trying to find another divergent series with which to compare it, or to prove that $a_n$ does not tend to zero. Any help will be appreciated.
I'll prove both parts.
Suppose that for some $N$ and some $p$, $(1-\sqrt[n]{a_n})\frac{n}{\log{n}}\geq p>1$ for $n>N$. Rearranging gives $a_n\leq (1-p\frac{\log{n}}{n})^n$, so by comparison it's enough to show the convergence of $\sum (1-p\frac{\log{n}}{n})^n$. Since $(1-p\frac{\log{n}}{n})^n=\exp(n\log(1-p\frac{\log{n}}{n}))$, I'll use an order argument using the series for $\log(1-x)$ and $\exp{x}$. These are
$\log(1-x)=-x-\frac{x^2}{2}-\frac{x^3}{3}-...-\frac{x^n}{n} + o(x^n), x\to 0$,
$\exp{x}=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+...+\frac{x^n}{n!} + o(x^n), x\to 0$.
Thus, $\log(1-p\frac{\log{n}}{n})=-p\frac{\log{n}}{n}-\frac{p^2}{2}\frac{(\log{n})^2}{n^2}+o(\frac{(\log{n})^2}{n^2})$, since $\frac{\log{n}}{n}\to 0$, so $n\log(1-p\frac{\log{n}}{n})=-p\log{n}-\frac{p^2}{2}\frac{(\log{n})^2}{n}+o(\frac{(\log{n})^2}{n})$. Therefore $\exp(n\log(1-p\frac{\log{n}}{n}))=\exp(-p\log{n}-\frac{p^2}{2}\frac{(\log{n})^2}{n}+o(\frac{(\log{n})^2}{n}))$
$=\exp(-p\log{n})\exp(-\frac{p^2}{2}\frac{(\log{n})^2}{n}+o(\frac{(\log{n})^2}{n}))=\frac{1}{n^p}\exp(-\frac{p^2}{2}\frac{(\log{n})^2}{n}+o(\frac{(\log{n})^2}{n}))$
$=\frac{1}{n^p}\left( 1-\frac{p^2}{2}\frac{(\log{n})^2}{n}+o(\frac{(\log{n})^2}{n})+o\left(-\frac{p^2}{2}\frac{(\log{n})^2}{n}+o(\frac{(\log{n})^2}{n}) \right) \right)$
$=\frac{1}{n^p}\left( 1-\frac{p^2}{2}\frac{(\log{n})^2}{n}+o(\frac{(\log{n})^2}{n}) \right)$.
So we have $(1-p\frac{\log{n}}{n})^n=\frac{1}{n^p}-\frac{p^2}{2}\frac{(\log{n})^2}{n^{p+1}}+o(\frac{(\log{n})^2}{n^{p+1}})$. Dividing the LHS of the last equation by $\frac{1}{n^p}-\frac{p^2}{2}\frac{(\log{n})^2}{n^{p+1}}$ and evaluating the limit, we find that $\lim_{n\to\infty}\frac{(1-p\frac{\log{n}}{n})^n}{\frac{1}{n^p}-\frac{p^2}{2}\frac{(\log{n})^2}{n^{p+1}}}=1$. So the series of the terms in the denominator converges iff the series of the terms in the numerator does. But since $p>1$, we know that $\sum\frac{1}{n^p}$ converges, and $\sum\frac{(\log{n})^2}{n^{p+1}}$ converges too by comparison with the former. Hence $\sum (1-p\frac{\log{n}}{n})^n$ converges and so does $\sum a_n$.
On to the divergence part. Since the hypothesis gives $a_n\geq (1-\frac{\log{n}}{n})^n$ for $n>N$, it's enough to prove that $\sum (1-\frac{\log{n}}{n})^n$ diverges. All steps above are still valid for $p=1$, so $\lim_{n\to\infty}\frac{(1-\log{n}/n)^n}{1/n-\frac{1}{2}(\log{n})^2/n^2}=1$. But $\sum(\log{n})^2/n^2$ can be shown to converge, by comparison with the series $\sum\frac{1}{n^{3/2}}$, while the harmonic series $\sum\frac{1}{n}$ diverges. This implies that the series in the denominator diverges, and so does that in the numerator, which is what we wanted to prove.
Note that the key here is the manipulation of little-$o$ terms.
This test is known in Russian texts as the "Zhame" test (Jamet's test).
Try to show that $(1 - \frac{{\log n}}{n})^n $ is asymptotically equal to $b_n$, as $n \to \infty$, where $\sum\nolimits_{n = 1}^\infty {b_n }$ is some very well known divergent series...
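Not from the original thread, but a quick numerical sanity check of this hint (a minimal Python sketch; the printed ratio approaching $1$ identifies the well-known divergent series):

import numpy as np

# Numerical illustration: (1 - log n / n)^n behaves like 1/n, the general
# term of the harmonic series, whose divergence is classical.
for n in [10**2, 10**4, 10**6]:
    term = (1 - np.log(n) / n) ** n
    print(n, n * term)  # the ratio n * term tends to 1 as n grows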
|
If $n$ is a positive integer and $(p_1,p_2,p_3,p_4,\ldots, p_n)$ are distinct positive primes, show that the integer $(p_1\cdot p_2\cdot p_3\cdot p_4\cdots p_n)+1$ is divisible by none of these primes. How do I figure out that none of the primes divide that new integer?
Suppose $7$ is one of the primes, so $p_1\cdots p_n$ is a multiple of $7$.
The next multiple of $7$ after $p_1\cdots p_n$ is $(p_1\cdots p_n)+7$. So $(p_1\cdots p_n)+1$ is not a multiple of $7$.
More formally, suppose $7$ divides $(p_1\cdots p_n)+1$. Then for some integers $j$, $k$, $$ \begin{align} (p_1 \cdots p_n) & = 7j \tag 1 \\ (p_1 \cdots p_n) + 1 & = 7k \tag 2 \\[10pt] \text{Then subtracting $(1)$ from $(2)$, we get: } 1 & = 7k - 7j = 7(k-j) \\ \end{align} $$ So $1 = 7(k-j)$, which is impossible, since $1$ is not a multiple of $7$. The same argument works with any of the primes $p_i$ in place of $7$.
We will need the following theorem:
Theorem. For any two natural numbers $x, y$ ($ y \geq 1 $), there are unique natural numbers $q, r$ with $ 0 \leq r < y $ such that $x = qy + r $. Proof. If $ x < y $ then we must have $ q = 0 $ and $ r = x $, so in this case $q, r$ exist and are unique. Assume that there is a natural number $ x \geq y $ for which no such $q, r$ exist; by well-ordering we may take $ x $ to be the smallest such number. But then $x - y$ cannot be written in the desired form either, for if we had $x - y = qy + r $ then this would imply $x = (q+1)y + r$. This contradicts the minimality of $ x $, as $ x - y $ is a smaller natural number with the same property. Therefore, such $q, r$ must exist for all values of $x$.
To prove uniqueness, assume that we had $x = q_1 y + r_1 = q_2 y + r_2$ with $r_1 > r_2$. Then $r_1 - r_2 = y(q_2 - q_1)$. On the other hand, $0 < r_1 - r_2 \leq r_1 < y$, so that $0 < y(q_2 - q_1) < y$, forcing $0 < q_2 - q_1 < 1$, which is impossible for integers. Hence $r_1 = r_2$, and then $q_1 = q_2$ as well. QED.
Corollary. $x = qn + 1$ ($q, n \in \mathbb{N}$) is not divisible by $ q $ for any $q \geq 2$. Proof. If it were, we would have $x = qm + 0$ for some $ m $ and $x = qn + 1$ simultaneously: two representations with different remainders $0$ and $1$, contradicting the uniqueness proved in the above theorem.
The statement in the question follows immediately from the corollary.
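As a concrete numeric illustration (my own example, not from the original post):

primes = [2, 3, 5, 7]
N = 2 * 3 * 5 * 7 + 1  # 211
# Each remainder is 1, so none of the primes divides N, as the corollary predicts.
print([N % p for p in primes])  # [1, 1, 1, 1]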
|
The global attractor for a class of extensible beams with nonlocal weak damping
Department of Mathematics, Nanjing University, Nanjing, 210093, China
$ \begin{eqnarray*} u_{tt}+\Delta^2 u-m(\|\nabla u\|^2)\Delta u +\| u_t\|^{p}u_t+f(u) = h, \quad \text{in}\ \Omega\times\mathbb{R}^{+},\ p\geq 0 \end{eqnarray*} $
where $ \Omega\subset\mathbb{R}^{n} $ is the spatial domain, $ m(\|\nabla u\|^2) $ is the Kirchhoff-type (extensibility) coefficient and $ f(u) $ is the nonlinear term.
Keywords: Extensible beam equation, nonlocal weak damping, global attractor, subcritical growth exponent. Mathematics Subject Classification: Primary: 35B40, 35B41; Secondary: 37L30. Citation: Chunxiang Zhao, Chunyan Zhao, Chengkui Zhong. The global attractor for a class of extensible beams with nonlocal weak damping. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2019197
|
The statement is false. Set $$f(z)=i\frac{1-rz}{1+rz}$$defined on the disk $D_{1/r}:= \{ z: |z|<1/r\}$ where $r$ is chosen to be $0<r<1$ and $1/r$ close enough to $1$. Then $f$ is holomorphic on $D_{1/r}$, hence on $D$. It maps $D_{1/r}$ onto the upper half plane, $f(0)=i$ but $f'(0)=-2ri$ so that $$|f'(0)|=2r>1$$ if $r$ was taken to be greater than $1/2$.
However, if $f$ is holomorphic on $\mathbb D:= \{ z: |z|<1\}$, $\text{Im} (f(z))>0$ for all $z\in \mathbb D$ and $f(0)=i$, then one can deduce that $|f'(0)|\le 2$. The proof follows from the
Schwarz lemma: If a holomorphic function $f:\mathbb D\to \mathbb D$ satisfies $f(0)=0$, then $|f(z)|\le |z|$ on $\mathbb D$ and $|f'(0)|\le 1$.
The proof of this theorem can be found in any textbook so I omit it. Let $h:=g\circ f$ where
$$g(z)=\frac{z-i}{z+i}.$$Then $g$ is holomorphic and it maps upper half plane onto $\mathbb D$. Hence $h$ is holomorphic, $h(0)=g(i)=0$ and $|h(z)|<1$ on $\mathbb D$. Applying Schwarz's lemma to $h$, we conclude that
$$|h'(0)|=\left| \frac{2i}{(f(0)+i)^2}\cdot f'(0)\right| = \frac{|f'(0)|}{2}\le 1,$$ since $f(0)=i$ gives $(f(0)+i)^2=-4$. Consequently, $|f'(0)|\le 2$.
|
I'm on 11.3.0 for macOS (64-bit).
On a recent CAS-enabled exam question a few weeks ago I was required to evaluate the following integral:
$$ \int_0^5\left(\sqrt[3]{125-x^3}\right)^2\,dx $$
In Mathematica, the Integrate function returns this answer:
Integrate[(125-x^3)^(2/3),{x,0,5}]
$$ 75\cdot 3^{2/3} F_1\left(\frac{5}{3};-\frac{2}{3},-\frac{2}{3};\frac{8}{3};-\frac{1}{-1+(-1)^{2/3}},\frac{1}{1+\sqrt[3]{-1}}\right) $$ where $F_1$ represents the AppellF1 function.
However, on a TI-Nspire CX CAS, the same integral evaluates to: $$\frac{500\pi}{9\sqrt3}$$
That's a much nicer looking answer!
Both of these have the same numerical value of about $100.767$, which tells me that both answers appear to be equivalent - but is it possible to get the CX's more concise answer in Mathematica? I've tried wrapping each of these functions around Mathematica's answer, but none of them have worked:
RootReduce
FullSimplify
FunctionExpand
ToRadicals
ComplexExpand
adding Assumptions -> x \[Element] Reals to the Integrate function
All of these seem to keep the F1 function in place, sometimes changing the arguments slightly but still keeping the F1 function there, more or less the same. If it is possible, how could I get the simpler answer in Mathematica? Thanks!
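For what it's worth, here is a side calculation of my own (not from the original post) showing where the closed form comes from: the substitution $x = 5u^{1/3}$ turns the integral into a Beta function,
$$\int_0^5 (125-x^3)^{2/3}\,dx = \frac{125}{3}\int_0^1 u^{-2/3}(1-u)^{2/3}\,du = \frac{125}{3}B\left(\tfrac{1}{3},\tfrac{5}{3}\right) = \frac{250}{9}\,\Gamma\left(\tfrac{1}{3}\right)\Gamma\left(\tfrac{2}{3}\right) = \frac{500\pi}{9\sqrt{3}},$$
using $\Gamma(5/3)=\frac{2}{3}\Gamma(2/3)$ and the reflection formula $\Gamma(1/3)\Gamma(2/3)=\pi/\sin(\pi/3)$. This matches the TI-Nspire result, so one could at least cross-check Mathematica's answer against 125/3 Beta[1/3, 5/3] numerically.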
|
Good evening! I'm currently doing an internship at the Archives Nationales of France and I encountered a situation I wanted to solve using graphs...
I. The dusty situation
We want to optimize the arrangement of the books in my library according to their height in order to minimize their archiving cost. The height and thickness of the books are known. We already arranged the books in ascending order of height $H_1,H_2,\dots,H_n$ (I don't know if it was the best thing but... that's the way we did it). Knowing each book's thickness, we can determine for each $H_i$ class the total shelf length necessary for its arrangement, call it $L_i$ (for example, the books that are $H_i = 23\,\mathrm{cm}$ tall might have total thickness $L_i = 300\,\mathrm{cm}$).
The library can custom manufacture shelves, specifying the desired length and height (no problem with depth). A shelf of height $H_i$ and length $x_i$ costs $F_i+C_ix_i$, where $F_i$ is a fixed cost and $C_i$ is the cost of the shelf per unit length.
Note that a shelf of height $H_i$ can be used to store books of height $H_j$ with $j\leq i$. We want to minimize the cost.
My tutor suggested I model this problem as a path-finding problem. The model might involve $n+1$ vertices indexed from $0$ to $n$. My mentor suggested I work out the conditions for the edges to exist, what each edge means, and how to work out the valuation $v(i,j)$ associated to the edge $(i,j)$. I would also be OK with other solutions as well as insights.
For instance, for the Convention (a dark period of French history), we have such an array:
\begin{array}{|c|rrrr} i & 1 & 2 & 3 & 4\\ \hline H_i & 12\,\mathrm{cm} & 15\,\mathrm{cm} & 18\,\mathrm{cm} & 23\,\mathrm{cm}\\ L_i & 100\,\mathrm{cm} & 300\,\mathrm{cm} & 200\,\mathrm{cm} & 300\,\mathrm{cm} \\ \hline F_i & 1000€ & 1200€ & 1100€ & 1600€ \\ C_i & 5€/\mathrm{cm} & 6€/\mathrm{cm} & 7€/\mathrm{cm} & 9€/\mathrm{cm}\\ \end{array}
II. The assumptions of a trainee bookworm
I think I have to use one of the algorithms of Dijkstra, Bellman or Bellman-Kalaba... I'm trying to find out which one in the following subsections.
1. Conditions
We have a pathfinding problem between a vertex $0$ and a vertex $n$; $n$ must be reachable from $0$ (that is to say, a path (or a walk) must exist between $0$ and $n$).
2. What to compute (updated 25/10/2015)
Work still in progress, as I don't yet know which vertices and which edges to model... My best guess:
I think we get rid of at least one type of shelf every time we find a shortest path from the array, but that's only my assumption... ;)
I think the best way to model how to buy shelves and store our books must look like the following graph (but, please, feel free to criticize my method! ;))
vertices:
$i\in[1,4]$ are shelf types we can use to store our books. $0$ is the state where no book is stored. Using this vertex allows me to use each cost formula (edges).
edges: $F_i+C_ix_i,\ i\in[1,4]$ are the costs of using a type of shelf. For instance: $F_1+C_1x_1$ from $0$ is the cost of using only type 1 shelves to store our parchments, manuscripts...
Yet, from here I don't know how to create my shortest path problem.
Indeed, I would not know where I would have stowed all my books.
This leads me to another idea...
another idea...
Here, I am searching for the shortest path from a given vertex to the $0$ state; that is to say, knowing that the tallest document is of type $i$, I am searching for the cheapest way to arrange my documents.
vertices:
$i\in[1,4]$ are shelf types we can use to store our books. $0$ is the state where all books are stored. Using this vertex allows me to use each cost formula (edges).
edges: $F_i+C_ix_i,\ i\in[1,4]$ are the costs of using a type of shelf. For instance: $F_1+C_1x_1$ from $3$ is the cost of using type 1 shelves after using type 3 shelves to store our parchments, manuscripts...
Yet, I don't know where to put $F_4+C_4x_4$.
3. How to compute
I think that we have to start with the tallest shelves, since we can then also store the smaller books in them...
Do
We take $L_n$ cm of books of height $H_{i=n}$ in a shelf of their height, plus $z$ cm of the $H_{i=n-1}$ class, until it becomes more expensive than taking the $H_{i=n-1}$ shelf; then $i=i-1$
While $i>0$
Finally, I don't know how to make $x$ vary...
That is to say, how to choose to put $x_i$ documents on a type $4$ or a type $3$ shelf, for instance.
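Not part of the original question, but here is a minimal sketch (in Python, with the table above as data) of the modeling I believe the tutor hints at: vertex $i$ means "the $i$ smallest height classes are already shelved", and the edge $(i,j)$ for $i<j$ means buying one shelf of height $H_j$ for the classes $i+1,\dots,j$, with valuation $v(i,j)=F_j+C_j(L_{i+1}+\cdots+L_j)$. The cheapest arrangement is then a shortest path from $0$ to $n$ in this DAG, computed below by dynamic programming (Bellman-style relaxation on a DAG).

H = [12, 15, 18, 23]          # heights (cm), ascending
L = [100, 300, 200, 300]      # total thickness per height class (cm)
F = [1000, 1200, 1100, 1600]  # fixed cost per shelf type (EUR)
C = [5, 6, 7, 9]              # cost per cm of shelf length (EUR/cm)

n = len(H)
INF = float("inf")
best = [0.0] + [INF] * n      # best[i] = min cost to shelve classes 1..i
for j in range(1, n + 1):
    length = 0
    for i in range(j - 1, -1, -1):   # classes i+1..j share one shelf of height H[j-1]
        length += L[i]               # L[i] is the thickness of class i+1 (0-indexed)
        best[j] = min(best[j], best[i] + F[j - 1] + C[j - 1] * length)

print(best[n])  # minimum archiving cost for the example data (9600 EUR here)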
|
In the last few days I thought a lot about (fully) time-constructible functions and I will present what I found out by answering Q1 and Q3. Q2 seems too hard.
Q3:
Kobayashi, in his article (the reference is in the question), proved that a function $f:\mathbb{N}\rightarrow\mathbb{N}$, for which there exists an $\epsilon>0$ s.t. $f(n)\geq (1+\epsilon)n$, is fully time-constructible iff it is computable in $O(f(n))$ time (note that it is irrelevant whether the input or output is in unary/binary since we can transform between these two representations in linear time). This makes the following functions fully time-constructible: $2^n$, $2^{2^n}$, $n!$, $n\lfloor \log n \rfloor$, all polynomials $p$ over $\mathbb{N}$ s.t. $p(n)\geq (1+\epsilon)n$ ... Kobayashi also proved full time-constructibility for some functions that grow more slowly than $(1+\epsilon)n$, like $n+\lfloor\lfloor\log n\rfloor^q\rfloor$ for $q\in\mathbb{Q}^+$ ...
To continue with examples of fully time-constructible functions, one can prove that if $f_1$ and $f_2$ are fully time-constructible, then $f_1+f_2$, $f_1f_2$, $f_1^{f_2}$ and $f_1\circ f_2$ are also fully time-constructible (the latter follows directly from Theorem 3.1 in Kobayashi). All this together convinces me that many nice functions are indeed fully time-constructible.
It is surprising that Kobayashi did not see a way to prove fully time-constructibility of the (nice) function $\lfloor n\log n\rfloor$ (and neither do I).
Let us also comment on the definition from the Wikipedia article:
A function $f$ is time-constructible if there exists a Turing machine $M$ which, given a string $1^n$, outputs $f(n)$ in $O(f(n))$ time. We see that this definition is equivalent to our definition of full time-constructibility for functions $f(n)\geq (1+\epsilon)n$.
Q1:
This question has a really interesting answer. I claim that if all time-constructible functions are fully time-constructible, then $EXP-TIME=NEXP-TIME$. To prove that, let us take an arbitrary problem $L\in NEXP-TIME$, $L\subseteq\{0,1\}^*$. Then there exists a $k\in\mathbb{N}$ s.t. $L$ can be solved by an NDTM $M$ in $2^{n^k-1}$ steps. For simplicity, we can assume that at each step $M$ goes into at most two different states. Now define the function $$f(n)=\left\{\begin{array}{ll} 8n+2 & \mbox{if }\left(\mbox{first } \lfloor\sqrt[k]{\lfloor\log n\rfloor+1}\rfloor\mbox{ bits of } bin(n)\right)\in L\\ 8n+1 & \mbox{else} \end{array}\right.$$
I claim that $f$ is time-constructible. Consider the following deterministic Turing machine $T$:
on input $w$ of length $n$, it computes $\left(\textrm{first }\lfloor\sqrt[k]{\lfloor\log n\rfloor+1}\rfloor\textrm{ bits of }bin(n)\right)$ in $O(n)$ time; then it simulates $M$ on these bits, where the bits of $w$ determine which (formerly nondeterministic) choices to take; it accepts iff $M$ accepts using the choices given by $w$.
Note that the length of $w$ ($=n$) is enough that it determines all nondeterministic choices, since $M$ on input $\left(\textrm{first }\lfloor\sqrt[k]{\lfloor\log n\rfloor+1}\rfloor\textrm{ bits of }bin(n)\right)$ makes at most $n$ steps.
We can make $T$ run in at most $8n+1$ steps. Now the following Turing machine proves that $f$ is time-constructible:
on input $w$ of length $n$, run $T$ and count steps in parallel so that exactly $8n$ steps are done. If $T$ rejected or would reject in the next step, go to a halting state in the next step. Else, make one more step and then go to a halting state.
Now suppose that $f$ is fully time-constructible. We will prove that this leads to $EXP-TIME=NEXP-TIME$.
The following algorithm solves $L$:
on input $x$, let $n$ be the number with binary representation $x00\ldots 0$ ($|x|^{k}-|x|$ zeros, so that $bin(n)$ has $|x|^k$ bits). It follows that $x=\left(\textrm{first }\lfloor\sqrt[k]{\lfloor\log n\rfloor+1}\rfloor\textrm{ bits of }bin(n)\right)$. Compute $f(n)$ in time $f(n)$ and check whether it is divisible by 2.
This algorithm runs in exponential time and solves $L$. Since $L\in NEXP-TIME$ was arbitrary, $EXP-TIME=NEXP-TIME$.
|
Anyone who has had to prepare for an algebra qualifying exam is familiar with the "Classify groups of order $X$" question.
To illustrate my general question, which I postpone until the end, consider the following simple example in which I classify groups $G$ of order $3 \cdot 7$. Let $H$ and $K$ be the $7$- and $3$-Sylow subgroups, respectively. By Sylow's theorems, we find easily that $H$ is normal and $K$ is either normal or one of seven conjugate copies. Also $H \cong \mathbf{Z}_7$ and $K \cong \mathbf{Z}_3$. Let $x$ be a generator of $\mathbf{Z}_3$ and let $y$ be a generator of $\mathbf{Z}_7$, both viewed multiplicatively. Now $G$ is a semidirect product of $H$ and $K$, hence the possible structures of $G$ are determined by the possible group homomorphisms $$ \mathbf{Z}_3 \to \mathrm{Aut}(\mathbf{Z}_7) \cong \mathbf{Z}_6. $$ Such a group homomorphism is determined by the image of $x$; since the order of this image must divide the order of $x$, we see $x$ is either sent to the identity automorphism $\mathbf{1}$ or an automorphism of order three.
We find that a generator of $\mathbf{Z}_6$ is the automorphism $\alpha \colon y \mapsto y^3$. Therefore, there are three possible group homomorphisms, determined by sending $x$ to $\mathbf{1}$, to $\alpha^2 \colon y \mapsto y^2$, or to $\alpha^4 \colon y \mapsto y^4$. It follows there are at most three possible groups of order $21$, generated by $x$ and $y$ and subject to the relations $x^3 = y^7 = 1$ as well as one of the following commutativity relations: $$xy = yx, \;\;\;\;\; xy = y^2 x, \;\;\;\;\; xy = y^4 x. $$ All such groups exist by employing the abstract construction of the semidirect product.
What follows is always the most subtle part of the analysis.
Which of these groups are duplicates?
The first is the case $G \cong \mathbf{Z}_3 \times \mathbf{Z}_7$ which is clearly distinct from the remaining two. Let the second group be denoted $G_2$ and the third $G_4$. If $G_2$ were isomorphic to $G_4$, then there would have to exist $X, Y \in G_4$ of orders three and seven, respectively, and satisfying $XY = Y^2 X$ (or $X, Y \in G_2$ satisfying $XY = Y^4 X$). And it is easy to see that, in fact, this condition is sufficient for $x \mapsto X$, $y \mapsto Y$ to determine an isomorphism $G_2 \cong G_4$. Note that both $G_2$ and $G_4$ have seven $3$-Sylow subgroups (otherwise they would be Cartesian products). So there are $14$ candidates for $X$ and $6$ candidates for $Y$.
Morally, at least in my opinion, these groups should be isomorphic, because the only difference in their definition occurs when we chose between the two generators $\alpha^2$ and $\alpha^4$ of the cyclic subgroup $\mathbf{Z}_3 \subset \mathrm{Aut}(\mathbf{Z}_7) \cong \mathbf{Z}_6$, and these generators are 'essentially the same'.
This is indeed the case, but the proof feels 'lucky'. One finds by calculation that no map of the form $X = x$ and $Y = y^k$ satisfies $XY = Y^2 X \in G_4$. But this is satisfied by taking $X = x^2$ and $Y = y$: $$X Y = x^2 y = x y^4 x = y^{16} x^2 = y^2 x^2 = Y^2 X \in G_4. $$ We conclude there are two groups of order $21$ up to isomorphism.
To give an example of how this problem becomes more complex, if instead one were computing groups of order $3 \cdot 7 \cdot 13$, then one must determine group homomorphisms $$\mathbf{Z}_3 \to \mathrm{Aut}(\mathbf{Z}_7 \times \mathbf{Z}_{13}) \cong \mathbf{Z}_6 \times \mathbf{Z}_{12}. $$ (Don't forget that the automorphism group of a direct product is the direct product of the automorphism groups when the orders of the factors are coprime!) If $\alpha$ generates $\mathbf{Z}_6$ and $\beta$ generates $\mathbf{Z}_{12}$, then there are nine possible semidirect structures, corresponding to $x$ being sent to any of the following pairs: $$(\mathbf{1}, \mathbf{1}), \;\; (\alpha^2, \mathbf{1}), \;\; (\alpha^4, \mathbf{1}), \;\; (\mathbf{1}, \beta^4), \;\; (\mathbf{1}, \beta^8), \;\; (\alpha^2, \beta^4), \;\; (\alpha^4, \beta^4), \;\; (\alpha^4, \beta^8), \;\; (\alpha^2, \beta^8). $$ Which of these are isomorphic?
Hopefully at this point my general question is clear. First, in words:
In considering semidirect products $G \cong H \rtimes K$ is there a (natural?) proof that shows the choice of generator(s) of $\mathrm{Aut}(H)$ affects the resulting group only up to the choice of 'non-equivalent' generators?
Here is a precise phrasing for which I would be thrilled to receive an answer:
Question: Prove or disprove. Let $p$ and $q$ be primes such that $q$ divides $p-1$. Consider semidirect products $G_\rho = \mathbf{Z}_p \rtimes_\rho \mathbf{Z}_q$ determined by group homomorphisms $$ \rho \colon \mathbf{Z}_q \to \mathrm{Aut}(\mathbf{Z}_p) \cong \mathbf{Z}_{p-1}. $$ Let $x$ multiplicatively generate $\mathbf{Z}_q$ and let $\alpha$ multiplicatively generate $\mathbf{Z}_{p-1}$. Setting $n = (p-1)/q$ the generators for $\mathbf{Z}_q \subset \mathbf{Z}_{p-1}$ are $\alpha^{nk}$ where $k = 1, \dots, q-1$. Let $\rho_k$ denote the group homomorphism determined by $x \mapsto \alpha^{nk}$. Then $G_{\rho_k} \cong G_{\rho_\ell}$ for all $1 \leq k, \ell \leq q-1$.
|
Consider the following transition matrix
$$ T= \left[ {\begin{array}{cccc} \frac{1}{3} & \frac{1}{4} & \frac{1}{5} & \frac{1}{6}\\ \frac{1}{3} & \frac{1}{4} & \frac{1}{5} & \frac{2}{6}\\ \frac{1}{3} & \frac{1}{4} & \frac{3}{5}& 0\\ 0 & \frac{1}{4} & 0& \frac{3}{6}\\ \end{array} } \right] $$
of a Markov chain process $$P_{t+1}=TP_t$$
The steady state is given by
$$ P^*=\frac{1}{137} \left[ {\begin{array}{c} 33 \\ 36 \\ 50 \\ 18 \\ \end{array} } \right] $$
The detailed balance condition requires that $$T_{ij}P_j^*=T_{ji}P_i^* \quad\text{for every pair }(i,j)$$ (with no summation, given the column-stochastic convention $P_{t+1}=TP_t$);
since for example $T_{24}P^*_4\neq T_{42}P_2^*$, the chain process does not satisfy the detailed balance condition.
I want to understand if the system is in thermal equilibrium. Obviously the system is not reversible, since there is no detailed balance, but is it a sufficient condition?
can a system be irreversible and in thermal equilibrium?
If so, what is the mathematical condition that needs to be satisfied in order to be in thermal equilibrium?
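As a side note (my own numerical check, not part of the question), both the stationarity of $P^*$ and the failure of detailed balance are easy to verify:

import numpy as np

# Check the stationary vector and the pairwise detailed-balance condition
# T_ij P*_j == T_ji P*_i (column-stochastic convention P_{t+1} = T P_t).
T = np.array([
    [1/3, 1/4, 1/5, 1/6],
    [1/3, 1/4, 1/5, 2/6],
    [1/3, 1/4, 3/5, 0.0],
    [0.0, 1/4, 0.0, 3/6],
])
P = np.array([33, 36, 50, 18]) / 137

print(np.allclose(T @ P, P))      # True: P* is stationary
flux = T * P[None, :]             # flux[i, j] = T_ij * P*_j
print(np.allclose(flux, flux.T))  # False: detailed balance fails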
|
Look at the right-most figure in the diagram. The small triangle defines all the major dimensions of the hexagon. Assuming the user measures the edge-to-edge dimension $c$, he or she can calculate the rest of the measurements. A simple right-triangle expression gives the relationship between $s/2$ and $c/2$, and is readily solved for $s$ in terms of $c$,
$$ \frac{s}{2} = \frac{c}{2}\tan\left(30°\right) $$ $$ s = c\tan\left(30°\right) = c\frac{\sqrt{3}}{3}. $$
Similarly, the Pythagorean formula gives the relationship between $c$ and the two other dimensions,
$$ d = \sqrt{ c^2 + s^2} = \frac{2c}{\sqrt{3}}.$$
Substituting the expressions for $s$ and $d$ and simplifying produces
$$ \text{long pitch} = s + \frac{d-s}{2} = \frac{s+d}{2} = \frac{\sqrt{3}}{2}c \approx 0.87c. $$
A blanket $n$ hexagons by $m$ hexagons will be approximately
$$ 0.87c\,n \times m\,c, $$ where the $m$ and $n$ dimensions are as shown in the next figure.
If a blanket will be wider at the ends, as shown in the figure, then $n$ will be odd. The total number of hexagons will then be
$$ \text{number of hexagons} = m \frac{n+1}{2} + (m-1)\frac{n-1}{2}. $$
In the example there are $(n+1)/2=5$ tall columns and $(n-1)/2=4$ short columns. So there are 6×5=30 hexagons in the tall columns and (6-1)×4=20 hexagons in the short columns, or 50 hexagons overall. Using the equation for the blanket size and assuming $c=10$ inches, the approximate dimensions of the finished blanket are 9×10×0.87 by 6×10, or 78.3 by 60 inches.
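A small helper (my own Python sketch of the formulas above; the function and parameter names are mine) that reproduces the worked example:

import math

def blanket(c, n, m):
    # Approximate blanket size and hexagon count for an n-by-m layout (n odd).
    long_pitch = math.sqrt(3) / 2 * c  # the ~0.87c column pitch derived above
    width, height = long_pitch * n, c * m
    count = m * (n + 1) // 2 + (m - 1) * (n - 1) // 2
    return width, height, count

print(blanket(10, 9, 6))  # ~ (77.9, 60, 50): about 78 by 60 inches, 50 hexagons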
|
I feel that as it was my comment, I am obliged to answer this :-).
First of all, birational equivalence is really a geometric notion. As far as I know, there is no analogue for groups, rings or fields and therefore the cryptographic relevance is limited. It becomes relevant when speaking of geometric objects: for example, elliptic curves.
Given these geometric objects, we want to define what it means to be "the same". The usual terminology is that given two curves $E_1$ and $E_2$, they are "the same" when they are isomorphic. There is another way to equate objects, and that is by saying that they are "almost the same". This is what a birational equivalence does: two curves $E_1$ and $E_2$ are birationally equivalent when there is a map $\phi:E_1\rightarrow E_2$ between them which is defined at every point of $E_1$
except a small set of exceptions and there is an inverse map $\phi^{-1}:E_2\rightarrow E_1$ which is defined at every point of $E_2$ except a small set of exceptions. This definition is very close to that of an isomorphism, except for the fact that we allow some exceptions.
To make this more concrete, you could think of an isomorphism as a tuple of polynomials:
$$\psi:E_1\rightarrow E_2,\quad (x,y)\mapsto (f(x,y),g(x,y)),$$where $f,g$ are polynomials in $x,y$. The inverse is also defined in terms of polynomials.
A birational map can be thought of as a tuple of
fractions of polynomials, say
$$\phi:E_1\rightarrow E_2,\quad (x,y)\mapsto \left(\frac{f_1(x,y)}{f_2(x,y)},\frac{g_1(x,y)}{g_2(x,y)}\right).$$
This is defined at all points $(x,y)$ except for the ones where $f_2(x,y)=0$ or $g_2(x,y)=0$. The inverse is also a fraction of polynomials, and can therefore be undefined at certain points.
Now let's make this even more concrete, and follow the Ed25519 paper. The Curve25519 curve is defined by $$E_1:v^2=u^3+486662u^2+u.$$It is birationally equivalent to the Edwards curve given by$$E_2:x^2+y^2=1+\frac{121665}{121666}x^2y^2.$$The birational equivalence is given by the map $\phi:E_1\rightarrow E_2$ defined by $$\phi(u,v)=\left(\frac{\sqrt{486664}u}{v},\frac{u-1}{u+1}\right).$$Notice that it is undefined for $v=0$ or $u=-1$, and therefore is not an isomorphism. The inverse map is defined by$$\phi^{-1}(x,y)=\left(\frac{1+y}{1-y},\frac{\sqrt{486664}(1+y)}{x(1-y)}\right),$$that is, $v=\sqrt{486664}\,u/x$ with $u=\frac{1+y}{1-y}$. Again, it is undefined for $y=1$ or $x=0$.
Finally, consider the twisted Edwards curve $$E_3:-x^2+y^2=1-\frac{121665}{121666}x^2y^2.$$There is a map $\psi:E_2\rightarrow E_3$ defined by $\psi(x,y)=\left(ix,y\right)$, assuming that $i$ is a square root of $-1$. This is clearly defined everywhere, and is an isomorphism.
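Not from the original answer, but the birational map $\phi$ can be checked symbolically; a minimal Python/SymPy sketch verifying that the image of a generic $E_1$ point satisfies the $E_2$ equation:

from sympy import symbols, sqrt, simplify, Rational

u = symbols('u', positive=True)
v = sqrt(u**3 + 486662*u**2 + u)   # generic point on E1: v^2 = u^3 + 486662 u^2 + u

# The map phi from the answer above
x = sqrt(486664) * u / v
y = (u - 1) / (u + 1)

# The E2 relation x^2 + y^2 - 1 - (121665/121666) x^2 y^2 should vanish identically
d = Rational(121665, 121666)
print(simplify(x**2 + y**2 - 1 - d * x**2 * y**2))  # prints 0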
|
What new approach did Yitang Zhang try & what did the experts miss in the first place?
Yes, it is a good question as to why (say) FI did not hit upon such a result, as the two major glue components, dispersion à la BFI and beating the square-root barrier akin to Friedlander/Iwaniec, are due to them. As Zhang puts it (last paragraph of section 2, page 7), he saves a $\sqrt r$ factor in a Kloosterman-type bound, and can take $r$ as a small power (well, I think he should take it larger than he does, after review).
Most everything else in the paper is rather "standard", as the idea of restricting to integers/moduli which have no small/large factors is common, and the dispersion method is in BFI; maybe in S10 Zhang has to work a little to get his conditions on congruence classes to roll through. But then the Type III estimate (trilinear) in sections 13 and 14 is the heart of why he wins. Indeed, he handles each $d$ separately, rather than on average (as dispersion or the large sieve would). Conceptually again he brings forth a Weyl shift to copy a sum many times, then turns to Fourier techniques. The extra flexibility of factoring $d=qr$ is not stressed, for he uses the Chinese Remainder Theorem in repair, and in fact the Ramanujan sums pop out too. See the bottom of page 52, at 14.13 and below, to the top line of page 53, where the Birch-Bombieri bound is applied. See in $J(m_1,m_2)$, the part from $r$ is a Ramanujan sum, so the double $r$-sum is bounded in essence by $r$, not $r^{3/2}$. Again the preceding Fourier analysis has technique, but the idea is already bookish.
So I repeat, the main advance was not to apply Deligne to something like $Z(k;m_1,m_2)$ below 14.12 on page 52 directly with modulus $d$, but to first peel off a factor, small perhaps but useful, as $d=qr$, leaving $q$ to geometry and $r$ to Ramanujan after spinning out the $N_3'$ sum over this modulus. Well, that is my word; I don't say I understand it all, not even philosophically why this line should give the win.
The Friedlander/Iwaniec paper: http://www.jstor.org/stable/1971175
PS. It could be useful to have a rewrite of sections 13 and 14, independent of the rest of the work, for he chooses parameters there for his purpose, but this vitiates the generality. They are independent from the whole.
Or again, look at $P_2$ near the bottom of page 48. This should be of size $d_1^{3/2}K^2$ if you apply direct reasoning, but Zhang first rolls out $n$ modulo $r$. It is still mysterious to me why this avails, factoring $V$ into $W\cdot C$ in 14.7, the latter a Ramanujan sum.
Let me try again, and briefly sketch the whole idea of Sections 13 and 14.
Zhang wants to estimate $|\Delta(\gamma,d,c)|$ where $\gamma=\alpha\star\chi_{N_3}\star\chi_{N_2}\star\chi_{N_1}$ is a triple convolution with $N_1\ge N_2\ge N_3$ of decent size, say $N_3\ge x^{1/4-6\omega}$. The characteristic functions $\chi_N$ are for an interval say $N$ to $N+N/(\log N)^B$. Here we have the standard discrepancy $\Delta$ defined as$$\Delta(\gamma,d,c)=\sum_{n\sim x\atop n\equiv c (d)}\gamma(n)-{1\over\phi(d)}\sum_{n\sim x\atop (n,d)=1}\gamma(n).$$Most importantly, $d=qr$ with $r$ convenient. The estimate $|\Delta(\gamma,d,c)|\ll x^{1-\kappa}/d$ is desired for some positive $\kappa$.
Zhang replaces $\chi_{N_1}$ by a smooth approximant. This is standard; there are various versions of this in the field, the idea being that if a function goes from 0 to 1 over an interval of length $Y$, you can control its derivatives as powers of $Y$. Then one executes the inner sum in $\Delta$ over this variable, and replaces the sum over the smoothed function by its Fourier transform. This allows the main terms in $\Delta$ to cancel against the frequency 0 contribution, leaving the higher frequencies to deal with. See middle of page 45 and following. Copying,$$\sum_{n\equiv c (d)}\gamma^\star(n)=\sum_{(m,d)=1}\alpha(m)\sum_{n_3\sim N_3\atop (n_3,d)=1}\sum_{n_2\sim N_2\atop (n_2,d)=1}\sum_{mn_3n_2n_1\equiv c (d)}f(n_1),$$and the inner sum is$${1\over d}\sum_{|h|\le H} \hat f(h/d)e_d(-ch\overline{mn_3n_2})+O()$$where $H=d/N_1$ essentially. The sum over $m$ handles itself; the inner part will yield the cancellation.
So Zhang's goal is to obtain the estimate$$\sum_{1\le h\le H}\sum_{n_3\sim N_3\atop (n_3,d)=1}\sum_{n_2\sim N_2\atop(n_2,d)=1}\hat f(h/d)e_d(-ch\overline{mn_2n_3})\ll x^{1-\kappa}/M.$$In fact, to use a Möbius inversion device that I omit below, we need this not just for $d$ near $\sqrt x$ (beyond the Bombieri-Vinogradov range), but also for divisors of $d$, so a wider range. When $d$ is small enough, or $N_2$ large enough to say it another way, a one-variable estimate suffices. Actually the trickiest case is when $N_1\sim N_2\sim N_3\sim x^{5/16-9\omega/2}$ (maybe this is not the exact $\omega$ multiplier, but 5/16 is right) and $d\sim x^{5/12-9\omega}$, where neither a one-variable method nor the ensuing one is going to suffice. But when $d\le N_1$ the estimate for the $H$-sum is empty, and when $d^{3/2}N_3\ll x^{1-\kappa}/M$ a one-variable bound on $n_2$ suffices. I digress. In fact, as noted per (2.4) in Friedlander/Iwaniec, a two-variable bound from Deligne can sometimes be applied (the inner double sum then being bounded by $\sqrt{d^2}$). Zhang does not do this extra step; I find it gives a suitable bound when $d^2\ll x^{1-\kappa}/M$.
Back to the main story. After an application of Möbius to insert a coprimality condition in the frequency variable $h$, Zhang then uses the idea of the Weyl shift. It is key that he shifts by multiples of $r$, this convenient factor of $d$. The idea of the Weyl shift is to copy a sum many times, only partially shifted by much less than its length. In the above, Zhang replaces $n_2$ by $n_2+hkr$ for $k$ up to some bound $K$, and then computes that the difference between the sum and the $hkr$-shifted sum is small (provided $K$ is small enough, of course). Then one wants to bound the average over the shifts$$N(d,k)=\sum_{1\le h\le H\atop (h,d)=1}\sum_{n_3\sim N_3\atop (n_3,d)=1}\sum_{n_2\sim N_2\atop(n_2+hkr,d)=1}\hat f(h/d)e_d(-ch\overline{m(n_2+hkr)n_3}).$$I simplified the formula on top of page 47 a bit, not including the Möbius step, so the above is not truly correct, but it gives a self-contained idea of how it goes. To state it again: the idea is that we want to bound $N(d,0)$, we know that $N(d,k)$ is close to $N(d,0)$ for small $k$, and we will establish a bound on average for ${1\over K}\sum_{k\sim K} N(d,k)$.
From Cauchy, and substituting $l\equiv \bar hn_2$ modulo $d$, one is left to estimate ($P_2$ of bottom page 48)$$\sum_{l (d)}|\sum_{k\sim K\atop (l+kr,d)=1}\sum_{n\sim N_3\atop (n,d)=1}e_d(b\overline{(l+kr)n})|^2.$$Staring at this, if you expand, the $l$-sum is over $d$ and the $n$-sums are essentially incomplete sums modulo the same, so expect $d^{3/2}K^2$. But the shift by a
multiple of $r$ will allow us to win. The idea is that $N_3$ exceeds $r$ by enough to allow sprawling $n=rn'+s$ over residue classes modulo $r$, and for this to be efficient. Now normally this should not gain, see bottom of page 49 with 14.6; Zhang wants to estimate$$\sum_{k_1\sim K}\sum_{k_2\sim K}\sum_{s_1\le r\atop (s_1,r)=1}\sum_{s_2\le r\atop (s_2,r)=1}\sum_{n_1\sim N_3/r}\sum_{n_2\sim N_3/r}\sum_{l (d)} e_d(b\overline{l(n_1r+s_1)}-b\overline{(l+kr)(n_2r+s_2)}).$$Again one doesn't expect to win, for although the $e_d$ will factor over $d=qr$ into $e_q()e_r()$, there will still be three sums over a variable modulo $r$, and $r^{3/2}$ shall appear. But the idea is that the Weyl shift was a multiple of $r$, so upon unwinding the CRT, the triple sum modulo $r$ is really a double sum of a Ramanujan sum. So that's why I typed out the innards of $e_d$ above.
Working through the technicalities with the Fourier transforms, this yields the result as needed. The key is that $r$ can be taken as a small power of $x$, and we win by $\sqrt r$, or really the fourth root after Cauchy, but this is enough.
|
After some thought, I think the answer is in fact NO, even for IND-1-CCA* and even for Shoup's OAEP+.
RSA-OAEP/OAEP+ work by taking a message $m$, producing a padding $p(m,r)$ and then encrypting this, so $c = f(p(m,r))$ where $f$ is RSA encryption, and
$f(u) = u^e \pmod{N}$ is deterministic. In fact, the whole point of OAEP(+) is to inject some entropy into ciphertexts which is required for IND-CPA and higher security.
ElGamal encryption is already randomised. If we try ElGamal-OAEP(+) we get $c = (g^r, y^r \cdot p(m, r'))$ where $y$ is the public key. Since ElGamal is homomorphic, this obviously fails even IND-1-CCA: consider an adversary who picks $m_0, m_1$, asks for a challenge ciphertext $c = (u, v)$ and then sets $c' = (u \cdot g^s, v \cdot y^s)$ for randomly chosen $s$. This is still a valid OAEP(+) ciphertext whatever the padding $p$ is (since we're only changing the "outer" randomness $r \mapsto r + s$) so the IND-1-CCA game will happily decrypt this and return $m_0$ or $m_1$ as desired.
This is assuming of course that you can map your padding function's range into the group over which you're doing ElGamal --- for ECC, this should be fine; for $\mathbb Z^\times_p$ groups it's harder. As an alternative one could consider hashed ElGamal-OAEP+ with $c = (g^r, H(y^r) \oplus p(m, r'))$ where $H$ is independent of the hash functions used in the OAEP+ padding $p$. My intuition is that this is still not IND-1-CCA secure, even though it doesn't have the homomorphic property anymore. Certainly if $H$ has some homomorphic properties itself then one should be able to do something like the above counterexample.
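To make the mauling concrete, here is a toy numeric sketch of my own (tiny parameters, textbook ElGamal, no padding details): re-randomising a ciphertext yields a different, valid ciphertext for the same plaintext, which is exactly what the decryption query above exploits.

p, g = 467, 2                    # toy group parameters, for illustration only
sk = 153
y = pow(g, sk, p)                # public key

m, r = 123, 77
u, v = pow(g, r, p), (pow(y, r, p) * m) % p             # encrypt m

s = 42                                                  # adversary's re-randomiser
u2, v2 = (u * pow(g, s, p)) % p, (v * pow(y, s, p)) % p

dec = lambda a, b: (b * pow(a, p - 1 - sk, p)) % p      # b / a^sk mod p, via Fermat
print((u2, v2) != (u, v), dec(u2, v2) == m)             # True True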
*IND-1-CCA: standard IND-CCA2 where you only get 1 decryption query after seeing the challenge ciphertext, instead of polynomially many.
|
You can convert to polar coordinates:
r[t_] := 2 Cos[t]^2
Integrate[r[t]^2/2, {t, 0, 2 Pi}]
yields $3\pi/2\approx 4.71239 $
or you can use Green's Theorem (with $\vec{F}=\{-y/2,x/2\}$):
Integrate[{-r[t] Sin[t], r[t] Cos[t]} . D[{r[t] Cos[t], r[t] Sin[t]}, t], {t, 0, 2 Pi}]/2
also yielding $3\pi/2$
or approximate using ImplicitRegion:
reg = ImplicitRegion[(x^2 + y^2)^3 <= 4 x^4, {{x, -2, 2}, {y, -2, 2}}]
RegionMeasure[DiscretizeRegion[reg, MaxCellMeasure -> {"Length" -> 0.01}]]
yields: 4.71238
See Kuba's comment below for a shorter ImplicitRegion solution:
Area@ImplicitRegion[(x^2. + y^2)^3. <= 4. x^4., {x, y}]
yields: 4.71239
|
Update: The MathJax Plugin for TiddlyWiki has a new home: https://github.com/guyru/tiddlywiki-mathjax Some time ago I came across MathJax, a nifty, Javascript based engine for displaying TeX and LaTeX equations. It works by “translating” the equation to MathML or HTML+CSS, so it works on all modern browsers. The result isn’t a raster image, like in most LaTeX solutions (e.g. MediaWiki), so it’s scales with the text around it. Furthermore, it’s quite easy to integrate as it doesn’t require any real installation, and you could always use MathJax’s own CDN, which makes things even simpler.
I quickly realized MathJax will be a perfect fit for TiddlyWiki which is also based on pure Javascript. It will allow me to enter complex formulas in tiddlers and still be able to carry my wiki anywhere with me, independent of a real TeX installation. I searched the web for an existing MathJaX plugin for TiddlyWiki but I came up empty handed (I did find some links, but they referenced pages that no longer exist). So I regarded it as a nice opportunity to begin writing some plugins for TiddlyWiki and created the MathJaxPlugin which integrates MathJax with TiddlyWiki.
As I don’t have an online TiddlyWiki, you’ll won’t be able to import the plugin, instead you’ll have to install it manually (which is pretty simple).
Start by creating a new tiddler named MathJaxPlugin, and tag it with systemConfig (this tag will tell TiddlyWiki to execute the JS code in the tiddler, thus making it a plugin). Now copy the following code into the tiddler content:
/***
|''Name:''|MathJaxPlugin|
|''Description:''|Enable LaTeX formulas for TiddlyWiki|
|''Version:''|1.0.1|
|''Date:''|Feb 11, 2012|
|''Source:''|http://www.guyrutenberg.com/2011/06/25/latex-for-tiddlywiki-a-mathjax-plugin|
|''Author:''|Guy Rutenberg|
|''License:''|[[BSD open source license]]|
|''~CoreVersion:''|2.5.0|
!! Changelog
!!! 1.0.1 Feb 11, 2012
* Fixed interoperability with TiddlerBarPlugin
!! How to Use
Currently the plugin supports the following delemiters:
* """\(""".."""\)""" - Inline equations
* """$$""".."""$$""" - Displayed equations
* """\[""".."""\]""" - Displayed equations
!! Demo
This is an inline equation \(P(E) = {n \choose k} p^k (1-p)^{ n-k}\) and this is a displayed equation:
\[J_\alpha(x) = \sum_{m=0}^\infty \frac{(-1)^m}{m! \, \Gamma(m + \alpha + 1)}{\left({\frac{x}{2}}\right)}^{2 m + \alpha}\]
This is another displayed equation $$e=mc^2$$
!! Code
***/
//{{{
config.extensions.MathJax = {
mathJaxScript : "http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML",
// uncomment the following line if you want to access MathJax using SSL
// mathJaxScript : "https://d3eoax9i5htok0.cloudfront.net/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML",
displayTiddler: function(TiddlerName) {
config.extensions.MathJax.displayTiddler_old.apply(this, arguments);
MathJax.Hub.Queue(["Typeset", MathJax.Hub]);
}
};
jQuery.getScript(config.extensions.MathJax.mathJaxScript, function(){
MathJax.Hub.Config({
extensions: ["tex2jax.js"],
"HTML-CSS": { scale: 100 }
});
MathJax.Hub.Startup.onload();
config.extensions.MathJax.displayTiddler_old = story.displayTiddler;
story.displayTiddler = config.extensions.MathJax.displayTiddler;
});
config.formatters.push({
name: "mathJaxFormula",
match: "\\\\\\[|\\$\\$|\\\\\\(",
//lookaheadRegExp: /(?:\\\[|\$\$)((?:.|\n)*?)(?:\\\]|$$)/mg,
handler: function(w)
{
switch(w.matchText) {
case "\\[": // displayed equations
this.lookaheadRegExp = /\\\[((?:.|\n)*?)(\\\])/mg;
break;
case "$$": // inline equations
this.lookaheadRegExp = /\$\$((?:.|\n)*?)(\$\$)/mg;
break;
case "\\(": // inline equations
this.lookaheadRegExp = /\\\(((?:.|\n)*?)(\\\))/mg;
break;
default:
break;
}
this.lookaheadRegExp.lastIndex = w.matchStart;
var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
createTiddlyElement(w.output,"span",null,null,lookaheadMatch[0]);
w.nextMatch = this.lookaheadRegExp.lastIndex;
}
}
});
//}}}
After saving the tiddler, reload the wiki and the MathJaxPlugin should be active. You can test it by creating a new tiddler with some equations in it:
This is an inline equation \(P(E) = {n \choose k} p^k (1-p)^{ n-k}\) and this is a displayed equation:
\[J_\alpha(x) = \sum_{m=0}^\infty \frac{(-1)^m}{m! \, \Gamma(m + \alpha + 1)}{\left({\frac{x}{2}}\right)}^{2 m + \alpha}\]
Which should result in the tiddler that appears in the above image.
Update 2011-08-19: Removed debugging code from the plugin.
Changelog: 1.0.1 (Feb 11, 2012): Applied Winter Young's fix for interoperability with other plugins (mainly TiddlerBarPlugin).
|
Homework Statement: An object is undergoing circular motion in a horizontal plane at fixed radius ##r = 0.12\,\rm{m}##. The radial acceleration is ##2+2t## m/s². Calculate the arc length the object sweeps through in the first 2 seconds. Homework Equations: -
From what I understand,
##a_{r} = v_{tan}^2 /r##
##a_{r} = (r\omega)^2 /r##
##a_{r} = r\omega^2##
##\omega^2 = \frac{a_{r}}{r}##
##\omega^2 = \frac{2+2t}{0.12}##
##\omega = \sqrt{\frac{2+2t}{0.12}}##
##s =\int_{0}^{2} \sqrt{\frac{2+2t}{0.12}}\,dt##
After integrating, I still can't seem to get the correct answer which is 1.37m
Are my concepts wrong or..?
Thanks
|
Congratulations Aleksander from Gdynia Bilingual High School No 3, Poland, for your excellent solution to the Cocked Hat problem. As you will see, the solution hinges on simplification of an algebraic expression and solving a quadratic equation.
Here is Aleksander's solution.
First we will rearrange the expression from implicit to explicit form. The given equation is
$$(x^2 + 2ay -a^2)^2 = y^2(a^2 - x^2)$$
Expanding the square on the LHS gives
$$4a^2y^2 + 4ay(x^2-a^2) + (x^2 - a^2)^2 = y^2(a^2 -x^2).$$
Collecting like terms gives the quadratic equation
$$y^2(3a^2+x^2)+ 4ay(x^2-a^2) + (x^2 - a^2)^2 = 0.$$
The discriminant is
$$\Delta = {16a^2(x^2 - a^2)^2-4(3a^2+x^2)(x^2-a^2)^2}= 4(x^2 -a^2)^2(a^2-x^2) = 4(a^2 - x^2)^3.$$
Solving this equation we get:
$$\eqalign{ y &= {-4a(x^2 - a^2)\pm \sqrt{4(a^2 -x^2)^3}\over 2(3a^2+x^2)} \cr &= {(x^2-a^2)(-2a\pm\sqrt{(a^2-x^2)})\over (3a^2+x^2)} }$$
For each value of $a$ there are two branches of the graph, given by taking the +ve and -ve signs in this equation. Values of $y$ are only defined for the interval $-a \leq x \leq a$, so this is the domain of $f(x)$. The zeros of the function are given by $x = a$ and $x = -a$.
Additionally, the function is symmetric with respect to the y-axis, because $x$ always appears in even powers. Graphs for parameters $a = n$ and $a = -n$ are symmetric to each other with respect to the x-axis, that is $y(n) = -y(-n)$. Proof:
$$ {(x^2-n^2)(-2n\pm \sqrt{(n^2-x^2)})\over (3n^2+x^2)} =-{(x^2-(-n)^2)(-2(-n)\mp \sqrt{((-n)^2-x^2)})\over(3(-n)^2+x^2)}$$
Here are some graphs of the function:
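The original images are not reproduced here; a minimal Python/matplotlib sketch (mine, not part of Aleksander's solution) regenerates both branches for a few values of $a$:

import numpy as np
import matplotlib.pyplot as plt

for a in (1, 2, 3):
    x = np.linspace(-a, a, 400)
    root = np.sqrt(a**2 - x**2)
    for sign in (+1, -1):
        y = (x**2 - a**2) * (-2 * a + sign * root) / (3 * a**2 + x**2)
        plt.plot(x, y, label=f"a={a}" if sign > 0 else None)

plt.axhline(0, color="gray", lw=0.5)
plt.legend()
plt.show()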
|
Soft QCD and Central Exclusive Production at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The LHCb detector, owing to its unique acceptance coverage $(2 < \eta < 5)$ and a precise track and vertex reconstruction, is a universal tool allowing the study of various aspects of electroweak and QCD processes, such as particle correlations or Central Exclusive Production. The recent results on the measurement of the inelastic cross section at $ \sqrt s = 13 \ \rm{TeV}$ as well as the Bose-Einstein correlations of same-sign pions and kinematic correlations for pairs of beauty hadrons performed using large samples of proton-proton collision data accumulated with the LHCb detector at $\sqrt s = 7\ \rm{and} \ 8 \ \rm{TeV}$, are summarized in the present proceedings, together with the studies of Central Exclusive Production at $ \sqrt s = 13 \ \rm{TeV}$ exploiting new forward shower counters installed upstream and downstream of the LHCb detector. [...] LHCb-PROC-2019-008; CERN-LHCb-PROC-2019-008.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019
LHCb Upgrades / Steinkamp, Olaf (Universitaet Zuerich (CH)) During the LHC long shutdown 2, in 2019/2020, the LHCb collaboration is going to perform a major upgrade of the experiment. The upgraded detector is designed to operate at a five times higher instantaneous luminosity than in Run II and can be read out at the full bunch-crossing frequency of the LHC, abolishing the need for a hardware trigger [...] LHCb-PROC-2019-007; CERN-LHCb-PROC-2019-007.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
Tests of Lepton Flavour Universality at LHCb / Mueller, Katharina (Universitaet Zuerich (CH)) In the Standard Model of particle physics the three charged leptons are identical copies of each other, apart from mass differences, and the electroweak coupling of the gauge bosons to leptons is independent of the lepton flavour. This prediction is called lepton flavour universality (LFU) and is well tested. [...] LHCb-PROC-2019-006; CERN-LHCb-PROC-2019-006.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
XYZ states at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) Recent years have seen a resurgence of interest in searches for exotic states, motivated by precision spectroscopy studies of beauty and charm hadrons providing the observation of several exotic states. The latest results on spectroscopy of exotic hadrons are reviewed, using the proton-proton collision data collected by the LHCb experiment. [...] LHCb-PROC-2019-004; CERN-LHCb-PROC-2019-004.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : 15th International Workshop on Meson Physics, Kraków, Poland, 7 - 12 Jun 2018
Mixing and indirect $CP$ violation in two-body Charm decays at LHCb / Pajero, Tommaso (Universita & INFN Pisa (IT)) The copious number of $D^0$ decays collected by the LHCb experiment during 2011--2016 allows the test of the violation of the $CP$ symmetry in the decay of charm quarks with unprecedented precision, approaching for the first time the expectations of the Standard Model. We present the latest measurements of LHCb of mixing and indirect $CP$ violation in the decay of $D^0$ mesons into two charged hadrons [...] LHCb-PROC-2019-003; CERN-LHCb-PROC-2019-003.- Geneva : CERN, 2019 - 10. Fulltext: PDF; In : 10th International Workshop on the CKM Unitarity Triangle, Heidelberg, Germany, 17 - 21 Sep 2018
Experimental status of LNU in B decays in LHCb / Benson, Sean (Nikhef National institute for subatomic physics (NL)) In the Standard Model, the three charged leptons are identical copies of each other, apart from mass differences. Experimental tests of this feature in semileptonic decays of b-hadrons are highly sensitive to New Physics particles which preferentially couple to the 2nd and 3rd generations of leptons. [...] LHCb-PROC-2019-002; CERN-LHCb-PROC-2019-002.- Geneva : CERN, 2019 - 7. Fulltext: PDF; In : The 15th International Workshop on Tau Lepton Physics, Amsterdam, Netherlands, 24 - 28 Sep 2018
Simultaneous usage of the LHCb HLT farm for Online and Offline processing workflows / LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of "online" and "offline" analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process, such that Online data are immediately available offline for physics analysis (Turbo analysis); the computing capacity of the HLT farm has been used simultaneously for different workflows: synchronous first-level trigger, asynchronous second-level trigger, and Monte Carlo simulation. [...] LHCb-PROC-2018-031; CERN-LHCb-PROC-2018-031.- Geneva : CERN, 2018 - 7. Fulltext: PDF; In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018
The Timepix3 Telescope and Sensor R&D for the LHCb VELO Upgrade / Dall'Occo, Elena (Nikhef National institute for subatomic physics (NL)) The VErtex LOcator (VELO) of the LHCb detector is going to be replaced in the context of a major upgrade of the experiment planned for 2019-2020. The upgraded VELO is a silicon pixel detector, designed to withstand a radiation dose up to $8 \times 10^{15}$ 1 MeV $n_{\text{eq}}\,\text{cm}^{-2}$, with the additional challenge of a highly non-uniform radiation exposure. [...] LHCb-PROC-2018-030; CERN-LHCb-PROC-2018-030.- Geneva : CERN, 2018 - 8.
|
Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues?
Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of:We can always detect uniform motion with respect to a medium by a positive result to a Michelson...
Hmm, it seems we cannot just superimpose gravitational waves to create standing waves
The above search is inspired by last night's dream, which took place in an alternate version of my 3rd year undergrad GR course. The lecturer talked about a weird equation in general relativity that has a huge summation symbol, and then talked about gravitational waves emitted from a body. After that lecture, I asked the lecturer whether gravitational standing waves are possible, as I imagined the hypothetical scenario of placing a node at the end of the vertical white line
[The Cube] Regarding The Cube, I am thinking about an energy level diagram like this
where the infinitely degenerate level is the lowest energy level when the environment is also taken account of
The idea is that if the possible relaxations between energy levels are restricted so that, to relax from an excited state, the bottleneck must be passed, then we have a very high entropy, high energy system confined in a compact volume
Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state plus the huge degeneracy at higher energy levels should result in a lot of possible configurations giving the same high energy, thus effectively creating an entropy trap to minimise heat loss to the surroundings
@Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too. The installer for this should also be in \Stuff (but probably isn't if I forgot to include the SP3 installer).
Hi @EmilioPisanty, it's great that you want to help me clear up confusions. I think we have a misunderstanding here. When you say "if you really want to "understand"", I thought you were referring to my questions directed to the close voter, not the question in meta. When you mention my original post, do you think that it's a hopeless mess of confusion? Why? Except for being off-topic, it seems clear enough to understand, doesn't it?
Physics.stackexchange currently uses 2.7.1 with the config TeX-AMS_HTML-full which is affected by a visual glitch on both desktop and mobile version of Safari under latest OS, \vec{x} results in the arrow displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks.
I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations. I think it's a safe assumption that many students are using their phone to place their homework questions, in wh...
@0ßelö7 I don't really care for the functional analytic technicalities in this case - of course this statement needs some additional assumption to hold rigorously in the infinite-dimensional case, but I'm 99% that that's not what the OP wants to know (and, judging from the comments and other failed attempts, the "simple" version of the statement seems to confuse enough people already :P)
Why were the SI unit prefixes, i.e.\begin{align}\mathrm{giga} && 10^9 \\\mathrm{mega} && 10^6 \\\mathrm{kilo} && 10^3 \\\mathrm{milli} && 10^{-3} \\\mathrm{micro} && 10^{-6} \\\mathrm{nano} && 10^{-9}\end{align}chosen to be powers of 10 in multiples of 3? Edit: Although this questio...
the major challenge is how to restrict the possible relaxation pathways so that in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above
If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and$\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$, have angle $\theta$ between them then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is$$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$$$\vec{A}\cdot\...
@ACuriousMind I want to give a talk on my GR work first. That can be hand-wavey. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there.
@CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer
|
Problem: Let $\phi(x)$ be the normal probability density function (pdf), and $\Phi(x)$ the normal cumulative distribution function (cdf). I'm interested in the asymptotic behavior of the following integral
$I(a,d)=\int_{-\infty}^{\infty}dx\left[\Phi\left(x/a\right)\right]^{d}\phi(x)$
in the limit that $a \rightarrow \infty $, and $d =\alpha a^2$, with $\alpha>1$ some constant.
Background: Suppose we have an equicorrelated random Gaussian vector $\mathbf{x}$ in $\mathbb{R}^d$, i.e. $\mathbf{x}\sim\mathcal{N}(0,\boldsymbol{\Sigma})$, where $\Sigma_{ij} =\delta_{ij} + a^{-2}$. Then the integral above is the orthant probability: $ I(a,d)= P(\forall i:x_i >0). $ Conjecture: From numerical simulations and some intuition, I suspect that in this limit $$ \log I(a,d) \leq - \kappa a^2 \log(d) + o(a^2\log(d)) \quad(\ast) $$
for some positive constant $\kappa$. The numerical results might be wrong, since I'm getting warnings about precision. The intuition behind this bound is that $\left[\Phi\left(x/a\right)\right]^{d}$ is "approximately" a step function. In other words, we can choose some constant $y$ so that $\left[\Phi\left(x/a\right)\right]^{d}$ is very small for $x<x_0\triangleq a\Phi^{-1}(y^{1/d})$ ($\Phi^{-1}$ is the inverse cdf), and upper bounded by $1$ if $x>x_0$. Integrating this bound over $x>x_0$ and using standard bounds we get $(\ast)$, as long as $y$ is not too small. However, so far, I haven't found a good way to upper bound $\left[\Phi\left(x/a\right)\right]^{d}$ for $x<x_0$ so that the integral over this range is smaller than the integral over the range $x>x_0$ (while keeping $y$ sufficiently large).
Goal: A valid answer to this problem can either
(1) Prove this bound and find $\kappa $.
(2) Disprove this bound (show it is too low) and find a different (non trivial) upper bound.
(3) Find a better (lower) upper bound.
Any help would be appreciated, and thanks in advance!
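For readers who want to reproduce the numerics, the sketch below (mine, not from the question; it assumes SciPy, and the choice $\alpha = 2$ and the grid of $a$ values are arbitrary) estimates $I(a,d)$ by quadrature with the integrand assembled in log-space, which avoids the precision warnings mentioned above:

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

def log_orthant_prob(a, d):
    # Integrand assembled in log-space: exp(d * log Phi(x/a) + log phi(x)).
    f = lambda x: np.exp(d * norm.logcdf(x / a) + norm.logpdf(x))
    val, _ = integrate.quad(f, -np.inf, np.inf, limit=200)
    return np.log(val)

alpha = 2.0                            # d = alpha * a^2 with alpha > 1 (arbitrary)
for a in [2.0, 4.0, 8.0]:
    d = alpha * a * a
    ratio = log_orthant_prob(a, d) / (a**2 * np.log(d))
    print(f"a={a:4.1f}  log I/(a^2 log d) = {ratio:+.4f}")  # conjecture: <= -kappa
```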
|
We study the evolution of convex hypersurfaces with a given initial hypersurface, at a rate equal to $H - f$ along the outer normal, where $H$ is the inverse of the harmonic mean curvature and the initial hypersurface is smooth, closed, and uniformly convex. We find a $\theta^* > 0$ and a sufficient condition on the anisotropic function $f$ such that if $\theta > \theta^*$, then the hypersurface remains uniformly convex and expands to infinity as $t \to +\infty$, and its scaling converges to a sphere. In addition, the convergence result is generalized to the fully nonlinear case in which the evolution rate is $\log H - \log f$ instead of $H - f$.
In this paper the existence of positive $2\pi$-periodic solutions to the ordinary differential equation \begin{equation*} u^{\prime\prime}+u=\frac{f}{u^3} \ \textrm{ in } \mathbb{R} \end{equation*} is studied, where $f$ is a positive $2\pi$-periodic smooth function. By virtue of a new generalized Blaschke–Santaló inequality, we obtain a new existence result for solutions.
In this paper we study the solvability of the rotationally symmetric centroaffine Minkowski problem. By delicate blow-up analyses, we remove a technical condition in the existence result obtained by Lu and Wang [30].
In this paper the Orlicz–Minkowski problem, a generalization of the classical Minkowski problem, is studied. Using the variational method, we obtain a new existence result of solutions to this problem for general measures.
The centroaffine Minkowski problem is studied, which is the critical case of the $L_p$-Minkowski problem. It admits a variational structure that plays an important role in studying the existence of solutions. In this paper, we find that there is generally no maximizer of the corresponding functional for the centroaffine Minkowski problem.
In this paper the centroaffine Minkowski problem, a critical case of the $L_p$-Minkowski problem in the $(n+1)$-dimensional Euclidean space, is studied. By its variational structure and the method of blow-up analyses, we obtain two sufficient conditions for the existence of solutions, for a generalized rotationally symmetric case of the problem.
Consider the existence of rotationally symmetric solutions to the $L_p$-Minkowski problem. Recently a sufficient condition was obtained for the existence via the variational method and a blow-up analysis in [16]. In this paper we use a topological degree method to prove the same existence and show the result holds under a similar complementary sufficient condition. Moreover, by this degree method, we obtain the existence result in a perturbation case.
In this paper we study the prescribed centroaffine curvature problem in the Euclidean space $\mathbb{R}^{n+1}$. This problem is equivalent to solving a Monge–Ampère equation on the unit sphere. It corresponds to the critical case of the Blaschke–Santaló inequality. By approximation from the subcritical case, and using an obstruction condition and a blow-up analysis, we obtain sufficient conditions for the a priori estimates, and the existence of solutions up to a Lagrange multiplier.
|
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This makes it possible to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
|
A parabola is a U-shaped plane curve where any point is at an equal distance from a fixed point (known as the focus) and from a fixed straight line (known as the directrix). The parabola is an integral part of the conic sections topic, and all of its concepts are covered here, including the following:
Definition; Standard Equation; Latus Rectum; Parametric Coordinates; General Equations; Tangent to a Parabola; Normal to a Parabola; Focal Chord Properties; Focal Chord, Tangent and Normal Properties; Forms; Questions

What is a Parabola?
The section of a right circular cone by a plane parallel to a generator of the cone is a parabola. It is the locus of a point that moves so that its distance from a fixed point (the focus) is equal to its distance from a fixed line (the directrix).
Standard Equation of Parabola
The simplest equation of a parabola is \(y^2 = x\), obtained when the directrix is parallel to the y-axis. In general, when the directrix is parallel to the y-axis, the standard equation of a parabola is

\(y^2 = 4ax\)

If instead the directrix is parallel to the x-axis, the standard equation of a parabola becomes

\(x^2 = 4ay\)

Apart from these two, a parabola opening in the negative direction has equation \(y^2 = -4ax\) or \(x^2 = -4ay\). Thus, the four equations of a parabola are:

\(y^2 = 4ax\), \(y^2 = -4ax\), \(x^2 = 4ay\), \(x^2 = -4ay\)

Parabola Equation Derivation
In the above equation, “a” is the distance from the origin to the focus. Below is the derivation for the parabola equation. First, refer to the image given below.
From definition,
\(\frac{SP}{PM}=1\)
SP = PM
\(\sqrt{{{\left( x-a \right)}^{2}}+{{y}^{2}}}=\left| \frac{x+a}{1} \right|\)
\((x-a)^{2}+y^{2}=(x+a)^{2}\)

\(y^{2}=4ax\) ⇒ the standard equation of a parabola.

Latus Rectum of Parabola
The latus rectum of a parabola is the chord that passes through the focus and is perpendicular to the axis of the parabola.
LSL′ is the latus rectum; its length is \(2\left( \sqrt{4a\cdot a} \right) = 4a\).

Note: Two parabolas are said to be equal if their latus rectum lengths are equal.

Parametric Coordinates of a Parabola
For the parabola \(y^2 = 4ax\), the easiest way to represent the coordinates of a point on it is \(x = at^2\), \(y = 2at\), since for any value of t the coordinates \((at^2, 2at)\) satisfy the parabola equation \(y^2 = 4ax\). So any point on the parabola \(y^2 = 4ax\) can be written as \((at^2, 2at)\), where t is a parameter.
Focal Chord and Focal Distance

Focal chord: Any chord that passes through the focus of the parabola is a focal chord of the parabola.

Focal distance: The focal distance of any point P(x, y) on the parabola \(y^2 = 4ax\) is the distance between the point P and the focus S.

PM = a + x

PS = focal distance = x + a
General Equation of a Parabola

For focus \((\alpha, \beta)\) and directrix \(\ell x + my + n = 0\), the equation of the parabola follows from the definition SP = PM:

\({{(x-\alpha )}^{2}}+{{(y-\beta )}^{2}}=\frac{{{(\ell x+my+n)}^{2}}}{{{\ell }^{2}}+{{m}^{2}}}\)

The general equation of second degree, \(ax^2 + 2hxy + by^2 + 2gx + 2fy + c = 0\), represents a parabola if \(\Delta \ne 0\) and \(h^2 = ab\).
Position of a Point with Respect to a Parabola

For the parabola \(S\equiv {{y}^{2}}-4ax=0\) and a point \(P({{x}_{1}},{{y}_{1}})\), let

\({{S}_{1}}={{y}_{1}}^{2}-4a{{x}_{1}}\)

\({{S}_{1}}<0\): the point lies inside the curve
\({{S}_{1}}=0\): the point lies on the curve
\({{S}_{1}}>0\): the point lies outside the curve
Intersection of a Straight Line with the Parabola \(y^2 = 4ax\)

Let the straight line be y = mx + c, where m is the slope. Substituting into the parabola gives

\((mx + c)^2 - 4ax = 0\)

\(m^2x^2 + 2x(mc - 2a) + c^2 = 0\)

This is of the form \(Ax^2 + Bx + C = 0\), with discriminant \(D = B^2 - 4AC = 16a(a - mc)\).

D = 0, i.e. \(c = {}^{a}/{}_{m}\): the line touches the curve (tangent).
D > 0, i.e. mc − a < 0: the line intersects the curve in two points.
D < 0, i.e. mc − a > 0: the line does not meet the curve.

(A small code sketch classifying these three cases follows below.)
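The classification above is easy to test; here is a small sketch (not part of the original article) using the discriminant \(D = 16a(a - mc)\) derived above, with arbitrary sample values:

```python
def classify(a, m, c):
    """Intersection of y = m*x + c with y^2 = 4*a*x (a > 0, m != 0)."""
    D = 16 * a * (a - m * c)        # discriminant of m^2 x^2 + 2(mc-2a)x + c^2 = 0
    if D > 0:
        return "secant: two intersection points (mc < a)"
    if D == 0:
        return "tangent (c = a/m)"
    return "no intersection (mc > a)"

print(classify(a=1, m=1, c=1))      # tangent, since c = a/m
print(classify(a=1, m=1, c=0.5))    # secant
print(classify(a=1, m=1, c=2))      # misses the parabola
```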
Tangent to a Parabola

Tangent at the point \(({{x}_{1}},{{y}_{1}})\) on \(y^2 = 4ax\): the equation of the tangent is obtained from

\(y{{y}_{1}}-{{y}_{1}}^{2}=2a(x-{{x}_{1}})\)

\(y{{y}_{1}}-4a{{x}_{1}}=2a(x-{{x}_{1}})\)   (using \({{y}_{1}}^{2}=4a{{x}_{1}}\))

\(y{{y}_{1}}=2a(x+{{x}_{1}})\)

at the point \(({{x}_{1}},{{y}_{1}})\).
Tangent in slope (m) form:

For \(y^2 = 4ax\), let the equation of the tangent be y = mx + c. From the previous illustration, y = mx + c touches the curve when

\(c\text{ }=~{}^{a}/{}_{m}\)

Equation of the tangent: \(y = mx + ~{}^{a}/{}_{m}\)

The point of tangency is \(\left( {}^{a}/{}_{{{m}^{2}}},\frac{2a}{m} \right)\).
Tangent in parameter form, at \((at^2, 2at)\):

\(ty = x + at^2\), where t is the parameter.
Pair of Tangents from an External Point \(({{x}_{1}},{{y}_{1}})\)

Let \(y^2 = 4ax\) be the parabola and \(P({{x}_{1}},{{y}_{1}})\) an external point. The equation of the pair of tangents from P is given by

\(S{{S}_{1}}={{T}^{2}}\)

where \(S\equiv {{y}^{2}}-4ax,\quad {{S}_{1}}\equiv {{y}_{1}}^{2}-4a{{x}_{1}},\quad T\equiv y{{y}_{1}}-2a(x+{{x}_{1}})\).
Chord of contact:

The equation of the chord of contact of tangents from a point \(P({{x}_{1}},{{y}_{1}})\) to the parabola \(y^2 = 4ax\) is given by T = 0, i.e.,

\(y{{y}_{1}}-2a(x+{{x}_{1}})=0\)

(In the figure, this is the equation of the chord QS: T = 0.)
Normal to the Parabola

Normal at the point \(P({{x}_{1}},{{y}_{1}})\): since the normal is perpendicular to the tangent, the slope of the normal is

\({}^{-1}/{}_{Slope\,of\,Tangent}\)

The slope of the normal at \(P({{x}_{1}},{{y}_{1}})\) is \(\frac{-{{y}_{1}}}{2a}\), so the equation of the normal is

\(y-{{y}_{1}}=\frac{-{{y}_{1}}}{2a}(x-{{x}_{1}})\)
Normal in terms of m:

If m is the slope of the normal to \(y^2 = 4ax\), then \(m=\frac{-{{y}_{1}}}{2a}\), so \({{y}_{1}}=-2am\) and \({{x}_{1}}=a{{m}^{2}}\).

Equation of the normal at the point \((am^2, -2am)\):

\(y=mx-2am-a{{m}^{3}}\)

Normal at the point \((at^2, 2at)\), with t the parameter:

\(y=-tx+2at+a{{t}^{3}}\)
Important Properties of Focal Chords

- If the chord joining \(P=(at_{1}^{2},2a{{t}_{1}})\) and \(Q=(at_{2}^{2},2a{{t}_{2}})\) is a focal chord of the parabola \({{y}^{2}}=4ax\), then \({{t}_{1}}{{t}_{2}}=-1\).
- If one extremity of a focal chord is \((at_{1}^{2},2a{{t}_{1}})\), then the other extremity \((at_{2}^{2},2a{{t}_{2}})\) becomes \(\left( \frac{a}{t_{1}^{2}},-\frac{2a}{{{t}_{1}}} \right)\).
- If the point \(P(a{{t}^{2}},2at)\) lies on the parabola \({{y}^{2}}=4ax\), then the length of the focal chord PQ is \(a{{(t+1/t)}^{2}}\).
- The length of the focal chord which makes an angle θ with the positive x-axis is \(4a\,\mathrm{cosec}^{2}\theta\).
- The semi-latus rectum is the harmonic mean of SP and SQ, where P and Q are the extremities of a focal chord, i.e., \(2a=\frac{2\,SP\times SQ}{SP+SQ}\,\text{ or }\,\frac{1}{SP}+\frac{1}{SQ}=\frac{1}{a}\).
- The circle described on a focal radius as diameter touches the tangent at the vertex.
- The circle described on a focal chord as diameter touches the directrix.

(A quick numeric check of the first and third properties is sketched below.)
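Here is the promised numeric check (with arbitrarily chosen values a = 1 and t = 2) of the first and third properties:

```python
import math

a, t = 1.0, 2.0
P = (a * t**2, 2 * a * t)             # one extremity of the chord
t2 = -1.0 / t                         # the relation t1*t2 = -1
Q = (a * t2**2, 2 * a * t2)           # the other extremity
S = (a, 0.0)                          # focus of y^2 = 4ax

# P, S, Q collinear  <=>  cross product of (S - P) and (Q - P) vanishes
cross = (S[0] - P[0]) * (Q[1] - P[1]) - (S[1] - P[1]) * (Q[0] - P[0])
length = math.dist(P, Q)
print(abs(cross) < 1e-12)                          # True: the chord is focal
print(math.isclose(length, a * (t + 1 / t)**2))    # True: |PQ| = a(t + 1/t)^2
```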
Important Properties of Focal Chord, Tangent and Normal of a Parabola

(i) The tangent at any point P on a parabola bisects the angle between the focal chord through P and the perpendicular from P on the directrix.

(ii) The portion of a tangent to a parabola cut off between the directrix and the curve subtends a right angle at the focus.

(iii) Tangents at the extremities of any focal chord intersect at right angles on the directrix.

(iv) Any tangent to a parabola and the perpendicular on it from the focus meet on the tangent at the vertex.
Four Common Forms of a Parabola:

| | y² = 4ax | y² = −4ax | x² = 4ay | x² = −4ay |
|---|---|---|---|---|
| Vertex | (0, 0) | (0, 0) | (0, 0) | (0, 0) |
| Focus | (a, 0) | (−a, 0) | (0, a) | (0, −a) |
| Equation of the directrix | x = −a | x = a | y = −a | y = a |
| Equation of the axis | y = 0 | y = 0 | x = 0 | x = 0 |
| Tangent at the vertex | x = 0 | x = 0 | y = 0 | y = 0 |

Practice Problems on Parabola

Illustration 1: Find the vertex, axis, directrix, tangent at the vertex and the length of the latus rectum of the parabola \(2{{y}^{2}}+3y-4x-3=0\). Solution: The given equation can be re-written as \({{\left( y+\frac{3}{4} \right)}^{2}}=2\left( x+\frac{33}{32} \right)\)
which is of the form \({{Y}^{2}}=4aX\), where \(Y=y+\frac{3}{4},\,X=x+\frac{33}{32},\,4a=2\).
Hence the vertex is \(X=0,Y=0\) i.e. \(\left( -\frac{33}{32},-\frac{3}{4} \right)\).
The axis is \(y+\frac{3}{4}=0\Rightarrow y=-\frac{3}{4}\).
The directrix is \(X=-a\), i.e.
\(\Rightarrow x+\frac{33}{32}+\frac{1}{2}=0\Rightarrow x=-\frac{49}{32}\)
The tangent at the vertex is \(X=0\,or\,x+\frac{33}{32}=0\Rightarrow x=-\frac{33}{32}\).
Length of the latus rectum = 4a = 2.
Illustration 2: Find the equation of the parabola whose focus is (3, -4) and directix x – y + 5 = 0. Solution: Let P(x, y) be any point on the parabola. Then
\(\sqrt{{{(x-3)}^{2}}+{{(y+4)}^{2}}}=\frac{\left| x-y+5 \right|}{\sqrt{1+1}}\)
\(\Rightarrow {{(x-3)}^{2}}+{{(y+4)}^{2}}=\frac{{{(x-y+5)}^{2}}}{2}\)
\(\Rightarrow {{x}^{2}}+{{y}^{2}}+2xy-22x+26y+25=0\)
\(\Rightarrow {{(x+y)}^{2}}=22x-26y-25\)
Illustration 3: Find the equation of the parabola having focus (-6, -6) and vertex (-2, 2). Solution: Let S(-6, -6) be the focus and A(-2, 2) the vertex of the parabola. On SA produced, take a point K\(({{x}_{1}},{{y}_{1}})\) such that SA = AK. Draw KM perpendicular to SK. Then KM is the directrix of the parabola.
Since A bisects SK, \(\left( \frac{-6+{{x}_{1}}}{2},\frac{-6+{{y}_{1}}}{2} \right)=(-2,2)\)
\(\Rightarrow -6+{{x}_{1}}=-4\,and\,-6+{{y}_{1}}=4\,or\,({{x}_{1}},{{y}_{1}})=(2,10).\)
Hence the equation of the directrix KM is y – 10 = m(x − 2) ……(1)
Also gradient of \(SK=\frac{10-(-6)}{2-(-6)}=\frac{16}{8}=2;\,m=\frac{-1}{2}\)
So that equation (1) becomes
\(y-10=-\frac{1}{2}(x-2)\) or \(x+2y-22=0\) is the directrix.
Next, let PM be a perpendicular on the directrix KM from any point P(x, y) on the parabola.
From SP = PM, the equation of the parabola is
\(\sqrt{\left\{ {{(x+6)}^{2}}+{{(y+6)}^{2}} \right\}}=\frac{x+2y-22}{\sqrt{({{1}^{2}}+{{2}^{2}})}}\)
Illustration 4: Find the coordinates of the focus, axis of the parabola, the equation of directrix and the length of the latus rectum for \({{y}^{2}}=12x\). Solution: The given equation is \({{y}^{2}}=12x\).
Here, the coefficient of x is positive. Hence, the parabola opens towards the right.
On comparing this equation with \({{y}^{2}}=4ax\), we get \(4a=12\) or \(a=3\).
Coordinates of the focus are given by (a, 0) i.e., (3, 0).
Since the given equation involves \(y^2\), the axis of the parabola is the x-axis.
Equation of directix is \(x=-a\), i.e., \(x=-3\).
Length of latus rectum = 4a = 4 x 3 = 12.
Illustration 5: Find the coordinates of the focus, axis of the parabola, the equation of directrix and the length of the latus rectum for \({{x}^{2}}=-16y\). Solution: The given equation is \({{x}^{2}}=-16y\).
Here, the coefficient of y is negative. Hence, the parabola opens downwards.
On comparing this equation with \({{x}^{2}}=-4ay\), we get \(-4a=-16\) or \(a=4\).
Coordinates of the focus = (0, -a) = (0, -4).
Since the given equation involves \({{x}^{2}}\), the axis of the parabola is the y-axis.
Equation of directrix: y = a, i.e., y = 4.
Length of latus rectum = 4a = 16.
Illustration 6: If the parabolas \(y^2 = 4x\) and \(x^2 = 32y\) intersect at (16, 8) at an angle θ, then find the value of θ. Solution: The slope of the tangent to \(y^2 = 4x\) at (16, 8) is given by
\({m}_{1}={\left( \frac{dy}{dx} \right)}_{(16,8)}={{\left( \frac{4}{2y} \right)}_{(16,8)}}=\frac{2}{8}=\frac{1}{4}\)
The slope of the tangent to \(x^2 = 32y\) at (16, 8) is given by
\({m}_{2}={\left( \frac{dy}{dx} \right)}_{(16,8)} ={{\left( \frac{2x}{32} \right)}_{(16,8)}}=1\)
∴ \(\tan \theta =\frac{1-(1/4)}{1+(1/4)}=\frac{3}{5}\)
\(\Rightarrow \,\,\,\,\,\theta ={{\tan }^{-1}}\left( \frac{3}{5} \right)\)
Illustration 7: Find the equation of the common tangent of \(y^2 = 4ax\) and \(x^2 = 4ay\). Solution: The equation of a tangent to \(y^2 = 4ax\) having slope m is \(y=mx+\frac{a}{m}\).

It will touch \(x^2 = 4ay\) if \({{x}^{2}}=4a\left( mx+\frac{a}{m} \right)\) has equal roots. Thus, \(16{{a}^{2}}{{m}^{2}}=-16\frac{{{a}^{2}}}{m}\,\,\,\Rightarrow \,m=-1\)
Thus, common tangent is y + x + a = 0.
Illustration 8: Find the equation of the normal to the parabola \(y^2 = 4x\) passing through the point (15, 12). Solution: The equation of the normal having slope m is
\(y=mx-2m-{{m}^{3}}\)
If it passes through the point (15, 12) then
\(12=15m-2m-{{m}^{3}}\)
\(\Rightarrow \,\,\,\,\,{{m}^{3}}-13m+12=0\)
\(\Rightarrow \,\,\,\,\,\left( m-1 \right)\left( m-3 \right)\left( m+4 \right)=0\)
\(\Rightarrow \,\,\,\,\,m=1,\,3,\,-4\)
Hence, equations of normal are:
\(y=x-3,\,y=3x-33\,and\,y+4x=72\)
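As an optional cross-check of Illustration 8 (not part of the original solution), SymPy recovers the same three slopes and normal lines:

```python
import sympy as sp

m, x = sp.symbols('m x')
# Normals to y^2 = 4x (so a = 1) through (15, 12): 12 = 15m - 2m - m^3.
slopes = sp.solve(sp.Eq(12, 15*m - 2*m - m**3), m)
print(slopes)                        # the slopes 1, 3, -4 (in some order)
for mi in slopes:
    # each normal line y = m*x - 2m - m^3
    print(sp.Eq(sp.Symbol('y'), sp.expand(mi*x - 2*mi - mi**3)))
```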
Illustration 9: Find the point on \(y^2 = 8x\) where the line x + y = 6 is a normal. Solution: The slope m of the normal x + y = 6 is −1, and a = 2.

The normal to the parabola at the point \((am^2, -2am)\) is
\(y=mx-2am-a{{m}^{3}}\)
\(\Rightarrow \;y=-x+4+2,\) i.e., x + y = 6 is the normal at the point (2, 4).
Illustration 10: Tangents are drawn to \(y^2 = 4ax\) at the points where the line lx + my + n = 0 meets this parabola. Find the point of intersection of these tangents. Solution: Let the tangents intersect at P(h, k). Then lx + my + n = 0 is the chord of contact, so lx + my + n = 0 and yk − 2ax − 2ah = 0, which is the chord of contact, represent the same line.
Comparing the ratios of coefficients, we get
\(\frac{k}{m}=\frac{-2a}{l}=\frac{-2ah}{n}\)
\(\Rightarrow \,\,\,\,\,h=\frac{n}{l},\,k=-\frac{2am}{l}\)
Illustration 11: If the chord of contact of tangents from a point P to the parabola \(y^2 = 4ax\) touches the parabola \(x^2 = 4by\), then find the locus of P. Solution: The chord of contact of the parabola \(y^2 = 4ax\) with respect to the point \(P({{x}_{1}},{{y}_{1}})\) is

\(y{{y}_{1}}=2a(x+{{x}_{1}})\) ……(1)

This line touches the parabola \(x^2 = 4by\).
Solving line (1) with parabola, we have
\({{x}^{2}}=4b\left[ \left( 2a/{{y}_{1}} \right)\left( x+{{x}_{1}} \right) \right]\)
or \({{y}_{1}}{{x}^{2}}-8abx-8ab{{x}_{1}}=0\)
According to the question, this equation must have equal roots.
\(\Rightarrow \,\,\,\,\,D=0\,\)
\(\Rightarrow \,\,\,\,64{{a}^{2}}{{b}^{2}}+32ab{{x}_{1}}{{y}_{1}}=0\)
\(\Rightarrow \,\,\,\,\,{{x}_{1}}{{y}_{1}}=-2ab\) or \(xy=-2ab\), which is the required locus.
|
An important theorem in Diophantine approximation is the theorem of Liouville:
Liouville Theorem. If $x$ is an algebraic number of degree $n$ over the rational numbers, then there exists a constant $c(x) > 0$ such that $\left|x-{\frac {p}{q}}\right|>{\frac {c(x)}{q^{n}}}$ holds for all integers $p, q$ with $q>0$.
This theorem explains a phenomenon: algebraic numbers cannot be approximated very well by rational numbers. It was later generalized to the Thue–Siegel–Roth theorem, which can be used to prove that a lot of constants are not algebraic, i.e., are transcendental.
My question is in another direction. Now let us consider not just one root $\alpha_1$ of an integer polynomial $P(x)=a_mx^m+\dots+a_1x+a_0$ but all of its roots $\{\alpha_1,\dots,\alpha_m\}$. This is based on an observation: if we define

$$\sigma_k(P)=\sum_{1\leq i_1<i_2<\dots<i_k\leq m}\alpha_{i_1}\alpha_{i_2}\cdots\alpha_{i_k}$$
then by Vieta's theorem we know $\sigma_k(P)\in \mathbb Q$ for all $k\in \mathbb N^*$. This leads to some restrictions and in fact destroys the uniform distribution of $(\{\alpha_1 n\},\dots,\{\alpha_m n\})\in [0,1]^m$. The most important tool is the Vandermonde determinant: $$V(P)=\prod_{1\leq i<j\leq m}(\alpha_i-\alpha_j).$$
We know $\prod_{1\leq i<j\leq m}(\alpha_i-\alpha_j)^2\in \mathbb Q$, so when $\prod_{1\leq i<j\leq m}(\alpha_i-\alpha_j)\neq 0$ we can use this to prove a nontrivial estimate for $\sum_{1\leq k\leq m}||\alpha_kn||_{\mathbb R/\mathbb Z}$: $$\sum_{1\leq k\leq m}||\alpha_kn||_{\mathbb R/\mathbb Z}\geq c\,{n^{\frac{-1}{m-1}}},$$
obtained by combining the AM–GM inequality with $\prod_{1\leq i<j\leq m}(\alpha_i-\alpha_j)=\lambda\neq 0$. By contrast, from the continued fraction expansion we only know a trivial estimate of the type $\sum_{1\leq k\leq m}||\alpha_kn||_{\mathbb R/\mathbb Z}\geq c\frac{1}{n}$.
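As a quick illustration of this nontrivial bound, here is a small experiment of mine with the arbitrary choice $P(x)=x^2-2$, so that $m=2$ and the bound reads $\geq c/n$:

```python
import math

def dist_to_Z(x):                       # ||x||_{R/Z}: distance to nearest integer
    return abs(x - round(x))

alphas = [math.sqrt(2), -math.sqrt(2)]  # roots of x^2 - 2
m = len(alphas)
worst = min(n**(1 / (m - 1)) * sum(dist_to_Z(al * n) for al in alphas)
            for n in range(1, 100000))
print(worst)    # n^(1/(m-1)) * sum stays bounded away from 0, as predicted
```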
My question is the following: is there still an estimate for $\sum_{1\leq k\leq m}||\alpha_kn||_{\mathbb R/\mathbb Z}$ (possibly slightly weaker) if we don't have the whole power of Vieta's theorem? More precisely:

Problem 1. If $\sigma_k((\alpha_1,\dots,\alpha_m))=\lambda_k\in \mathbb Q$ for all $k\in \{1,2,\dots,m'\}$ where $m'< m$, is there still some nontrivial estimate of $$\sum_{1\leq k\leq m}||\alpha_kn||_{\mathbb R/\mathbb Z}$$ that holds for all $n\in \mathbb N^*$?
One reason to believe this could be true is that although $\alpha_1,\dots,\alpha_m$ are not roots of an integer polynomial, we can imagine that in some suitable metric space $X$ the Gromov–Hausdorff distance between the tuple $(\alpha_1,\dots,\alpha_m)$ and a tuple coming from the roots of an integer polynomial is small. And it seems reasonable to imagine that this type of asymptotic quantity is continuous with respect to the G–H distance on $X$.
Another problem is what happens when $V((\alpha_1,\dots,\alpha_m))=\prod_{1\leq i<j\leq m}(\alpha_i-\alpha_j)=0$. More precisely,

Problem 2. What happens when $$V((\alpha_1,\dots,\alpha_m))=\prod_{1\leq i<j\leq m}(\alpha_i-\alpha_j)=0?$$ Is the estimate $$\sum_{1\leq k\leq m}||\alpha_kn||_{\mathbb R/\mathbb Z}\geq c\,{n^{\frac{-1}{m-1}}}$$ still true?
Let us go a little further: if these two problems both have a satisfactory answer, what about the higher dimensional case?
Problem 3. Given $m\in \mathbb N^*$, suppose the tuple $(y_1,\dots,y_k)$ is very close to the zero set of a variety defined over $\mathbb Z[x_1,\dots,x_m]$ in $(\mathbb R^{m})^k$, in the sense that a lot of symmetric sums of $y_1,\dots,y_k$ belong to $\mathbb Q^m$. Will this lead to some good estimate for $$\sum_{1\leq s\leq k}||y_sn||_{\mathbb R^m/\mathbb Z^m}?$$
I think these types of results should have been investigated thoroughly. I would appreciate any pointers, useful comments, and answers, whether suggesting a strategy to solve these problems or giving references about them.
|
$\newcommand{\al}{\alpha}\newcommand{\de}{\delta}\newcommand{\De}{\Delta}\newcommand{\ep}{\varepsilon}\newcommand{\ga}{\gamma}\newcommand{\Ga}{\Gamma}\newcommand{\la}{\lambda}\newcommand{\Si}{\Sigma}\newcommand{\thh}{\theta}\newcommand{\R}{\mathbb{R}}\newcommand{\E}{\operatorname{\mathsf E}} \newcommand{\PP}{\operatorname{\mathsf P}} \newcommand{\EE}{\mathcal E}\newcommand{\F}{\mathcal F}\newcommand{\I}{\mathcal I}\newcommand{\x}{\mathbf x}\newcommand{\size}{\text{size}}\newcommand{\pow}{\text{power}}\newcommand{\st}{\text{stupid}}$
This conjecture is true. Of course, the hardest part of this problem is the presence of the $\min$. So, the crucial point in the proof is the following upper bound:
Lemma 1.
\begin{equation*}
\frac1{\min(m+q-1,x)}\le\frac Am+\frac Bx,
\end{equation*}
where
\begin{equation*}
A:=\frac{(x-q+1)^2}{(x+q-1)^2},\quad B:=1-A=\frac{4 (q-1) x}{(x+q-1)^2}, \tag{3}
\end{equation*}
\begin{equation*}
1<q\le x,\quad 1\le m\le x. \tag{4}
\end{equation*}
In what follows, (3) and (4) will always be assumed. (Recall that the case $q=1$ was already verified by the OP, and it will also follow by continuity.) The proof of Lemma 1 is quite elementary, but a bit tedious, and omitted here.
By Lemma 1, the left-hand side (lhs) of the inequality in question is upper bounded by \begin{equation*} \sum_{m=1}^{x} m\,\Big(\frac Am+\frac Bx\Big) \binom{x}{m} \left( \frac{x-q}{x+q}\right)^m =A (r-1)+\frac{B r (x-q)}{2 x}, \end{equation*}where \begin{equation*} r:=\left(\frac{2x}{x+q}\right)^x. \end{equation*}So, the inequality in question reduces to \begin{equation*} A (r-1)+\frac{B r (x-q)}{2 x}\le \frac{x-q+2}{x+q}\,(r-1), \end{equation*}which can be rewritten as \begin{equation} \de(q):=\de(q,x):=\ln r-\ln\frac{(x+1)^2-1-(q-1)^2}{2 x + 2 q-1}\ge0. \tag{5}\end{equation}It is straightforward but tedious to see that \begin{multline*} [(x+1)^2-1-(q-1)^2]\de'(q) \\ =2 q^3 (x+1)+q^2 \left(2 x^2+x-2\right)-2 q x \left(x^2+x-1\right)-x \left(2 x^3+x^2-4 x+1\right)<0 \end{multline*}for \begin{equation} 1\le q\le x-2/5. \tag{6}\end{equation}So, $\de(q)=\de(q,x)$ is decreasing in $q\in[1,x-2/5]$. Moreover, \begin{multline*}(5 x-1) (20 x-9) ( 120 x-49) \frac d{dx}\de(x-2/5,x) \\ =12000 x^3 \ln (5)-100 x^2 (24+127 \ln (5)) \\ +\left(12000 x^3-12700 x^2+4265 x-441\right) \ln \left(\frac{x}{5 x-1}\right)\\ +5 x (512+853 \ln (5))-541-441 \ln (5). \end{multline*}It is straightforward but tedious to see that the latter expression is positive for all $x\ge1$. (One way to deal with such an expression is to notice that the derivative of a high enough order of an expression of the form $P(x)\ln R(x)$ is a ratio of polynomials if $P(x)$ is a polynomial and $R(x)$ is a ratio of polynomials.)So, $\de(x-2/5,x)$ is increasing in $x\ge1$. Also, $\de(x-2/5,x)|_{x=2}=0.00187\ldots>0$. So, $\de(x-2/5,x)>0$ for $x\ge2$. Recalling that $\de(q)=\de(q,x)$ is decreasing in $q\in[1,x-2/5]$, we see that $\de(q,x)>0$ for $x\ge2$ and $q\in[1,x-2/5]$.
Thus, we get the inequality in question for $x\ge2$ and $q\in[1,x-2/5]$. The case $x=1$ is trivial.
So, it remains to consider the case when $x\ge2$ and $q\in(x-2/5,x]$. This case is much easier than the one considered, because in this case the $\min$ does not cause trouble: indeed, for $q\in(x-2/5,x]$ and $m=2,\dots,x$ we have $\min(m+q-1,x)=x$, a constant. So, in this case the difference between the left- and right-hand sides of the inequality in question times $2 q x (x+q)$ equals $2 x (x^2 - q (x - 2)) - q ((x - q)^2 + 4 x)r$, with $r$ as before. Hence, the inequality in question can be rewritten here as \begin{equation} \de(q):=\de(q,x):=\ln r-\ln\frac{2 x (x^2 - q (x - 2))} {q r ((x - q)^2 + 4 x)}\ge0; \tag{7}\end{equation}here we use the same notation, $\de(q)=\de(q,x)$, for an expression (somewhat similar to but) different from the one in (5). Here, we have \begin{multline*} q (x + q) (x^2 - q (x - 2)) ((x - q)^2 + 4 x)\de'(q) \\ =q^3 (11 - 6 x) x^2 + x^4 (4 + x) + q x^3 (4 - 11 x - 2 x^2) \\ + 2 q^4 (2 - 3 x + x^2) + q^2 x^2 (-20 + 5 x + 6 x^2)<0 \end{multline*}for all $q\in[1,x]$, so that $\de(q,x)$ is decreasing in $q\in[1,x]$. Also, $\de(x,x)=0$ and hence $\de(q,x)>0$ for all $q\in[1,x]$.
Thus, the inequality in question is completely proved.
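For completeness, here is a small numerical sanity check (my own, with the inequality taken in the form reconstructed from the display above Lemma 1) over a grid of integer $x$ and real $q\in[1,x]$:

```python
from math import comb

def lhs(x, q):
    # sum_{m=1}^x m/min(m+q-1, x) * C(x,m) * ((x-q)/(x+q))^m
    return sum(m / min(m + q - 1, x) * comb(x, m) * ((x - q) / (x + q))**m
               for m in range(1, x + 1))

def rhs(x, q):
    r = (2 * x / (x + q))**x
    return (x - q + 2) / (x + q) * (r - 1)

bad = [(x, round(q, 3)) for x in range(1, 60)
       for q in (1 + k * (x - 1) / 20 for k in range(21))
       if lhs(x, q) > rhs(x, q) * (1 + 1e-9) + 1e-12]  # relative tolerance
print(bad)                               # expected: [] (no counterexamples)
```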
|
In Quantum Field Theory and Jones Polynomial (equation 2.16), Witten used a formula relating the APS eta-invariant to the Chern-Simons action. Witten claimed that it is derived from the Atiyah-Patodi-Singer index theorem. I cannot find any clue how one can derive that formula from APS index theorem.
In the following, I will briefly explain equation 2.16 in that paper, and will write down the Atiyah-Patodi-Singer index theorem. Please help me figure out how one can derive that equation from the APS index theorem.
Let $Y$ be a closed three dimensional manifold. Let $G$ be a compact simple gauge group, whose Lie algebra is denoted by $\mathfrak{g}$. Let $E$ be a $G$-bundle over $Y$, with connection $1$-form $A\in\Omega^{1}(Y,\mathfrak{g})$. The Chern-Simons action is given by
$$I[A]=\frac{1}{4\pi}\int_{Y}\mathrm{Tr}\left(A\wedge dA+\frac{2}{3}A\wedge A\wedge A\right).$$
Let $D_{A}$ be the covariant derivative. Then, one is interested in the operator
$$L=\bigg( \begin{matrix} \ast D_{A}&-D_{A}\ast\\D_{A}\ast&0 \end{matrix} \bigg).$$
To be specific, one is interested in $L_{-}$, the restriction of $L$ on odd forms, i.e.
$$L_{-}=L|_{\Omega^{1}(Y)\oplus\Omega^{3}(Y)}.$$
One defines its eta-invariant
$$\eta_{L_{-}}(A)=\lim_{s\rightarrow 0}\sum_{j}\frac{\mathrm{sign}(v_{j})}{|v_{j}|^{s}}$$
where $v_{j}$ are non-zero eigenvalues of the operator $L_{-}$.
Similarly, one defines the eta-invariant for the trivial gauge $A=0$, denoted by $\eta_{L_{-}}(0)$. With this trivial gauge, one is interested in the operator
$$L=\bigg( \begin{matrix} \ast d&-d\ast\\d\ast&0 \end{matrix} \bigg),$$
restricted on odd forms.
Equation 2.16 in Quantum Field Theory and Jones Polynomial:
$$\frac{1}{4}\left(\eta_{L_{-}}(A)-\eta_{L_{-}}(0)\right)=\frac{c_{2}(G)}{2\pi}I[A]$$
Witten claimed that this is a result from Atiyah-Patodi-Singer index theorem.
The original statement of APS index theorem is from Spectral Asymmetry and Riemannian Geometry I, which is very hard to read for physics students. In the following, I copy the statement of APS index theorem from Aspects of Boundary Problems in Analysis and Geometry, which is easier to read.
Let the closed three dimensional manifold $Y$ be the boundary of a compact, oriented, four dimensional Riemannian manifold $M$, i,e, $\partial M=Y$. Let $S(M)$ be a spin bundle over $M$, then one has the splitting $S(M)=S^{+}(M)\oplus S^{-}(M)$ into chiral halves. Let $E$ be a Hermitian vector bundles over $M$. The twisted Dirac operator on $M$ is defined as
$$\mathcal{D}=\bigg( \begin{matrix} 0&D^{+}\\D^{-}&0 \end{matrix} \bigg),$$
with
$$D^{+}:\Gamma(M,S^{+}(M)\otimes E)\rightarrow\Gamma(M,S^{-}(M)\otimes E)$$
$$D^{-}:\Gamma(M,S^{-}(M)\otimes E)\rightarrow\Gamma(M,S^{+}(M)\otimes E)$$
where $\Gamma(M,S^{\pm}(M)\otimes E)$ is the set of sections of the bundle $S^{\pm}(M)\otimes E$.
In addition, one assumes the following conditions:
$M$ has a collar neighborhood $N=[0,1)_{s}\times Y$ near $Y$, where the metric is a product
$$g=ds^{2}+h$$
with $h$ a metric on $Y$.
Denote the space of square-integrable spinors on $Y$ by $L^{2}(Y,S(Y))$. Near the boundary $Y$, the Dirac operator $\mathcal{D}$ is of product type on the collar of the following form:
$$\mathcal{D}=\Gamma^{s}(\partial_{s}+D_{Y})$$
where $\Gamma^{s}:S^{\pm}(N)\otimes E|_{N}\rightarrow S^{\mp}(N)\otimes E|_{N}$ is unitary mapping
$$L^{2}(Y,S(Y)\otimes E|_{Y})\rightarrow L^{2}(Y,S(Y)\otimes E|_{Y})$$
of spinors on $Y$, and
$$D_{Y}:\Gamma(Y,S(Y)\otimes E|_{Y})\rightarrow\Gamma(Y,S(Y)\otimes E|_{Y})$$
is the self-adjoint Dirac operator on $Y$.
Denote the APS eta-invariant for $D_{Y}$ by
$$\eta_{D_{Y}}(s)=\sum_{\lambda\in\mathrm{spec}(D_{Y})\backslash\left\{0\right\}}m_{\lambda}\frac{\mathrm{sign}(\lambda)}{|\lambda|^{s}}$$
where $m_{\lambda}$ is the multiplicity of the eigenvalue $\lambda$.
Let $\widehat{M}$ be the non-compact elongation of $M$ defined as follows:
$$\widehat{M}=(-\infty,0]_{s}\times Y\cup_{\partial M}M$$
One denotes the extension of $\mathcal{D}$ to $\widehat{M}$ by $\widehat{D}$.
Atiyah-Patodi-Singer:
$$\mathrm{ind}(\widehat{D})=\int_{M}\hat{A}(TM)\mathrm{ch}(E)-\frac{1}{2}\left(\eta(D_{Y})+\dim\ker D_{Y}\right)$$
$$=\frac{-1}{8\pi^{2}}\int_{M}\mathrm{Tr}\left(F\wedge F\right)+\frac{\dim E}{192\pi^{2}}\int_{M}\mathrm{Tr}\left(R\wedge R\right)-\frac{1}{2}\left(\eta(D_{Y})+\dim\ker D_{Y}\right)$$
Please tell me how I can derive Witten's formula (2.16) from the above APS index formula. The quadratic Casimir $c_{2}(G)$ is supposed to come from replacing the trace in the adjoint representation by the trace in the fundamental representation in the Chern-Simons action. However, in the APS index formula, I cannot see anything in the adjoint representation.
I've seen some "physical" derivations of the formula (2.16) from Gauge Dependence of the Eta-Function in Chern-Simons Field Theory and the Vilkovisky-DeWitt Correction, and Perturbative Expansion of Chern-Simons Theory with Noncompact Gauge Group, but they made me even more confused.
What is even worse, I found a more generic formula generalizing Witten's formula (2.16) in Lectures on Quantization of Gauge Systems (equation 60 on page 53) and Computer Calculation of Witten's 3-Manifold Invariant (equation 1.31 on page 86).
Also, in the physics paper Global Symmetries, Counterterms, and Duality in Chern-Simons Matter Theories with Orthogonal Gauge Groups (equation 4.2 on page 33), the APS index theorem looks very different from the original one, with a factor of the quadratic Casimir.
Please tell me where this second order Casimir is coming from.
|
I recently came across this in a textbook (NCERT class 12 , chapter: wave optics , pg:367 , example 10.4(d)) of mine while studying the Young's double slit experiment. It says a condition for the formation of interference pattern is$$\frac{s}{S} < \frac{\lambda}{d}$$Where $s$ is the size of ...
The accepted answer is clearly wrong. The OP's textbook refers to 's' as "size of source" and then gives a relation involving it, but the accepted answer conveniently assumes 's' to be "fringe-width" and proves the relation. One of the unaccepted answers is the correct one. I have flagged the answer for mod attention. This answer wastes time, because I naturally looked at it first (it being the accepted answer) only to realise it proved something entirely different and trivial.
This question was considered a duplicate because of a previous question titled "Height of Water 'Splashing'". However, the previous question only considers the height of the splash, whereas answers to the later question may consider a lot of different effects on the body of water, such as height ...
I was trying to figure out the cross section $\frac{d\sigma}{d\Omega}$ for spinless $e^{-}\gamma\rightarrow e^{-}$ scattering. First I wrote the terms associated with each component.Vertex:$$ie(P_A+P_B)^{\mu}$$External Boson: $1$Photon: $\epsilon_{\mu}$Multiplying these will give the inv...
I am now studying the history of the discovery of electricity, so I am searching for each scientist on Google, but I am not getting good answers for some of them. So I want to ask you to suggest a good app for studying the history of these scientists.
I am working on correlation in quantum systems. Consider an arbitrary finite dimensional bipartite system $A$ with elements $A_{1}$ and $A_{2}$ and a bipartite system $B$ with elements $B_{1}$ and $B_{2}$, under an assumption that fulfills continuity. My question is whether it would be possib...
@EmilioPisanty Sup. I finished Part I of Q is for Quantum. I'm a little confused why a black ball turns into a misty of white and minus black, and not into white and black? Is it like a little trick so the second PETE box can cancel out the contrary states? Also I really like that the book avoids words like quantum, superposition, etc.
Is this correct? "The closer you get hovering (as opposed to falling) to a black hole, the further away you see the black hole from you. You would need an impossible rope of an infinite length to reach the event horizon from a hovering ship". From physics.stackexchange.com/questions/480767/…
You can't make a system go to a lower state than its zero point, so you can't do work with ZPE. Similarly, to run a hydroelectric generator you not only need water, you need a height difference so you can make the water run downhill. — PM 2Ring3 hours ago
So in Q is for Quantum there's a box called PETE that has 50% chance of changing the color of a black or white ball. When two PETE boxes are connected, an input white ball will always come out white and the same with a black ball.
@ACuriousMind There is also a NOT box that changes the color of the ball. In the book it's described that each ball has a misty (possible outcomes I suppose). For example a white ball coming into a PETE box will have output misty of WB (it can come out as white or black). But the misty of a black ball is W-B or -WB. (the black ball comes out with a minus). I understand that with the minus the math works out, but what is that minus and why?
@AbhasKumarSinha intriguing/ impressive! would like to hear more! :) am very interested in using physics simulation systems for fluid dynamics vs particle dynamics experiments, alas very few in the world are thinking along the same lines right now, even as the technology improves substantially...
@vzn for physics/simulation, you may use Blender, which is very accurate. If you want to experiment with lenses and optics, then you may use Mitsubishi Renderer; those are made for accurate scientific purposes.
@RyanUnger physics.stackexchange.com/q/27700/50583 is about QFT for mathematicians, which overlaps in the sense that you can't really do string theory without first doing QFT. I think the canonical recommendation is indeed Deligne et al's *Quantum Fields and Strings: A Course For Mathematicians *, but I haven't read it myself
@AbhasKumarSinha when you say you were there, did you work at some kind of Godot facilities/ headquarters? where? dont see something relevant on google yet on "mitsubishi renderer" do you have a link for that?
@ACuriousMind thats exactly how DZA presents it. understand the idea of "not tying it to any particular physical implementation" but that kind of gets stretched thin because the point is that there are "devices from our reality" that match the description and theyre all part of the mystery/ complexity/ inscrutability of QM. actually its QM experts that dont fully grasp the idea because (on deep research) it seems possible classical components exist that fulfill the descriptions...
When I say "the basics of string theory haven't changed", I basically mean the story of string theory up to (but excluding) compactifications, branes and what not. It is the latter that has rapidly evolved, not the former.
@RyanUnger Yes, it's where the actual model building happens. But there's a lot of things to work out independently of that
And that is what I mean by "the basics".
Yes, with mirror symmetry and all that jazz, there's been a lot of things happening in string theory, but I think that's still comparatively "fresh" research where the best you'll find are some survey papers
@RyanUnger trying to think of an adjective for it... nihilistic? :P ps have you seen this? think youll like it, thought of you when found it... Kurzgesagt optimistic nihilismyoutube.com/watch?v=MBRqu0YOH14
The knuckle mnemonic is a mnemonic device for remembering the number of days in the months of the Julian and Gregorian calendars. Method (one handed): one form of the mnemonic is done by counting on the knuckles of one's hand to remember the numbers of days of the months. Count knuckles as 31 days, depressions between knuckles as 30 (or 28/29) days. Start with the little finger knuckle as January, and count one finger or depression at a time towards the index finger knuckle (July), saying the months while doing so. Then return to the little finger knuckle (now August) and continue for...
@vzn I dont want to go to uni nor college. I prefer to dive into the depths of life early. I'm 16 (2 more years and I graduate). I'm interested in business, physics, neuroscience, philosophy, biology, engineering and other stuff and technologies. I just have constant hunger to widen my view on the world.
@Slereah It's like the brain has a limited capacity on math skills it can store.
@NovaliumCompany btw think either way is acceptable, relate to the feeling of low enthusiasm to submitting to "the higher establishment," but for many, universities are indeed "diving into the depths of life"
I think you should go if you want to learn, but I'd also argue that waiting a couple years could be a sensible option. I know a number of people who went to college because they were told that it was what they should do and ended up wasting a bunch of time/money
It does give you more of a sense of who actually knows what they're talking about and who doesn't though. While there's a lot of information available these days, it isn't all good information and it can be a very difficult thing to judge without some background knowledge
Hello people, does anyone have a suggestion for some good lecture notes on what surface codes are and how are they used for quantum error correction? I just want to have an overview as I might have the possibility of doing a master thesis on the subject. I looked around a bit and it sounds cool but "it sounds cool" doesn't sound like a good enough motivation for devoting 6 months of my life to it
|
entropy is the measure of surprise
That's informal, short and non-quantitative, but correct within those limits. In the case of a Random Number Generator, we must make that: entropy is the measure of surprise in the outputs of the RNG, for one skilled person (with arbitrarily large computing power) knowing the RNG design including any parameters (for MT: the Mersenne prime used, and a few others). That's for an unknown seed (if any) assumed uniformly random, and arbitrarily large computing power of the skilled person (unless otherwise stated).
The entropy considered here is a property of the generator, not that of one particular bitstring that it outputs.
Entropy further can be defined for the total output of a generator, or per output bit. In cryptography we measure the entropy in bit, so that it is 1 bit per output bit for an ideal uniform True RNG.
For any Pseudo RNG, the whole output is predictable from design, parameters and seed; hence the entropy in the whole output is limited to the entropy in what generates its seed, which is finite. And the entropy per output bit decreases to zero as the output size increases towards infinity.
the Mersenne Twister (MT) has high entropy.
No, because it is a PRNG (see above).
the MT is also predictable.
Yes, with enough output, and little computing power.
What's the relationship between entropy and predictability?
If a bitstring generator has the property that its full output is predictable from a finite-length prefix (as is the case for MT), then this generator has finite total entropy (bounded by said finite length in bits for Shannon entropy in bits), and vanishingly small entropy per output bit.
The converse is false for practical definition of (un)predictable. In particular, there exist practical Cryptographically Secure PRNGs (thus of finite total entropy) that are practically unpredictable.
To make things quantitative: consider a generator $G$ of arbitrarily long bitstrings. Denote by $G_b$ that generator restricted to its first $b$ bits. $G_b$ can generate $2^b$ different $b$-bit bitstrings $B$ with respective probability $p_B$ (possibly $0$ for some $B$), with $1=\sum p_B$ (the sum being over the $2^b$ bitstrings $B$). $G_b$ has Shannon entropy in bits$$H(G_b)=\sum_{B\text{ with }p_B\ne0}p_B\,\log_2\left(\frac1{p_B}\right)$$and its average entropy per bit is $H(G_b)/b$.
For an ideal TRNG (which output is uniformly random independent bits), and any $b$, all $b$-bit bitstrings are equiprobable with $p_B=2^{-b}$ and it comes $H(G_b)=b$.
For any PRNG with an $s$-bit seed (per any distribution), $H(G_b)\le \min(b,s)$. The entropy of the generator's whole output is $H(G)=\displaystyle\lim_{b\to\infty}H(G_b)$. That is maximal with $H(G)=s$ when all seeds generate different outputs and the seed is uniformly random.
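To make the bound concrete, here is a toy computation (my own construction; the tiny 4-bit LCG stands in for a real PRNG) showing $H(G_b)$ plateau at the seed entropy $s$, so that $H(G_b)/b\to 0$:

```python
from collections import Counter
from math import log2

def shannon_entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

def toy_prng(seed, b):                 # stand-in PRNG: 4-bit LCG, full period 16
    out, state = [], seed
    for _ in range(b):
        state = (5 * state + 3) % 16
        out.append((state >> 3) & 1)   # emit the top bit of the state
    return tuple(out)

s = 4                                  # seed bits; seed uniform on 0..15
for b in (1, 2, 4, 8, 16, 32):
    counts = Counter(toy_prng(seed, b) for seed in range(2**s))
    H = shannon_entropy([c / 2**s for c in counts.values()])
    print(b, round(H, 3), round(H / b, 3))   # H plateaus at <= s = 4; H/b -> 0
```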
|
I am interested in ring-theoretic properties of rings of modular forms. Consider the ring $R$ of integral modular forms for some level, say $\Gamma_1(n)$ -- and to be gentle, let's invert $n$. Algebro-geometrically, this can be defined as the sections of powers of the line bundle $\lambda$ on the compactified or uncompactified moduli stack $\mathcal{M}_1(n)$ of elliptic curves with $\Gamma_1(n)$-level structure; if we consider the compactified moduli, we get holomorphic modular forms, and if we consider the uncompactified one, we get meromorphic modular forms.
If we consider meromorphic modular forms, then $R$ is certainly known to be regular for $n\geq 2$; indeed, for $n\geq 2$ the $\mathbb{G}_m$-torsor over the moduli stack $\mathcal{M}_1(n)$ associated to $\bigoplus_{i\in \mathbb{Z}} \lambda^{\otimes i}$ is representable by an affine scheme and known to be regular as $\mathcal{M}_1(n)$ is regular.
My question is about the holomorphic case:

For which $n$ is the ring of integral holomorphic modular forms $R$ of level $\Gamma_1(n)$ regular? Is $R$ always normal?
|
I have a question about a reflecting Brownian motion and its boundary local time.
Bass and Hsu studied the existence of Reflecting Brownian motion and boundary local time on a bounded Lipschitz domain in 1991. Although I don't state the definition of boundary local time here, I will briefly explain what it is. Roughly speaking, boundary local time $\{L_{t}\}$ is an additive functional on a probability space and satisfies the following the equation: \begin{align*} L_{t}=\int_{0}^{t}1_{\left\{X_{s} \in \partial D \right\}}\,dL_{s}, \end{align*} where $\{X_{t}\}$ is a Reflecting Brownian motion on $\bar{D}$ (closure of a bounded Lipschitz domain $D$). That is, $L_t$ increases only when $X_t \in \partial D$.
Question
Let $D$ be a rectangle as in the following picture. Even in this case, we can define a reflecting Brownian motion $(X_t,P_x)$ starting from $x \in \bar{D}$ and the boundary local time $\{L_{t}\}$.
I am interested in the quantity $ P_{x}(L_t>M), $ where $M$ is a positive constant.
I think $P_{a}(L_t >M) \ge P_{b}(L_t >M)$, where $a,b$ are the boundary points shown in the picture, but I don't know how to prove this inequality. If you know related studies, please let me know.

Addendum:
In this question, $\{X_{t}\}$ is the Markov process generated by the following Dirichlet form on $L^{2}(\bar{D})$: \begin{align*} \mathcal{E}(f,g)=\frac{1}{2}\int_{D}(\nabla f, \nabla g)\,dx,\quad f,g \in H^1(D), \end{align*} where $H^{1}(D)$ is the Sobolev space on $D$ with Neumann boundary condition. $\{L_{t}\}$ is the positive continuous additive functional associated with the surface measure $\sigma$ on $\partial D$. To be more precise, $\{L_t\}$ and $\sigma$ are in the Revuz correspondence. Furthermore, $X_{t}$ has the following Skorohod representation: \begin{align*} X_t=X_0+B_t+\frac{1}{2}\int_{0}^{t}n(X_s)\,dL_s, \end{align*} where $\{B_t\}$ is the $d$-dimensional Brownian motion and $n$ is the unit inward normal vector to $\partial D$.
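Not an answer, but here is a crude Monte Carlo sketch one could use to compare the two probabilities empirically. It is entirely my construction: the start points are placeholders for $a$ and $b$ (the picture is not reproduced here), and the boundary local time is approximated by the scaled occupation time of an $\varepsilon$-neighbourhood of $\partial D$, a standard heuristic rather than a rigorous construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_time(x0, t=1.0, dt=1e-4, eps=1e-2, Lx=2.0, Ly=1.0):
    """Reflected BM in [0,Lx] x [0,Ly]; L_t ~ time within eps of boundary / (2*eps)."""
    pos = np.array(x0, dtype=float)
    L = 0.0
    for _ in range(int(t / dt)):
        pos += rng.normal(0.0, np.sqrt(dt), size=2)
        # fold the increment back into the rectangle (coordinate-wise reflection)
        pos[0] = abs(pos[0]); pos[0] = Lx - abs(Lx - pos[0])
        pos[1] = abs(pos[1]); pos[1] = Ly - abs(Ly - pos[1])
        if min(pos[0], Lx - pos[0], pos[1], Ly - pos[1]) < eps:
            L += dt / (2 * eps)
    return L

M, runs = 0.5, 200
for start in [(0.0, 0.5), (1.0, 0.0)]:     # placeholder boundary points a, b
    hits = sum(local_time(start) > M for _ in range(runs))
    print(start, hits / runs)              # empirical P_x(L_t > M)
```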
|
Error analysis of discontinuous Galerkin method for the time fractional KdV equation with weak singularity solution
1.
School of Mathematics and Statistics, Shandong Normal University, Jinan 250014, China
2.
School of Mathematic and Quantitative Economics, Shandong University of Finance and Economics, Jinan 250014, China
3.
Laboratory of Computational Physics, Institute of Applied Physics and Computational Mathematics, Beijing 100088, China
In this work, the time fractional KdV equation with Caputo time derivative of order $ \alpha \in (0,1) $ is considered. The solution of this problem has a weak singularity near the initial time $ t = 0 $. A fully discrete discontinuous Galerkin (DG) method combining the well-known L1 discretisation in time and the DG method in space is proposed to approximate the time fractional KdV equation. An unconditional stability result and an O$ (N^{-\min \{r\alpha,2-\alpha\}}+h^{k+1}) $ convergence result for $ P^k \; (k\geq 2) $ polynomials are obtained. Finally, numerical experiments are presented to illustrate the efficiency and the high order accuracy of the proposed scheme.
Keywords: Time fractional KdV equation, weak singularity, discontinuous Galerkin method, stability, error estimate. Mathematics Subject Classification: Primary: 35R11, 65M60; Secondary: 65M12. Citation: Na An, Chaobao Huang, Xijun Yu. Error analysis of discontinuous Galerkin method for the time fractional KdV equation with weak singularity solution. Discrete & Continuous Dynamical Systems - B, 2020, 25 (1) : 321-334. doi: 10.3934/dcdsb.2019185
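To make the time discretisation concrete, here is a minimal sketch (ours, not the authors' code; the DG space discretisation is omitted) of the L1 approximation of the Caputo derivative on the graded mesh $t_n = T(n/N)^r$ used in such analyses, tested on a function with the typical $t^\alpha$ initial-layer singularity:

```python
import numpy as np
from math import gamma

alpha, T, N = 0.5, 1.0, 64
r = (2 - alpha) / alpha                # grading with r*alpha = 2 - alpha
t = T * (np.arange(N + 1) / N)**r      # graded mesh, clustered near t = 0

def l1_caputo(u):
    """L1 approximation of the Caputo derivative D_t^alpha u at t_1..t_N."""
    out = np.zeros(N + 1)
    for n in range(1, N + 1):
        acc = 0.0
        for k in range(n):             # piecewise-linear u, kernel integrated exactly
            w = (t[n] - t[k])**(1 - alpha) - (t[n] - t[k + 1])**(1 - alpha)
            acc += w * (u[k + 1] - u[k]) / (t[k + 1] - t[k])
        out[n] = acc / gamma(2 - alpha)
    return out

u = t**alpha                           # model solution, singular at t = 0
exact = gamma(1 + alpha)               # D_t^alpha t^alpha = Gamma(1 + alpha)
print(abs(l1_caputo(u)[N] - exact))    # error at t = T; shrinks as N grows
```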
[Tables: $L^2$ errors and observed convergence orders of the scheme for increasing numbers of time steps $N$ and of spatial elements $M$.]
|
Hello guys! I was wondering if you knew some books/articles that have a good introduction to convexity in the context of variational calculus (functional analysis). I was reading Young's "calculus of variations and optimal control theory" but I'm not that far into the book and I don't know if skipping chapters is a good idea.
I don't know of a good reference, but I'm pretty sure that just means that second derivatives have consistent signs over the region of interest. (That is certainly a sufficient condition for Legendre transforms.)
@dm__ yes have studied bells thm at length ~2 decades now. it might seem airtight and has stood the test of time over ½ century, but yet there is some fineprint/ loopholes that even phd physicists/ experts/ specialists are not all aware of. those who fervently believe like Bohm that no new physics will ever supercede QM are likely to be disappointed/ dashed, now or later...
oops lol typo bohm bohr
btw what is not widely appreciated either is that nonlocality can be an emergent property of a fairly simple classical system, it seems almost nobody has expanded this at length/ pushed it to its deepest extent. hint: harmonic oscillators + wave medium + coupling etc
But I have seen that the convexity is associated to minimizers/maximizers of the functional, whereas the sign second variation is not a sufficient condition for that. That kind of makes me think that those concepts are not equivalent in the case of functionals...
@dm__ generally think sampling "bias" is not completely ruled out by existing experiments. some of this goes back to CHSH 1969. there is unquestioned reliance on this papers formulation by most subsequent experiments. am not saying its wrong, think only that theres very subtle loophole(s) in it that havent yet been widely discovered. there are many other refs to look into for someone extremely motivated/ ambitious (such individuals are rare). en.wikipedia.org/wiki/CHSH_inequality
@dm__ it stands as a math proof ("based on certain assumptions"), have no objections. but its a thm aimed at physical reality. the translation into experiment requires extraordinary finesse, and the complex analysis starts with CHSH 1969. etc
While it's not something usual, I've noticed that sometimes people edit my question or answer with a more complex notation or incorrect information/formulas. While I don't think this is done with malicious intent, it has sometimes confused people when I'm either asking or explaining something, as...
@vzn what do you make of the most recent (2015) experiments? "In 2015 the first three significant-loophole-free Bell-tests were published within three months by independent groups in Delft, Vienna and Boulder.
All three tests simultaneously addressed the detection loophole, the locality loophole, and the memory loophole. This makes them “loophole-free” in the sense that all remaining conceivable loopholes like superdeterminism require truly exotic hypotheses that might never get closed experimentally."
@dm__ yes blogged on those. they are more airtight than previous experiments. but still seem based on CHSH. urge you to think deeply about CHSH in a way that physicists are not paying attention. ah, voila even wikipedia spells it out! amazing
> The CHSH paper lists many preconditions (or "reasonable and/or presumable assumptions") to derive the simplified theorem and formula. For example, for the method to be valid, it has to be assumed that the detected pairs are a fair sample of those emitted. In actual experiments, detectors are never 100% efficient, so that only a sample of the emitted pairs are detected. > A subtle, related requirement is that the hidden variables do not influence or determine detection probability in a way that would lead to different samples at each arm of the experiment.
↑ suspect entire general LHV theory of QM lurks in these loophole(s)! there has been very little attn focused in this area... :o
how about this for a radical idea? the hidden variables determine the probability of detection...! :o o_O
@vzn honest question, would there ever be an experiment that would fundamentally rule out nonlocality to you? and if so, what would that be? what would fundamentally show, in your opinion, that the universe is inherently local?
@dm__ my feeling is that something more can be milked out of bell experiments that has not been revealed so far. suppose that one could experimentally control the degree of violation, wouldnt that be extraordinary? and theoretically problematic? my feeling/ suspicion is that must be the case. it seems to relate to detector efficiency maybe. but anyway, do believe that nonlocality can be found in classical systems as an emergent property as stated...
if we go into detector efficiency, there is no end to that hole. and my beliefs have no weight. my suspicion is screaming absolutely not, as the classical is emergent from the quantum, not the other way around
@vzn have remained civil, but you are being quite immature and condescending. I'd urge you to put aside the human perspective and not insist that physical reality align with what you expect it to be. all the best
@dm__ ?!? no condescension intended...? am striving to be accurate with my words... you say your "beliefs have no weight," but your beliefs are essentially perfectly aligned with the establishment view...
Last night dream, introduced a strange reference frame based disease called Forced motion blindness. It is a strange eye disease where the lens is such that to the patient, anything stationary wrt the floor is moving forward in a certain direction, causing them have to keep walking to catch up with them. At the same time, the normal person think they are stationary wrt to floor. The result of this discrepancy is the patient kept bumping to the normal person. In order to not bump, the person has to walk at the apparent velocity as seen by the patient. The only known way to cure it is to remo…
And to make things even more confusing:
Such disease is never possible in real life, for it involves two incompatible realities to coexist and coinfluence in a pluralistic fashion. In particular, as seen by those not having the disease, the patient kept ran into the back of the normal person, but to the patient, he never ran into him and is walking normally
It seems my mind has gone f88888 up enough to envision two realities that with fundamentally incompatible observations, influencing each other in a consistent fashion
It seems my mind is getting more and more comfortable with dialetheia now
@vzn There's blatant nonlocality in Newtonian mechanics: gravity acts instantaneously. Eg, the force vector attracting the Earth to the Sun points to where the Sun is now, not where it was 500 seconds ago.
@Blue ASCII is a 7 bit encoding, so it can encode a maximum of 128 characters, but 32 of those codes are control codes, like line feed, carriage return, tab, etc. OTOH, there are various 8 bit encodings known as "extended ASCII", that have more characters. There are quite a few 8 bit encodings that are supersets of ASCII, so I'm wary of any encoding touted to be "the" extended ASCII.
If we have a system and we know all the degrees of freedom, we can find the Lagrangian of the dynamical system. What happens if we apply some non-conservative forces in the system? I mean how to deal with the Lagrangian, if we get any external non-conservative forces perturbs the system?Exampl...
@Blue I think now I probably know what you mean. Encoding is the way to store information in digital form; I think I have heard the professor talking about that in my undergraduate computer course, but I thought that is not very important in actually using a computer, so I didn't study that much. What I meant by use above is what you need to know to be able to use a computer, like you need to know LaTeX commands to type them.
@AvnishKabaj I have never had any of these symptoms after studying too much. When I have intensive studies, like preparing for an exam, after the exam, I feel a great wish to relax and don't want to study at all and just want to go somehwere to play crazily.
@bolbteppa the (quanta) article summary is nearly popsci writing by a nonexpert. specialists will understand the link to LHV theory re quoted section. havent read the scientific articles yet but think its likely they have further ref.
@PM2Ring yes so called "instantaneous action/ force at a distance" pondered as highly questionable bordering on suspicious by deep thinkers at the time. newtonian mechanics was/ is not entirely wrong. btw re gravity there are a lot of new ideas circulating wrt emergent theories that also seem to tie into GR + QM unification.
@Slereah No idea. I've never done Lagrangian mechanics for a living. When I've seen it used to describe nonconservative dynamics I have indeed generally thought that it looked pretty silly, but I can see how it could be useful. I don't know enough about the possible alternatives to tell whether there are "good" ways to do it. And I'm not sure there's a reasonable definition of "non-stupid way" out there.
← lol went to metaphysical fair sat, spent $20 for palm reading, enthusiastic response on my leadership + teaching + public speaking abilities, brought small tear to my eye... or maybe was just fighting infection o_O :P
How can I move a chat back to comments? In complying with the automated admonition to move comments to chat, I discovered that MathJax was no longer rendered. This is unacceptable in this particular discussion. I therefore need to undo my action and move the chat back to comments.
hmmm... actually the reduced mass comes out of using the transformation to the center of mass and relative coordinates, which have nothing to do with Lagrangian... but I'll try to find a Newtonian reference.
One example is a spring of initial length $r_0$ with two masses $m_1$ and $m_2$ on the ends such that $r = r_2 - r_1$ is it's length at a given time $t$ - the force laws for the two ends are $m_1 \ddot{r}_1 = k (r - r_0)$ and $m_2 \ddot{r}_2 = - k (r - r_0)$ but since $r = r_2 - r_1$ it's more natural to subtract one from the other to get $\ddot{r} = - k (\frac{1}{m_1} + \frac{1}{m_2})(r - r_0)$ which makes it natural to define $\frac{1}{\mu} = \frac{1}{m_1} + \frac{1}{m_2}$ as a mass
since $\mu$ has the dimensions of mass and since then $\mu \ddot{r} = - k (r - r_0)$ is just like $F = ma$ for a single variable $r$ i.e. an spring with just one mass
@vzn It will be interesting if a de-scarring followed by a re scarring can be done in some way in a small region. Imagine being able to shift the wavefunction of a lab setup from one state to another thus undo the measurement, it could potentially give interesting results. Perhaps, more radically, the shifting between quantum universes may then become possible
You can still use Fermi to compute transition probabilities for the perturbation (if you can actually solve for the eigenstates of the interacting system, which I don't know if you can), but there's no simple human-readable interpretation of these states anymore
@Secret when you say that, it reminds me of the no cloning thm, which have always been somewhat dubious/ suspicious of. it seems like theyve already experimentally disproved the no cloning thm in some sense.
|
Intuitively, I would expect the Taylor expansion around $x_0$ of a polynomial in $(x-x_0)$ to be identical to the polynomial. However, I cannot seem to show that/whether this is the case:
For a finite power series $f(x) = \sum_{i=0}^{n} a_i (x-x_0)^i$ the $k$-th derivative is given by $$\frac{d^k f(x)}{dx^k} = \sum_{j=k}^n \frac{j!}{(j-k)!} a_j (x-x_0)^{j-k}$$
To get the Taylor series, I would write $$ F(x) = \sum_{i=0}^{\infty} \frac{(x-x_0)^i}{i!} \frac{d^i f(x_0)}{dx^i} $$ and substitute the expression for the $i$-th derivative of $f(x)$: $$ F(x) = \sum_{i=0}^{\infty} \sum_{j=i}^{n} \frac{j!}{(j-i)!i!} (x_0 - x_0)^{j-i} a_j (x-x_0)^i $$
I am confused by the $(x_0 - x_0)$ factor that appears and the binomial coefficient that shows up. Is my intuition wrong?
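A quick symbolic check (a sketch with an arbitrary cubic, not part of the question) is consistent with the intuition: the factor $(x_0-x_0)^{j-i}=0^{j-i}$ vanishes except when $j=i$, so the double sum collapses back to the polynomial.

import sympy as sp

x, x0 = sp.symbols('x x0')
a = sp.symbols('a0:4')  # arbitrary coefficients a0..a3
f = sum(a[i] * (x - x0)**i for i in range(4))

# Taylor series of f around x0, truncated at the polynomial's degree
taylor = sum(f.diff(x, k).subs(x, x0) / sp.factorial(k) * (x - x0)**k for k in range(4))
print(sp.simplify(taylor - f))  # prints 0: the expansion reproduces f exactly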
|
We have used a UTE sequence to obtain the subject-specific susceptibility distribution, which was then used to simulate the motion-induced B0 change at two head positions. A Fourier-based dipole-approximation method was used to map susceptibility to B0. We have evaluated the simulation results against the measured B0 at the same positions and observed a good agreement between the simulated and real data.
Echo Planar Imaging (EPI) and balanced Steady-State Free Precession (bSSFP) are limited in their application to real-time fMRI by both subject motion and main magnetic field (B0) inhomogeneities. Motion-related B0 variations result from the changing position and orientation of susceptibility interfaces relative to the B0 field; this causes ghosting and ringing artifacts in structural imaging and time-series phase instability in functional scans. Alternatively, the B0 field inhomogeneities have to be determined for each large-scale movement of the imaged object during acquisitions [1]. However, it is impractical to acquire the B0 field for every possible position and orientation of the object. Previously, field probes have been used to correct for this, but they cannot fully estimate B0 in the brain as they are outside the brain [2].
Here, we estimate the motion-related B0 variations using a Fourier-based dipole-approximation method [3-5] combined with ultrashort echo time (UTE) imaging for computing the subject's susceptibility model. The UTE sequence allows delineating the cortical bone and air cavities, and thus provides susceptibility models specific to each subject.
The UTE sequence was applied to a healthy subject after applying the scanner's second-order Spherical Harmonic (SH) global shimming. We took 192 slices of dual-echo UTE (1.5 mm3 voxels; TE1 = 0.05 ms; TE2 = 2.46 ms; TR = 6 ms; flip angle = 8°; FOV = 288 mm3) on a 3T scanner (Prisma, Siemens, Erlangen, Germany). As a reference, an off-resonance field map was measured using a dual-echo GRE sequence (1.5 mm3 voxels; TE1 = 6.66 ms; TE2 = 9.12 ms; TR = 1630 ms; flip angle = 60°; FOV = 288x288x192 mm). The reference field map, "Measured ΔB0(x,y,z)", was calculated by measuring the phase accrued between the two echo times at each image voxel.
A 3-class (air, cortical bone, and soft tissue) UTE-based susceptibility model was produced in three steps, as shown in Figure 1. First, the magnitude images at the first (TE1) and second (TE2) echo times were used to calculate the air mask; an empirically determined threshold was chosen to segment the air cavities [6]. Then the bone and soft tissue were segmented using the inverse of the effective transverse relaxation rate (R2*) estimated from TE1 and TE2, where cortical bone has high R2* values (R2*_bone ≥ 0.3 ms-1) and soft tissue is expected to have low R2* values (0 ms-1 < R2*_soft-tissue < 0.3 ms-1) [7]. Finally, the air mask was multiplied back into the R2* map (R2*_air = 0 ms-1) and the corresponding magnetic susceptibilities $$$(\chi_{soft-tissue}\approx-9.2\,ppm, \chi_{bone}\approx-11.4\,ppm, \chi_{air}\approx0.36\,ppm)$$$ [8,9] were assigned to the R2* map.
The simulated off-resonance field map, "Simulated ΔB0(x,y,z)", was calculated using a Fourier-based method (Eq. 1):
$$\mathrm{Estimated}\,\Delta B_0^{z}(\mathbf{x})=\underbrace{FT^{-1}\left\{B_0\left[\frac{1}{3}-\frac{k_z^2}{k_x^2+k_y^2+k_z^2}\right]\cdot\tilde{\chi}(\mathbf{k})\right\}}_{\mathrm{simulated}\,\Delta B_0^z}+B_{in} \quad (1)$$
where the tilde denotes the 3-dimensional Fourier transform of the susceptibility model and k indicates the k-space vector. Susceptibility is weighted by a k-space scaling factor (the term in brackets) [10]. Bin is the measured background inhomogeneity; we measured Bin in a spherical phantom with the same SH shimming settings as for the measured ΔB0. The resulting "Estimated ΔB0(x,y,z)" is the sum of the simulated ΔB0 and Bin. The standard deviation of B0 (σB0) within a brain mask was used to assess the simulation performance. Image processing and simulations were performed in MATLAB (MathWorks, Natick, MA).
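As a rough illustration of Eq. 1 (not the authors' implementation; the grid, units, and the k = 0 convention below are assumptions), the forward calculation is a single pointwise multiplication in k-space:

import numpy as np

def simulate_dB0(chi, B0=3.0, voxel=1.5):
    # Forward dipole model: FT^{-1}{ B0 * (1/3 - kz^2/|k|^2) * FT{chi} }.
    # chi is a 3-D susceptibility map; the singular k = 0 term is set to 0 by convention.
    freqs = [np.fft.fftfreq(n, d=voxel) for n in chi.shape]
    kx, ky, kz = np.meshgrid(*freqs, indexing='ij')
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(divide='ignore', invalid='ignore'):
        kernel = 1.0 / 3.0 - kz**2 / k2
    kernel[k2 == 0] = 0.0
    return np.real(np.fft.ifftn(B0 * kernel * np.fft.fftn(chi)))

# Toy example: an air sphere (chi ~ 0.36 ppm) inside soft tissue (chi ~ -9.2 ppm).
n = 64
x, y, z = np.meshgrid(*[np.arange(n) - n // 2] * 3, indexing='ij')
chi = np.full((n, n, n), -9.2e-6)
chi[x**2 + y**2 + z**2 < 8**2] = 0.36e-6
dB0 = simulate_dB0(chi)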
Two different head positions were measured. We used the first position (pos.1) as the reference position. The scanner's second-order SH shimming was calculated and then applied to the first position. For the second position (pos.2), we kept the SH shimming values identical to those of position one during measurements. The second position's field maps were then registered to the first position using FLIRT [12] from the FMRIB Software Library package [11]. We calculated the difference between pos.1 and registered pos.2 for the "Estimated ΔB0" and the "Measured ΔB0", in order to demonstrate that the B0 field estimated with our method could be used to predict the motion-induced B0 variations.
[1] D. F. Hillenbrand, K. M. Lo, W. F. B. Punchard, T. G. Reese, and P. M. Starewicz. Applied Magnetic Resonance. 2005;29:39-64.
[2] S. Gross, Y. Duerst, L. M. Vionnet, C. Barmet and K. P. Pruessmann. B0-Atlas with Field-Probe Guidance: Application in Real-Time Field Control. In Proceedings of the 24th Annual Meeting of ISMRM, Singapore, 2016. p. 0930.
[3] J. P. Marques & R. Bowtell. Concepts in Magnetic Resonance Part B: Magnetic Resonance Engineering. 2005;25B:65-78.
[4] R. Boegle, J. Maclaren & M. Zaitsev. Magnetic Resonance Materials in Physics, Biology and Medicine. 2010;23:263-273.
[5] K. M. Koch, X. Papademetris, D. L. Rothman et al. Phys Med Biol. 2006;51:6381-6402.
[6] C. Catana, A. van der Kouwe, T. Benner et al. Journal of Nuclear Medicine. 2010;51:1431-1438.
[7] V. Keereman, Y. Fierens, T. Broux et al. Journal of Nuclear Medicine. 2010;51:812-818.
[8] C. M. Collins, B. Yang, Q. X. Yang et al. Magnetic Resonance Imaging. 2002;20:413-424.
[9] J. A. Hopkins & F. W. Wehrli. Magnetic Resonance in Medicine. 1997;37:494-500.
[10] Y. C. Cheng, J. Neelavalli & E. M. Haacke. Phys Med Biol. 2009;54:1169-1189.
[11] S. M. Smith, M. Jenkinson, M. W. Woolrich et al. NeuroImage. 2004;23 Suppl 1:S208-219.
[12] M. Jenkinson, P. Bannister, M. Brady et al. NeuroImage. 2002;17:825-841.
[13] R. Sostheim, J. Maclaren, F. Testud, M. Zaitsev. In Proceedings of the 20th Annual Meeting of ISMRM, Melbourne, Australia, 2012. p. 3386.
|
Here's a geometric argument, but it isn't as slick as some of the Calculus-based ones.
Consider the unit circle about $O$, through $R$ and $S$, with $\theta = \angle ROS$. The perpendicular from $S$ to $\overline{OR}$ has length $\sin\theta$, while the perpendicular from $R$ up to $T$ on the extension of $\overline{OS}$ has length $\tan\theta$. Let $M$ be the midpoint of $\overline{ST}$.
Then $$2\;|\text{area of sector}\;ROS| = \theta \qquad\text{and}\qquad 2\;|\triangle ORM| = \frac{1}{2}\left(\sin\theta + \tan\theta\right)$$
"All we need to do" is show that the triangle has more area than the sector. This seems pretty clear; after all, the triangle contains almost-all of the sector, except for the circular segment defined by $\overline{KR}$, where $K$ is the intersection of $\overline{RM}$ and the circle. There is a concern, though, that the excess area in the triangular region $KSM$ could be less than that of the tiny sliver of a circular segment for small $\theta$; we need to dispel that concern.
There's probably a simpler route to this, but I coordinatized and, with the help of Mathematica, found
$$M = \left(\frac{1 + \cos\theta}{2}, \frac{\sin\theta (1 + \cos\theta)}{2 \cos\theta}\right)$$
$$K = \left(\frac{1 + 3 \cos\theta + 2 \cos^2\theta + 2 \cos^3\theta}{1 + 3 \cos\theta + 4 \cos^2\theta}, \frac{2 \sin\theta \cos\theta ( 1 + \cos\theta)}{1 + 3 \cos\theta + 4 \cos^2\theta}\right)$$
so that (after a bit more symbol-crunching)
$$\frac{|\overline{MK}|}{|\overline{KR}|} = \frac{1 + 3 \cos\theta}{4 \cos^2\theta} = 1 + \frac{1 + 3 \cos\theta - 4 \cos^2\theta}{4 \cos^2\theta} = 1 + \frac{(1-\cos\theta)(1 + 4 \cos\theta)}{4 \cos^2\theta} > 1$$
for $0 < \theta < \pi/2$.
This says that $\overline{MK}$ is longer than $\overline{KR}$, so that we could reflect $R$ in $K$ to get $R^\prime$, and copy circular segment $KR$ as circular segment $KR^\prime$ inside $\triangle ORM$ yet tangent to the unit circle (and therefore outside of sector $ORS$).
Consequently, the triangle definitely has more area than the sector, so we're done. $\square$
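A numerical spot-check of the Mathematica output (my own sketch, not part of the original argument) confirms the displayed formulas for $K$ and the ratio:

import numpy as np

t = np.linspace(0.01, np.pi / 2 - 0.01, 200)  # theta in (0, pi/2)
c, s = np.cos(t), np.sin(t)

R = np.stack([np.ones_like(t), np.zeros_like(t)])
M = np.stack([(1 + c) / 2, s * (1 + c) / (2 * c)])
d = 1 + 3 * c + 4 * c**2
K = np.stack([(1 + 3 * c + 2 * c**2 + 2 * c**3) / d, 2 * s * c * (1 + c) / d])

assert np.allclose((K**2).sum(axis=0), 1)            # K lies on the unit circle
cross = (M - R)[0] * (K - R)[1] - (M - R)[1] * (K - R)[0]
assert np.allclose(cross, 0)                         # K lies on line RM
ratio = np.linalg.norm(M - K, axis=0) / np.linalg.norm(K - R, axis=0)
assert np.allclose(ratio, (1 + 3 * c) / (4 * c**2))  # the claimed ratio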
|
I'm having some trouble by trying to solve Euler equations by using the Frobenius method. For example, I'm asked to solve the Euler differential equation
$$ x^2y'' + xy' - y = 0 $$
using a power series solution.
I start by assuming there is at least one solution with the form $ y = x^\sigma\sum{a_nx^n} $.
First, I divide the whole differential equation by $x^2$. Then I substitute the expression above so I get
$$\sum{(n+\sigma)(n+\sigma-1)a_nx^{n+\sigma-2}} + \frac{1}{x}\sum{(n+\sigma)a_nx^{n+\sigma-1}}-\frac{1}{x^2}\sum{a_nx^{n+\sigma}}=0$$
And now, dividing by $ x^{\sigma-2}$, I get
$$\sum{\left((n+\sigma)(n+\sigma-1)+(n+\sigma) - 1\right)a_nx^n}=0$$
Now I don't know how to find the recurrence relation I'm looking for in order to find the form of $a_n$. In all the examples I've been able to find, in the last expression one always finds terms of $a_{n-1}$, for example, but here I don't know how to continue.
Did I do something wrong? I'm confused because I believe the given equation fits the requirements needed for the Frobenius method to be applicable, but this happens to me every time I try to solve an Euler equation with it.
Thank you very much in advance.
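Not an answer, but a quick check with a computer algebra system is consistent with what the degenerate recurrence suggests: the coefficients decouple, and only the powers given by the roots of the indicial polynomial $\sigma^2-1$ survive. A sympy sketch:

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = x**2 * y(x).diff(x, 2) + x * y(x).diff(x) - y(x)
print(sp.dsolve(ode, y(x)))  # expected: y(x) = C1/x + C2*x, i.e. only x^(-1) and x^1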
|
This question is cross-posted at MSE with a soon to expire bounty that hasn't generated much discussion.
Let $(\Omega, \mathcal{F},P)$ be a probability space and $(\mathcal{F}_n)_n$ a filtration that increases to $\mathcal{F}$.
Is there a way to quantify the "rate of convergence" of $\mathcal{F}_n \uparrow \mathcal{F}?$
I'll now try to clarify the question by explaining its motivation. I was wondering what can be said about the rate of convergence in Levy's martingale convergence theorem. By that result, for an integrable random variable $X$ we have $$E(X \mid \mathcal{F}_n) \to X$$ almost surely. I was wondering if we can make a statement like $$P(|E(X \mid \mathcal{F}_n) - X| > \epsilon) = O(n^{-a})$$ under fairly general assumptions.
As pointed out to me by Bananach in this question, it's hopeless to expect the rate to be independent of the particular filtration because we can replace $\mathcal{F}_n$ with $$\bar{\mathcal{F}_n} = \mathcal{F}_{\sqrt{n}}$$ (rounding to the nearest integer), and obtain a new filtration for which conditional expectations converge more slowly.
But perhaps if we knew "how quickly" $\mathcal{F}_n$ increases to $\mathcal{F}$, we could find a rate of convergence of conditional expectations that depends on the "rate of convergence" of the filtration.
An example is given in Michael's answer to the same question that I linked to above. Let $X$ be uniformly distributed on $[-1,1]$, assume $\mathcal{F} = \sigma(X)$, and define $$Z_n = X \mathbf{1}_{|X|>2^{-n}} \ \ \text{and} \ \ \mathcal{F}_n = \sigma(Z_n,...,Z_1).$$ Then, $$E(X \mid \mathcal{F}_n) = X \mathbf{1}_{|X|>2^{-n}} \to X,$$ and $$P(|E(X \mid \mathcal{F}_n) - X| > \epsilon) = 2^{-n}.$$
Another, trivial, example is $\mathcal{F} = \sigma(X)$ and $\mathcal{F}_n = \sigma(X)$ for all $n$, and then $$E(X \mid \mathcal{F}_n) = X.$$ The filtration converges "instantly" and so do the conditional expectations.
The examples suggest that the rate of convergence of the conditional expectations is the same as the "rate of convergence" of the filtration. Can this idea be made precise in general?
|
In quantum field theory Feynman invented a diagrammatic method to encode various terms in the Taylor decomposition of integrals of the following form, which I will write in a baby version as a finite dimensional integral rather than a path integral (and using "imaginary time"):$$Z(j_1,\dots,j_n):=\frac{\int_{\mathbb{R}^n} \exp\{-B(x_1,\dots,x_n)+\sum_kP_k(x)+\sum_{i=1}^nj_ix_i\}dx}{ \int_{\mathbb{R}^n} \exp\{-B(x_1,\dots,x_n)+\sum_kP_k(x)\}dx},$$where $B$ is a positive definite quadratic form on $\mathbb{R}^n$ and the $P_k$ are homogeneous polynomials. Furthermore one can write $Z(j)=\exp\{W(j)\}$, and it is shown that $W(j)$ is a sum of terms corresponding only to connected diagrams.
In the context of path integrals there is a notion of effective action which in this context is defined as follows. Let $\phi_i:=\frac{\partial W(j)}{\partial j_i}$. Define the Legendre transform $$\Gamma(\phi):=\sum_{i=1}^n \phi_ij_i-W(j).$$
In QFT it is claimed that the Taylor decomposition of $\Gamma(\phi)$ is the sum of terms corresponding to connected one-particle irreducible diagrams. I am wondering whether a finite dimensional (baby) version of this claim is true, and if so, whether there is a reference with a detailed discussion of the finite dimensional case.
|
Let $M$ denote a smooth $n$-dimensional manifold.
(a) Let $\phi$ denote a smooth $n$ form which is nowhere zero. Show that every $x_{0} \in M$ has a neighborhood on which we can find smooth local coordinates $x^{1}, ...x^{n}$ such that : $\phi=dx^{1}\wedge...\wedge dx^{n}$
(b) Let $\psi$ denote a closed smooth $n-1$ form on $M$ which is nowhere zero. Show that near each point, we can find smooth coordinates $x^{1}, ...x^{n}$ such that: $\psi=dx^{2}\wedge...\wedge dx^{n}$
Now, in any local coordinates I can write $\phi=f(y)dy^{1}\wedge...\wedge dy^{n}$, but I have no idea how to turn this into the required coordinates. Similarly I can write $\psi$ as a sum of $n-1$ forms. The only other thing I see is that $\phi$ looks sort of like an orientation form, but that is all I can glean from this.
Many thanks for any help!
|
I have series $\sum_{n=2}^{\infty} \frac{n+1}{n^3-1} $
I think that the comparison test would give the easiest solution but I'm not sure how to apply it. I know that I need to find a larger convergent series to prove convergence or a smaller divergent one to prove divergence, so
$\frac{1}{n}$ would give us a larger sum (correct me if I am wrong), and we know that $\sum_{n=1}^{\infty} \frac{1}{n}$ diverges, therefore our series from the beginning of the question diverges; but Wolfram Alpha says that $\sum_{n=2}^{\infty} \frac{n+1}{n^3-1}$ converges by the comparison test.
Where am I making a mistake, and what is the best way to apply the comparison test to $\sum_{n=2}^{\infty} \frac{n+1}{n^3-1}$?
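One workable comparison (added for reference, not in the original post): for $n\ge 2$,
$$\frac{n+1}{n^3-1}\le\frac{2}{n^2},\qquad\text{since } n^2(n+1)\le 2(n^3-1)\iff n^2+2\le n^3,$$
and $\sum 2/n^2$ converges. A divergent series that dominates yours, such as $\sum 1/n$, gives no information either way.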
|
I'm trying to prove the following:
If $(a_n)$ is a sequence of positive numbers such that $\sum_{n=1}^\infty a_n b_n<\infty$ for all sequences of positive numbers $(b_n)$ such that $\sum_{n=1}^\infty b_n^2<\infty$, then $\sum_{n=1}^\infty a_n^2 <\infty$.
The context here is functional analysis homework, in the subject of Hilbert spaces.
Here's what I've thought:
Let $f=(a_n)>0$. Then the problem reads: if $\int f\overline{g}<\infty$ for all $g>0,g\in \ell^2$, then $f\in \ell^2$. This brings the problem into the realm of $\ell^p$ spaces.
I know the inner product is defined only in $\ell^2$, but it's sort of like saying: if $\langle f,g\rangle <\infty$ for all $g>0,g\in \ell^2$ then $f\in \ell^2$.
I read this as: "to check a positive sequence is in $\ell^2$, just check its inner product with any positive sequence in $\ell^2$ is finite, then you're done", which I find nice, but I can't prove it :P
From there, I don't know what else to do. I thought of Hölder's inequality which in this context states: $$\sum_{n=1}^\infty a_nb_n \leq \left( \sum_{n=1}^\infty a_n^2 \right)^{1/2} \left( \sum_{n=1}^\infty b_n^2 \right)^{1/2}$$
but it's not useful here.
|
We have a block matrix:
$$ \left[\begin{array}{c|c|c} A & 0 & 0 \\ \hline 0 & B & 0 \\ \hline 0 & 0 & C \end{array}\right] $$
Here $A$, $B$ and $C$ are all permutation matrices
of varying sizes, raised to a power. For example, all of the block matrices take on forms such as:
$$ \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \\ \end{bmatrix}^a, \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ \end{bmatrix}^b, \text{ or } \begin{bmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 1\\ 1 & 0 & 0 & 0 & 0\\ \end{bmatrix}^c, $$
We can suppose that we can only obtain one copy of this matrix. In other words, we are not allowed to use this block matrix more than once.
Can we assign corresponding scalars to the matrices $(A \to a, B \to b, C \to c)$ and (using matrix multiplication on the block matrix) get a result $a \cdot b \cdot c$? WHAT I MEAN BY ASSIGNING SCALARS
Suppose that $A$ is only one of the following:
$$ \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} \text{or} \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} $$
...and $B$ is one of the following:
$$ \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} \text{or} \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} $$
We could say that the first $A$ matrix is equivalent to $1$, and the second $A$ matrix is equivalent to 2. Also, the first $B$ equivalent to 3 and the second to 4. Then if we started with the first $A$ and the second $B$, we would be looking to find a result of $1 \cdot 4$. If we started with the second $A$ and the second $B$ we would be trying to obtain a result $2 \cdot 4$.
|
1,764 69
Hi PF!
Given the ODE system ##x'(t) = A(t) x(t)## where ##x## is a vector and ##A## a square matrix periodic, so that ##A(t) = A(T+t)##, would the following be a good way to solve the system's stability: fix ##t^*##. Then $$ \int \frac{1}{x} \, dx = \int A(t^*) \, dt \implies\\ x(t) = x(0)\exp\left( A(t^*)t \right). $$ The eigenvalues of ##A(t^*)## determine the system's stability, but by fixing ##t##, this approach assumes ##A(t)## is approximately constant for ##t##, which may not be the case. What do you think?
Given the ODE system ##x'(t) = A(t) x(t)## where ##x## is a vector and ##A## a square matrix periodic, so that ##A(t) = A(T+t)##, would the following be a good way to solve the system's stability: fix ##t^*##. Then
$$
\int \frac{1}{x} \, dx = \int A(t^*) \, dt \implies\\
x(t) = x(0)\exp\left( A(t^*)t \right).
$$
The eigenvalues of ##A(t^*)## determine the system's stability, but by fixing ##t##, this approach assumes ##A(t)## is approximately constant for ##t##, which may not be the case. What do you think?
|
A well known fact in probability is that a uniform random variable on $[0,1]$ can be used to simulate any other probability distribution on $\mathbb{R}$.
A standard way of doing this is to define, given $\mu$ a probability on $\mathbb{R}$, the random variable $F(u,\mu) = \min\lbrace x \in \mathbb{R}: \mu((-\infty,x]) \ge u\rbrace$. The random variable $F(u,\mu)$ has distribution $\mu$.
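As a small illustrative sketch of this construction (my own, with $\mu$ taken to be the Exp(1) distribution, for which the generalized inverse has the closed form $F(u,\mu)=-\log(1-u)$):

import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
samples = -np.log(1 - u)  # F(u, mu) = min{x : 1 - exp(-x) >= u}

# Empirical check that samples ~ Exp(1): P[X <= 1] = 1 - e^{-1} ~ 0.632
print((samples <= 1.0).mean())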
Suppose now that $(X,d)$ is a compact metric space and let $\mathcal{P}(X)$ be the space of Borel probability measures on $X$ endowed with the topology of weak convergence.
I'm looking for a reference for the following statement:
There exists a Borel function $F:[0,1]\times \mathcal{P}(X) \to X$ such that if $u$ is a random variable whose distribution is uniform on $[0,1]$ then for each $\mu$ the random variable $F(u,\mu)$ has distribution $\mu$.
|
The Annals of Statistics, Volume 26, Number 3 (1998), 1011-1027.
Estimation of the truncation probability in the random truncation model
Abstract
Under random truncation, a pair of independent random variables $X$ and $Y$ is observable only if $X$ is larger than $Y$. The resulting model is the conditional probability distribution $H(x, y) = P[X \leq x, Y \leq y \mid X \geq Y]$. For the truncation probability $\alpha = P[X \geq Y]$, a proper estimate is not the sample proportion but $\alpha_n=\int G_n(s)\,dF_n(s)$, where $F_n$ and $G_n$ are product limit estimates of the distribution functions $F$ and $G$ of $X$ and $Y$, respectively. We obtain a much simpler representation $\hat{\alpha}_n$ for $\alpha_n$. With this, the strong consistency, an iid representation (and hence asymptotic normality), and a LIL for the estimate are established. The results are true for arbitrary $F$ and $G$. The continuity restriction on $F$ and $G$ often imposed in the literature is not necessary. Furthermore, the representation $\hat{\alpha}_n$ of $\alpha_n$ facilitates the establishment of the strong law for the product limit estimates $F_n$ and $G_n$.
Subjects: Primary: 62G05.
Citation: He, Shuyuan; Yang, Grace L. Estimation of the truncation probability in the random truncation model. Ann. Statist. 26 (1998), no. 3, 1011-1027. doi:10.1214/aos/1024691086. https://projecteuclid.org/euclid.aos/1024691086
|
Fix a metric space $(X,d)$ and consider the Hausdorff (outer) measures $\mathcal{H}^s$ on $X$.
A Frostman measure on $X$ is a finite Borel measure $\mu$ such that there exist $C,t,r_0>0$ with $\mu(B_r(x)) \leq C r^t$ for all $x\in X$ and all $0<r\leq r_0$. Let's call the supremum of such $t$ the Frostman exponent $FrostExp(\mu)$.
It is well known that any such measure satisfies $$\mu(A) \leq C \mathcal{H}^t(A)$$
In particular: The Hausdorff dimension satisfies $$\dim_\mathcal{H}(A)\geq \sup\lbrace FrostExp(\mu) \mid \mu \text{ Frostman measure with }\mu(A)>0\rbrace$$ Frostman himself proved that in fact this is essentially an equality: For any Borel set $A$ with $\mathcal{H}^t(A)>0$ there exists a Frostman measure of exponent $\geq t$ with $\mu(A)>0$.
Now my question is: Can we also say something about the precise value of the Hausdorff measure in terms of Frostman measures?
A naive conjecture might be that the above inequality is sharp in the sense that if $\infty>\mathcal{H}^t(A)>0$ then $$\sup\lbrace \mu(A) \mid \mu \text{ Borel measure with } \exists r_0 \forall 0<r<r_0: \sup_{x\in X}\mu(B_r(x))r^{-t} \leq 1\rbrace = \mathcal{H}^t(A)$$ I suppose that this is not true, mostly because it seems to me that this would be so nice that it would be explicitly stated somewhere in textbooks that prove Frostman's lemma, and I have yet to see this claim in the literature.
I have fiddled around with some of the constructions (like the one using the max-flow-min-cut theorem on an infinite tree) that come up in the proof of Frostman's lemma to see if they produce measures that come close to something like this. I haven't had any luck so far.
I would be glad if someone could tell me if a similar equality actually holds. Feel free to add additional assumptions on $X$ or $A$, I do not need the most general case although I am of course interested in it.
|
This is a neat little trick that I don't think I ever shoe-horned into a publication. It's essentially a means to leverage side information in the form of NLP corpora to improve multi-class classification.
In short, the trick is: multiply output confidence scores by a co-occurrence matrix. This helps your model predict classes that are more likely to happen together, e.g. "zebra" and "giraffe" should boost their relative scores, while "zebra" and "chaise" should not. That's the cliffnotes version.
Assume you have some classification layer, probably a softmax layer, that you’re taking as confidence estimates across your classes. You’ve trained this model to predict, say, ImageNet classes. You’ve fed in a load of training data, with a particular distribution of one-hot annotation vectors. Even with something as big as ImageNet, you’re still bound by the limitations of your particular training set – and, crucially, its distribution of one-hot class labels.
Even if you only have one class label, most photos have more than one thing in them. A chair is much more likely to be in a living room with a rug than it is to be in a savannah with tall grass. Your model has learned some of this implicitly through the context of your training data, but you can leverage your extra knowledge to improve matters. You need to get an external source that contains information about the co-occurrence relationships of your target classes. In my case, I used English language corpora. If “chair” shows up with “rug” and “living room” more often than with “savannah” or “tall grass”, we can use that to our advantage.
You can choose your favorite co-occurrence measure such as the Dice coefficient, or more implicit ones like distance within a semantic embedding. The end result you want is some structured encoding of inter-class-relatedness in the form of an $N\times N$ matrix, $Q$, with 1s along the diagonal ("chair" should always be present when "chair" is present). Once you've generated this matrix, all you do is multiply your class confidence scores by it. Assuming you have some set of predictions, $\hat{y}$, in the form of an $M\times N$ matrix, your new predictions are given by:
$$\hat{y}_{new} = \hat{y}\cdot Q$$
In essence, you’re re-scoring all the predictions with a little help from their friends. As a contrived example, imagine you have an input image that yields the following class confidence scores:
Dog    Bone   Collar   Large Hadron Collider   Physicist
0.3    0.2    0.2      0.3                     0.0
Looking at the raw scores, our algorithm thinks the input photo is either a dog or a particle accelerator, with equal probability. Luckily, we have a co-occurrence matrix from news stories that tells us what the frequency of particular words showing up together in a single article is:
           Dog    Bone   Collar   LHC    Physicist
Dog        1      0.2    0.3      0      0.01
Bone       0.2    1      0.1      0      0
Collar     0.3    0.1    1        0      0.05
LHC        0      0      0        1      0.95
Physicist  0.01   0      0.05     0.95   1
We multiply our predictions by this matrix, and we find our new scores:
Dog    Bone   Collar   Large Hadron Collider   Physicist
0.4    0.28   0.31     0.3                     0.298
The extra evidence from uncertainty around “bone” and “collar” have given the edge to “dog”. Notice the effect on “Physicist” – because the “Large Hadron Collider” was seldom mentioned without “physicist” also being mentioned, it receives a huge boost. This kind of “spreading the wealth” can happen. You can try various schemes like using $(\hat{y}\cdot Q) * \hat{y}$ (where * here denotes element-wise multiplication) to get around this if it is undesirable. Also note that, depending on your choice of co-occurrence measure, the re-weighted predictions may or may not be normalized.
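In code, the basic trick is a single matrix product. A minimal numpy sketch (reusing the toy numbers above; everything here is illustrative) reproduces the re-scored table along with the element-wise variant just mentioned:

import numpy as np

labels = ["Dog", "Bone", "Collar", "LHC", "Physicist"]
y_hat = np.array([[0.3, 0.2, 0.2, 0.3, 0.0]])  # M x N confidence scores (M = 1 here)
Q = np.array([
    [1.00, 0.20, 0.30, 0.00, 0.01],
    [0.20, 1.00, 0.10, 0.00, 0.00],
    [0.30, 0.10, 1.00, 0.00, 0.05],
    [0.00, 0.00, 0.00, 1.00, 0.95],
    [0.01, 0.00, 0.05, 0.95, 1.00],
])  # N x N co-occurrence matrix with 1s on the diagonal

y_new = y_hat @ Q               # [[0.4, 0.28, 0.31, 0.3, 0.298]], as in the table above
y_damped = (y_hat @ Q) * y_hat  # element-wise variant that limits "spreading the wealth"
print(dict(zip(labels, y_new[0].round(3))))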
Well, there you go. Quick and painless, and I’ve found it can yield a small improvement, particularly when your training data isn’t large and representative of the natural world.
tags: dice nlp softmax machinelearning
|
The problem is NP-complete. Here is a reduction from 3SAT.
Given an instance of 3SAT with variables $v_1,\ldots,v_n$ and clauses $c_1,\ldots, c_m$, construct numbers as follows. We represent a number in the form $\lambda_1p^{\mu_1}+\lambda_2p^{\mu_2}+\cdots$, where $p$ is a very large number (we will estimate $p$ later), so that we can assume there is no carry when performing addition of numbers.
Construct $2n-1$ numbers $u_1,\ldots,u_{2n-1}$: $p^0, p^1, \ldots, p^{2n-2}$. For each variable $v_i$, if $v_i$ shows up in clauses $c_{j_1},c_{j_2},\ldots$ and $\bar{v}_i$ shows up in $c_{k_1},c_{k_2},\ldots$, construct two numbers $y_i=p^{2n-2+i}+mp^{3n-2+j_1}+mp^{3n-2+j_2}+\cdots$ and $\bar{y}_i=p^{2n-2+i}+mp^{3n-2+k_1}+mp^{3n-2+k_2}+\cdots$. For each clause $c_j$, construct numbers $z_j=p^{3n-2+j}$. Add $0$s so that there are $(13n+1)m$ numbers in total. The target is $$\begin{align}x=\;&(n+1)p^0+(n+2)p^1+\cdots+(3n-1)p^{2n-2}\\&+4np^{2n-1}+4np^{2n}+\cdots+4np^{3n-2}\\&+16nmp^{3n-1}+(16nm+1)p^{3n}+\cdots+(16nm+m-1)p^{3n-2+m}.\end{align}$$
Since there are $(13n+1)m$ numbers and the coefficient of $p^{\mu}$ is at most $m$, we can choose $p=(13n+1)m^2+1$. The construction is in polynomial time.
Now we can see that in a valid permutation, the first $2n-1$ numbers must occupy the positions from $n+1$ to $3n-1$, and the next $2n$ numbers $y_1,\bar{y}_1,\ldots$ must occupy the positions $1,\ldots, n$ (say low positions) and $3n,\ldots, 4n-1$ (say high positions), where for each $i$, one of $y_i,\bar{y}_i$ occupies a low position and the other occupies a high position.
Suppose the instance of 3SAT is satisfiable. If $v_i$ is assigned $1$, we arrange $y_i$ at a high position and $\bar{y}_i$ at a low position; otherwise we arrange $y_i$ at a low position and $\bar{y}_i$ at a high position. Now the numbers constructed in steps 1 and 2 are arranged. Consider $\sum\alpha_i\pi_i=\cdots+\lambda_1p^{3n-2+1}+\cdots+\lambda_m p^{3n-2+m}$ for these numbers. We can arrange $z_j$ at position $16nm+j-\lambda_j$ to satisfy the sum goal. Note at least one literal in a clause must be assigned $1$, i.e. its corresponding number ($y_i$ or $\bar{y}_i$) must be at a high position, hence we have
$$m\cdot 3n < \lambda_j\le m\cdot (4n-1+4n-2+4n-3)<12nm,$$
i.e. $4nm+j<16nm+j-\lambda_j<13nm+j\le (13n+1)m$, thus the positions for $z_j$'s are valid. Also note $\lambda_j$ is a multiple of $m$, then $16nm+j-\lambda_j \equiv j \pmod m$, which means these positions do not conflict with each other. As a result, we have arranged these numbers such that $\sum\alpha_i\pi_i=x$.
On the other hand, if there is a permutation satisfying $\sum\alpha_i\pi_i=x$, then for each $i$, if $y_i$ is arranged at a high position, we assign $v_i=1$; otherwise we assign $v_i=0$. Consider $\sum\alpha_i\pi_i=\cdots+\lambda_1p^{3n-2+1}+\cdots+\lambda_m p^{3n-2+m}$ for numbers constructed in steps 1 and 2, then the position of $z_j$ must be $16nm+j-\lambda_j$, which is a valid position, meaning $16nm+j-\lambda_j\le (13n+1)m$, i.e. $\lambda_j>(3n-1)m$. This means at least one number ($y_i$ or $\bar{y}_i$) related to clause $c_j$ is arranged a high position, which means the clause is satisfied.
For example, consider an instance with two clauses $\bar{v}_1\vee v_2\vee v_3$ and $v_1\vee \bar{v}_2\vee v_3$, the numbers are:
p^0 p^1 p^2 p^3 p^4 p^5 p^6 p^7 p^8 p^9
v_1 v_2 v_3 c_1 c_2
u_1 1
u_2 1
u_3 1
u_4 1
u_5 1
y_1 1 2
\bar{y}_1 1 2
y_2 1 2
\bar{y}_2 1 2
y_3 1 2 2
\bar{y}_3 1
z_1 1
z_2 1
0
0
...
-----------------------------------------------------------
x 4 5 6 7 8 12 12 12 96 97
There are $80$ numbers in total. (One of) the permutation(s) corresponding to the solution $v_1=v_2=v_3=1$ is $\bar{y}_1\bar{y}_2\bar{y}_3u_1u_2u_3u_4u_5y_1y_2y_30\cdots 0z_1z_20\cdots$ where $z_1$ is at position $73$ and $z_2$ is at position $74$.
If the numbers are upper-bounded by a constant, say $100$, then the problem can be solved in polynomial time.
Let $f(\alpha, x)$ denote whether such a permutation exists. Note that for any permutation $\pi$, if $\alpha_k$ is placed at the first position (i.e. $\pi_k=1$), we have$$\sum \alpha_i\pi_i=\alpha_k+\sum_{i\neq k}\alpha_i\pi_i=\sum \alpha_i+\sum_{i\neq k}\alpha_i(\pi_i-1),$$which means the primary problem has a valid solution iff there exists $k$ such that the subproblem for the sequence $\alpha-\alpha_k$ (meaning the sequence $\alpha_1,\ldots,\alpha_{k-1},\alpha_{k+1},\ldots$) and target $x-\sum \alpha_i$ has a valid solution, so we have
$$f(\alpha,x)=\bigvee_{i=1}^n f\left(\alpha-\alpha_i,x-\sum_{j=1}^n \alpha_j\right).$$
Note we do not care about the order of the sequence, so every subsequence of $\alpha$ can be expressed as "$n_1$ $1$s, ..., $n_{100}$ $100$s", which means there are up to $n^{100}$ different subsequences of $\alpha$. Therefore, we can compute $f(\alpha,x)$ using the recursion formula above in $O(n\cdot xn^{100})=O(n^{103})$ time.
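A direct implementation of this recursion (a sketch of my own; practical only for small inputs, but it makes the multiset state explicit) is:

import functools

def exists_permutation(alpha, x):
    # f(values, target): does some permutation pi of 1..len(values)
    # satisfy sum(values[i] * pi[i]) == target?
    @functools.lru_cache(maxsize=None)
    def f(values, target):
        if not values:
            return target == 0
        s = sum(values)
        if target < s:  # every position index is >= 1, so the sum is at least s
            return False
        # Remove one copy of each *distinct* value as the element placed first.
        return any(
            f(values[:i] + values[i + 1:], target - s)
            for i, v in enumerate(values)
            if i == 0 or v != values[i - 1]
        )
    return f(tuple(sorted(alpha)), x)

# alpha = (1, 2): the two permutations give 1*1 + 2*2 = 5 and 1*2 + 2*1 = 4.
assert exists_permutation([1, 2], 5) and exists_permutation([1, 2], 4)
assert not exists_permutation([1, 2], 6)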
|
In this vignette, an application of locally and globally efficient adaptive sample size determination to a confirmatory randomized clinical trial is illustrated.
This trial evaluated whether oral adjuvant chemotherapy with tegafur and uracil (UFT) and leucovorin (LV) reduces recurrence after resection of liver metastases from colorectal carcinoma, as compared with no adjuvant therapy, in Japan (the UFT/LV trial) (Hasegawa et al. PLoS One 2016;11:e0162400). The null hypothesis \(\log(HR) = 0\) was tested at the one-sided significance level of 0.025. The minimum clinically important effect size was hypothesized to be HR = 0.65. The test statistic was a stratified log-rank score. Suppose that four interim analyses and one final analysis were planned, but their timing was not fixed in advance.
The results of the interim analyses were as follows:
* Fisher information at analyses: (5.67, 9.18, 14.71, 20.02)
* Score statistics: (3.40, 4.35, 7.75, 11.11)
The initial working test (SPRT) is prepared as the basis of the conditional error function. Its stopping boundary is \(-\log(\alpha)/\rho + \rho t/2\), where the significance level \(\alpha = 0.025\) and the minimum clinically important effect size \(\rho = -\log(0.65)\) are substituted, and \(t\) is the Fisher information. This stopping boundary is depicted below.
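For instance (checking the arithmetic against the output below), the intercept is \(-\log(0.025)/(-\log(0.65)) \approx 3.689/0.431 \approx 8.563\), and at the first interim analysis the boundary is approximately \(8.563 + \tfrac{1}{2}(0.431)(5.67) \approx 9.78\).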
The four interim analyses can be performed by the function adaptive_analysis_norm_local. Setting the argument final_analysis to FALSE indicates that the latest analysis is not the final one, i.e., the overall significance level must not be exhausted at this time.
The result is summarized as follows:
# Summary
print( with(interim_analysis_4, data.frame(analysis=0:par$analyses, time=par$times,
intercept=char$intercept, stat=par$stats, boundary=char$boundary,
pr_cond_err=char$cond_type_I_err, reject_H0=char$rej_H0)) )
#> analysis time intercept stat boundary pr_cond_err reject_H0
#> 1 0 0.00 8.563198 0.00 8.563198 0.02500000 FALSE
#> 2 1 5.67 8.562666 3.40 9.783935 0.06392209 FALSE
#> 3 2 9.18 8.562085 4.35 10.539378 0.06951043 FALSE
#> 4 3 14.71 8.551346 7.75 11.719755 0.18084726 FALSE
#> 5 4 20.02 8.456860 11.11 12.768997 0.48935479 FALSE
At the fourth (final) interim analysis, the null hypothesis is not rejected. Then, the maximum sample size (here, the maximum Fisher information level) is calculated. The alternative hypothesis for which an adequate level of power will be ensured can be determined arbitrarily, referring to all available data including the interim results, but not to future data. Here, the maximum likelihood estimate \(11.11 / 20.02\) at the fourth interim analysis is chosen as the alternative hypothesis. The maximum information level to obtain the marginal power of 0.75 can be calculated by the function sample_size_norm_local.
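For orientation, a fixed-sample normal approximation gives a benchmark for the required information level (a back-of-the-envelope Python sketch, not the computation done by sample_size_norm_local):

from scipy.stats import norm

theta = 11.11 / 20.02                # MLE used as the alternative hypothesis
alpha, power = 0.025, 0.75
info = ((norm.ppf(1 - alpha) + norm.ppf(power)) / theta) ** 2
print(info)                          # ~22.5

The adaptive design's final analysis at \(t = 24.44\) is, as expected, somewhat larger than this fixed-sample benchmark.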
Finally, suppose that the final analysis is performed at \(t = 24.44\). The same function used at the interim analyses, adaptive_analysis_norm_local, can be used with final_analysis = TRUE.
Again, the result is summarized as:
# Summary
print( with(final_analysis, data.frame(analysis=0:par$analyses, time=par$times,
intercept=char$intercept, stat=par$stats, boundary=char$boundary,
pr_cond_err=char$cond_type_I_err, reject_H0=char$rej_H0)) )
#> analysis time intercept stat boundary pr_cond_err reject_H0
#> 1 0 0.00 8.563198 0.00 8.563198 0.02500000 FALSE
#> 2 1 5.67 8.562666 3.40 9.783935 0.06392209 FALSE
#> 3 2 9.18 8.562085 4.35 10.539378 0.06951043 FALSE
#> 4 3 14.71 8.551346 7.75 11.719755 0.18084726 FALSE
#> 5 4 20.02 8.456860 11.11 12.768997 0.48935479 FALSE
#> 6 5 24.44 NA 14.84 11.166106 1.00000000 TRUE
As indicated by the final row, the null hypothesis is rejected.
A globally efficient adaptive design can be performed in a similar way by using the corresponding globally efficient functions.
The initial working test, a group sequential design with 50 analyses, is prepared as the basis of the conditional error function. Its stopping boundary can be constructed by the function work_test_norm_global. Here, cost_type_1_err = 0 means that the value of the loss caused by erroneous rejection of the null hypothesis is calculated so as to make the constructed working test have exactly the type I error probability \(\alpha\). The default value of cost_type_1_err is 0 and thus can be omitted. The boundary of the working test just constructed is displayed by the next code.
The four interim analyses can be performed by the function adaptive_analysis_norm_global. Passing FALSE to the argument final_analysis indicates that the latest analysis is not the final one, i.e., the overall significance level must not be exhausted at this time.
The result is:
# Summary
print( with(interim_analysis_4, data.frame(analysis=0:par$analyses, time=par$times,
cost=char$cost0, stat=par$stats, boundary=char$boundary, pr_cond_err=char$cond_type_I_err,
reject_H0=char$rej_H0)) )
#> analysis time cost stat boundary pr_cond_err reject_H0
#> 1 0 0.00 1683.458 0.00 Inf 0.02500000 FALSE
#> 2 1 5.67 1555.020 3.40 7.004168 0.06006569 FALSE
#> 3 2 9.18 1545.278 4.35 8.690863 0.06007655 FALSE
#> 4 3 14.71 1528.397 7.75 10.724362 0.15229716 FALSE
#> 5 4 20.02 1471.727 11.11 12.239176 0.39095697 FALSE
At the fourth (final) interim analysis, the null hypothesis is not rejected. Then, the maximum Fisher information level is calculated. The maximum likelihood estimate \(11.11 / 20.02\) at the fourth interim analysis is chosen as the alternative hypothesis, though this is not compelling. The maximum information level to obtain the marginal power of \(0.75\) can be calculated by the function sample_size_norm_global.
Finally, suppose that the final analysis is performed at \(t = 25.88\). The same function used at the interim analyses can be used, with final_analysis = TRUE.
# Final analysis
final_analysis <- adaptive_analysis_norm_global(
initial_test = init_work_test,
times = c(5.67, 9.18, 14.71, 20.02, 25.88),
stats = c(3.40, 4.35, 7.75, 11.11, 14.84),
costs = interim_analysis_4$char$cost0[-1], # Omitted element is for time = 0
final_analysis = TRUE,
estimate = FALSE
)
# Summary
print( with(final_analysis, data.frame(analysis=0:par$analyses, time=par$times,
cost=char$cost0, stat=par$stats, boundary=char$boundary, pr_cond_err=char$cond_type_I_err,
reject_H0=char$rej_H0)) )
#> analysis time cost stat boundary pr_cond_err reject_H0
#> 1 0 0.00 1683.458 0.00 Inf 0.02500000 FALSE
#> 2 1 5.67 1555.020 3.40 7.004168 0.06006569 FALSE
#> 3 2 9.18 1545.278 4.35 8.690863 0.06007655 FALSE
#> 4 3 14.71 1528.397 7.75 10.724362 0.15229716 FALSE
#> 5 4 20.02 1471.727 11.11 12.239176 0.39095697 FALSE
#> 6 5 25.88 NA 14.84 11.780124 1.00000000 TRUE
As indicated by the final row, the null hypothesis is rejected.
Note that, if estimate = TRUE, the exact P-value, the median unbiased estimate, and confidence limits can additionally be calculated. These results can be extracted by:
|
We know that for a point particle, the action is
$$ S[x,e] ~=~ \frac{1}{2}\int_{\lambda_A}^{\lambda_B} d\lambda\left[e^{-1}(\lambda)~g_{\mu\nu}(x(\lambda))~\dot{x}^\mu(\lambda)~\dot{x}^\nu(\lambda) -m^2e(\lambda)\right] , $$
with signature convention $(-,+,+,+)$. It was mentioned on some website I found while googling that $e$ and $x$ are the dynamical variables, and from them we should get the Euler-Lagrange equations.
I was wondering how to start since just a few minutes ago I first encountered this einbein variable (which I didn't know was a variable in the first place)!
|
Legendre recurrence relation
I am having a slight issue with the generating function of Legendre polynomials and shifting the sums derived from it.
So here is an example:
I need to derive the recurrence relation $lP_l(x)=(2l-1)xP_{l-1}(x)-(l-1)P_{l-2}(x)$
so I start with the following equation:
$$(1-2xh+h^2)\frac{\partial\phi}{\partial h}=(x-h)\phi $$
Now, taking the derivative of the series for the Legendre generating function,
$$\frac{\partial}{\partial h}(\sum_{l=0}^{\infty} h^l P_l(x))$$
I make it equal to $$\sum_{l=1}^{\infty} lh^{l-1} P_l(x)$$
Some books I have read ignore the shift and keep the sum starting at $l=0$, which I can accept because the $l=0$ term vanishes, so the sums are the same. But when I expand is where I get in a bit of a mess.
So expanding both sides I get the following:
$$\sum_{l=1}^{\infty} lh^{l-1} P_l(x)-2x\sum_{l=1}^{\infty} lh^{l} P_l(x)+\sum_{l=1}^{\infty} lh^{l+1} P_l(x)=x\sum_{l=0}^{\infty} h^l P_l(x)-\sum_{l=0}^{\infty} h^{l+1} P_l(x)\tag{1}$$
So now I move everything in eq. (1) to one side, like so:
$$\sum_{l=1}^{\infty} lh^{l-1} P_l(x)-2x\sum_{l=1}^{\infty} lh^{l} P_l(x)+\sum_{l=1}^{\infty} lh^{l+1} P_l(x)-x\sum_{l=0}^{\infty} h^l P_l(x)+\sum_{l=0}^{\infty} h^{l+1} P_l(x)=0$$
So from here I know that I need to get all the powers of h to $h^{l-1}$
So here is what I do, which I believe is incorrect; I just can't see why or, to be honest, understand why it is wrong. My belief is that it has something to do with how the generating function works, but every book I have read and every YouTube video I have watched completely ignores the step that I get confused with and, without any explanation, just gives the recurrence relation. Anyway, here's what I do.
Looking at the series, I make all the powers of h equal to $l-1$
$$\sum_{l=1}^{\infty} lh^{l-1} P_l(x)-2x\sum_{l=2}^{\infty} (l-1)h^{l-1} P_{l-1}(x)+\sum_{l=3}^{\infty} (l-2)h^{l-1} P_{l-2}(x)-x\sum_{l=1}^{\infty} h^{l-1} P_{l-1}(x)+\sum_{l=2}^{\infty} h^{l-1} P_{l-2}(x)=0$$
Now, the YouTube videos I have watched and the books I have read just change the powers of $h$ without making any shift to the series, like so:
$$\sum_{l=0}^{\infty} lh^{l-1} P_l(x)-2x\sum_{l=0}^{\infty} (l-1)h^{l-1} P_{l-1}(x)+\sum_{l=0}^{\infty} (l-2)h^{l-1} P_{l-2}(x)-x\sum_{l=0}^{\infty} h^{l-1} P_{l-1}(x)+\sum_{l=0}^{\infty} h^{l-1} P_{l-2}(x)=0$$
Then just factor out the sum and the h and you're left with the desired recurrence relation, albeit with a little simplifying.
Whereas when I do it my way, I expand all the series until they all start at $l=3$ and then do the same thing. As I said, I believe this to be wrong, but I just can't see how, in the books I have read and the videos I have watched, they can just change the powers of $h$ without affecting the series itself.
I would much appreciate it if someone could help me on understanding this.
|
Answer
One nautical mile is 1.1 statute miles.
Work Step by Step
We can convert the angle to radians: $\theta = 1' = (\frac{1}{60})^{\circ}(\frac{\pi~rad}{180^{\circ}}) = 0.00029~rad$. Then we can find the arc length $S$: $S = \theta ~r$, so $S = (0.00029~rad)(3963~mi) = 1.1~mi$. One nautical mile is 1.1 statute miles.
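As a quick sanity check of this arithmetic (a Python sketch; the 3963 mi Earth radius is taken from the problem):

import math

theta = (1 / 60) * math.pi / 180   # one minute of arc in radians
r = 3963                           # Earth's radius in statute miles
print(theta)                       # ~0.000290888 rad
print(theta * r)                   # ~1.1528 mi; rounding theta to 0.00029 first gives 1.1

The small difference from 1.1 comes from rounding $\theta$ to two significant figures before multiplying.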
|
Let $F$ be a field and $R$ a commutative ring where $R$ is not the zero ring. Suppose $\varphi : F \rightarrow R$ is a ring homomorphism. Show that $\varphi$ is injective.
Any help would be appreciated. I feel pretty lost on this.
Either $\varphi$ is the zero homomorphism, or it is injective:
Assume that $\varphi$ is not injective. Let $0\neq x\in F$ be such that $\varphi (x)=0$. Since $F$ is a field, $x$ has an inverse. So $$1=\varphi (1)=\varphi (xx^{-1}) =\varphi (x)\varphi (x^{-1})=0.$$
Hence $\varphi$ is the zero homomorphism.
The kernel of a ring homomorphism is an ideal. A field has exactly two ideals, the unit ideal and the zero ideal. So either $\ker{\varphi}$ is the zero ideal or $\ker{\varphi}$ is the unit ideal, $F$ itself. If $\ker{\varphi}$ is the zero ideal, then $\varphi$ is injective (a general result you should know). If $\ker{\varphi}$ is the unit ideal $F$, then every element of $F$ is sent to $0_R$. However, since $R$ is not the zero ring, $0_R\neq 1_R$. But by the definition of a ring homomorphism, $\varphi(1_F)=1_R$. Therefore $\ker{\varphi}$ cannot be the unit ideal, so $\varphi$ is injective.
Do you know or can you show that $\phi(0_F) = 0_R$? (EDIT: see the comment below, this isn't actually sufficient. You have to show $\phi(a) = 0_R \implies a = 0_F.$)
Once you have that, say $a,b \in F$. To show that $\phi$ is an injection, go to the definition. You just need to show that $\phi(a) = \phi(b) \implies a=b$. So let's suppose $\phi(a) = \phi(b)$. Now use the definition of a ring homomorphism to show $\phi(a-b) = 0$. Now if you can logically piece this together with the first statement, you've got it.
|
The main reason to prefer the colon notation $t : T$ to the membership relation $t \in T$ is that the membership relation can be misleading because types are not (just) collections.
[Supplemental: I should note that historically type theory was written using $\in$. Martin-Löf's conception of type was meant to capture sets constructively, and already Russell and Whitehead used $\epsilon$ for class membership. It would be interesting to track down the moment when $:$ became more prevalent than $\in$.]
A type describes a certain kind of construction, i.e., how to make objects with a certain structure, how to use them, and what equations hold about them.
For instance a product type $A \times B$ has introduction rules that explain how to make ordered pairs, and elimination rules explaining that we can project the first and the second components from any element of $A \times B$. The definition of $A \times B$ does not start with the words "the collection of all ..." and neither does it say anywhere anything like "all elements of $A \times B$ are pairs" (but it follows from the definition that every element of $A \times B$ is propositionally equal to a pair). In contrast, the set-theoretic definition of $X \times Y$ is stated as "the set of all ordered pairs ...".
The notation $t : T$ signifies the fact that $t$ has the structure described by $T$.
A type $T$ is not to be confused with its extension, which is the collection of all objects of type $T$. A type is not determined by its extension, just like a group is not determined by its carrier set. Furthermore, it may happen that two types have the same extension, but are different, for instance:
- The type of all even primes larger than two: $\Sigma (n : \mathbb{N}) . \mathtt{isprime}(n) \times \mathtt{iseven}(n) \times (n > 2)$.
- The type of all odd primes smaller than two: $\Sigma (n : \mathbb{N}) . \mathtt{isprime}(n) \times \mathtt{isodd}(n) \times (n < 2)$.
The extension of both is empty, but they are not the same type.
There are further differences between the type-theoretic $:$ and the set-theoretic $\in$. An object $a$ in set theory exists independently of what sets it belongs to, and it may belong to several sets. In contrast, most type theories satisfy uniqueness of typing: if $t : T$ and $t : U$ then $T \equiv U$. Or to put it differently, a type-theoretic construction $t$ has precisely one type $T$, and in fact there is no way to have just an object $t$ without its (uniquely determined) type.
Another difference is that in set theory we can deny the fact that $a \in A$ by writing $\lnot (a \in A)$ or $a \not\in A$. This is not possible in type theory, because $t : T$ is a judgement which can be derived using the rules of type theory, but there is nothing in type theory that would allow us to state that something has not been derived. When a child makes something from LEGO blocks they proudly run to their parents to show them the construction, but they never run to their parents to show them what they didn't make.
|
Exercise :
Calculate a Maximum Likelihood Estimator for the model $X_1,\dots, X_n \; \sim U(-\theta,\theta)$.
Solution :
The density function $f(x)$ for the given uniform model is:
$$f(x) = \begin{cases} 1/2\theta, \; \; -\theta \leq x \leq \theta \\ 0 \quad \; \; , \quad\text{elsewhere} \end{cases}$$
Thus, we can calculate the likelihood function as :
$$L(\theta)=\bigg(\frac{1}{2\theta}\bigg)^n\prod_{i=1}^n\mathbb I_{[-\theta,\theta]}(x_i)= \bigg(\frac{1}{2\theta}\bigg)^n\prod_{i=1}^n \mathbb I_{[0,\theta]}(|x_i|) $$
$$=$$
$$\bigg(\frac{1}{2\theta}\bigg)^n\prod_{i=1}^n \mathbb I_{[-\infty,\theta]}(|x_i|)\prod_{i=1}^n \mathbb I_{[0, +\infty]}(|x_i|)$$
$$=$$
$$\boxed{\bigg(\frac{1}{2\theta}\bigg)^n\prod_{i=1}^n \mathbb I_{[-\infty,\theta]}(\max|x_i|)}$$
Question: How does one derive the final expression in the box from the previous one? I can't seem to comprehend how this is equal to the step before.
Other than that, to find the maximum likelihood estimator you need $\theta$ as small as possible while still satisfying $\max |x_i| \leq \theta$, which means that the MLE is $\hat{\theta} = \max |x_i|$.
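A quick numerical illustration of this estimator (a Python sketch with simulated data):

import numpy as np

rng = np.random.default_rng(0)
theta = 2.5
x = rng.uniform(-theta, theta, size=1000)
theta_hat = np.max(np.abs(x))   # smallest theta for which all indicators equal 1
print(theta_hat)                # slightly below the true value 2.5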
|
$\displaystyle\sum_{k=1}^\infty (2k)!/k!(k+1)!$
Let $a_k = (2k)!/k!(k+1)!$
$\lvert a_{k+1}/a_k\rvert \to 4$ as $k \to \infty$
Thus the series is divergent. Can someone double check ... my gut says it is convergent.
Alternate.
Binomial coefficient $$ \frac{(2k)!}{(k!)^2} $$ is a positive integer, thus $\ge 1$, so your series is $\ge$ $$ \sum_{k=1}^\infty\frac{1}{k+1} $$ which diverges.
Of course, the middle binomial coefficient is much bigger than $1$, so divergence is much worse than the harmonic series.
Conducting the ratio test:
Your series has general term $$\frac{(2k)!}{k!(k+1)!}$$
Now, $$\lim_{k\rightarrow \infty} \frac{\frac{(2k+2)!}{(k+1)!(k+2)!}}{\frac{(2k)!}{k!(k+1)!}}$$
$$=\lim_{k\rightarrow \infty} \frac{(2k+2)!}{(k+1)!(k+2)!}\cdot\frac{k!(k+1)!}{(2k)!} =\lim_{k\rightarrow \infty} \frac{k!(2k+2)!}{(2k)!(k+2)!} =\lim_{k\rightarrow \infty} \frac{(2k+1)(2k+2)}{(k+1)(k+2)} = 4$$
You are correct.
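A quick numerical check of the ratio (a Python sketch):

from math import factorial

def a(k):
    return factorial(2 * k) // (factorial(k) * factorial(k + 1))

for k in (10, 20, 40, 80):
    print(k, a(k + 1) / a(k))   # approaches 4, so the terms grow without bound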
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
Given a set $Q$ of $n$ points, we want to find the subset $S_\max \subset Q$ of $k$ elements that maximize the total distance between them.
$$S_\max = \arg\max_{\substack{S \subset Q \\ |S| = k}} \sum_{\substack{ i,j\in S\\ i \neq j}} d(x_i,x_j)$$
where in my case $x_i$ is a boolean vector and the distance considered is the Manhattan/Hamming distance.
Is there any efficient way to solve this problem? Is it possible to rewrite it in another simpler way?
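For small instances, an exhaustive baseline makes the problem concrete (a Python sketch; nothing efficient is claimed here, since the search ranges over all $\binom{n}{k}$ subsets):

from itertools import combinations

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def best_subset(Q, k):
    # brute force: maximize the total pairwise distance over all k-subsets
    return max(combinations(Q, k),
               key=lambda S: sum(hamming(u, v) for u, v in combinations(S, 2)))

Q = [(0, 0, 0, 1), (1, 1, 0, 0), (0, 1, 1, 1), (1, 0, 1, 0)]
print(best_subset(Q, 2))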
|
I know $ \omega ^ 2 $ is countable, but I'm unable to find a bijection from $ \omega * \omega \rightarrow \omega $
This should be simple, but I'm very stuck.
This picture shows one way:
If you have to describe this bijection $\varphi:\omega\times\omega\to\omega$ more formally, it’s worth spending some time trying to work out a formula for $\varphi(a,b)$ in terms of $a$ and $b$; all you need is a little ingenuity and the familiar formula for the sum of the first $n$ positive integers, $\sum_{k=1}^nk=\frac12n(n+1)$. If you get stuck, you’ll find much help in this Wikipedia article.
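For reference, one standard pairing along these lines is the Cantor pairing function $\varphi(a,b) = \frac{(a+b)(a+b+1)}{2} + b$ (a Python sketch of the diagonal enumeration):

def cantor(a, b):
    # pairs with a + b = n occupy positions n(n+1)/2 through n(n+1)/2 + n
    return (a + b) * (a + b + 1) // 2 + b

print([cantor(a, b) for a in range(3) for b in range(3)])
# [0, 2, 5, 1, 4, 8, 3, 7, 12] -- each natural number appears exactly once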
Think about what it actually is on the inside:
$$\begin{array}{lll} \omega^2 = &\{& 0,1,2,3,\ldots, \\ && \omega,\omega+1,\omega+2,\omega+3,\ldots, \\ && 2\omega,2\omega+1,2\omega+2,2\omega+3,\ldots,\\ && 3\omega,3\omega+1,3\omega+2,3\omega+3,\ldots, \\ && \vdots \\ &\}& \end{array}$$
Now one can easily see why it is in bijection with $\mathbb N^2$, and hence with $\mathbb N$, hence countable.
|
N Saradha
Articles written in Proceedings – Mathematical Sciences
Volume 100 Issue 2 August 1990 pp 107-132
Under certain assumptions, it is shown that eq. (2) has only finitely many solutions in integers
Volume 127 Issue 4 September 2017 pp 565-584 Research Article
Let $F(X, Y) = \sum^{s}_{i=0}a_{i}X^{r_i}Y^{r-r_i} \in \mathbb{Z}[X, Y]$ be a form of degree $r = r_{s} \geq 3$, irreducible over $\mathbb{Q}$ and having at most $s + 1$ non-zero coefficients. Mueller and Schmidt showed that the number of solutions of the Thue inequality $$\mid F(X, Y) \mid \leq h$$ is $\ll s^{2}h^{2/r}(1+\log h^{1/r})$. They conjectured that $s^{2}$ may be replaced by $s$. Let $$\Psi = \mathop{\max}\limits_{0\leq i\leq s} \max \left(\sum^{i-1}_{w=0} \frac{1}{r_{i}-r_{w}}, \sum^{s}_{w=i+1}\frac{1}{r_{w}-r_{i}}\right).$$ Then we show that $s^2$ may be replaced by $\max(s\log^{3} s, se^{\Psi})$. We also show that if $\mid a_0 \mid = \mid a_s \mid$ and $\mid a_i \mid \leq \mid a_0 \mid$ for $1 \leq i \leq s - 1$, then $s^2$ may be replaced by $s\log^{3/2} s$. In particular, this is true if $a_{i}\in \{-1, 1\}$.
|
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Measurement of electrons from beauty hadron decays in pp collisions at √s = 7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
|
I am having issues integrating the numerical solution obtained with the FEM method and NDSolve over a boundary. Specifically, I am integrating a specific combination of the gradient over a boundary and obtaining errors which are much larger than the specified tolerances. I would like a way to avoid the errors and to have a faster integration.
Let me give you the full story:
I am solving the equation $\boldsymbol{\nabla}^2 f(x,y) -(0.001)^2 f(x,y)=0$ in two dimensions and in Cartesian coordinates.
My domain consists of a very big outer circle of radius $L=50000$ and two small circles $C_1$ and $C_2$ of radius $R=1$ whose centers have $x$ and $y$ coordinates $\left(- \frac{dist}{2},0 \right)$ and $\left(\frac{dist}{2},0 \right)$, respectively. $dist$ is a parameter that I vary to set the circles' center-to-center distance.
On the external circle I have a no-flux boundary condition (Neumann boundary condition), whereas on the two circles I have Dirichlet boundary conditions:
$f(x,y)=1 +(x-\frac{dist}{2})^2-y^2$ on $C_2$
$f(x,y)=1 +(x+\frac{dist}{2})^2-y^2$ on $C_1$
This is the code that I use to generate the mesh and solve the problem (I specify the mesh elements on the boundaries):
L = 50000;
dist = 5.0;
nptspart = 2000;
nptsout = 60;
ptspart = Table[{Cos[2.0*Pi*i/nptspart] - dist/2, Sin[2.0 Pi*i/nptspart]}, {i, 0, nptspart - 1}];
elempart = Table[If[i < Length[ptspart], {i, i + 1}, {Length[ptspart], 1}], {i, 1, Length[ptspart]}];
ptspart1 = Table[{Cos[2.0*Pi*i/nptspart] + dist/2, Sin[2.0 Pi*i/nptspart]}, {i, 0, nptspart - 1}];
elempart1 = Table[If[i < Length[ptspart1], {nptspart + i, nptspart + i + 1}, {nptspart + Length[ptspart1], nptspart + 1}], {i, 1, Length[ptspart1]}];
ptsout = Table[{L Cos[2.0*Pi*i/nptsout], L Sin[2.0 Pi*i/nptsout]}, {i, 0, nptsout - 1}];
elemout = Table[If[i < Length[ptsout], {2 nptspart + i, 2 nptspart + i + 1}, {2 nptspart + Length[ptsout], 2 nptspart + 1}], {i, 1, Length[ptsout]}];
bmesh = ToBoundaryMesh["Coordinates" -> Join[ptspart, ptspart1, ptsout], "BoundaryElements" -> {LineElement[Join[elempart, elempart1, elemout]]}, "MeshOrder" -> 2];
mesh1 = ToElementMesh[bmesh, "MeshOrder" -> 2, "RegionHoles" -> {{-dist/2, 0}, {dist/2, 0}}, MeshQualityGoal -> "Maximal"];
ufun1 = NDSolveValue[{Laplacian[f[x, y], {x, y}] - (0.001)^2 f[x, y] == NeumannValue[0.0, x^2 + y^2 == L^2], DirichletCondition[f[x, y] == 1 + ((x - dist/2)^2 - y^2), (x - dist/2)^2 + y^2 - 1^2 == 0], DirichletCondition[f[x, y] == 1 + ((x + dist/2)^2 - y^2), (x + dist/2)^2 + y^2 - 1^2 == 0]}, f, {x, y} ∈ mesh1, Method -> {"PDEDiscretization" -> "FiniteElement"}];
The parameter nptspart defines how many points I have on the circle boundaries $C_1$ and $C_2$, whereas nptsout defines the number of points on the bigger circle of radius L.
I now have to perform an integral on the boundary of each circle:
$int_1=\int_{C_1} \left( \boldsymbol{\nabla} f \boldsymbol{\nabla} f -\frac{1}{2} \boldsymbol{\nabla} f \cdot \boldsymbol{\nabla} f \boldsymbol{I} \right) \cdot \boldsymbol{n} \; dl$
$int_2=\int_{C_2} \left( \boldsymbol{\nabla} f \boldsymbol{\nabla} f -\frac{1}{2} \boldsymbol{\nabla} f \cdot \boldsymbol{\nabla} f \boldsymbol{I} \right) \cdot \boldsymbol{n} \; dl$
Where $\boldsymbol{I}$ is the identity matrix in two dimensions, $\boldsymbol{n}$ the outwardly directed normal to the circles boundary, $\nabla f \nabla f$ is the dyadic product (also called Kronecker product), and $dl$ is the differential element on the circle line.
In order to perform the integration on the circle $C_2$, for example, I use the following code:
NIntegrate[
 (KroneckerProduct[Grad[ufun1[x, y], {x, y}], Grad[ufun1[x, y], {x, y}]] -
    (1/2) (Grad[ufun1[x, y], {x, y}].Grad[ufun1[x, y], {x, y}])*IdentityMatrix[2]).{x - dist/2, y},
 {x, y} ∈ Circle[{dist/2, 0}, 1],
 AccuracyGoal -> 6, PrecisionGoal -> 6, MaxRecursion -> 100,
 Method -> "InterpolationPointsSubdivision"]
However, not only do I get a lot of warning messages like:
NIntegrate::slwcon: Numerical integration converging too slowly; suspect one of the following: singularity, value of the integration is 0, highly oscillatory integrand, or WorkingPrecision too small.
NIntegrate::eincr: The global error of the strategy GlobalAdaptive has increased more than 400 times. The global error is expected to decrease monotonically after a number of integrand evaluations. Suspect one of the following: the working precision is insufficient for the specified precision goal; the integrand is highly oscillatory or it is not a (piecewise) smooth function; or the true value of the integral is 0. Increasing the value of the GlobalAdaptive option MaxErrorIncreases might lead to a convergent numerical integration. NIntegrate obtained 0.06632359368254395 and 0.0005234490597411253 for the integral and error estimates.
but also the integration is very slow, which is something I don't want, as in my future research I will have to perform the integration for, say, 200 circles. Therefore I need the integration to be fast on each boundary.
If I plot the results of the two integrals, for $dist=5$, as a function of the number of points on the circle boundaries (nptspart), I get something like this.
There you can clearly see an error of the order of $\pm 0.001$, which is way bigger than the precision (6 digits) I specified in the integration.
Furthermore, the error doesn't seem to reduce appreciably as the mesh on the circles gets refined.
POSSIBLE REASON: I think this has something to do with the interpolation function being evaluated at points which are not mesh points, which are specified in the integration as:
{x, y} ∈ Circle[{dist/2, 0}, 1]
Indeed, if I let Mathematica do a plot over the boundary $C_1$ (where the value of the function is specified by the Dirichlet boundary condition), I get weird results when I reduce the number of points on the boundary.
If I perform a parametric plot of the function $f(x(\theta),y(\theta))$ on $C_1$ (parametrised with the angle $\theta$ as shown in the figure) for two different meshes, I get the following weird result:
In the above example I used the interpolation function obtained directly as the result of NDSolve. And it is clear that it is doing weird things on the boundary, so I guess that the gradient would be even worse.
So the question is: is there a way to obtain the values of the function, and of its gradient, at the mesh nodes? Or, if not, is it possible to perform an integration with the interpolation function that gives accurate results and is faster than it is at the moment? Do you have any tips?
|
Let $a, b, c \in \mathbb{R}^n$ , $p \in [1, +\infty)$, prove that
$$\left( \sum_{1\leq i < j <k \leq n} \left| \det\left(\begin{matrix} a_i & b_i & c_i \\ a_j & b_j & c_j \\ a_k & b_k & c_k \end{matrix}\right)\right|^p \right)^{\frac{1}{p}} \leq c_p \left( \sum_{i=1}^n |a_i|^p \right)^{\frac{1}{p}} \left( \sum_{1\leq j <k \leq n} \left| \det\left(\begin{matrix} b_j & c_j \\ b_k & c_k \end{matrix}\right)\right|^p \right)^{\frac{1}{p}} $$
where $c_p = \max(1, 3^{1-\frac{2}{p}})$.
A 2-dimensional analogue of this problem was discussed here: An inequality related to Lagrange's identity and $L_p$ norm
Remark:
When $p=1$, the proof is straightforward: by Laplace expansion and the triangle inequality, $$\left| \det\left(\begin{matrix} a_i & b_i & c_i \\ a_j & b_j & c_j \\ a_k & b_k & c_k \end{matrix}\right)\right| \leq |a_i| \left| \det\left(\begin{matrix} b_j & c_j \\ b_k & c_k \end{matrix}\right)\right| + |a_j| \left| \det\left(\begin{matrix} b_i & c_i \\ b_k & c_k \end{matrix}\right)\right| +|a_k| \left| \det\left(\begin{matrix} b_i & c_i\\ b_j & c_j \end{matrix}\right)\right|, $$ and summing up all these inequalities is enough. The $p = \infty$ case can be proved in a similar way.
Using Hölder's inequality directly on the Laplace expansion gives a weaker bound: $3^{1 - \frac{1}{p}}$.
When $p=2$, the LHS is the volume of the parallelepiped spanned by the three vectors $a,b,c$, while the RHS is the norm of $a$ times the area of the parallelogram spanned by $b,c$, so the inequality is clearly true. (This fact can be proved by using Cauchy-Binet.)
As users @fedja and @mahdi suggested in An inequality related to Lagrange's identity and $L_p$ norm, this problem is closely related to the Riesz-Thorin interpolation theorem. However, I find it difficult to apply the theorem directly to my problem.
Thanks!
|
If you want the expected value, one answer is $n E[S_{(m)}]$, where $S_{(m)}$ is the $m$th order statistic of a sample of $n$ gamma$(k,1)$ random variables. While this expression may not have a simple closed form, you may be able to get a decent approximation from the literature on moments of order statistics. (Edit: This appears to be the case, even when comparing with the known asymptotic expression for the case $m=n$. See discussion at end.)
Here's the argument for $n E[S_{(m)}]$: Take a Poisson process $P$ with rate 1 and interarrival times $Z_1, Z_2, \ldots$. Let each event in the process $P$ have probability of $1/n$ of being the first kind of coupon, probability $1/n$ of being the second kind of coupon, and so forth. By the decomposition property of Poisson processes, we can then model the arrival of coupon type $i$ as a Poisson process $P_i$ with rate $1/n$, and the $P_i$'s are independent. Denote the time until process $P_i$ obtains $k$ coupons by $T_i$. Then $T_{i}$ has a gamma$(k,1/n)$ distribution. The waiting time until $m$ processes have obtained $k$ coupons is the $m$th order statistic $T_{(m)}$ of the iid random variables $T_1, T_2, \ldots, T_n$. Let $N_m$ denote the total number of events in the processes at time $T_{(m)}$. Thus $N_m$ is the random variable the OP is interested in. We have
$$T_{(m)} = \sum_{r=1}^{N_m} Z_r.$$
Since $N_m$ and the $Z_r$'s are independent, and the $Z_r$ are iid exponential(1), we have
$$E[T_{(m)}] = E\left[E\left[\sum_{r=1}^{N_m} Z_r \bigg| N_m \right] \right] = E\left[\sum_{r=1}^{N_m} E[Z_r] \right] = E\left[N_m \right].$$
By scaling properties of the gamma distribution, $T_i = n S_i$, where $S_i$ has a gamma$(k,1)$ distribution. Thus $T_{(m)} = n S_{(m)}$, and so $E\left[N_m \right] = n E[S_{(m)}]$.
For more on this idea, see Lars Holst's paper "On the birthday, collectors', occupancy, and other classical urn problems," International Statistical Review 54(1) (1986), 15-27.
(ADDED: Looked up literature on moments of order statistics.)
David and Nagaraja's text Order Statistics (pp. 91-92) implies the bound$$n P^{-1}\left(k,\frac{m-1}{n}\right) \leq n E[S_{(m)}] \leq n P^{-1}\left(k,\frac{m}{n}\right),$$where $P(k,x)$ is the regularized incomplete gamma function.
Some software programs can invert $P$ for you numerically. Trying a few examples, it appears that the bounds given by David and Nagaraja can be quite tight. For example, taking $n$ = 100,000, $m$ = 50,000, and $k$ = 25,000, the two bounds give estimates (via Mathematica) around $2.5 \times 10^9$, and the difference between the two estimates is about 400. More extreme values for $k$ and $m$ give results that are not as good, but even values as extreme as $m$ = 10, $k$ = 4 with $n$ = 100,000 still yield a relative error of less than 3%. Depending on the precision you need, this might be good enough.
Moreover, these bounds seem to give better results for $m \approx n$ than using the asymptotic expression for the case $m = n$ given in Flajolet and Sedgewick's Analytic Combinatorics as an estimate. The latter has error $o(n)$ and appears to be for fixed $k$. If $k$ is small, the asymptotic estimate is within or is quite close to the David and Nagaraja bounds. However, for large enough $k$ (say, on the order of $n$) the error in the asymptotic is on the order of the size of the estimate, and the asymptotic expression can even produce a negative expected value estimate. In contrast, the bounds from the order statistics approach appear to get tighter when $k$ is on the order of $n$.
(Caution: There are two versions of the regularized incomplete gamma function: the lower one $P$, which we want, with bounds from $0$ to $x$, and the upper one $Q$ with bounds from $x$ to $\infty$. Some software programs use the upper one.)
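In Python, for instance, the bounds can be computed with scipy, whose gammaincinv inverts the lower function $P$ (a sketch reproducing the numerical example above):

from scipy.special import gammaincinv   # inverts the *lower* regularized P(k, .)

n, m, k = 100_000, 50_000, 25_000
lo = n * gammaincinv(k, (m - 1) / n)
hi = n * gammaincinv(k, m / n)
print(lo, hi, hi - lo)                  # both ~2.5e9, separated by a few hundred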
|
Let $x$ be a nonnegative real number and denote $[x]$ as the greatest integer less than or equal to $x$. We will attempt to prove that $\big[\sqrt{x}\big] = \big[\sqrt{[x]}\big]$.
First suppose that $x$ is a perfect square. Then the equation trivially holds.
Assuming that $x$ is not a perfect square, we have $\sqrt{x} \not\in \mathbb{Z}$.
Since $[x] \leq x$, it is clear that $\sqrt{[x]} \leq \sqrt{x}$, where $\sqrt{[x]}$ may or may not be an integer.
Further, it is true that there are no integers in the interval $\big(\sqrt{[x]},\sqrt{x}\big)$; because if such an integer $q$ existed, we would deduce that
$$[x] < q^2 < x$$
which is a blatant contradiction, since $q^2 \in \mathbb{Z}$ and $[x]$ is the greatest integer less than or equal to $x$.
Because there are no integers in $\big(\sqrt{[x]},\sqrt{x}\big)$, it follows that
$$\big[\sqrt{x}\big] \leq \sqrt{[x]} \leq \sqrt{x} \tag{1}$$
But since $\big[\sqrt{x}\big]$ is the greatest integer less than or equal to $\sqrt{x}$, and by $(1)$ it does not exceed $\sqrt{[x]}$, it must also be the greatest integer less than or equal to $\sqrt{[x]}$.
Therefore, $\big[\sqrt{x}\big] = \big[\sqrt{[x]}\big]$.
Are there any problems with the logic of the above proof? Thank you.
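Not a substitute for checking the logic, but a quick numerical sanity check finds no counterexample (a Python sketch; the right-hand side uses exact integer square roots, while the left-hand side still relies on floating-point sqrt):

import math, random

random.seed(0)
for _ in range(100_000):
    x = random.uniform(0, 10**6)
    assert math.floor(math.sqrt(x)) == math.isqrt(math.floor(x))
print("no counterexample found")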
|
The most frequently used evaluation metric of survival models is the concordance index (c index, c statistic). It is a measure of rank correlation between predicted risk scores $\hat{f}$ and observed time points $y$ that is closely related to Kendall’s τ. It is defined as the ratio of correctly ordered (concordant) pairs to comparable pairs. Two samples $i$ and $j$ are comparable if the sample with lower observed time $y$ experienced an event, i.e., if $y_j > y_i$ and $\delta_i = 1$, where $\delta_i$ is a binary event indicator. A comparable pair $(i, j)$ is concordant if the estimated risk $\hat{f}$ by a survival model is higher for subjects with lower survival time, i.e., $\hat{f}_i >\hat{f}_j \land y_j > y_i$, otherwise the pair is discordant. Harrell’s estimator of the c index is implemented in concordance_index_censored.
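The definition translates into a naive $O(n^2)$ computation (a minimal Python sketch, with ties in the risk score counted as 1/2 by the usual convention; the library function concordance_index_censored handles ties and edge cases more carefully):

import numpy as np

def harrell_c(event, time, risk):
    num, den = 0.0, 0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue                 # a pair is comparable only if the earlier
        for j in range(n):           # observation experienced an event
            if time[j] > time[i]:
                den += 1
                if risk[i] > risk[j]:
                    num += 1.0
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den

event = np.array([1, 1, 0, 1])
time = np.array([2.0, 4.0, 5.0, 7.0])
risk = np.array([0.9, 0.5, 0.4, 0.1])
print(harrell_c(event, time, risk))  # 1.0: risks perfectly anti-ordered with time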
While Harrell’s concordance index is easy to interpret and compute, it has some shortcomings:
1. it has been shown that it is too optimistic with an increasing amount of censoring [1];
2. it is not a useful measure of performance if a specific time range is of primary interest (e.g. predicting death within 2 years).
Since version 0.8, scikit-survival supports an alternative estimator of the concordance index from right-censored survival data, implemented in concordance_index_ipcw, that addresses the first issue.
The second point can be addressed by extending the well-known receiver operating characteristic curve (ROC curve) to possibly censored survival times. Given a time point $t$, we can estimate how well a predictive model can distinguish subjects who will experience an event by time $t$ (sensitivity) from those who will not (specificity). The function cumulative_dynamic_auc implements an estimator of the cumulative/dynamic area under the ROC for a given list of time points.
The first part of this post will illustrate the first issue with simulated survival data, while the second part will focus on the time-dependent area under the ROC applied to data from a real study.
|
Laser Beam Expanders
This is a supplementary section of the Laser Optics Resource Guide.
Laser beam expanders increase the diameter of a collimated input beam to a larger collimated output beam. Beam expanders are used in applications such as laser scanning, interferometry, and remote sensing. Contemporary laser beam expanders are afocal systems developed from well-established optical telescope fundamentals. In such systems, the object rays enter parallel to the optical axis of the internal optics and exit parallel to them. This means that the entire system does not have a focal length.
Theory: Telescopes
Optical telescopes, traditionally used to view distant objects such as celestial bodies in outer space, are divided into two types: refracting and reflecting. Refracting telescopes utilize lenses to refract, or bend, light, while reflecting telescopes utilize mirrors to reflect light.
There are two categories of refracting telescopes: Keplerian and Galilean. A Keplerian telescope consists of lenses with positive focal lengths separated by the sum of their focal lengths (Figure 1). The lens closest to the object being viewed, or source image, is called the objective lens, while the lens closest to the eye, or image created, is called the image lens.
Figure 1: Keplerian telescope
A Galilean telescope consists of a positive lens and a negative lens that are also separated by the sum of their focal lengths (Figure 2). However, since one of the lenses is negative, the separation distance between the two lenses is much shorter than in the Keplerian design. While using the effective focal length of the two lenses will provide a good approximation of the total length, using the back focal length will provide the most accurate length.
Figure 2: Galilean telescope
The magnifying power, or the inverse of the magnification, of a telescope is based upon the focal lengths of the objective and eye lenses:
(1) $$\text{Magnifying Power} \left( \text{MP} \right) = \frac{1}{\text{Magnification} \left[ \text{m} \right]}$$
(2) $$\text{MP} = - \frac{\text{Focal Length}_{\text{Objective Lens}}}{\text{Focal Length}_\text{Image Lens}}$$
If the magnifying power is greater than 1, the telescope magnifies. When the magnifying power is less than 1, the telescope minifies.
Theory: Laser Beam Expanders
In a laser beam expander, the placement of the objective and image lenses is reversed. Keplerian beam expanders are designed so that the collimated input beam focuses to a spot between the objective and image lenses, producing a point within the system where the laser's energy is concentrated (Figure 3). The focused spot heats the air between the lenses, deflecting light rays from their optical path, which can potentially lead to wavefront errors. In very high power laser applications, ionization of the air at the focused spot may also be an issue. For this reason, most beam expanders utilize the Galilean design or some variation of it (Figure 4). However, Keplerian designs are still very useful in laser applications where spatial filtering is required because they provide a convenient focus point to place the spatial filter.
Figure 3: Keplerian beam expanders have an internal focus which is detrimental to high power applications, but useful for spatial filtering in lower power applications
Figure 4: Galilean beam expanders have no internal foci and are ideally suited for high power laser applications
When using the Keplerian or Galilean designs in laser beam expander applications, it is important to be able to calculate the output beam divergence. This determines the deviation from a perfectly collimated source. The beam divergence is dependent on the diameters of the input and output laser beams.
(3)$$ \frac{\text{Input Beam Divergence} \left( \theta_I \right)}{\text{Output Beam Divergence} \left( \theta_O \right)} = \frac{\text{Output Beam Diameter} \left( D_O \right)}{\text{Input Beam Diameter} \left( D_I \right)} $$
The magnifying power (MP) can now be expressed in terms of the beam divergences or beam diameters.
(4) $$\text{MP} = \frac{\theta _I}{\theta _O}$$
(5) $$\text{MP} = \frac{D_O}{D_I}$$
When interpreting Equation 4 and Equation 5, one can see that while the output beam diameter ($D_O$) increases, the output beam divergence ($\theta_O$) decreases and vice versa. Therefore, when using a beam expander to minimize the beam, its diameter will decrease but the divergence of the laser will increase. The price to pay for a small beam is a large divergence angle.
In addition, it is important to be able to calculate the output beam diameter at a specific working distance ($L$). The output beam diameter is a function of the input beam diameter and the beam divergence after a specific working distance (Figure 5).
Figure 5: A laser's input beam diameter and divergence can be used to calculate the output beam diameter at a specific working distance
(6) $$D_O = D_I + L \cdot \tan{\left( 2 \theta_I \right)}$$
Laser beam divergence is specified in terms of a half angle, which is why a factor of 2 is required in the second term in Equation 6.
A beam expander will increase the input beam diameter and decrease the input divergence by the magnifying power. Substituting Equations 4 and 5 into Equation 6 results in the following:
(7) $$D_O = \left( \text{MP} \times D_I \right) + L \cdot \tan{\left( \frac{2 \theta_I}{\text{MP}} \right)}$$
(8) $$D_O = \left( \text{MP} \times D_I \right) + L \cdot \tan{\left( 2 \theta_O \right)}$$
Application 1: Reducing Power Density
Beam expanders increase the beam area quadratically with respect to their magnification without significantly affecting the total energy contained within the beam. This results in a reduction of the beam’s power density and irradiance, which increases the lifetime of laser components, reduces the chances of laser induced damage, and enables the use of more economical coatings and optics.
Application 2: Minimizing Beam Diameter at a Distance
Although it may seem unintuitive, increasing the diameter of a laser beam using a beam expander may result in a smaller beam diameter far from the laser aperture. A beam expander will increase the input laser beam diameter by a specific expansion power, but it will also decrease the divergence by the same expansion power, resulting in a smaller collimated beam at a large distance.
Example
A numerical example to explore the previously mentioned beam expander equations:
Initial Parameters:
- Beam expander magnifying power: MP = 10X
- Input beam diameter: 1mm
- Input beam divergence: 0.5mrad
- Working distance: L = 100m
Calculated Parameter: Output beam diameter
(9) \begin{align} D_O & = \left( \text{MP} \times D_I \right) + L \cdot \tan{ \left( \frac{2 \theta_I}{\text{MP}} \right)} \\ D_O & = \left( 10 \text{X} \times 1 \text{mm} \right) + 100,000 \text{mm} \cdot \tan{\left( \frac{2 \cdot 0.5 \text{mrad}}{10 \text{X}} \right)} = 20 \text{mm} \end{align}
(10) \begin{align} D_O & = D_I + L \cdot \tan{\left( 2 \theta_I \right)} \\ D_O & = 1 \text{mm} + 100,000 \text{mm} \cdot \tan{\left(2 \cdot 0.5 \text{mrad} \right)} = 101 \text{mm} \end{align}
Compare this to the beam diameter without using a beam expander, computed using Equation 6 in Equation 10 above.
Using a 10X beam expander reduced the output beam diameter 100m away by over a factor of 5 when compared to the same laser without a beam expander.
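The example is easy to reproduce (a Python sketch of Equations 6 and 7):

import math

MP = 10             # magnifying power (10X)
D_in = 1.0          # input beam diameter, mm
theta_in = 0.5e-3   # input beam divergence (half angle), rad
L = 100_000.0       # working distance, mm (100 m)

d_with = MP * D_in + L * math.tan(2 * theta_in / MP)   # Equation 7
d_without = D_in + L * math.tan(2 * theta_in)          # Equation 6
print(d_with, d_without)                               # ~20 mm vs ~101 mm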
Application 3: Minimizing Focused Spot Size
Spot size is typically defined as the radial distance from the center point of maximum irradiance to the point where the intensity drops to $1/e^2$ of the initial value (Figure 6). The focused spot size of an ideal lens can be calculated by using the wavelength ($\lambda$), the focal length of the lens ($f$), the input beam diameter ($D_I$), the refractive index of the lens ($n$), and the beam's $M^2$ factor, which represents the degree of variation from an ideal Gaussian beam.
(11) $$\definecolor{Diffraction}{RGB}{0, 0, 255} \definecolor{Aberration}{RGB}{255, 0, 0} \phi_{\text{Spot Size}} = \color{Diffraction} \phi_{\text{Diffraction}} \color{black} + \color{Aberration} \phi_{\text{Aberration}} \color{black} = \color{Diffraction} \frac{4 \lambda M^2 f}{\pi D} \color{black} + \color{Aberration} \frac{k D^3}{f^2}$$
Figure 6: Spot size is usually measured at the point where the intensity $I(r)$ drops to $1/e^2$ of the initial value $I_0$
Spot size is fundamentally determined by the combination of diffraction and aberrations, illustrated by blue and red, respectively, in Figure 7. Generally, when focusing laser beams, spherical aberration is assumed to be the only and dominant type of aberration, which is why Equation 11 only takes spherical aberration into account. In regards to diffraction, the shorter the focal length, the smaller the spot size. More importantly, the larger the input beam diameter, the smaller the spot size.
By expanding the beam within the system, the input diameter is increased by a factor of MP, reducing the divergence by a factor of MP. When the beam is focused down to a small spot, the spot is a factor of MP smaller than that of an unexpanded beam for an ideal, diffraction-limited spot. However, there is a tradeoff with spherical aberration because it increases along with the input beam diameter.
Figure 7: At small input beam diameters, the focused spot size is diffraction limited. As the input beam diameter increases, spherical aberration starts to dominate the spot size
Application 4: Laser Beam Size Compensation
Variable laser beam expanders are often used to standardize laser beam sizes in applications. Lasers produce a specified beam diameter, but also have tolerances on that diameter. In order to achieve a set beam diameter further down the optical path in multiple systems, variable beam expanders can be used to compensate for this laser-to-laser variability in beam size.
Beam Expander Selection Criteria
When choosing a beam expander for an application, certain criteria must be determined in order to achieve the correct performance.
Sliding vs. Rotating Focusing Mechanisms:
The mechanics used to focus a beam expander or change the magnification of a variable beam expander are typically classified into two different types: sliding and rotating. Rotating focusing mechanisms, such as threaded focusing tubes, rotate the optical elements during translation. They have a lower cost than sliding focusing mechanisms due to their simplified mechanics, but they create the potential for beam wander due to the element rotation (Figure 8).
Figure 8: Exaggerated illustration of the beam wander that may be caused by rotating focus mechanisms
Sliding focusing mechanisms, such as helicoid barrels, translate the internal optics without rotating them, thus minimizing beam wander. However, this requires more complex mechanics than those of rotating focus mechanisms, increasing system cost. Poorly designed sliding optics may also have too much freedom of movement in the mechanics. While the pointing error in such poorly designed systems will not rotate when adjusted, it will be larger than for rotating optics or correctly designed sliding optics.
Internal Focus:
Keplerian beam expanders contain an internal focus that may be problematic in high power systems. The intense focused spot can ionize the air or lead to wavefront errors as a result of the heat deflecting light rays. Because of this, most beam expanders are Galilean to avoid complications caused by internal focusing. However, certain applications require spatial filtering which is only possible in Keplerian designs because of the internal focus capability.
Reflective vs. Transmissive:
Reflective beam expanders utilize curved mirrors instead of transmissive lenses to expand a beam (Figure 9). Reflective beam expanders are much less common than transmissive beam expanders, but have several advantages that make them the right choice for certain applications. Reflective beam expanders do not suffer from chromatic aberration, whereas the magnification and output beam collimation of transmissive beam expanders is wavelength dependent. While this is not relevant for many laser applications because lasers tend to lase at a single wavelength, it may be critical in broadband applications. The achromatic performance of reflective beam expanders is required for multi-laser systems, some tunable lasers, and ultrafast lasers. Ultrafast lasers inherently span a broader wavelength range than other lasers due to their extremely short pulse duration. Quantum cascade lasers also benefit from reflective beam expanders as transmissive options may not exist at their operating wavelengths.
Figure 9: Unlike transmissive beam expanders, the curved mirrors of this Canopus Reflective Beam Expander expand the incident laser beam. The holes on the side of the beam expander are integrated mounting features
Edmund Optics Products
The TECHSPEC® Scorpii Nd:YAG Beam Expanders are available for applications where cost is the driving factor. Featuring 2-element Galilean designs with diffraction-limited performance at YAG wavelengths, the Scorpii Nd:YAG Beam Expanders offer a variety of magnification ranges from 2X to 10X, ideal for prototyping and OEM integration.
The TECHSPEC® Vega Laser Line Beam Expanders provide excellent value with λ/10 performance at the design wavelength for apertures up to 4 mm. Featuring laser line V-coats for Nd:YAG harmonics down to 266 nm, these Galilean designs use fused silica elements and provide divergence adjustability.
Examples of the application of the Galilean telescope design to laser beam expanders can be found in several Edmund Optics products, all of which can be used to collimate and focus laser beams. Our TECHSPEC® Arcturus HeNe Beam Expanders use a simple two-lens design, consisting of a negative lens and an achromatic lens. A drawing of the internal optical elements is shown for reference.
Our TECHSPEC® Vega Broadband Beam Expanders feature broadband, divergence adjustable designs ideal for demanding tunable laser sources. They are optimized at a wide range of wavelengths and feature λ/10 transmitted wavefront error and no internally focusing ghost images, making them compatible with high power laser sources.
Our TECHSPEC® Draconis Broadband Beam Expanders improves upon the simple two-lens design with a proprietary multi-element lens design that enhances its ability to create a collimated or focused laser beam diameter at a long working distance.
The patent pending TECHSPEC® Canopus Reflective Beam Expanders are easily mounted due to a variety of integrated alignment features. They feature broadband performance with minimal wavefront distortion from the UV to Infrared from 250nm to 10μm. Their monolithic structure provides performance stability independent of changes in temperature.
|
In the free-electron 3D box model, we can calculate the density of states at the Fermi surface $g(\epsilon_F)$ easily, but how about the level spacing near the Fermi surface? I think this level spacing $\Delta E$ should satisfy $\Delta E = d/g(\epsilon_F)$, where $d$ is the degree of degeneracy on the Fermi surface. So how many electrons are on the Fermi surface? Are there only 6? ($\{k_F,0,0,\uparrow\}$; $\{k_F,0,0,\downarrow\}$; $\{0,k_F,0,\uparrow\}$; $\{0,k_F,0,\downarrow\}$; $\{0,0,k_F,\uparrow\}$; $\{0,0,k_F,\downarrow\}$)
In one textbook I saw that the level spacing near the Fermi surface is nearly $\epsilon_F/N$, so how can $d/g(\epsilon_F)$ be connected with $\epsilon_F/N$? Thank you so much :-)
|
Annals of Functional Analysis, Volume 4, Number 1 (2013), 138-148.
Coupled coincidence point theorems for nonlinear contractions under c-distance in cone metric spaces
Abstract
In this paper, among others, we prove the following results:
$(1)$ Let $(X,d)$ be a complete cone metric space partially ordered by $\sqsubseteq$ and let $q$ be a c-distance on $X$. Suppose $F : X \times X \to X$ and $g : X \to X$ are two continuous and commuting functions with $F(X \times X)\subseteq g(X)$. Let $F$ satisfy the mixed g-monotone property and $q(F(x, y), F(u, v)) \preceq \frac{k}{2} (q(gx, gu)+q(gy,gv))$ for some $k \in [0, 1)$ and all $x, y, u, v \in X$ with ($gx \sqsubseteq gu$ and $gy \sqsupseteq gv$) or ($gx \sqsupseteq gu$ and $gy \sqsubseteq gv$). If there exist $x_0, y_0 \in X$ satisfying $gx_0 \sqsubseteq F(x_0, y_0)$ and $F(y_0, x_0) \sqsubseteq gy_0$, then there exist $x^*, y^*\in X$ such that $F(x^*, y^*) = gx^*$ and $F(y^*, x^*) = gy^*$, that is, $F$ and $g$ have a coupled coincidence point $(x^*, y^*)$.
$(2)$ If, in $(1)$, we replace completeness of $(X,d)$ by completeness of $(g(X),d)$, and commutativity and continuity of the mappings $F$ and $g$ by the conditions $(i)$ for any nondecreasing sequence $\{x_n\}$ in $X$ converging to $x$ we have $x_n \sqsubseteq x$ for all $n$, and $(ii)$ for any nonincreasing sequence $\{y_n\}$ in $X$ converging to $y$ we have $y \sqsubseteq y_n$ for all $n$, then $F$ and $g$ have a coupled coincidence point $(x^*,y^*)$.
Batra, Rakesh; Vashistha, Sachin. Coupled coincidence point theorems for nonlinear contractions under c-distance in cone metric spaces. Ann. Funct. Anal. 4 (2013), no. 1, 138-148. doi:10.15352/afa/1399899842.
|
I was talking about the following tentative argument. The 2-category of distributors (also called profunctors) $\mathrm{Dist}$ has (small) categories for objects. For $C,D:\mathrm{Dist}$ the category of morphisms is defined as $$\mathrm{Dist}(C,D):=[D^{op} \times C, Set]$$ and denoted as $C \nrightarrow D$ (the middle line should be upright, but it's hard to latex). The order of arguments is chosen for the same variance as in the $Hom$-functor. The $Hom$-functor itself is the identity arrow $C \nrightarrow C$. The composition is given by the tensor product of bifunctors, i.e. if $F: D^{op} \times C \to Set$ and $G: E^{op} \times D \to Set$, then $$G\circ F(e, c) := \int^d G(e,d) \times F(d, c)$$ The fact that $Hom$ is the identity is thus equivalent to the co-Yoneda lemma. There is an embedding $i: \mathrm{Cat} \to \mathrm{Dist}$, which is the identity on objects and maps a functor $F: C \to D$ to the distributor $\hat F : C \nrightarrow D$ $$\hat F(d, c) = D(d, F c)$$ The distributors coming from functors can be described as precisely the morphisms with a right adjoint, the right adjoint given by $\hat F^!(c, d) = D(F c, d)$. The inclusion $i: \mathrm{Cat} \to \mathrm{Dist}$ itself has a right adjoint which maps a category $C$ to the presheaf category $\hat C := [C^{op}, Set]$ and distributors to their left Kan extension along the Yoneda embedding.
A more familiar 1-categorical construction is the allegory of correspondences $\mathrm{Corr}(S)$ in a 1-topos $S$. The only difference is that in allegories people usually say that morphisms are correspondences with a left adjoint. This is an unfortunate historical error due to the fact that correspondences are symmetric in both arguments, so the choice whether the graph of a function is a correspondence from $A$ to $B$ or vice versa is arbitrary. Since the subobjects of a set form a set, the adjunction realizes $\mathrm{Corr}(S)$ as an algebraic category over $S$. For $S=Set$ this is the statement that the sets of subsets are precisely the boolean algebras. Unfortunately $\mathrm{Dist}$ doesn't seem to be algebraic or presentable over $\mathrm{Cat}$ in a naive way, since $\mathrm{Cat}$ is the category of small categories but the right adjoint maps $\mathrm{Dist}$ to large categories of presheaves. We don't even need an adjunction: presentable categories have small hom-sets while $[D^{op} \times C, Set]$ is a large category with a large core. Maybe some different notion of presentability is required?
The cartesian product $\times$ gives a symmetric monoidal structure on $\mathrm{Dist}$ and any category is dualizable (but not fully dualizable) w.r.t. this monoidal structure, the dual being the opposite category. This allows us to talk algebraically about both covariant and contravariant functors, as well as adjunctions. First let's talk about cartesian closed categories. We can encode the cartesian product as the morphism $C \times C \nrightarrow C$ which is a functor (= has a right adjoint) and which is right adjoint to the diagonal $C\to C \times C$. The diagonal is a functor which arises by universality of the cartesian product w.r.t. functors. Similarly we can define a structure of $X^{\times n}$ for any $n:\mathbb{N}$. Thus we can encode a cartesian category as a tensor functor from the category whose objects are finite sets equipped with a function to $\{\pm 1\}$ (the negative sign will be required shortly), which is a closed tensor category: the tensor product is given by disjoint union of sets, and the dual of $(\ast, +)$ is $(\ast, -)$ and vice versa, together with extra nontrivial generating morphisms which encode the functors and adjunctions in the definition of the cartesian structure above. Now, any morphism $(\ast, +) \sqcup (\ast, +) \to (\ast, +)$ equivalently gives a morphism $(\ast, +) \to (\ast, -) \sqcup (\ast, +)$. The inner hom is defined by the condition that for the functor $C \times C \xrightarrow{\times} C$ this dual morphism is itself a functor, i.e. has a right adjoint. Adding the corresponding right adjoint to the classifying tensor category, we get a theory of cartesian closed categories.
Locally cartesian closed categories are defined similarly, but we need to work with arrow categories, so our tensor categories will include not only signed sets but also arbitrary signed finite categories. In particular, for the lcc structure we focus on the arrow $\bullet \to \bullet$ and demand that for a fixed endpoint the overcategory has a cartesian closed structure (this will also involve diagrams of the form $\bullet \to \bullet \leftarrow \bullet$ etc). This shows that the lcc categories are equivalent to tensor functors from a certain tensor 2-category into $\mathrm{Dist}$. This would be sufficient for presentability if $\mathrm{Dist}$ were presentable, but as noted above it isn't quite so.
|
Let $E$ be an elliptic curve defined over $\mathbb{Q}$. The canonical height of a rational point $P\in E(\mathbb{Q})$ is computed by writing the $x$-coordinate $x(nP)=A_n(P)/D_n(P)$ as a fraction in lowest terms and setting $$ \hat h(P) = \lim_{n\to\infty} \frac{1}{n^2}\log \max\bigl\{|A_n(P)|,|D_n(P)|\bigr\}. $$ (Note: some sources define $\hat h$ to be $\frac12$ of this quantity.)
Properties of $\hat h$:
- $\hat h(P)=\log \max\bigl\{|A_1(P)|,|D_1(P)|\bigr\}+O(1)$ as $P$ ranges over $E(\mathbb{Q})$.
- $\hat h(P)\ge0$, and $\hat h(P)=0$ if and only if $P$ is a torsion point.
- $\hat h:E(\mathbb Q)\to\mathbb R$ extends to a positive definite quadratic form on $E(\mathbb{Q})\otimes\mathbb{R}$.
- The height pairing on $E$ is the associated bilinear form $\langle P,Q\rangle=\frac{1}{2}\bigl(\hat h(P+Q)-\hat h(P)-\hat h(Q)\bigr)$, which is used to compute the elliptic regulator of $E$. It is a symmetric positive definite bilinear form on $E(\mathbb{Q})\otimes\mathbb{R}$.
For a number field $K$, the canonical height of $P\in E(K)$ is given by $\hat h(P)=\lim_{n\to\infty} n^{-2}h\bigl(x(nP)\bigr)$, where $h$ is the Weil height.
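To see the limit definition in action, here is a hedged numerical sketch in Python (my own addition; the curve $y^2=x^3-2$ and the point $P=(3,5)$ are chosen purely for illustration). It reaches $n=2^k$ by exact rational point doubling and prints $h(x(nP))/n^2$, which should settle toward $\hat h(P)$.

```python
from fractions import Fraction
from math import log

A, B = Fraction(0), Fraction(-2)           # curve y^2 = x^3 + A*x + B

def double(pt):
    """Double a rational point on y^2 = x^3 + A*x + B (y must be nonzero)."""
    x, y = pt
    lam = (3 * x * x + A) / (2 * y)        # tangent slope
    x2 = lam * lam - 2 * x
    y2 = lam * (x - x2) - y
    return (x2, y2)

def naive_height(x):
    """h(a/b) = log max(|a|, |b|) for x = a/b in lowest terms."""
    return log(max(abs(x.numerator), abs(x.denominator)))

P = (Fraction(3), Fraction(5))             # a well-known non-torsion point
n, pt = 1, P
for _ in range(6):                         # n = 2, 4, ..., 64
    pt = double(pt)
    n *= 2
    print(n, naive_height(pt[0]) / n**2)   # converges toward hhat(P)
```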
|
The way I know of to bound generalization error by Rademacher complexity is Theorem 2.4 in these lecture notes, http://ttic.uchicago.edu/~tewari/lectures/lecture9.pdf. Here the quantity on the LHS that the Rademacher complexity is trying to upper-bound is given as $L_{\phi}(\hat{f}_{\phi}^*)-\min_{f \in F} L_{\phi}(f)$, where $F$ is some "hypothesis class" of functions $f :X \rightarrow D$, $\phi : D \times Y \rightarrow [0,1]$ is the "loss function", the "$\phi$-"loss of any function $g : X \rightarrow D$ is defined as $L_\phi(g) = \mathbb{E}[\phi(g(x),y)]$ - where the expectation is taken over some distribution over the points $(x,y) \in X \times Y$ - and $\hat{f}_{\phi}^*$ is what the ERM returns over some $m$ samples, i.e. $\hat{f}_{\phi}^* = \operatorname{argmin}_{f \in F} \frac{1}{m} \sum_{i=1}^m \phi(f(x_i),y_i)$.
The above setting is called "agnostic" because at no point was it assumed that there exists any ground-truth labelling function $L \in F$ such that $y = L(x)$; rather, the class $F$ is to be seen as trying to learn, via empirical risk minimization, a distribution, say ${\cal D}$, over $X \times Y$.
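As a concrete handle on the quantity involved, here is a hedged Monte Carlo sketch (my own illustration, not from the notes) of the empirical Rademacher complexity $\hat{\mathcal{R}}_m(F) = \mathbb{E}_\sigma \sup_{f\in F} \frac{1}{m}\sum_i \sigma_i f(x_i)$ for a toy finite class:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_hyp, n_mc = 200, 50, 2000

# Toy class: n_hyp fixed sign vectors in {-1,+1}^m, i.e. f_j(x_i) = F[j, i].
F = rng.choice([-1.0, 1.0], size=(n_hyp, m))

sigma = rng.choice([-1.0, 1.0], size=(n_mc, m))      # Rademacher sign draws
# For each draw, take the supremum over the class of the empirical correlation.
rhat = np.mean(np.max(sigma @ F.T / m, axis=1))
print(rhat)   # roughly sqrt(2 log(n_hyp) / m), by Massart's finite-class lemma
```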
My question is threefold: is there any analogue of this Theorem $2.4$ when,
(a) the existence of an $L$ is assumed, where $L$ may or may not be in $F$ (the case $L \in F$ is, I guess, often called the ``realizable setting'') (...I have seen some papers trying to bound the generalization error of a specific algorithm in the realizable setting, but I somehow don't see Rademacher complexity defined in those settings!..)
(b) the loss function $\phi$ is not assumed to be bounded above but only assumed to be bounded below.
(c)
AND most importantly, say I have a class of labelling functions ${\cal L}$ mapping $X \rightarrow Y$ and I want to say the following, "Given a loss function $\phi$, irrespective of which member of ${\cal L}$ labels the data (maybe also irrespective of the distribution over $X$ used to measure $L_{\phi}$) the member of class $F$ obtained via ERM on the data, can never generalize well". Is there a version of Rademacher complexity which captures this?
|
The solution is easy by employing $\ gf\bmod gh\, =\, g(f\bmod h)\ \ $ [mod Distributive Law]
$$\begin{align}f(x)\!-\!f(a)\bmod (x\!-\!a)(x\!-\!b) &= (x\!-\!a)\left(\dfrac{f(x)\!-\!f(a)}{x\!-\!a}\bmod x\!-\!b\right)\\&= (x\!-\!a)\left(\dfrac{f(b)\!-\!f(a)}{b\!-\!a}\right)\ \ {\rm if}\ \ a\neq b\\&= (x\!-\!a)\,\ f'(a)\qquad\qquad\ \ \, {\rm if}\ \ a = b\end{align}$$
In the OP $\,a=1,b=-2\,$ so above is $\,f(x)\!-\!3 \equiv (x\!-\!1)(42)\ $ so $\ f(x) \equiv 42x-39$
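A hedged sanity check with sympy (the OP's $f$ isn't shown, so we build one consistent with the data $f(1)=3$; any multiple of $(x-1)(x+2)$ can be added without changing the remainder):

```python
from sympy import symbols, rem, expand

x = symbols('x')
f = 42*x - 39 + (x - 1)*(x + 2)*(x**3 + 7)   # arbitrary choice of the "h" part

a, b = 1, -2
print(rem(f, expand((x - a)*(x - b))))        # 42*x - 39

# The closed form from above: f(a) + (f(b)-f(a))/(b-a) * (x-a)
fa, fb = f.subs(x, a), f.subs(x, b)
print(expand(fa + (fb - fa)/(b - a)*(x - a))) # same polynomial
```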
Note that this method does not require solving a system of equations - as some methods do.
Below is a simple example - which may help to clarify the essence of the matter.
$\,\ \underbrace{x\!+\!2\mid f}_{\large f(-2)\ =\ 0\ }\Rightarrow\, f\bmod x^2\!-\!4\,$ $=\, (x\!+\!2)\Bigg[\dfrac{f}{x\!+\!2}\bmod x\!-\!\color{#c00}2\Bigg]$ $ =\, \underbrace{(x\!+\!2)\left[\dfrac{f(\color{#c00}{2})}{\color{#c00}2\!+\!2}\right] =\, 2(x\!+\!2)}_{\large f\bmod x-\color{#c00}2\,\ =\,\ f(\color{#c00}{2})\,\,\ =\,\ 8}$
Remark $ $ Alternatively, if modular arithmetic is unfamiliar we can eliminate it.
Write $\ f = f(a) + (x\!-\!a) g\,\ $ by dividing $\,f\,$ by $\,x\!-\!a.\,$ Dividing $\,g\,$ by $\,x\!-\!b\,$ yields that $\,\ \ f = f(a) + (x\!-\!a)(g(a)+(x\!-\!b)h)$
So $\ f(b) = f(a) + (b\!-\!a)\,g(a)\,$ by eval at $\,x=b.\,$ Solving for $\,\color{#c00}{g(a)}\,$ and substituting in above
$$ f(x)\, =\, \underbrace{f(a)\,+\,\color{#c00}{\dfrac{f(b)-f(a)}{b-a}} (x\!-\!a)}_{\large f(x)\,\bmod\, \color{#0a0}{(x-a)(x-b)}}\, +\, \color{#0a0}{(x\!-\!a)(x\!-\!b)} h(x)$$
The above Newton / Lagrange interpolant is precisely the Easy CRT solution of the system $$\begin{align} f(x) \equiv f(a) &\pmod{x\!-\!a}\\ f(x)\equiv f(b)&\pmod{x\!-\!b}\end{align}$$ Generally Lagrange interpolation is a special case of CRT = Chinese remainder Theorem. The first solution amounts to using the mod distributive law to derive Easy CRT as explained here.
If we specialize $\,b = a\,$ above then we get the first order Taylor series expansion. For polynomials this can be done purely algebraically (no limits) - see this purely algebraic definition of the derivative.
The $\!\bmod\!$ Distributive Law can be viewed as an equivalent "shifty" operational reformulation of CRT = Chinese Remainder Theorem, as I explain in the end of my Remark here. It is often more convenient to apply in practice because of its operational nature, e.g. here are many examples.
|
EDIT
I think I have to correct myself. I believe now that the integral is an analytic function of k and that the cut coming from Mathematica's antiderivative is spurious and so are the complications in performing the integral.
The argument is that the integrand is analytic in k and hence is expandable into a convergent series around k = 0 which can be integrated over t term by term.
The problem arises because Mathematica finds an antiderivative with respect to the variable t which contains a branch point singularity in the other variable k. We could choose a better antiderivative by subtracting a Log[k] term, which has the same branch point, and thus get rid of the spurious singularity.
The integrand
h = Exp[ k Exp[I t]];
is obviously an analytic function of k for any fixed value of t, and also an analytic function of t for fixed k.
Expanding h in a series about k = 0, taking three terms, gives
hk = Series[h, {k, 0, 3}] // Normal
(* Out[4]= 1 + E^(I t) k + 1/2 E^(2 I t) k^2 + 1/6 E^(3 I t) k^3 *)
The definite integral becomes
hkint = Integrate[hk, {t, 0, \[Pi]}]
(* Out[77]= 2 I k + (I k^3)/9 + \[Pi] *)
Hence the real part, which is the original integral, is Pi, whereas the imaginary part can be shown to sum up to
khi = 2 SinhIntegral[k]
SinhIntegral is defined as
Integrate[Sinh[z]/z, {z, 0, k}]
(* Out[80]= SinhIntegral[k] *)
And the documentation tells us that SinhIntegral[z] is an entire function of z with no branch cut discontinuities.
So we have found the complete function theoretic picture of the integral as a function of the complex variable k.
But now, let us have a look at the antiderivative obtained by Mathematica via the indefinite integral
a = Integrate[Exp[k Exp[I t]], t]
(* Out[81]= -I ExpIntegralEi[E^(I t) k] *)
This function is not well defined for k = 0:
a /. k -> 0
$i \infty$
The series expansion gives
Series[a, {k, 0, 2}] // Normal
(* Out[116]= -I EulerGamma - I E^(I t) k - 1/4 I E^(2 I t) k^2 + t
   + \[Pi] Floor[(\[Pi] - Arg[E^(I t)] - Arg[k])/(2 \[Pi])]
   - \[Pi] Floor[(\[Pi] - Arg[E^(-I t)] + Arg[k])/(2 \[Pi])]
   + \[Pi] Floor[(\[Pi] - Re[t])/(2 \[Pi])]
   - \[Pi] Floor[(\[Pi] + Re[t])/(2 \[Pi])] - I Log[k] *)
The Log[k] term gives rise to the branch cut, which is confirmed in the documentation, which says: the function ExpIntegralEi[z] has a branch cut discontinuity in the complex z plane running from -\[Infinity] to 0.
And even the difference appearing via the fundamental theorem of calculus
a1 = (a /. t -> \[Pi]) - (a /. t -> 0)
(* Out[84]= -I ExpIntegralEi[-k] + I ExpIntegralEi[k] *)
has some difficulties
In[85]:= a1 /. k -> 0
During evaluation of In[85]:= Infinity::indet: Indeterminate
expression (-I) [Infinity]+I [Infinity] encountered. >>
(* Out[85]= Indeterminate *)
Hence the unfavorable choice of the antiderivative gives rise to the problems encountered by Mathematica.
Original post
The problem arises due to the branch cut singularity at k = 0. We can see it from the following developments:
Let the integral in question be
f := Integrate[Exp[k Cos[t]] Cos[k Sin[t]], {t, 0, Pi}]
Considering instead of f this integral
Integrate[Exp[k Cos[t]] Exp[I k Sin[t]], {t, 0, Pi}]
or simplifying the exponent
g := Integrate[Exp[k Exp[I t]], {t, 0, Pi}]
For real k we have f = Re[g].
If we let Mathematica calculate this definite integral without Assumptions we get
g
(*
Out[45]= ConditionalExpression[-I (ExpIntegralEi[-k] - ExpIntegralEi[k]),
Re[k] == 0 && Im[k] <= 0]
*)
This is the bug noted in the OP, because for k -> 0 the result is zero:
Limit[-I (ExpIntegralEi[-k] - ExpIntegralEi[k]), k -> 0]
(* Out[75]= 0 *)
Notice that g has exactly the form of applying the fundamental theorem of calculus to the antiderivative, which is
a = Integrate[Exp[k Exp[I t]], t]
(* Out[76]= -I ExpIntegralEi[E^(I t) k] *)
As is well known and has been discussed many times in this forum, errors may arise from doing this carelessly. In fact, there is a branch point (logarithmic) singularity at k = 0
Series[ExpIntegralEi[k], {k, 0, 1}, Assumptions -> k > 0] // Normal
(* Out[123]= EulerGamma + k + Log[k] *)
An alternative approach makes assumptions on k.
For example
g1 = Integrate[Exp[k Exp[I t]], {t, 0, Pi}, Assumptions -> k >= 0]
$-i \left(-2\,\operatorname{Shi}(k)+\begin{cases} i \pi & k>0 \\ 0 & \text{otherwise} \end{cases}\right)$
% /. k -> 0
(* Out[107]= 0 *)
Still wrong.
Now
g2 = Integrate[Exp[k Exp[I t]], {t, 0, Pi}, Assumptions -> {k > 0}]
(* Out[111]= \[Pi] + 2 I SinhIntegral[k] *)
% /. k -> 0
(* Out[112]= \[Pi] *)
g3 = Integrate[Exp[k Exp[I t]], {t, 0, Pi}, Assumptions -> {k < 0}]
(* Out[113]= \[Pi] + 2 I SinhIntegral[k] *)
% /. k -> 0
(* Out[114]= \[Pi] *)
g4 = Integrate[Exp[k Exp[I t]], {t, 0, Pi}, Assumptions -> {k == 0}]
(* Out[115]= \[Pi] *)
The expressions g2 through g4 look as if we can collectively assume this
g5 = Integrate[Exp[k Exp[I t]], {t, 0, Pi}, Assumptions -> Im[k] == 0]
$\operatorname{ConditionalExpression}\left[-i \left(-2\,\operatorname{Shi}(k)+\begin{cases} i \pi & k>0 \\ 0 & \text{otherwise} \end{cases}\right),\ k\geq 0\right]$
% /. k -> 0
(* Out[117]= 0 *)
But it is not the case. Going on the two banks of the cut gives:
g6 = Integrate[Exp[k Exp[I t]], {t, 0, Pi}, Assumptions -> Im[k] > 0]
(* Out[120]= -\[Pi] + 2 I SinhIntegral[k] *)
g7 = Integrate[Exp[k Exp[I t]], {t, 0, Pi}, Assumptions -> Im[k] < 0]
(* Out[121]= \[Pi] + 2 I SinhIntegral[k] *)
These results show that Mathematica considers the initial integral as a function of the complex variable k, and care has to be taken when approaching the branch point k = 0.
|
I'm not that familiar with stellar formation, so you might be right. According to this website: http://www.josleys.com/show_gallery.php?galid=313 the earth is malleable because of its liquid core and tectonic plates on the surface (quite interestingly, according to that website, if the...
Thanks for your help. I appreciate it. This is not my area either, but I teach lab courses to pay for my tuition, and I wanted to give my students examples of some good applications of centripetal acceleration and I thought what could be a more grand example of centripetal acceleration than the...
Actually, according to Wikipedia: http://en.wikipedia.org/wiki/Clairaut's_theorem the gravity is modified by $g\left[1+\left(\frac{5m}{2}-f\right)\sin^2 \varphi\right]$, where m is the ratio of the centrifugal force to gravity at the equator (which should be really small), and f is proportional to the difference...
Thanks. I appreciate it. I wish I could just slip in a factor of 2 on $m\omega^2(r \cos \varphi) \sin\varphi = -mg \frac{1}{2}\frac{1}{r}\frac{dr}{d\varphi}$ and say that the 2 comes from considering mass spread over the entire ellipsoid. That would produce the right answer. But...
If you can get the right answer, let us know! I've been trying for hours and I've given up. I teach a lab course and I'm trying to give my students some information on centripetal acceleration, but I can't figure out how to calculate the bulge - like the original poster I get half the value.
I thought the formula should be $m\omega^2(r \cos \varphi) \sin\varphi = -mg\frac{1}{r}\frac{dr}{d\varphi}$, but maybe I'm wrong. You need the centripetal force along the tangential direction, so $m\omega^2 r \cos \varphi$ gets multiplied by $\sin\varphi$. When you integrate: m\omega^2(r \cos...
Shouldn't your $r$ in the LHS of $m\omega^2 r\sin\varphi = -mg\frac{1}{r}\frac{dr}{d\varphi}$ be $r\cos\varphi$, or the distance from the rotation axis? If you do that, then you get the original poster's value of half the bulge.
Sure. It's a little bit lengthy though, so it might take some work to read it: $P(m)=\frac{\sqrt{2 \pi n}\,n^n e^{-n}}{\sqrt{2 \pi m}\,m^m e^{-m}\cdot\sqrt{2 \pi (n-m)}\,(n-m)^{(n-m)}e^{-(n-m)}}p^m(1-p)^{n-m}=\frac{n^{n+1}}{\sqrt{2 \pi n}\cdot m^{m+\frac{1}{2}}\cdot(n-m)^{(n-m)+\frac{1}{2}}}p^m(1-p)^{n-m}$ ...
Normally I would just dismiss the formula, but I found it in two different sources (both particle physics sources though). One book talked about the vacuum bubble expansion of the integral: $\int \frac{1}{[k^2-m^2][(k-p)^2-m^2]}=\int \frac{1}{[k^2-m^2]^2}-\int \frac{p^2}{[k^2-m^2]^3}$ ...
This is probably a dumb question, but I have a book that claims that if you have a function of the momentum squared, $f(p^2)$, that $\frac{d}{dp^2}f=\frac{1}{2d}\frac{\partial }{\partial p_\mu}\frac{\partial }{\partial p^\mu}f$, where the d in the denominator is the number of spacetime...
I want to show that the binomial distribution $P(m)=\frac{n!}{(n-m)!m!}p^m(1-p)^{n-m}$, using Stirling's formula $n!=n^n e^{-n} \sqrt{2\pi n}$, reduces to the normal distribution $P(m)=\frac{1}{\sqrt{2 \pi n}} \frac{1}{\sqrt{p(1-p)}}\exp\left[-\frac{1}{2}\frac{(m-np)^2}{np(1-p)}\right]$ ...
For some information on tides, here's a website: http://www.lhup.edu/~dsimanek/scenario/tides.htm For some mathematical detail (just algebra): http://mb-soft.com/public/tides.html You only need gravity to explain the tides. Omega is a constant, equal to the angular velocity of the earth...
I'm pretty confused by the post - I think you might have a lot of misconceptions. First, tidal forces have nothing to do with centripetal acceleration. Tidal forces are due to gravity. Second, if you're standing still on the earth, unless you're at the poles, there is a centripetal force on...
I always thought it'd be dangerous to ground the primary side since it's at such high voltage. So maybe isolation transformers allow you to ground the secondary side, rather than, as you say, allowing the primary to float when you have a grounded secondary?
When lightning strikes near power lines, it breaks down the air which shorts the power lines, but why would this have an effect on the load? The short and the load are in parallel, so the load should not be affected. When you have two things in parallel what happens to one branch should not...
|
This question already has an answer here:
There is a set with $n$ elements. Why is the maximum number of subsets that can be formed out of it $2^n$?
We must show that $${n\choose 0}+{n\choose 1}+{n\choose 2}+\cdots +{n\choose n}=2^n$$ is the number of subsets of an $n$-element set $S$ where $n\geq0$.
Every subset of $S$ is a $k$-subset of $S$ where $k=0,1,2,...,n$. We know that ${n\choose k}$ equals the number of $k$-subsets of $S$. Thus by the Addition Principle $${n\choose 0}+{n\choose 1}+{n\choose 2}+\cdots +{n\choose n}$$ equals the number of subsets of the set $S$. We can count the same thing by observing that each element of the set $S$ has two choices: either it is in a subset or it is not. Let $S=\{x_1,x_2,x_3,...,x_n\}$. So, $x_1$ is either in a subset or it is not, $x_2$ is either in a subset or it is not, ..., $x_n$ is either in a subset or it is not. Thus by the Multiplication Principle there are $2^n$ ways we can form a subset of the set $S$. Hence ${n\choose 0}+{n\choose 1}+{n\choose 2}+\cdots +{n\choose n}=2^n$.
Another approach is to consider the Binomial Theorem $$(x+y)^n=\sum_{k=0}^n {n\choose k}x^{n-k}y^k.$$ Letting $x=1$ and $y=1$ we obtain$$2^n=\sum_{k=0}^n{n\choose k}.$$
Ross Belgram's method is a classic way to do it. Here's a faster way.
Consider a subset of the set $S=\{a_1,a_2,\cdots,a_n\}$, which has $n$ elements. To create a subset, which may be empty, we can go through each of the elements in the set $S$ and either put it in the subset or not put it in the subset.
That is, we have two choices for a given $a_k$: in the subset or not. So, if we have $2$ choices for each of the $n$ elements, the total number of subsets possible is $$ \underbrace {2 \cdot 2 \cdots 2}_{n \, \text{choices}} = 2^n. $$ This is sometimes known as committee-forming since it is analogous to the situation where we want to form a committee of arbitrary size, given $n$ people. And, so for each of the people, you can either put the person in the committee or not put him in the committee, thus giving $2^n$ possible committees.
$ \blacksquare $
Given the number of answers given so far that mention binomial coefficients, I just want to say you don't need them for this problem. Repeating the comment by Daniel Fischer, all you need to know is that a subset $S$ is determined by telling for each of the $n$ elements $x$ whether $x\in S$ or $x\notin S$. That gives $2$ possibilities for each $x$, and repeating this for every $x$ gives $2^n$ possibilities. Once all those choices are fixed, the subset $S$ is completely determined. There are precisely $2^n$ subsets.
If we choose $r$ elements (where $0\le r\le n$) of the $n$ elements to form subsets, there are $\binom nr$ combinations.
We know $$\sum_{0\le r\le n}\binom nr=(1+1)^n$$
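For anyone who wants to see the identity numerically, here is a small hedged Python check (my own addition) that enumerates all subsets explicitly:

```python
from itertools import combinations
from math import comb

# Verify sum_k C(n,k) = 2^n by listing every subset of an n-element set.
n = 10
elements = range(n)
subsets = [s for k in range(n + 1) for s in combinations(elements, k)]
print(len(subsets), 2**n, sum(comb(n, k) for k in range(n + 1)))
# all three agree: 1024 1024 1024
```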
|
(a) To find a basis for the plane $x-y+z = 0$, you could solve this equation in terms of $x$ and $z$: $y= x+z$. Then the set of vectors of your plane could be described as:
$$V = \left\{ (x, x+z, z) \ \vert \ x,z \in \mathbb{R} \right\} \ .$$
From this description it's easy to find a basis for your plane $V$: it will have two vectors, say $u,v \in \mathbb{R}^3$:
$$V = [u,v] \ .$$
Then, the orthogonal complement of your plane is a straight line, generated by the cross product of $u$ and $v$
$$V^\bot = [u \times v] \ .$$
(b) The closest point $\hat{b}$ to $b$ in the plane $V$ is the orthogonal projection of $b$ onto $V$. In this case, you don't really need the formulae you'll see in Wikipedia, because it's just the intersection of $V$ with the straight line with direction $u\times v$ that passes through $b$. That is, you have to solve the system of linear equations
$$\begin{align}x - y +z &= 0 \\(x,y,z) &= (1,2,0) + \lambda u\times v \ .\end{align}$$
(c) The "error" between $b$ and $\hat{b}$ is the same as the distance from $b$ to $\hat{b}$; that is, $\| b - \hat{b} \|$.
|
$$\mathrm{molarity} = \frac{\text{amount of solute}}{\text{volume of solution}} $$ and amount of substance is based on quantity (larger mass means larger amount), so how come it is an intensive property. Shouldn't it be an extensive property?
Concentration is an intensive property. The value of the property does not change with scale. Let me give you an example:
Let us say you had a homogenous mixture (solution) of sodium carbonate in water prepared from 112 g of sodium carbonate dissolved in 1031 g of water.
The concentration (in mass percent, or mass of solute per mass of solution) is:
$$c=\frac{112\text{ g solute}}{(112+1031)\text{g solution}}=0.09799 =9.799\%\text{ sodium carbonate by mass}$$
The concentration is the ratio of sodium carbonate to the total mass of the solution, which does not change if you are dealing with the entire 1143 g of the solution or if you dispense some of that solution into another vessel.
If you dispense 11.7 g of that solution into a flask for a reaction, what is the concentration of sodium carbonate in that flask?
It is still 9.799% by mass. The ratio of the mass of sodium carbonate present to the total mass present has not changed. The actual mass of sodium carbonate has changed:
$$0.09799\dfrac{\text{g solute}}{\text{g solution}}\times11.7\text{ g solution}=1.15\text{ g solute}$$
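A hedged one-line Python illustration of this point (numbers reused from above; my own addition):

```python
solute, solvent = 112.0, 1031.0            # grams
c = solute / (solute + solvent)            # 0.09799... (mass fraction)

for sample in (1143.0, 11.7, 0.5):         # grams of solution dispensed
    print(sample, c * sample, c)           # solute mass scales; c does not
```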
The concentration is a property dependent only on the composition of the solution, not on the amount of solution you have. The concentration of a solution with defined composition is independent of the size of the system.
In general, any property that is a ratio of two extensive properties becomes an intensive property, since both extensive properties will scale similarly with increasing or decreasing size of the system.
Some examples include:
Concentration (including molarity) - ratio of amount of solute (mass, volume, or moles) to amount of solution (mass or volume usually) Density - ratio of mass of a sample to the volume of the sample Specific heat - ratio of heat transferred to a sample to the amount of the sample (mass or moles usually, but volume also)
Each of these intensive properties is a ratio of an extensive property we care about (amount of solute, mass of sample, heat transferred) divided by the scale of the system (amount of stuff usually). This is like finding the slope of a graph showing the relationship between two extensive properties. The graph is linear and the value of slope does not change based on how much stuff you have - thus the slope (the ratio) is an intensive property.
Consider the following picture:
Break the ice block shown in the picture into two equal halves. Now I hope you would be able to answer the following questions:
1. What are the physical properties of the ice block which got halved? Certainly mass, volume, etc. (These are all extensive properties.)
2. What are the physical properties of the ice block which remained the same? Density, etc. (These are all intensive properties.)
If you have a doubt as to why the density remained the same, here is the explanation: even if the block got halved, the mass per unit volume remains the same in either of the pieces, which means that the density (mass per unit volume) remained the same. Thus it is an intensive property. Similarly, if you imagine a solution instead of an ice block, you will find that the molarity remains the same even if you divide the solution into two equal halves. Thus molarity is an intensive property.
|
I would like to know how to find an adjoint of an operator $T$ on a Hilbert space.
I tried to find out on my own but it's not solid. Here is what I did:
I picked a concrete example. Let $H=\ell^2$ and let $R: H \to H$ be the right shift operator. Let $e_1 = (1,0,0,\dots), e_2=(0,1,0,0,\dots)$ etc. Then I used the definition of the adjoint and trial and error as follows:
Let $R^\ast$ denote the adjoint I am trying to find. I compute $\langle Re_1,e_1\rangle=0$ and $\langle e_1, R^\ast e_1\rangle = 1\cdot (R^\ast e_1)_1 = 0$. I did this same silly computation for a few more pairs of $e_1$ and $e_i$ until I started to suspect that $R^\ast e_1 = 0$. Similarly, $R^\ast e_2 = e_1$.
Applying a big leap of guessing I am pretty sure that $R^\ast$ is the left shift operator. But being pretty sure is just not good enough.
How does one go about this correctly? What is the correct method of finding the adjoint of a given operator on a Hilbert space?
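For what it's worth, the guess can at least be sanity-checked numerically; here is a hedged finite-dimensional sketch (my own addition, not a proof - truncating $\ell^2$ changes the operator slightly):

```python
import numpy as np

# Truncate l^2 to R^5: the right shift R has ones on the subdiagonal, and its
# adjoint -- here simply the conjugate transpose -- is the left shift,
# matching the guess R* = left shift.
N = 5
R = np.eye(N, k=-1)              # right shift: R e_1 = e_2, R e_2 = e_3, ...
R_star = R.conj().T              # ones on the superdiagonal: the left shift

x = np.arange(1.0, N + 1)        # arbitrary test vectors
y = np.array([2.0, -1.0, 0.5, 3.0, 1.0])
print(np.dot(R @ x, y), np.dot(x, R_star @ y))   # <Rx, y> == <x, R*y>
```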
|
I have been trying to derive the Einstein equation from the Einstein-Hilbert action $$ S[g_{\mu \nu}] = \frac{1}{16 \pi} \int_M \text{d}^4x \sqrt{-g}R $$ The standard derivation states that the variation $\delta S =0$ when we vary the metric components. In this derivation, we use the fact that $\delta (\sqrt{-g}R)= \delta \sqrt{-g} R + \sqrt{-g}\delta R$. To me this seems quite obvious but I thought I would try and prove this.
My understanding is that if we have an action $$ S[q] = \int \text{d}t L(q,\dot{q},t)$$ we vary it as $$ \delta S[q] = \int \text{d}t \delta L(q,\dot{q},t)$$ so $$ \delta L= \frac{\partial L}{\partial q} \delta q + \frac{\partial L}{\partial \dot{q}} \delta \dot{q} \\ = \left( \frac{\partial L}{\partial q} - \frac{\text{d}}{\text{d}t} \frac{\partial L}{\partial \dot{q}} \right) \delta q + \frac{\text{d}}{\text{d}t} \left( \frac{\partial L}{\partial \dot{q}} \delta q \right)$$ As we integrate over $\delta L$, we ignore the last term as this produces a boundary term which we can take to vanish, therefore I say
$$ \delta L = \left( \frac{\partial L}{\partial q} - \frac{\text{d}}{\text{d}t} \frac{\partial L}{\partial \dot{q}} \right) \delta q $$
Okay, now let's say my Lagrangian is a product: $L(q,\dot{q},t) = f(q,\dot{q},t)g(q,\dot{q},t)$. Plugging this into the above formula for the variation, I have
$$ \frac{\partial L}{\partial q} = \frac{\partial f}{\partial q} g + \frac{\partial g }{\partial q} f $$
$$ \frac{\partial L}{\partial \dot{q}} = \frac{\partial f}{\partial \dot{q}}g + \frac{\partial g}{\partial \dot{q}} f$$
$$ \frac{\text{d}}{\text{d}t} \frac{\partial L}{\partial \dot{q}} = \left( \frac{\text{d}}{\text{d}t} \frac{\partial f}{\partial \dot{q}} \right) g + \left( \frac{\text{d}}{\text{d}t}\frac{\partial g}{\partial \dot{q}} \right) f + \frac{\partial f}{\partial \dot{q}} \frac{\text{d}g}{\text{d}t} + \frac{\partial g}{\partial \dot{q}} \frac{\text{d}f}{\text{d}t}$$
so the variation of this Lagrangian is
$$ \delta L = \left( \frac{\partial f}{\partial q} - \frac{\text{d}}{\text{d}t} \frac{\partial f}{\partial \dot{q}} \right)g \delta q + \left( \frac{\partial g}{\partial q} - \frac{\text{d}}{\text{d}t} \frac{\partial g}{\partial \dot{q}} \right)f \delta q - \frac{\partial f}{\partial \dot{q}} \frac{\text{d}g}{\text{d}t}\delta q - \frac{\partial g}{\partial \dot{q}} \frac{\text{d}f}{\text{d}t}\delta q$$
or
$$ \delta L = g \delta f + f \delta g - \frac{\partial f}{\partial \dot{q}} \frac{\text{d}g}{\text{d}t}\delta q - \frac{\partial g}{\partial \dot{q}} \frac{\text{d}f}{\text{d}t}\delta q $$
Now I can't seem to get rid of those horrible extra terms - I can't see how they would produce a boundary term when integrated. Maybe my understanding was incorrect and variations do not obey the product rule? Many standard resources suggest that varying the Einstein-Hilbert action obeys the product rule... is this just an exception?
My question:
How can I show that variations obey $\delta (fg) = f \delta g + g \delta f$
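One hedged way to convince yourself numerically (my own sketch, in sympy): treat $\delta$ as the first-order term of the perturbation $q \to q + \epsilon\eta$, *before* any integration by parts. At that level $\delta$ is an ordinary differential in $\epsilon$, so the product rule holds automatically:

```python
import sympy as sp

t, eps = sp.symbols('t epsilon')
q, eta = sp.Function('q')(t), sp.Function('eta')(t)

def make_fg(path):
    # f = q*q', g = q^2 evaluated on a given path (arbitrary-ish toy choices)
    return path * sp.diff(path, t), path**2

f, g = make_fg(q)
f_eps, g_eps = make_fg(q + eps*eta)          # perturbed path q -> q + eps*eta

def delta(perturbed):
    """First-order variation: d/d(eps) at eps = 0."""
    return sp.diff(perturbed, eps).subs(eps, 0)

lhs = delta(f_eps * g_eps)
rhs = f * delta(g_eps) + g * delta(f_eps)
print(sp.simplify(lhs - rhs))                # 0: delta obeys the product rule
```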
|
Randomized learning of the second-moment matrix of a smooth function
1. Institute of Electrical Engineering, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
2. Department of Electrical Engineering, Colorado School of Mines, Denver, CO 80401, USA
3. Departments of Statistics and Computer Science, Rutgers University, Piscataway, NJ 08854, USA
4. Department of Computer Science, University of Colorado Boulder, Boulder, CO 80309, USA
Consider an open set $ \mathbb{D}\subseteq\mathbb{R}^n $, equipped with a probability measure $ \mu $. An important characteristic of a smooth function $ f:\mathbb{D}\rightarrow\mathbb{R} $ is its second-moment matrix $ \Sigma_{\mu}: = \int \nabla f(x) \nabla f(x)^* \mu(dx) \in\mathbb{R}^{n\times n} $, where $ \nabla f(x)\in\mathbb{R}^n $ is the gradient of $ f(\cdot) $ at $ x\in\mathbb{D} $ and $ * $ stands for transpose. For instance, the span of the leading $ r $ eigenvectors of $ \Sigma_{\mu} $ forms an active subspace of $ f(\cdot) $, which contains the directions along which $ f(\cdot) $ changes the most and is of particular interest in ridge approximation. In this work, we propose a simple algorithm for estimating $ \Sigma_{\mu} $ from random point evaluations of $ f(\cdot) $ without imposing any structural assumptions on $ \Sigma_{\mu} $. Theoretical guarantees for this algorithm are established with the aid of the same technical tools that have proved valuable in the context of covariance matrix estimation from partial measurements.
Keywords: Active subspace, second-moment matrix, covariance estimation, ridge approximation, approximation theory.
Mathematics Subject Classification: Primary: 68W25; Secondary: 68W20.
Citation: Armin Eftekhari, Michael B. Wakin, Ping Li, Paul G. Constantine. Randomized learning of the second-moment matrix of a smooth function. Foundations of Data Science, 2019, 1 (3) : 329-387. doi: 10.3934/fods.2019015
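To illustrate the object being estimated, here is a hedged Python sketch of the general idea only (the paper's actual algorithm and guarantees are not reproduced; the toy function and all parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_samples, h = 4, 2000, 1e-5

def f(x):                       # toy ridge-like function: varies mostly along
    a = np.arange(1.0, n + 1)   # a single direction a
    return np.tanh(a @ x) + 0.01 * x[0]**2

def fd_grad(x):
    """Central finite-difference gradient from point evaluations of f."""
    g = np.empty(n)
    for i in range(n):
        e = np.zeros(n); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

X = rng.uniform(-1, 1, size=(n_samples, n))     # mu = uniform on [-1, 1]^n
G = np.array([fd_grad(x) for x in X])
Sigma_hat = G.T @ G / n_samples                 # Monte Carlo estimate of Sigma

# The top eigenvector should roughly align with a/||a||, the active direction.
a = np.arange(1.0, n + 1)
w = np.linalg.eigh(Sigma_hat)[1][:, -1]
print(abs(w @ (a / np.linalg.norm(a))))         # close to 1
```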
|
The square root of 2 is 1.41421356237... Multiply this successively by 1, by 2, by 3, and so on, writing down each result without its fractional part: 1 2 4 5 7 8 9 11 12... Beneath this, make a list of the numbers that are missing from the first... Read More →
Dear Uncle Colin, How does $\sqrt{9 - \sqrt{17}} = \frac{\sqrt{34}-\sqrt{2}}{2}$? I tried applying a formula, but I couldn't make it work. - Roots Are Dangerous, It's Chaotic A-Level Simplification Hi, RADICALS, and thank you for your message! Square roots of square roots are not usually trivial, but this one can... Read More →
In this month's episode of Wrong, But Useful, Colin and Dave are joined by Special Guest Co-Host @pecnut, who is Adam Townsend in real life. Adam studies the behaviour of sperm in mucus, and chocolate fountains1 - when he's not editing @chalkdustmag - Issue 6 of which is available now... Read More →
This puzzle was in February's MathsJam Shout, contributed by the Antwerp MathsJam. Visit mathsjam.com to find your nearest event! Consider the set ${1, 11, 111, ...}$ with 2017 elements. Show that at least one of the elements is a multiple of 2017. The Shout describes this one as tough; you... Read More →
Dear Uncle Colin, I'm stuck on a trigonometry proof: I need to show that $\csc(x) - \sin(x) \ge 0$ for $0 < x < \pi$. How would you go about it? - Coming Out Short of Expected Conclusion Hi, COSEC, and thank you for your message! As is so often... Read More →
On a recent1 episode of Wrong, But Useful, Dave mentioned something interesting2: if you take three regular shapes that meet neatly at a point - for example, three hexagons, or a square and two octagons - and make a cuboid whose edges are in the same ratio as the number... Read More →
Dear Uncle Colin, I'm normally pretty good at simultaneous equations, but I can't figure out how to solve this for $a$ and $b$. $\cos(a)-\cos(b) = x$ $\sin(a)-\sin(b) = y$ - Any Random Circle Hi, ARC, and thanks for your message! This is, it turns out, a bit trickier than it... Read More →
What's that, @pickover? Shiver in ecstasy, you say? Just for a change. Shiver in ecstasy. The sides of a pentagon, hexagon, & decagon, inscribed in congruent circles, form [a] right triangle. pic.twitter.com/Uastgc7SJo — Cliff Pickover (@pickover) May 20, 2017 That's neat. But why? Let's suppose the circles all have radius... Read More →
Dear Uncle Colin, I've got a funny square and I can't find $x$. Can you help? - Oughta Be Simple, Can't Unravel Resulting Equations Hi, OBSCURE, and thanks for your message! You're right, it ought to be simple... but it turns out not to be. It is simple enough to... Read More →
This puzzle presumably came to me by way of @ajk44, some time ago. Thanks, Alison! The problem, given here, is to find the equations of two lines that complete a square, given: Two of the lines are $y=ax+b$ and $y=ax+c$ One of the vertices is at $(0,b)$. The example given... Read More →
|
Let $H_{n}$ be the $n$th harmonic number. In Lagarias's paper "An Elementary Problem Equivalent to the Riemann Hypothesis," he shows that the statement $$\sum_{d\mid n}d\leq H_{n}+\exp(H_{n})\log(H_{n})\tag{1.1}$$ for every positive integer $n$, with equality if and only if $n=1$, is equivalent to the Riemann hypothesis.
In the last paragraph of section $2$, he says,
One can prove unconditionally that inequality $(1.1)$ holds for nearly all integers. Even if the Riemann hypothesis is false, the exceptions to $(1.1)$ will form a very sparse set. Furthermore, if there exists any counterexample to $(1.1)$ the value of $n$ will be very large.
So far as I can see, none of these statements is justified, or even suggested, by anything in the paper, though it may just be too subtle for me, of course.
Can anyone give me a reference justifying these statements, and quantifying the terms "nearly all integers", "very sparse set", "very large"? I assume that the first two mean "density $1$" and "density $0$" for some definition of density, but the last seems to indicate that the statement is known to be true below a specific bound, and it seems odd that the bound isn't mentioned.
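For concreteness, here is a hedged numerical check of inequality (1.1) for small $n$ (my own addition; it says nothing about the asymptotic claims being asked about):

```python
from math import exp, log
from sympy import divisor_sigma

# Check sigma(n) <= H_n + exp(H_n) * log(H_n) for n = 1..20.
# At n = 1: H_1 = 1 and log(1) = 0, so the right side is exactly 1 (equality).
H = 0.0
for n in range(1, 21):
    H += 1.0 / n                        # harmonic number H_n
    rhs = H + exp(H) * log(H)
    print(n, divisor_sigma(n), round(rhs, 3))
```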
|
The ideal encryption scheme $E$ would be one that, for every ciphertext $C=E(K, M)$, if the key remains secret for the adversary, the probability of identifying $M$ is negligible. Since that is not possible in practice, the second most reasonable approach is to define constraints strong enough to satisfy some definition of security. The $\operatorname{IND-}$ notation provides such definitions in terms of games, where a challenger keeps his key secret, and an adversary has certain capabilities and his target is to break the encryption system.
To keep it general, an encryption scheme will have a key generation algorithm $KG$, which will generate a key pair $K_E$, $K_D$, an encryption algorithm $E$ and a decryption algorithm $D$. Encryption is always revertible, but the encryption and decryption key can be different (covering public key crypto): $D(K_D, E(K_E, M))=M$
IND-CPA: INDistinguishability under Chosen Plaintext Attack
In words: the adversary generates two messages of equal length. The challenger decides, randomly, to encrypt one of them. The adversary tries to guess which of the messages was encrypted.
Algorithm:
1. Challenger: $K_E, K_D = KG(\text{security parameter})$.
2. Adversary: $m_0, m_1 =$ choose two messages of the same length. Send $m_0, m_1$ to the challenger. Perform additional operations in polynomial time, including calls to the encryption oracle.
3. Challenger: $b =$ randomly choose between 0 and 1.
4. Challenger: $C := E(K_E, m_b)$. Send $C$ to the adversary.
5. Adversary: perform additional operations in polynomial time, including calls to the encryption oracle. Output $guess$.
6. If $guess = b$, the adversary wins.
Further comment: the main concept introduced by this scenario is the polynomial bound. Our expectations from crypto are weakened from "the probability of winning is negligible" to "the probability of winning within a reasonable timeframe is negligible". The restriction for the messages to be of the same length aims to prevent the adversary from trivially winning the game by just comparing the lengths of the ciphertexts. However, this requirement is too weak, especially because it assumes only a single interaction between the adversary and the challenger.
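To make the shape of the game concrete, here is a hedged Python toy (all names and the throwaway "scheme" are invented for illustration; oracle access is omitted for brevity, and nothing here is a secure construction):

```python
import secrets

def KG():
    return secrets.token_bytes(16)                  # K_E = K_D for this toy

def E(key, m):
    pad = secrets.token_bytes(len(m))               # fresh randomness per call
    return pad, bytes(a ^ b ^ c for a, b, c in zip(m, pad, key * len(m)))

def ind_cpa_game(adversary):
    key = KG()
    m0, m1 = adversary.choose()                     # equal-length messages
    assert len(m0) == len(m1)
    b = secrets.randbits(1)
    c = E(key, (m0, m1)[b])
    return adversary.guess(c) == b                  # True iff adversary wins

class RandomGuesser:                                # wins with probability 1/2
    def choose(self):  return b"attack at dawn!", b"retreat at noon"
    def guess(self, c): return secrets.randbits(1)

wins = sum(ind_cpa_game(RandomGuesser()) for _ in range(10_000))
print(wins / 10_000)                                # ~0.5: no advantage
```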
IND-CCA1: INDistinguishability under Chosen Ciphertext Attack
In words: the target of the game is the same as in IND-CPA. The adversary has an additional capability: to call an encryption or decryption oracle. That means: the adversary can encrypt or decrypt arbitrary messages before obtaining the challenge ciphertext.
Algorithm:
1. Challenger: $K_E, K_D = KG(\text{security parameter})$.
2. Adversary (a polynomially-bounded number of times): call the encryption or decryption oracle for arbitrary plaintexts or ciphertexts, respectively.
3. Adversary: $m_0, m_1 =$ choose two messages of the same length.
4. Challenger: $b =$ randomly choose between 0 and 1.
5. Challenger: $C := E(K_E, m_b)$. Send $C$ to the adversary.
6. Adversary: perform additional operations in polynomial time. Output $guess$.
7. If $guess = b$, the adversary wins.
Further comment: IND-CCA1 considers the possibility of repeated interaction, implying that security does not weaken with time.
IND-CCA2: INDistinguishability under adaptive Chosen Ciphertext Attack
In words: In addition to its capabilities under IND-CCA1, the adversary is now given access to the oracles after receiving $C$, but cannot send $C$ to the decryption oracle.
Algorithm:
1. Challenger: $K_E, K_D = KG(\text{security parameter})$.
2. Adversary (as many times as he wants): call the encryption or decryption oracle for an arbitrary plaintext/ciphertext.
3. Adversary: $m_0, m_1 =$ choose two messages of the same length.
4. Challenger: $b =$ randomly choose between 0 and 1.
5. Challenger: $C := E(K_E, m_b)$. Send $C$ to the adversary.
6. Adversary: perform additional operations in polynomial time, including calls to the oracles, for ciphertexts different from $C$. Output $guess$.
7. If $guess = b$, the adversary wins.
Further comment: IND-CCA2 suggests that using the decryption oracle after knowing the ciphertext can give a reasonable advantage in some schemes, since the requests to the oracle could be customized depending on the specific ciphertext.
The notion of IND-CCA3 is added based on the reference provided by @SEJPM. I add it for completeness, but it seems important to point out that there are few resources about it, and my interpretation could be misleading.
IND-CCA3: (authenticated) INDistinguishability under adaptive Chosen Ciphertext Attack
In words: it is not possible to create a valid forgery with non-negligible probability. The adversary is given two pairs of encryption/decryption oracles. The first pair performs the intended encryption and decryption operations, while the second one is defined as follows: $\mathcal{E}_K$ returns encryptions of random strings, and $\mathcal{D}_K$ returns INVALID. Instead of being presented as a game, the notion is presented using the mathematical concept of advantage: the improvement of the probability of winning by using the valid oracles over the probability of success under the "bogus" oracles.
Formula: $\mathbf{Adv}^{ind-cca3}_{\pi}(A)=Pr\left[K\overset{\$}{\leftarrow}\mathcal{K}:A^{\mathcal{E}_K(\cdot),\mathcal{D}_K(\cdot)}\Rightarrow 1\right] - Pr\left[A^{\mathcal{E}_K(\$|\cdot|),\perp(\cdot)}\Rightarrow 1\right] $
Further comment: the paper where IND-CCA3 is introduced focuses on one fundamental idea: IND-CCA3 is equivalent to authenticated encryption.
Note that in the case of public-key cryptography the adversary is always given access to the public key $K_E$ as well as the encryption function $E(K_E, \cdot)$.
|
Apologies if this is obvious -- I'm very new to Mathematica.
I'm trying to minimize the solution to an ODE with respect to a variable. The following code generates the solution to the ODE,
sol=DSolve[ {(1/2) * σ^2 * k''[q] + μ*k'[q] - λ*k[q] == 0, k'[0] == -mc, k'[b] == me}, k, q]
but when I try minimizing using the Minimize command (with respect to only q),
Minimize[k[q] /.First@sol, q]
I'm coming up empty -- it should return the minimum value $k(q^*)$ in terms of $b, \lambda, \mu, \sigma$, $me$ and $mc$, as well as $q^*$ in terms of the same parameters.
* Update *
The following code works (thank you @bbgodfrey):
s = Simplify[k[q] /. DSolve[{(1/2)*σ^2*k''[q] + μ*k'[q] - λ*k[q] == 0, k'[0] == -mc, k'[b] == me}, k[q], q][[1, 1]]]
sq = q /. Solve[D[s, q] == 0, q][[2, 1]]
kq = Simplify[s /. q -> sq]
But a second minimization with respect to b of the function
Gq = kq + b*\[Gamma]
I start to run into trouble again. I try:
Minimize[Gq,b]
returns unevaluated. This also doesn't work:
sb=b/. Solve[D[Gq, b] == 0, b]
Any help would be very much appreciated!
|
Your public key contains two numbers. The first is a number n, called the modulus, computed as $p \cdot q = n$. The second number is e, the public exponent, which is used to encrypt your message m. The number e is chosen so that it has the following properties:
\begin{equation}1 < e < \phi(n) = (p-1)(q-1)\end{equation}\begin{equation}gcd(e, (p-1)(q-1)) = 1 \end{equation}
$\phi$ is Euler's totient function.
The private key contains the numbers p, q and d. The number d is your private exponent, and p, q are your prime numbers, which help you calculate n and the private and public exponents.
The private exponent have the following properties:
\begin{equation}1 < d < (p-1)(q-1)\end{equation}\begin{equation}d = e^{-1} mod~(p-1)(q-1)\end{equation}
I am not sure what Dp, Dq and QInv are in your configuration, but if you have d you are able to compute e with:
\begin{equation}e = d^{-1} mod~(p-1)(q-1)\end{equation}
I hope it will help you. If it doesn't, please specify what Dp, Dq and QInv are.
EDIT: I think you are using PKCS#1, which is mentioned in the comments below this answer.
In PKCS#1 you are also able to have a quintuple as a private key, consisting of p, q, Dp, Dq and QInv.
Dp and Dq satisfy the following equations:
\begin{equation}e \cdot Dp \equiv 1~mod~(p-1) \Leftrightarrow e = Dp^{-1} ~mod ~(p-1) \\e \cdot Dq \equiv 1~mod~(q-1) \Leftrightarrow e = Dq^{-1} ~mod ~(q-1)\end{equation}
and the number e has a slightly different property. The property of e is:
\begin{equation}\gcd(e, \lambda(n)) = 1~\text{and}~\lambda(n) = \operatorname{lcm}(p-1, q-1)\end{equation}
Additionally your d is satisfying this equation instead of that from above:
\begin{equation}ed \equiv 1 ~mod ~\lambda(n) \Leftrightarrow e = d^{-1} ~mod ~\lambda(n) \end{equation}
This equation should give you the right e from your given d, and $e = d^{-1} \bmod \lambda(n)$ means computing the modular inverse of d with the modulus $\operatorname{lcm}(p-1,q-1)$.
I hope this will help you.
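A hedged toy-sized Python illustration of the equations above (tiny primes, obviously not secure; `pow(x, -1, m)` needs Python 3.8+ and `math.lcm` needs 3.9+):

```python
from math import gcd, lcm

p, q = 61, 53
lam = lcm(p - 1, q - 1)             # lambda(n) = lcm(p-1, q-1)

e = 17
assert gcd(e, lam) == 1
d = pow(e, -1, lam)                 # private exponent d = e^{-1} mod lambda(n)
assert pow(d, -1, lam) == e         # ...and e recovered from d, as claimed

dp, dq = d % (p - 1), d % (q - 1)   # the CRT exponents Dp, Dq
qinv = pow(q, -1, p)                # QInv
assert (e * dp) % (p - 1) == 1 and (e * dq) % (q - 1) == 1
print(d, dp, dq, qinv)
```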
|
I am interested in the following problem which seems like an extension of the Kruskal-Katona Theorem.
Let $A_k \subseteq \{0,1\}^n$ be a subset of the hypercube such that every element in $A_k$ has exactly $k$ ones. For any element $x \in \{0,1\}^n$ let $N_l(x)$ be the set of elements obtained by flipping one of the 1's in $x$ to 0. (Generally referred to as the lower shadow of $x$.)
Let the majority upper shadow of $A_k$, referred to as $M_u(A_k)$, be the set such that for each $a \in M_u(A_k)$ the number of ones in $a$ is $k+1$ and $|N_l(a) \cap A_k| \geq \frac{k+1}{2}$. That is, at least half of $a$'s lower neighbours are present in $A_k$. Given the size of $A_k$, can we put an upper bound on the size of $M_u(A_k)$?
Has this problem been studied, and are there results relevant to the above? Note that in case $|A_k| = \binom{n}{k}$ we of course have that $|M_u(A_k)|=\binom{n}{k+1}$. In general I am looking at the size of $A_k$ being $\epsilon\cdot \binom{n}{k}$ where $\epsilon$ is a small constant.
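For small parameters the definitions are easy to explore by brute force; a hedged Python sketch (my own, purely illustrative, with arbitrary parameter choices):

```python
import random
from itertools import combinations

random.seed(0)
n, k = 8, 3
level_k = list(combinations(range(n), k))           # k-sets as sorted tuples
A = set(random.sample(level_k, len(level_k) // 4))  # |A| ~ eps * C(n, k)

def majority_upper_shadow(A):
    out = set()
    for a in combinations(range(n), k + 1):
        # N_l(a): flip one of the k+1 ones of a to zero
        lower = {tuple(sorted(set(a) - {i})) for i in a}
        if len(lower & A) >= (k + 1) / 2:           # majority condition
            out.add(a)
    return out

print(len(A), len(majority_upper_shadow(A)))
```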
Could you also refer me to a good survey of the Kruskal-Katona Theorem in general, one that surveys recent results in this setting?
Thanks in advance
|
Given n random points on a circle, find, with proof, the probability that the convex polygon formed by these points does not contain the center of the circle. At first I thought it easiest to find the probability that the center IS enclosed and then take 1 minus that. The probability would be that at least 1 point lies on one side of the circle, or the semicircle, while n-1 points lie on the other. The case for a triangle, n=3, would mean two of the points lie on the same semicircle and the third lies on another.
The space of all possible configurations is $C^n$, where $C$ is the unit circle, here identified with the interval $J=[0,2\pi)$ via $J\to C$, $t\to P(t)=\cos t+i\sin t$.
Now fix such a "winning configuration" of points $$(A_1,\dots,A_n)=(P(t_1),\dots,P(t_n))\ ,$$ so that the center $0$ is not in the convex hull. Now reorder them in cyclic trigonometric order on the circle $C$. Add the $0$ as a new point and build the "bigger" convex hull. Then $0$ is one of the vertices in this bigger convex hull, has two adjacent vertices, uniquely determined, $A_j$ and $A_k$, $1\le j,k\le n$, and exactly one of these two, denoted $A^*$, precedes the other one in the mentioned cyclic order. It is characterized by the fact that on the trigonometrically oriented arc segment $s(A^*)$ from the opposite of $A^*$ to $A^*$ we find all other points.
So to realize a "winning configuration" we have to make the choice of one index $j$, make an arbitrary choice of $A^j$ (which becomes $A^*$ in a second=, then choose arbitrarily the other $(n-1)$ points in the semicircle $s(A_j)$, so that for this configuration $A_j=A^*$. So and only so we can realize a "winning configuration". The corresponding probability is thus $$ \frac{n\cdot(2\pi)\cdot \pi^{n-1}}{(2\pi)^n} = \frac n{2^{n-1}}\ . $$
$\square$
Note: This seems to be a "high probability" for $n=3$ from our "feeling" of choosing a triangle and a circle around. Let us use some other argument for this special case. (Well, we take the "circle around" first, then three points on it.)
Because of the circular symmetry, we fix the first point $A_1=1+0i$ on the unit circle in $\Bbb C$. A second arbitrary point $A_2$ on $C$ has either positive or negative imaginary part; we split into two cases, "upper" and "lower" semicircle. Where can we take $A_3$ now so that $A_1A_2A_3$ does not cover (in the convex hull) the origin $0$?
one possible choice of $A_3$ is in the "upper" semicircle if $A_2$ is there, this happens with probability $1/4$,
one other possible choice of $A_3$ is in the "lower" semicircle if $A_2$ is there, this happens again with probability $1/4$,
and one final chance is to take $A_3$ in the semicircle from $A_2$ containing $A_1$, so that among the points $A_2,A_3$ one is "lower", and one is "upper", so we are integrating w.r.t. $A_2$ in the "upper" semicircle, i.e. $t_2\in [0,\pi)$, the length of the corresponding interval for $t_3$, which is $(t_2-\pi,0]$. (Same by symmetry w.r.t. the real axis, or w.r.t. $A_2\leftrightarrow A_3$ if $A_2$ is "lower".) The graph of the map $t_2\to (\pi-t_2)$ (associating the above length) is simple... The contribution to the wanted probability from this case is again $1/4$.
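As a sanity check, here is a small Monte Carlo sketch (my own) of the claim $P = n/2^{n-1}$, using the equivalent condition that all $n$ points fall in some semicircle, i.e. that the largest cyclic gap between consecutive points exceeds $\pi$:
import math, random

def center_outside(n, trials=200_000):
    hits = 0
    for _ in range(trials):
        ts = sorted(random.uniform(0, 2 * math.pi) for _ in range(n))
        gaps = [b - a for a, b in zip(ts, ts[1:])]
        gaps.append(2 * math.pi - (ts[-1] - ts[0]))   # wrap-around gap
        hits += max(gaps) > math.pi                   # all points in one semicircle
    return hits / trials

for n in (3, 4, 5):
    print(n, center_outside(n), n / 2 ** (n - 1))     # e.g. n=3: ~0.75 vs 0.75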
|
The general theorem is: for all odd, distinct primes $p, q$, the following holds: $$\left( \frac{p}{q} \right) \left( \frac{q}{p} \right) = (-1)^{\frac{p-1}{2}\frac{q-1}{2}}$$
I've discovered the following proof for the case $q=3$: Consider the Möbius transformation $f(x) = \frac{1}{1-x}$, defined on $F_{p} \cup \{\infty\}$. It is a bijection of order 3: $f^{(3)} = Id$.
Now we'll count the number of fixed points of $f$, modulo 3:
1) We can calculate the number of solutions to $f(x) = x$: it is equivalent to $(2x-1)^2 = -3$. Since $p \neq 2,3$, the number of solutions is $\left( \frac{-3}{p} \right) + 1$ (if $-3$ is a non-square, there's no solution. Else, there are 2 distinct solutions, corresponding to 2 distinct roots of $-3$).
2) We know the structure of $f$ as a permutation: only 3-cycles or fixed points. Thus, the number of fixed points is just $|F_{p} \cup \{\infty\}| \mod 3$, or: $p+1 \mod 3$.
Combining the 2 results yields $p = \left( \frac{-3}{p} \right) \mod 3$. Exploiting Euler's criterion gives $\left( \frac{p}{3} \right) = p^{\frac{3-1}{2}} = p \mod 3$, and using $\left( \frac{-1}{p} \right) = (-1)^{\frac{p-1}{2}}$, we get: $$\left( \frac{3}{p} \right) \left( \frac{p}{3} \right) = (-1)^{\frac{p-1}{2}\frac{3-1}{2}} \mod 3$$ and equality in $\mathbb{Z}$ follows.
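The counting argument is easy to check numerically; a sketch (my own) verifying, for small primes $p>3$, that the number of fixed points of $f$ equals $\left(\frac{-3}{p}\right)+1$ and is congruent to $p+1$ modulo 3 (pow(x, -1, p) needs Python 3.8+):
def legendre(a, p):
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def fixed_points(p):
    INF = "inf"
    def f(x):
        if x == INF:                      # f(infinity) = 0
            return 0
        if x == 1:                        # f(1) = infinity
            return INF
        return pow((1 - x) % p, -1, p)    # 1/(1-x) in F_p
    return sum(f(x) == x for x in list(range(p)) + [INF])

for p in (5, 7, 11, 13, 17, 19, 23):
    fp = fixed_points(p)
    assert fp == legendre(-3, p) + 1 and fp % 3 == (p + 1) % 3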
My questions:
Can this idea be generalized, with other functions $f$? Is there a list/article of proofs of special cases of the theorem?
|
Publications by Caroline Terquem
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY
482 (2019) 530-549
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY
476 (2018) 5032-5056
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY
464 (2017) 2429-2440
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY
464 (2017) 924-932 CoRoT 223992193: Investigating the variability in a low-mass, pre-main sequence eclipsing binary with evidence of a circumbinary disk
ASTRONOMY & ASTROPHYSICS
599 (2017) ARTN A27
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY
454 (2015) 3472-3479 CoRoT 223992193: A new, low-mass, pre-main sequence eclipsing binary with evidence of a circumbinary disk
Astronomy and Astrophysics EDP Sciences
562 (2014) 50-69
We present the discovery of CoRoT 223992193, a double-lined, detached eclipsing binary, comprising two pre-main sequence M dwarfs, discovered by the CoRoT space mission during a 23-day observation of the 3 Myr old NGC 2264 star-forming region. Using multi-epoch optical and near-IR follow-up spectroscopy with FLAMES on the Very Large Telescope and ISIS on the William Herschel Telescope we obtain a full orbital solution and derive the fundamental parameters of both stars by modelling the light curve and radial velocity data. The orbit is circular and has a period of $3.8745745 \pm 0.0000014$ days. The masses and radii of the two stars are $0.67 \pm 0.01$ and $0.495 \pm 0.007$ $M_{\odot}$ and $1.30 \pm 0.04$ and $1.11 ~^{+0.04}_{-0.05}$ $R_{\odot}$, respectively. This system is a useful test of evolutionary models of young low-mass stars, as it lies in a region of parameter space where observational constraints are scarce; comparison with these models indicates an apparent age of $\sim$3.5-6 Myr. The systemic velocity is within $1\sigma$ of the cluster value which, along with the presence of lithium absorption, strongly indicates cluster membership. The CoRoT light curve also contains large-amplitude, rapidly evolving out-of-eclipse variations, which are difficult to explain using starspots alone. The system's spectral energy distribution reveals a mid-infrared excess, which we model as thermal emission from a small amount of dust located in the inner cavity of a circumbinary disk. In turn, this opens up the possibility that some of the out-of-eclipse variability could be due to occultations of the central stars by material located at the inner edge or in the central cavity of the circumbinary disk.
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY
444 (2014) 1738-1746
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY
443 (2014) 568-583
Monthly Notices of the Royal Astronomical Society
428 (2013) 658-669
Monthly Notices of the Royal Astronomical Society
418 (2011) 1928-1934
We study the evolution of a protoplanet of a few earth masses embedded in a protoplanetary disc. If we assume that the atmosphere of the protoplanet, i.e. the volume of gas in hydrostatic equilibrium bound to the core, has a surface radius smaller than the Roche lobe radius, we show that it expands as it accretes both planetesimals and gas at a fixed rate from the nebula until it fills in the Roche lobe. The evolution occurs on a time-scale shorter than the formation or migration time-scales. Therefore, we conclude that protoplanets of a few earth masses have an atmosphere that extends to the Roche lobe surface, where it joins on to the nebula. This is true even when the Bondi radius is smaller than the Roche lobe radius. This is in contrast to the commonly used models in which the static atmosphere extends up to the Bondi radius and is surrounded by a cold accretion flow. As a result, any calculation of the tidal torque exerted by the disc on to the protoplanet should exclude the material present in the Roche lobe, since it is bound to the protoplanet. © 2011 The Authors Monthly Notices of the Royal Astronomical Society © 2011 RAS.
EAS Publications Series
41 (2010) 209-218
We review models of protoplanetary disks. In the earlier stages of evolution, disks are subject to gravitational instabilities that redistribute mass and angular momentum on short timescales. Later on, when the mass of the disk is below ten percent or so of that of the central star, accretion occurs through the magnetorotational instability. The parts of the disks that are not ionized enough to couple to the magnetic field may not accrete or accrete inefficiently. We also review theories of planet migration. Tidal interaction between a disk and an embedded planet leads to angular momentum exchange between the planetary orbital motion and the disk rotation. This results in low mass planets migrating with respect to the gas in the disk, while massive planets open up a gap in the vicinity of their orbit and migrate in as the disk is accreted. © EAS, EDP Sciences, 2010.
On the dynamics of multiple systems of hot super-Earths and Neptunes: Tidal circularization, resonance and the HD 40307 system
Monthly Notices of the Royal Astronomical Society
405 (2010) 573-592
In this paper, we consider the dynamics of a system of hot super-Earths or Neptunes such as HD 40307. We show that, as tidal interaction with the central star leads to small eccentricities, the planets in this system could be undergoing resonant coupling even though the period ratios depart significantly from very precise commensurability. In a three-planet system, this is indicated by the fact that resonant angles librate or are associated with long-term changes to the orbital elements. In HD 40307, we expect that three resonant angles could be involved in this way. We propose that the planets in this system were in a strict Laplace resonance while they migrated through the disc. After entering the disc inner cavity, tidal interaction would cause the period ratios to increase from two but with the inner pair deviating less than the outer pair, counter to what occurs in HD 40307. However, the relationship between these pairs that occur in HD 40307 might be produced if the resonance is impulsively modified by an event like a close encounter shortly after the planetary system decouples from the disc. We find this to be in principle possible for a small relative perturbation on the order of a few $\times 10^{-3}$, but then we find that the evolution to the present system in a reasonable time is possible only if the masses are significantly larger than the minimum masses and the tidal dissipation is very effective. On the other hand, we found that a system like HD 40307 with minimum masses and more realistic tidal dissipation could be produced if the eccentricity of the outermost planet was impulsively increased to ∼0.15. We remark that the form of resonantly coupled tidal evolution we consider here is quite general and could be of greater significance for systems with inner planets on significantly shorter orbital periods characteristic of, for example, CoRoT 7 b. © 2010 The Authors. Journal compilation © 2010 RAS.
Monthly Notices of the Royal Astronomical Society
404 (2010) 409-414
In this paper, we show that the eccentricity of a planet on an inclined orbit with respect to a disc can be pumped up to high values by the gravitational potential of the disc, even when the orbit of the planet crosses the disc plane. This process is an extension of the Kozai effect. If the orbit of the planet is well inside the disc inner cavity, the process is formally identical to the classical Kozai effect. If the planet's orbit crosses the disc but most of the disc mass is beyond the orbit, the eccentricity of the planet grows when the initial angle between the orbit and the disc is larger than some critical value which may be significantly smaller than the classical value of $39^\circ$. Both the eccentricity and the inclination angle then vary periodically with time. When the period of the oscillations of the eccentricity is smaller than the disc lifetime, the planet may be left on an eccentric orbit as the disc dissipates. © 2010 The Authors. Journal compilation. © 2010 RAS.
Astrophysical Journal
689 (2008) 532-538
Astrophysical Journal
654 (2007) 1110-1120
Reports on Progress in Physics
69 (2006) 119-180
We review the observations of extrasolar planets, ongoing developments in theories of planet formation, orbital migration and the evolution of multiplanet systems. © 2006 IOP Publishing Ltd.
Monthly Notices of the Royal Astronomical Society
363 (2005) 943-953
Using 2D magnetohydrodynamic (MHD) numerical simulations performed with two different finite-difference Eulerian codes, we analyse the effect that a toroidal magnetic field has on low-mass planet migration in non-turbulent protoplanetary discs. The presence of the magnetic field modifies the waves that can propagate in the disc. In agreement with a recent linear analysis, we find that two magnetic resonances develop on both sides of the planet orbit, which contribute to a significant global torque. In order to measure the torque exerted by the disc on the planet, we perform simulations in which the latter is either fixed on a circular orbit or allowed to migrate. For a 5-M⊕ planet, when the ratio β between the square of the sound speed and that of the Alfvén speed at the location of the planet is equal to 2, we find inward migration when the magnetic field Bφ is uniform in the disc, reduced migration when Bφ decreases as $r^{-1}$ and outward migration when Bφ decreases as $r^{-2}$. These results are in agreement with predictions from the linear analysis. Taken as a whole, our results confirm that even a subthermal stable field can stop inward migration of an earth-like planet. © 2005 RAS.
Extrasolar Planets: Today and Tomorrow
321 (2004) 379-392
|
There is a huge variety of feasible approaches. Which is best suited depends on
what you are trying to show, how much detail you want or need.
If the algorithm is a widely known one which you use as a subroutine, you often remain at a higher level. If the algorithm is the main object under investigation, you probably want to be more detailed. The same can be said for analyses: if you need a rough upper runtime bound you proceed differently from when you want precise counts of statements.
I will give you three examples for the well-known algorithm Mergesort which hopefully illustrate this.
High Level
The algorithm Mergesort takes a list, splits it in two (about) equally long parts, recurses on those partial lists and merges the (sorted) results so that the end result is sorted. On singleton or empty lists, it returns the input.
This algorithm is obviously a correct sorting algorithm. Splitting the list and merging it can each be implemented in time $\Theta(n)$, which gives us a recurrence for worst case runtime $T(n) = 2T\left(\frac{n}{2}\right) + \Theta(n)$. By the Master theorem, this evaluates to $T(n) \in \Theta(n\log n)$.
Medium Level
The algorithm Mergesort is given by the following pseudo-code:
procedure mergesort(l : List) {
  if ( l.length < 2 ) {
    return l
  }
  left = mergesort(l.take(l.length / 2))
  right = mergesort(l.drop(l.length / 2))
  result = []
  while ( left.length > 0 || right.length > 0 ) {
    if ( right.length == 0 || (left.length > 0 && left.head <= right.head) ) {
      result = left.head :: result
      left = left.tail
    }
    else {
      result = right.head :: result
      right = right.tail
    }
  }
  return result.reverse
}
We prove correctness by induction. For lists of length zero or one, the algorithm is trivially correct. As induction hypothesis, assume mergesort performs correctly on lists of length at most $n$ for some arbitrary, but fixed natural $n>1$. Now let $L$ be a list of length $n+1$. By induction hypothesis, left and right hold (non-decreasingly) sorted versions of the first resp. second half of $L$ after the recursive calls. Therefore, the while loop selects in every iteration the smallest element not yet investigated and appends it to result; thus result is a non-increasingly sorted list containing all elements from left and right. The reverse is a non-decreasingly sorted version of $L$, which is the returned -- and desired -- result.
As for runtime, let us count element comparisons and list operations (which dominate the runtime asymptotically). Lists of length less than two cause neither. For lists of length $n>1$, we have those operations caused by preparing the inputs for the recursive calls, those from the recursive calls themselves, plus the while loop and one reverse. Both recursive parameters can be computed with at most $n$ list operations each. The while loop is executed exactly $n$ times and every iteration causes at most one element comparison and exactly two list operations. The final reverse can be implemented to use $2n$ list operations -- every element is removed from the input and put into the output list. Therefore, the operation count fulfills the following recurrence:
$\qquad \begin{align}T(0) = T(1) &= 0 \\T(n) &\leq T\left(\left\lceil\frac{n}{2}\right\rceil\right) + T\left(\left\lfloor\frac{n}{2}\right\rfloor\right) + 7n\end{align}$
As $T$ is clearly non-decreasing, it is sufficient to consider $n=2^k$ for asymptotic growth. In this case, the recurrence simplifies to
$\qquad \begin{align}T(0) = T(1) &= 0 \\T(n) &\leq 2T\left(\frac{n}{2}\right) + 7n\end{align}$
By the Master theorem, we get $T \in \Theta(n \log n)$, which extends to the runtime of mergesort.
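If you want to see the recurrence in action, a quick numerical sketch (my own): for powers of two the bound is met with equality, $T(n) = 7n\log_2 n$:
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):                      # the operation-count recurrence from above
    if n < 2:
        return 0
    return T(math.ceil(n / 2)) + T(n // 2) + 7 * n

for n in (2**10, 2**12, 2**14):
    print(n, T(n), 7 * n * math.log2(n))   # the two values coincide for n = 2^k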
Ultra-low level
Consider this (generalised) implementation of Mergesort in Isabelle/HOL:
types dataset = "nat * string"
fun leq :: "dataset \<Rightarrow> dataset \<Rightarrow> bool" where
"leq (kx::nat, dx) (ky, dy) = (kx \<le> ky)"
fun merge :: "dataset list \<Rightarrow> dataset list \<Rightarrow> dataset list" where
"merge [] b = b" |
"merge a [] = a" |
"merge (a # as) (b # bs) = (if leq a b then a # merge as (b # bs) else b # merge (a # as) bs)"
function (sequential) msort :: "dataset list \<Rightarrow> dataset list" where
"msort [] = []" |
"msort [x] = [x]" |
"msort l = (let mid = length l div 2 in merge (msort (take mid l)) (msort (drop mid l)))"
by pat_completeness auto
termination
apply (relation "measure length")
by simp+
This already includes proofs of well-definedness and termination. Find an (almost) complete correctness proof here.
For the "runtime", that is, the number of comparisons, a recurrence similar to the one in the prior section can be set up. Instead of using the Master theorem and forgetting the constants, you can also analyse it to get an approximation that is asymptotically equal to the true quantity. You can find the full analysis in [1]; here is a rough outline (it does not necessarily fit the Isabelle/HOL code):
As above, the recurrence for the number of comparisons is
$\qquad \begin{align}f_0 = f_1 &= 0 \\f_n &= f_{\left\lceil\frac{n}{2}\right\rceil} + f_{\left\lfloor\frac{n}{2}\right\rfloor} + e_n\end{align}$
where $e_n$ is the number of comparisons needed for merging the partial results². In order to get rid of the floors and ceils, we perform a case distinction over whether $n$ is even:
$\qquad \displaystyle \begin{cases}f_{2m} &= 2f_m + e_{2m} \\f_{2m+1} &= f_m + f_{m+1} + e_{2m+1} \end{cases}$
Using nested forward/backward differences of $f_n$ and $e_n$ we get that
$\qquad \displaystyle \sum\limits_{k=1}^{n-1} (n-k) \cdot \Delta\kern-.2em\nabla f_k = f_n - nf_1$.
The sum matches the right-hand side of Perron's formula. We define the Dirichlet generating series of $\Delta\kern-.2em\nabla f_k$ as
$\qquad \displaystyle W(s) = \sum\limits_{k\geq 1} \Delta\kern-.2em\nabla f_k k^{-s} = \frac{1}{1-2^{-s}} \cdot \underbrace{\sum\limits_{k \geq 1} \frac{\Delta\kern-.2em\nabla e_k}{k^s}}_{=:\ \boxminus(s)}$
which together with Perron's formula leads us to
$\qquad \displaystyle f_n = nf_1 + \frac{n}{2\pi i} \int\limits_{3-i\infty}^{3+i\infty} \frac{\boxminus(s)n^s}{(1-2^{-s})s(s+1)}ds$.
Evaluation of $\boxminus(s)$ depends on which case is analysed. Other than that, we can -- after some trickery -- apply the residue theorem to get
$\qquad \displaystyle f_n \sim n \cdot \log_2(n) + n \cdot A(\log_2(n)) + 1$
where $A$ is a periodic function with values in $[-1,-0.9]$.
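An empirical sketch (my own) that counts actual merge comparisons and places them next to the leading term $n\log_2 n$ (random inputs, so this probes the average case rather than a specific $e_n$ from the footnote):
import math, random

def msort_count(xs):
    # returns (sorted list, number of element comparisons made in merges)
    if len(xs) < 2:
        return xs, 0
    mid = len(xs) // 2
    left, cl = msort_count(xs[:mid])
    right, cr = msort_count(xs[mid:])
    merged, i, j, c = [], 0, 0, 0
    while i < len(left) and j < len(right):
        c += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, cl + cr + c

for n in (2**10, 2**14):
    _, c = msort_count(random.sample(range(10 * n), n))
    print(n, c, round(n * math.log2(n)))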
[1] Mellin transforms and asymptotics: the mergesort recurrence by Flajolet and Golin (1992)
² Best case: $e_n = \left\lfloor\frac{n}{2}\right\rfloor$; worst case: $e_n = n-1$; average case: $e_n = n - \frac{\left\lfloor\frac{n}{2}\right\rfloor}{\left\lceil\frac{n}{2}\right\rceil + 1} - \frac{\left\lceil\frac{n}{2}\right\rceil}{\left\lfloor\frac{n}{2}\right\rfloor + 1}$
|
What is a formal definition of an irrational number? Usually, we say that it is a number that is not rational. Is that enough?
Uncle Google and auntie Wikipedia are your friends. Wikipedia correctly states:
In mathematics, an irrational number is any real number that cannot be expressed as a ratio of integers
In a way, it's not enough to say that any number that is not rational is irrational, because most complex numbers (like $i$) are neither rational nor irrational.
A real number is irrational if it is not rational.
But the definition of a real number is much less simple.
It is a number that cannot be written as $\frac{p}{q}$, for $(p,q)\in\mathbb{Z}\times\mathbb{N}^\ast$. Formally: $r$ is irrational iff
(i) $r\in\mathbb{R}$; and (ii) $\forall(p,q)\in\mathbb{Z}\times\mathbb{N}^\ast, r\neq \frac{p}{q}$.
Yes, irrational = "not rational". It is well-defined in any context where "rational" is well-defined, e.g. in any ring containing the rational numbers $\,\Bbb Q.\,$ For example, as I mentioned in an answer to the question Is $i$ irrational?, many algebraic number theorists use the terminology for complex numbers $\not\in \Bbb Q.\,$ In particular, be aware that irrational need not imply real (as in the classical case).
|
Found this on Complexity Zoo (warning: expired certificate) under NP Over The Complex Numbers.
[BCS+97] show the following striking result. For a positive integer $n$, let $t(n)$ denote the minimum number of additions, subtractions, and multiplications needed to construct $n$, starting from 1. If for every sequence ${n_k}$ of positive integers, $t(n_k k!)$ grows faster than polylogarithmically in $k$, then $P_C$ does not equal $NP_C$.
[BCS+97] L. Blum, F. Cucker, M. Shub, and S. Smale. Complexity and Real Computation, Springer-Verlag, 1997.
Couldn't find the paper online, so the exact definition would be helpful.
What are bounds for $t(n!)$?
Added later: What are bounds for $t(a n!)$ where $a$ is nonzero and no other properties of $a$ are required?
Didn't spend much time, but couldn't solve this:
Find $a>1,k>1$ and $t(a k!) < t(k!)$.
Added: I doubt this is of any practical interest because of the space complexity of the factorial.
$$ n\log\left(\frac{n}{e}\right)+1 \leq \log n! $$
In OEIS A025201, a(n) = floor(log(n!)).
We have $n!=\Gamma(n+1)$ and $\log \log \Gamma(2^{1000})=699.6\ldots$ and $\log \log 2^{2^{1000}}=692.7\ldots$.
Even if an oracle computes the factorial, it is impossible to store the result in the combined storage of all computers on earth.
Added later: comment from Gerhard "Wants To See Better Bounds" Paseman:
I'd like to add that a similar sounding problem https://mathoverflow.net/a/75792/3206 using additions and multiplications has easy lower and upper bounds of O(log n). The computation model for this problem is different from the above problem, as "repeated subterms" do not add to the complexity of the computation, to state the matter (from memory) roughly. Gerhard "Wants To See Better Bounds" Paseman, 2015.01.26
References for the above answer in OEIS: http://oeis.org/A005245
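For very small step counts, $t(n)$ as defined above can be found by exhaustive search over sets of reachable values. A brute-force sketch (my own reading of the BCS+97 model: shortest straight-line program from 1 with free reuse of intermediates; negative intermediates are dropped for simplicity):
def t_table(max_steps, bound=10**4):
    best = {1: 0}
    states = {frozenset([1])}
    for step in range(1, max_steps + 1):
        nxt = set()
        for s in states:
            for a in s:
                for b in s:
                    for v in (a + b, a - b, a * b):
                        if 0 < v <= bound:
                            if v not in best:
                                best[v] = step       # first time seen = minimal step
                            if v not in s:
                                nxt.add(s | {v})
        states = nxt
    return best

best = t_table(4)
print(sorted((n, t) for n, t in best.items() if t <= 3))
# e.g. t(2)=1, t(3)=t(4)=2, t(5)=t(6)=t(8)=t(9)=t(16)=3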
|
1. Homework Statement
A piece of wire is bent to form a circle with radius r. It has a steady current I flowing through it in a counterclockwise direction as seen from the top (looking in the negative z direction).
What is B_z(0), the z component of B at the center (i.e., x = y = z = 0) of the loop?
Express your answer in terms of I, r, and constants like mu_0 and pi.
2. Homework Equations
3. The Attempt at a Solution
I know this equation:
[tex]\frac{(\mu_0)I}{2(\pi)r}[/tex]
but there is a hint that says I need to find the integrand.
Thank You.
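For reference, a sketch of where the hint leads (my own working, not from the original thread): $\frac{\mu_0 I}{2\pi r}$ is the field of an infinite straight wire, not a loop. For the loop, every Biot-Savart element $I\,d\vec{l}$ is perpendicular to the unit vector pointing at the center and sits at the same distance r, so the integrand is constant along the loop:
[tex]B_z(0) = \frac{\mu_0 I}{4\pi}\oint \frac{dl}{r^2} = \frac{\mu_0 I}{4\pi r^2}\,(2\pi r) = \frac{\mu_0 I}{2r}[/tex]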
|
2019-09-04 12:06
Soft QCD and Central Exclusive Production at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The LHCb detector, owing to its unique acceptance coverage $(2 < \eta < 5)$ and a precise track and vertex reconstruction, is a universal tool allowing the study of various aspects of electroweak and QCD processes, such as particle correlations or Central Exclusive Production. The recent results on the measurement of the inelastic cross section at $ \sqrt s = 13 \ \rm{TeV}$ as well as the Bose-Einstein correlations of same-sign pions and kinematic correlations for pairs of beauty hadrons performed using large samples of proton-proton collision data accumulated with the LHCb detector at $\sqrt s = 7\ \rm{and} \ 8 \ \rm{TeV}$, are summarized in the present proceedings, together with the studies of Central Exclusive Production at $ \sqrt s = 13 \ \rm{TeV}$ exploiting new forward shower counters installed upstream and downstream of the LHCb detector. [...] LHCb-PROC-2019-008; CERN-LHCb-PROC-2019-008.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019
2019-08-15 17:39
LHCb Upgrades / Steinkamp, Olaf (Universitaet Zuerich (CH)) During the LHC long shutdown 2, in 2019/2020, the LHCb collaboration is going to perform a major upgrade of the experiment. The upgraded detector is designed to operate at a five times higher instantaneous luminosity than in Run II and can be read out at the full bunch-crossing frequency of the LHC, abolishing the need for a hardware trigger [...] LHCb-PROC-2019-007; CERN-LHCb-PROC-2019-007.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
2019-08-15 17:36
Tests of Lepton Flavour Universality at LHCb / Mueller, Katharina (Universitaet Zuerich (CH)) In the Standard Model of particle physics the three charged leptons are identical copies of each other, apart from mass differences, and the electroweak coupling of the gauge bosons to leptons is independent of the lepton flavour. This prediction is called lepton flavour universality (LFU) and is well tested. [...] LHCb-PROC-2019-006; CERN-LHCb-PROC-2019-006.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
2019-02-12 14:01
XYZ states at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The latest years have observed a resurrection of interest in searches for exotic states motivated by precision spectroscopy studies of beauty and charm hadrons providing the observation of several exotic states. The latest results on spectroscopy of exotic hadrons are reviewed, using the proton-proton collision data collected by the LHCb experiment. [...] LHCb-PROC-2019-004; CERN-LHCb-PROC-2019-004.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : 15th International Workshop on Meson Physics, Kraków, Poland, 7 - 12 Jun 2018
2019-01-21 09:59
Mixing and indirect $CP$ violation in two-body Charm decays at LHCb / Pajero, Tommaso (Universita & INFN Pisa (IT)) The copious number of $D^0$ decays collected by the LHCb experiment during 2011--2016 allows the test of the violation of the $CP$ symmetry in the decay of charm quarks with unprecedented precision, approaching for the first time the expectations of the Standard Model. We present the latest measurements of LHCb of mixing and indirect $CP$ violation in the decay of $D^0$ mesons into two charged hadrons [...] LHCb-PROC-2019-003; CERN-LHCb-PROC-2019-003.- Geneva : CERN, 2019 - 10. Fulltext: PDF; In : 10th International Workshop on the CKM Unitarity Triangle, Heidelberg, Germany, 17 - 21 Sep 2018
2019-01-15 14:22
Experimental status of LNU in B decays in LHCb / Benson, Sean (Nikhef National institute for subatomic physics (NL)) In the Standard Model, the three charged leptons are identical copies of each other, apart from mass differences. Experimental tests of this feature in semileptonic decays of b-hadrons are highly sensitive to New Physics particles which preferentially couple to the 2nd and 3rd generations of leptons. [...] LHCb-PROC-2019-002; CERN-LHCb-PROC-2019-002.- Geneva : CERN, 2019 - 7. Fulltext: PDF; In : The 15th International Workshop on Tau Lepton Physics, Amsterdam, Netherlands, 24 - 28 Sep 2018
2018-12-20 16:31
Simultaneous usage of the LHCb HLT farm for Online and Offline processing workflows LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of "online" and "offline" analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process such that Online data are immediately available offline for physics analysis (Turbo analysis); the computing capacity of the HLT farm has been used simultaneously for different workflows: synchronous first level trigger, asynchronous second level trigger, and Monte-Carlo simulation. [...] LHCb-PROC-2018-031; CERN-LHCb-PROC-2018-031.- Geneva : CERN, 2018 - 7. Fulltext: PDF; In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018
2018-12-14 16:02
The Timepix3 Telescope and Sensor R&D for the LHCb VELO Upgrade / Dall'Occo, Elena (Nikhef National institute for subatomic physics (NL)) The VErtex LOcator (VELO) of the LHCb detector is going to be replaced in the context of a major upgrade of the experiment planned for 2019-2020. The upgraded VELO is a silicon pixel detector, designed to withstand a radiation dose up to $8 \times 10^{15}$ 1 MeV $n_{eq}$ cm$^{-2}$, with the additional challenge of a highly non-uniform radiation exposure. [...] LHCb-PROC-2018-030; CERN-LHCb-PROC-2018-030.- Geneva : CERN, 2018 - 8.
|
On one occasion, I'd like to use Fubini's Theorem to swap the order of integration of a contour integral with an integral with respect to a measure (show that $\int_{\gamma} \int_{\Omega} f(x,z) d\mu(x) dz = \int_{\Omega} \int_{\gamma} f(x,z) dz d\mu(x)$). For that, I need to take two measure spaces, $(\Omega, \mathcal{F}, \mu)$ and $(\mathbb{C}, \mathcal{B}(\mathbb{C}), c)$, and construct the product measure in the appropriate way.
Looking at the definition of the (Riemann) contour integral, the first thought would be to proceed as is done with Riemann-Stieltjes integrals: properly choose the measure $c$ so that $\int_{\gamma} f(z) dz = \int_{\tilde{\gamma}} f dc$ (where $\tilde{\gamma}$ is some subset of $\mathbb{C}$ associated to the contour $\gamma$). It seems it would be something like $c=m+im$, where $m$ is Lebesgue measure on $\mathcal{B}(\mathbb{R})$, but that wouldn't be a measure in the usual sense.
How can I make a correspondence between contour integrals and Lebesgue integrals and thus be able to apply Fubini's Theorem? It seems possible, since the Riemann sums for the contour are just the "simple functions" (as $c$ is not a measure) $\sum_{k=1}^n f(\gamma(t_k)) c(\Delta t_k)$...
Thank you very much and sorry, this is the first time I deal with complex-valued measurable functions.
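A sketch of the standard reduction (my own framing, under the integrability assumption $\int_a^b\int_{\Omega} |f(x,\gamma(t))|\,|\gamma'(t)|\,d\mu(x)\,dt < \infty$): parametrize $\gamma:[a,b]\to\mathbb{C}$, piecewise $C^1$, so that $$\int_{\gamma}\int_{\Omega} f(x,z)\,d\mu(x)\,dz = \int_a^b \left(\int_{\Omega} f(x,\gamma(t))\,d\mu(x)\right)\gamma'(t)\,dt\ .$$ Splitting the integrand $f\cdot\gamma'$ into its four real pieces (real and imaginary parts of each factor), each piece is an honest real-valued integrand on the product space $(\Omega\times[a,b],\ \mathcal{F}\otimes\mathcal{B}([a,b]),\ \mu\times m)$, so the classical Fubini theorem applies to each piece separately, and the contour integral never has to be interpreted as integration against a genuine measure on $\mathbb{C}$.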
|
Solving the complex equation $z^2=1+2\,i$ using
Solve[z^2 == 1 + 2 I]
returns $\left\{\left\{z\to -\sqrt{1+2\,i}\right\},\left\{z\to\sqrt{1+2\,i}\right\}\right\}$, but how do I force Mathematica to always output in the form $a+b\,i$, $a,b\in\mathbb{R}$? Or, if there is no output form from Solve to do this, how do I convert/transform the answer to the $a+b\,i$ form?
I tried
z = a + b I; Solve[{z^2 == 1 + 2 I, {a, b} ∈ Reals}, {a, b}]
which returns
$$\left\{\left\{a\to-\sqrt{\frac{1}{2}\left(1+\sqrt{5}\right)},b\to\sqrt{\frac{1}{2}\left(1+\sqrt{5}\right)}-\frac{\left(1+\sqrt{5}\right)^{3/2}}{2\sqrt{2}}\right\},\left\{a\to \sqrt{\frac{1}{2}\left(1+\sqrt{5}\right)},b\to\frac{\left(1+\sqrt{5}\right)^{3/2}}{2\sqrt{2}}-\sqrt{\frac{1}{2}\left(1+\sqrt{5}\right)}\right\}\right\}$$ but don't think it's a very elegant (and short) way to solve the equation.
One solution is, in its best presentation, $$z_1=\sqrt{\frac{1+\sqrt{5}}{2}}+i\sqrt{\frac{2}{1+\sqrt{5}}}$$ Can this be output from Solve (or obtained by transforming the output of Solve)?
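For context, a hand derivation (my own sketch) of that closed form: writing $z=a+b\,i$ with $a,b$ real and matching real and imaginary parts of $z^2=1+2\,i$ gives $$a^2-b^2=1,\qquad 2ab=2\ \Rightarrow\ b=\tfrac{1}{a},\qquad a^4-a^2-1=0\ \Rightarrow\ a^2=\tfrac{1+\sqrt{5}}{2},$$ hence $a=\sqrt{\tfrac{1+\sqrt{5}}{2}}$ and $b=\tfrac1a=\sqrt{\tfrac{2}{1+\sqrt{5}}}$, which is exactly $z_1$ above.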
|
Definitions
correlation coefficient $= r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2\sum_{i=1}^{n}(y_i - \bar{y})^2}}$
My Question
What is the motivation of this formula? It's supposed to measure linear relationships on bivariate data, but I don't understand why it would do that as defined. For example, Riemann integrals are said to measure area under a curve, and that makes sense because $\sum f(x_i)\Delta x$ is adding areas of rectangles under the curve $f(x)$ approximating its area more and more as we take more samples. Does such an intuition exist for the correlation coefficient? What is it? My background in statistics is nothing but a bit of discrete probability. I know histograms, data plots, mean, median, range, variance, standard deviation, box plots and scatter plots at this point (from reading the first weeks material on an introductory statistics class).
My Research
All of the "Questions that may already have your answer" seemed to either be asking about what the formula said mathematically or asked questions that were more advanced than my knowledge.
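To make the formula concrete, a minimal numerical sketch (my own, assuming numpy is available): the definition is exactly the average product of z-scores, so it also equals $\operatorname{cov}(x,y)/(\sigma_x\sigma_y)$, and points lying along an increasing line give products that are consistently positive:
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2 * x + rng.normal(size=100)    # a roughly linear relationship

zx = (x - x.mean()) / x.std()       # z-scores (population sd, ddof=0)
zy = (y - y.mean()) / y.std()
r_manual = (zx * zy).mean()         # identical to the textbook formula

print(r_manual, np.corrcoef(x, y)[0, 1])   # the two values agree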
|