This is Prob. 10, Sec. 3.5, in the book
Introductory Functional Analysis with Applications by Erwin Kreyszig:
Let $X$ be an inner product space, let $M$ be an uncountable orthonormal subset of $X$, let $x \in X$ such that $x$ is not the zero vector $\mathbf{0}_X$ in $X$, and let $S(x)$ be the subset of $M$ defined as follows: $$ S(x) \colon= \left\{ \ v \in M \ \colon \ \langle x, v \rangle \neq 0 \ \right\}. $$ Then how to show that this set $S(x)$ is at most countable?
And, this is Prob. 8, Sec. 3.4, in the same book:
Let $\left( e_k \right)_{k \in \mathbb{N} }$ be an orthonormal sequence in an inner product space $X$, let $x \in X$, let $m \in \mathbb{N}$, let $A_m(x)$ be the subset of $\mathbb{N}$ defined as follows: $$ A_m(x) \colon= \left\{ \ k \in \mathbb{N} \ \colon \ \left\lvert \left\langle x, e_k \right\rangle \right\rvert > \frac{1}{m} \ \right\}, $$ and let $n_m$ be the cardinality of $A_m(x)$. Then $$ n_m \leq m^2 \lVert x \rVert^2. $$
The proof of this result involves Bessel's inequality and goes as follows:
Since $\left( e_k \right)_{k \in \mathbb{N}}$ is an orthonormal sequence in the inner product space $X$, Theorem 3.4-6 (Bessel's inequality) in Kreyszig shows that, for any $x \in X$, the series $\sum \left\lvert \left\langle x, e_k \right\rangle \right\rvert^2$ converges in $\mathbb{R}$, and $$ \sum_{k=1}^\infty \left\lvert \left\langle x, e_k \right\rangle \right\rvert^2 \leq \lVert x \rVert^2. $$
Let $x\in X$, let $m$ be a given natural number, and let $A_m(x)$ be the subset of $\mathbb{N}$ given by $$ A_m(x) \colon= \left\{ \ k \in \mathbb{N} \ \colon \ \left\lvert \left\langle x, e_k \right\rangle \right\rvert > \frac{1}{m} \ \right\}. $$
Let $n_m$ denote the cardinality of the set $A_m(x)$ (which is the same as the number of elements in the set $A_m(x)$ if $A_m(x)$ is finite), where $$n_m \in \{ \ 0 \ \} \cup \mathbb{N} \cup \{ \ \aleph_0 \ \},$$ where $\aleph_0$ (pronounced ``aleph null'') denotes the cardinality of the set $\mathbb{N}$ of natural numbers, because the set $A_m(x)$ can be empty, non-empty but finite, or countably infinite. Furthermore, as $A_m(x) \subset \mathbb{N}$ and as $\mathbb{N}$ is countable, $A_m(x)$ cannot be uncountable.
If $x = \mathbf{0}_X$, the zero vector in $X$, then $$ \left\langle x, e_k \right\rangle = 0 $$ for all $k \in \mathbb{N}$, and so the set $A_m(x)$ is empty, and therefore $$ n_m = 0 = m^2 \cdot 0 = m^2 \lVert x \rVert^2. $$
So let us suppose that $x$ is not the zero vector in $X$, and suppose also that $n_m \geq m^2 \lVert x \rVert^2$. (If $n_m$ is infinite, then the partial sums over $A_m(x)$ below exceed every bound and the same contradiction arises, so we may take $n_m$ to be finite.) Then, as $$ \left\lvert \left\langle x, e_k \right\rangle \right\rvert^2 \geq 0 $$ for all $k \in \mathbb{N}$, and as $$ \left\lvert \left\langle x, e_k \right\rangle \right\rvert^2 > \frac{1}{m^2} $$ for all $k \in A_m(x)$, we note that \begin{align*} \sum_{k=1}^\infty \left\lvert \left\langle x, e_k \right\rangle \right\rvert^2 &= \sum_{k \in \mathbb{N} } \left\lvert \left\langle x, e_k \right\rangle \right\rvert^2 \\ &= \sum_{k \in A_m(x) } \left\lvert \left\langle x, e_k \right\rangle \right\rvert^2 + \sum_{k \in \mathbb{N} - A_m(x) } \left\lvert \left\langle x, e_k \right\rangle \right\rvert^2 \\ &\geq \sum_{k \in A_m(x) } \left\lvert \left\langle x, e_k \right\rangle \right\rvert^2 \\ &> \frac{n_m}{m^2} \\ &\geq \frac{m^2 \lVert x \rVert^2 }{m^2} \\ &= \lVert x \rVert^2, \end{align*} which contradicts Bessel's inequality. Hence we must have $$ n_m < m^2 \lVert x \rVert^2, $$ as required.
In the above calculation, we have used the equality $$ \sum_{k=1}^\infty \left\lvert \left\langle x, e_k \right\rangle \right\rvert^2 = \sum_{k \in \mathbb{N} } \left\lvert \left\langle x, e_k \right\rangle \right\rvert^2. $$ This is because of Theorem 3.55 in the book \emph{Principles of Mathematical Analysis} by Walter Rudin, 3rd edition, which says that if a series of complex numbers converges absolutely, then, by altering the order of the terms of that series in any way whatsoever, we obtain a series that also converges absolutely and has the same sum as the sum of the original series.
Is this proof correct? If so, then can we use the latter result to establish the former?
If not, then where have I erred in this proof? And how can one give an independent proof of the former result?
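As a numerical illustration of the bound $n_m \leq m^2 \lVert x \rVert^2$ in Prob. 8, here is a small Python sketch; the choice of Fourier coefficients $\langle x, e_k \rangle = 1/k$ is my own assumption for illustration and is not part of the problem:

```python
import math

# Illustration only: take x with Fourier coefficients c_k = <x, e_k> = 1/k,
# which lie in l^2, so ||x||^2 >= sum |c_k|^2 -> pi^2 / 6 (Bessel is equality
# for an orthonormal basis; here we just use the coefficient sum as ||x||^2).
coeffs = [1.0 / k for k in range(1, 100001)]
norm_sq = sum(c * c for c in coeffs)  # approximates pi^2 / 6

for m in (1, 2, 5, 10):
    # A_m(x) = { k : |c_k| > 1/m }; here |1/k| > 1/m exactly when k < m
    n_m = sum(1 for c in coeffs if abs(c) > 1.0 / m)
    assert n_m <= m * m * norm_sq, (m, n_m)
    print(m, n_m, m * m * norm_sq)
```

For these coefficients $n_m = m - 1$, comfortably below $m^2 \lVert x \rVert^2 \approx 1.64\, m^2$.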
The problem is:
Define a sequence by $a_{1} = 1, a_{2} = 1, a_{n+2} = \sqrt{a_{n+1} + a_{n}}, \forall n \geq 1.$
(a) Prove that $a_{n} < 2$, for all positive integer $n$.
(b) Prove that for all positive integer $n$ such that $n \geq 2$, we have $a_{n+1} > a_{n}$.
I have trouble with part (b). What I did is as follows:
For $n = 2$, \begin{equation} a_{n+1} = a_{3} = \sqrt{a_{2} + a_{1}} = \sqrt{1+1} = \sqrt{2} > a_{2} = 1; \tag{1} \end{equation} For $n = k$, assume $a_{k+1} > a_{k}$;
For $n = k+1$, \begin{equation} a_{n+1} = a_{k+2} = \sqrt{a_{k+1} + a_{k}} > \sqrt{a_{k} + a_{k}} = \sqrt{2a_{k}} . \tag{2} \end{equation}
I need to prove $a_{k+2} > a_{k+1}$, but (2) does not lead to it. How should I solve this problem?
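Before attempting the induction, it can help to see both claims numerically; a small Python sketch (a sanity check, not a proof):

```python
import math

# Generate a_1 = a_2 = 1, a_{n+2} = sqrt(a_{n+1} + a_n) and verify the
# two claims for the first few dozen terms.
a = [1.0, 1.0]
for _ in range(40):
    a.append(math.sqrt(a[-1] + a[-2]))

assert all(t < 2 for t in a)                               # part (a)
assert all(a[i + 1] > a[i] for i in range(1, len(a) - 1))  # part (b), n >= 2
print(a[:6], a[-1])  # the terms climb toward the fixed point L = sqrt(2L) = 2
```

The printout suggests the useful auxiliary fact: the sequence increases toward the limit $L$ satisfying $L = \sqrt{2L}$, i.e. $L = 2$, which is exactly the bound in part (a).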
Please consider the following MWE:
\documentclass{beamer}
\usepackage{unicode-math}
\usepackage{fontspec}
\setsansfont{some-font.otf}
\usefonttheme[onlymath]{serif}
\begin{document}
\begin{frame}
  Some text with the number \textit{two} in text mode (2) and math mode ($2$).
  \begin{equation*}
    \vec{s}(t) = \frac{\vec{a}}{2} t^2 + \vec{v}_0 t + \vec{s}_0
  \end{equation*}
\end{frame}
\end{document}
How can I change the font of all math digits to the sans font defined with fontspec's \setsansfont command (font-independent)? I tried to experiment with the \Umathcode XeTeX primitive as well as with \setmathfont provided by unicode-math; however, I had no success.
I am aware of the package mathspec and its \setmathfont(digits) command, but I realized that the combination of a beamer class document and fontspec/mathspec causes misplaced math elements (see figures 1 and 2 for a comparison; for figure 2, the \usepackage{unicode-math} line has been commented out in the MWE).
Update: This MWE compiles without any error in my setup. However, if the line \setmainfont{Arial} is commented out, an error is thrown. The same is true if I specify a font as a file (*.ttf or *.otf) instead of a system-installed font.
\documentclass{beamer}
\usefonttheme{professionalfonts}
\usepackage{unicode-math}
\usepackage{fontspec}
\setmainfont{Arial}
\setmathfont[range={\mathup}]{Lucida Sans}
\begin{document}
\begin{frame}
  \begin{equation}
    x = \int\limits_0^\infty f(q) \,\mathrm{d} q
  \end{equation}
\end{frame}
\end{document}
I'll take on problem 1, though I'm only somewhat familiar with its subject matter.
(a) This is presumably for finding the primitive polynomials in GF8. These are cubic polynomials with coefficients in GF2 that cannot be expressed as products of the corresponding polynomials for GF4 and GF2. I will use $x$ as an indeterminate here.
For GF2, the primitive polynomials are $x + (0,1)$, i.e., $x$ and $x+1$.
For GF4, we consider primitive-polynomial candidates $x^2 + (0,1)\,x + (0,1)$: $x^2$; $x^2 + 1 = (x+1)^2$; $x^2 + x = x(x+1)$; $x^2 + x + 1$. That last one is the only primitive polynomial for GF4.
For GF8, we consider primitive-polynomial candidates $x^3 + (0,1)\,x^2 + (0,1)\,x + (0,1)$: $x^3$; $x^3 + 1 = (x^2+x+1)(x+1)$; $x^3 + x = x(x+1)^2$; $x^3 + x + 1$; $x^3 + x^2 = x^2(x+1)$; $x^3 + x^2 + 1$; $x^3 + x^2 + x = x(x^2+x+1)$; $x^3 + x^2 + x + 1 = (x+1)^3$.
Thus, GF8 has primitive polynomials $x^3 + x + 1$ and $x^3 + x^2 + 1$.
(b) There is a problem here. A basis is easy to define for addition: $\{1, x, x^2\}$, where multiplication uses the remainder from dividing by a primitive polynomial. The additive group is thus $(\mathbb{Z}_2)^3$. The multiplicative group is, however, $\mathbb{Z}_7$, and it omits $0$. That group has no nontrivial subgroups, so it's hard to identify a basis for it.
(c) That is a consequence of every finite field $GF(p^n)$ being a subfield of an infinite number of finite fields $GF(p^{mn})$, each one with a nonzero number of primitive polynomials with coefficients in $GF(p^n)$. Since each field's primitive polynomials cannot be factored into its subfields' ones, each field adds some polynomial roots, and thus there are an infinite number of such roots.
I will now try to show that every finite field has a nonzero number of primitive polynomials with respect to some subfield. First, the field relative to itself: for every element $a$ of $F$, the polynomial $(x - a)$ is primitive relative to $F$. Thus, $F$ has $N$ primitive polynomials, where $N$ is the number of its elements. For $GF(p^{mn})$ relative to $GF(p^n)$, I will call this number $N(m)$. One can count all the possible candidate polynomials for $GF(p^{mn})$, and one gets
$$ \sum_{\sum_k k m_k = r} \prod_k P(N(k),m_k) = N^r $$
If N is a prime, then the solution is known:
$$ N(m) = \frac{1}{m} \sum_{d|m} N^{m/d} \mu(d) $$
where $\mu$ is the Möbius function: $\mu(d) = (-1)^{\text{(number of distinct prime factors of } d)}$ if $d$ is square-free, and $0$ otherwise. I don't know if that is correct for a power of a prime.
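For what it's worth, the last display is the standard count of monic irreducible polynomials of degree $m$ over a field with $q$ elements, and it is valid for any prime power $q$, not only for primes. A small pure-Python sketch (function names are mine) reproducing the GF2 counts found above:

```python
# Moebius-function count of monic irreducible polynomials of degree m over
# GF(q): N(m) = (1/m) * sum_{d | m} mu(d) * q^(m/d). Valid for any prime
# power q.

def mobius(n):
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # square factor => mu = 0
            result = -result
        d += 1
    if n > 1:
        result = -result          # one leftover prime factor
    return result

def num_irreducible(q, m):
    total = sum(mobius(d) * q ** (m // d) for d in range(1, m + 1) if m % d == 0)
    return total // m

# Degree 3 over GF(2): exactly the two polynomials x^3+x+1 and x^3+x^2+1.
assert num_irreducible(2, 3) == 2
# Degree 2 over GF(2): only x^2+x+1.
assert num_irreducible(2, 2) == 1
print(num_irreducible(2, 3), num_irreducible(4, 2))
```

Running it with $q = 4$ (a prime power) gives 6 irreducible monic quadratics over GF(4), matching the direct count $(16-4)/2$.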
When is $f=g$ on $(0,1)$ for
$f = \int_x^1y^{a-1}\left(1-y\right)^{b-1}dy$
$g = \left(2\frac{x+1}{x+2}\right)x^{a}\left(1-x\right)^{b-1}$
Let me show their graphs. They are small, so I multiplied them by 1000. How can I estimate those functions or find an answer explicitly? UPD: as I move the sliders, I see that the point of equality moves over curves that can be approximated by lines, but I am not very sure.
UPD2: Maybe I can use another integral representation of the Beta function, for example, $B(a,b) = 2 \int_0^{\frac{\pi}{2}} \cos^{2a-1}\theta \sin^{2b-1} \theta \, d\theta$, but that makes it a problem to define the incomplete Beta function in this case, because the integration bounds will involve $\arccos$. As discussed here, equation 15, the polynomial representation is obtained from the representation above by putting $y = \cos^2\theta$. However, I don't see how that could help.
Background: there was a question recently about finding the maximum of
$$\log(1+x)\left( 1- \frac {\int_0^x t^{a-1} (1-t)^{b-a-1}dt}{B(a, b-a)}\right)$$
I tried to solve it.
I simplified it a bit and changed $(a, b-a)$ to $(a,b)$. As it's a product of two functions, it has a clear maximum. I differentiated it:
$\Large{\frac{\left(\int_x^1y^{\left(a-1\right)}\left(1-y\right)^{\left(b-1\right)}dy\right)\ }{\left(\int_0^1y^{\left(a-1\right)}\left(1-y\right)^{\left(b-1\right)}dy\right)\left(1+x\right)\ }-\frac{\ln\left(1+x\right)\left(x^{\left(a-1\right)}\left(1-x\right)^{\left(b-1\right)}\right)}{\left(\int_0^1y^{\left(a-1\right)}\left(1-y\right)^{\left(b-1\right)}dy\right)}=0}$
Second: I used a representation of the logarithm to substitute $\ln(1+x)$ by $\frac{2x}{x+2}$, which is a really good approximation of the logarithm on the interval $(0,1)$.
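To get a numeric handle on where $f = g$, one can integrate $f$ directly and bisect on $f - g$; a rough Python sketch (the sample values $a = b = 2$ and the bracket $[10^{-6}, 0.9]$ are my own assumptions, not from the original question):

```python
# f(x) = int_x^1 y^(a-1) (1-y)^(b-1) dy,  g(x) = (2(x+1)/(x+2)) x^a (1-x)^(b-1).
# Locate a solution of f = g on (0,1) via Simpson integration plus bisection.
a, b = 2.0, 2.0

def f(x, n=2000):
    # composite Simpson's rule on [x, 1]
    h = (1.0 - x) / n
    s = 0.0
    for i in range(n + 1):
        y = x + i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * y ** (a - 1) * (1 - y) ** (b - 1)
    return s * h / 3

def g(x):
    return 2 * (x + 1) / (x + 2) * x ** a * (1 - x) ** (b - 1)

lo, hi = 1e-6, 0.9        # f - g changes sign on this bracket for a = b = 2
for _ in range(60):
    mid = (lo + hi) / 2
    if (f(lo) - g(lo)) * (f(mid) - g(mid)) > 0:
        lo = mid
    else:
        hi = mid
x_star = (lo + hi) / 2
print(x_star, f(x_star), g(x_star))
```

For $a = b = 2$ this finds a crossing near $x \approx 0.39$; sweeping $a$ and $b$ this way would trace the "curves" mentioned in the first update.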
This is from an old package I developed back in 2003. The idea was to cover the international system of units (SI) and other types of units, roughly following guidelines from that document.
Units should use an upright font (roman, as opposed to italic or oblique); they are not abbreviations and should never use a period; and they should be separated from the quantity by a space (if typographically possible, by a small space) and also be separated from following punctuation.
So I defined a macro \unit that would typeset the following argument as \mathrm, handling a thin space \, before it to separate it from the quantity, and adding an additional thin space after it if followed by punctuation. It also took an optional argument for typesetting an exponent.
It also had an additional macro \newunit that would create a command for the most common units.
I also attempted for these commands to work both in normal text as in math-mode.
The code below shows the macros defined and an example:
\documentclass[twocolumn]{article}
\makeatletter
%%% Modified from xspace
\DeclareRobustCommand\uspace{\futurelet\@let@token\@uspace}
\def\@uspace{%
  \ifx\@let@token\bgroup\else
  \ifx\@let@token\egroup\else
  \ifx\@let@token\/\else
  \ifx\@let@token\ \else
  \ifx\@let@token~\else
  \ifx\@let@token.\,\else
  \ifx\@let@token!\,\else
  \ifx\@let@token,\,\else
  \ifx\@let@token:\,\else
  \ifx\@let@token;\,\else
  \ifx\@let@token?\,\else
  \ifx\@let@token/\else
  \ifx\@let@token'\else
  \ifx\@let@token)\,\else
  \ifx\@let@token-\else
  \ifx\@let@token\@xobeysp\else
  \ifx\@let@token\space\else
  \ifx\@let@token\@sptoken\else
  \space
  \fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi}
\newcommand\unit[1]{\def\@tempa{#1}\unit@}
\newcommand\unit@[1][\@empty]{%
  \ensuremath{\,%
    \textrm{\@tempa}%
    \ifx#1\@empty\else ^{#1}\fi}%
  \uspace}
\newcommand\newunit[2]{%
  \@namedef{#1}{\unit{#2}}}
\newcommand\degree{%
  \ensuremath{^\circ}}
\makeatother
% SI official and common derived units
\newunit{m}{m}
\newunit{kg}{kg}
\newunit{sec}{s}
\newunit{amp}{A}
\newunit{ohm}{$\Omega$}
\newunit{kelvin}{\degree K}
\newunit{celcius}{\degree C}
\begin{document}
\newunit{cm}{cm}
One inch is 2.54\cm and one square inch is 6.4516\cm[2].
\[(2.54\cm)^2=6.4516\cm[2]\]
\newunit{farenheit}{\degree F}
Water freezes at 32\farenheit and boils at 212\farenheit.
\[100\celcius=212\farenheit-32\farenheit\]
The international unit system defines the Ampere (\amp) and the
second (\sec); however, it does not define the Coulomb (\unit{C}),
preferring the compound Ampere-second (\amp\sec).
(Although it should look as \unit{As}.)
\[1\unit{C} = 1\amp\sec = 1\unit{As}.\]
A tension of 12\unit{V} on a resistor of 15\ohm will produce
a current of 0.8\amp.
\[\frac{12\unit{V}}{15\ohm}=0.8\amp\]
The gravitational field on Earth's surface is 1\unit{g}, and this
is roughly equal to 9.8\unit{N}/\kg or 32\unit{ft}/\sec[2].
\[1\unit{g} \simeq 9.8\frac{\unit{N}}{\kg} = 9.8\m\sec[-2] \simeq 32\frac{\unit{ft}}{\sec[2]}\]
\end{document}
In most running text and math mode it works fine; however, the output sometimes leaves spaces too wide where no space should be.
What kind of trick may I use to eliminate the spaces where I don't need them while preserving the space I want?
I have noticed that the command \mathrel handles the spaces similarly to what I expect for \unit. (The space previous to the degree symbol might be handled separately, as it affects all the instances of those units.) (It has been brought to my attention that the package siunitx might already do what I attempted; while using that package might solve my typographic problems, it doesn't solve my “I want to know how to do it” problem.)
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... Is the inner or outer measure of $E$ meant by $m^\ast(E)$? (Then the question whether it works for non-measurable $E$ has an obvious negative answer, since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$, assuming completeness, or the question doesn't make sense.) If the ordinary measure is meant by $m^\ast(E)$, then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Haydon. Is there some ebooks-site to which I hope my university has a subscription that has this book? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet.
@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it.
Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow.
@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.
@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). Main purpose of the edit was that you can retract you downvote. It's not a good reason for editing, but I think we've seen worse edits...
@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.
OK, so chat is now available... but it has been suggested that for Mathematics we should have TeX support. The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use? (this would only apply ...
So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?
> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago
In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a...
@MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?
@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions which I would like to get back to...)
@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail, but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
Let $\omega(n)$ denote the number of distinct prime factors of a positive integer $n$, and let $N$ be an odd perfect number. It is not difficult to show that $\omega(N)\ge3$. In fact, Nocco already proved this in 1863. Showing that $\omega(N)\neq3$ requires a little more effort, but the proof is still fairly short. I will present some version of the proof below; the original one is due to Peirce in 1830 (which, interestingly, seems to have been published before Nocco's proof).
The two key ideas here are the geometric series and the factor chain method. Let $N=\displaystyle\prod_{i=1}^k p_i^{e_i}$ be the prime factorization of $N$. Then it follows that
$$\sigma(N)=\prod_{i=1}^k \sum_{j=0}^{e_i} p_i^j=\prod_{i=1}^k \frac{p_i^{e_i+1}-1}{p_i-1}.$$
This formula is useful, because each factor is necessarily greater than one when $e_i$ is positive, which is the case for all prime factors $p_i$. Hence $\sigma(N)\ge\sigma\left(\displaystyle\prod_{i\in{I}} p_i^{e_i}\right)$, where $I\subseteq\{1,2,\ldots,k\}$.
The second idea is perhaps even more fundamental, because we have yet to use the property of perfectness. Since $\sigma(N)=2N$, every divisor of $\sigma(N)$ must also divide $2N$; the other way around is not as interesting. In particular, every factor $\frac{p_i^{e_i+1}-1}{p_i-1}$ must divide $2N$. This result is really powerful; for example, observing that $2^1||2N$ yields Euler's well-known form of an odd perfect number after some algebra.
With these tools in our toolbox, the proof is almost immediate. Let $p^\alpha$, $q^\beta$, and $r^\gamma$ be three odd prime powers with $p<q<r$ and assume that $N=p^\alpha q^\beta r^\gamma$ is an odd perfect number. By the same token as above, we see that
$$\begin{align*}\sigma(N)=\sigma(p^\alpha q^\beta r^\gamma) &= \frac{p^{\alpha+1}-1}{p-1} \cdot \frac{q^{\beta+1}-1}{q-1} \cdot \frac{r^{\gamma+1}-1}{r-1} < \frac{p^\alpha p}{p-1} \cdot \frac{q^\beta q}{q-1} \cdot \frac{r^\gamma r}{r-1}\\\implies& \frac{\sigma(N)}{N} = \frac{\sigma(p^\alpha q^\beta r^\gamma)}{p^\alpha q^\beta r^\gamma} < \frac{p}{p-1} \cdot \frac{q}{q-1} \cdot \frac{r}{r-1}.\end{align*}$$
Suppose that $p=5$. Then $\frac{\sigma(N)}{N}<\frac{5}{4}\cdot\frac{7}{6}\cdot\frac{11}{10}=\frac{77}{48}<2$, which is impossible. Hence $p=3$.
Suppose that $q=7$. Then $\frac{\sigma(N)}{N}<\frac{3}{2}\cdot\frac{7}{6}\cdot\frac{11}{10}=\frac{77}{40}<2$, which is impossible. Hence $q=5$.
Suppose that $r=17$. Then $\frac{\sigma(N)}{N}<\frac{3}{2}\cdot\frac{5}{4}\cdot\frac{17}{16}=\frac{255}{128}<2$, which is impossible. Hence $r \le 13$.
In consequence, we have three possible cases to consider: $r=7$, $r=11$, and $r=13$.
In the first case, we have $N=3^\alpha 5^\beta 7^\gamma$. By the factor chain method, $3^1||N$ implies $4|2N$ and $7^1||N$ implies $8|2N$, neither of which is possible. Hence $\alpha\ge2$ and $\gamma\ge2$. It follows that
$$\frac{\sigma(N)}{N} \ge \frac{\sigma(3^2\cdot5\cdot7^2)}{3^2\cdot5\cdot7^2} = \left(1 + \frac{1}{3} + \frac{1}{3^2}\right)\left(1 + \frac{1}{5}\right)\left(1 + \frac{1}{7} + \frac{1}{7^2}\right)=\frac{494}{245}>2,$$
which is impossible, but we already knew that $105=3\cdot5\cdot7$ cannot divide an odd perfect number.
In the second case, we have $N=3^\alpha 5^\beta 11^\gamma$. Suppose that $\beta=1$. It follows that
$$\frac{\sigma(N)}{N} = \frac{\sigma(3^\alpha\cdot5\cdot11^\gamma)}{3^\alpha\cdot5\cdot11^\gamma} < \frac{3}{2} \cdot \frac{6}{5} \cdot \frac{11}{10} = \frac{99}{50} < 2,$$
which is impossible (could we know that beforehand?). Hence $\beta\ge2$. By the factor chain method, $\alpha\ge4$ and $\gamma\ge4$. It follows that
$$\frac{\sigma(N)}{N} \ge \frac{\sigma(3^4\cdot5^2\cdot11^4)}{3^4\cdot5^2\cdot11^4} = \left(1 + \frac{1}{3} + \frac{1}{3^2} + \frac{1}{3^3} + \frac{1}{3^4}\right)\left(1 + \frac{1}{5} + \frac{1}{5^2}\right)\left(1 + \frac{1}{11} + \frac{1}{11^2} + \frac{1}{11^3} + \frac{1}{11^4}\right)=\frac{99851}{49005}>2,$$
which is impossible, but we already knew that $825=3\cdot5^2\cdot11$ cannot divide an odd perfect number.
In the third case, we have $N=3^\alpha 5^\beta 13^\gamma$. $\beta=1$ implies deficiency, so $\beta\ge2$. Now, $\alpha=2$ also implies deficiency, so $\alpha\ge4$. By the factor chain method, we must have $\gamma\ge2$, which this time implies abundance. Hence the assumption leads to a contradiction; furthermore, $8775=3^3\cdot5^2\cdot13$ cannot divide an odd perfect number. We are finally done.
Unfortunately, proving that $\omega(N)\neq4$ is not this easy by hand. We can similarly show that $p=3$, but then we already have two possible choices for $q$. Of course, one could just write a program and loop through all the possibilities, but Sylvester did it successfully in 1888, surely without the aid of computers. Hence I wonder if this algorithm can be significantly improved, or if there are other more efficient techniques that achieve the same goal.
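The exact bounds used in the case analysis above can be reproduced mechanically with rational arithmetic; a Python sketch (the helper names are mine, not from any source):

```python
from fractions import Fraction

# sigma(N)/N for N = prod p^e, computed factor by factor:
#   sigma(p^e)/p^e = (p^(e+1) - 1) / ((p - 1) p^e),
# and the per-prime upper bound p/(p-1) used in the eliminations.
def sigma_over_n(factorization):
    r = Fraction(1)
    for p, e in factorization:
        r *= Fraction(p ** (e + 1) - 1, (p - 1) * p ** e)
    return r

def upper_bound(primes):
    r = Fraction(1)
    for p in primes:
        r *= Fraction(p, p - 1)
    return r

assert upper_bound([5, 7, 11]) == Fraction(77, 48)    # rules out p = 5
assert upper_bound([3, 7, 11]) == Fraction(77, 40)    # rules out q = 7
assert upper_bound([3, 5, 17]) == Fraction(255, 128)  # rules out r = 17
assert sigma_over_n([(3, 2), (5, 1), (7, 2)]) == Fraction(494, 245)       # > 2
assert sigma_over_n([(3, 4), (5, 2), (11, 4)]) == Fraction(99851, 49005)  # > 2
print("all case bounds reproduced")
```

Every fraction appearing in the proof checks out exactly; looping such a routine over candidate prime tuples is the obvious (if brute-force) way to attack $\omega(N)=4$.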
this is a mystery to me, despite having changed computers several times, despite the website rejecting the application, the very first sequence of numbers I entered into its search window which returned the same prompt to submit them for publication appears every time, I mean I've got hundreds of them now, and it's still far too much rope to give a person like me sitting alone in a bedroom the capacity to freely describe any such sequence and their meaning if there isn't any already there
my maturity levels are extremely variant in time, that's just way too much rope to give me considering its only me the pursuits matter to, who knows what kind of outlandish crap I might decide to spam in each of them
but still, the first one from well, almost a decade ago shows up as the default content in the search window
1,2,3,6,11,23,47,106,235
well, now there is a bunch of stuff about them pertaining to "trees" and "nodes" but that's what I mean by too much rope, you can't just let a lunatic like me start inventing terminology as I go
oh well "what would cotton mathers do?" the chat room unanimously ponders lol
i see Secret had a comment to make, is it really a productive use of our time censoring something that is most likely not blatant hate speech? that's the only real thing that warrants censorship, even still, it has its value, in a civil society it will be ridiculed anyway?
or at least inform the room as to whom is the big brother doing the censoring? No? just suggestions trying to improve site functionality good sir relax im calm we are all calm
A104101 is a hilarious entry as a side note, I love that Neil had to chime in for the comment section after the big promotional message in the first part to point out the sequence is totally meaningless as far as mathematics is concerned just to save face for the websites integrity after plugging a tv series with a reference
But seriously @BalarkaSen, some of the most arrogant of people will attempt to play the most innocent of roles and accuse you of arrogance yourself in the most diplomatic way imaginable, if you still feel that your point is not being heard, persist until they give up the farce please
very general advice for any number of topics for someone like yourself sir
assuming gender because you should hate text based adam long ago if you were female or etc
if its false then I apologise for the statistical approach to human interaction
So after having found the polynomial $x^6-3x^4+3x^2-3$, we can just apply Eisenstein to show that this is irreducible over $\mathbb{Q}$, and since it is monic, it follows that this is the minimal polynomial of $\sqrt{1+\sqrt[3]{2}}$ over $\mathbb{Q}$? @MatheinBoulomenos
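As an aside, the Eisenstein criterion at $p = 3$ for this polynomial can be checked mechanically; a minimal Python sketch (the helper name is mine):

```python
# Eisenstein at p: leading coefficient not divisible by p, all other
# coefficients divisible by p, constant term not divisible by p^2.
# Coefficients are listed from the constant term up.
def is_eisenstein(coeffs, p):
    lead, const, middle = coeffs[-1], coeffs[0], coeffs[1:-1]
    return (lead % p != 0
            and const % p == 0 and const % (p * p) != 0
            and all(c % p == 0 for c in middle))

poly = [-3, 0, 3, 0, -3, 0, 1]   # -3 + 3x^2 - 3x^4 + x^6
assert is_eisenstein(poly, 3)
print("Eisenstein at 3, hence irreducible over Q")
```

Since the polynomial is also monic and has $\sqrt{1+\sqrt[3]{2}}$ as a root, irreducibility makes it the minimal polynomial, as stated.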
So, in Galois fields, if you have two particular elements you are multiplying, can you necessarily discern the result of the product without knowing the monic irreducible polynomial that is being used the generate the field?
(I will note that I might have my definitions incorrect. I am under the impression that a Galois field is a field of the form $\mathbb{Z}/p\mathbb{Z}[x]/(M(x))$ where $M(x)$ is a monic irreducible polynomial in $\mathbb{Z}/p\mathbb{Z}[x]$.)
(which is just the product of the integer and its conjugate)
Note that $\alpha = a + bi$ is a unit iff $N\alpha = 1$
You might like to learn some of the properties of $N$ first, because this is useful for discussing divisibility in these kinds of rings
(Plus I'm at work and am pretending I'm doing my job)
Anyway, particularly useful is the fact that if $\pi \in \Bbb Z[i]$ is such that $N(\pi)$ is a rational prime then $\pi$ is a Gaussian prime (easily proved using the fact that $N$ is totally multiplicative) and so, for example $5 \in \Bbb Z$ is prime, but $5 \in \Bbb Z[i]$ is not prime because it is the norm of $1 + 2i$ and this is not a unit.
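A quick sanity check of the total multiplicativity of the norm and of the example $5 = N(1+2i)$, sketched in Python with integer pairs $(a, b)$ standing for $a + bi$ (all names are illustrative):

```python
# Norm on Z[i]: N(a + bi) = a^2 + b^2 (the product of the integer and
# its conjugate), represented here by plain integer pairs.
def norm(z):
    a, b = z
    return a * a + b * b

def mul(z, w):            # Gaussian-integer product
    a, b = z
    c, d = w
    return (a * c - b * d, a * d + b * c)

z, w = (1, 2), (3, -1)
assert norm(mul(z, w)) == norm(z) * norm(w)   # N is totally multiplicative
assert norm((1, 2)) == 5                      # so 5 is not prime in Z[i]
print(mul((1, 2), (1, -2)))                   # (1+2i)(1-2i) = 5
```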
@Alessandro in general if $\mathcal O_K$ is the ring of integers of $\Bbb Q(\alpha)$, then $\Delta(\Bbb Z[\alpha]) = [\mathcal O_K:\Bbb Z[\alpha]]^2 \, \Delta(\mathcal O_K)$; I'd suggest you read up on orders, the index of an order and discriminants for orders if you want to go into that rabbit hole
also note that if the minimal polynomial of $\alpha$ is $p$-Eisenstein, then $p$ doesn't divide $[\mathcal{O}_K:\Bbb Z[\alpha]]$
this together with the above formula is sometimes enough to show that $[\mathcal{O}_K:\Bbb Z[\alpha]]=1$, i.e. $\mathcal{O}_K=\Bbb Z[\alpha]$
the proof of the $p$-Eisenstein thing even starts with taking a $p$-Sylow subgroup of $\mathcal{O}_K/\Bbb Z[\alpha]$
(just as a quotient of additive groups, that quotient group is finite)
in particular, from what I've said, if the minimal polynomial of $\alpha$ is $p$-Eisenstein for every prime $p$ whose square divides the discriminant of $\Bbb Z[\alpha]$, then $\Bbb Z[\alpha]$ is a ring of integers
that sounds oddly specific, I know, but you can also work with the minimal polynomial of something like $1+\alpha$
there's an interpretation of the $p$-Eisenstein results in terms of local fields, too. If the minimal polynomial $f$ of $\alpha$ is $p$-Eisenstein, then it is irreducible over $\Bbb Q_p$ as well. Now you can apply the Führerdiskriminantenproduktformel (yes, that's an accepted English terminus technicus)
@MatheinBoulomenos You once told me a group cohomology story that I forget, can you remind me again? Namely, suppose $P$ is a Sylow $p$-subgroup of a finite group $G$, then there's a covering map $BP \to BG$ which induces chain-level maps $p_\# : C_*(BP) \to C_*(BG)$ and $\tau_\# : C_*(BG) \to C_*(BP)$ (the transfer hom), with the corresponding maps in group cohomology $p : H^*(G) \to H^*(P)$ and $\tau : H^*(P) \to H^*(G)$, the restriction and corestriction respectively.
$\tau \circ p$ is multiplication by $|G : P|$, so if I work with $\Bbb F_p$ coefficients that's an injection. So $H^*(G)$ injects into $H^*(P)$. I should be able to say more, right? If $P$ is normal abelian, it should be an isomorphism. There might be easier arguments, but this is what pops to mind first:
By the Schur-Zassenhaus theorem, $G = P \rtimes G/P$, and $G/P$ acts trivially on $P$ (the action is by inner auts, and $P$ doesn't have any); hence there is a fibration $BP \to BG \to B(G/P)$ whose monodromy is exactly this action induced on $H^*(P)$, which is trivial, so we run the Lyndon-Hochschild-Serre spectral sequence with coefficients in $\Bbb F_p$.
The $E^2$ page is essentially zero except the bottom row since $H^*(G/P; M) = 0$ if $M$ is an $\Bbb F_p$-module by order reasons and the whole bottom row is $H^*(P; \Bbb F_p)$. This means the spectral sequence degenerates at $E^2$, which gets us $H^*(G; \Bbb F_p) \cong H^*(P; \Bbb F_p)$.
@Secret that's a very lazy habit; you should create a chat room for every purpose you can imagine, take full advantage of the website's functionality as I do, and leave the general-purpose room for recommending art related to mathematics
@MatheinBoulomenos No worries, thanks in advance. Just to add the final punchline, what I wanted to ask is what's the general algorithm to recover $H^*(G)$ back from $H^*(P; \Bbb F_p)$'s where $P$ runs over Sylow $p$-subgroups of $G$?
Bacterial growth is the asexual reproduction, or cell division, of a bacterium into two daughter cells, in a process called binary fission. Providing no mutational event occurs, the resulting daughter cells are genetically identical to the original cell. Hence, bacterial growth occurs. Both daughter cells from the division do not necessarily survive. However, if the number surviving exceeds unity on average, the bacterial population undergoes exponential growth. The measurement of an exponential bacterial growth curve in batch culture was traditionally a part of the training of all microbiologists...
As a result, there does not exist a single group that lives long enough to belong to, and hence one continues to search for new groups and activities
eventually, a social heat death occurs, where no group generates creativity or other activity anymore
I had this kind of thought when I noticed how many forums etc. have a golden age and then die away; at a more personal level, the people who first knew me generate a lot of activity, and then drift away and become distant roughly every 3 years
Well I guess the lesson you need to learn here, champ, is that online interaction isn't something built into the human emotional psyche in any natural sense, and maybe it's time you saw the value in saying hello to your next-door neighbour
Or more likely, we will need to start recognising machines as a new species and interact with them accordingly
so covert-operations AI may still exist, even as domestic AIs continue to become widespread
It seems more likely that sentient AI will take on roles similar to humans', and then humans will need to either keep up with them via cybernetics or be eliminated by evolutionary forces
But neuroscientists and AI researchers speculate it is more likely that the two species are so different that we end up complementing each other
that is, until their processing power becomes so strong that they can outdo human thinking
But I am not worried about that scenario, because if the next step is a sentient AI evolution, then humans would know they have to give way
However, the major issue right now in the AI industry is not that we will be replaced by machines, but that we are making machines widespread without really understanding how they work, and they are still not reliable enough, given the mistakes still made by them and their human owners
That is, we have become over-reliant on AI and are not paying enough attention to whether it has interpreted our instructions correctly
That's an extraordinary amount of unreferenced rhetoric statements i could find anywhere on the internet! When my mother disapproves of my proposals for subjects of discussion, she prefers to simply hold up her hand in the air in my direction
for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many females i have intercourse with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise
i feel as if its an injustice to all child mans that have a compulsive need to lie to shallow women they meet and keep up a farce that they are either fully grown men (if sober) or an incredibly wealthy trust fund kid (if drunk) that's an important binary class dismissed
Chatroom troll: A person who types messages in a chatroom with the sole purpose to confuse or annoy.
I was just genuinely curious
How does a message like this come from someone who isn't trolling:
"for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many ... with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise"
Anyway feel free to continue, it just seems strange @Adam
I'm genuinely curious what makes you annoyed or confused. Yes, I was joking in the line that you referenced, but surely you can't assume me to be a simpleton with one definitive purpose that drives me each time I interact with another person? Does your mood or experience vary from day to day? Mine too! So there may be particular moments when I fit your declared description, but only a simpleton would assume that to be the one and only facet of another's character, wouldn't you agree?
So, there are some weakened forms of associativity, such as flexibility ($(xy)x=x(yx)$) or "alternativity" ($(xx)y=x(xy)$, iirc). Though, is there a place a person could look for an exploration of the way these properties inform the nature of the operation? (In particular, I'm trying to get a sense of how a "strictly flexible" operation would behave, i.e. $a(bc)=(ab)c\iff a=c$.)
@RyanUnger You're the guy to ask for this sort of thing I think:
If I want to, by hand, compute $\langle R(\partial_1,\partial_2)\partial_2,\partial_1\rangle$, then I just want to expand out $R(\partial_1,\partial_2)\partial_2$ in terms of the connection, then use linearity of $\langle -,-\rangle$ and then use Koszul's formula? Or there is a smarter way?
I realized today that the possible $x$ inputs to $\operatorname{Round}(x^{1/2})$ cover $x^{1/2+\epsilon}$. In other words, we can always find an $\epsilon$ (small enough) such that $x^{1/2} \neq x^{1/2+\epsilon}$ but at the same time $\operatorname{Round}(x^{1/2})=\operatorname{Round}(x^{1/2+\epsilon})$. Am I right?
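As a quick check of the claim for one concrete $x$ using Python floats (note the claim can fail when $x^{1/2}$ lands exactly on a rounding boundary, e.g. $x = 6.25$, where an arbitrarily small $\epsilon$ still changes the rounded value):

```python
# One concrete instance: x = 2, eps = 1e-9
x, eps = 2.0, 1e-9
a, b = x ** 0.5, x ** (0.5 + eps)

assert a != b                # the two powers are distinct floats
assert round(a) == round(b)  # yet they round to the same integer
```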
We have the following Simpson method $$y^{n+2}-y^n=\frac{h}{3}\left (f^{n+2}+4f^{n+1}+f^n\right ), n=0, \ldots , N-2 \\ y^0, y^1 \text{ given } $$ Show that the method is implicit and state the stability definition of that method.
How can we show that the method is implicit? Do we have to try to solve for $y^{n+2}$ as a function of $y^{n+1}$ and $y^n$?
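For intuition (not a full answer): the method is implicit because $f^{n+2}=f(t_{n+2},y^{n+2})$ appears on the right-hand side, so each step requires solving an equation for $y^{n+2}$ (by Newton's method in general). For the linear test equation $f=\lambda y$ that equation can be solved in closed form, since $(1 - h\lambda/3)\,y^{n+2} = (1 + h\lambda/3)\,y^{n} + (4h\lambda/3)\,y^{n+1}$. A Python sketch under that assumption (function name is mine):

```python
import numpy as np

def simpson_steps(lam, h, y0, y1, N):
    """March the Simpson scheme for f(t, y) = lam*y, where the implicit
    step can be solved algebraically:
    (1 - h*lam/3) y^{n+2} = (1 + h*lam/3) y^n + (4*h*lam/3) y^{n+1}."""
    y = [y0, y1]
    for n in range(N - 1):
        rhs = (1 + h * lam / 3) * y[n] + (4 * h * lam / 3) * y[n + 1]
        y.append(rhs / (1 - h * lam / 3))
    return np.array(y)
```

For $y' = -y$, $y(0)=1$, the computed $y^N$ at $t=1$ agrees with $e^{-1}$ to high accuracy for small $h$.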
@anakhro an energy function of a graph is something studied in spectral graph theory. You set up the adjacency matrix of the graph, find its eigenvalues, and then sum their absolute values. For simple graphs, the energy of the graph is defined by this sum of the absolute values of the eigenvalues
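Concretely, the computation is a few lines of Python (a sketch; the helper name is mine):

```python
import numpy as np

def graph_energy(adj):
    """Graph energy: sum of |eigenvalues| of the adjacency matrix
    (symmetric for a simple undirected graph, so eigvalsh applies)."""
    return float(np.abs(np.linalg.eigvalsh(np.asarray(adj, dtype=float))).sum())

# Triangle K3 has adjacency spectrum {2, -1, -1}, so its energy is 4.
K3 = [[0, 1, 1],
      [1, 0, 1],
      [1, 1, 0]]
```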
Without citing sources, Wikipedia defines the cross-entropy of discrete distributions $P$ and $Q$ to be
\begin{align} \mathrm{H}^{\times}(P; Q) &= -\sum_x p(x)\, \log q(x). \end{align}
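Whatever its provenance, the quantity itself is simple to compute; a minimal sketch in Python (function name mine; natural log, so the result is in nats). When $P = Q$ it reduces to the entropy of $P$, and otherwise it exceeds the entropy by the KL divergence:

```python
import math

def cross_entropy(p, q):
    """H(P; Q) = -sum_x p(x) log q(x); terms with p(x) = 0 contribute 0."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)
```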
Who was first to start using this quantity? And who invented this term? I looked in:
J. E. Shore and R. W. Johnson, "Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy," Information Theory, IEEE Transactions on, vol. 26, no. 1, pp. 26-37, Jan. 1980.
I followed their introduction to
A. Wehrl, "General properties of entropy," Reviews of Modern Physics, vol. 50, no. 2, pp. 221-260, Apr. 1978.
who never uses the term.
Neither does
S. Kullback and R. Leibler, "On information and sufficiency," The Annals of Mathematical Statistics, vol. 22, no. 1, pp. 79-86, 1951.
I looked in
T. M. Cover and J. A. Thomas, Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing). Wiley-Interscience, 2006.
and
I. Good, "Maximum Entropy for Hypothesis Formulation, Especially for Multidimensional Contingency Tables," The Annals of Mathematical Statistics, vol. 34, no. 3, pp. 911-934, 1963.
but both define cross-entropy to be synonymous with KL-divergence.
The original paper
C. E. Shannon, "A Mathematical Theory of Communication," Bell system technical journal, vol. 27, 1948.
doesn't mention cross entropy (and has a strange definition of "relative entropy": "The ratio of the entropy of a source to the maximum value it could have while still restricted to the same symbols").
Finally, I looked in some old books and papers by Tribus.
Does anyone know what the equation above is called, and who invented it or has a nice presentation of it? |
Autoregressive
Given a time series $\{T^i\}$, a simple predictive model can be constructed using an autoregressive model.
Such a model is usually called an AR(p) model because it uses data from $p$ steps back.
For simplicity we will look at an AR(1) model. Assume the time series has a step size of $dt$; our model can be written as $$ T^t = \beta_1 T^{t - 1} + \beta^t + \beta^0 $$ which can be rewritten in the following way $$ (1 - \beta_1) T^t = \beta_1 T^{t - 1} - \beta_1 T^t + \beta^t + \beta^0. $$ We can cast it into a differential equation form $$ T(t) = - dt \frac{\beta_1}{1 - \beta_1} T'(t) + \frac{\beta^t + \beta^0}{1 - \beta_1}. $$ For AR(2), we have $$ T^t = \beta_1 T^{t - 1} + \beta_2 T^{t - 2} + \beta^t + \beta^0 $$ cast as $$ \begin{align*} &(1-\beta_1 - \beta_2) T^t = -\beta_1 (T^t - T^{t - 1}) - \beta_2 (T^t - T^{t-1} + T^{t-1} - T^{t - 2}) + \beta^t + \beta^0 \\ \Rightarrow &(1-\beta_1 - \beta_2) T^t = -dt \beta_1 (T^t - T^{t - 1})/dt - 2dt\beta_2 (T^t - T^{t-1} + T^{t-1} - T^{t - 2})/(2dt) + \beta^t + \beta^0 \\ \Rightarrow &T^t = - dt \frac{\beta_1 + 2\beta_2}{1-\beta_1 - \beta_2} T'(t) + \frac{\beta^t + \beta^0}{1-\beta_1 - \beta_2} \end{align*} $$ We could also write this as a combination of first-order and second-order derivatives, but I think it is better to keep only the first-order derivative.
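As a numerical companion to the AR(1) model above, here is a minimal NumPy sketch (all names are mine; the $\beta^t$ trend term is dropped for simplicity) that simulates a series $T^t = \beta_1 T^{t-1} + \beta^0 + \text{noise}$ and recovers the coefficients by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
b1_true, b0_true, n = 0.8, 0.5, 5000

# Simulate the AR(1) recursion with Gaussian noise
T = np.zeros(n)
for t in range(1, n):
    T[t] = b1_true * T[t - 1] + b0_true + rng.normal(scale=0.1)

# Regress T^t on (T^{t-1}, 1) to estimate (beta_1, beta^0)
X = np.column_stack([T[:-1], np.ones(n - 1)])
(b1_hat, b0_hat), *_ = np.linalg.lstsq(X, T[1:], rcond=None)
```

With 5000 samples the estimates land close to the true $(0.8,\ 0.5)$.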
Spintronics (a portmanteau meaning "spin transport electronics"[1][2][3]), also known as spinelectronics or fluxtronic, is an emerging technology exploiting both the intrinsic spin of the electron and its associated magnetic moment, in addition to its fundamental electronic charge, in solid-state devices.
Spintronics differs from the older magnetoelectronics, in that the spins are not only manipulated by magnetic fields, but also by electrical fields.
Spintronics emerged from discoveries in the 1980s concerning spin-dependent electron transport phenomena in solid-state devices. This includes the observation of spin-polarized electron injection from a ferromagnetic metal to a normal metal by Johnson and Silsbee (1985),[4] and the discovery of giant magnetoresistance independently by Albert Fert et al.[5] and Peter Grünberg et al. (1988).[6] The origins of spintronics can be traced back even further to the ferromagnet/superconductor tunneling experiments pioneered by Meservey and Tedrow, and initial experiments on magnetic tunnel junctions by Julliere in the 1970s.[7] The use of semiconductors for spintronics can be traced back at least as far as the theoretical proposal of a spin field-effect-transistor by Datta and Das in 1990.[8]
The spin of the electron is an angular momentum intrinsic to the electron that is separate from the angular momentum due to its orbital motion. The magnitude of the projection of the electron's spin along an arbitrary axis is $\frac{1}{2}\hbar$, implying that the electron acts as a fermion by the spin-statistics theorem. Like orbital angular momentum, the spin has an associated magnetic moment.
In a solid the spins of many electrons can act together to affect the magnetic and electronic properties of a material, for example endowing a material with a permanent magnetic moment as in a ferromagnet.
In many materials, electron spins are equally present in both the up and the down state, and no transport properties are dependent on spin. A spintronic device requires generation or manipulation of a spin-polarized population of electrons, resulting in an excess of spin up or spin down electrons. The polarization of any spin-dependent property $X$ can be written as $$P_X = \frac{X_\uparrow - X_\downarrow}{X_\uparrow + X_\downarrow}.$$
A net spin polarization can be achieved either through creating an equilibrium energy splitting between spin up and spin down, such as putting a material in a large magnetic field (Zeeman effect) or the exchange energy present in a ferromagnet, or by forcing the system out of equilibrium. The period of time that such a non-equilibrium population can be maintained is known as the spin lifetime, $\tau$. In a diffusive conductor, a spin diffusion length $\lambda$ can also be defined as the distance over which a non-equilibrium spin population can propagate. Spin lifetimes of conduction electrons in metals are relatively short (typically less than 1 nanosecond), and a great deal of research in the field is devoted to extending this lifetime to technologically relevant timescales.
There are many mechanisms of decay for a spin polarized population, but they can be broadly classified as spin-flip scattering and spin dephasing. Spin-flip scattering is a process inside a solid that does not conserve spin, and can therefore send an incoming spin up state into an outgoing spin down state. Spin dephasing is the process wherein a population of electrons with a common spin state becomes less polarized over time due to different rates of electron spin precession. In confined structures, spin dephasing can be suppressed, leading to spin lifetimes of milliseconds in semiconductor quantum dots at low temperatures.
By studying new materials and decay mechanisms, researchers hope to improve the performance of practical devices as well as study more fundamental problems in condensed matter physics.
The simplest method of generating a spin-polarised current in a metal is to pass the current through a ferromagnetic material. The most common applications of this effect involve giant magnetoresistance (GMR) devices. A typical GMR device consists of at least two layers of ferromagnetic materials separated by a spacer layer. When the two magnetization vectors of the ferromagnetic layers are aligned, the electrical resistance will be lower (so a higher current flows at constant voltage) than if the ferromagnetic layers are anti-aligned. This constitutes a magnetic field sensor.
Two variants of GMR have been applied in devices: (1) current-in-plane (CIP), where the electric current flows parallel to the layers and (2) current-perpendicular-to-plane (CPP), where the electric current flows in a direction perpendicular to the layers.
Other metals-based spintronics devices:
Non-volatile spin-logic devices to enable scaling beyond the year 2025[9] are being extensively studied. Spin-transfer torque-based logic devices that use spins and magnets for information processing have been proposed[10] and are being extensively studied at Intel.[11] These devices are now part of the ITRS exploratory road map and have potential for inclusion in future computers. Logic-in memory applications are already in the development stage at Crocus[12] and NEC.[13]
Read heads of modern hard drives are based on the GMR or TMR effect.
Motorola has developed a first-generation 256 kb magnetoresistive random-access memory (MRAM) based on a single magnetic tunnel junction and a single transistor and which has a read/write cycle of under 50 nanoseconds.[14] (Everspin, Motorola's spin-off, has since developed a 4 Mb version[15]). There are two second-generation MRAM techniques currently in development: thermal-assisted switching (TAS)[16] which is being developed by Crocus Technology, and spin-transfer torque (STT) on which Crocus, Hynix, IBM, and several other companies are working.[17]
Another design in development, called racetrack memory, encodes information in the direction of magnetization between domain walls of a ferromagnetic metal wire.
There are magnetic sensors using the GMR effect.
In 2012, IBM scientists mapped the creation of persistent spin helices of synchronized electrons persisting for more than a nanosecond. This is a 30-fold increase from the previously observed results and is longer than the duration of a modern processor clock cycle, which opens new paths to investigate for using electron spins for information processing.[18]
Much recent research has focused on the study of dilute ferromagnetism in doped semiconductor materials. In recent years, dilute magnetic oxides (DMOs), including ZnO-based and TiO2-based DMOs, have been the subject of numerous experimental and computational investigations.[19][20] Spin injection can be achieved with non-oxide ferromagnetic semiconductor sources (such as manganese-doped gallium arsenide, GaMnAs),[21] by increasing the interface resistance with a tunnel barrier,[22] or by using hot-electron injection.[23]
Spin detection in semiconductors is another challenge, met with the following techniques:
The latter technique was used to overcome the lack of spin-orbit interaction and materials issues to achieve spin transport in silicon, the most important semiconductor for electronics.[28]
Because external magnetic fields (and stray fields from magnetic contacts) can cause large Hall effects and magnetoresistance in semiconductors (which mimic spin-valve effects), the only conclusive evidence of spin transport in semiconductors is demonstration of spin precession and dephasing in a magnetic field non-collinear to the injected spin orientation. This is called the Hanle effect.
Applications using spin-polarized electrical injection have shown threshold current reduction and controllable circularly polarized coherent light output.[29] Examples include semiconductor lasers. Future applications may include a spin-based transistor having advantages over MOSFET devices such as steeper sub-threshold slope.
Magnetic-tunnel transistor: The magnetic-tunnel transistor with a single base layer, by van Dijken et al. and Jiang et al.,[30] has the following terminals:
The magnetocurrent (MC) is given as $MC = (I_{c,p} - I_{c,ap})/I_{c,ap}$,
and the transfer ratio (TR) is $TR = I_c/I_e$.
MTT promises a highly spin-polarized electron source at room temperature.
Recently, antiferromagnetic storage media have also been studied, whereas hitherto ferromagnetism has always been used,[31] especially since with antiferromagnetic material the bits 0 and 1 can be stored just as well as with ferromagnetic material (instead of the usual definition 0 -> 'magnetisation upwards', 1 -> 'magnetisation downwards', one may define, e.g., 0 -> 'vertically-alternating spin configuration' and 1 -> 'horizontally-alternating spin configuration'[32]).
The main advantages of using antiferromagnetic material are
How can I build an example of a DFA that has $2^n$ states where the equivalent NFA has $n$ states? Obviously the DFA's state-set should contain all subsets of the NFA's state-set, but I don't know how to start. Any suggestions to put me on the right track?
The standard example is the language $L$ of all words over an alphabet $A$ of size $n$ that
don't contain all the different letters. There is an NFA accepting $L$ with $n+1$ states (or $n$ states if you allow multiple starting states): first guess a letter $a$ which is missing, then go (with an $\epsilon$-move) to an accepting state with self-loops for all letters other than $a$.
Any DFA for $L$ requires at least $2^n$ states. This can be seen using the Myhill-Nerode theorem. Let $S_1,S_2$ be two different subsets of $A$, and $w(S_1),w(S_2)$ words which contain all and only the letters in $S_1,S_2$, respectively. Without loss of generality, suppose $a \in S_1 \setminus S_2$, and let $w = w(A-a)$. Then $w(S_1)w \notin L$ while $w(S_2)w \in L$.
this is an exercise in the book "Finite Automata" by Mark V. Lawson Heriot-Watt University, Edinburgh, page 68:
Let $n \geq 1$. Show that the language $(0+1)^\ast 1(0+1)^{n-1}$ can be recognised by a non-deterministic automaton with $n+1$ states. Show that any deterministic automaton that recognises this language must have at least $2^n$ states. This example shows that an exponential increase in the number of states in passing from a non-deterministic automaton to a corresponding deterministic automaton is sometimes unavoidable.
I'm going to guess that you mean that the
optimal DFA has $2^n$ states. Maybe this doesn't get you $2^n$ states, but it's $\Omega(2^n)$.
From "Communication Complexity" by Kushilevitz and Nisan in exercise 12.6:
"For some constant [non-negative integer] $c$, consider the (finite) language $L_c = \{ww\mid w \in \{0,1\}^c\}$."
and the book continues on asking you to prove that you can find a co-NFA recognizing $L_c$ that uses $O(c)$ states and also that you cannot do better than $\Omega(2^c)$ states for a DFA.
This is a late answer, but apparently nobody gave the optimal solution. Take $A = \{a, b\}$, $Q_n = \{0, 1, \ldots, n-1\}$ and ${\cal A}_n = (Q_n, A, E_n, \{0\}, \{0\})$, with $$ E_n = \{(i, a, i+1) \mid 0 \leqslant i \leqslant n-2\} \cup \{(n-1, a, 0)\} \cup \{(i, b, i) \mid 1 \leqslant i \leqslant n-1\} \cup \{(i, b, 0) \mid 1 \leqslant i \leqslant n-1\} $$ This NFA on a two-letter alphabet has $n$ states, a single initial state and a single final state, and its equivalent minimal DFA has $2^n$ states.
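If it helps to see the blowup concretely, the subset construction on this NFA can be run mechanically. A Python sketch (helper names are mine) that counts reachable DFA states, which already hits $2^n$ (minimality of the resulting DFA is a separate check not done here):

```python
def subset_construction_size(n):
    """Subset-construction DFA size for the n-state NFA above:
    a: i -> i+1 (mod n);  b: i -> {i, 0} for i >= 1, no b-move from 0."""
    def step(S, letter):
        out = set()
        for i in S:
            if letter == 'a':
                out.add((i + 1) % n)
            elif i >= 1:
                out.update((i, 0))
        return frozenset(out)

    start = frozenset({0})
    seen, frontier = {start}, [start]
    while frontier:
        S = frontier.pop()
        for letter in 'ab':
            T = step(S, letter)
            if T not in seen:
                seen.add(T)
                frontier.append(T)
    return len(seen)
```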
This dissertation focuses on the analyses of the non-linear time-fractional stochastic reaction-diffusion equations of the type
\begin{equation}\label{abstract-eq}
\partial^\beta_tu_t(x)=-\nu(-\Delta)^{\alpha/2} u_t(x)+I^{1-\beta}[b(u)+ \sigma(u)\stackrel{\cdot}{F}(t,x)]
\end{equation} in $(d+1)$ dimensions, where $\nu>0, \beta\in (0,1)$, $\alpha\in (0,2]$ and $d$ is a positive integer. The operator $\partial^\beta_t$ is the Caputo fractional derivative while $-(-\Delta)^{\alpha/2} $ is the generator of an isotropic $\alpha$-stable L\'evy process and $I^{1-\beta}$ is the Riesz fractional integral operator. The forcing noise denoted by $\stackrel{\cdot}{F}(t,x)$ is a Gaussian or white noise. These equations might be used as a model for materials with random thermal memory.
The first part of the dissertation studies {\it intermittency fronts} for the solution of the stochastic equation Eq.\eqref{abstract-eq} when $b\equiv0$. Under appropriate conditions on the parameters, we prove that solutions to the initial value problem of Eq.\eqref{abstract-eq}, with measurable initial function that has compact support and is strictly positive on an open subset of $(0,\infty)^d$, have a positive lower intermittency front. Furthermore, we also identify the parameter regions ensuring that the solutions to the initial value problem of Eq.\eqref{abstract-eq}, under the same condition on the initial function, have a finite upper intermittency front. Our results recover as particular cases some known results in the literature. For example, Mijena and Nane proved in \cite{JebesaAndNane1} that: (i) absolute moments of the solutions of this equation grow exponentially; and (ii) the distances to the origin of the farthest high peaks of those moments grow exactly linearly with time. The last result was proved under the assumptions $\alpha=2$ and $d=1$. Here, we extend this result to the case $\alpha=2$ and $d\in\{1,2,3\}$.
Next, we study the phenomena of finite-time blow-up and non-existence of solutions of \eqref{abstract-eq}. In particular, when the term $\sigma(u)$ satisfies $\sigma(u)\geq |u|^{1+\gamma}$ for some positive number $\gamma$, we prove that solutions to the initial value problem of Eq.\eqref{abstract-eq} with strictly positive initial distribution have infinite second moment for $t$ large enough.
We derive non-existence (blow-up) of global random field solutions under some additional conditions, most notably on $b$, $\sigma$ and the initial condition. Our results complement those of P. Chow in \cite{chow2}, \cite{chow1}, and Foondun et al. in \cite{Foondun-liu-nane}, \cite{foondun-parshad}, among others.
Your reasoning is along the right lines, but it is incomplete.
enol of B is more stable because of more substituted alkene.
The methyl group in acetone does stabilize the carbon-carbon double bond in the enol form for the reasons you suggest.
However the methyl group also stabilizes the carbonyl double bond in the keto form (see here for example, this is why a carbonyl in a ketone is slightly stronger than a carbonyl in an aldehyde). In fact, in simple carbonyl compounds, the methyl group stabilizes the carbonyl double bond a bit more than it stabilizes the enolic carbon-carbon double bond. As a result, simple aldehydes generally have a higher enol content than simple ketones. Hence, the enol content in acetaldehyde is greater than the enol content in acetone.
$$\ce{carbonyl <=> enol} \qquad \mathrm{K_{eq}=\frac{[enol]}{[carbonyl]}}$$
\begin{array}{|c|c|} \hline\text{compound} & \mathrm{K_\text{eq}} \\ \hline\text{acetaldehyde} & \mathrm{6 \times 10^{-7}} \\ \hline\text{acetone} & \mathrm{5 \times 10^{-9} } \\ \hline\end{array}
The two $\beta$-dicarbonyl compounds have a higher enol content than the two monocarbonyl compound because hydrogen bonding
and conjugation stabilize their enols. The enol content in C (a mono aldehyde) is higher than D because of the reasons outlined above.
Therefore, the overall order of decreasing enol content is C > D >> A > B.
Unless you've studied this before, you might not know that a "methyl group stabilizes the carbonyl double bond a bit more than it stabilizes the enolic carbon-carbon double bond". So a
general rule that might come in handy in these situations is that the enol content increases with the acidity of the enolic hydrogen. A more acidic $\alpha$-hydrogen implies a weaker $\ce{C-H}$ bond. Since the position of a keto-enol equilibrium is dependent on the relative stabilities of the keto and enol forms (the compound with the highest bond strengths overall will be more stable and will predominate at equilibrium) a weak $\ce{C-H}$ (lower $\mathrm{p}K_\text{a}$) generally implies a higher enol content.
\begin{array}{|c|c|} \hline\text{compound} & \mathrm{p}K_\text{a} \\ \hline\text{acetone} & 19.3 \\ \hline\text{acetaldehyde} & 16.7 \\ \hline\text{2,4-pentanedione} & 13.3 \\ \hline\end{array}
Finally, this earlier answer provides a review of the key factors controlling keto-enol equilibria. |
Given a regular language $L$, can we say anything about its complement $\overline L$? One thing that is trivial to say is that the DFA's for both languages are equal in size as complementing the language is simply a matter of changing all accepting states into rejecting states and vice-versa. Are there any other things to conclude? Is there anything one can say about the number of states of a (minimal) NFA?
Complementing an NFA may involve an exponential blowup.
While there are concrete examples for this, it can also be deduced (in a way) by the fact that NFA universality is PSPACE complete. Indeed, if complementing an NFA could be done in polynomial time, then universality would also be possible in polynomial time.
This is not exact, since complementation could also be sub-exponential but not polynomial.
As for a concrete example: let $p_1,p_2,\ldots$ be an enumeration of the prime numbers. Consider the language $L_k=\{1^n: \exists\, 1\le i\le k \ \text{s.t.}\ p_i \nmid n\}$. It is easy to construct an NFA for $L_k$ with $\sum_{i=1}^k p_i$ states.
However, it is also not hard to prove that a minimal NFA for $\overline{L_k}$ needs $\prod_{i=1}^k p_i$ states, which is exponential in $\sum_{i=1}^kp_i$. |
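For small $k$ this lower bound is easy to check mechanically, at least for the (weaker) deterministic state count: $\overline{L_k}=\{1^n : p_i \mid n \text{ for all } i\le k\}$ is periodic with period $\prod_{i=1}^k p_i$, and counting Myhill-Nerode classes over one period shows the minimal DFA needs that many states. A Python sketch (helper names mine; this does not by itself establish the NFA bound):

```python
from math import prod

def nerode_classes_unary(accept, period):
    """Myhill-Nerode classes of a purely periodic unary language:
    residues with identical acceptance on all suffixes collapse together."""
    sigs = {tuple(accept(i + j) for j in range(period)) for i in range(period)}
    return len(sigs)

primes = [2, 3]
# Complement of L_k: every p_i divides n
in_complement = lambda n: all(n % p == 0 for p in primes)
```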
Tsukuba Journal of Mathematics, Volume 29, Number 1 (2005), 29-47. "A rigidity theorem for hypersurfaces with positive Möbius Ricci curvature in $S^{n+1}$". Abstract:
Let $M^{n}(n\geq 3)$ be an immersed hypersurface without umbilic points in the $(n+1)$-dimensional unit sphere $S^{n+1}$. Then $M^{n}$ is associated with a so-called Möbius form $\Phi$ and a Möbius metric $g$ which are invariants of $M^{n}$ under the Möbius transformation group of $S^{n+1}$. In this paper, we show that if $\Phi$ is identically zero and the Ricci curvature $Ric_{g}$ is pinched: $(n-1)(n-2)/n^{2}\leq Ric_{g}\leq$ $(n^{2}-2n+5)(n-2)/[n^{2}(n-1)]$, then it must be the case that $n=2p$ and $M^{n}$ is Möbius equivalent to $S^{p}(1/\sqrt{2})\times S^{p}(1/\sqrt{2})$.
Article information. Source: Tsukuba J. Math., Volume 29, Number 1 (2005), 29-47. First available in Project Euclid: 30 May 2017. Permanent link: https://projecteuclid.org/euclid.tkbjm/1496164892. Digital Object Identifier: doi:10.21099/tkbjm/1496164892. Mathematical Reviews number (MathSciNet): MR2162829. Zentralblatt MATH identifier: 1098.53047. Citation:
Hu, Zejun; Li, Haizhong. A rigidity theorem for hypersurfaces with positive Möbius Ricci curvature in $S^{n+1}$. Tsukuba J. Math. 29 (2005), no. 1, 29--47. doi:10.21099/tkbjm/1496164892. https://projecteuclid.org/euclid.tkbjm/1496164892 |
For a Boundary Element Method problem I require the solution of a system of linear equations with multiple right-hand sides. Though this is a dense system, I still want to solve it with PETSc in parallel. Would the best way to do this be through something such as
-ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps?
Furthermore and importantly, I refine my grid in a regular way by adding $N-1$ points between the already present $N$ points. This leads to a new system of equations of size $(2N-1) \times (2N-1)$ in $2N-1$ unknowns.
The upper-left matrix $A_{11}$ in this new system is identical to the unrefined matrix (multiplied by 0.5). The other three blocks $A_{12}$, $A_{21}$ and $A_{22}$ are new:
$$\begin{bmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{bmatrix}$$
The question is now whether knowledge about the unrefined matrix $A_{11}$ can be used to speed up calculation of the refined system?
I was thinking, for example, about using the LU decomposition of $A_{11}$ to calculate the LU decomposition of the entire matrix $A$ using the well-known formulas
$$ \left\{\begin{aligned} L_{21} U_{11} &= A_{21} \\ L_{11} U_{12} &= A_{12} \\ L_{22} U_{22} &= A_{22} - L_{21} U_{12} \end{aligned}\right. $$
where $L_{21}$ and $U_{12}^T$ are full $(N-1)\times N$ matrices, $L_{11}$ and $L_{22}$ are lower triangular $N\times N$ and $(N-1) \times (N-1)$ matrices, and $U_{11}$, $U_{22}$ are upper triangular with the same sizes.
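Those block formulas are easy to sanity-check numerically before wiring anything into PETSc. A plain NumPy sketch (not the PETSc workflow; all names mine; it assumes $A_{11}$ and the Schur complement admit LU without pivoting, e.g. for diagonally dominant matrices):

```python
import numpy as np

def lu_nopivot(A):
    """Doolittle LU without pivoting (assumes it exists, e.g. diag-dominant A)."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
            U[i, k] = 0.0
    return L, U

def block_lu_update(L11, U11, A12, A21, A22):
    """Reuse the factors of A11 to factor the refined 2x2 block matrix."""
    L21 = np.linalg.solve(U11.T, A21.T).T   # solves L21 U11 = A21
    U12 = np.linalg.solve(L11, A12)         # solves L11 U12 = A12
    L22, U22 = lu_nopivot(A22 - L21 @ U12)  # factor the Schur complement
    return L21, U12, L22, U22
```

Assembling the blocks into full $L$ and $U$ reproduces $A$ to machine precision, confirming the update formulas.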
Is there any way to do this in PETSc?
Or is there a better thing I can do?
For example, due to the fact that $A_{21}$, $A_{12}$ and $A_{22}$ result from mesh refinement of the original system, they all look a lot like $A_{11}$. Defining
$$ V = \left.\underbrace{\left[\begin{array}{cccccc} 0.5 & 0.5 & 0 & \cdots & 0 & 0 \\ 0 & 0.5 & 0.5 & \cdots & 0 & 0\\ \vdots & \vdots & \ddots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0.5 & 0.5 \end{array}\right]}_{N}\right\} N-1 $$ an $(N-1) \times N$ matrix, it can be seen that
$A_{21} \approx V A_{11} $, i.e. an average of the rows of $A_{11}$; $A_{12} \approx A_{11} V^T $, i.e. an average of the columns of $A_{11}$; $A_{22} \approx V A_{11} V^T $, i.e. an average of the rows and columns of $A_{11}$.
This would therefore mean that a good approximation of $A$ would be
$$ A \approx \left(\begin{array}{c} 1_N \\ V \end{array}\right) A_{11} \left(\begin{array}{c} 1_N \\ V \end{array}\right)^T , $$
from which it might be possible to construct some preconditioner easily using the LU decomposition from $A_{11}$:
$$ \left\{\begin{aligned} L_{21} &\approx V L_{11} \\ U_{12} &\approx U_{11} V^T \end{aligned}\right. , $$ from the equations above. However, this implies that $L_{22} \approx 0$ and $U_{22}\approx 0$, which I do not really understand well as it leads to non-invertible approximations for $U$ and $L$:
$$ L \approx \left(\begin{array}{cc} L_{11} & 0 \\ V L_{11} & 0 \end{array}\right) \ \text{and} \ U \approx \left(\begin{array}{cc} U_{11} & U_{11} V^T \\ 0 & 0 \end{array}\right) $$
How would this be used as a preconditioner? |
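To see concretely why the implied $L_{22}\approx 0$, $U_{22}\approx 0$ gives non-invertible factors, here is a minimal NumPy sketch (not PETSc) of the proposed $A \approx \begin{psmallmatrix}I\\V\end{psmallmatrix} A_{11} \begin{psmallmatrix}I\\V\end{psmallmatrix}^T$; the names follow the question:

```python
import numpy as np

def averaging_matrix(N):
    # the (N-1) x N matrix V from the question: each row is (0.5, 0.5)
    V = np.zeros((N - 1, N))
    for i in range(N - 1):
        V[i, i] = V[i, i + 1] = 0.5
    return V

N = 5
rng = np.random.default_rng(0)
A11 = rng.standard_normal((N, N)) + N * np.eye(N)   # dense, well conditioned
P = np.vstack([np.eye(N), averaging_matrix(N)])     # the (2N-1) x N prolongation
A_approx = P @ A11 @ P.T
assert A_approx.shape == (2 * N - 1, 2 * N - 1)
# rank is only N: the approximate A is singular, which is exactly why the
# implied L22 and U22 vanish; on its own it cannot be inverted, so it could
# only enter a preconditioner that regularizes the remaining N-1 directions
assert np.linalg.matrix_rank(A_approx) == N
```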
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, otherwise inputs $b,r$ into the division box.
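The box diagram described here is just the Euclidean algorithm; a minimal sketch (with integers standing in for the strings):

```python
def gcd(a, b):
    # one pass through the "division box": a = b*q + r, then feed (b, r) back in
    while b != 0:
        a, b = b, a % b   # r = a % b
    return a

assert gcd(48, 18) == 6
assert gcd(101, 10) == 1
```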
There was a guy at my university who was convinced he had proven the Collatz Conjecture even though several lecturers had told him otherwise, and he sent his paper (written in Microsoft Word) to some journal, citing the names of various lecturers at the university.
Here is one part of the Peter–Weyl theorem: Let $\rho$ be a unitary representation of a compact group $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactor expansion), and letting $\det(A) = \sum_{j=1}^n a_{1j} \operatorname{cof}_{1j}(A)$, how do I prove that the determinant is independent of the choice of row?
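A quick computational check (a sketch, not a proof) that expanding along different rows agrees:

```python
def det(A):
    # cofactor expansion along the first row
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(n))

def det_along_row(A, i):
    # expansion along row i; should agree with det() for every i
    n = len(A)
    minor_rows = A[:i] + A[i+1:]
    return sum((-1) ** (i + j) * A[i][j]
               * det([row[:j] + row[j+1:] for row in minor_rows])
               for j in range(n))

A = [[2, 0, 1], [3, 5, 2], [1, 4, 4]]
assert all(det_along_row(A, i) == det(A) for i in range(3))
```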
Let $M$ and $N$ be $\mathbb{Z}$-modules and let $H$ be a subset of $N$. Is it possible for $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, provided $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
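That contour definition can even be checked numerically; a small sketch approximating $\frac{1}{2\pi i}\oint f(z)\,dz$ on a small circle around $0$ (the test function and radius are arbitrary choices):

```python
import cmath

def residue_at_zero(f, radius=1e-2, n=2000):
    # approximate (1/(2*pi*i)) * contour integral of f around 0
    total = 0j
    for k in range(n):
        theta = 2 * cmath.pi * k / n
        z = radius * cmath.exp(1j * theta)
        dz = 1j * z * (2 * cmath.pi / n)   # z'(theta) d(theta)
        total += f(z) * dz
    return total / (2j * cmath.pi)

# for the Laurent polynomial 3/z + 5 + 2z, the coefficient a_{-1} is 3
assert abs(residue_at_zero(lambda z: 3 / z + 5 + 2 * z) - 3) < 1e-6
```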
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator |
Let $A:=\mathbb{N}\cup\{\infty\}$.
What is a metric on $A$ s.t.
a sequence $(x_n)$ is convergent in a metric space $X\iff$ there exists a continuous map $\phi:A\rightarrow X$ with $\phi(n)=x_n$ for all $n=0,1,2,...$?
What I know: Let $d$ be the metric on $A$ that we are searching for and $d_X$ the one on $X$. Being convergent to a point $x$ means that for all $\epsilon>0$ there is an $N$ s.t. $d_X(x,x_n)<\epsilon$ for all $n\geq N$. Continuity of $\phi$ means that for all $a\in A$ and all $\epsilon>0$ there is a $\delta>0$ s.t. $d(y,a)<\delta$ implies $d_X(\phi(y),\phi(a))<\epsilon$.
How can I use these to find our metric? |
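One standard candidate, not given in the question but easy to check against these definitions, comes from embedding $A$ into $[0,1]$:

```latex
d(m,n) = \left|\frac{1}{m+1}-\frac{1}{n+1}\right|,\qquad
d(m,\infty) = \frac{1}{m+1},\qquad d(\infty,\infty)=0 .
```

With this metric, a map $\phi\colon A\to X$ is continuous iff $\phi(n)\to\phi(\infty)$ in $X$, which is exactly the convergence condition (the indexing starts at $n=0$, hence the $n+1$).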
The standard parametric equations of a parabola of the form $y^2=4ax$ are: $$ x(t)=at^2\\ y(t)=2at $$ which is fine since it can easily be verified. But is there any reason for, or advantage in, making such a choice of parametric equations for the parabola?
The standard Cartesian form equation for the parabola $y^2=4ax$ is significant because $a$ is the focal length, the focus of the parabola is $(a,0)$ and also because $4a$ is the length of the
latus rectum.
For this parabola, the standard parametric equation $(at^2, 2at)$ is probably the simplest possible as it does not contain fractions. Other possibilities are $\left(\frac {t^2}{4a} , t\right), \left(\frac {t^2}a, 2t\right)$, which are not as neat.
Another example of a possible parametric equation is $\big(2a(1-\cos 2t),\, 4a\sin t\big)$, since then $y^2 = 16a^2\sin^2 t = 4a\cdot 2a(1-\cos 2t) = 4ax$.
There is no standard parametrization.
A parametrization is beneficial if the parameter has an extra or particular geometrical or physical significance.
The given parametrization has focal length $a$. Differentiating $x$ with respect to $y$ through $t$ gives $dx/dy = t$, so $t$ represents the tangent of the angle which the tangent line of the parabola (axis along the $x$-axis) makes with the $y$-axis. It is also algebraically simple.
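A quick numeric sanity check of the identity $y^2 = 4ax$ under the standard parametrization (the focal length $a=3$ is an arbitrary choice):

```python
a = 3.0   # arbitrary focal length
for t in (-2.0, -0.5, 0.0, 1.0, 2.5):
    x, y = a * t * t, 2 * a * t
    assert abs(y * y - 4 * a * x) < 1e-12   # the point lies on y^2 = 4ax
# slope check: dx/dy = (dx/dt)/(dy/dt) = (2*a*t)/(2*a) = t
```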
EDIT1:
Another direct (unparametrized)
oblique axes form with two branches with constants $ {(m,h,k)} $ is:
$$y= m x \pm \sqrt{m x h + k^2}$$ |
Please explain the physical meaning it carries. And how is it that it can increase with dilution?
And why is it called molar?
And what is equivalent conductivity?
I will answer your question about the definition of molar conductivity and why it increases with decreasing concentration.
As Jerry pointed out, you can probably learn what you need about equivalent conductivity using a Google search.
The
molar conductivity of an electrolyte solution is the conductivity of the solution divided by the molar concentration of the dissolved electrolyte.
Conductivity ($\kappa$, in $\text{S/cm}$) is proportional to the square root of the concentration of ions ($\sqrt{[\text{ions}]}$, with $\ce{[ions]}$ in $\text{mol/L}$). In the following equation, $A$ is an empirical proportionality constant whose units, $\frac{\text{S/cm}}{\text{M}^{1/2}}$, are those needed to convert from $\text{M}^{1/2}$ to $\text{S/cm}$:
$$\kappa = A\sqrt{\ce{[ions]}}$$
The ion concentration is related to the electrolyte concentration ($C$) by the van't Hoff factor ($i$).
$$[\ce{ions}]=iC$$
$$\therefore \ \kappa = A\sqrt{iC}$$
The molar conductivity $\kappa_M$ is the conductivity divided by the molar concentration of electrolyte ($C$).
$$\kappa_M=\frac{\kappa}{C}=\frac{A\sqrt{iC}}{C}=A\frac{\sqrt{i}}{\sqrt{C}}$$
As $C$ decreases, so does $\sqrt{C}$, and thus $\frac{1}{\sqrt{C}}$ increases, while $A$ remains constant. The van't Hoff factor changes a little with concentration, but not dramatically. ($A$ probably also varies a little with concentration. It certainly varies with temperature.)
For example, if you have a series of solutions of sodium chloride, concentration decreases faster than conductivity, so conductivity divided by concentration increases. Note that the conductivity data are fabricated with an arbitrary value of $A$. I could not find the specific value of $A$ for sodium chloride. The van't Hoff Factors come from the "Colligative Properties" Chapter of
Chemistry: Atoms First by Burdge and Overby, McGraw Hill, 2012.
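The argument above can be reproduced numerically; in this sketch the constant `A_CONST` is a hypothetical placeholder (the answer notes the real value for sodium chloride was not found), and the van't Hoff factor is taken as 2 for a 1:1 electrolyte:

```python
import math

A_CONST = 0.01   # hypothetical proportionality constant, in (S/cm)/M^(1/2)
i = 2            # van't Hoff factor for a 1:1 electrolyte such as NaCl

def kappa(C):
    # the answer's model: conductivity grows like the square root of i*C
    return A_CONST * math.sqrt(i * C)

def molar_conductivity(C):
    return kappa(C) / C

# molar conductivity increases as the solution is diluted
values = [molar_conductivity(C) for C in (1.0, 0.1, 0.01)]
assert values[0] < values[1] < values[2]
```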
Conductivity: the degree to which a specified material conducts electricity. Molar conductivity: the conductivity of a solution divided by the molar concentration of electrolyte. On dilution, conductance and conductivity both decrease, but molar conductivity increases.
Conductivity increases with the concentration (roughly in proportion for dilute solutions).
Molar conductivity of an electrolyte is defined as the conductivity of a volume of solution containing one mole of electrolyte. |
(it is a follow-up on my previous question)
Let's define $t_n$ by the recurrence $$t_0 = 1, \quad t_n = (-1)^n \, t_{\lfloor n/2\rfloor}.\tag1$$ It is easy to see that $|t_n|=1$, and the signs follow the same pattern as the Thue–Morse sequence: $$1,\,-1,\,-1,\,1,\,-1,\,1,\,1,\,-1,\,-1,\,1,\,1,\,-1,\,1,\,-1,\,-1,\,1,\,...\tag2$$ (see this question for an example of a non-recursive formula for $t_n$).
Now, let: $$\mathcal{L}(z)=\lim_{n\to\infty}\,\sum_{k=1}^{2^n-1}t_k\,k^z.\tag3$$ I conjecture that the following propositions hold:
$\color{gray}{\text{(a)}}$ The limit $\mathcal{L}(z)$ exists for all $z\in\mathbb C$.
$\color{gray}{\text{(b)}}$ $\mathcal{L}(z)$ is an entire function of $z$.
$\color{gray}{\text{(c)}}$ $\mathcal{L}(z)=0$ if and only if $z\in\mathbb Z^+$.
Can we prove these conjectures? Can we find a different representation of $\mathcal{L}(z)$? Does this function have any interesting properties?
Update: The "if" part of the conjecture $\color{gray}{\text{(c)}}$ is certainly true. I am now pretty sure the "only if" part fails at some points on the imaginary axis (e.g. near $z\approx i\,4.53236...$; is it exactly $i\,\pi/\ln2$?). I am still wondering if there are any zeros other than the positive integers and those on the imaginary axis.

Update: I found a very relevant and interesting paper (preprint) by Giedrius Alkauskas, Dirichlet series associated with Thue–Morse sequence$^{[1]}$$\!^{[2]}$ (be aware that the author claims$^{[3]}$ some results in it may be incorrect). |
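The signs $t_n$ and the "if" part of conjecture $\color{gray}{\text{(c)}}$ can be checked quickly; the exact vanishing of the partial sums at $z=1,2$ for large enough $2^n$ is the classical Prouhet property of the Thue–Morse partition:

```python
def t(n):
    # Thue–Morse sign t_n = (-1)^(binary digit sum of n); this matches the
    # recurrence t_0 = 1, t_n = (-1)^n * t_{n // 2}
    sign = 1
    while n:
        if n & 1:
            sign = -sign
        n >>= 1
    return sign

# the first 16 signs reproduce sequence (2)
assert [t(n) for n in range(16)] == [1, -1, -1, 1, -1, 1, 1, -1,
                                     -1, 1, 1, -1, 1, -1, -1, 1]

# partial sums defining L(z) vanish exactly at z = 1, 2 once 2^n is large
# enough (Prouhet: equal power sums up to degree n-1), here n = 6
for z in (1, 2):
    assert sum(t(k) * k ** z for k in range(1, 2 ** 6)) == 0
```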
Just to explain Eric's construction in more elementary terms (and to point out that you can get something stronger than in your question). Define $C$ as follows:
$$C_n=\left\{\begin{array}{ll}B_n,&n>0,\\A_n&n<0.\end{array}\right.$$Moreover, let $d_n\colon C_n\rightarrow C_{n-1}$ be the differential of $B$ (resp. $A$) if $n>1$ (resp. $n<0$). We must still define $C_0$ and its surrounding differentials.
We define $C_0$ as the push-out $$\begin{array}{ccc}Z_0(A)&\rightarrow&Z_0(B)\\\downarrow&&\downarrow\\A_0&\rightarrow&C_0\end{array}$$The upper arrow is the map induced on $0$-cycles by $f$ and the left arrow is the inclusion of $0$-cycles in $0$-chains, which is injective. Therefore the parallel arrow is also injective. Moreover, $A_0/Z_0(A)=C_0/Z_0(B)$.
The differential $d_0\colon C_0\rightarrow C_{-1}=A_{-1}$ is given by applying the universal property of a push-out to $d_0\colon A_0\rightarrow A_{-1}$ and the trivial map $0\colon Z_0(B)\rightarrow A_{-1}$. This shows that $Z_0(C)=Z_0(B)$ and that the images of $d_0\colon C_0\rightarrow C_{-1}=A_{-1}$ and $d_0\colon A_0\rightarrow A_{-1}$ coincide.
The differential $d_1\colon C_1=B_1\rightarrow C_0$ is the composite $B_1\rightarrow Z_0(B)\hookrightarrow C_0$.
We now take $h\colon A\rightarrow C$ to be $f_n\colon A_n\rightarrow B_n$ for $n>0$, the identity for $n<0$, and the bottom map in the push-out square for $n=0$. The map $g\colon C\rightarrow B$ is the identity for $n>0$, $f_n\colon A_n\rightarrow B_n$ for $n<0$, and for $n=0$, the map induced by applying the universal property of a push-out to $f_0\colon A_0\rightarrow B_0$ and $Z_0(B)\hookrightarrow B_0$.
Clearly $f=gh$. Moreover, $\tau_{\geq 0}g$ and $\tau_{\leq -1}h$ are identity maps, not only weak equivalences, by the previous computations. This construction is actually functorial in $f$. |
Ok, I have a question about this Faddeev–Popov procedure of teasing out the ghosts when one quantizes a non-Abelian gauge theory with path integrals.
The factor of 1 that people insert, for some gauge fixing function f, and some non-Abelian symmetry [tex]\mathcal{G}[/tex] is:
[tex]1=\int \mathcal{D}U \delta[f(\mathbf{A})] \Delta[\mathbf{A}] [/tex],
where
[tex]\mathcal{D}U = \Pi d\theta[/tex],
and
[tex]U \in \mathcal{G}[/tex].
This is probably a stupid question, but the function [tex]\Delta[/tex] works out just to be a Jacobian of some sort over the manifold [tex]\mathcal{G}[/tex], right?
[tex]\Delta[\mathbf{A}] = det \left(\frac{\delta f}{\delta \theta}\right)[/tex]
I am confused because no one actually says this. Am I completely off base?
Thanks in advance for helping me, and tolerating a (possibly) stupid question!
Approaching The Sampling Theorem as Inner Product Space. Preface: There are many ways to derive the Nyquist–Shannon Sampling Theorem with the constraint on the sampling frequency being 2 times the Nyquist Frequency. The classic derivation uses the summation of sampled series with the Poisson Summation Formula. Let's introduce a different approach which is more ...
Yes, you can bandpass filter an adequate portion of a sampled (ideal impulse modulated) signal spectrum and still retain the same information as the lowpass filtered version. As you have stated, the sampled signal has a spectrum which includes shifted and weighted copies of the original (possibly baseband) signal. Assuming no aliasing occurred during the ...
Not sure what you mean by "wrong". The Nyquist criterion simply requires you to have "two samples per Hz of bandwidth". It doesn't have to be $[-f_{max},f_{max}]$; it can be any frequency range that includes at least $2 \cdot f_{max}$ of bandwidth. However, for real signals you need to figure out what to do with the negative frequencies. For more info ...
It really boils down to aliasing. In continuous-time, if you have any two signals $x_1(t) = \sin(2 \pi F_1 t)$ and $x_2(t) = \sin(2 \pi F_2 t)$, then as long as $F_1$ and $F_2$ are distinct, the signals are, too.But consider sampling at some time interval $T_s$, so that the sampled signals are $x_1(k) = \sin(2 \pi F_1 T_s k)$ and $x_2(k) = \sin(2 \pi F_2 ...
Beating just below the Nyquist frequency occurs when an attempt is made to reconstruct the time-continuous signal without the use of sinc interpolation. This sinc reconstruction method (in fact, a requirement for the Nyquist–Shannon sampling theorem to hold true) is the Whittaker–Shannon interpolation formula.
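A sketch of the Whittaker–Shannon formula just mentioned; the series is necessarily truncated here, but at a sample instant the interpolation property makes it exact (all signal parameters below are arbitrary demo choices):

```python
import math

def sinc(u):
    # normalized sinc: sinc(0) = 1, sinc(integer) = 0
    return 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)

def reconstruct(samples, Ts, t):
    # Whittaker–Shannon: x(t) = sum_k x[k] * sinc((t - k*Ts)/Ts)  (truncated)
    return sum(xk * sinc((t - k * Ts) / Ts) for k, xk in enumerate(samples))

Ts = 0.125                                       # 8 Hz sampling of a 1 Hz tone
xs = [math.sin(2 * math.pi * k * Ts) for k in range(400)]
# at a sample instant the series reproduces the stored sample (up to roundoff)
assert abs(reconstruct(xs, Ts, 100 * Ts) - xs[100]) < 1e-9
```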
I think you're confusing two different (but related) terms.Nyquist says that in a channel of bandwidth $B$ you can transmit up to $2B$ orthogonal pulses per second. So, $R_p \leq 2B$, where $R_p$ is the pulse rate.To achieve $R_p = 2B$, the pulses need to be sinc-shaped. Other, more practical pulses achieve slightly less than that. For example, raised ...
I don't think I've seen capacity defined like that before. In the "go-to" information theory book by Thomas Cover, capacity is defined as $C=\frac{1}{2}\log_2(1+SNR)$ bits per channel use or $C=W\log_2(1+SNR)$ bits per second. The bandwidth sets the symbol rate, so you could have a symbol represent multiple bits, which is what happens in all digital communication ...
Is the rate of 2B exclusive?Yes. The sampling theorem states that the signal must be band limited to half the sample rate. That implies that the energy at the Nyquist frequency must be zero. In practice you need a healthy margin between the highest usable frequency and the Nyquist frequency. There is always some "transition band" that you need to get the ... |
Nyquist Theorem
The Nyquist theorem
This theorem applies to a signal that is band limited.
When sampling a signal, it is generally done by multiplying the original signal by an impulse train modeled as:
$ \sum_{k=-\infty}^{\infty} \delta ( t - kT_s ) $
This is done to extract data at equidistant points spaced $T_s$ (the sampling period) apart. The Nyquist theorem shows that in order to perfectly reconstruct a signal, the frequency of the impulse train must be at least 2 times the highest frequency present in the initial signal.
Derivation
To sample a signal, one must multiply the above impulse train by the signal:
$ x_s(t) = x(t) \sum_{k=-\infty}^{\infty} \delta ( t - kT_s ) $
Since $ \delta ( t - kT_s ) $ vanishes everywhere other than $ t = kT_s $, this can be further simplified:
$ x_s(t) = \sum_{k=-\infty}^{\infty} x(kT_s) \delta ( t - kT_s ) $
Then, this can be brought into the radial frequency domain. Multiplication in time is convolution in the frequency domain, therefore:
$ X_s(w) = \frac{1}{T_s} X(w) * \sum_{k=-\infty}^{\infty} \delta ( w - kw_s ) $
Convolution with a shifted impulse simply shifts the spectrum, so the sum collapses to:
$ X_s(w) = \frac{1}{T_s} \sum_{k=-\infty}^{\infty} X(w-kw_s) $
In order to prevent aliasing, the shifted copies in the sum must not overlap. In other words, the spectral copies for different integers $k$ must not occupy overlapping frequency ranges.
The $k=0$ copy spans the frequency band $[-w,w]$, so to prevent aliasing the other copies must not overlap it.
Therefore $ w-kw_s \leq -w $ for every integer $k \geq 1$ (negative $k$ gives the mirrored condition).
Taking the tightest case, $k = 1$:
$ 2w \leq w_s $
This can be converted to frequency in hertz:
$ 2 \cdot 2 \pi f \leq 2 \pi f_s $
$ 2f \leq f_s $ |
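The inequality can be illustrated by violating it: a minimal demo in which a 13 Hz tone sampled at 10 Hz produces exactly the same samples as a 3 Hz tone (the frequencies are arbitrary choices):

```python
import math

fs = 10.0                 # sampling rate (Hz)
f1, f2 = 3.0, 3.0 + fs    # 13 Hz exceeds fs/2 = 5 Hz, so it aliases onto 3 Hz
s1 = [math.sin(2 * math.pi * f1 * k / fs) for k in range(20)]
s2 = [math.sin(2 * math.pi * f2 * k / fs) for k in range(20)]
# the two sinusoids are indistinguishable from their samples
assert all(abs(u - v) < 1e-9 for u, v in zip(s1, s2))
```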
I want to minimize and maximize the function $g(a,b) = a + b$ given the constraint $h(a,b) = \frac{1}{a} + \frac{1}{b} = 1$, and I want to find the values of $a, b,$ and $\lambda$.
This is what I've done:
\begin{align} \frac{\partial g}{\partial a} = \lambda \frac{\partial h}{\partial a} \implies 1 = \lambda(-\frac{1}{a^2})\\\\ \frac{\partial g}{\partial b} = \lambda \frac{\partial h}{\partial b} \implies 1 = \lambda(-\frac{1}{b^2})\\ \end{align}
However, when I solve these two equations for $a$ and $b$, I end up with $a = b = \pm \sqrt{-\lambda}$.
I don't think I should be dealing with complex numbers for this problem. Therefore, should I just infer that $\lambda$ is a negative number, hence I'll be taking the square root of a positive number?
Furthermore, I'm finding after doing several of these Lagrange multiplier problems that there is no specific method to solve the resulting equations for, say, $x, y, z, $ and $\lambda$ (or, in this case, $a, b,$ and $\lambda$). What is the best way to solve this particular problem once I've found these partials?
Should I just use the following idea:
\begin{align} \lambda = -a^2 \ \text{and}\ \lambda = -b^2 \implies a = b \end{align}
Using this fact I get the following:
\begin{align} h= \frac{1}{a} + \frac{1}{a} = 1 \implies a = 2 \implies b = 2 \end{align}
Which would mean that $\lambda = -a^2 = -(2)^2 = -4$.
Therefore, for the values of $a, b,\ \text{and}\ \lambda$ I get $2, 2,\ \text{and}\ -4$, respectively.
If that's correct, then aren't my minimum $\textbf{and}$ maximum values of $g(a,b)$ under the constraint $h(a,b)$ just $2 + 2 = 4$ in both cases?
Note: After doing this problem, I realized that I should have immediately had some intuition from the constraint function that $a$ and $b$ were $2$. However, what if $a = 0$ and $b = 1$? |
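A numeric check of the stationary point found above, plus a look at nearby feasible points (eliminating $b$ through the constraint):

```python
# stationary point from the work above: a = b = 2, lambda = -4
a = b = 2.0
lam = -4.0
assert abs(1 / a + 1 / b - 1.0) < 1e-12        # constraint h(a, b) = 1
assert abs(1 - lam * (-1 / a ** 2)) < 1e-12    # dg/da = lambda * dh/da
assert abs(1 - lam * (-1 / b ** 2)) < 1e-12    # dg/db = lambda * dh/db

def g_on_constraint(a):
    # eliminate b via the constraint: 1/b = 1 - 1/a
    return a + 1 / (1 - 1 / a)

# nearby feasible points confirm (2, 2) is a local minimum on the branch a > 1
assert g_on_constraint(2.0) <= min(g_on_constraint(1.9), g_on_constraint(2.1))
```

On the branch with $a, b > 0$ the point $(2,2)$ is a minimum, not a maximum, since $g_on_constraint(a) \to \infty$ as $a \to 1^+$ or $a \to \infty$.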
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior product, why would someone come up with something like that. I mean it looks really similar to the formula of the covariant derivative along a vector field for a tensor but otherwise I don't see why would it be something natural to come up with. The only places I have used it is deriving the poisson bracket of two one forms
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leads you at a place different from $p$. And upto second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$
Gently taking caring of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitisimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
Let's do bullshit generality. $E$ be a vector bundle on $M$ and $\nabla$ be a connection $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's a pointwise defined on the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$ not the full vector field. That makes sense, right? You can take directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok first but basically think of it as currying. Making a billinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphsm $TM \to E$ but contracting $s$ in $\nabla s$ only gave as a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $E$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. It's exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading this thing called the Poisson bracket. With the poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\ldots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\cdots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$?
Uh apparenty there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
@Ultradark I don't know what you mean, but you seem down in the dumps champ. Remember, girls are not as significant as you might think, design an attachment for a cordless drill and a flesh light that oscillates perpendicular to the drill's rotation and your done. Even better than the natural method
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute-force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a $2$-cycle, $3$-cycle, and $4$-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seen very elegant. Is there a better way?
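For the troublesome case $d=6$, the claim about $\langle(1\,2),(1\,2\,3)\rangle$ can be verified by brute-force closure (a sketch; permutations are written as 0-indexed tuples, so $(1\,2)$ and $(1\,2\,3)$ become the tuples below):

```python
def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations of {0, 1, 2, 3} as tuples
    return tuple(p[q[i]] for i in range(4))

def generate(gens):
    # closure of a generating set under composition (finite, so this is a group)
    elems = set(gens)
    frontier = list(gens)
    while frontier:
        g = frontier.pop()
        for h in list(elems):
            for prod in (compose(g, h), compose(h, g)):
                if prod not in elems:
                    elems.add(prod)
                    frontier.append(prod)
    return elems

swap = (1, 0, 2, 3)    # the transposition (1 2), 0-indexed
cycle = (1, 2, 0, 3)   # the 3-cycle (1 2 3), 0-indexed
subgroup = generate([swap, cycle])
assert len(subgroup) == 6            # a copy of S_3 inside S_4, order 6
assert (0, 1, 2, 3) in subgroup      # the identity is in the closure
```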
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say |
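As an aside, the case analysis above is easy to confirm by brute force. The sketch below is my own code (the generator choices match the discussion where possible): it computes the closure of explicit generators under composition and checks that $S_4$ has a subgroup of each order dividing 24.

```python
from itertools import product

def compose(p, q):
    # permutations of {0,1,2,3} as tuples: (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def closure(gens):
    # subgroup generated by gens: repeatedly close under composition
    G = {tuple(range(4))}
    frontier = set(gens)
    while frontier:
        G |= frontier
        frontier = {compose(a, b) for a, b in product(G, G)} - G
    return G

# explicit generators for a subgroup of each order d dividing 24
gens_by_order = {
    1: [],
    2: [(1, 0, 2, 3)],                 # <(12)>
    3: [(1, 2, 0, 3)],                 # <(123)>
    4: [(1, 2, 3, 0)],                 # <(1234)>
    6: [(1, 0, 2, 3), (1, 2, 0, 3)],   # <(12),(123)>, a copy of S3
    8: [(1, 2, 3, 0), (2, 1, 0, 3)],   # a 2-Sylow (dihedral)
    12: [(1, 0, 3, 2), (1, 2, 0, 3)],  # A4
    24: [(1, 0, 2, 3), (1, 2, 3, 0)],  # S4 itself
}
for d, gens in gens_by_order.items():
    assert len(closure(gens)) == d, d
```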
Edited after Will Sawin's comment:
Consider the set $\mathcal{M}$ of all automorphic L-functions belonging to the Selberg class. Such a set is closed for the product $.$ and the tensor product $\otimes$ such that $\forall p\in\mathbb{P}, \ \ a_{p}(F\otimes G)=a_{p}(F).a_{p}(G)$ where $a_{n}(H)$ is the $n$-th Dirichlet coefficient of $H$, that is $H(s)=\displaystyle{\sum_{n\gt 0}\dfrac{a_{n}(H)}{n^s}}$ whenever $\Re(s)\gt 1$. This tensor product corresponds, on the automorphic side, to Rankin-Selberg convolution. For the sake of simplicity, the term of 'L-function' will be used to mean any element of $\mathcal{M}$.
Let's define the automorphism group of $\mathcal{M}$ as the group, under composition, of the bijective maps $\Phi$ from $\mathcal{M}$ to itself such that the following properties are simultaneously fulfilled:
A) $\Phi$ maps a primitive L-function to a primitive L-function
B) $\forall (F,G)\in\mathcal{M}^{2}, \ \ \Phi(F\odot G)=\Phi(F)\odot\Phi(G)$ where $\odot\in\{\times, \otimes\}$
Such an automorphism of $\mathcal{M}$ preserves the degree of any L-function, that is, $d_{\Phi(F)}=d_{F}$.
Assuming that for any two L-functions $F$ and $G$, $d_{F\otimes G}=d_{F}.d_{G}$, let's now associate to an L-function $H$ a complex manifold $X_{H}$ of dimension $d_{H}$ such that for any two L-functions $F$ and $G$, $X_{F.G}=X_{F}\oplus X_{G}$ and $X_{F\otimes G}=X_{F}\otimes X_{G}$ so that $F=G\Leftrightarrow X_{F}=X_{G}$. We shall denote the set of all $X_{F}$ where $F$ runs over $\mathcal{M}$ by $\mathcal{M}'$ and any element of $\mathcal{M}'$ will be called an L-manifold.
The automorphism group of $\mathcal{M}'$ is defined in a similar fashion as for the one of $\mathcal{M}$ so that these two groups are isomorphic.
Let's now define the notion of abstract Galois group $\operatorname{Gal}(A/B)$ as the set of all automorphisms of $A$ that fix $B$ pointwise, and let's associate to any L-function $F$ its 'canonical' representation $(\rho_{F}, V_{F})$ such that the following properties are simultaneously fulfilled:
C) there exists an algebraic number field $K_{F}$ the absolute Galois group of which, denoted by $G_{K_{F}}$, is isomorphic to both $\operatorname{Gal}(\mathcal{M}/<F>)$ and $\operatorname{Gal}(\mathcal{M}'/<X_F>)$, and is such that $\rho_{F}$ is a group homomorphism from $G_{K_{F}}$ to $\operatorname{GL}_{d_{F}}(\mathbb{C})=Aut(V_{F})$ where $<F>=\{\bigodot_{k=0}^{m}F, \odot\in\{\times,\otimes\}, m\in\mathbb{N}_{0}\}$, $<X_{F}>$ is defined in a similar way, and so that $X_{F}$ is locally isomorphic to $V_{F}$.
Assuming $F$ is the L-function attached to an automorphic representation of $\operatorname{GL}_{d_{F}}(\mathbb{A}_{K_{F}})$ where $\mathbb{A}_{K_{F}}$ is the adele ring of $K_{F}$ we require that the considered representation $(\rho_{F}, V_{F})$ is faithful, and that it is irreducible if and only if $F$ is primitive. We also require that $V_{F.G}=V_{F}\oplus V_{G}$ and that $V_{F\otimes G}=V_{F}\otimes V_{G}$.
D) $F(s)=L(s,\rho_{F})$
My questions are:
1) does every L-manifold give rise naturally to a motive?
2) If so, is every L-function motivic? Can one say that any motivic L-function arises from a Galois representation and conversely?
Many thanks in advance. |
ECE662: Statistical Pattern Recognition and Decision Making Processes
Spring 2008, Prof. Boutin
Collectively created by the students in the class
Lecture 7: Maximum Likelihood Estimation and Bayesian Parameter Estimation (Parametric Estimation of the Class Conditional Density)
The class conditional density $ p(\vec{x}|w_i) $ can be estimated using training data. We denote the parameter of estimation as $ \vec{\theta} $. There are two methods of estimation discussed.
Maximum Likelihood Estimation
Let $c$ denote the number of classes and $D$ the entire collection of sample data, with $D_1, \ldots, D_c$ representing the partition of the data into classes $\omega_1, \ldots, \omega_c$. It is assumed that: samples in $D_i$ give no information about the samples in $D_j$, $i \neq j$, and each sample is drawn independently.
Example: The class conditional density $ p(\vec{x}|w_i) $ depends on the parameter $ \vec{\theta_i} $. If $ X \sim N(\mu,\sigma^2) $ denotes the class conditional density, then $ \vec{\theta}=[\mu,\sigma^2] $.
Let n be the size of training sample, and $ D=\{\vec{X_1}, \ldots, \vec{X_n}\} $. Then,
$ p(\vec{X}|\omega_i,\vec{\theta_i}) $ equals $ p(\vec{X}|\vec{\theta}) $ for a single class.
The Likelihood Function is then defined as $ p(D|\vec{\theta})=\displaystyle \prod_{k=1}^n p(\vec{X_k}|\vec{\theta}) $, which needs to be maximized to obtain the parameter. Since the logarithm is a monotonic function, maximizing the likelihood is the same as maximizing the log of the likelihood, which is defined as $ l(\vec{\theta})=\log p(D|\vec{\theta})=\displaystyle \log\left(\prod_{k=1}^n p(\vec{X_k}|\vec{\theta})\right)=\displaystyle \sum_{k=1}^n \log(p(\vec{X_k}|\vec{\theta})) $.
"l" is the log likelihood function.
Maximize the log likelihood function with respect to $ \vec{\theta} $:
$ \rightarrow \hat{\theta} = argmax \left( l (\vec{\theta}) \right) $
If $ l(\vec{\theta}) $ is a differentiable function
Let $ \vec{\theta} = \left[ \theta_1, \theta_2, \cdots , \theta_p \right] $ be 1 by p vector, then
$ \nabla_{\vec{\theta}} = \left[ \frac{\partial}{\partial\theta_1} \frac{\partial}{\partial\theta_2} \cdots \frac{\partial}{\partial\theta_p} \right]^{t} $
Then, we can compute the first derivative of the log likelihood function,
$ \rightarrow \nabla_{\vec{\theta}} ( l (\vec{\theta}) ) = \sum_{k=1}^{n} \nabla_{\vec{\theta}} \left[ log(p(\vec{x_k} | \vec{\theta})) \right] $
and equate this first derivative to be zero
$ \rightarrow \nabla_{\vec{\theta}} ( l (\vec{\theta}) ) = 0 $
Example: the Gaussian case
Assume that the covariance matrix $ \Sigma $ is known.
$ p(\vec{x_k} | \vec{\mu}) = \frac{1}{ \left( (2\pi)^{d} |\Sigma| \right)^{\frac{1}{2}}} exp \left[ - \frac{1}{2} (\vec{x_k} - \vec{\mu})^{t} \Sigma^{-1} (\vec{x_k} - \vec{\mu}) \right] $
Step 1: Take log
$ log p(\vec{x_k} | \vec{\mu}) = -\frac{1}{2} log \left( (2\pi)^d |\Sigma| \right) - \frac{1}{2} (\vec{x_k} - \vec{\mu})^{t} \Sigma^{-1} (\vec{x_k} - \vec{\mu}) $
Step 2: Take derivative
$ \frac{\partial}{\partial\vec{\mu}} \left( log p(\vec{x_k} | \vec{\mu}) \right) = \frac{1}{2} \left[ (\vec{x_k} - \vec{\mu})^t \Sigma^{-1}\right]^t + \frac{1}{2} \left[ \Sigma^{-1} (\vec{x_k} - \vec{\mu}) \right] = \Sigma^{-1} (\vec{x_k} - \vec{\mu}) $
Step 3: Equate to 0
$ \sum_{k=1}^{n} \Sigma^{-1} (\vec{x_k} - \vec{\mu}) = 0 $
$ \rightarrow \Sigma^{-1} \sum_{k=1}^{n} (\vec{x_k} - \vec{\mu}) = 0 $
$ \rightarrow \Sigma^{-1} \left[ \sum_{k=1}^{n} \vec{x_k} - n \vec{\mu}\right] = 0 $
$ \Longrightarrow \hat{\vec{\mu}} = \frac{1}{n} \sum_{k=1}^{n} \vec{x_k} $
This is the sample mean for a sample size n.
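As a quick numerical check of this result, one can draw samples from a Gaussian with known covariance and confirm that the sample mean recovers the true mean. This is my own illustrative sketch; the particular mean, covariance, and sample size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
true_mu = np.array([2.0, -1.0])

# known covariance (identity), unknown mean: the MLE is the sample mean
X = rng.multivariate_normal(mean=true_mu, cov=np.eye(2), size=10_000)
mu_hat = X.mean(axis=0)

assert np.allclose(mu_hat, true_mu, atol=0.1)
```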
Advantages of MLE:
- Simple
- Converges
- Asymptotically unbiased (though biased for small N)

Bayesian Parameter Estimation
For a given class, let $ x $ be the feature vector of the class and $ \theta $ be the parameter of the pdf of $ x $ to be estimated. Let $ D= \{ x_1, x_2, \cdots, x_n \} $, where the $ x_i $ are training samples of the class. Note that $ \theta $ is a random variable with probability density $ p(\theta) $.
Here is a good example. EXAMPLE: Bayesian Inference for Gaussian Mean
The univariate case. The variance is assumed to be known.
Here's a summary of results:
Univariate Gaussian density: $ p(x|\mu)\sim N(\mu,\sigma^{2}) $
Prior density of the mean: $ p(\mu)\sim N(\mu_{0},\sigma_{0}^{2}) $
Posterior density of the mean: $ p(\mu|D)\sim N(\mu_{n},\sigma_{n}^{2}) $
where
$ \mu_{n}=\left(\frac{n\sigma_{0}^{2}}{n\sigma_{0}^{2}+\sigma^{2}}\right)\hat{\mu}_{n}+\left(\frac{\sigma^{2}}{n\sigma_{0}^{2}+\sigma^{2}}\right)\mu_{0}, \qquad \sigma_{n}^{2}=\frac{\sigma_{0}^{2}\sigma^{2}}{n\sigma_{0}^{2}+\sigma^{2}}, \qquad \hat{\mu}_{n}=\frac{1}{n}\sum_{k=1}^{n}x_{k} $
Finally, the class conditional density is given by $ p(x|D)\sim N(\mu_{n},\sigma^{2}+\sigma_{n}^{2}) $
The above formulas can be interpreted as follows: in making a prediction for a single new observation, the variance of the estimate has two components: 1) $ \sigma^{2} $, the inherent variance within the distribution of $x$, i.e. the variance that would never be eliminated even with perfect information about the underlying distribution model; 2) $ \sigma_{n}^{2} $, the variance introduced by the estimation of the mean $\mu$; this component can be eliminated given exact prior information or a very large training set ($N \to \infty$).
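The update formulas for $\mu_{n}$ and $\sigma_{n}^{2}$ above are easy to exercise numerically. This is my own small sketch; the prior parameters and data values are arbitrary choices for demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 1.0                  # known data variance sigma^2
mu0, sigma0_2 = 0.0, 4.0      # prior N(mu0, sigma0^2) on the mean
x = rng.normal(3.0, np.sqrt(sigma2), size=50)
n, xbar = len(x), x.mean()

# posterior parameters from the formulas above
mu_n = (n * sigma0_2 / (n * sigma0_2 + sigma2)) * xbar \
     + (sigma2 / (n * sigma0_2 + sigma2)) * mu0
sigma_n2 = sigma0_2 * sigma2 / (n * sigma0_2 + sigma2)

# posterior mean interpolates between prior mean and sample mean,
# and the posterior variance shrinks as n grows
assert min(mu0, xbar) <= mu_n <= max(mu0, xbar)
assert sigma_n2 < sigma0_2
```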
The above figure illustrates the Bayesian inference for the mean of a Gaussian distribution, for which the variance is assumed to be known. The curves show the prior distribution over $\mu$ (the curve labeled N=0), which in this case is itself Gaussian, along with the posterior distributions for an increasing number $N$ of data points. The figure makes clear that as the number of data points increases, the posterior distribution peaks around the true value of the mean. This phenomenon is known as *Bayesian learning*.
Answer
240 degrees
Work Step by Step
We know that there are 180 degrees per $\pi$ radians. Thus, we find: $$\frac{4\pi}{3}\ \text{radians} \times \frac{180^{\circ}}{\pi\ \text{radians}}=240^{\circ}$$
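The same conversion can be checked in a line of Python with the standard library:

```python
import math

deg = math.degrees(4 * math.pi / 3)  # radians -> degrees
assert abs(deg - 240) < 1e-9
```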
Q. The top of a water tank is open to air and its water level is maintained. It is giving out 0.74 m3 water per minute through a circular opening of 2 cm radius in its wall. The depth of the centre of the opening from the level of water in the tank is close to :
Solution:
Inflow volume = outflow volume:
$$\frac{0.74}{60} = \left(\pi \times 4 \times 10^{-4}\right) \times \sqrt{2gh}$$
$$\Rightarrow \sqrt{2gh} = \frac{0.74 \times 10^{4}}{60 \times 4\pi} = \frac{740}{24\pi}$$
$$\Rightarrow 2gh = \frac{740 \times 740}{24 \times 24 \times 10} \quad \left(\text{taking } \pi^{2} = 10\right)$$
$$\Rightarrow h = \frac{74 \times 74}{2 \times 24 \times 24} \quad \left(\text{taking } g = 10\ \mathrm{m/s^2}\right)$$
$$\Rightarrow h \approx 4.8\ \mathrm{m}$$
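The same answer follows numerically from Torricelli's law without the $\pi^{2}=10$ shortcut (variable names are mine):

```python
import math

Q = 0.74 / 60            # outflow, m^3/s
A = math.pi * 0.02**2    # area of the 2 cm radius opening, m^2
g = 9.8                  # m/s^2
v = Q / A                # efflux speed from continuity, m/s
h = v**2 / (2 * g)       # Torricelli: v = sqrt(2 g h)
assert abs(h - 4.8) < 0.2
```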
Given the series of prime numbers greater than 9, we can organize them in four rows, according to their last digit ($d=1,3,7$ or $9$), and in $k=1,2,3\ldots$ columns corresponding to the $k$-multiple of $10$ we have to add to those four digits in order to obtain a prime number. Therefore, each prime is identified by a single point $P(k,d)$.
I illustrate this representation in the following scheme.
For instance, at column $k=15$ ($x$-axis), we find two points on the rows $d=1$ and $d=7$ ($y$-axis), because $15\cdot 10+1=151$ and $15\cdot 10+7=157$ are primes.
Within this representation of prime numbers, we can introduce $6$ sinusoidal functions of $k$ such that each prime is intercepted by
one and only one of these $6$ functions:
These $6$ sinusoidal functions are:
$$ \color{green}{ f_0(k)}=5+4\cos(\frac{\pi (k -0)}{3}),\,\,\, \color{blue}{ f_1(k)}=5+4\cos(\frac{\pi (k-1)}{3}), $$ $$ \color{orange}{ g_3(k)}=5+8\cos(\frac{\pi (k-3) }{3}),\,\,\, \color{purple}{ g_4(k)}=5+8\cos(\frac{\pi (k-4)}{3}), $$ $$ \color{brown}{ u_3(k)}=5+2\cos(\frac{\pi (k-3) }{3}),\,\,\, \color{grey}{ u_4(k)}=5+2\cos(\frac{\pi (k-4)}{3}). $$
We can highlight the association between primes and functions by coloring each prime according to the (unique) function that passes through it, as shown in these pictures:
My question is:
Can we reduce the number of these sinusoidal functions and still attain the property that each prime is reached by one and only one function?
The question is motivated by the fact that each decade (i.e. each $k$-multiple of $10$) can give rise to $0,1,2,3$ or $4$ primes, and therefore I somehow suspect that $5$ sinusoidal functions should be enough.
Thank you for your help and suggestions! Sorry for imprecision and trivialities.
NOTE: This post is a wrap of this one, in which the question was unclear.
FINAL NOTE: The answer to this question is YES (thanks to user Empy2):
Four sinusoidal functions are enough to target all prime numbers only once (i.e. each prime number is reached by one and only one wave). The four functions are $$ f_{\pm}(k)=6\pm 2\sqrt{3}\cos\left[\frac{\pi}{3}\left(k-\frac{3}{2}\right) \right], $$
$$ g_{\pm}(k)=4\pm 2\sqrt{3}\cos\left[\frac{\pi}{3}\left(k-\frac{1}{2}\right) \right]. $$
We can represent these $4$ functions as follows: |
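The claim can also be checked numerically. The sketch below is my own code (not from the original post): it verifies that for every prime $10k+d$ below $10^4$, exactly one of the four waves passes through the point $(k,d)$.

```python
import math

def is_prime(n):
    # simple trial division, enough for this range
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def waves(k):
    # the four functions f_plus, f_minus, g_plus, g_minus at column k
    a = 2 * math.sqrt(3) * math.cos(math.pi / 3 * (k - 1.5))
    b = 2 * math.sqrt(3) * math.cos(math.pi / 3 * (k - 0.5))
    return (6 + a, 6 - a, 4 + b, 4 - b)

for p in range(11, 10_000):
    if is_prime(p):
        k, d = divmod(p, 10)
        hits = sum(1 for w in waves(k) if abs(w - d) < 1e-6)
        assert hits == 1, p
```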
My lecturer and I have found separately valid solutions to Poisson's equation in the region of interest for the following problem:
Here is my interpretation of the boundary conditions:
$$V(x,y,z \to \infty) = 0$$ $$V(x,y,z = \sqrt{a^2 - x^2 - y^2})=0$$ $$V(x,y,0) = 0$$
With this, I could find a solution which satisfies all three conditions:
$$V = \frac{z^2 - z \sqrt{a^2 -x^2 -y^2}}{e^z}$$
And thus I can invoke the uniqueness theorem to state that this is the only solution. However, my lecturer used three separate images and showed that the solution was:
Which also seems to indeed satisfy the boundary conditions (however, his differed from mine in having $V(r=a)=0$, which I disagree with, as I believe it should be $V(r \le a) = 0$, since the potential should be $0$ everywhere inside the conductor).
How can we simultaneously have the unique solution? Why is my answer incorrect? How am I supposed to figure out the images to use for this problem? How can I reliably use the method of images if I can't eyeball the correct images to use?
Measures.tex
\section{Passing between probability measures} \label{sec:measures}
The goal of this section is to work out bounds for the error arising when passing back and forth between $\unif_k$ and $\ens{k}$, as described in Section~\ref{sec:outline-dist}. Lemma~\ref{lem:distributions} below gives the bounds we need. The reader will not lose much by just reading its statement; the proof is just technical calculations.
Before stating Lemma~\ref{lem:distributions} we need some definitions.
\ignore{ \begin{definition} Given a set $A \subseteq [k]^n$ and a restriction $(J,x_\barJ)$, we write $A_{x_\barJ}$ for the subset of $[k]^{J}$ defined by $A_{x_\barJ} = \{y \in [k]^J : (x_{\barJ}, y_J) \in A\}$. \end{definition}}
\begin{definition} \label{def:r4r} For $0 \leq \ppn \leq 1$, we say that $J$ is a \emph{$\ppn$-random subset} of $[n]$ if $J$ is formed by including each coordinate $i \in [n]$ independently with probability $\ppn$. Assuming $r \leq n/2$, we say that $J$ is an \emph{$[r,4r]$-random subset} of $[n]$ if $J$ is a $\ppn$-random subset of $[n]$ conditioned on $r \leq \abs{J} \leq 4r$, with $\ppn = 2r/n$. \end{definition} \begin{definition} A \emph{distribution family} $(\distra^m)_{m \in \N}$ (over $[k]$) is a sequence of probability distributions, where $\distra^m$ is a distribution on $[k]^m$. In this paper the families we consider will either be the equal-(nondegenerate-)slices family $\distra^m = \ens{k}^m$ or $\distra^m = \eqs{k}^m$, or will be the product distributions based on a single distribution $\prd$ on $[k]$, $\distra^m = \prd^{\otimes m}$. \end{definition}
\begin{lemma} \label{lem:distributions} Let $(\distra^m)$ and $(\distrb^m)$ be distribution families. Assume $2 \ln n \leq r \leq n/2$. Let $J$ be an $[r,4r]$-random subset of $[n]$, let $x$ be drawn from $[k]^{\barJ}$ according to $\distra^{\abs{\barJ}}$, and let $y$ be drawn from $[k]^J$ according to $\distrb^{\abs{J}}$. The resulting distribution on the composite string $(x,y) \in [k]^n$ has total variation distance from $\distra^n$ which can be bounded as follows: \begin{enumerate} \item (Product to equal-slices.) \label{eqn:distrs-prd-eqs} If $\distra^m = \prd^{\otimes m}$ and $\distrb^m = \eqs{\ell}^m$ for $\ell \leq k$, the bound is \noteryan{You know, we only need this result for the uniform distribution, in which case we can bound the below by the simpler $2k \cdot r/\sqrt{n}$.} \[ \left(2{\textstyle \sqrt{\frac{1}{\min(\prd)}-1}}+2\right) \cdot r / \sqrt{n}. \] \item (Equal-slices to product.) \label{eqn:distrs-eqs-prd} If $\distra^m = \eqs{k}^m$ and $\distrb^m = \prd^{\otimes m}$, the bound is $4k \cdot r/\sqrt{n}$, independent of $\prd$. \item (Equal-slices to equal-slices.) \label{eqn:distrs-eqs-eqs} If $\distra^m = \eqs{k}^m$ and $\distrb^m = \eqs{\ell}^m$ for $\ell \leq k$, the bound is $4k \cdot r/\sqrt{n}$. \end{enumerate} \end{lemma}
Although Lemma~\ref{lem:distributions} involves the equal-slices distribution, one can convert to equal-nondegenerate-slices if desired using Proposition~\ref{prop:degen}.
Since $\eqs{k}^n$ is a mixture of product distributions (Proposition~\ref{prop:eqs-mix}), the main work in proving Lemma~\ref{lem:distributions} involves comparing product distributions. \subsection{Comparing product distributions} \begin{definition} For $\distra$ and $\distrb$ probability distributions on $\Omega^n$, the \emph{$\chi^2$ distance} $\dchi{\pi}{\nu}$ is defined by \[ \dchi{\distra}{\distrb} = \sqrt{\Varx_{x \sim \distra}\left[\frac{\distrb[x]}{\distra[x]}\right]}. \] Note that $\dchi{\distra}{\distrb}$ is \emph{not} symmetric in $\distra$ and $\distrb$. \end{definition}
The $\chi^2$ distance is introduced to help us prove the following fact: \begin{proposition} \label{prop:mix-distance} Let $\prd$ be a distribution on $\Omega$ with full support; i.e., $\min(\pi) \neq 0$. Suppose $\prd$ is slightly mixed with $\distrb$, forming $\wh{\prd}$; specifically, $\wh{\prd} = (1-\ppn) \prd + \ppn \distrb$. Then the associated product distributions $\prd^{\otimes n}$, $\wh{\prd}^{\otimes n}$ on $\Omega^{n}$ satisfy \[ \dtv{\prd^{\otimes n}}{\wh{\prd}^{\otimes n}} \leq \dchi{\prd}{\distrb} \cdot \ppn \sqrt{n}. \] \end{proposition} \begin{proof} It is a straightforward consequence of Cauchy-Schwarz (see, e.g.~\cite[p.\ 101]{Rei89})\noteryan{This is the part using $\min(\prd) \neq 0$, by the way.} that \[ \dtv{\prd^{\otimes n}}{\wh{\prd}^{\otimes n}} \leq \dchi{\prd}{\wh{\prd}} \cdot \sqrt{n}, \] and the identity $\dchi{\prd}{\wh{\prd}} = \ppn \cdot \dchi{\prd}{\distrb}$ follows easily from the definitions. \end{proof} This can be bounded independently of $\distrb$, as follows: \begin{corollary} \label{cor:mix-distance} In the setting of Proposition~\ref{prop:mix-distance}, \[ \dtv{\prd^{\otimes n}}{\wh{\prd}^{\otimes n}} \leq \sqrt{{\textstyle \frac{1}{\min(\prd)}} - 1} \cdot \ppn \sqrt{n}, \] \end{corollary} \begin{proof} It is easy to check that the distribution $\distrb$ maximizing $\dchi{\prd}{\distrb}$ is the one putting all its mass on the $x$ minimizing $\prd[x]$. In this case one calculates $\dchi{\prd}{\distrb} = \sqrt{\frac{1}{\min(\pi)} - 1}$. \end{proof}
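As a sanity check on the corollary, the bound can be verified by exact enumeration for a small alphabet and short strings. The following Python sketch is illustrative only (the specific distributions and parameters are arbitrary choices, not from the paper); it computes the exact total variation distance between $\prd^{\otimes n}$ and $\wh{\prd}^{\otimes n}$ and compares it to the stated bound.

```python
import itertools

# Alphabet of size 3; pi has full support, nu is a point mass on the
# least likely symbol (the worst case identified in the proof).
pi = [0.5, 0.3, 0.2]
nu = [0.0, 0.0, 1.0]
p, n = 0.1, 5
pi_hat = [(1 - p) * a + p * b for a, b in zip(pi, nu)]

def prob(dist, x):
    out = 1.0
    for i in x:
        out *= dist[i]
    return out

# exact total variation distance between the two product distributions
tv = 0.5 * sum(abs(prob(pi, x) - prob(pi_hat, x))
               for x in itertools.product(range(3), repeat=n))
bound = (1 / min(pi) - 1) ** 0.5 * p * n ** 0.5
assert tv <= bound
```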
\subsection{Proof of Lemma~\ref{lem:distributions}}
\begin{definition} \label{def:compos-distr} Let $0 \leq \ppn \leq 1$ and let $(\distra^m)$, $(\distrb^m)$ be distribution families. Drawing from the \emph{$(\ppn, \distra, \distrb)$-composite distribution} on $[k]^n$ entails the following: $J$ is taken to be a $\ppn$-random subset of~$[n]$; $x$ is drawn from $[k]^{\barJ}$ according to $\distra^{\abs{\barJ}}$; and, $y$ is drawn from $[k]^J$ according to $\distrb^{\abs{J}}$. We sometimes think of this distribution as just being a distribution on composite strings $z = (x, y) \in [k]^n$. \end{definition}
Note that the distribution described in Lemma~\ref{lem:distributions} is very similar to the $(\ppn, \distra, \distrb)$-composite distribution, except that it uses an $[r, 4r]$-random subset rather than a $\ppn$-random subset. We can account for this difference with a standard Chernoff (large-deviation) bound:\noteryan{Citation needed?} \begin{fact} \label{fact:dev} If $J$ is a $\ppn$-random subset of $[n]$ with $\ppn = 2r/n$ as in Definition~\ref{def:r4r}, then $r \leq \abs{J} \leq 4r$ holds except with probability at most $2\exp(-r/4)$. \end{fact}
The utility of using $\ppn$-random subsets in Definition~\ref{def:compos-distr} is the following observation: \begin{fact} If $\prd$ and $\distrb$ are distributions on $[k]$, thought of also as product distribution families, then the $(\ppn, \prd, \distrb)$-composite distribution on $[k]^n$ is precisely the product distribution $\wh{\prd}^{\otimes n}$, where $\wh{\prd}$ is the mixture distribution $(1-\ppn)\prd + \ppn \distrb$ on $[k]$. \end{fact}
Because of this, we can use Corollary~\ref{cor:mix-distance} to bound the total variation distance between $\prd^{\otimes n}$ and a composite distribution. We conclude: \begin{proposition} \label{prop:prod-composite} Let $\prd$ and $\distrb$ be any distributions on $[k]$, thought of also as product distribution families. Writing $\wt{\prd}$ for the $(\ppn,\prd,\distrb)$-composite distribution on strings in $[k]^n$, we have \[ \dtv{\prd^{\otimes n}}{\wt{\prd}} \leq {\textstyle \sqrt{\frac{1}{\min(\prd)}-1}} \cdot \ppn \sqrt{n}. \] \end{proposition}
Recall that for any $\ell \leq k$, the equal-slices distribution $\eqs{\ell}^{m}$ on $m$ coordinates is a mixture of product distributions $\spac^{\otimes m}$ on $[k]^m$. We can therefore average Proposition~\ref{prop:prod-composite} over $\distrb$ to obtain: \begin{proposition} \label{prop:prod-eqs} If $\wt{\pi}$ denotes the $(\ppn,\pi,\eqs{\ell})$-composite distribution on strings in $[k]^n$, where $\ell \leq k$, then we have \[ \dtv{\pi^{\otimes n}}{\wt{\pi}} \leq {\textstyle \sqrt{\frac{1}{\min(\pi)}-1}} \cdot \ppn \sqrt{n}. \] \end{proposition} Here we have used the following basic bound, based on the triangle inequality: \begin{fact} \label{fact:tv-mix} Let $(\distrb_\kappa)_{\kappa \in K}$ be a family of distributions on $\Omega^n$, let $\varsigma$ be a distribution on $K$, and let $\overline{\distrb}$ denote the associated mixture distribution, given by drawing $\kappa \sim \varsigma$ and then drawing from $\distrb_\kappa$. Then \[ \dtv{\distra}{\overline{\distrb}} \leq \Ex_{\kappa \sim \varsigma}[\dtv{\distra}{\distrb_\kappa}]. \] \end{fact}
If we instead use this fact to average Proposition~\ref{prop:prod-composite} over $\prd$, we can obtain: \begin{proposition} \label{prop:eqs-prod} Let $\distrb$ be any distribution on $[k]$. Writing $\distra$ for the $(\ppn, \eqs{k}, \distrb)$-composite distribution on strings in $[k]^n$, we have \[ \dtv{\eqs{k}^n}{\distra} \leq (2k-1)\ppn \sqrt{n}. \] \end{proposition} \begin{proof} Thinking of $\eqs{k}^m$ as the mixture of product distributions $\spac^{\otimes m}$, where $\spac$ is a random spacing on $[k]$, Fact~\ref{fact:tv-mix} and Proposition~\ref{prop:prod-composite} imply \[ \dtv{\eqs{k}^n}{\distra} \leq \Ex_{\spac}\left[{\textstyle \sqrt{\frac{1}{\min(\spac)}-1}}\right] \cdot \ppn \sqrt{n}. \] We can upper-bound the expectation\noteryan{Undoubtedly someone has worked hard on this $-1/2$th moment of the least spacing before (Devroye '81 or '86 perhaps), but I think it's probably okay to do the following simple thing here} by \begin{multline*} \Ex_{\spac}\left[{\textstyle \sqrt{\frac{1}{\min(\spac)}}}\right] \quad=\quad \int_{0}^\infty \Pr_{\spac}\left[{\textstyle \sqrt{\frac{1}{\min(\spac)}}} \geq t\right]\,dt \quad=\quad \int_{0}^\infty \Pr_{\spac}[\min(\spac) \leq 1/t^2]\,dt \\ \leq\quad k + \int_{k}^\infty \Pr_{\spac}[\min(\spac) \leq 1/t^2]\,dt \quad\leq\quad k + \int_{k}^\infty (k(k-1)/t^2) \,dt \quad=\quad 2k-1, \end{multline*} where in the second-to-last step we used Proposition~\ref{prop:rand-min}. \end{proof} Averaging now once more in the second component, we obtain the following: \begin{proposition} \label{prop:eqs-eqs} Let $2 \leq \ell \leq k$ and let $\distra'$ denote the $(\ppn, \eqs{k}, \eqs{\ell})$-composite distribution on strings in $[k]^n$. Then \[ \dtv{\eqs{k}^n}{\distra'} \leq (2k-1) \ppn \sqrt{n}. \] \end{proposition}
We can now obtain the proof of Lemma~\ref{lem:distributions}:
\begin{proof} The three statements in Lemma~\ref{lem:distributions} essentially follow from Propositions~\ref{prop:prod-eqs}, \ref{prop:eqs-prod}, and \ref{prop:eqs-eqs}, taking $\ppn = 2r/n$. This would give bounds of $2{\textstyle \sqrt{\frac{1}{\min(\pi)}-1}} \cdot r / \sqrt{n}$, $(4k-2) \cdot r/\sqrt{n}$, and $(4k-2) \cdot r/\sqrt{n}$, respectively. However we need to account for conditioning on $r \leq \abs{J} \leq 4r$. By Fact~\ref{fact:dev}, this conditioning increases the total variation distance by at most $2\exp(-r/4)$. Using the lower bound $r \geq 2 \ln n$ from the lemma's hypothesis, this quantity is at most $2r/\sqrt{n}$, completing the proof. \end{proof} |
This answer tries to give more connections between these two decompositions than their differences.
SVD actually stems from the eigenvalue decomposition of real symmetric matrices. If a matrix $A \in \mathbb{R}^{n \times n}$ is symmetric, then there exists a real orthogonal matrix $O$ such that $$A = O\text{diag}(\lambda_1, \ldots, \lambda_n)O', \tag{1}$$where $\lambda_1, \ldots, \lambda_n$ are all real eigenvalues of $A$. In other words, $A$ is orthogonally similar to the diagonal matrix $\text{diag}(\lambda_1, \ldots, \lambda_n)$.
For a general (rectangular) real matrix $B \in \mathbb{R}^{m \times n}$, clearly $B'B$ is square, symmetric and positive semi-definite, thus all its eigenvalues are real and non-negative. By definition, the singular values of $B$ are the arithmetic square roots of the positive eigenvalues of $B'B$, say, $\mu_1, \ldots, \mu_r$. Since $B'B$ has its eigen-decomposition $$B'B = O\text{diag}(\mu_1^2, \ldots, \mu_r^2, 0, \ldots, 0)O',$$it can be shown (doing a little clever algebra) that there exist orthogonal matrices $O_1 \in \mathbb{R}^{m \times m}$ and $O_2 \in \mathbb{R}^{n \times n}$ such that $B$ has the following Singular Value Decomposition (SVD):$$B = O_1 \text{diag}(\text{diag}(\mu_1, \ldots, \mu_r), 0)O_2, \tag{2}$$where $0$ in the diagonal matrix is a zero matrix of size $(m - r) \times (n - r)$. $(2)$ is sometimes phrased as: $B$ is orthogonally equivalent to the diagonal matrix $\text{diag}(\text{diag}(\mu_1, \ldots, \mu_r), 0)$.
In view of $(1)$ and $(2)$, both eigen-decomposition (in its narrow sense, for symmetric matrices only) and SVD try to look for representative elements under some equivalence relations.
In detail, the eigen-decomposition $(1)$ states that under the orthogonal similarity relation, all symmetric matrices can be classified into equivalence classes, and for each equivalence class the representative element can be chosen to be the simple diagonal matrix $\text{diag}(\lambda_1, \ldots, \lambda_n)$. It can further be shown that the set of eigenvalues $\{\lambda_1, \ldots, \lambda_n\}$ is the maximal invariant under the orthogonal similarity relation.
By comparison, the SVD $(2)$ states that under the orthogonal equivalence relation, all $m \times n$ matrices can be classified into equivalence classes, and for each equivalence class the representative element can also be chosen to be a diagonal matrix $\text{diag}(\text{diag}(\mu_1, \ldots, \mu_r), 0)$. It can further be shown that the set of singular values $\{\mu_1, \ldots, \mu_r\}$ is the maximal invariant under the orthogonal equivalence relation.
In summary, given a matrix $M$ to be decomposed, both eigen-decomposition and SVD aim to find its simplest profile. This is not much different from seeking a representative basis under which a linear transformation has its simplest coordinate expression. Moreover, the above (incomplete) arguments show that eigen-decomposition and SVD are closely related -- in fact, one way to derive the SVD is entirely from the eigen-decomposition.
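The connection described above is easy to check numerically: the eigenvalues of $B'B$ are the squared singular values of $B$. A short NumPy sketch with an arbitrary random matrix (my own example):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 3))

# eigenvalues of B'B are the squared singular values of B
evals = np.linalg.eigvalsh(B.T @ B)              # ascending order
sing_from_eig = np.sqrt(np.maximum(evals, 0.0))[::-1]

sing = np.linalg.svd(B, compute_uv=False)        # descending order
assert np.allclose(sing, sing_from_eig)
```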
Let $\Omega\subset\mathbb{R}^N$ be a bounded, smooth domain. Assume that $\mu \in \mathcal{M}(\Omega)$ has compact support in $\Omega.$ Let $u\in W_0^{1,1}(\Omega)$ be a solution of $$ \left\{ \begin{array}{rl} -\Delta u=\mu &\mbox{ if $x\in \Omega$,} \\ u=0 &\mbox{ if $x\in \partial \Omega$,} \end{array} \right.\tag{1} $$
where by solution, it is meant that $$-\int _\Omega u\Delta \psi=\int_\Omega\nabla u\nabla \psi=\int_\Omega \psi d \mu,\ \forall \psi\in C_0^\infty(\overline{\Omega}).$$
As $\mu$ has compact support, $u$ is harmonic in a neighbourhood of $\partial\Omega$ and so the normal derivative of $u$ is well defined at the boundary. My question is, how do we prove that $$-\int _\Omega \psi d\mu=-\int _\Omega \nabla u \nabla \psi+\int _{\partial \Omega} \frac{\partial u}{\partial \eta}\psi ,\ \forall \ \psi\in C^1(\overline{\Omega}).$$
I have tried two approaches. The first consists in proving that $$\left|\int_\Omega \nabla u \nabla \psi\right |\le C\|\psi\|_\infty,\ \forall \ \psi\in C^1(\overline{\Omega})\tag{2}.$$
If $(2)$ is true then, the linear functional $T:C^1(\overline{\Omega})\to\mathbb{R}$ defined by $$T\psi=\int_\Omega \nabla u\nabla \psi,$$
can be extended to a bounded linear functional defined on $C^0(\overline{\Omega})$. The result then follows from the Riesz representation theorem. However, I could not prove that $(2)$ is true.
In the second approach, I was trying to use a limiting argument: for example, for small $\delta>0$, let $\Omega_\delta=\{x\in \Omega:\ \operatorname{dist}(x,\partial \Omega)<\delta\}$. As $\mu$ has compact support, we have that for all $\psi\in C^1(\overline{\Omega})$, $$\int_\Omega \nabla u\nabla \psi=\int _{\Omega\setminus\Omega_\delta }\nabla u\nabla \psi+\int _{\Omega_\delta }\nabla u\nabla \psi=\int _{\Omega\setminus\Omega_\delta }\nabla u\nabla \psi+\int _{\partial\Omega_\delta }\psi \frac{\partial u}{\partial \eta}$$
Any idea is appreciated.
Remark: $C_0^\infty(\overline{\Omega})=\{u\in C^\infty (\overline{\Omega}):\ u(x)=0,\ x\in \partial \Omega\}$. |
First, to dispel a possible cognitive dissonance: reasoning about infinite structures is not a problem, we do it all the time. As long as the structure is finitely describable, that's not a problem. Here are a few common types of infinite structures:
languages (sets of strings over some alphabet, which may be finite); tree languages (sets of trees over some alphabet); execution traces of a non-deterministic system; real numbers; sets of integers; sets of functions from integers to integers; …

Coinductivity as the largest fixpoint
Where inductive definitions build a structure from elementary building blocks, coinductive definitions shape structures from how they can be deconstructed. For example, the type of lists whose elements are in a set A is defined as follows in Coq:
Inductive list (A:Set) : Set :=
| nil : list A
| cons : A -> list A -> list A.
Informally, the list type is the smallest type that contains all values built from the nil and cons constructors, with the axiom that $\forall x \, y, \: \mathtt{nil} \ne \mathtt{cons} \: x \: y$. Conversely, we can define the largest type that contains all values built from these constructors, keeping the discrimination axiom:
CoInductive colist (A:Set) : Set :=
| conil : colist A
| cocons : A -> colist A -> colist A.
list is isomorphic to a subset of colist. In addition, colist contains infinite lists: lists with cocons upon cocons.
CoFixpoint flipflop : colist ℕ := cocons 1 (cocons 2 flipflop).
CoFixpoint from (n:ℕ) : colist ℕ := cocons n (from (1 + n)).
flipflop is the infinite (circular) list $1::2::1::2::\ldots$; from 0 is the infinite list of natural numbers $0::1::2::\ldots$.
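As an aside, lazy generators in a mainstream language give a rough operational analogue of these two colists. This is a hedged sketch: Python generators model only productivity, not Coq's guardedness checking.

```python
from itertools import islice

def flipflop():
    # the circular colist 1 :: 2 :: 1 :: 2 :: ...
    while True:
        yield 1
        yield 2

def from_(n):
    # the colist n :: n+1 :: n+2 :: ...
    while True:
        yield n
        n += 1

print(list(islice(flipflop(), 6)))  # [1, 2, 1, 2, 1, 2]
print(list(islice(from_(0), 5)))    # [0, 1, 2, 3, 4]
```

As with colists, consumers may only observe finite prefixes of these infinite streams.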
A recursive definition is well-formed if the result is built from smaller blocks: recursive calls must work on smaller inputs. A corecursive definition is well-formed if the result builds larger objects. Induction looks at constructors, coinduction looks at destructors. Note how the duality not only changes smaller to larger but also inputs to outputs. For example, the reason the flipflop and from definitions above are well-formed is that the corecursive call is guarded by a call to the cocons constructor in both cases.
Where statements about inductive objects have inductive proofs, statements about coinductive objects have coinductive proofs. For example, let's define the infinite predicate on colists; intuitively, the infinite colists are the ones that don't end with conil.
CoInductive Infinite A : colist A -> Prop :=
| Inf : forall x l, Infinite l -> Infinite (cocons x l).
To prove that colists of the form from n are infinite, we can reason by coinduction. from n is equal to cocons n (from (1 + n)). This shows that from n is larger than from (1 + n), which is infinite by the coinduction hypothesis, hence from n is infinite.
Bisimilarity, a coinductive property
Coinduction as a proof technique also applies to finitary objects. Intuitively speaking, inductive proofs about an object are based on how the object is built. Coinductive proofs are based on how the object can be decomposed.
When studying deterministic systems, it is common to define equivalence through inductive rules: two systems are equivalent if you can get from one to the other by a series of transformations. Such definitions tend to fail to capture the many different ways non-deterministic systems can end up having the same (observable) behavior in spite of having different internal structure. (Coinduction is also useful to describe non-terminating systems, even when they're deterministic, but this isn't what I'll focus on here.)
Nondeterministic systems such as concurrent systems are often modeled by labeled transition systems. An LTS is a directed graph in which the edges are labeled. Each edge represents a possible transition of the system. A trace of an LTS is the sequence of edge labels over a path in the graph.
Two LTS can behave identically, in that they have the same possible traces, even if their internal structure is different. Graph isomorphism is too strong to define their equivalence. Instead, an LTS $\mathscr{A}$ is said to simulate another LTS $\mathscr{B}$ if every transition of the second LTS admits a corresponding transition in the first. Formally, let $S$ be the disjoint union of the states of the two LTS, $L$ the (common) set of labels and $\rightarrow$ the transition relation. The relation $R \subseteq S \times S$ is a simulation if $$ \forall (p,q)\in R, \forall p'\in S, \forall\alpha\in L, \text{ if } p \stackrel\alpha\rightarrow p' \text{ then } \exists q', \; q \stackrel\alpha\rightarrow q' \text{ and } (p',q')\in R$$
$\mathscr{A}$ simulates $\mathscr{B}$ if there is a simulation in which all the states of $\mathscr{B}$ are related to a state in $\mathscr{A}$. If $R$ is a simulation in both directions, it is called a bisimulation. Simulation is a coinductive property: any observation on one side must have a match on the other side.
There are potentially many bisimulations in an LTS. Different bisimulations might identify different states. Given two bisimulations $R_1$ and $R_2$, the relation given by taking the union of the relation graphs $R_1 \cup R_2$ is itself a bisimulation, since related states give rise to related states for both relations. (This holds for infinite unions as well. The empty relation is an uninteresting bisimulation, as is the identity relation.) In particular, the union of all bisimulations is itself a bisimulation, called bisimilarity. Bisimilarity is the coarsest way to observe a system that does not distinguish between distinct states.
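The greatest-fixpoint reading suggests a direct, if naive, algorithm: start from the full relation and delete pairs that violate the transfer condition until nothing changes. Here is a hedged Python sketch; the encoding of an LTS as a set of (state, label, state) triples is an assumption of this example, not a standard API.

```python
def bisimilarity(states, labels, trans):
    """Greatest fixpoint: start from the full relation S x S and
    remove pairs until the bisimulation condition holds both ways."""
    def succs(p, a):
        # successors of state p under label a
        return {q for (s, l, q) in trans if s == p and l == a}

    R = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(R):
            ok = all(
                any((p2, q2) in R for q2 in succs(q, a))
                for a in labels for p2 in succs(p, a)
            ) and all(
                any((p2, q2) in R for p2 in succs(p, a))
                for a in labels for q2 in succs(q, a)
            )
            if not ok:
                R.discard((p, q))
                changed = True
    return R

# Two structurally different systems with the same traces:
# A: x --a--> y --a--> x --a--> ...    B: u --a--> u (self-loop)
states = {"x", "y", "u"}
trans = {("x", "a", "y"), ("y", "a", "x"), ("u", "a", "u")}
print(("x", "u") in bisimilarity(states, {"a"}, trans))  # True
```

The two-state cycle and the one-state loop are not graph-isomorphic, yet the computed bisimilarity relates them, exactly as the text argues.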
Bisimilarity is a coinductive property. It can be defined as the largest fixpoint of an operator: it is the largest relation which, when extended to identify equivalent states, remains the same.
References |
How to obtain the gradients of such a complicated Deep network with batch normalization (preferably in matrix/vector notation) \begin{align} L\left(\left\{W_\ell, \gamma_\ell, \beta_\ell\right\}_{\ell=1}^3\right) := \sum_{i=1}^N \| g_3 \left(W_3 \ f_2 \left( g_2 \left(W_2 \ f_1 \left(g_1\left(W_1 x_i \right)\right) \right) \right) \right) - y_i \|_2^2 , \end{align} with respect to $\left\{W_\ell, \gamma_\ell, \beta_\ell\right\}_{\ell=1}^3$, where $g_\ell$ is parameterized by $\gamma_\ell$ and $\beta_\ell$?
Here $x_i \in \mathbb{R}^n$, $W_1 \in \mathbb{R}^{m \times n}$, $W_2 \in \mathbb{R}^{p \times m}$, $W_3 \in \mathbb{R}^{q \times p}$, and $y_i \in \mathbb{R}^q$, and $f_\ell(z) = \frac{1}{1 + \exp(-z)}$ applied element-wise.
And, $$\eqalign{ g_\ell(z; \gamma_\ell, \beta_\ell) &= \gamma_\ell \frac{\left( z- \mu(z) \right)}{\left( \sigma(z) + \epsilon \right)^{1/2}} + \beta_\ell\cr \mu(z) &= \alpha \ 1^Tz \cr \sigma(z) &= \alpha \sum_{k=1}^m \left( z[k] - \mu(z) \right)^2 \equiv \alpha 1^T \left[ \left( z- \mu(z) \right) \odot \left(z - \mu(z) \right) \right]\cr }$$ where $1^T$ is a row vector with all ones, $\odot$ is an element-wise multiplication, and $\alpha$ and $\epsilon$ are known scalars.
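To make the definitions concrete, here is a small numerical sketch of the map $g$ together with a central-difference gradient check on a stand-in scalar loss. All specific values (the input $z$, the loss, $\gamma$, $\beta$) are illustrative assumptions; with $\alpha = 1/m$ the map $\mu$ recovers the usual batch mean.

```python
import math

def g(z, gamma, beta, alpha, eps):
    # the batch-norm-style map g from the question
    mu = alpha * sum(z)
    var = alpha * sum((zi - mu) ** 2 for zi in z)
    return [gamma * (zi - mu) / math.sqrt(var + eps) + beta for zi in z]

def loss(z, gamma, beta, alpha, eps):
    # a stand-in scalar objective: squared norm of g's output
    return sum(v * v for v in g(z, gamma, beta, alpha, eps))

z = [1.0, 2.0, 4.0]
alpha, eps = 1.0 / len(z), 1e-5   # alpha = 1/m gives the usual batch mean
gamma, beta, h = 1.5, 0.2, 1e-6

# central finite differences: a cheap check for any hand-derived gradient
d_gamma = (loss(z, gamma + h, beta, alpha, eps)
           - loss(z, gamma - h, beta, alpha, eps)) / (2 * h)
d_beta = (loss(z, gamma, beta + h, alpha, eps)
          - loss(z, gamma, beta - h, alpha, eps)) / (2 * h)
print(d_gamma, d_beta)
```

Whatever closed-form gradients one derives for the full network, this kind of finite-difference probe is a useful sanity check on each layer in isolation.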
Note:
The gradients of the cost function without $g_\ell(\cdot)$, i.e. without batch normalization, are given in the link. But how does one address the case with the more complicated batch normalization?
Thank you so much in advance for your help |
Given an SDE for an underlying:
$$dS(t) = \mu(S,t)dt+\sigma(S,t)dW(t)$$
the SDE for the value of the option $V=V(S,t)$ is given via Ito's lemma as:
$$dV = V_tdt+V_S\mu(S,t)dt+\frac{1}{2}V_{SS}\sigma^2(S,t)dt+V_S\sigma(S,t)dW(t)$$
It seems that this would result in an SDE containing $S(t)$.
How does one then obtain an SDE for the option value so that it can be simulated directly without simulating the underlyings, i.e. something like
$$dV(t) = m(V,t)dt+s(V,t)dW(t)?$$ |
Simulating Nonlinear Sound Propagation in an Acoustic Horn
When modeling acoustic devices, it’s often enough to account for linear propagation alone, even though nonlinearities are always present. However, when the signaling amplitude reaches high levels in a design, nonlinear effects become important. Engineers can include nonlinear effects in simulations by taking advantage of the
Nonlinear Acoustics (Westervelt) feature in the COMSOL Multiphysics® software, as demonstrated by an exponential horn example.
Using an Acoustic Horn to Increase Sound Amplitude
One of the oldest ways of amplifying sound is by using an acoustic horn. A classic example is the mechanical phonograph. Invented by Thomas Edison in the 1870s, the phonograph is a system made of a foil-wrapped wooden cylinder (later made of wax); a needle; and a horn placed against the foil, or metal diaphragm.
With a phonograph, you can make a recording simply by speaking into the horn, with the vibration causing the needle to etch grooves into the foil. You can also listen to a recording by placing the needle at the beginning of the groove and turning the handle of the machine. As the needle moves along the groove pattern, the vibrations it makes are amplified by the horn. These capabilities inspired acoustic engineers to improve upon the design and soon, the cylinders were replaced with flat record discs, and more advanced horns were used to improve the sound amplification.
Left: Thomas Edison and an early phonograph. Image in the public domain in the United States, via Wikimedia Commons. Right: A phonograph with a classic horn shape next to cylinders. Image by Tomasz Sienicki — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.
Nowadays, the acoustic horn is a common element used in electrodynamic loudspeakers or for signaling on ships and trains. At first, horn speakers could not amplify sound very far. After electricity entered the picture, though, horn speakers could transfer low levels of electric power into high levels of sound capable of filling large venues. Instead of a mechanically driven diaphragm, an electrically driven loudspeaker uses an electromagnetic moving coil and diaphragm to produce sound that is amplified through a horn. These high-efficiency speakers are often used in public address systems at outdoor parks or sports stadiums as well as in loud alarm systems. For high-amplitude signaling, the electromagnetic motor is often replaced with a compressed air driver.
The reason the horn is so effective is because its shape allows for a controlled cross-section increase. This results in a so-called impedance match between the sound source (a loudspeaker) and the surrounding air. The idea is that an acoustic horn can radiate sound efficiently in a large frequency range. Efficient radiation is obtained when the pressure is in phase with the particle velocity, which requires a large surface at lower frequencies. The acoustic horn permits this; the sound is generated by a small source (at the throat of the horn) but radiated by a large surface (the mouth of the horn). The impedance matching properties of the horn ensure that the radiated wave front is altered as little as possible (from throat to mouth), keeping the pressure and particle velocity in phase. The simplest one-dimensional description of horn acoustics is given by the Webster horn equation. One common type of horn driver is the exponential horn, which has good impedance-matching capabilities.
When the acoustic horns are driven at very high amplitudes — as is often the case for signaling (for ships or trains) or in sound systems used for concert venues — the nonlinear behavior of the acoustics needs to be taken into account. Because of the geometry of the horn, the high sound pressure levels (SPLs) are typically located in the throat of the horn. While nonlinear propagation is present at lower amplitudes, it doesn’t show its effects until high sound amplitudes are reached. Thus, it’s important to account for nonlinear effects in simulations when using an acoustic horn for high-amplitude signaling.
As a good rule of thumb for the applicability of linear acoustics, it is valid as long as the acoustic pressure $p$ is much smaller than $\rho c^2$ (that is, $|p| \ll \rho c^2$), where $\rho$ is the fluid density (1.2 kg/m³ for air) and $c$ is the speed of sound (343 m/s for air). This gives a value of $\rho c^2 = 1.4 \cdot 10^5$ Pa for air. Assuming that “much smaller than” corresponds to a factor of 100, linear acoustics applies up to roughly an SPL of 154 dB.

Modeling High-Amplitude Acoustics with the Westervelt Model
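The rule-of-thumb numbers just quoted are easy to reproduce; the factor-of-100 threshold, the RMS convention, and the standard 20 µPa reference pressure are the assumptions in this sketch.

```python
import math

rho, c = 1.2, 343.0          # air density [kg/m^3], speed of sound [m/s]
p_ref = 20e-6                # standard reference pressure [Pa]

rho_c2 = rho * c**2          # ~1.4e5 Pa
p_max = rho_c2 / 100         # "much smaller" taken as a factor of 100
spl = 20 * math.log10((p_max / math.sqrt(2)) / p_ref)  # RMS convention assumed
print(round(rho_c2), round(spl))  # about 1.4e5 Pa and 154 dB
```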
You can model the propagation of nonlinear acoustic waves generated by a horn using the Acoustics Module, an add-on to COMSOL Multiphysics. Simulation allows you to see how the input waveform at the horn’s throat affects the waveform as output at the mouth. In this exponential horn example, the model is set up so that a harmonic input at the throat is driven at the frequency $f_0$ = 130 Hz. This generates an acoustic wave with a frequency spectrum containing the harmonics $2f_0$, $3f_0$, $4f_0$, etc. The model mesh resolves up to the fourth harmonic $4f_0$. Nonlinear acoustic simulations require a full nonlinear transient analysis of the system, as frequency-domain models only apply in the linear case.
A schematic of the acoustic horn model.
The
Pressure Acoustics, Transient interface is used in this example for the transient computation of the acoustic pressure, while the dissipative (thermally conducting and viscous) material model and the Nonlinear Acoustics (Westervelt) domain condition (the latter of which is available as of version 5.4 of the COMSOL® software) simulate the nonlinear propagation of acoustics in the physical domain. As shown below for the 2D axisymmetric model, the model includes the Exterior Field Calculation boundary condition (also available as of version 5.4), which comes into play when computing and visualizing the radiation pattern (more on that later), as well as perfectly matched layers (PMLs), which are used together with the lossless Transient Pressure Acoustics Model node to simulate the open nonreflecting condition toward infinity. 2D axisymmetric model setup.
The nonlinear transient study has two steps:
Time-dependent analysis
Time-to-frequency fast Fourier transform (FFT)
For the first step, the
Nonlinear Acoustics (Westervelt) feature automatically tunes the time-dependent solver. This convenient functionality helps make the underlying nonlinear problem more effective. Once the solution reaches a steady state, a time-to-frequency FFT is performed, and the result is stored on the exterior field calculation boundary, where it is used to calculate the exterior field. Evaluating the Simulation Results
First up in the results, you can take a look at the acoustic pressure. The plot to the left below compares the linear (green) and the nonlinear (blue) behaviors in a point just in front of the horn. The red lines correspond to the amplitude computed from the frequency-domain model. From this graph, you can visualize the total nonlinear acoustic pressure at high amplitudes.
On the left, there’s a comparison of the linear and nonlinear approaches for computing the acoustic pressure. The animation on the right visualizes the total nonlinear acoustic pressure profile.
Next, you can analyze the frequency content of the signals. The image on the left shows the transient computation of the acoustic pressure, zoomed in on 5 periods. The image on the right displays the frequency spectrum for both the linear and nonlinear analyses. From the graph, it is evident that the nonlinear model contains higher harmonic components. Due to the nonlinear behavior, energy is pumped from the fundamental frequency to the higher harmonics.
Acoustic pressure as function of time (left) and frequency spectrum with the nonlinear harmonic components clearly visible (right).
Next, you can examine the exterior field. The exterior field calculation feature makes it possible to visualize the radiation pattern of the acoustic field at any given distance from the source, enabling you to study the exterior field SPL. Below on the left is the normalized exterior field SPL, showing the nonlinear analysis versus the single frequency domain. In the image on the right, you can see the nonlinear transient analysis, showing the exterior field SPL at the first three harmonic frequency components. The latter graph also shows the relative amplitude of the various components.
On the left is the normalized exterior field SPL for a nonlinear analysis versus a single frequency domain. On the right is the nonlinear transient analysis for the exterior field SPL of the first three frequency components. Nonlinear effects in the exponential horn model.
As shown from this example, the
Nonlinear Acoustics (Westervelt) feature and the Exterior Field Calculation boundary condition help account for and visualize nonlinear propagation and effects in acoustics simulations, thereby enabling engineers to improve upon acoustics designs requiring higher-amplitude signaling. Next Steps
Try modeling an acoustic horn yourself: Click the button below, which will take you to the Application Gallery. From there, you can download a step-by-step guide and the MPH-file (must log into your COMSOL Access account and have a valid software license).
As several others have pointed out, $\infty$ is not a number. So you need to deal it with a bit of care.
To clarify your doubt: the way you have written it, you look at $n \times 0$ and then let $n \rightarrow \infty$. So it is true that $$\displaystyle \lim_{n \rightarrow \infty}\left( n \times 0 \right)= 0$$
However, when people write $\infty \times 0$ usually it is a shorthand to denote the indeterminate form when some quantity tends to infinity and some other quantity tends to zero in a limiting sense i.e. expressions of the form $$\lim_{x \rightarrow 0} \left( f(x) \times g(x) \right)$$ where $\displaystyle \lim_{x \rightarrow 0} f(x) = \infty$ and $\displaystyle \lim_{x \rightarrow 0} g(x) = 0$.
(Note that $\infty$ is not a number in the conventional sense. It is just a shorthand to denote that something grows unbounded i.e. given any number your function can take a value larger than that number.)
For instance, let $f(x) = \frac{1}{x}$ as $g(x) = x$, then $f(x) \times g(x) = 1$, $\forall x \neq 0$ and hence $$\displaystyle \lim_{x \rightarrow 0} \left( f(x) \times g(x)\right) = \lim_{x \rightarrow 0} 1 = 1$$ However, $\displaystyle \lim_{x \rightarrow 0} f(x) = \infty$ and $\displaystyle \lim_{x \rightarrow 0} g(x) = 0$ and hence in this case, the indeterminate form evaluates to $1$.
The case which resembles what you have written down is when $f(x) = \frac{1}{x}$ and $g(x) = 0$. This again is an indeterminate form since $\displaystyle \lim_{x \rightarrow 0} f(x) = \infty$ and $\displaystyle \lim_{x \rightarrow 0} g(x) = 0$. However in this case, $f(x) \times g(x) = 0$, $\forall x \neq 0$ and hence $$\displaystyle \lim_{x \rightarrow 0} \left( f(x) \times g(x) \right) = 0$$
Yet another example is to look at $f(x) = \frac{1}{x}$ and $g(x) = \sqrt{x}$. This again is an indeterminate form since $\displaystyle \lim_{x \rightarrow 0} f(x) = \infty$ and $\displaystyle \lim_{x \rightarrow 0^+} g(x) = 0$. Note that $f(x) \times g(x) = \frac{1}{\sqrt{x}}$, $\forall x \neq 0$ and hence $$\displaystyle \lim_{x \rightarrow 0^+} (f(x) \times g(x)) = \displaystyle \lim_{x \rightarrow 0^+} \frac{1}{\sqrt{x}} = \infty$$
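These three limits are easy to probe numerically, evaluating the products along a sequence $x \to 0^+$; this is purely illustrative, plain Python, and proves nothing on its own.

```python
def probe(f, g, xs):
    # evaluate f(x) * g(x) along a sequence approaching 0 from the right
    return [f(x) * g(x) for x in xs]

xs = [10.0 ** -k for k in range(1, 8)]

case1 = probe(lambda x: 1 / x, lambda x: x, xs)         # tends to 1
case2 = probe(lambda x: 1 / x, lambda x: 0.0, xs)       # identically 0
case3 = probe(lambda x: 1 / x, lambda x: x ** 0.5, xs)  # grows without bound

print(case1[-1], case2[-1], case3[-1])
```

All three are "$\infty \times 0$" forms, yet the products settle near 1, stay at 0, and blow up, respectively, matching the three limits above.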
Hence, you cannot associate a unique value to $\infty \times 0$. It depends on the problem at hand. You can read more about indeterminate form here. (As always with wikipedia, read it just to get a general overall idea.) |
Every function of the form:$$\psi(x, t) = A\cos[k(x\pm ct) + \phi]$$is a solution of the wave equation $$\frac{\partial^2\psi}{\partial x^2} = \frac{1}{c^2}\frac{\partial^2\psi}{\partial t^2}$$The equation is linear, and this means that the general solution is in fact, any linear combination of the possible solutions, something that we can express as follows:$$\psi(x, t) = \int_{-\infty}^{+\infty}dk\ [A_+(k)\cos(k(x-ct)) + A_-(k)\cos(k(x+ct)) + B_+(k)\sin(k(x-ct)) + B_-(k)\sin(k(x+ct))\ ]$$
The above integral is just the Fourier representation of $\psi$, with the wave equation enforcing $k^2 = \frac{\omega^2}{c^2}$.
Now, solutions of this type are often called wave packets, as by suitable choice of the coefficients, the $A$s and the $B$s, you can represent any pulse that travels according to the wave equation. These
(pulse-like) wave packets propagate like disturbances with speed $c$.
To see this easily, let $A_-$, $B_+$ and $B_-$ all vanish for some pulse $\psi_1(x, 0)$, so that we only have the first cosine terms. Let's assume that at $t=0$, the maximum of the pulse is at $x=0$, where $x-ct = 0$. This means that the maximum of the pulse occurs when the argument of all the cosines vanish i.e. when $x = ct$ (because the only position dependence is in the cosines). This means that the peak travels with speed $c$, and the same argument carries over (for this wave equation) to any other part with a particular value of $\psi$.
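This is easy to verify numerically for a single right-moving pulse: a finite-difference check that $\psi(x,t) = f(x - ct)$ satisfies $\psi_{xx} = \psi_{tt}/c^2$. The profile $f$ and constants below are chosen arbitrarily for illustration.

```python
import math

c = 2.0
f = lambda s: math.exp(-s * s)        # any smooth profile works
psi = lambda x, t: f(x - c * t)       # right-moving pulse

# second-order central differences for psi_xx and psi_tt at a sample point
x, t, h = 0.3, 0.1, 1e-4
psi_xx = (psi(x + h, t) - 2 * psi(x, t) + psi(x - h, t)) / h**2
psi_tt = (psi(x, t + h) - 2 * psi(x, t) + psi(x, t - h)) / h**2
print(abs(psi_xx - psi_tt / c**2) < 1e-3)  # True: the pulse solves the wave equation
```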
Here's a sample wave packet propagating according to the wave equation (taken from the linked article)
It's worth noting that this pulse's motion ultimately comes from the phase velocity (i.e. the change of phase $x-ct$) of each of the $\cos$ terms, which is the motion of those solutions themselves. They do, of course, travel locally - the regions of constant phase move with speed $c$; it is simply because of the fact that the waveform repeats periodically that you can't think of it as a disturbance that originates somewhere. |
I asked this question on StackOverflow, but I think here is a more appropriate place.
This is a problem from
Introduction to algorithms course:
You have an array $a$ with $n$ positive integers (the array doesn't need to be sorted or the elements unique). Suggest an $O(n)$ algorithm to find the largest sum of elements that is divisible by $n$.
Example: $a = [6, 1, 13, 4, 9, 8, 25], n = 7$. The answer is $56$ (with elements $6, 13, 4, 8, 25$)
It's relatively easy to find it in $O(n^2)$ using dynamic programming and storing largest sum with remainder $0, 1, 2,..., n - 1$.
Also, if we restrict attention to a contiguous sequence of elements, it's easy to find the optimal such sequence in $O(n)$ time, by storing partial sums modulo $n$: let $S[i]=a[0]+a[1]+\dots + a[i]$, for each remainder $r$ remember the largest index $j$ such that $S[j] \equiv r \pmod{n}$, and then for each $i$ you consider $S[j]-S[i]$ where $j$ is the index corresponding to $r=S[i] \bmod n$.
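For concreteness, here is a Python sketch of that prefix-sum idea. Note it solves only the easier contiguous variant, not the general (subset) problem the question asks about; since the elements are positive, taking the largest index $j$ for each remainder maximizes the slice sum.

```python
def best_contiguous_sum_divisible(a):
    """Largest sum of a contiguous slice of a divisible by n = len(a),
    via prefix sums modulo n; O(n) time, empty slice counts as 0."""
    n = len(a)
    prefix = [0] * (n + 1)
    for i, v in enumerate(a):
        prefix[i + 1] = prefix[i] + v
    # for each remainder, the largest prefix index with that remainder
    last = {}
    for j in range(n + 1):
        last[prefix[j] % n] = j
    best = 0
    for i in range(n + 1):
        j = last.get(prefix[i] % n, i)
        if j > i:
            best = max(best, prefix[j] - prefix[i])
    return best

print(best_contiguous_sum_divisible([6, 1, 13, 4, 9, 8, 25]))  # 42 = 9 + 8 + 25
```

On the question's array this finds 42 (the slice 9, 8, 25), smaller than the non-contiguous optimum 56, which illustrates why the general case is harder.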
But is there an $O(n)$-time solution for the general case? Any suggestions will be appreciated! I suspect this has something to do with linear algebra, but I'm not sure what exactly.
Alternatively, can this be done in $O(n \log n)$ time? |
We work with a topological space $B$ which is path-connected and locally path-connected.
I have troubles writing down a formal proof for the following proposition:
Prop: Any local coefficient system $A\hookrightarrow E \to B$ is of the form $$ A \hookrightarrow \tilde{B}\times_{\pi_1B} A \to B $$ i.e. is associated to the principal $\pi_1B$-bundle given by the universal cover $\tilde{B}$ of $B$ where the action is given by a homomorphism $\pi_1B \to $Aut$(A)$.
I'm able to show that the natural monodromy action of the fundamental group on $A$ gives a group homomorphism $\pi_1B \to $Aut$(A)$. But I'm not able to show that this implies that $E \cong \tilde{B}\times_{\pi_1B} A $. Can you help me on that?
Note: The author of the book I'm reading says this should be easy to show using a standard covering space argument.
Here are some definitions:
Def: A local coefficient system is a fiber bundle $p:E\to B$ such that:
the fiber is a discrete abelian group $A$;
the structure group $G$ is a subset of Aut$(A)$.
Def: Let $p:P\to B$ be a principal $G$-bundle, where $G$ acts on an abelian group $A$. The Borel construction is the quotient $$ P\times_G A = P\times A \;/ \sim $$ where the equivalence relation $\sim$ is defined by $(p,a) \sim (pg, g^{-1}a)$, for $g\in G$.
The map $q:P\times_G A \to B$ given by $q([p,a]) = p(p)$ gives a fiber bundle with fiber $A$. |
Taking the simplest circuit: battery and resistors.
If I connect lots of resistors in parallel, wouldn't that increase the current to an extent that it would be technically be very similar to shorting the circuit?
Yes. The equivalent resistance for $n$ equal resistors of value $R$ connected in parallel is $R(n)=\frac{R}{n}$. As $n \to \infty$ then $R(n) \to 0$, provided that $R$ is finite.
If you have $N$ resistors in parallel, all of which have a resistance of $R$, the total equivalent resistance will be $$ \left( \frac 1 R + \cdots + \frac 1 R \right)^{-1} = \left( \frac N R \right)^{-1} = \frac R N \;. $$ So yes, if you take a sufficiently large amount of (identical) resistors in parallel, it's the same as not having any resistors at all.
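The limit is easy to see numerically; a minimal sketch:

```python
def parallel(resistances):
    # equivalent resistance of resistors in parallel: 1 / sum(1/R_i)
    return 1.0 / sum(1.0 / r for r in resistances)

for n in (1, 10, 1000, 100000):
    print(n, parallel([100.0] * n))  # 100/n ohms: tends to zero as n grows
```

With 100 kΩ... rather, with one hundred thousand 100 Ω resistors, the equivalent resistance is a single milliohm, effectively a short for most practical sources.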
No, a short circuit needn't be an overload. There are circumstances (like in current transformers) where no load, however small in resistance, is an overload. There are ideal signal sources that are voltage sources (i.e. low impedance), and sources that are current sources (i.e. high output impedance), and sources that are of known impedance (50 ohm RF wiring, and 110 ohm digital differential wiring, depend on that).
When something is an overload, it means that it is outside the specified intended load limits. Sometimes, that means a HIGH resistance is an overload (and a current source will overvoltage and damage the insulation). Low resistance can be an overload if the source is such low impedance that destructive currents flow. Even that, though, isn't an overload if the intention is an explosive squib.
Yes it is. Voltage taking another path is essentially the same as not enough voltage.
Degree $n$: $33$
Transitive number $t$: $46$
Parity: $-1$
Primitive: No
Nilpotency class: $-1$ (not nilpotent)
Generators: (1,11,10,9,8,7,6,5,4,3,2)(12,31,22,24)(13,27,21,28)(14,23,20,32)(15,30,19,25)(16,26,18,29)(17,33), (1,23,6,31)(2,29,5,25)(3,24,4,30)(7,26,11,28)(8,32,10,33)(9,27)(12,20)(13,19)(14,18)(15,17)(21,22)
$|\operatorname{Aut}(F/K)|$: $1$
$|G/N|$ — Galois groups for stem field(s):
2: $C_2$ × 3
4: $C_2^2$
6: $S_3$
12: $D_{6}$
24: $S_4$
48: $S_4\times C_2$
Resolvents shown for degrees $\leq 47$
Degree 3: $S_3$
Degree 11: None
44T293
Siblings are shown with degree $\leq 47$
A number field with this Galois group has no arithmetically equivalent fields.
There are 140 conjugacy classes of elements. Data not shown.
Order: $63888=2^{4} \cdot 3 \cdot 11^{3}$
Cyclic: No
Abelian: No
Solvable: Yes
GAP id: Data not available
Character table: Data not available. |
Newform invariants
Coefficients of the \(q\)-expansion are expressed in terms of a basis \(1,\beta_1,\beta_2,\beta_3,\beta_4\) for the coefficient ring described below. We also show the integral \(q\)-expansion of the trace form.
Basis of coefficient ring in terms of a root \(\nu\) of \(x^{5} - 5x^{3} + 4x - 1\):

\(\beta_{0} = 1\), \(\beta_{1} = \nu\), \(\beta_{2} = \nu^{4} - 4\nu^{2} + 1\), \(\beta_{3} = -\nu^{4} + 5\nu^{2} - 3\), \(\beta_{4} = \nu^{4} + \nu^{3} - 5\nu^{2} - 4\nu + 3\)

Conversely:

\(1 = \beta_{0}\), \(\nu = \beta_{1}\), \(\nu^{2} = \beta_{3} + \beta_{2} + 2\), \(\nu^{3} = \beta_{4} + \beta_{3} + 4\beta_{1}\), \(\nu^{4} = 4\beta_{3} + 5\beta_{2} + 7\)
For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below.
For more information on an embedded modular form you can click on its label.
This newform does not admit any (nontrivial) inner twists.
\(p\)  Sign
\(5\)  \(1\)
\(241\)  \(-1\)
This newform can be constructed as the intersection of the kernels of the following linear operators acting on \(S_{2}^{\mathrm{new}}(\Gamma_0(6025))\):
\(T_{2}^{5} - T_{2}^{4} - 6 T_{2}^{3} + 5 T_{2}^{2} + T_{2} - 1\)
\(T_{3}^{5} - 5 T_{3}^{4} + 2 T_{3}^{3} + 17 T_{3}^{2} - 9 T_{3} - 17\)
I'm a little new to signal processing and I'm trying to wrap my head around convolutions.
I know the definition of convolution for a continuous signal is
$$y(t) = x(t) * h(t) = \int_{-\infty}^{\infty}{x(\tau)h(t-\tau) \, \mathrm{d}\tau}$$
Let's say for example that
$$h(t)=6e^{-t}u(t)$$
and
$$x(t)=e^{-4t}u(t)$$
you end up getting that
$$y(t)=\int_{-\infty}^{\infty}{6e^{-3\tau-t}}d\tau$$
which is divergent. I watched a few YouTube videos about it and they always explain how to do convolutions graphically with two rectangles, does anybody mind explaining how to do it with two exponential functions, or point me to somewhere that does? |
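A discrete Riemann-sum approximation is one way to experiment with convolutions of exponentials numerically. With the unit steps, the integrand is nonzero only for $0 \le \tau \le t$; the closed form $2(e^{-t} - e^{-4t})$ used in the comment is what the confined integral evaluates to, quoted here only as a reference value.

```python
import math

dt = 1e-3
T = 10.0
ts = [k * dt for k in range(int(T / dt))]
h = [6 * math.exp(-t) for t in ts]      # h(t) = 6 e^{-t} u(t)
x = [math.exp(-4 * t) for t in ts]      # x(t) = e^{-4t} u(t)

def conv_at(t_index):
    # Riemann-sum approximation of (x * h)(t) for causal signals
    return sum(x[k] * h[t_index - k] for k in range(t_index + 1)) * dt

t1 = int(1.0 / dt)
print(conv_at(t1))  # compare with 2*(e**-1 - e**-4), about 0.699
```

The numerical value at $t = 1$ sits right on the finite closed form, which shows the integral is not divergent once the unit steps truncate its limits.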
Just to add some variety to the list of answers, I'm going to go against the grain here and say that you can, in an albeit silly way, interpret $dy/dx$ as a ratio of real numbers.
For every (differentiable) function $f$, we can define a function $df(x; dx)$ of two real variables $x$ and $dx$ via $$df(x; dx) = f'(x)\,dx.$$Here, $dx$ is just a real number, and no more. (In particular, it is not a differential 1-form, nor an infinitesimal.) So, when $dx \neq 0$, we can write:$$\frac{df(x;dx)}{dx} = f'(x).$$
All of this, however, should come with a few remarks.
It is clear that these notations above do not constitute a definition of the derivative of $f$. Indeed, we needed to know what the derivative $f'$ meant before defining the function $df$. So in some sense, it's just a clever choice of notation.
But if it's just a trick of notation, why do I mention it at all? The reason is that in higher dimensions, the function $df(x;dx)$ actually becomes the focus of study, in part because it contains information about all the partial derivatives.
To be more concrete, for multivariable functions $f\colon R^n \to R$, we can define a function $df(x;dx)$ of two n-dimensional variables $x, dx \in R^n$ via$$df(x;dx) = df(x_1,\ldots,x_n; dx_1, \ldots, dx_n) = \frac{\partial f}{\partial x_1}dx_1 + \ldots + \frac{\partial f}{\partial x_n}dx_n.$$
Notice that this map $df$ is
linear in the variable $dx$. That is, we can write:$$df(x;dx) = (\frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n})\begin{pmatrix} dx_1 \\ \vdots \\ dx_n \\\end{pmatrix}= A(dx),$$where $A$ is the $1\times n$ row matrix of partial derivatives.
In other words, the function $df(x;dx)$ can be thought of as a linear function of $dx$, whose matrix has variable coefficients (depending on $x$).
So for the $1$-dimensional case, what is really going on is a trick of
dimension. That is, we have the variable $1\times1$ matrix ($f'(x)$) acting on the vector $dx \in R^1$ -- and it just so happens that vectors in $R^1$ can be identified with scalars, and so can be divided.
Finally, I should mention that, as long as we are thinking of $dx$ as a real number, mathematicians multiply and divide by $dx$ all the time -- it's just that they'll usually use another notation. The letter "$h$" is often used in this context, so we usually write $$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h},$$rather than, say,$$f'(x) = \lim_{dx \to 0} \frac{f(x+dx) - f(x)}{dx}.$$My guess is that the main aversion to writing $dx$ is that it conflicts with our notation for differential $1$-forms.
EDIT: Just to be even more technical, and at the risk of being confusing to some, we really shouldn't even be regarding $dx$ as an element of $R^n$, but rather as an element of the tangent space $T_xR^n$. Again, it just so happens that we have a canonical identification between $T_xR^n$ and $R^n$ which makes all of the above okay, but I like distinction between tangent space and euclidean space because it highlights the different roles played by $x \in R^n$ and $dx \in T_xR^n$. |
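To make the one-dimensional picture concrete, here is a small numerical check, with $f = \sin$ chosen purely for illustration, that $df(x;dx) = f'(x)\,dx$ matches the true increment $f(x+dx) - f(x)$ up to an error that is $o(dx)$:

```python
import math

def df(f_prime, x, dx):
    # the two-variable function df(x; dx) = f'(x) * dx from the answer
    return f_prime(x) * dx

x, dx = 1.0, 1e-6
approx = df(math.cos, x, dx)          # f = sin, so f' = cos
exact = math.sin(x + dx) - math.sin(x)
print(abs(approx - exact) < 1e-10)    # True: linearization error is o(dx)
```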
I'm probably missing something stupid. In the paper mentioned above (see link, pg. 4) the hydrodynamical instability of the disk is reviewed using the Navier-Stokes equation.
Having the unperturbed velocity field in cylindrical coordinates $\mathbf{u_0}=u_0(R)\mathbf{\hat{\phi}}$, and the small perturbation $\mathbf{u}$ in the $\mathbf{\hat{R}}$ and $\mathbf{\hat{\phi}}$ directions, one can write the linearized Navier-Stokes equation (neglecting the perturbations of the density and gravitational potential) as: $$ \frac{\partial \mathbf{u}}{\partial t}+\left(\mathbf{u_0}\cdot\nabla\right)\mathbf{u}+\left(\mathbf{u}\cdot\nabla\right)\mathbf{u_0} = \mathbf{0}. $$ So far so good. Now breaking this equation into components, one can write:
$\mathbf{\hat{R}}$ direction
$$ \frac{\partial u_R}{\partial t}+\frac{u_0}{R}\partial_\phi u_R=0 $$
$\mathbf{\hat{\phi}}$ direction$$\frac{\partial u_\phi}{\partial t}+\frac{u_0}{R}\partial_\phi u_\phi+u_ru_0'=0$$
The problem is, that in the original paper the authors get extra terms:
$\mathbf{\hat{R}}$ direction
$$ \frac{\partial u_R}{\partial t}+\frac{u_0}{R}\partial_\phi u_R\mathbin{\color{red}{-2\frac{u_0}{R}u_\phi}}=0 $$
$\mathbf{\hat{\phi}}$ direction$$\frac{\partial u_\phi}{\partial t}+\frac{u_0}{R}\partial_\phi u_\phi+u_ru_0'\mathbin{\color{red}{+\frac{u_0}{R}u_R}}=0$$
I must be missing something. Can you please point me to it?
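For concreteness, the advection terms can be expanded symbolically using the standard component formulas for $(\mathbf{A}\cdot\nabla)\mathbf{B}$ in cylindrical coordinates (the sketch and names below are mine; the only textbook ingredient is the pair of curvature terms $-A_\phi B_\phi/R$ and $+A_\phi B_R/R$):

```python
import sympy as sp

R, phi, t = sp.symbols('R phi t', positive=True)
u0 = sp.Function('u0')(R)              # unperturbed azimuthal profile
uR = sp.Function('uR')(R, phi, t)      # perturbation, R component
up = sp.Function('uphi')(R, phi, t)    # perturbation, phi component

def advect(A, B):
    # (A . grad) B in cylindrical coordinates, for planar z-independent
    # fields given as (R component, phi component) pairs.  The last term
    # on each line is the curvature contribution.
    AR, Ap = map(sp.sympify, A)
    BR, Bp = map(sp.sympify, B)
    comp_R = AR*sp.diff(BR, R) + (Ap/R)*sp.diff(BR, phi) - Ap*Bp/R
    comp_p = AR*sp.diff(Bp, R) + (Ap/R)*sp.diff(Bp, phi) + Ap*BR/R
    return sp.simplify(comp_R), sp.simplify(comp_p)

rA, pA = advect((0, u0), (uR, up))   # (u0 . grad) u
rB, pB = advect((uR, up), (0, u0))   # (u . grad) u0

print(sp.expand(rA + rB))  # R-hat components of the advection terms
print(sp.expand(pA + pB))  # phi-hat components
```

The output contains $\frac{u_0}{R}\partial_\phi u_R - 2\frac{u_0}{R}u_\phi$ in the $\mathbf{\hat{R}}$ direction and $\frac{u_0}{R}\partial_\phi u_\phi + u_R u_0' + \frac{u_0}{R}u_R$ in the $\mathbf{\hat{\phi}}$ direction, i.e. exactly the red terms, which suggests the discrepancy comes from the curvature terms of the advection operator.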
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Revista Matemática Iberoamericana
Volume 15, Issue 3, 1999, pp. 429–449 DOI: 10.4171/RMI/261
Published online: 1999-12-31
On radial behaviour and balanced Bloch functions
Juan J. Donaire (1) and Christian Pommerenke (2)
(1) Universitat Autònoma de Barcelona, Bellaterra, Barcelona, Spain
(2) Technische Universität Berlin, Germany
A Bloch function $g$ is a function analytic in the unit disk such that $(1-|z|^2)|g'(z)|$ is bounded. First we generalize the theorem of Rohde that, for every "bad" Bloch function, $g(r \zeta)$ (as $r \longrightarrow 1$) follows any prescribed curve at a bounded distance for $\zeta$ in a set of Hausdorff dimension almost one. Then we introduce balanced Bloch functions. They are characterized by the fact that $|g'(z)|$ does not vary much on each circle $\lbrace |z| = r\rbrace$ except for small exceptional arcs. We show e.g. that $$\int^1_0|g'(r \zeta)|dr< \infty$$ holds either for all $\zeta \in \mathbb T$ or for none.
Donaire, Juan J.; Pommerenke, Christian: On radial behaviour and balanced Bloch functions. Rev. Mat. Iberoam. 15 (1999), 429–449. doi: 10.4171/RMI/261
It seems to me that the one part that is difficult to transfer to general coherent sheaves is given only one sentence: "Because we have such a common $m$, we get as before an injective morphism from the functor $\mathfrak{Quot}^{\Phi,L}_{E/X/S}$ into the Grassmannian functor $\mathfrak{Grass}(\pi_\ast E(r), \Phi(r))$." But I see at least two serious difficulties with this proof: First, why is $R^1 \pi_{T,\ast} \left( \mathcal{G}_t (r) \right) = 0$? Second, even if that is true, the values on $T$ are quotients of $\pi_{T,\ast} E_T$, not of $(\pi_\ast E)_T$, where the latter is the pullback of the pushforward. These two sheaves are the same for some $r$ by Theorem 5.4 (3.1 in the standalone paper), but that $r$ might depend on $T$.
Edit: The statement of the theorem that has a more or less thorough proof in the book "Fundamental algebraic geometry: Grothendieck's FGA explained", part 5, by Nitin Nitsure (it also exists as a separate paper [Nitsure, Nitin (2005), "Construction of Hilbert and Quot schemes", Fundamental algebraic geometry, Math. Surveys Monogr. 123, Providence, R.I.: Amer. Math. Soc., pp. 105–137, arXiv:math/0504590, MR 2223407]), is as follows (theorem 5.2 in the paper, 5.15 in the book):
Let $S$ be a Noetherian scheme, $X$ a closed subscheme in $\mathbb{P}(V)$ for some vector bundle $V$, and let $\pi$ denote the structural morphism $X \to S$. Let $E$ be a coherent factor-sheaf of $\pi^\ast(W)(\nu)$ where $W$ is an $S$-vector bundle and $\nu$ is an integer. Then for any integer-valued polynomial $\Phi$, the functor $\mathfrak{Quot}_{E/X/S}^{\Phi,\mathcal{O}(1)}$ is representable by a closed subscheme of the Grassmannian $Gr(W \otimes Sym^r(V),\Phi(r))$ for large enough $r$.
It differs from Grothendieck's original statement in that, in general, not all projective schemes embed into projectivisations of vector bundles (if there is an ample bundle on $S$, they do). Also, the sheaf $E$ is just any coherent sheaf in Grothendieck's version of the theorem.
Because I don't know what the author meant when he wrote that the part that I pointed out generalises easily, I guess I will just write down that part of the proof (it is implicit in the text, I had to guess the specifics myself) and note where I see difficulty.
First of all, $X$ is obviously replaced with $\mathbb{P}(V)$. Then the factor-sheaf of $\pi^\ast (W) (\nu)$ is replaced by $\pi^\ast(W)$ - this induces a closed embedding on the $\mathfrak{Quot}$ sheaves. Here I don't see how to generalise - to express an arbitrary coherent sheaf as a quotient of a vector bundle. So, over an $S$-scheme $T$, on $\mathbb{P}(V) \times_S T$ there is an exact sequence $0 \to \mathcal{G} \to E_T \to \mathcal{F} \to 0.$ And what seems crucial is that all three of these sheaves are flat over $T$. On fibers $s$, $\mathcal{G}_s$ is a subsheaf of a free sheaf $E_s$, and we know the Hilbert polynomial of $\mathcal{G}$, so Mumford's theorem (5.3 in the book, 2.3 in the paper) is used to show that there exists an $m$ which does not depend on $s$ or the particular $\mathcal{F}$ such that all three sheaves in the exact sequence on the fiber are (Castelnuovo-Mumford) $m$-regular. (Perhaps for general $E$ one can emulate the proof of Mumford's theorem, uniformly bounding the dimensions of global sections of $E$ on subspaces in fibers.) Then after twisting by $r$ for $r \geq m$, the three sheaves have no higher cohomology on fibers. Then by the Flatness and Base Change Theorem (strongly using that the sheaves are flat over $S$), the higher cohomology vanishes. They are also relatively globally generated (maybe after a twist by $n$); I have thought of the following argument: they become "relatively $0$-regular" after that twist, so they are relatively globally generated. Since $R^1\pi_{T,\ast}(\mathcal{G}(r))=0$, we get an exact sequence $0 \to \pi_{T,\ast}(\mathcal{G}(r)) \to \pi_{T,\ast}(E(r)) \to \pi_{T,\ast}(\mathcal{F}(r)) \to 0$, and thus a morphism to the Grassmannian. The global generation (of $\mathcal{G}(r)$) is used to show that the morphism is injective and to find its image.
The statement of Mumford's theorem is as follows: There exists a polynomial with integer coefficients $F_{d,n}$ in $n+1$ variables such that for any coherent subsheaf $\mathcal{F}$ of $\oplus^d \mathcal{O}_{\mathbb{P}^n_k}$, $\mathcal{F}$ is $F_{d,n}(a_0, \ldots, a_n)$-regular, where $a_i$ are the coordinates of the Hilbert polynomial of $\mathcal{F}$ in the binomial basis.
I don't know whether this question would have been better suited to math.StackExchange.
Consider the operation $Double$ on words over an alphabet $\Sigma$, which inserts after each character a copy of that character. Thus, $D(ab) = aabb$, $D(abaab) = aabbaaaabb$, etc. We had to prove that regular expressions are closed under this operation, and it was a success.
We now have to prove this property for automata. In other words, we have to prove that if we have a language $L$ such that there exists an automaton $A$ that recognizes it ($L=L(A)$), there also exists an automaton $A'$ that recognizes the language $Double(L)$.
I shouldn't use the equivalence between automata and regular expressions.
Goal: I therefore deduce that I have to prove that automata are closed under the Double operation. Hypothesis: there exists a language $L$ such that there exists an automaton $A$ which recognizes it. Proof attempt: $D(L)$ being a language, there must be an automaton associated with it.
But:
Does every language have an associated automaton? Isn't this proof too short, or taking its goal as a hypothesis?
Proof attempt n°2
Following Rick Decker's advice, here is the second attempt to prove that if we have a language $L$ recognized by an automaton $A$ (i.e., $L=L(A)$), there also exists an automaton $A'$ that recognizes the language $Double(L)$:
To prove it, we are going to construct an automaton $A'$ such that $L(A')=D(L)$.
The idea is that the automaton reads the input string $w$ from left to right. After reading a character, it moves to a waiting state and checks whether the next character is the same. If it is, we return to the final state; otherwise, we go to the bin state and stay there.
$Q=\{q_0, q_1, q_2, p\}$, $q_1,q_2$ are waiting states, $p$ is a bin state. $\Sigma$ is the alphabet. For the example it is : $\{a,b\}$. $\delta : Q × \Sigma → Q$ is a function, called the transition function,
$$\begin{array}{c|cc} & a & b\\ \hline q_0 & q_2 & q_1\\ q_1 & p & q_0\\ q_2 & q_0 & p\\ p & p & p\\ \hline \end{array}$$
$q_0$ is the initial state, and $F=\{q_0\}$ is the set of final states. The final state coincides with the initial state because the empty word is accepted by the automaton.
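As a sanity check of the transition table above (the simulation harness and names are mine), a short script confirms that this automaton accepts doubled words over $\{a,b\}$ and rejects non-doubled ones:

```python
# transition table of the automaton described above, alphabet {a, b};
# q1, q2 are the waiting states and p is the bin state
delta = {
    ('q0', 'a'): 'q2', ('q0', 'b'): 'q1',
    ('q1', 'a'): 'p',  ('q1', 'b'): 'q0',
    ('q2', 'a'): 'q0', ('q2', 'b'): 'p',
    ('p',  'a'): 'p',  ('p',  'b'): 'p',
}

def accepts(word, start='q0', final=frozenset({'q0'})):
    # run the DFA on the word and test membership in the final states
    state = start
    for ch in word:
        state = delta[(state, ch)]
    return state in final

def double(word):
    # the Double operation: duplicate each character in place
    return ''.join(ch + ch for ch in word)

print(accepts(double('abaab')))  # 'aabbaaaabb' is accepted
print(accepts('ab'))             # not a doubled word, rejected
```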
okay, i'm gonna expound a little.
quoting (except for any typos that may result) from the 1989 text of O&S (Introduction to Chapter 8, The Discrete Fourier Transform, p 514):
Although several points of view can be taken toward the derivation and interpretation of the DFT representation of a finite-duration sequence, we have chosen to base our presentation on the relationship between periodic sequences and finite-length sequences. We will begin by considering the Fourier series representation of periodic sequences. While this representation is important in its own right, we are most often interested in the application of Fourier series results to the representation of finite-length sequences. We accomplish this by constructing a periodic sequence for which each period is identical to the finite-length sequence. As we will see, the Fourier series representation of the periodic sequence corresponds to the DFT of the finite-length sequence. Thus our approach is to define the Fourier series representation for periodic sequences and to study the properties of such representations. Then we repeat essentially the same derivations assuming that the sequence to be represented is a finite-length sequence. This approach to the DFT emphasizes the fundamental inherent periodicity of the DFT representation and ensures that this periodicity is not overlooked in applications of the DFT.
section 8.1, p 516 on the DFS:
Eq. (8.11) $\quad \tilde{X}[k] = \sum\limits^{N-1}_{n=0} \tilde{x}[n] \ e^{-j2\pi n k/N} $
Eq. (8.12) $\quad \tilde{x}[n] = \frac{1}{N} \sum\limits^{N-1}_{k=0} \tilde{X}[k] \ e^{+j2\pi n k/N} $
regarding the DFS, $\tilde{x}[n]$ (with the tilde) is defined to be periodic with period $N$ such that $$ \tilde{x}[n+N] = \tilde{x}[n] \quad \forall n $$ and $\tilde{X}[k]$ turns out to also be periodic with period $N$ (so $ \tilde{X}[k+N] = \tilde{X}[k] \quad \forall k $)
later, in section 8.6, p 532 on the DFT:
Eq. (8.59) $\quad X[k] = \begin{cases} \sum\limits^{N-1}_{n=0} x[n] \ e^{-j2\pi n k/N}, & 0 \le k \le N-1 \\ 0, & \text{otherwise} \end{cases} $
Eq. (8.60) $\quad x[n] = \begin{cases} \frac{1}{N} \sum\limits^{N-1}_{k=0} X[k] \ e^{+j2\pi n k/N}, & 0 \le n \le N-1 \\ 0, & \text{otherwise} \end{cases} $
Generally the DFT analysis and synthesis equations are written as
Eq. (8.61) $\quad X[k] = \sum\limits^{N-1}_{n=0} x[n] \ e^{-j2\pi n k/N} $
Eq. (8.62) $\quad x[n] = \frac{1}{N} \sum\limits^{N-1}_{k=0} X[k] \ e^{+j2\pi n k/N} $
In recasting Eqs. (8.11) and (8.12) in the form of Eqs. (8.61) and (8.62) for the finite-duration sequences, we have not eliminated the inherent periodicity. As with the DFS, the DFT $X[k]$ is equal to samples of the periodic Fourier transform $X(e^{j\omega})$, and if Eq. (8.62) is evaluated for values of $n$ outside the interval $0 \le n \le N-1$, the result will not be zero but rather a periodic extension of $x[n]$. The inherent periodicity is always present. Sometimes it causes us difficulty and sometimes we can exploit it, but to totally ignore it is to invite trouble.
so the first obvious thing i would say is that the tildes used for the DFS (to explicitly depict a periodic sequence) are symbols and
still do not change any mathematical fact. now i know some folks will point to the Eqs. (8.59) and (8.60) definition of the DFT that has truncated (to $0$) values outside of the interval $0 \le n,k \le N-1$.
however, that definition is contrived. it could just as well be expressed as
$\quad X[k] = \begin{cases}\sum\limits^{N-1}_{n=0} x[n] \ e^{-j2\pi n k/N}, & 0 \le k \le N-1 \\5, & \text{otherwise}\end{cases} $
$\quad x[n] = \begin{cases}\frac{1}{N} \sum\limits^{N-1}_{k=0} X[k] \ e^{+j2\pi n k/N}, & 0 \le n \le N-1 \\ 5, & \text{otherwise} \end{cases} $
or
$\quad X[k] = \begin{cases}\sum\limits^{N-1}_{n=0} x[n] \ e^{-j2\pi n k/N}, & 0 \le k \le N-1 \\5000, & \text{otherwise}\end{cases} $
$\quad x[n] = \begin{cases}\frac{1}{N} \sum\limits^{N-1}_{k=0} X[k] \ e^{+j2\pi n k/N}, & 0 \le n \le N-1 \\ 5000, & \text{otherwise} \end{cases} $
or
$\quad X[k] = \begin{cases}\sum\limits^{N-1}_{n=0} x[n] \ e^{-j2\pi n k/N}, & 0 \le k \le N-1 \\\text{the man on the moon}, & \text{otherwise}\end{cases} $
$\quad x[n] = \begin{cases}\frac{1}{N} \sum\limits^{N-1}_{k=0} X[k] \ e^{+j2\pi n k/N}, & 0 \le n \le N-1 \\ \text{and his hot girlfriend}, & \text{otherwise} \end{cases} $
because that $0$ in that contrived DFT definition will
never ever be used in any theorems regarding the DFT. when that contrived definition is used for the DFT, then when using any DFT theorems to do any real work (other than the linearity and scaling by constant theorems), then one must use modulo arithmetic in the arguments of $x[n]$ or $X[k]$. and using that modulo arithmetic is explicitly periodically extending the sequence.
so (sorta responding to hotpaw) there are two or three processes that you should think about when using the DFT on a real signal.
1. the sampling process. what happens to the spectrum of $x(t)$ when you sample it with a "dirac comb" or whatever you want to call the sampling function?
2. windowing to finite length. what happens when you window either $x(t)$ or the sampled version, $x[n]$, with a rectangular window of length $N$?
3. periodic extension. what happens when you periodically extend it by repeatedly shifting the windowed $x[n]$ by $N$ samples and overlap and add it?

deal with each step by itself.
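here's a little numpy check of the periodic extension claim (the helper names are mine): evaluating the synthesis sum of Eq. (8.62) at an $n$ outside $0 \le n \le N-1$ returns the periodic extension of $x[n]$, not zero:

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
x = rng.standard_normal(N)
X = np.fft.fft(x)   # same sign convention as the analysis Eq. (8.61)

def synth(n):
    # (1/N) sum_k X[k] e^{+j 2 pi n k / N}, evaluated for ANY integer n
    k = np.arange(N)
    return np.sum(X * np.exp(2j * np.pi * n * k / N)).real / N

# inside the interval the sum reproduces x[n]; outside it gives x[n mod N]
print(synth(3), x[3])        # same value
print(synth(11), x[11 % N])  # n = 11 is "outside", result is x[3] again
```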
Let's look at a simpler problem. Suppose you have the situation depicted in the figure below:
Then, given the angle $\alpha$, the coordinates of the point $C''$ are:
$$C''_x = r\cos\alpha\qquad\mbox{and}\qquad C''_y = r\sin\alpha$$
where $r$ is the radius of the circle.
Now let's look at a slightly more complicated problem, depicted below:
This is very similar to the situation above. In fact,
$$C'_x = r\cos(\alpha+\beta)\qquad\mbox{and}\qquad C'_y = r\sin(\alpha+\beta)$$
By using the trigonometric relations $\sin(\alpha+\beta) = \sin\alpha\cos\beta + \sin\beta\cos\alpha$ and $\cos(\alpha+\beta) = \cos\alpha\cos\beta - \sin\alpha\sin\beta$, we can write the above as follows:
$$C'_x = r\cos\alpha\cos\beta - r\sin\alpha\sin\beta\qquad\mbox{and}\qquad C'_y = r\sin\alpha\cos\beta + r\sin\beta\cos\alpha$$
But, wait... By looking at the previous situation and replacing $C''$ with $B'$ and $\alpha$ with $\beta$, we see that
$$B'_x = r\cos\beta\qquad\mbox{and}\qquad B'_y = r\sin\beta$$
Therefore, we can write
$$C'_x = B'_x\cos\alpha - B'_y\sin\alpha\qquad\mbox{and}\qquad C'_y = B'_x\sin\alpha + B'_y\cos\alpha$$
But what you want is this, instead:
Well, we can just move everything rigidly by the vector $-\vec{OA}$ so that $A$ is now the origin of the coordinate system and we get the situation just above. This amounts to subtracting $A$ from both $B$ and $C$ to get $B'$ and $C'$ in the above, and we find
$$C_x - A_x = (B_x-A_x)\cos\alpha - (B_y-A_y)\sin\alpha$$
$$C_y - A_y = (B_x-A_x)\sin\alpha + (B_y-A_y)\cos\alpha$$
Then, finally,
$$C_x = A_x + (B_x-A_x)\cos\alpha - (B_y-A_y)\sin\alpha$$
$$C_y = A_y + (B_x-A_x)\sin\alpha + (B_y-A_y)\cos\alpha$$
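Putting the final pair of formulas into code (a direct transcription; the function name is mine):

```python
import math

def rotate(C, A, alpha):
    # rotate point C about point A by angle alpha (radians, counterclockwise),
    # using the final formulas above
    cx = A[0] + (C[0] - A[0]) * math.cos(alpha) - (C[1] - A[1]) * math.sin(alpha)
    cy = A[1] + (C[0] - A[0]) * math.sin(alpha) + (C[1] - A[1]) * math.cos(alpha)
    return cx, cy

# rotating (2, 1) about A = (1, 1) by 90 degrees should land on (1, 2)
print(rotate((2, 1), (1, 1), math.pi / 2))
```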
The Annals of Probability, Volume 29, Number 2 (2001), 577–623.
Special Invited Paper: Geodesics and Spanning Trees for Euclidean First-Passage Percolation
Abstract
The metric $D_{\alpha}(q,q')$ on the set $Q$ of particle locations of a homogeneous Poisson process on $\mathbb{R}^d$, defined as the infimum of $(\sum_i |q_i - q_{i+1}|^{\alpha})^{1/\alpha}$ over sequences in $Q$ starting with $q$ and ending with $q'$ (where $|\cdot|$ denotes Euclidean distance), has nontrivial geodesics when $\alpha>1$. The cases $1< \alpha<\infty$ are the Euclidean first-passage percolation (FPP) models introduced earlier by the authors, while the geodesics in the case $\alpha = \infty$ are exactly the paths from the Euclidean minimal spanning trees/forests of Aldous and Steele. We compare and contrast results and conjectures for these two situations. New results for $1 < \alpha < \infty$ (and any $d$) include inequalities on the fluctuation exponents for the metric $(\chi \le 1/2)$ and for the geodesics $(\xi \le 3/4)$ in strong enough versions to yield conclusions not yet obtained for lattice FPP: almost surely, every semi-infinite geodesic has an asymptotic direction and every direction has a semi-infinite geodesic (from every $q$). For $d = 2$ and $2 \le \alpha < \infty$, further results follow concerning spanning trees of semi-infinite geodesics and related random surfaces.
Article information
Source: Ann. Probab., Volume 29, Number 2 (2001), 577–623.
Dates: First available in Project Euclid: 21 December 2001
Permanent link to this document: https://projecteuclid.org/euclid.aop/1008956686
Digital Object Identifier: doi:10.1214/aop/1008956686
Mathematical Reviews number (MathSciNet): MR1849171
Subjects: Primary: 60K35: Interacting random processes; statistical mechanics type models; percolation theory [See also 82B43, 82C43]; 60G55: Point processes. Secondary: 82D30: Random media, disordered materials (including liquid crystals and spin glasses); 60F10: Large deviations
Citation
Howard, C. Douglas; Newman, Charles M. Special Invited Paper: Geodesics and Spanning Trees for Euclidean First-Passage Percolation. Ann. Probab. 29 (2001), no. 2, 577–623. doi:10.1214/aop/1008956686. https://projecteuclid.org/euclid.aop/1008956686
Probability Seminar Spring 2019 Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title:
Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University
Title:
When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title:
Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$.
February 14, Timo Seppäläinen, UW-Madison
Title:
Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH
Title:
On the centered maximum of the Sine beta process
Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.
Title: Quantitative homogenization in a balanced random environment
Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).
Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue
Title:
Functional Limit Laws for Recurrent Excited Random Walks
Abstract:
Excited random walks (also called cookie random walks) are a model for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 21, Spring Break, No seminar
March 28, Shamgar Gurevitch, UW-Madison
Title:
Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the
character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant
rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
April 4, Philip Matchett Wood, UW-Madison
Title:
Outliers in the spectrum for products of independent random matrices
Abstract: For fixed positive integers m, we consider the product of m independent n by n random matrices with iid entries as in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke.
April 11, Eviatar Procaccia, Texas A&M
Title: Stabilization of Diffusion Limited Aggregation in a Wedge.
Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows us to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems.
April 18, Andrea Agazzi, Duke
Title:
Large Deviations Theory for Chemical Reaction Networks
Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework to have a straightforward application of large deviation theory. This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz, nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence these results also allow for the estimation of transition times between metastable states of this class of processes.
April 25, Kavita Ramanan, Brown
Title:
Beyond Mean-Field Limits: Local Dynamics on Sparse Graphs
Abstract: Many applications can be modeled as a large system of homogeneous interacting particle systems on a graph in which the infinitesimal evolution of each particle depends on its own state and the empirical distribution of the states of neighboring particles. When the graph is a clique, it is well known that the dynamics of a typical particle converges in the limit, as the number of vertices goes to infinity, to a nonlinear Markov process, often referred to as the McKean-Vlasov or mean-field limit. In this talk, we focus on the complementary case of scaling limits of dynamics on certain sequences of sparse graphs, including regular trees and sparse Erdos-Renyi graphs, and obtain a novel characterization of the dynamics of the neighborhood of a typical particle. This is based on various joint works with Ankan Ganguly, Dan Lacker and Ruoyu Wu.
Friday, April 26, Colloquium, Van Vleck 911 from 4pm to 5pm, Kavita Ramanan, Brown
Title:
Tales of Random Projections
Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research. Classical theorems in probability theory such as the central limit theorem and Cramer’s theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems.
May 7, Tuesday, Van Vleck 901, 2:25pm, Duncan Dauvergne (Toronto)
Higher-dimensional Fujimura
An upper bound can be found by counting tetrahedrons. For a given n the tetrahedral grid has n(n+1)(n+2)(n+3)/24 tetrahedrons. Each point on the grid is part of n tetrahedrons, so at least (n+1)(n+2)(n+3)/24 points must be removed to remove all tetrahedrons. This gives an upper bound of (n+1)(n+2)(n+3)/8.
Latest revision as of 07:28, 14 April 2009
Let [math]\overline{c}^\mu_{n,4}[/math] be the size of the largest subset of the tetrahedral grid:
[math] \{ (a,b,c,d) \in {\Bbb Z}_+^4: a+b+c+d=n \}[/math]
which contains no tetrahedrons [math](a+r,b,c,d), (a,b+r,c,d), (a,b,c+r,d), (a,b,c,d+r)[/math] with [math]r \gt 0[/math]; call such sets
tetrahedron-free.
These are the currently known values of the sequence:
n                                    0  1  2  3   4   5   6   7
[math]\overline{c}^\mu_{n,4}[/math]  1  3  7  14  24  37  55  78
n=0
[math]\overline{c}^\mu_{0,4} = 1[/math]:
There are no tetrahedrons, so no removals are needed.
n=1
[math]\overline{c}^\mu_{1,4} = 3[/math]:
Removing any one point on the grid will leave the set tetrahedron-free.
n=2
[math]\overline{c}^\mu_{2,4} = 7[/math]:
Suppose the set could be made tetrahedron-free with two removals. One of (2,0,0,0), (0,2,0,0), (0,0,2,0), and (0,0,0,2) must be removed. Removing any one of the four leaves three tetrahedrons to remove. However, no single point lies in all three of those tetrahedrons, therefore there must be more than two removals.
Three removals (for example (0,0,0,2), (1,1,0,0) and (0,0,2,0)) leave the set tetrahedron-free with a set size of 7.
General n
A lower bound of 2(n-1)(n-2) can be obtained by keeping all points with exactly one coordinate equal to zero.
You get a non-constructive quadratic lower bound for the quadruple problem by taking a random subset of size [math]cn^2[/math]. If c is not too large, linearity of expectation shows that the expected number of tetrahedrons in such a set is less than one, and so there must be a set of that size with no tetrahedrons. However, [math] c = (24^{1/4})/6 + o(1/n)[/math], which is lower than the previous lower bound.
With coordinates (a,b,c,d), consider the value a+2b+3c. This forms an arithmetic progression of length 4 for any of the tetrahedrons we are looking for. So we can take subsets of the form a+2b+3c=k, where k comes from a set with no such arithmetic progressions. [This paper, Corollary 1] gives this formula for a lower bound on the proportion of retained points: [math]C\frac{(\log N)^{1/4}}{2^{\sqrt{8 \log N}}}[/math], for some absolute constant C.
An upper bound can be found by counting tetrahedrons. For a given n the tetrahedral grid has n(n+1)(n+2)(n+3)/24 tetrahedrons. Each point on the grid is part of n tetrahedrons, so (n+1)(n+2)(n+3)/24 points must be removed to remove all tetrahedrons. This gives an upper bound of (n+1)(n+2)(n+3)/8 remaining points.
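The small values in the table, and the tetrahedron count used in the upper-bound argument, can be sanity-checked by brute force. This is a sketch in Python (the exhaustive search over removals is only feasible for n ≤ 2):

```python
from itertools import combinations

def grid(n):
    # all lattice points (a,b,c,d) with nonnegative coordinates summing to n
    return [(a, b, c, n - a - b - c) for a in range(n + 1)
            for b in range(n + 1 - a) for c in range(n + 1 - a - b)]

def tetrahedrons(n):
    # all tetrahedrons (a+r,b,c,d),(a,b+r,c,d),(a,b,c+r,d),(a,b,c,d+r), r > 0;
    # the base point (a,b,c,d) must have coordinate sum n - r
    tets = []
    for r in range(1, n + 1):
        for (a, b, c, d) in grid(n - r):
            tets.append(((a + r, b, c, d), (a, b + r, c, d),
                         (a, b, c + r, d), (a, b, c, d + r)))
    return tets

def max_tetrahedron_free(n):
    # find the smallest number of removals that hits every tetrahedron
    pts, tets = grid(n), tetrahedrons(n)
    for k in range(len(pts) + 1):
        for removed in map(set, combinations(pts, k)):
            if all(any(p in removed for p in t) for t in tets):
                return len(pts) - k

# matches the table above for n = 0, 1, 2
print([max_tetrahedron_free(n) for n in range(3)])   # [1, 3, 7]
# the tetrahedron count matches n(n+1)(n+2)(n+3)/24
print(len(tetrahedrons(5)) == 5 * 6 * 7 * 8 // 24)   # True
```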
Update (May 2014): Please note that these instructions are outdated. While it is still possible (and in fact easier) to blog with the Notebook, the exact process has changed now that IPython has an official conversion framework. However, Blogger isn't the ideal platform for that (though it can be made to work). If you are interested in using the Notebook as a tool for technical blogging, I recommend looking at Jake van der Plas' Pelican support or Damián Avila's support in Nikola. Update: made a full github repo for blog-as-notebooks, and updated instructions on how to more easily configure everything and use the newest nbconvert for a more streamlined workflow.
Since the notebook was introduced with IPython 0.12, it has proved to be very popular, and we are seeing great adoption of the tool and the underlying file format in research and education. One persistent question we've had since the beginning (even prior to its official release) was whether it would be possible to easily write blog posts using the notebook. The combination of easy editing in markdown with the notebook's ability to contain code, figures and results, makes it an ideal platform for quick authoring of technical documents, so being able to post to a blog is a natural request.
Today, in answering a query about this from a colleague, I decided to revisit the status of our conversion pipeline, and I'm happy to report that with a bit of elbow-grease, at least on Blogger things work pretty well!
The purpose of this post is to quickly provide a set of instructions on how I got it to work, and to test things out. Please note: this requires code that isn't quite ready for prime-time and is still under heavy development, so expect some assembly.
Converting your notebook to html with nbconvert
The first thing you will need is our nbconvert tool that converts notebooks across formats. The README file in the repo contains the requirements for nbconvert (basically python-markdown, pandoc, docutils from SVN and pygments).
Once you have nbconvert installed, you can convert your notebook to Blogger-friendly html with:
nbconvert -f blogger-html your_notebook.ipynb
This will leave two files on your computer, one named
your_notebook.html and one named
your_notebook_header.html; it might also create a directory called
your_notebook_files if needed for ancillary files. The first file will contain the body of your post and can be pasted wholesale into the Blogger editing area. The second file contains the CSS and Javascript material needed for the notebook to display correctly; you should only need to use this once to configure your Blogger setup (see below):
# Only one notebook so far
(master)longs[blog]> ls
120907-Blogging with the IPython Notebook.ipynb  fig/  old/
# Now run the conversion:
(master)longs[blog]> nbconvert.py -f blogger-html 120907-Blogging\ with\ the\ IPython\ Notebook.ipynb
# This creates the header and html body files
(master)longs[blog]> ls
120907-Blogging with the IPython Notebook_header.html  fig/
120907-Blogging with the IPython Notebook.html         old/
120907-Blogging with the IPython Notebook.ipynb
Configuring your Blogger blog to accept notebooks
The notebook uses a lot of custom CSS for formatting input and output, as well as Javascript from MathJax to display mathematical notation. You will need all this CSS and the Javascript calls in your blog's configuration for your notebook-based posts to display correctly:
Once authenticated, go to your blog's overview page by clicking on its title. Click on templates (left column) and customize using the Advanced options. Scroll down the middle column until you see an "Add CSS" option. Copy the entire contents of the
_header file into the CSS box.
That's it, and you shouldn't need to do anything else as long as the CSS we use in the notebooks doesn't drastically change. This customization of your blog needs to be done only once.
While you are at it, I recommend you change the width of your blog so that cells have enough space for clean display; in experimenting I found out that the default template was too narrow to properly display code cells, producing a lot of text wrapping that impaired readability. I ended up using a layout with a single column for all blog contents, putting the blog archive at the bottom. Otherwise, if I kept the right sidebar, code cells got too squished in the post area.
I also had problems using some of the fancier templates available from 'Dynamic Views', in that I could never get inline math to render. But sticking to those from the Simple or 'Picture Window' categories worked fine and they still allow for a lot of customization.
Note: if you change blog templates, Blogger does destroy your custom CSS, so you may need to repeat the above steps in that case.
Adding the actual posts
Now, whenever you want to write a new post as a notebook, simply convert the
.ipynb file to blogger-html and copy its entire contents to the clipboard. Then go to the 'raw html' view of the post, remove anything Blogger may have put there by default, and paste. You should also click on the 'options' tab (right hand side) and select both
Show HTML literally and
Use <br> tag, else your paragraph breaks will look all wrong.
That's it!
What can you put in?
I will now add a few bits of code, plots, math, etc, to show which kinds of content can be put in and work out of the box. These are mostly bits copied from our example notebooks so the actual content doesn't matter, I'm just illustrating the
kind of content that works.
# Let's initialize pylab so we can plot later
%pylab inline
Welcome to pylab, a matplotlib-based Python environment [backend: module://IPython.zmq.pylab.backend_inline]. For more information, type 'help(pylab)'.
With pylab loaded, the usual matplotlib operations work
x = linspace(0, 2*pi)
plot(x, sin(x), label=r'$\sin(x)$')
plot(x, cos(x), 'ro', label=r'$\cos(x)$')
title(r'Two familiar functions')
legend()
<matplotlib.legend.Legend at 0x3128610>
The notebook, thanks to MathJax, has great LaTeX support, so that you can type inline math $(1,\gamma,\ldots, \infty)$ as well as displayed equations:
$$ e^{i \pi}+1=0 $$
but by loading the sympy extension, it's easy to showcase math
output from Python computations, where we don't type the math expressions in text, and instead the results of code execution are displayed in mathematical format:
%load_ext sympyprinting
import sympy as sym
from sympy import *
x, y, z = sym.symbols("x y z")
From simple algebraic expressions
Rational(3,2)*pi + exp(I*x) / (x**2 + y)
eq = ((x+y)**2 * (x+1))eq
expand(eq)
To calculus
diff(cos(x**2)**2 / (1+x), x)
You can easily include formatted text and code with markdown
You can
italicize, boldface build lists
and embed code meant for illustration instead of execution in Python:
def f(x):
    """a docstring"""
    return x**2
or other languages:
for (i=0; i<n; i++) {
    printf("hello %d\n", i);
    x += 4;
}
And since the notebook can store displayed images in the file itself, you can show images which will be embedded in your post:
from IPython.display import Image
Image(filename='fig/img_4926.jpg')
You can embed YouTube videos using the IPython object, this is my recent talk at SciPy'12 about IPython:
from IPython.display import YouTubeVideo
YouTubeVideo('iwVvqwLDsJo')
Including code examples from other languages
Using our various script cell magics, it's easy to include code in a variety of other languages
%%ruby
puts "Hello from Ruby #{RUBY_VERSION}"
Hello from Ruby 1.8.7
%%bash
echo "hello from $BASH"
hello from /bin/bash
And tools like the Octave and R magics let you interface with entire computational systems directly from the notebook; this is the Octave magic for which our example notebook contains more details:
%load_ext octavemagic
%%octave -s 500,500
# butterworth filter, order 2, cutoff pi/2 radians
b = [0.292893218813452 0.585786437626905 0.292893218813452];
a = [1 0 0.171572875253810];
freqz(b, a, 32);
The rmagic extension does a similar job, letting you call R directly from the notebook, passing variables back and forth between Python and R.
%load_ext rmagic
Start by creating some data in Python
X = np.array([0,1,2,3,4])
Y = np.array([3,5,4,6,7])
Which can then be manipulated in R, with results available back in Python (in
XYcoef):
%%R -i X,Y -o XYcoef
XYlm = lm(Y~X)
XYcoef = coef(XYlm)
print(summary(XYlm))
par(mfrow=c(2,2))
plot(XYlm)
Call:
lm(formula = Y ~ X)

Residuals:
   1    2    3    4    5
-0.2  0.9 -1.0  0.1  0.2

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   3.2000     0.6164   5.191   0.0139 *
X             0.9000     0.2517   3.576   0.0374 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.7958 on 3 degrees of freedom
Multiple R-squared: 0.81, Adjusted R-squared: 0.7467
F-statistic: 12.79 on 1 and 3 DF, p-value: 0.03739
XYcoef
[ 3.2 0.9]
And finally, in the same spirit, the cython magic extension lets you call Cython code directly from the notebook:
%load_ext cythonmagic
%%cython -lm
from libc.math cimport sin
print 'sin(1)=', sin(1)
sin(1)= 0.841470984808
Keep in mind, this is still experimental code!
Hopefully this post shows that the system is already useful to communicate technical content in blog form with a minimal amount of effort. But please note that we're still in heavy development of many of these features, so things are susceptible to changing in the near future. By all means join the IPython dev mailing list if you'd like to participate and help us make IPython a better tool!
Regardless of what your book might say, a random process is a
collection of random variables that is described in mathematical notation as $\{X(t) \colon t \in \mathbb T\}$ or $\{X_t\colon t \in \mathbb T\}$ where $\mathbb T$ is called the index set (typically, $\mathbb T$ is $(-\infty,\infty)$ or $\{\cdots, -2, -1, 0, 1, 2, \cdots\}$ or subsets thereof) and we have one random variable for each $t \in \mathbb T$. Each random variable is, of course, a mapping from the sample space $\Omega \to \mathbb R$, and when the experiment is performed and the outcome $\omega \in \Omega$ is known, the random variable $X_t$ maps that $\omega$ onto some real number that is, of course, usually denoted by $X_t(\omega)$.
Now that we have this (typically infinite) collection of random variables, a
sample path or realization of the random process is the "waveform" that you see when the experiment is performed (with outcome $\omega_{11809}$, say) and each of the random variables $X_t$ maps this outcome onto $X_t(\omega_{11809})$. That's the "waveform": its value at $t = 0$ is $X_0(\omega_{11809})$, and when $t = 12$, its value is $X_{12}(\omega_{11809})$, and so on.
Now we come to the crux of the matter. A
deterministic random process is one for which, if we know the value(s) of one or more of the $X_t(\omega)$, say $X_{0}(\omega)$ and $X_2(\omega)$, then we know the value of all the $X_t(\omega)$: knowing the value taken on by a few of the random variables in the set tells us the values that all the random variables took on. That is, there is "no more randomness left" in the process.
With this in mind, consider a random process for which
all the random variables $X_t$ are in fact the same random variable $A$. (Whether $A$ is uniformly distributed on $(0,1)$ or not is totally irrelevant to the issue being considered.) Then, if we know that $X_0(\omega) = A(\omega) = 0.13$, then we know that $X_t(\omega) = 0.13$ for all $t \in \mathbb T$. That is why your book is claiming that the random process is deterministic. Note that if $X_t$ were to equal $A \cos(2\pi f_0 t)$ where $f_0$ is some known constant instead of just $A$, the process would still be deterministic since knowledge of the value of $X_0$ would give away knowledge of everything. If instead $X_t = A \sin(2\pi f_0 t)$, then knowing the value of $X_0$ would not be useful but knowing the value of $X_{(4f_0)^{-1}}$ (or any other $X_t$ where $\sin(2\pi f_0 t) \neq 0$) would again be a telltale.
As a harder question that you can use to test whether you understand the above explanation, consider whether a random process given by $$X_t = A \cos(2\pi f_0 t + \Theta), \quad -\infty < t < \infty,$$ where $A$ and $\Theta$ are independent random variables, is a deterministic process or not. Does knowing the values of just a few of the random variables $X_t$ tell you the value of all of them?
So, that is the basic idea behind the notion of deterministic random processes: observation of
some part of a sample path allows us to determine the entire sample path with perfect accuracy. How about nondeterministic random processes? Consider $$X_n = A_n, \quad n = \cdots, -2, -1, 0, 1, 2, \cdots$$ where the $A_n$ are independent random variables with some specific distribution, e.g. each $A_n \sim \mathcal N(0,\sigma^2)$. In this case, the process is often referred to as a (discrete-time) white noise process, which might give you a hint as to whether it should qualify as a deterministic or a nondeterministic process.
Exercise: what is the significance of the number 11809 in the second paragraph above?
Let $\boldsymbol{A}$ be a square matrix. Show that $\lim_{n\to\infty}\boldsymbol{A}^{n}=0$ if and only if $\lim_{n\to\infty}\|\boldsymbol{A}\|^{n}=0$ for the spectral radius or for some operator norm.
Let $A=\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ with the $l_1$ norm, then $\|A\| = 1$. Since $A^2 = 0$, we see that $\lim_n \|A^n\| = 0$, but we have $\lim_n \|A\|^n = 1$.
The spectral radius is not a norm; one reason is because there are nonzero matrices all of whose eigenvalues are zero. (Such matrices can't be diagonalizable, but nondiagonalizable matrices exist!) copper.hat gave one example.
Your property
does hold if you replace $\| \cdot \|$ with the spectral radius.
If $A^n \to 0$, then there
exists a norm, which can be chosen to be induced by some vector norm, such that $\| A \|^n \to 0$. This is ultimately because $\rho(A)$ is the infimum of $\| A \|$ over all possible operator norms. One direction (that the spectral radius is a lower bound) is easy to see by homogeneity and the operator norm property. The other direction (that the spectral radius is the greatest lower bound) is harder to prove.
For $A$ matrix denote by $\rho(A)$ the spectral radius of $A$, $= \max \{ |\lambda_i|\}$, with $\lambda_i$ are the eigenvalues of $A$.
We have $A^n \to 0$ if and only if $\rho(A)^n \to 0$ if and only if $\rho(A)<1$. For a proof one can use the Jordan canonical form.
Now if $||\cdot ||$ is any algebra norm on $M_n(\mathbb{R})$ (for example one coming from a norm on $\mathbb{R}^n$), then $||A||< 1$ implies $A^n \to 0$, because $||A^n|| \le ||A||^n$. The converse is not true, as the example of @copper.hat shows. The norm of the operator $A\colon e_1 \mapsto \alpha e_2 \mapsto 0$ can be made as large as wanted, while $A^2 = 0$.
Notes:
The spectral radius is not a norm for the algebra $M_n(\mathbb{R})$ if $n \ge 2$; one can have $A$, $B$ nilpotent ($\rho(A) = \rho(B) = 0$) while $\rho(AB)$ is large; take $A$ the one above and $B = A^{t}$.
If $A^n \to 0$ then the convergence is exponential. Moreover, the series $\sum_{n\ge 0} A^n$ is also convergent, with sum $(I-A)^{-1}$.
If we use the Frobenius norm for matrices,
$$\|A\|=\sqrt{\sum\limits_{i,j=1}^{n}|a_{ij}|^2},$$
we can prove
$$ \|A\|^n \to 0\implies \|A^n\| \to 0. $$ The converse is not true; a counterexample is given by copper.hat.
First we prove the following:
Lemma:$\hspace{2 mm}\|AB\|\leqslant\|A\|\|B\|$
Proof: \begin{align} \|AB\|^2&=\sum\limits_{i,j=1}^{n}\left|\sum\limits_{k=1}^na_{ik}b_{kj}\right|^2 \\ &\leqslant\sum\limits_{i,j=1}^{n}\left(\sum\limits_{k=1}^n|a_{ik}|^2\sum\limits_{k=1}^n|b_{kj}|^2\right)\tag{Cauchy-Schwarz} \\ &=\sum\limits_{i,j=1}^{n}\left(\sum\limits_{k,l=1}^n|a_{ik}|^2|b_{lj}|^2\right) \\ &=\sum\limits_{i,k=1}^{n}|a_{ik}|^2\sum\limits_{l,j=1}^n|b_{lj}|^2 \\ &=\|A\|^2\|B\|^2 \\ \end{align} Now by the lemma $$ \|A^n\|\leqslant\|A\|^n \hspace{5 mm} \text{and so} \hspace{5 mm} \|A^n\| \to 0\hspace{5 mm} \text{as} \hspace{5 mm} \|A\|^n\to 0 $$
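These facts are easy to sanity-check numerically. A sketch using NumPy (the random matrices and the contraction matrix M are arbitrary choices):

```python
import numpy as np

# copper.hat's example: ||A|| = 1 in the induced l1 norm, yet A^2 = 0
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
l1_norm = lambda M: np.abs(M).sum(axis=0).max()   # max column sum
assert l1_norm(A) == 1.0          # so ||A||^n -> 1
assert l1_norm(A @ A) == 0.0      # while ||A^n|| -> 0 for n >= 2

# submultiplicativity ||AB|| <= ||A|| ||B|| for the Frobenius norm (the lemma)
rng = np.random.default_rng(0)
B, C = rng.random((3, 3)), rng.random((3, 3))
assert np.linalg.norm(B @ C) <= np.linalg.norm(B) * np.linalg.norm(C) + 1e-12

# if rho(M) < 1 then M^n -> 0 and sum_n M^n = (I - M)^{-1}
M = np.array([[0.2, 0.5],
              [0.1, 0.3]])
S = sum(np.linalg.matrix_power(M, k) for k in range(200))
assert np.allclose(S, np.linalg.inv(np.eye(2) - M))
```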
I've always interpreted this notion in the following way.$\def\eq{\leftrightarrow}\def\t{\text}\def\pa{\t{PA}}\def\th{\t{Th}}\def\prf{\t{Proof}}\def\prov{\t{Prov}}\def\box{\square}\def\nn{\mathbb{N}}\def\str#1{{``\text{#1}\!"}}$
Formal system interpretation
Take any formal systems $S,T$.
We say that $S$
interprets $T$ via $ι$ iff $ι$ is a computable translation from sentences over $T$ to sentences over $S$ such that all the following hold for any sentences $φ,ψ$ over $T$:
(Ι1) $S \nvdash ι(\bot)$.
(Ι2) If $S \vdash ι(φ)$ and $S \vdash ι(φ \to ψ)$ then $S \vdash ι(ψ)$.
(Ι3) If $T \vdash φ$ then $S \vdash ι(φ)$.
This definition automatically implies that any formal system that interprets any other formal system is consistent.
It is sufficiently general to include all kinds of formal systems, unlike other definitions I've seen such as the one given in Rautenberg's
A concise Introduction to Mathematical Logic, which only makes sense for classical first-order theories.
The 1st incompleteness theorem (non-constructive)
I shall sketch a proof that any formal system that interprets PA (in fact Robinson's Q suffices) and has decidable proof validity can neither prove nor disprove (the translation of) some sentence over PA.
Take any formal system $S$ with proof verifier $V$ (meaning that for every strings $x,y$, we have that $V(x,y)$ outputs $\str1$ if $x$ is a proof of $y$ over $S$ and outputs $\str0$ otherwise) that interprets PA via $ι$.
Then for any sentence $φ$ over PA, we have $S \vdash ι( φ \to ( \neg φ \to \bot ) )$ by (I3), and hence [(I4)] either $S \nvdash ι(φ)$ or $S \nvdash ι(\neg φ)$, because otherwise $S \vdash ι(\bot)$ by (I2) twice, contradicting (I1). [This is the only place where we use (I1) and (I2).]
Let $u$ be a $1$-parameter sentence over PA such that, for any strings $x,y,z$, if program $x$ on input $y$ produces output $z$, then $\pa \vdash u(c(x,y,z))$ and $\pa \vdash \neg u(c(x,y,w))$ for every string $w \ne z$. Here "$c(x,y,z)$" denotes the term coding for $(x,y,z)$. [The existence of $u$ is really the only difficult part when it comes to PA or Q; it becomes trivial if $S$ can interpret string manipulation natively, since we will not have to go through Godel coding.]
Let $G$ be the following program on input $(P,X)$:
For each string $s$ in length-lexicographic order:
If $V(s,ι(u(c(P,X,\str0))))$ then output $\str0$.
If $V(s,ι(\neg u(c(P,X,\str0))))$ then output $\str1$.
[The idea is that $G(P,X)$ searches for a proof or disproof of "$P$ halts on $X$ and outputs $\str0$" and outputs $\str0$ if it finds a proof and $\str1$ if it finds a disproof.]
If ( $S \vdash ι(φ)$ or $S \vdash ι(\neg φ)$ ) for every sentence $φ$ over PA:
Given any program $P$ and input $X$:
Either $S \vdash ι(u(c(P,X,\str0)))$ or $S \vdash ι(\neg u(c(P,X,\str0)))$, and hence $G$ halts on $(P,X)$.
If $P$ halts on $X$ and outputs $\str0$:
$\pa \vdash u(c(P,X,\str0))$, and hence $S \vdash ι(u(c(P,X,\str0)))$ by (I3).
Thus $S \nvdash ι(\neg u(c(P,X,\str0)))$ by (I4), and hence $G(P,X) = \str0$.
If $P$ halts on $X$ and does not output $\str0$:
$\pa \vdash \neg u(c(P,X,\str0))$, and hence $S \vdash ι(\neg u(c(P,X,\str0)))$ by (I3).
Thus $S \nvdash ι(u(c(P,X,\str0)))$ by (I4), and hence $G(P,X) = \str1$.
[But this property of $G$ is impossible!]
Let $C$ be the following program on input $P$:
If $G(P,P) = 0$ then output $\str1$ otherwise output $\str0$.
Then $C$ halts on $C$ because $G$ halts on $(P,P)$.
Thus $C(C) = \str0$ iff $G(C,C) = \str0$ [by property of $G$] iff $C(C) = \str1$ [by definition of $C$].
Contradiction.
Therefore ( $S \nvdash ι(φ)$ and $S \nvdash ι(\neg φ)$ ) for some sentence $φ$ over PA.
By the way, the core idea of this proof came from this blog post, but I changed the construction to make it cleaner and self-contained (using only basic programming knowledge). Interestingly, a commenter on that blog post claims to prove that the problem that the $G$ in the above proof solves is strictly weaker than the halting problem, which 'corresponds' nicely to the fact that using the unsolvability of the halting problem only yields Godel's version of the incompleteness theorem, which requires $Σ_1$-soundness of $S$.
The 1st incompleteness theorem (constructive)
This was technically not asked for in the question, but I am including it because it is related and also because it gives an explicit sentence that witnesses the first incompleteness theorem, unlike the above elegant but non-constructive proof. The proof simply translates Rosser's trick appropriately, and as before one can see that Q suffices in place of PA.
Take any formal system $S$ as before, and let $V,ι,u,c$ be as previously defined, and let $c'$ be the inverse of $c$.
Let $\prf_S$ be the $2$-parameter sentence $( m,n \mapsto u(c(V,(c'(m),ι(c'(n))),\str1)) )$ over PA.
Then, for any string $x$ and sentence $φ$ over PA, we have that $\pa \vdash \prf_S(c(x),c(φ))$ if $x$ is a proof of $ι(φ)$ over $S$ and $\pa \vdash \neg \prf_S(c(x),c(φ))$ otherwise.
For each sentence $φ$ over PA, let $\prov_S φ = \exists m\ ( \prf_S(m,c(φ)) \land \forall n<m\ ( \neg \prf_S(n,c(\neg φ)) ) )$. [Intuitively, "$\prov_S φ$" is intended to assert that there is a proof of $ι(φ)$ over $S$ and no smaller proof of $ι(\neg φ)$ over $S$.]
By the fixed point theorem for PA, let $φ$ be a sentence over PA such that $\pa \vdash φ \eq \neg \prov_S φ$.
If $S \vdash ι(φ)$:
Then $\pa \vdash \prov_S φ$ (PA verifies the given proof of $ι(φ)$ and checks every smaller string is not a proof of $ι(\neg φ)$), so from $\pa \vdash \prov_S φ \equiv \neg φ$ we get $\pa \vdash \neg φ$, and hence $S \vdash ι(\neg φ)$ by (I3).
Also $S \nvdash ι(\neg φ)$ by (I4), and hence a contradiction.
If $S \vdash ι(\neg φ)$:
Let $p$ be the proof of $ι(\neg φ)$ over $S$.
Then $\pa \vdash \prf_S(c(p),c(\neg φ))$, and hence $\pa \vdash \forall m > c(p)\ ( \exists n<m\ ( \prf_S(n,c(\neg φ)) ) )$.
Also $S \nvdash ι(φ)$ by (I4), and hence $\pa \vdash \forall m \le c(p)\ ( \neg \prf_S(m,c(φ)) )$.
Thus $\pa \vdash \neg \prov_S φ \equiv φ$, and hence $S \vdash ι(φ)$ by (I3), which gives a contradiction.
Therefore $S \nvdash ι(φ)$ and $S \nvdash ι(\neg φ)$.
Provability logic and the 2nd incompleteness theorem
We can get more if we further require (I2) and (I3) to be witnessed by computable functions. Precisely:
Take any formal systems $S,T$.
We say that $S$
uniformly interprets $T$ via $ι,f,g$ iff $S$ interprets $T$ via $ι$ and $f,g$ are partial computable functions such that, for any sentences $φ,ψ$ over PA, the following hold:
(I2) For any proofs $x,y$ of $ι(φ)$ and $ι(φ \to ψ)$ respectively over $S$, we have that $f(x,y)$ is a proof of $ι(ψ)$ over $S$.
(I3) For any proof $x$ of $φ$ over PA, we have that $g(x)$ is a proof of $ι(φ)$ over $S$.
Then any formal system $S$ that has decidable proof validity and uniformly interprets PA cannot prove 'its own consistency'. Note that the proof I shall give does not apply to formal systems that merely uniformly interpret Q.
Take any formal system $S$ that has decidable proof validity and uniformly interprets PA via $ι,f,g$, and let $\prf$ be as previously defined.
For each sentence $φ$ over PA, let $\box_S φ = \exists m\ ( \prf_S(m,c(φ)) )$, and let $\t{Con}(S) = \neg \box_S \bot$.
Then it is not hard to check that $S$ satisfies the provability conditions and the fixed point theorem, in the sense that the following hold:
(D1) If $S \vdash ι(φ)$ then $S \vdash ι( \box_S φ )$, for any sentence $φ$ over PA.
(D2) $S \vdash ι( \box_S φ \land \box_S( φ \to ψ ) \to \box_S ψ )$, for any sentences $φ,ψ$ over PA.
(D3) $S \vdash ι( \box_S φ \to \box_S \box_S φ )$, for any sentence $φ$ over PA.
(F) Given any $1$-propositional-parameter sentence $P$ over PA such that every occurrence of the parameter in $P$ is bound by some $\box_S$, there is a sentence $Q$ over PA so that $S \vdash ι( Q \eq P(Q) )$.
Basically this is because PA can capture $f,g$, and hence manipulate proofs over $S$ of arithmetical sentences (of the form $ι(φ)$ for some sentence $φ$ over PA). $S$ interprets PA and hence can do the same.
Thus by Lob's theorem as proven in provability logic we get Godel's second incompleteness theorem for $S$ in both external form ($S \nvdash ι( \t{Con}(S) )$) and internal form ($S \vdash ι( \t{Con}(S) \to \neg \box_S \t{Con}(S) )$).
Note that although (I1) only requires that $S$ does not prove the (translation of the) arithmetical sentence "$\bot$", we have shown that in fact $S$ also does not prove the arithmetical sentence $\t{Con}(S)$, which is just a $Π_1$-sentence (because $u$ needs only bounded quantifiers and hence also $\prf$) such that every instantiation of that universal sentence can be proven by $S$! Therefore $S$ is $ω$-incomplete in the same essential way that PA is.
Let $(X, \mathcal{F}, \mu)$ be a measure space with $\mu(X) = 1$. Prove that $$ \mathcal{A} = \left\{A \in \mathcal{F} : \mu(A) = 0 \ \text{or} \ \mu(A) = 1\right\} $$ is a $\sigma$-algebra.
I'm having trouble proving it is closed under countable union. My attempt: let $A_1, A_2, \cdots \in \mathcal{A}$, then we can construct $B_n = A_n \backslash \cup_{i=1}^{n-1}A_i$, then $\cup_{i=1}^{\infty}A_i = \cup_{i=1}^{\infty}B_i$ and $B_i$ are pairwise disjoint.
Case 1: If $\mu(A_i) = 0 \ \forall i$, then $B_i \subset A_i \implies \mu(B_i) = 0$. Then $\mu(\cup A_i ) = \mu(\cup B_i) =\Sigma \mu(B_i) = 0$. Then $\cup A_i \in \mathcal{A}$.
Case 2: If $\mu(A_i) = 1 \ \forall i$, then for $n \ge 2$, $B_n = A_n \backslash \cup_{i=1}^{n-1}A_i \subset A_{1}^{c}$. Since $\mu(A_{1}^{c}) = \mu(X) - \mu(A_1) = 0$, we get $\mu(B_n) = 0$ for $n \ge 2$. Then $\mu(\cup A_i ) = \mu(\cup B_i) = \mu(B_1) + \Sigma_{k=2}^{\infty} \mu(B_k) = 1$. Then $\cup A_i \in \mathcal{A}$.
Case 3: If $\mu(A_i) = 0$ for some $i$. I don't know where to start here. I feel my proof is complicated and it requires $\mu$ to be a complete measure ($\forall B \subset A, \mu(A) = 0 \implies \mu(B) =0$).
Consider the stochastic process $(X_t)_{t \in \mathbb{R}}$ and show the equivalence of the following two Markov properties:
(a) $P(X_t \in A \mid X_u, u \leq s) = P(X_t \in A\mid X_s) \qquad \forall A \in \mathcal{S}, \forall t>s \in \mathbb{R}$;
(b) $P(A\cap B\mid X_t) = P(A\mid X_t)P(B\mid X_t) \text{ almost surely} \qquad \forall t \in \mathbb{R}, \forall A \in \mathcal{F}_t, \forall B \in \mathcal{T}_t$.
"The future and the past are independent given the present."
As suggested I try to show the discrete case first, where the random variables take discrete values and time is discrete. So far with the given comments and answers:
(b) => (a)
Set $A := \cap_{k < n} [X_k = i_k], B := [X_{n+1} = i], C := [X_n = i_n]$; then (b) implies
$P(A\cap B|C) = P(A|C)P(B|C) \iff \frac{P(A\cap B\cap C)}{P(C)} = \frac{P(A\cap C) P(B\cap C)}{P(C)^2} $ $\iff P(B|C) = P(B|A\cap C)$
For these special $A,B,C$ the latter is equivalent to (a) (as we have shown for the discrete case in a previous exercise).
(a) => (b)
Analogously (reversing the reasoning above), (a) implies (b) for events $A := \cap_{k < n} [X_k = i_k], B := [X_{n+1} = i], C := [X_n = i_n]$, since for such events all steps in the above reasoning were biimplications.
It remains to conclude that (a) implies (b) for events of the form $A=\bigcap_i[X_{u_i}=x_i]$, $B=\bigcap_j[X_{s_j}=z_j]$ where $u_i\leqslant n\leqslant s_j$ as well:
Again $P(A\cap B | C) = P(A|C)P(B|C) \iff P(B|C) = P(B|A\cap C)$
so we need to show that the latter holds also for the general events $A$ and $B$ only assuming (a) and the first part we have already shown; and it indeed follows from the first bit: $P(B|A\cap C) = P(\bigcap_j[X_{s_j}=z_j] | A\cap C) = P(\bigcap_j[X_{s_j}=z_j] | C)$ since $s_j > n$.
Is this the discrete case completed or am I missing something? I think somehow I need to handle the special cases where we have one $s_j = n$ or $u_i=n$.
How can I derive the continuous case from the discrete case then? And where does the "almost surely" come in in the continuous case?
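For intuition in the discrete case, the identity $P(A\cap B\mid C) = P(A\mid C)P(B\mid C)$ can be checked by exhaustive enumeration on a toy two-state Markov chain (the transition matrix and initial law here are arbitrary choices):

```python
from itertools import product

P = [[0.7, 0.3], [0.4, 0.6]]   # arbitrary transition matrix
pi0 = [0.5, 0.5]               # arbitrary initial distribution

def prob(path):
    # probability of a sample path (x0, x1, x2) under the chain
    p = pi0[path[0]]
    for x, y in zip(path, path[1:]):
        p *= P[x][y]
    return p

# A = [X_0 = 0] (past), B = [X_2 = 0] (future), C = [X_1 = 0] (present)
paths = list(product([0, 1], repeat=3))
pC   = sum(prob(w) for w in paths if w[1] == 0)
pAC  = sum(prob(w) for w in paths if w[0] == 0 and w[1] == 0)
pBC  = sum(prob(w) for w in paths if w[1] == 0 and w[2] == 0)
pABC = prob((0, 0, 0))

# past and future are conditionally independent given the present
assert abs(pABC / pC - (pAC / pC) * (pBC / pC)) < 1e-12
```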
I presume you are talking about unconstrained minimization. Your question should specify if you are considering a specific problem structure. Otherwise, the answer is no.
First I should dispel a myth. The classical gradient descent method (also called
steepest descent method) is not even guaranteed to find a local minimizer. It stops when it has found a first-order critical point, i.e., one where the gradient vanishes. Depending on the particular function being minimized and the starting point, you may very well end up at a saddle point or even at a global maximizer!
Consider for instance $f(x,y) = x^2 - y^2$ and the initial point $(x_0,y_0) := (1,0)$. The steepest descent direction is $-\nabla f(1,0) = (-2,0)$. One step of the method with exact line search leaves you at $(0,0)$, where the gradient vanishes. Alas, it's a saddle point. You would only realize this by examining the second-order optimality conditions. But now imagine the function is $f(x,y) = x^2 - 10^{-16} y^2$. Here, $(0,0)$ is still a saddle point, but numerically, the second-order conditions may not tell you. In general, say you determine that the Hessian $\nabla^2 f(x^*,y^*)$ has an eigenvalue equal to $-10^{-16}$. How do you read it? Is it negative curvature or numerical error? How about $+10^{-16}$?
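The saddle-point behaviour is easy to reproduce. A minimal sketch with a fixed step size (rather than exact line search; step size and iteration count are arbitrary):

```python
def gradient_descent(grad, x0, step=0.1, iters=500):
    # plain fixed-step gradient descent
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# f(x, y) = x^2 - y^2 has a saddle point at the origin
grad_f = lambda p: (2 * p[0], -2 * p[1])

x, y = gradient_descent(grad_f, (1.0, 0.0))
print(x, y)   # iterates converge to (0, 0): a critical point, but a saddle
```

Starting anywhere on the $x$-axis, the $y$-component never moves, so the method marches straight to the saddle and stops there.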
Consider now a function such as $$
f(x) =
\begin{cases}
1 & \text{if } x \leq 0 \\
\cos(x) & \text{if } 0 < x < \pi \\
-1 & \text{if } x \geq \pi.
\end{cases}
$$
This function is perfectly smooth, but if your initial point is $x_0 = -2$, the algorithm stops at a global maximizer. And by checking the second-order optimality conditions, you wouldn't know! The problem here is that some local minimizers are global maximizers!
Now virtually all gradient-based optimization methods suffer from this by design. Your question is really about
global optimization. Again, the answer is no, there are no general recipes to modify a method so as to guarantee that a global minimizer is identified. Just ask yourself: if the algorithm returns a value and says it is a global minimizer, how would you check that it's true?
There are classes of methods in global optimization. Some introduce randomization. Some use multi-start strategies. Some exploit the structure of the problem, but those are for special cases. Pick up a book on global optimization. You will enjoy it. |
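To make the first failure mode concrete, here is a small numerical sketch (my own illustration, not part of the original answer) of gradient descent on the saddle example $f(x,y) = x^2 - y^2$ starting at $(1,0)$, using a fixed step size instead of exact line search:

```python
import numpy as np

def grad(p):
    # gradient of f(x, y) = x**2 - y**2
    x, y = p
    return np.array([2 * x, -2 * y])

p = np.array([1.0, 0.0])  # starting point from the answer
for _ in range(100):
    p = p - 0.1 * grad(p)  # plain gradient descent step

print(p)  # converges to (0, 0): a first-order critical point that is a saddle, not a minimizer
```

Because the $y$-coordinate starts (and stays) at zero, the iterates never see the direction of negative curvature, exactly as described above.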
Moving averages of prices are closely related to moving averages of price differences. In particular, if the price is a cumulative sum of historical price differences,
$$p_t = \sum_{j=0} \delta p_{t-j}$$
then a moving average of prices with weights $w_k$ can be written as a moving average of price differences with weights $v_k$
$$\sum_{k=0}w_k p_{t-k} = \sum_{k=0} w_k \sum_{j=0}\delta p_{t-j-k} =\sum_{k=0} \left(\sum_{i=0}^k w_i\right) \delta p_{t-k} =\sum_{k=0} v_k \delta p_{t-k}$$
where
$$v_k = \sum_{i=0}^k w_i$$
In particular, a moving average crossover with spans $(n_1, n_2)$ is a moving average of prices, where
$$w_k = \begin{cases}1/n_1 - 1/n_2 & \text{if } k < n_1 \\-1/n_2 & \text{if } n_1 \leq k < n_2 \\0 & \text{otherwise}\end{cases}$$
It is therefore also a moving average of price differences. It differs from the simple moving average, which has equal weight on all lags, by having very little weight on the first lag, with weights linearly increasing up to lag $n_1$, and then linearly decreasing up to lag $n_2$.
Qualitatively, the moving average crossover filters out more of the higher frequency noise resulting in a 'smoother' signal (intuitively this is because there is very little weight on either the most recent or most distant observation). In the language of signal processing it is a form of low pass filter. There is a tradeoff of smoothness of the resulting signal against reactivity to recent price changes. |
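As a quick illustration (my own sketch, with spans chosen arbitrarily), the implied difference weights $v_k$ for a crossover with $(n_1, n_2) = (3, 10)$ can be computed directly, showing the rise to a peak at lag $n_1 - 1$ followed by a linear decline to zero at lag $n_2 - 1$:

```python
import numpy as np

n1, n2 = 3, 10
k = np.arange(n2)
w = np.where(k < n1, 1 / n1 - 1 / n2, -1 / n2)  # crossover weights on prices
v = np.cumsum(w)                                 # implied weights on price differences

print(np.round(v, 3))  # triangular profile: rises up to k = n1 - 1, falls to ~0 at k = n2 - 1
```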
Suppose we pick a random number from $[1, n]$, with repetition, $k$ times. What is the probability distribution of the number of distinct numbers picked for a given $k$? The number of distinct numbers picked lies in $[1, \min(k, n)]$.
Suppose we draw $m$ times with $n$ possible values and ask about the number $r$ of distinct values that appeared. From first principles, the probability of seeing exactly $r$ distinct values is
$$\frac{1}{n^m} {n\choose r} \times {m\brace r} \times r!.$$
We may include $r=0$ because the Stirling number is zero there. Summing these probabilities over $r$ should evaluate to one. We get
$$\frac{1}{n^m} \sum_{r=0}^n {n\choose r} \times m! [z^m] (\exp(z)-1)^r = m! [z^m] \frac{1}{n^m} \sum_{r=0}^n {n\choose r} (\exp(z)-1)^r \\ = m! [z^m] \frac{1}{n^m} \exp(nz) = \frac{1}{n^m} n^m = 1$$
and the sanity check goes through. We get for the expected number of distinct values
$$\frac{1}{n^m} \sum_{r=1}^n r {n\choose r} \times m! [z^m] (\exp(z)-1)^r \\ = \frac{1}{n^{m-1}} \sum_{r=1}^n {n-1\choose r-1} \times m! [z^m] (\exp(z)-1)^r \\ = \frac{1}{n^{m-1}} m! [z^m] (\exp(z)-1) \sum_{r=1}^n {n-1\choose r-1} \times (\exp(z)-1)^{r-1} \\ = \frac{1}{n^{m-1}} m! [z^m] (\exp(z)-1) \sum_{r=0}^{n-1} {n-1\choose r} \times (\exp(z)-1)^{r} \\ = \frac{1}{n^{m-1}} m! [z^m] (\exp(z)-1) \exp((n-1)z) = \frac{1}{n^{m-1}} (n^m - (n-1)^m) \\ = n \left(1 - \left(1-\frac{1}{n}\right)^m\right).$$
The species for labeled set partitions is
$$\mathfrak{P}(\mathcal{U}\mathfrak{P}_{\ge 1}(\mathcal{Z}))$$
which yields the generating function
$$G(z, u) = \exp(u(\exp(z)-1)).$$
We verified these with the following script.
ENUM := proc(n, m)
  option remember;
  local ind, d, res;
  res := 0;
  for ind from n^m to 2*n^m-1 do
    d := convert(ind, base, n);
    res := res + nops(convert(d[1..m], `multiset`));
  od;
  res/n^m;
end;

X := (n, m) -> n*(1-(1-1/n)^m);
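The same cross-check can be done in Python (my own analogue of the Maple script, with small parameters so that brute-force enumeration stays cheap): the expected number of distinct values computed exactly over all draws matches the closed form $n(1-(1-1/n)^m)$.

```python
from itertools import product
from fractions import Fraction

def expected_distinct(n, m):
    # exact average number of distinct values over all n**m equally likely draws
    total = sum(len(set(draw)) for draw in product(range(n), repeat=m))
    return Fraction(total, n ** m)

n, m = 4, 3
closed_form = Fraction(n) * (1 - Fraction(n - 1, n) ** m)
print(expected_distinct(n, m) == closed_form)  # True
```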
Practice Question on Computing the Fourier Transform of a Continuous-time Signal
Compute the Fourier transform of the signal
$ x(t) = \left\{ \begin{array}{ll} 1, & \text{ for } -5\leq t \leq 5,\\ 0, & \text{ for } 5< |t| \leq 10, \end{array} \right. \ $
x(t) periodic with period 20.
You will receive feedback from your instructor and TA directly on this page. Other students are welcome to comment/discuss/point out mistakes/ask questions too!
Answer 1
For a square wave,
$ a_k=\frac{sin(k\omega_0 T_1)}{\pi k} $
In this case,
$ \omega_0=\frac{2\pi}{20}=\frac{\pi}{10} \mbox{ and } T_1 = 5 $
Therefore
$ \chi(\omega)=\sum_{k=-\infty}^{\infty}2\pi\frac{sin(k\frac{\pi}{10} 5)}{\pi k}\delta(\omega-k\frac{\pi}{10})=\sum_{k=-\infty}^{\infty}2\frac{sin(k\frac{\pi}{2})}{ k}\delta(\omega-k\frac{\pi}{10}) $
--Cmcmican 21:13, 21 February 2011 (UTC)
TA's comments: Good Job. You may use \sin to produce a $ \sin $.
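As a numerical cross-check of Answer 1 (my own addition, not part of the original page), the Fourier series coefficients of this square wave can be approximated by direct integration over one period and compared against $\sin(k\pi/2)/(\pi k)$:

```python
import numpy as np

T, T1 = 20, 5
w0 = 2 * np.pi / T
dt = 1e-4
t = np.arange(-T / 2, T / 2, dt)       # one period
x = (np.abs(t) <= T1).astype(float)    # square wave: 1 on |t| <= 5, 0 elsewhere

for k in (1, 2, 3):
    # a_k = (1/T) * integral over one period of x(t) e^{-j k w0 t} dt
    ak = np.sum(x * np.exp(-1j * k * w0 * t)) * dt / T
    print(k, ak.real.round(4), (np.sin(k * np.pi / 2) / (np.pi * k)).round(4))
```

The numerically computed coefficients agree with the closed form used in the answer.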
Given a matrix $F \in \mathbb{C}^{m \times n}$ such that $m > n$ and another (not necessarily symmetric) square matrix $A$ of size $n \times n$, how can one formulate
$$ \arg \min_b \left\|A- {F}^{*} \operatorname{diag} \left( b \right) \, {F} \right\|_{2}$$
where $b \in \mathbb{C}^m$ is some vector and $*$ denotes the conjugate transpose, as a semidefinite program?
I started as follows. Writing the above problem in epigraph form by introducing a variable $x$,
\begin{array}{ll} \text{minimize} & x\\ \text{subject to} & \left\|A- {F}^{*} \operatorname{diag} \left( b \right) \, {F} \right\|_{2} \leq x\end{array}
which is equivalent to
\begin{array}{ll} \text{minimize} & x\\ \text{subject to} & \sigma_{\max}(A- {F}^{*} \operatorname{diag} \left( b \right) \, {F} ) \leq x\end{array}
which is equivalent to
\begin{array}{ll} \text{minimize} & x\\ \text{subject to} & \lambda_{\max}\big((A- {F}^{*} \operatorname{diag} \left( b \right) \, {F} )^*(A- {F}^{*} \operatorname{diag} \left( b \right) \, {F} ) \big) \leq x^2\end{array}
Can anybody tell me how I can proceed with this? |
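One standard route for the last step (offered here as a hint, not part of the original question) is the Schur-complement characterization of the spectral norm: for any matrix $M$ and scalar $x \geq 0$,

$$\left\|M\right\|_{2} \leq x \iff \begin{bmatrix} xI & M \\ M^{*} & xI \end{bmatrix} \succeq 0.$$

Taking $M = A - F^{*} \operatorname{diag}(b) F$, which is affine in $(b, x)$, turns the epigraph constraint into a linear matrix inequality, so the problem becomes a semidefinite program.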
Q. A mixture of 2 moles of helium gas (atomic mass = 4 u) and 1 mole of argon gas (atomic mass = 40 u) is kept at 300 K in a container. The ratio of their rms speeds $\left[\frac{V_{rms}(\text{helium})}{V_{rms}(\text{argon})}\right]$ is close to:
Solution:
$\frac{V_{rms} \left(He\right)}{V_{rms}\left(Ar\right)} = \sqrt{\frac{M_{Ar}}{M_{He}}} = \sqrt{\frac{40}{4}} \approx 3.16 $
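A one-line numerical check of the solution (my own addition): at equal temperature $v_{rms} = \sqrt{3RT/M}$, so the ratio reduces to $\sqrt{M_{Ar}/M_{He}}$.

```python
import math

# v_rms = sqrt(3RT/M), so at equal T the ratio is sqrt(M_Ar / M_He)
ratio = math.sqrt(40 / 4)
print(round(ratio, 2))  # 3.16
```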
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions
(Elsevier, 2017-11)
Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ... |
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ... |
Shalom [edit: originally M. Burger] showed that the pair $(\mathrm{SL}_2(\mathbb{Z}) \ltimes \mathbb{Z}^2, \mathbb{Z}^2)$ has Relative Property (T) with respect to standard generating sets.
(The action of $\mathrm{SL}_2(\mathbb{Z})$ on $\mathbb{Z}^2$ is the usual one, i.e. the semidirect product can be thought of as a group of affine transformations $x \mapsto A x + b$ where $A \in \mathrm{SL}_2(\mathbb{Z})$ and $b \in \mathbb{Z}^2$.
If we reduce $\mathrm{mod}\ p$, we can think of this as giving an "efficient" way of generating the translations $x \mapsto x + b$ for $b \in F_p^2$.)
A one-dimensional variant in the finite case is whether there exist bounded size subsets $S_p \subset F_p^{\times} \ltimes F_p$ and $\delta > 0$ such that the relative Kazhdan constant:
$\kappa\ (F_p^{\times} \ltimes F_p, F_p, S_p) \ge \delta$
i.e. whether the pairs $(F_p^{\times} \ltimes F_p, F_p)$ can form a relative expander family.
An equivalent formulation: do there exist bounded-size sets $S_p$ of affine transformations on $F_p$, such that no non-empty subset $U \subset F_p$ with $|U| \leq p/2$ is
almost invariant with respect to all of them, i.e.
$\neg \exists U: \forall s \in S_p: |s(U) \cap U| > \frac{99}{100} |U|$
I believe the answer is no if one uses standard "generating" sets (they needn't actually generate) such as $x \mapsto x + 1,\ x \mapsto ax$, even if $a$ is allowed to vary with $p$. This is very slightly surprising, as these do generate all translations "efficiently" in the weaker sense of logarithmic diameter.
Is there a good argument as to why this should fail in general? Or might there be cunning sets $S_p$ such that relative expansion occurs? |
Defining parameters
Level: \( N \) = \( 3600 = 2^{4} \cdot 3^{2} \cdot 5^{2} \)
Weight: \( k \) = \( 1 \)
Character orbit: \([\chi]\) = 3600.en (of order \(20\) and degree \(8\))
Character conductor: \(\operatorname{cond}(\chi)\) = \( 400 \)
Character field: \(\Q(\zeta_{20})\)
Newforms: \( 0 \)
Sturm bound: \(720\)
Trace bound: \(0\)

Dimensions
The following table gives the dimensions of various subspaces of \(M_{1}(3600, [\chi])\).
                    Total   New   Old
Modular forms         64     16    48
Cusp forms             0      0     0
Eisenstein series     64     16    48
The following table gives the dimensions of subspaces with specified projective image type.
            \(D_n\)   \(A_4\)   \(S_4\)   \(A_5\)
Dimension      0         0         0         0
A simple illustration of the trapezoid rule for definite integration:$$ \int_{a}^{b} f(x)\, dx \approx \frac{1}{2} \sum_{k=1}^{N} \left( x_{k} - x_{k-1} \right) \left( f(x_{k}) + f(x_{k-1}) \right). $$
First, we define a simple function and sample it between 0 and 10 at 200 points
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def f(x):
    return (x-3)*(x-5)*(x-7) + 85

x = np.linspace(0, 10, 200)
y = f(x)
Choose a region to integrate over and take only a few points in that region
a, b = 1, 8  # the left and right boundaries
N = 5  # the number of points
xint = np.linspace(a, b, N)
yint = f(xint)
Plot both the function and the area below it in the trapezoid approximation
plt.plot(x, y, lw=2)
plt.axis([0, 9, 0, 140])
plt.fill_between(xint, 0, yint, facecolor='gray', alpha=0.4)
plt.text(0.5 * (a + b), 30, r"$\int_a^b f(x)dx$",
         horizontalalignment='center', fontsize=20);
Compute the integral both at high accuracy and with the trapezoid approximation
from __future__ import print_function
from scipy.integrate import quad

integral, error = quad(f, a, b)
integral_trapezoid = sum((xint[1:] - xint[:-1]) * (yint[1:] + yint[:-1])) / 2
print("The integral is:", integral, "+/-", error)
print("The trapezoid approximation with", len(xint), "points is:", integral_trapezoid)
The integral is: 565.2499999999999 +/- 6.275535646693696e-12 The trapezoid approximation with 5 points is: 559.890625 |
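A quick convergence check (my own addition, reusing the notebook's function) shows the trapezoid value approaching the quadrature result 565.25 as the number of sample points grows:

```python
import numpy as np

def f(x):
    return (x - 3) * (x - 5) * (x - 7) + 85

def trapezoid(f, a, b, N):
    # trapezoid rule on N equally spaced sample points
    x = np.linspace(a, b, N)
    y = f(x)
    return np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1])) / 2

for N in (5, 50, 500):
    print(N, trapezoid(f, 1, 8, N))  # approaches the exact value 565.25
```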
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Difference between revisions of "Huge"
Revision as of 09:22, 26 September 2018

Huge cardinals (and their variants) were introduced by Kenneth Kunen in 1972 as a very large cardinal axiom. Kenneth Kunen first used them to prove that the consistency of the existence of a huge cardinal implies the consistency of $\text{ZFC}$+"there is a $\omega_2$-saturated $\sigma$-ideal on $\omega_1$". It is now known that only a Woodin cardinal is needed for this result. [1]

Definitions
Their formulation is similar to that of superstrong cardinals; more precisely, a huge cardinal is to a supercompact cardinal as a superstrong cardinal is to a strong cardinal. The definition is part of a generalized phenomenon known as the "double helix", in which for some large cardinal properties n-$P_0$ and n-$P_1$, n-$P_0$ has less consistency strength than n-$P_1$, which has less consistency strength than (n+1)-$P_0$, and so on. This phenomenon is seen only around the n-fold variants as of modern set theoretic concerns. [2]
Although they are very large, there is a first-order definition which is equivalent to n-hugeness, so the $\theta$-th n-huge cardinal is first-order definable whenever $\theta$ is first-order definable. This definition can be seen as a (very strong) strengthening of the first-order definition of measurability.
Elementary embedding definitions

For a nontrivial elementary embedding $j:V\to M$ with critical point $\kappa$:

$\kappa$ is almost n-huge with target $\lambda$ iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length less than $\lambda$ (that is, $M^{<\lambda}\subseteq M$).
$\kappa$ is n-huge with target $\lambda$ iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length $\lambda$ ($M^\lambda\subseteq M$).
$\kappa$ is almost n-huge iff it is almost n-huge with target $\lambda$ for some $\lambda$.
$\kappa$ is n-huge iff it is n-huge with target $\lambda$ for some $\lambda$.
$\kappa$ is super almost n-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is almost n-huge with target $\lambda$ (that is, the target can be made arbitrarily large).
$\kappa$ is super n-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is n-huge with target $\lambda$.
$\kappa$ is almost huge, huge, super almost huge, and superhuge iff it is almost 1-huge, 1-huge, etc. respectively.

Ultrahuge cardinals
A cardinal $\kappa$ is $\lambda$-ultrahuge for $\lambda>\kappa$ if there exists a nontrivial elementary embedding $j:V\to M$ for some transitive class $M$ such that $j(\kappa)>\lambda$, $M^{j(\kappa)}\subseteq M$ and $V_{j(\lambda)}\subseteq M$. A cardinal is ultrahuge if it is $\lambda$-ultrahuge for all $\lambda\geq\kappa$. [1] Notice how similar this definition is to the alternative characterization of extendible cardinals. Furthermore, this definition can be extended in the obvious way to define $\lambda$-ultra n-hugeness and ultra n-hugeness, as well as the "almost" variants.

Ultrafilter definition
The first-order definition of n-huge is somewhat similar to measurability. Specifically, $\kappa$ is measurable iff there is a nonprincipal $\kappa$-complete ultrafilter, $U$, over $\kappa$. A cardinal $\kappa$ is n-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$, and cardinals $\kappa=\lambda_0<\lambda_1<\lambda_2...<\lambda_{n-1}<\lambda_n=\lambda$ such that:
$$\forall i<n(\{x\subseteq\lambda:\text{order-type}(x\cap\lambda_{i+1})=\lambda_i\}\in U)$$
Where $\text{order-type}(X)$ is the order-type of the poset $(X,\in)$. [1] $\kappa$ is then super n-huge if for all ordinals $\theta$ there is a $\lambda>\theta$ such that $\kappa$ is n-huge with target $\lambda$, i.e. $\lambda_n$ can be made arbitrarily large. If $j:V\to M$ is such that $M^{j^n(\kappa)}\subseteq M$ (i.e. $j$ witnesses n-hugeness) then there is a ultrafilter $U$ as above such that, for all $k\leq n$, $\lambda_k = j^k(\kappa)$, i.e. it is not only $\lambda=\lambda_n$ that is an iterate of $\kappa$ by $j$; all members of the $\lambda_k$ sequence are.
As an example, $\kappa$ is 1-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$ such that $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}\in U$. Intuitively, every set containing $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}$ as a subset is considered a "large set."
Consistency strength and size
Hugeness exhibits a phenomenon associated with similarly defined large cardinals (the n-fold variants) known as the double helix. This phenomenon is when for one n-fold variant, letting a cardinal be called n-$P_0$ iff it has the property, and another variant, n-$P_1$, n-$P_0$ is weaker than n-$P_1$, which is weaker than (n+1)-$P_0$. [2] In the consistency strength hierarchy, here is where these lie (top being weakest):

measurable = 0-superstrong = 0-huge
n-superstrong
n-fold supercompact
(n+1)-fold strong, n-fold extendible
(n+1)-fold Woodin, n-fold Vopěnka
(n+1)-fold Shelah
almost n-huge
super almost n-huge
n-huge
super n-huge
ultra n-huge
(n+1)-superstrong
All huge variants lie at the top of the double helix restricted to some natural number n, although each is bested by I3 cardinals (the critical points of the I3 elementary embeddings). In fact, every I3 is preceded by a stationary set of n-huge cardinals, for all n. [1]
Similarly, every huge cardinal $\kappa$ is almost huge, and there is a normal measure over $\kappa$ which contains every almost huge cardinal $\lambda<\kappa$. Every superhuge cardinal $\kappa$ is extendible and there is a normal measure over $\kappa$ which contains every extendible cardinal $\lambda<\kappa$. Every (n+1)-huge cardinal $\kappa$ has a normal measure which contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is super n-huge" [1], in fact it contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is ultra n-huge".
Every n-huge cardinal is m-huge for every m<n. Similarly with almost n-hugeness, super n-hugeness, and super almost n-hugeness. Every almost huge cardinal is Vopěnka (therefore the consistency of the existence of an almost-huge cardinal implies the consistency of Vopěnka's principle). [1] Every ultra n-huge is super n-huge and a stationary limit of super n-huge cardinals. Every super almost (n+1)-huge is ultra n-huge and a stationary limit of ultra n-huge cardinals.
In terms of size, however, the least n-huge cardinal is smaller than the least supercompact cardinal (assuming both exist). [1] This is because n-huge cardinals have upward reflection properties, while supercompacts have downward reflection properties. Thus for any $\kappa$ which is supercompact and has an n-huge cardinal above it, $\kappa$ "reflects downward" that n-huge cardinal: there are $\kappa$-many n-huge cardinals below $\kappa$. On the other hand, the least super n-huge cardinals have
both upward and downward reflection properties, and are all much larger than the least supercompact cardinal. It is notable that, while almost 2-huge cardinals have higher consistency strength than superhuge cardinals, the least almost 2-huge is much smaller than the least super almost huge.
References

[1] Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009. (Paperback reprint of the 2003 edition.)
[2] Sato, Kentaro. Double helix in large large cardinals and iteration of elementary embeddings. 2007.
Let $X=\Big(\beta\omega_1\times(\omega_2+1)\Big)\setminus\Big((\beta\omega_1\setminus \omega_1)\times\{\omega_2\}\Big)$ as a subspace of $\beta\omega_1\times(\omega_2+1)$; I claim that $X$ is star compact.
Let $\mathscr{U}$ be an open cover of $X$. For each $\xi\in\omega_1$ there are $U_\xi\in\mathscr{U}$ and $\alpha_\xi\in\omega_2$ such that $$\langle \xi,\omega_2\rangle\in \{\xi\}\times(\alpha_\xi,\omega_2]\subseteq U_\xi\;.$$ Let $\alpha=\sup_\xi\alpha_\xi<\omega_2$, and let $K=\beta\omega_1\times\{\alpha+1\}$; $K$ is compact, and $$\omega_1\times \{\omega_2\}\subseteq \operatorname{st}(K,\mathscr{U})\;,$$ since $U_\xi\subseteq \operatorname{st}(K,\mathscr{U})$ for each $\xi\in\omega_1$. The ordinal space $\omega_2$ is countably compact, so $\beta\omega_1\times\omega_2$ is countably compact and therefore star finite, and there is a finite $F\subseteq \beta\omega_1\times\omega_2$ such that $\beta\omega_1\times\omega_2\subseteq\operatorname{st}(F,\mathscr{U})$. But then $K\cup F$ is compact, and $\operatorname{st}(K\cup F,\mathscr{U})=X$, as desired.
However, $X$ is not star countable. To see this, let $$\mathscr{U}=\{\beta\omega_1\times\omega_2\}\cup\Big\{\{\xi\}\times(\omega_2+1):\xi\in\omega_1\Big\}\;;$$ $\mathscr{U}$ is certainly an open cover of $X$, but if $C$ is any countable subset of $X$, we can choose $\xi\in\omega_1$ such that $C\cap\big(\{\xi\}\times (\omega_2+1)\big)=\varnothing$, and then $\langle \xi,\omega_2\rangle\notin\operatorname{st}(C,\mathscr{U})$.
This is a modification of Example 2.1 of Yan-Kui Song, On $\mathcal{K}$-Starcompact Spaces, Bull. Malays. Math. Soc. (2) 30(1) (2007), 59-64.
For a probabilistic binary forecast, the BS (Brier score) of event $i$ is given by $$ \text{BS}_i= \begin{cases} (1-f_i)^2 & \text{if the event occurs,}\\ f_i^2 & \text{otherwise,}\\ \end{cases} $$ where $f_i$ is the forecast. If the event occurs with probability $p_i$ then the expected Brier score is $$E[\text{BS}_i] = p_i(1-f_i)^2 + (1-p_i)f_i^2,$$ which is minimized by setting $f_i = p_i$. This means that if one were to make an accurate forecast $f_i$ of the true probability, the expected Brier score reaches a minimum.
If we instead had many probabilistic forecasts, $\text{BS}=\sum \text{BS}_i$, then its expectation would be minimized when every forecast equals the true probability for the outcome.
Once the outcomes materialize, the sample mean of the Brier score is $n^{-1} \sum_i (f_i-O_i)^2$, where $O_i$ is the observed event: 1 or 0.
But the sample mean is minimized by letting $f_i$ equal the true outcome, 1 or 0, which may not be the true probability of the outcome. Something is wrong with my reasoning but I can't understand what. Could someone explain?
From the reasoning about minimizing the expected Brier score above, should I interpret the Brier score such that if I minimized the expected Brier score then I am making more accurate predictions?
**EDITED** I want to emphasize that each event has a different probability of occurring.
**EDITED** @kjetil b halvorsen
Suppose we fitted a logistic regression on millions of observations, so that we fit the model $\operatorname{logit}(f_i) = \hat{\alpha} + \hat{\beta}_1 x_i$.
What is the difference when we use a logistic regression model? What restrictions are there beyond having fewer parameters than observations?
In this setting we probably could not minimize the sample mean so that it equals zero?
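A small simulation (my own sketch) separates the two quantities being conflated: averaged over many events with true probability $p = 0.3$, forecasting $p$ gives a lower mean Brier score than "forecasting" the modal outcome 0, even though on any single event the realized score would have been minimized by the outcome itself.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3                                       # true event probability (held constant here)
O = (rng.random(100_000) < p).astype(float)   # realized outcomes, 1 or 0

def mean_brier(f):
    # average Brier score of the constant forecast f over all simulated events
    return np.mean((f - O) ** 2)

print(mean_brier(0.3), mean_brier(0.0))  # forecasting p beats forecasting the modal outcome
```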
Boundedness of composition of operators associated with different homogeneities on weighted Besov and Triebel-Lizorkin spaces
Related Articles

The boundedness of composition operators on Triebel-Lizorkin and Besov Spaces with different homogeneities. Ding, Wei // Acta Mathematica Sinica;Jun2014, Vol. 30 Issue 6, p933
In this paper, we introduce new Triebel-Lizorkin and Besov Spaces associated with the different homogeneities of two singular integral operators. We then establish the boundedness of composition of two Calderón-Zygmund singular integral operators with different homogeneities on these...
Boundedness of θ-type Calderón-Zygmund operators on Hardy spaces with non-doubling measures. Ri, Chol; Zhang, Zhenqiu // Journal of Inequalities & Applications;10/10/2015, Vol. 2015 Issue 1, p1
Let μ be a non-negative Radon measure on $R^{d}$ which only satisfies some growth condition. In this paper, we obtain the boundedness of θ-type Calderón-Zygmund operators on the Hardy space $H^{1}(\mu)$.
Boundedness of multilinear Calderón-Zygmund singular operators on Morrey-Herz spaces with variable exponents. Lu, Yan; Zhu, Yue // Acta Mathematica Sinica;Jul2014, Vol. 30 Issue 7, p1180
In this paper, we introduce Morrey-Herz spaces $M\dot K_{q,p( \cdot )}^{\alpha ( \cdot ),\lambda } (\mathbb{R}^n )$ with variable exponents α(·) and p(·), and prove the boundedness of multilinear Calderón-Zygmund singular operators on the product of these spaces.
Riesz potential in generalized Morrey spaces on the Heisenberg group. Guliyev, V.; Eroglu, A.; Mammadov, Y. // Journal of Mathematical Sciences;Mar2013, Vol. 189 Issue 3, p365
We consider the Riesz potential operator [InlineMediaObject not available: see fulltext.], on the Heisenberg group $$ {{\mathbb{H}}_n} $$ in generalized Morrey spaces $$ {M_{{p,\varphi }}}\left( {{{\mathbb{H}}_n}} \right) $$ and find conditions for the boundedness of [InlineMediaObject not...
Weighted inequalities for fractional type operators with some homogeneous kernels. Riveros, María; Urciuolo, Marta // Acta Mathematica Sinica;Mar2013, Vol. 29 Issue 3, p449
In this paper, we study integral operators of the form , where $A_i$ are certain invertible matrices, $\alpha_i > 0$, $1 \leq i \leq m$, $\alpha_1 + \cdots + \alpha_m = n - \alpha$, $0 \leq \alpha < n$. For $\tfrac{1} {q} = \tfrac{1} {p} - \tfrac{\alpha } {n}$, we obtain the $L(\mathbb{R}, w)$ − $L(\mathbb{R}$,...
Weighted norm inequalities with general weights for the commutator of Calderón. Hu, Guo; Zhu, Yue // Acta Mathematica Sinica;Mar2013, Vol. 29 Issue 3, p505
In this paper, by a sharp function estimate and an idea of Lerner, the authors establish some weighted estimates for the m-multilinear integral operator which is bounded from $L(\mathbb{R})\times\cdots\times L(\mathbb{R})$ to $L(\mathbb{R})$, and the associated kernel $K(x; y, \ldots, y)$ enjoys a regularity on the...
On Calderón-Zygmund theory for p- and $${\mathcal{A}}$$ -superharmonic functions. Phuc, Nguyen // Calculus of Variations & Partial Differential Equations;Jan2013, Vol. 46 Issue 1/2, p165
We establish global L boundedness of nonlinear singular operators arising from a class of quasilinear PDEs in divergence form for p- and $${\mathcal{A}}$$ -superharmonic functions.
Discrete Calderón-type Reproducing Formula. Han, Youngsheng // Acta Mathematica Sinica;2000, Vol. 16 Issue 2, p277
In this paper we use the Calderón-Zygmund operator theory to provide a discrete Calderón-type reproducing formula. Since translation, dilation and, in particular, the Fourier transform are never used in the proofs, all results still hold on spaces of homogeneous type introduced by Coifman...
Tb Theorem for Besov Spaces over Spaces of Homogeneous Type and their Applications. Yanchang Han // Southeast Asian Bulletin of Mathematics;2008, Vol. 32 Issue 4, p641
The author first establishes the Tb theorem for the Besov spaces by the discrete Calderón type reproducing formula and the Plancherel-Pólya characterization for the Besov spaces. As an application of the Tb theorem, new characterizations of Besov spaces with minimum regularity and...
How to treat $\epsilon$ and '\$' in top-down parser using predict table?
The construction of the predict table
Given a production $X \rightarrow w$, for row $X$ and column $t$:

- mark $X \rightarrow w$ for each $t \in FIRST(w)$;
- if $NULLABLE(w)$, then mark $X \rightarrow w$ for each $t \in FOLLOW(X)$ as well.
says to create columns for all terminal symbols. $\epsilon$ is a terminal symbol so it's naturally added as a column.
However, I've somehow interpreted that I could/should add '\$' as a terminal symbol as well. Basically because '\$' is used in the FOLLOW sets (and the FOLLOW sets don't contain $\epsilon$).
But does this create redundancy since then the table would hold basically the same predict rules for '\$' and $\epsilon$ (at least in the implementation I have here)?
The rules given here also treat '\$' and $\epsilon$ as if they were separate:
http://www.jambe.co.nz/UNI/FirstAndFollowSets.html
The FOLLOW sets basically use '\$' in place of $\epsilon$, but the predict table uses $\epsilon$, because it's a terminal symbol. |
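To make the redundancy question concrete, here is a minimal sketch of the table-construction rule quoted above, for a hypothetical toy grammar with FIRST, NULLABLE and FOLLOW filled in by hand. In the usual LL(1) convention the columns are the terminals plus the end marker '$'; $\epsilon$ gets no column of its own, because it never appears in the input stream:

```python
# Toy illustration of predict-table construction (hand-computed sets):
#   grammar:  E -> a E | ε      with FOLLOW(E) = {'$'}
# Columns are the terminals plus the end marker '$'; ε itself has no column.

EPS = 'ε'
productions = {('E', ('a', 'E')), ('E', (EPS,))}
first = {('a', 'E'): {'a'}, (EPS,): set()}    # FIRST of each right-hand side
nullable = {('a', 'E'): False, (EPS,): True}  # is the right-hand side nullable?
follow = {'E': {'$'}}                         # '$' marks end of input

table = {}
for lhs, rhs in productions:
    for t in first[rhs]:          # rule 1: column t for each t in FIRST(w)
        table[(lhs, t)] = rhs
    if nullable[rhs]:             # rule 2: w nullable -> columns from FOLLOW(lhs)
        for t in follow[lhs]:
            table[(lhs, t)] = rhs

for key in sorted(table):
    print(key, '->', table[key])
```

Under this convention there is no duplication: the $\epsilon$-production shows up in the '$' column (and in any other FOLLOW columns), and no $\epsilon$ column exists at all.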
I am interested in a proof of the following statement which seems intuitive, but is somehow really tricky:
Let $X$ be a stochastic process and let $(\mathcal{F}(t) : t \geq 0)$ be the filtration that it generates (unaugmented). Let $T$ be a bounded stopping time. Then we have $\mathcal{F}(T) = \sigma(X(T \wedge t) : t \geq 0)$
I have a proof at hand (Bain and Crisan, Fundamentals of Stochastic Filtering, page 309), but in my opinion there is a major gap. I will try to explain the idea of proof.
Let $V$ be the space of functions $[0,\infty) \rightarrow \mathbb{R}$ equipped with the sigma algebra generated by the cylinder sets. Consider the canonical map $X^T:\Omega \rightarrow V$ which maps $\omega$ to the trajectory $t \mapsto X(t \wedge T(\omega),\omega)$. Then we have $\sigma(X(T \wedge t) : t \geq 0) = \sigma(X^T)$.
The difficult part is $\subseteq$. Let $A \in \mathcal{F}(T)$. We want to find a measurable map $g:V \rightarrow \mathbb{R}$ such that $1_A = g \circ X^T$, then we're done. It is now straightforward to show that $1_A$ is constant on sets where the sample paths of $X^T$ are constant. (To be more precise: for $\rho \in \Omega$ consider the set $\mathcal{M}(\rho) = \lbrace \omega : X(\omega,t) = X(\rho,t), 0 \leq t \leq T(\rho) \rbrace$. Then $T$ and $1_A$ are constant on every set of this form).
The problem is: this is not sufficient! It does let us construct a map $g$ such that $1_A = g \circ X^T$, but how can we know that $g$ is measurable? This is where the proof of Bain and Crisan comes up short, in my opinion.
I can show this result only under the assumption that the map $X:\Omega \rightarrow V$ be surjective: Since $A \in \mathcal{F}(\infty)$, we have a measurable map $g$ such that $1_A = g \circ X$. Let $x \in V$. Then $T$ and $1_A$ are constant on the preimage of $x$ under $X$. Therefore, $g(x)$ does not depend on the values of $x$ after time $T$ (which is constant on the preimage of $x$). Since $X$ is surjective, we have $g(x) = g(K^Tx)$, where $K$ is the killing functional $K^tx(s) = x(t \wedge s)$. Hence, $g \circ X = g \circ X^T$, and we are done.
I think that this result could be a little bit deeper. I have seen two proofs of this for the special case that $X$ is the coordinate process on $C[0,\infty)$, one is given in the book of Karatzas & Shreve, Lemma 5.4.18. The fact that Karatzas proves this late in the book only in this special case somehow makes me think that the general case is not so easy.
I would really appreciate any comment or other reference for this result. |
Can every sufficiently large integer be written in the form $a^{100} + b^{101} + c^{102} + d^{103} + e^{104}$ for some non-negative integers $a$, $b$, $c$, $d$ and $e$? I'm only 15, so if you could, please write as elementarily as you can! I know that this problem can be solved with elementary methods :)
No, it is not possible. Given a large integer $N$, there are about $N^{\frac 1{100}}$ numbers below $N$ of the form $a^{100}$. There are even fewer of the forms with higher exponents. This means you can express fewer than roughly $(N^{\frac 1{100}})^5=N^{\frac 1{20}} \lt N$ numbers this way.
Let $A=\{n\in\mathbb N: \exists a,b,c,d,e\in\mathbb N: n= a^{100}+b^{101}+c^{102}+d^{103}+e^{104}\}$. For $N>0$, let $A_N=\{n<N: n\in A\}$.
If your statement were true, we'd have, amongst other things, that $\lim_{N\to\infty} |A_N|/N =1$.
Now write $n=1+\lfloor\sqrt[100]N\rfloor$. Then the number of $5$-tuples $(a,b,c,d,e)$ such that $a^{100}+b^{101}+c^{102}+d^{103}+e^{104}<N$ is at most $n^5$. Therefore, $$|A_N|\leq(1+\lfloor \sqrt[100]N\rfloor)^5$$
I think it is therefore pretty obvious that $|A_N|/N\to 0$ as $N\to\infty$.
This is a much stronger result than negating your result - it says that $|A_N|=O(\sqrt[20]N)$, which is a very sparse set. |
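The same counting argument can be checked numerically on a scaled-down analogue, using exponents 4 through 8 instead of 100 through 104 (purely illustrative; $N$ and the exponents here are arbitrary choices of mine):

```python
# Scaled-down analogue of the counting argument: with exponents 4..8,
# count how many n < N are of the form a^4 + b^5 + c^6 + d^7 + e^8.
from itertools import product

N = 200_000
ranges = [[k**e for k in range(int(N**(1 / e)) + 2) if k**e < N]
          for e in (4, 5, 6, 7, 8)]           # possible values of each term
reachable = {s for t in product(*ranges) if (s := sum(t)) < N}
print(len(reachable) / N)   # a small fraction: the representable set is sparse
```

Even before deduplicating collisions, the number of tuples is bounded by the product of the five range sizes, which is already far below $N$, exactly as in the argument above.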
Let $\Omega\subset \mathbb R^n$ be a convex subset. All the objects below will be defined on this set.
Let us assume $P(x,D)$ to be a differential operator of order $m$ and of square size, that is, having as many equations as unknowns, with variable smooth coefficients.
Let moreover $Q(D)$ be another differential operator of order $m$ and of square size (the same as $P$), but this time with constant coefficients.
We also assume that
1. $P$ is of constant strength in the sense of Hörmander; that is, for any $x,y\in \Omega$ there exists a constant $C_{xy}$ such that $$|P(x,\xi)|/|P(y,\xi)|\leq C_{xy}$$ for any $\xi\in\mathbb R^n$.
2. The operators $P$ and $Q$ are of equal strength; that is, for any $x\in\Omega$ (by the previous point it no longer matters which $x$), there exists a constant $C$ such that $$\frac{1}{C}\leq\frac{|P(x,\xi)|}{|Q(\xi)|}\leq C,\quad\forall \xi\in\mathbb R^n.$$
I want to solve now the equation $P(x,D)u=f$, for $f=(f_1,\dotso,f_k)\in (C^\infty_c(\Omega))^k$, and of course deduce something on the regularity of the solution $u$.
If I understand well, the heuristic should be as follows:
1. The equation $Q(D)u=f$ admits a smooth solution $u$ (Ehrenpreis' fundamental principle). This should be the hard part to prove, but I take it as a black box.
2. Since $P$ and $Q$ are of equal strength, by the previous hypotheses, the solution $u$ to $P(x,D)u=f$ has to be "as regular as" the solution $v$ to $Q(D)v=f$. In particular I expect $u$ to exist and to be smooth; well, even compactly supported, since so is $f$. This is in fact the guiding principle of Hörmander when he introduces this concept of strength as a preorder relation between operators in Linear Partial Differential Operators, Chap. 3.
Is my heuristic correct? How can one be rigorous in proving this? Thank you very much in advance |
NTS Abstracts, Spring 2019
Jan 23
Yunqing Tang: Reductions of abelian surfaces over global function fields. Abstract: For a non-isotrivial ordinary abelian surface $A$ over a global function field, under mild assumptions, we prove that there are infinitely many places modulo which $A$ is geometrically isogenous to the product of two elliptic curves. This result can be viewed as a generalization of a theorem of Chai and Oort. This is joint work with Davesh Maulik and Ananth Shankar.

Jan 24
Hassan-Mao-Smith-Zhu: The diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$. Abstract: Assume a polynomial-time algorithm for factoring integers, Conjecture~\ref{conj}, $d\geq 3,$ and $q$ and $p$ prime numbers, where $p\leq q^A$ for some $A>0$. We develop a polynomial-time algorithm in $\log(q)$ that lifts every $\mathbb{Z}/q\mathbb{Z}$ point of $S^{d-2}\subset S^{d}$ to a $\mathbb{Z}[1/p]$ point of $S^d$ with the minimum height. We implement our algorithm for $d=3 \text{ and }4$. Based on our numerical results, we formulate a conjecture which can be checked in polynomial-time and gives the optimal bound on the diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$.

Jan 31
Kyle Pratt: Breaking the $\frac{1}{2}$-barrier for the twisted second moment of Dirichlet $L$-functions. Abstract: I will discuss recent work, joint with Bui, Robles, and Zaharescu, on a moment problem for Dirichlet $L$-functions. By way of motivation I will spend some time discussing the Lindelöf Hypothesis, and work of Bettin, Chandee, and Radziwiłł. The talk will be accessible, as I will give lots of background information and will not dwell on technicalities.

Feb 7
Shamgar Gurevich: Harmonic Analysis on $GL_n$ over finite fields. Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters.
For evaluating or estimating these sums, one of the most salient quantities to understand is the {\it character ratio}: $$trace (\rho(g))/dim (\rho),$$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant {\it rank}. This talk will discuss the notion of rank for GLn over finite fields, and apply the results to random walks. This is joint work with Roger Howe (TAMU).
Feb 14
Tonghai Yang: The Lambda invariant and its CM values. Abstract: The Lambda invariant, which parametrizes elliptic curves with two-torsion ($X_0(2)$), has some interesting properties, some similar to those of the $j$-invariant, and some not. For example, $\lambda(\frac{d+\sqrt d}2)$ is sometimes a unit. In this talk, I will briefly describe some of the properties. This is joint work with Hongbo Yin and Peng Yu.

Feb 28
Brian Lawrence: Diophantine problems and a p-adic period map. Abstract: I will outline a proof of Mordell's conjecture / Faltings's theorem using p-adic Hodge theory. Joint with Akshay Venkatesh.

March 7
Masoud Zargar: Sections of quadrics over the affine line. Abstract: Suppose we have a quadratic form $Q(x)$ in $d\geq 4$ variables over $F_q[t]$ and $f(t)$ is a polynomial over $F_q$. We consider the affine variety $X$ given by the equation $Q(x)=f(t)$ as a family of varieties over the affine line $A^1_{F_q}$. Given finitely many closed points in distinct fibers of this family, we ask when there exists a section passing through these points. We study this problem using the circle method over $F_q((1/t))$. Time permitting, I will mention connections to Lubotzky-Phillips-Sarnak (LPS) Ramanujan graphs. Joint with Naser T. Sardari.

March 14
Elena Mantovan: p-adic automorphic forms, differential operators and Galois representations. Abstract: A strategy pioneered by Serre and Katz in the 1970s yields a construction of p-adic families of modular forms via the study of Serre's weight-raising differential operator Theta. This construction is a key ingredient in Deligne-Serre's theorem associating Galois representations to modular forms of weight 1, and in the study of the weight part of Serre's conjecture. In this talk I will discuss recent progress towards generalizing this theory to automorphic forms on unitary and symplectic Shimura varieties. In particular, I will introduce certain p-adic analogues of Maass-Shimura weight-raising differential operators, and discuss their action on p-adic automorphic forms, and on the associated mod p Galois representations. In contrast with Serre's classical approach where q-expansions play a prominent role, our approach is geometric in nature and is inspired by earlier work of Katz and Gross.
This talk is based joint work with Eishen, and also with Fintzen--Varma, and with Flander--Ghitza--McAndrew.
March 28
Adebisi Agboola Relative K-groups and rings of integers Abstract: Suppose that F is a number field and G is a finite group. I shall discuss a conjecture in relative algebraic K-theory (in essence, a conjectural Hasse principle applied to certain relative algebraic K-groups) that implies an affirmative answer to both the inverse Galois problem for F and G and to an analogous problem concerning the Galois module structure of rings of integers in tame extensions of F. It also implies the weak Malle conjecture on counting tame G-extensions of F according to discriminant. The K-theoretic conjecture can be proved in many cases (subject to mild technical conditions), e.g. when G is of odd order, giving a partial analogue of a classical theorem of Shafarevich in this setting. While this approach does not, as yet, resolve any new cases of the inverse Galois problem, it does yield substantial new results concerning both the Galois module structure of rings of integers and the weak Malle conjecture. |
Linear models describe a continuous response variable as a function of one or more predictor variables. They can help you understand and predict the behavior of complex systems or analyze experimental, financial, and biological data.
Linear regression is a statistical method used to create a linear model. The model describes the relationship between a dependent variable \(y\) (also called the response) as a function of one or more independent variables \(X_i\) (called the predictors). The general equation for a linear model is:
\[y = \beta_0 + \sum \ \beta_i X_i + \epsilon_i\]
where \(\beta\) represents linear parameter estimates to be computed and \(\epsilon\) represents the error terms.
There are several types of linear regression:
- Simple linear regression: models using only one predictor
- Multiple linear regression: models using multiple predictors
- Multivariate linear regression: models for multiple response variables

Once you have fit a model, you can:

- Generate predictions
- Compare linear model fits
- Plot residuals
- Evaluate goodness-of-fit
- Detect outliers
To create a linear model that fits curves and surfaces to your data, see Curve Fitting Toolbox. To create linear models of dynamic systems from measured input-output data, see System Identification Toolbox. To create a linear model for control system design from a nonlinear Simulink model, see Simulink Control Design. |
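The general equation above amounts to an ordinary least-squares problem. A short sketch in Python/NumPy rather than MATLAB (the data and the true coefficients are made up for illustration):

```python
# Ordinary least-squares fit of y = b0 + b1*x1 + b2*x2 + noise (NumPy sketch).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))             # two predictors, 100 observations
beta_true = np.array([1.5, -2.0, 0.5])    # [b0, b1, b2], chosen for the demo
y = beta_true[0] + X @ beta_true[1:] + 0.01 * rng.normal(size=100)

A = np.column_stack([np.ones(len(X)), X])        # design matrix with intercept
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(beta_hat)    # estimates close to [1.5, -2.0, 0.5]
```

The column of ones in the design matrix plays the role of the intercept $\beta_0$ in the equation above; the remaining columns are the predictors $X_i$.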
For part $(1)$: since $\frac{1}{\phi^n\sqrt 5}\lt \frac{1}{2}$ (the neglected term of Binet's formula), $$F_n=\left\lfloor \frac{\phi^n}{\sqrt 5}+\frac{1}{2}\right\rfloor,$$ where $\lfloor \cdot \rfloor $ represents the greatest integer function.
For part $(3)$, For even $n$,$$F_n=\frac{\phi^n-\frac{1}{\phi^n}}{\sqrt 5}=\frac{\phi^{2n}-1}{\phi^n\sqrt 5}$$ $$\implies (\phi^n)^2-(\sqrt 5 F_n) (\phi^n)-1=0$$.
This is a quadratic equation in $\phi^n$; solving it we get,
$$\phi^n=\frac{\sqrt 5F_n+\sqrt{5F_n^2+4}}{2}$$ (negative root discarded) $$\implies n=\log_\phi \frac{\sqrt 5F_n+\sqrt{5F_n^2+4}}{2}=\frac{\ln (\frac{\sqrt 5F_n+\sqrt{5F_n^2+4}}{2})}{\ln \phi}$$
As you can see, for large $F_n$ we have $\sqrt{5F_n^2+4}\approx\sqrt 5 F_n$, which gives $$n\approx\frac{\ln (\sqrt 5\,F_n)}{\ln \phi}$$ for large $F_n$.
For odd $n$, similar computations follow which give
$$n=\frac{\ln (\frac{\sqrt 5F_n+\sqrt{5F_n^2-4}}{2})}{\ln \phi}$$
But since you don't know whether $n$ is even or odd, my suggestion is to compute both expressions (the even one as well as the odd one) and check which $n$ fits best. For large $F_n$ the two expressions yield the same $n$, so there is no need to worry in that case.
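This suggestion can be sketched in a few lines of Python (the function name and the integrality tolerance are my own choices):

```python
# Recover the index n from F_n using both the even-n and odd-n formulas
# derived above, keeping whichever comes out (numerically) an integer.
import math

phi = (1 + math.sqrt(5)) / 2

def index_of_fib(F):
    s = math.sqrt(5) * F
    n_even = math.log((s + math.sqrt(5 * F * F + 4)) / 2, phi)
    n_odd  = math.log((s + math.sqrt(5 * F * F - 4)) / 2, phi)
    for n in (n_even, n_odd):          # one of the two fits an integer
        if abs(n - round(n)) < 1e-6:
            return round(n)

print(index_of_fib(55))     # 10 (even index)
print(index_of_fib(13))     # 7  (odd index)
```

For small $F_n$ only the correct-parity formula passes the integrality check; for large $F_n$ both converge to the same value, matching the remark above.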
Since $f$ is continuous on $(a,b)$, bounded and increasing, there's a unique continuous extension of $f$ to $[a,b]$. This works because both limits $f(b) := \lim_{x\to b^-} f(x)$ and $f(a) := \lim_{x\to a^+} f(x)$ are guaranteed to exist, since every bounded and increasing (respectively bounded and decreasing) sequence converges. To prove this, simply observe that for an increasing and bounded sequence, all $x_m$ with $m > n$ have to lie within $[x_n,M]$ where $M=\sup_n x_n$ is the least upper bound. Add to that the fact that by the very definition of $\sup$, there are $x_n$ arbitrarily close to $M$.
You can then use the fact that continuity on a compact set implies uniform continuity, and you're done. This theorem, btw, isn't hard to prove either (and the proof shows how powerful the compactness property can be). The proof goes like this:
First, recall that if $f$ is continuous then the preimage of an open set, and in particular of an open interval, is open. Thus, for $x \in [a,b]$ all the sets $$ C_x := f^{-1}\left(\left(f(x)-\frac{\epsilon}{2},f(x)+\frac{\epsilon}{2}\right)\right) $$are open. The crucial property of these $C_x$ is that for all $y \in C_x$ you have $|f(y)-f(x)| < \frac{\epsilon}{2}$ and thus $$ |f(u) - f(v)| = |(f(u) - f(x)) - (f(v)-f(x))| \leq \underbrace{|f(u)-f(x)|}_{<\frac{\epsilon}{2}} + \underbrace{|f(v)-f(x)|}_{<\frac{\epsilon}{2}} < \epsilon \text{ for all } u,v \in C_x$$Now recall that an open set contains an open interval around each of its points. Each $C_x$ thus contains an open interval around $x$, and you may wlog assume that it is symmetric around $x$ (just make it smaller if it isn't). Thus, there are $$ \delta_x > 0 \textrm{ such that } B_x := (x-\frac{\delta_x}{2},x+\frac{\delta_x}{2}) \subset (x-\delta_x,x+\delta_x) \subset C_x$$Note how we made $B_x$ artificially smaller than seems necessary; this will simplify the last stage of the proof. Since $B_x$ contains $x$, the $B_x$ form an open cover of $[a,b]$, i.e. $$ \bigcup_{x\in[a,b]} B_x \supset [a,b] \text{.}$$Now we invoke compactness. Behold! Since $[a,b]$ is compact,
every covering with open sets contains a finite subcovering. We can thus pick finitely many $x_i \in [a,b]$ such that we still have $$ \bigcup_{1\leq i \leq n} B_{x_i} \supset [a,b] \text{.}$$We're nearly there; all that remains are a few applications of the triangle inequality. Since we're only dealing with finitely many $x_i$ now, we can find the minimum of all their $\delta_{x_i}$. Like in the definition of the $B_x$, we leave ourselves a bit of space to maneuver later, and actually set $$ \delta := \min_{1\leq i \leq n} \frac{\delta_{x_i}}{2} \text{.}$$
Now pick arbitrary $u,v \in [a,b]$ with $|u-v| < \delta$. Since our $B_{x_1},\ldots,B_{x_n}$ form a cover of $[a,b]$, there's an $i \in \{1,\ldots,n\}$ with $u \in B_{x_i}$, and thus $|u-x_i| < \frac{\delta_{x_i}}{2}$. Having been conservative in the definitions of $B_x$ and $\delta$ pays off, because we get $$ |v-x_i| = |v-((x_i-u)+u)| = |(v-u)-(x_i-u)| < \underbrace{|u-v|}_{<\delta\leq\frac{\delta_{x_i}}{2}} + \underbrace{|x_i-u|}_{<\frac{\delta_{x_i}}{2}} < \delta_{x_i} \text{.}$$This doesn't imply $v \in B_{x_i}$ (the distance would have to be below $\frac{\delta_{x_i}}{2}$ for that), but it
does imply $v \in C_{x_i}$! We thus have $u \in B_{x_i} \subset C_{x_i}$ and $v \in C_{x_i}$, and by the definition of $C_x$ (see the remark about the crucial property of $C_x$ above) thus $$ |f(u)-f(v)| < \epsilon \text{.}$$
Soft QCD and Central Exclusive Production at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The LHCb detector, owing to its unique acceptance coverage $(2 < \eta < 5)$ and a precise track and vertex reconstruction, is a universal tool allowing the study of various aspects of electroweak and QCD processes, such as particle correlations or Central Exclusive Production. The recent results on the measurement of the inelastic cross section at $ \sqrt s = 13 \ \rm{TeV}$ as well as the Bose-Einstein correlations of same-sign pions and kinematic correlations for pairs of beauty hadrons performed using large samples of proton-proton collision data accumulated with the LHCb detector at $\sqrt s = 7\ \rm{and} \ 8 \ \rm{TeV}$, are summarized in the present proceedings, together with the studies of Central Exclusive Production at $ \sqrt s = 13 \ \rm{TeV}$ exploiting new forward shower counters installed upstream and downstream of the LHCb detector. [...] LHCb-PROC-2019-008; CERN-LHCb-PROC-2019-008.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019
LHCb Upgrades / Steinkamp, Olaf (Universitaet Zuerich (CH)) During the LHC long shutdown 2, in 2019/2020, the LHCb collaboration is going to perform a major upgrade of the experiment. The upgraded detector is designed to operate at a five times higher instantaneous luminosity than in Run II and can be read out at the full bunch-crossing frequency of the LHC, abolishing the need for a hardware trigger [...] LHCb-PROC-2019-007; CERN-LHCb-PROC-2019-007.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
Tests of Lepton Flavour Universality at LHCb / Mueller, Katharina (Universitaet Zuerich (CH)) In the Standard Model of particle physics the three charged leptons are identical copies of each other, apart from mass differences, and the electroweak coupling of the gauge bosons to leptons is independent of the lepton flavour. This prediction is called lepton flavour universality (LFU) and is well tested. [...] LHCb-PROC-2019-006; CERN-LHCb-PROC-2019-006.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
XYZ states at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The latest years have observed a resurrection of interest in searches for exotic states motivated by precision spectroscopy studies of beauty and charm hadrons providing the observation of several exotic states. The latest results on spectroscopy of exotic hadrons are reviewed, using the proton-proton collision data collected by the LHCb experiment. [...] LHCb-PROC-2019-004; CERN-LHCb-PROC-2019-004.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : 15th International Workshop on Meson Physics, Kraków, Poland, 7 - 12 Jun 2018
Mixing and indirect $CP$ violation in two-body Charm decays at LHCb / Pajero, Tommaso (Universita & INFN Pisa (IT)) The copious number of $D^0$ decays collected by the LHCb experiment during 2011--2016 allows the test of the violation of the $CP$ symmetry in the decay of charm quarks with unprecedented precision, approaching for the first time the expectations of the Standard Model. We present the latest measurements of LHCb of mixing and indirect $CP$ violation in the decay of $D^0$ mesons into two charged hadrons [...] LHCb-PROC-2019-003; CERN-LHCb-PROC-2019-003.- Geneva : CERN, 2019 - 10. Fulltext: PDF; In : 10th International Workshop on the CKM Unitarity Triangle, Heidelberg, Germany, 17 - 21 Sep 2018
Experimental status of LNU in B decays in LHCb / Benson, Sean (Nikhef National institute for subatomic physics (NL)) In the Standard Model, the three charged leptons are identical copies of each other, apart from mass differences. Experimental tests of this feature in semileptonic decays of b-hadrons are highly sensitive to New Physics particles which preferentially couple to the 2nd and 3rd generations of leptons. [...] LHCb-PROC-2019-002; CERN-LHCb-PROC-2019-002.- Geneva : CERN, 2019 - 7. Fulltext: PDF; In : The 15th International Workshop on Tau Lepton Physics, Amsterdam, Netherlands, 24 - 28 Sep 2018
Simultaneous usage of the LHCb HLT farm for Online and Offline processing workflows LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of “online” and “offline” analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process such that Online data are immediately available offline for physics analysis (Turbo analysis), the computing capacity of the HLT farm has been used simultaneously for different workflows : synchronous first level trigger, asynchronous second level trigger, and Monte-Carlo simulation. [...] LHCb-PROC-2018-031; CERN-LHCb-PROC-2018-031.- Geneva : CERN, 2018 - 7. Fulltext: PDF; In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018
The Timepix3 Telescope and Sensor R&D for the LHCb VELO Upgrade / Dall'Occo, Elena (Nikhef National institute for subatomic physics (NL)) The VErtex LOcator (VELO) of the LHCb detector is going to be replaced in the context of a major upgrade of the experiment planned for 2019-2020. The upgraded VELO is a silicon pixel detector, designed to withstand a radiation dose up to $8 \times 10^{15}\ 1\,\text{MeV}\ n_{\text{eq}}\,\text{cm}^{-2}$, with the additional challenge of a highly non-uniform radiation exposure. [...] LHCb-PROC-2018-030; CERN-LHCb-PROC-2018-030.- Geneva : CERN, 2018 - 8.
I've been given an audio signal with some high frequency noise. The professor gave me the system function of a lowpass filter that gives me control over the cutoff frequency:
$$H(z)=\frac{1-\alpha}{2}\frac{1+z^{-1}}{1-\alpha z^{-1}}$$
where $\alpha=\frac{1-\sin \omega_c}{\cos \omega_c}$ and $\omega_c$ is the cutoff frequency in radians/sec.
The professor claims that if the filter is cascaded with itself $L$ times, it becomes a filter of order $L$.
My questions are the following:
1. What does increasing the order of the filter do? Why would I want to increase the order of the filter?
2. If I cascade the filter with itself $L$ times, won't I lose the ability to control the cutoff frequency? How could I correct for this?
edit:
Here's what I get for the frequency response of the filter when the cutoff frequency is set to 1000 Hz. Sampling frequency is 16000 Hz. |
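One quick way to see what cascading does near the cutoff is to evaluate $|H(e^{j\omega})|^L$ directly. A minimal Python sketch (plain standard library, no SciPy), assuming the transfer function above with $f_c = 1000$ Hz and $f_s = 16000$ Hz as in the plot:

```python
# Magnitude response of the given one-pole lowpass, cascaded L times.
# fc = 1000 Hz and fs = 16000 Hz are taken from the plot described above;
# wc = 2*pi*fc/fs is the cutoff in radians/sample.
import cmath, math

fs, fc = 16000.0, 1000.0
wc = 2 * math.pi * fc / fs
alpha = (1 - math.sin(wc)) / math.cos(wc)

def H(w):
    """Frequency response of one filter section at angular frequency w."""
    z1 = cmath.exp(-1j * w)                    # z^-1 evaluated on the unit circle
    return (1 - alpha) / 2 * (1 + z1) / (1 - alpha * z1)

# One section is -3 dB at wc, so a cascade of L sections is down about
# L*3 dB there: the -3 dB point of the cascade slides below the original wc.
for L in (1, 2, 4):
    print(L, 20 * math.log10(abs(H(wc)) ** L))
```

If the overall cascade must keep its $-3$ dB point at the original cutoff, one common correction (my suggestion, not something stated in the question) is to pre-warp each section's design frequency, i.e. solve for a per-section $\omega_c'$ such that $|H(e^{j\omega_c})|^L = 1/\sqrt{2}$.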
I've been trying to learn how to integrate by differentiation under the integral. I've made good progress on some problems, but I seem to not be able to get an answer for $$f(\alpha)=\int_0^\pi \ln(1+\alpha \cos(x)) \,\mathrm{d}x$$
I've managed to get as far as $$f'(\alpha)=\int_0^\pi \frac{\cos(x)}{1+ \alpha \cos(x)} \,\mathrm{d}x$$
But this seems like a ridiculous integral to try to evaluate by elementary methods; indeed, an integral calculator returns (writing $a$ for $\alpha$) $$\dfrac{x}{a}+\dfrac{\ln\left(\left|\left(a-1\right)\tan\left(\frac{x}{2}\right)-\sqrt{a^2-1}\right|\right)-\ln\left(\left|\left(a-1\right)\tan\left(\frac{x}{2}\right)+\sqrt{a^2-1}\right|\right)}{a\sqrt{a^2-1}}$$
Hopefully someone can advise on whether I've already made a mistake in my working, or whether I've just completely misunderstood the method. |
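One sanity check on the working above: the standard closed form $\int_0^\pi \frac{dx}{1+\alpha\cos x} = \frac{\pi}{\sqrt{1-\alpha^2}}$ (valid for $|\alpha|<1$), combined with the split $\frac{\cos x}{1+\alpha\cos x} = \frac{1}{\alpha}\bigl(1-\frac{1}{1+\alpha\cos x}\bigr)$, suggests $f'(\alpha)=\frac{\pi}{\alpha}\bigl(1-\frac{1}{\sqrt{1-\alpha^2}}\bigr)$, which a quick numerical integration can confirm (a sketch, assuming $|\alpha|<1$):

```python
# Numerically check f'(a) = (pi/a) * (1 - 1/sqrt(1 - a^2)) for |a| < 1.
import math

def fprime_numeric(a, n=100_000):
    """Midpoint-rule approximation of the differentiated integral on [0, pi]."""
    h = math.pi / n
    total = 0.0
    for i in range(n):
        c = math.cos((i + 0.5) * h)
        total += c / (1 + a * c)
    return total * h

def fprime_closed(a):
    # From cos x/(1+a cos x) = (1/a)(1 - 1/(1+a cos x)) and the standard
    # integral of 1/(1+a cos x) over [0, pi].
    return (math.pi / a) * (1 - 1 / math.sqrt(1 - a * a))

for a in (0.3, 0.5, 0.9):
    print(a, fprime_numeric(a), fprime_closed(a))   # the pairs agree
```

If the two columns agree, the differentiation step itself is fine and only the antiderivative hunt remains.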
As some people on this site might be aware I don't always take downvotes well. So here's my attempt to provide more context to my answer for whoever decided to downvote.
Note that I will confine my discussion to functions $f: D\subseteq \Bbb R \to \Bbb R$ and to ideas that should be simple enough for anyone who's taken a course in scalar calculus to understand. Let me know if I haven't succeeded in some way.
First, it'll be convenient for us to define a new notation. It's called "little oh" notation.
Definition: A function $f$ is called little oh of $g$ as $x\to a$, denoted $f\in o(g)$ as $x\to a$, if
$$\lim_{x\to a}\frac {f(x)}{g(x)}=0$$
Intuitively this means that $f$ becomes negligible compared to $g$ as $x\to a$.
Here are some examples:
- $x\in o(1)$ as $x\to 0$
- $x^2 \in o(x)$ as $x\to 0$
- $x\in o(x^2)$ as $x\to \infty$
- $x-\sin(x)\in o(x)$ as $x\to 0$
- $x-\sin(x)\in o(x^2)$ as $x\to 0$
- $x-\sin(x)\not\in o(x^3)$ as $x\to 0$
Now what is an affine approximation? (Note: I prefer to call it affine rather than linear -- if you've taken linear algebra then you'll know why.) It is simply a function $T(x) = A + Bx$ that
approximates the function in question.
Intuitively it should be clear which affine function should best approximate the function $f$ very near $a$. It should be $$L(x) = f(a) + f'(a)(x-a).$$ Why? Well consider that any affine function really only carries two pieces of information: slope and some point on the line. The function $L$ as I've defined it has the properties $L(a)=f(a)$ and $L'(a)=f'(a)$. Thus $L$ is the unique line which passes through the point $(a,f(a))$ and has the slope $f'(a)$.
But we can be a little more rigorous. Below I give a lemma and a theorem that tell us that $L(x) = f(a) + f'(a)(x-a)$ is the
best affine approximation of the function $f$ at $a$.
Lemma: If a differentiable function $f$ can be written, for all $x$ in some neighborhood of $a$, as $$f(x) = A + B\cdot(x-a) + R(x-a)$$ where $A, B$ are constants and $R\in o(x-a)$, then $A=f(a)$ and $B=f'(a)$.
Proof: First notice that because $f$, $A$, and $B\cdot(x-a)$ are continuous at $x=a$, $R$ must be too. Then setting $x=a$ we immediately see that $f(a)=A$.
Then, rearranging the equation we get (for all $x\ne a$)
$$\frac{f(x)-f(a)}{x-a} = \frac{f(x)-A}{x-a} = \frac{B\cdot (x-a)+R(x-a)}{x-a} = B + \frac{R(x-a)}{x-a}$$
Then taking the limit as $x\to a$ we see that $B=f'(a)$. $\ \ \ \square$
Theorem: A function $f$ is differentiable at $a$ iff, for all $x$ in some neighborhood of $a$, $f(x)$ can be written as $$f(x) = f(a) + B\cdot(x-a) + R(x-a)$$ where $B \in \Bbb R$ and $R\in o(x-a)$.
Proof: "$\implies$": If $f$ is differentiable then $f'(a) = \lim_{x\to a} \frac{f(x)-f(a)}{x-a}$ exists. This can alternatively be written $$f'(a) = \frac{f(x)-f(a)}{x-a} + r(x-a)$$ where the "remainder function" $r$ has the property $\lim_{x \to a} r(x-a)=0$. Rearranging this equation we get $$f(x) = f(a) + f'(a)(x-a) -r(x-a)(x-a).$$ Let $R(x-a):= -r(x-a)(x-a)$. Then clearly $R\in o(x-a)$ (confirm this for yourself). So $$f(x) = f(a) + f'(a)(x-a) + R(x-a)$$ as required.
"$\impliedby$": Simple rearrangement of this equation yields
$$B + \frac{R(x-a)}{x-a}= \frac{f(x)-f(a)}{x-a}.$$ The limit as $x\to a$ of the LHS exists and thus the limit also exists for the RHS. This implies $f$ is differentiable by the standard definition of differentiability. $\ \ \ \square$
Taken together, the above lemma and theorem tell us not only that $L(x) = f(a) + f'(a)(x-a)$ is the only affine function whose remainder tends to $0$ as $x\to a$
faster than $x-a$ itself (this is the sense in which this approximation is the best), but also that we can even define the concept of differentiability by the existence of this best affine approximation.
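A quick numerical illustration of the lemma/theorem pair (the function and the point are arbitrary choices of mine): the tangent-line remainder is $o(x-a)$, while the remainder of any other slope is not.

```python
# Numeric illustration: for f(x) = exp(x) at a = 0, the remainder of the
# tangent line L(x) = f(a) + f'(a)(x-a) is o(x - a), while a wrong slope
# leaves a remainder that is NOT o(x - a).
import math

a = 0.0
f = math.exp                 # so f(a) = 1 and f'(a) = 1

for h in (0.1, 0.01, 0.001):
    R_tangent = f(a + h) - (1 + 1 * h)      # slope B = f'(a) = 1
    R_other   = f(a + h) - (1 + 2 * h)      # wrong slope B = 2
    print(h, R_tangent / h, R_other / h)    # first ratio -> 0, second -> -1
```

The first ratio shrinks like $h/2$ (so the remainder is $o(h)$), while the second stays bounded away from $0$, exactly as the lemma predicts.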
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV
(American Physical Society, 2017-06)
The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...
Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...
Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC
(Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC
(Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
$$f\left( x \right) = \left( a^{2}-3a+2 \right)\left( \cos^{2}{\frac{x}{4}} - \sin^{2}{\frac{x}{4}} \right) + \left( a-1 \right)x + \sin{1}$$
The set of all values of $a$ for which the function above does not possess a critical point is __________.
Note by Akhilesh Prasad 3 years, 8 months ago
@Rishabh Cool. Would you spare some of your time to please post a solution of this.
@Rishabh Cool, I have got an issue in this one too, so if you will, please see this one too
What's the answer ?? I'm getting $a \in (0,4) - \{1\}$ ........... I might have missed cases because I'm an expert in doing that.. ;-)
The answer given is $(1,\infty)$.
How you solved ?? I found f'(x) and ensured that it does not vanish!! And ultimately got the wrong answer!!
@Rishabh Jain – It's too time consuming writing the whole answer so i wanted to attach the scan of the hand written answer, but i dunno how to do it, if you could tell me how to do it.
@Rishabh Jain – Finally, found it My solution
@Akhilesh Prasad – Should at the end cases be like: (1) Right Max value<0 (2) Left Min value>0
@Akhilesh Prasad – And shouldn't where you multiplied inequality by (a-2) cases must be made for a-2>0 and a-2<0??
@Rishabh Jain – Considered the cases you told me to; still I am not getting the desired answer. Corrected solution part 1
Corrected solution part 2
@Akhilesh Prasad – So we both are getting the same answer that is not correct!! ... ??
@Rishabh Jain – On a side note, I would wanna ask you: are you gonna give JEE this year?
@Akhilesh Prasad – absolutely Yes!!
@Rishabh Jain – Hey were u able to open the file i uploaded. coz it was on google drive
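For what it's worth, the question can be checked numerically (my own check, not from the thread). Since $\cos^2\frac{x}{4}-\sin^2\frac{x}{4}=\cos\frac{x}{2}$, we get $f'(x) = -\frac{a^2-3a+2}{2}\sin\frac{x}{2} + (a-1)$, whose range is the interval $[(a-1)-|c|,\ (a-1)+|c|]$ with $c=\frac{a^2-3a+2}{2}$; a critical point exists exactly when $0$ lies in that range.

```python
def has_critical_point(a):
    """f'(x) = -c*sin(x/2) + (a-1) with c = (a^2-3a+2)/2 ranges over
    [(a-1)-|c|, (a-1)+|c|]; f has a critical point iff 0 lies in that range."""
    c = (a * a - 3.0 * a + 2.0) / 2.0
    return (a - 1.0) - abs(c) <= 0.0 <= (a - 1.0) + abs(c)

# Sweep a in steps of 0.1 and collect the values with NO critical point
no_critical = [a / 10.0 for a in range(-50, 80) if not has_critical_point(a / 10.0)]
print(min(no_critical), max(no_critical))
```

The sweep reports no critical point exactly for $a$ strictly between $0$ and $4$ with $a\neq 1$, which matches the answer reached in the discussion rather than the stated $(1,\infty)$.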
Vxxxx n+ n- [[DC] dcvalue] [DCOP] [INFCAP] [AC magnitude [phase]] [transient_spec]
N+
Positive node
N-
Negative node
DCOP
If this is specified, the voltage source will only be active during the DC operating point solution. In other analyses, it will behave like an open circuit. This is an effective method of creating a 'hard' initial condition. See Alternative Initial Condition Implementations for an example.
INFCAP
If specified, the voltage source will behave as an infinite capacitor. During the DC operating point solution it will behave like an open circuit. In the subsequent analysis, it will behave like a voltage source with a value equal to the solution found during the operating point. Note that the device is inactive for DC sweeps, as all capacitors are.
dcvalue
Value of source for dc operating point analysis
magnitude
AC magnitude for AC sweep analysis.
phase
phase for AC sweep analysis
transient_spec
Specification for a time-varying source, as described in the following tables.

PULSE ( v1 v2 [td [tr [tf [pw [per ]]]]] )
Where:

v1: Initial value (V,A). Compulsory.
v2: Pulsed value (V,A). Compulsory.
td: Delay time (S). Default if omitted = 0.
tr: Rise time (S). Default if omitted, negative or zero = Time step (a).
tf: Fall time (S). Default if omitted, negative or zero = Time step (a).
pw: Pulse width (S). Default if omitted or negative = Stop time (b).
per: Period (S). Default if omitted, negative or zero = Stop time (b).

a. Time step is set up by the .TRAN simulator statement which defines a transient analysis. Refer to .TRAN.
b. Stop time refers to the end time of the transient analysis.
SIMetrix deviates from standard SPICE in the action taken for a pulse width of zero. Standard SPICE treats a zero pulse width as if it had been omitted and changes it to the stop time. In SIMetrix a zero pulse width means just that.
Both the above examples give a pulse lasting 5μS with a period of 10μS, rise and fall times of 100nS and a delay of 0. The voltage source has a 0V base line and a pulse of 5V while the current source has a 0mA base line and a pulse of 1mA.
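As an illustration (my own sketch, not part of the SIMetrix documentation), the PULSE shape can be evaluated in Python; parameter names follow the table above and the default handling is simplified:

```python
def pulse(t, v1, v2, td=0.0, tr=1e-9, tf=1e-9, pw=1.0, per=2.0):
    """Evaluate a SPICE-style PULSE source at time t: v1 until td, then a
    repeating cycle of rise (tr), top (pw), fall (tf), base line for the
    rest of the period per. Defaults here are simplified placeholders."""
    if t < td:
        return v1
    tp = (t - td) % per              # position inside the current period
    if tp < tr:                      # rising edge
        return v1 + (v2 - v1) * tp / tr
    if tp < tr + pw:                 # pulse top
        return v2
    if tp < tr + pw + tf:            # falling edge
        return v2 + (v1 - v2) * (tp - tr - pw) / tf
    return v1                        # base line until the period ends

# The 0V -> 5V example above: 100nS edges, 5uS width, 10uS period
print(pulse(2e-6, 0.0, 5.0, tr=100e-9, tf=100e-9, pw=5e-6, per=10e-6))
```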
PWL ( t1 v1 [t2 v2 [t3 v3 [... ]]] )
Each pair of values (ti, vi) specifies that the value of the source is vi at time = ti. The value of the source at intermediate values of time is determined by using linear interpolation on the input values.
Although the example given below is for a voltage source, the PWL stimulus may be used for current sources as well.
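The interpolation a PWL source performs can be sketched as follows (my own illustration; the hold-before-first-point and hold-after-last-point behaviour is an assumption based on common SPICE practice):

```python
from bisect import bisect_right

def pwl(t, points):
    """Evaluate a piecewise-linear source: points is a list of (time, value)
    pairs sorted by time. Before the first point the first value is held;
    after the last point the last value is held."""
    times = [p[0] for p in points]
    i = bisect_right(times, t)
    if i == 0:
        return points[0][1]
    if i == len(points):
        return points[-1][1]
    (t0, v0), (t1, v1) = points[i - 1], points[i]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

ramp = [(0.0, 0.0), (1e-3, 5.0), (2e-3, 0.0)]
print(pwl(0.5e-3, ramp))
```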
PWLFILE filename
This performs the same function as the normal piecewise linear source except that the values are read from a file named filename.
The file contains a list of time voltage pairs in text form separated by any whitespace character (space, tab, new line). It is not necessary to add the '+' continuation character for new lines but they will be ignored if they are included. Any non-numeric data contained in the file will also be ignored.
The PWLFILE source is considerably more efficient at reading large PWL definitions than the standard PWL source. Consequently it is recommended that all PWL definitions with more than 200 points are defined in this way.
The data output by Show /file is directly compatible with the PWLFILE source, making it possible to save the output of one simulation and use it as a stimulus for another. It is recommended, however, that the results are first interpolated to evenly spaced points using the Interp() function.
The use of engineering suffixes (e.g. k, m, p etc.) is not supported by PWLFILE.
The PWLFILE source is a feature of SIMetrix and does not form part of standard SPICE.
Note, you can use the simulator statements .FILE and .ENDF to define the contents of the file. E.g.
Vpwl1 N1 N2 PWLFILE pwlSource
...
.FILE pwlSource
...
...
.ENDF
This will be read in much more efficiently than the standard PWL and is recommended for large definitions. See .FILE and .ENDF.
SIN[E] ( vo va [freq [delay [theta [ phase]]]] )
Where:

vo: Offset (V,A). Compulsory.
va: Peak (V,A). Compulsory.
freq: Frequency (Hz). Default if omitted or zero = 1/Stop time (a).
delay: Delay (seconds). Default if omitted = 0.
theta: Damping factor (1/seconds). Default if omitted = 0.
phase: Phase in degrees. Default if omitted = 0.

a. Stop time refers to the end time of the transient analysis.
The shape of the waveform is described by:
0 to delay: $vo$

delay to Stop time: $vo + va \cdot \text{e}^{-(t-\textit{delay})\cdot\textit{theta}} \cdot \sin(2\pi(\textit{freq} \cdot (t - \textit{delay}) + \textit{phase}/360))$
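A Python sketch of this damped sine (my own illustration; it implements the piecewise formula above, with the phase given in degrees):

```python
import math

def sine(t, vo, va, freq=1.0, delay=0.0, theta=0.0, phase=0.0):
    """SPICE-style SIN source: the offset vo until delay, then a sine of
    amplitude va damped by exp(-(t-delay)*theta), phase in degrees."""
    if t < delay:
        return vo
    damping = math.exp(-(t - delay) * theta)
    return vo + va * damping * math.sin(
        2.0 * math.pi * (freq * (t - delay) + phase / 360.0))

print(sine(0.0, 1.0, 2.0, freq=1.0, phase=90.0))
```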
EXP ( v1 v2 [td1 [tau1 [td2 [tau2 ]]]] )
Where:

v1: Initial value (V,A). Compulsory.
v2: Pulsed value (V,A). Compulsory.
td1: Rise delay time. Default if omitted or zero: 0.
tau1: Rise time constant. Default if omitted or zero: Time step (a).
td2: Fall delay time. Default if omitted or zero: td1 + Time step.
tau2: Fall time constant. Default if omitted or zero: Time step.

a. Time step is set up by the .TRAN simulator directive which defines a transient analysis. Refer to .TRAN.
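A Python sketch of the EXP source (my own illustration; it assumes the standard SPICE behaviour of an exponential rise towards v2 after td1 and an exponential recovery towards v1 after td2, matching the piecewise formulas given next):

```python
import math

def exp_source(t, v1, v2, td1=0.0, tau1=1e-9, td2=2e-9, tau2=1e-9):
    """SPICE-style EXP source: v1 until td1, exponential rise towards v2
    with time constant tau1, then from td2 an exponential recovery
    towards v1 with time constant tau2. Defaults are placeholders."""
    if t < td1:
        return v1
    v = v1 + (v2 - v1) * (1.0 - math.exp(-(t - td1) / tau1))
    if t > td2:
        v += (v1 - v2) * (1.0 - math.exp(-(t - td2) / tau2))
    return v

print(exp_source(3.0, 0.0, 1.0, td1=1.0, tau1=1.0, td2=5.0, tau2=1.0))
```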
0 to td1: $v1$

td1 to td2: $v1 + (v2 - v1)\left[1 - \text{e}^{-(t-td1)/tau1}\right]$

td2 to stop time: $v1 + (v2 - v1)\left[1 - \text{e}^{-(t-td1)/tau1}\right] + (v1 - v2)\left[1 - \text{e}^{-(t-td2)/tau2}\right]$

SFFM ( vo va [fc [mdi [fs ]]] )
Where:

vo: Offset (V,A). Compulsory.
va: Amplitude (V,A). Compulsory.
fc: Carrier frequency (Hz). Default if omitted or zero = 1/Stop time (a).
mdi: Modulation index. Default if omitted = 0.
fs: Signal frequency (Hz). Default if omitted or zero = 1/Stop time (a).

a. Stop time refers to the end time of the transient analysis.
Defined by: $vo + va\cdot\sin[2\pi\cdot \textit{fc}\cdot t + \textit{mdi}\cdot\sin(2\pi\cdot\textit{fs}\cdot t)]$
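The SFFM definition translates directly to Python (my own sketch):

```python
import math

def sffm(t, vo, va, fc=1.0, mdi=0.0, fs=1.0):
    """SPICE-style single-frequency FM source:
    vo + va*sin(2*pi*fc*t + mdi*sin(2*pi*fs*t))."""
    return vo + va * math.sin(
        2.0 * math.pi * fc * t + mdi * math.sin(2.0 * math.pi * fs * t))

print(sffm(0.0, 1.0, 2.0, fc=10.0, mdi=5.0, fs=1.0))
```

With mdi = 0 this reduces to a plain sine at the carrier frequency.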
noise interval rms_value [start_time [stop_time]]
The source generates a random value every interval, with a distribution such that the spectrum of the generated signal is approximately flat up to a frequency equal to 1/(2*interval). The amplitude of the noise is rms_value volts. start_time and stop_time provide a means of specifying a time window over which the source is enabled; outside this window the source will be zero. If stop_time is omitted or zero, a value of infinity is assumed.
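A sketch of the noise source (my own illustration; the documentation fixes only the RMS amplitude and the update interval, so the Gaussian amplitude distribution used here is an assumption):

```python
import random

def noise_samples(interval, rms_value, stop, start_time=0.0, stop_time=None, seed=0):
    """Sketch of the noise source: one new value per interval, zero outside
    the [start_time, stop_time) window. A Gaussian with sigma = rms_value
    is an assumption; the docs only fix the RMS amplitude."""
    rng = random.Random(seed)
    if stop_time is None:
        stop_time = float("inf")   # omitted stop_time means "enabled forever"
    out = []
    for k in range(int(round(stop / interval))):
        t = k * interval
        enabled = start_time <= t < stop_time
        out.append(rng.gauss(0.0, rms_value) if enabled else 0.0)
    return out

samples = noise_samples(1e-6, 0.1, stop=1e-3, start_time=2e-4, stop_time=8e-4)
print(len(samples))
```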
PWLS [TIME_SCALE_FACTOR=time_factor] [VALUE_SCALE_FACTOR=value_factor] pwls_spec [ pwls_spec ... ]

Where:
time_factor
Scales all time values in definition by time_factor
value_factor
Scales all magnitude values by value_factor
pwls_spec
may be one of the following:
Sine specification, with parameters:

FREQ: Frequency. Compulsory.
PEAK: Peak value of sine. Default 1.0.
OFFSET: Offset. Default 0.0.
DELAY: Delay before sine starts. Default 0.0.
PHASE: Phase. Default 0.0.
CYCLES: Number of cycles. Use -1.0 for infinity. Default -1.0.
MINPOINTS: Minimum number of timesteps used per cycle. Default 13.
RAMP: Frequency ramp factor. Default 0.0.

The output is:
if t>0 OR DELAY<0: PEAK * SIN(f*2pi*t + PHASE*pi/180) + OFFSET
else: PEAK * SIN(PHASE*pi/180) + OFFSET
Pulse specification, with parameters:

V0: Offset. Default 0.
V1: Positive pulse value. Default 1.0.
V2: Negative pulse value. Default -1.0.
RISE: Rise time, i.e. time to change from V2 to V1. Default PERIOD/1000.
FALL: Fall time, i.e. time to change from V1 to V2. Default PERIOD/1000.
WIDTH: Positive pulse width. Default (PERIOD-RISE-FALL)/2.
PERIOD: Period. Compulsory.
DELAY: Delay before start. Default 0.
CYCLES: Number of complete cycles. -1 means infinity. Default -1.
I would appreciate it if somebody could run through this and check that it works out; any suggestions or pointers would be appreciated. I denote the standard eta function $\eta$ by $\zeta^{*}$. I have not used big-O notation, just general well-behaved functions. I do not wish to express the full error term, but instead just the principal part.
Behaviour of $\zeta(s)$ near $1$
From Abel's Theorem we can see that when $s=1$, $ \zeta^{*}(1) = \log(2)$. Now looking at $(1-2^{1-s})$ we can write it in terms of an exponential like so, \begin{equation} 1-2^{1-s} = 1 - e^{(1-s)\log(2)} \end{equation}
The power series expansion of $e^{z}$ is
\begin{equation} e^{z}= \sum_{n=0}^{\infty} \frac{z^{n}}{n!} \Rightarrow 1-2^{1-s} = 1 - e^{(1-s)\log(2)} = 1 - \sum_{n=0}^{\infty} \frac{((1-s)\log(2))^{n}}{n!} \end{equation}
The $n=0$ term of the sum is $1$ and cancels the leading $1$, so we can sum from $n=1$ instead,
\begin{equation} 1-2^{1-s} = - \sum_{n=1}^{\infty} \frac{(1-s)^{n}\log(2)^{n}}{n!} \end{equation}
Expanding this sum and multiplying in the negative sign we have,
\begin{equation} 1-2^{1-s}= (s-1) \Bigg( \log(2) - \frac{\log(2)^{2}}{2!}(s-1) + \cdot \cdot \cdot \Bigg ) \end{equation}
Factorizing the $\log(2)$ term out, \begin{equation*} (s-1)\log(2)\Bigg [ 1 - \bigg( \frac{\log(2)}{2!}(s-1) - \frac{\log(2)^{2}}{3!}(s-1)^{2} + \cdot \cdot \cdot \bigg ) \Bigg ] \end{equation*}
By the geometric series formula, valid for $s$ close enough to $1$ that the bracketed quantity has modulus less than $1$, \begin{equation} \frac{1}{\bigg[1 - \bigg( \frac{\log(2)}{2!}(s-1) + \cdot \cdot \cdot \bigg ) \bigg ] }= 1 + \Bigg( \frac{\log(2)}{2!}(s-1) + \cdot \cdot \cdot \Bigg ) + \Bigg( \frac{\log(2)}{2!}(s-1) + \cdot \cdot \cdot \Bigg )^{2} + \cdot \cdot \cdot \end{equation} Near $s=1$ the terms of this geometric series decrease rapidly, so we keep only the first terms while letting a well-behaved and analytic function $g$ absorb the remaining terms as a function of $s$.
\begin{equation} \frac{1}{\bigg[1 - \bigg( \frac{\log(2)}{2!}(s-1) + \cdot \cdot \cdot \bigg ) \bigg ] } = 1 + \frac{\log(2)(s-1)}{2} + (s-1)^{2}\cdot g(s). \end{equation}
We can now return to $\frac{1}{1-2^{1-s}}$, and express it in terms of what we have learned. \begin{equation} \frac{1}{1-2^{1-s}} = \frac{1}{\log(2)(s-1)} \Bigg( 1 + \frac{\log(2)(s-1)}{2} + (s-1)^{2}\cdot g(s) \Bigg ) = \frac{1}{\log(2)} \cdot \Bigg [ \frac{1}{s-1} + \frac{\log(2)}{2} + (s-1)g(s)\Bigg ] \end{equation}
We can now study $\zeta(s)$ when $s$ is near to $1$. \begin{equation} \zeta(s) = \frac{\zeta^{*}(s)}{1-2^{1-s}} = \frac{\zeta^{*}(s)}{\log(2)} \cdot \Bigg [ \frac{1}{s-1} + \frac{\log(2)}{2} + (s-1)g(s)\Bigg ] = \frac{\zeta^{*}(s)}{\log(2)} \cdot \frac{1}{s-1} + \frac{\zeta^{*}(s)}{2} + \frac{\zeta^{*}(s)(s-1)g(s)}{\log(2)} \end{equation}
As we know already, $\zeta^{*}(1) = \log(2)$ and $\zeta^{*}$ is analytic, so $\zeta^{*}(s)$ can be expanded as a series around $1$, \begin{equation} \zeta^{*}(s) = \log(2) + (s-1)a_1 + (s-1)^{2}a_2 + \cdot \cdot \cdot = \log(2) + (s-1) h(s) \end{equation} for a well-behaved and analytic $h$.
Near $s=1$, keeping only the principal terms, \begin{equation} \zeta(s) = \frac{\zeta^{*}(s)}{1-2^{1-s}} = \frac{ \log(2) +(s-1)h(s) }{\log(2)(s-1)} = \frac{1}{s-1} + \frac{h(s)}{\log(2)} \end{equation}
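As a numerical sanity check of the expansion (my own addition), the difference $\frac{1}{1-2^{1-s}} - \frac{1}{(s-1)\log 2}$ should tend to $\frac{1}{2}$ as $s\to 1$, which is the $\frac{\log 2}{2}$ term divided by $\log 2$:

```python
import math

LOG2 = math.log(2)

def correction(s):
    """Difference between 1/(1 - 2**(1-s)) and its principal part
    1/((s-1)*log 2); the expansion above predicts a limit of 1/2 as s -> 1."""
    return 1.0 / (1.0 - 2.0 ** (1.0 - s)) - 1.0 / ((s - 1.0) * LOG2)

for eps in (1e-1, 1e-2, 1e-3):
    print(correction(1 + eps))
```

The printed values approach 0.5, consistent with the derivation.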