H: Inequality string for comparison of series
I'd like to know if the following statement is a valid one. Specifically, I hope to use it in applying comparison tests for convergence/divergence of certain series.
For sufficiently large n,
$n, n^2, n^3, n^4, ... < 2^n, 3^n, 4^n, ... < n!$
The statement wasn't in my text, and I couldn't find it anywhere -- but I tried to recall it from back when I took Calculus II myself.
Is the above statement correct ? Further, can the above statement be expanded upon to say
For sufficiently large n,
$n^c < a^n < n!$ ,
where $c$ is any positive real number and $a$ is any real number greater than $1$ ?
Thanks !
AI: Proofs are relatively easy.
$n^c \lt a^n$ can be obtained by taking logs: $c\log(n)\lt n\log(a)$, i.e. $\frac{c}{\log(a)}\lt \frac{n}{\log(n)}$, which holds for large enough $n$ since $n/\log(n)\to\infty$.
Using Stirling's formula $n!\approx \sqrt{2\pi n}\left(\frac{n}{e}\right)^n$, the inequality $a^n\lt n!$ becomes $1\lt \sqrt{2\pi n}\left(\frac{n}{ae}\right)^n$, which holds as soon as $n\ge ae$.
The expansion is more general (as I interpret it). The first inequality of the original chain looks like $n^k\lt k^n$ for fixed $k$ and large $n$.
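A quick numerical sanity check (a minimal sketch; the parameters $c=5$, $a=1.5$ and the cutoffs below are arbitrary illustrations, not sharp thresholds):

```python
from math import factorial

# check n^c < a^n < n! for sample parameters; both inequalities
# only kick in once n is "sufficiently large"
c, a = 5.0, 1.5
for n in (10, 50, 100, 200):
    print(n, n**c < a**n, a**n < factorial(n))
```

The first comparison flips to True once $n$ passes the threshold determined by the chosen $c$ and $a$.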
|
H: Converging Integral question
Is this integral convergent or divergent?
$$ \int_1^{\infty} \sqrt{x}\, \ln\!\left(1+ \frac{\sin(x)}{x}\right)\,\text {d}x$$
and
$$ \int_1^{\infty} \sqrt{x}\, \ln\!\left(1+ \frac{\cos(x)}{x}\right)\,\text {d}x$$
You cannot use the standard comparison tests because the $\ln$ factor keeps changing sign (from $-$ to $+$ and back), and I couldn't apply L'Hospital to the absolute value.
I would appreciate your help!
AI: Near $X=0$, we have
$$\ln(1+X)=X-\frac{X^2}{2}(1+\epsilon(X))$$
with
$$\lim_{X\to 0}\epsilon(X)=0$$
So, with $$X=\frac{\sin(x)}{x}$$
we have
$$\lim_{x\to+\infty}X=0$$
because
$$|\frac{\sin(x)}{x}|\le \frac 1x$$
Thus
$$\ln(1+\frac{\sin(x)}{x})=\frac{\sin(x)}{x}-\frac{\sin^2(x)}{2x^2}(1+\epsilon(x))$$
with
$$\lim_{x\to +\infty}\epsilon(x)=0$$
Then the integrand $f(x)=\sqrt{x}\, \ln\!\left(1+ \frac{\sin(x)}{x}\right)$ splits as
$$f(x)=g(x)+h(x)$$
where
$$g(x)=\frac{\sin(x)}{\sqrt{x}}$$
$\int_1^{+\infty}g $ converges by Dirichlet's test.
and
$$h(x)\sim -\frac{\sin^2(x)}{2x^{\frac 32}}
$$
$\int_1^{+\infty}h$ converges absolutely since
$$|\frac{\sin^2(x)}{2x^\frac 32}|\le \frac{1}{2x^\frac 32}$$
As a sum of two convergent integrals, your integral is convergent.
Same for the second.
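For numerical reassurance, here is a rough sketch (a plain trapezoidal rule on a fine grid; the truncation points $T$ are arbitrary):

```python
import numpy as np

def partial_integral(T, pts_per_unit=200):
    # trapezoidal approximation of the integral from 1 to T
    x = np.linspace(1.0, T, int((T - 1) * pts_per_unit) + 1)
    y = np.sqrt(x) * np.log1p(np.sin(x) / x)
    return np.trapz(y, x)

for T in (10, 100, 1000, 4000):
    print(T, partial_integral(T))
```

The truncated values settle down as $T$ grows, consistent with the (conditional) convergence coming from the $g$ term.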
|
H: L'Hopital's rule conditions
I have seen easy geometrical argument why L'Hopital's rule ($\lim_{x \to a} \frac{f(x)}{g(x)} = \lim_{x \to a} \frac{f'(x)}{g'(x)}$) works (local linearization). But, I still don't understand this:
why is the rule stated only when the limit is of the form $\frac{0}{0}$ or $\pm \frac{\infty}{\infty}$?
Why must $f(a) = g(a) = 0$?
why must $g'(a) \neq 0$?
Counterexample for 1: $\lim_{x \to 0} \frac{e^x}{x^2 + x + 1} = \frac{1}{1}$ but is also $\lim_{x \to 0} \frac{(e^x)'}{(x^2 + x + 1)'} = \lim_{x \to 0} \frac{e^x}{2x + 1} = \frac{1}{1}.$ So, L'Hopital's rule works here but $\frac{1}{1} \neq \frac{0}{0}$!
Also, I read that there is another condition:
4. $\lim_{x \to a}\frac{f'(x)}{g'(x)}$ must exist
does condition 3 imply this?
can you give an example where the original limit exists but $\lim \frac{f'}{g'}$ does not, and how is this possible if the functions $f, g$ are differentiable?
AI: You can easily come up with counterexamples for applying L'Hôpital's rule when the limit is not of the form $0/0$ or $\infty/\infty$. For any $a\in\mathbb{R}$ with $a\neq-1$:
$$\lim_{x\to a}\frac{x}{1+x}=\frac{a}{1+a}\neq1=\lim_{x\to a}\frac{1}{1}=\lim_{x\to a}\frac{(x)'}{(1+x)'}.$$
The limit is never of the form $0/0$ or $\infty/\infty$ and clearly L'Hôpital's rule does not work on this example. To see why the rule does work for limits of the $0/0$ or $\infty/\infty$ form, see any analysis textbook for a proof (for example, Rudin's Principles of Mathematical Analysis).
We don't strictly need $g'(a)\neq0$. For example:
$$\lim_{x\to0}\frac{x^2}{x^2+x^3}=\lim_{x\to0}\frac{2x}{2x+3x^2}=\lim_{x\to0}\frac{2}{2+6x}=1,$$
where we applied L'Hôpital's rule twice, but the second limit still is of the form $0/0$.
The limit $f'/g'$ may fail to exist even when the conditions for L'Hôpital's rule are satisfied and the limit $f/g$ exists; the classic example is:
$$\lim_{x\to\infty}\frac{x+\sin(x)}{x}=1,$$
but upon applying L'Hôpital's rule we obtain:
$$\lim_{x\to\infty}1+\cos(x),$$
for which the limit does not exist.
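A quick numerical sketch of that last example:

```python
import numpy as np

# (x + sin x)/x tends to 1, while the post-L'Hopital expression
# 1 + cos x keeps oscillating in [0, 2] and has no limit
for x in (10.0, 100.0, 1000.0, 10000.0):
    print(x, (x + np.sin(x)) / x, 1 + np.cos(x))
```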
|
H: Why is it true that if the integral is finite then $\lim_{s\to\infty} \int_{s}^{\infty} f(x) dx=0$?
I saw it whilst reading my lecture notes that
Let $f(x)$ be integrable such that $\int_{-\infty}^{\infty} f(x) dx<\infty$ then it means that $\lim_{s\to\infty} \int_{s}^{\infty} f(x) dx=0.$
I can't seem to understand why such a deduction can be made. I can believe the statement, but is there a solid reason behind it? (P.S. I am not thinking of either Lebesgue integration or Riemann integration specifically, but just integration in general, if it helps.)
Many thanks in advance.
AI: Quick proof:
$${\int_{0}^{\infty}f(x)dx = \int_{0}^{s}f(x)dx + \int_{s}^{\infty}f(x)dx}$$
By definition, ${\int_{0}^{\infty}f(x)dx=\lim_{s\rightarrow\infty}\int_{0}^{s}f(x)dx}$. Denote ${\int_{0}^{\infty}f(x)dx}$ as ${L}$. Then you see
$${\lim_{s\rightarrow\infty}\int_{0}^{\infty}f(x)dx = \lim_{s\rightarrow\infty}\left(\int_{0}^{s}f(x)dx + \int_{s}^{\infty}f(x)dx\right)}$$
$${\Rightarrow L=L + \lim_{s\rightarrow \infty}\int_{s}^{\infty}f(x)dx}$$
So you see that
$${\lim_{s\rightarrow\infty}\int_{s}^{\infty}f(x)dx}$$
must equal ${0}$.
|
H: Solvability by radicals
I study the book "Introduction To Field Theory" by Iain Adamson (https://archive.org/details/IntroductionToFieldTheory), and struggle with the Theorem 26.5. on page 166:
Let $F$ be a field of characteristic zero. If the polynomial $f$ in $F[x]$ is solvable by radicals, then the Galois group of any splitting field of $f$ over $F$ is solvable.
Adamson defines "solvable by radicals" on page 160/161:
Let $F$ be a field of characteristic zero; a field $E$ containing $F$ is said to be an extension of $F$ by radicals if there exists a sequence of subfields $F = E_0, E_1, ..., E_{r-1}, E_r = E$
such that for $i = 0, ..., r-1$, $E_{i+1} = E_i(\alpha_i)$, where $\alpha_i$ is a root of an irreducible polynomial in $E_i[x]$ of the form $X^{n_i} - a_i$.
A polynomial $f$ in $F[x]$ is said to be solvable by radicals if there exists a splitting field of $f$ over $F$ which is contained in an extension of F by radicals.
I don't see where in the proof of Theorem 26.5 the irreducibility of $X^{n_i} - a_i$ is used. Why does Adamson require the polynomial to be irreducible?
Update, July 8th: I would like to add some definitions. Let us call a radical extension with irreducible polynomials $X^{n_i} - a_i$ an irreducible radical extension, and a polynomial whose splitting field is contained in an irreducible radical extension, solvable by irreducible radicals. When Adamson says "solvable by radicals", he actually means "solvable by irreducible radicals". In the cited theorem Adamson then states: "If the polynomial is solvable by irreducible radicals, then the Galois group is solvable". But I think what he actually proves is the stronger statement: "If the polynomial is solvable by radicals, then the Galois group is solvable". Don't be confused when you read the theorems in the book: Adamson does not use the term "solvable by irreducible radicals".
AI: It's probably so the Galois group of the extension $K_n(E_{i+1})/K_n(E_i)$ is easier to handle. Consider, for instance, $\mathbb Q(\mathrm i,\sqrt2)/\mathbb Q(\mathrm i)$ (the $\mathrm i$ is the root of unity he adjoins at the beginning of the proof). This is the splitting field of $X^4-4$ over $\mathbb Q(\mathrm i)$, but also of the minimal polynomial $X^2-2$ of $\sqrt2$. The order of the Galois group is the degree of the minimal polynomial. If we choose the minimal polynomial to construct our extension, we can read off the order of the cyclic factors directly. If reducible polynomials were allowed, the orders of the groups would not be readily available. It would also make the degree $n$ of the polynomial $X^n-e$ whose roots he adjoins at the start unnecessarily large. I don't think any of these matter for the proof, but maybe he considered it more elegant?
|
H: Fibonacci and tossing coins
Consider the following scheme, starting with a sequence $\sigma_0 = \langle 1,1,\dots,1\rangle$ of length $k$, successively followed by sequences $\sigma_i$ of the same length but shifted by one to the right, where the first entry $\sigma_{i0}$ equals the sum of all values above it, and $\sigma_{ij} = \sigma_{i0}$ for all $j$.
For $k = 5$ one has:
1 1 1 1 1
1 1 1 1 1
2 2 2 2 2
4 4 4 4 4
8 8 8 8 8
15 15 15 15 15
29 29 29 29 29
56 56 56 56 56
108 108 108 108 108
208 208 208 208 208
Calculating the sum for each column one gets e.g. for $k = 5$:
1 2 4 8 16 30 58 112 216 416 802 1546 2980 5744 ...
It turns out that for $k = 3$ and $k = 4$ these sequences, namely
1 2 4 6 10 16 26 42 68 110 178 288 466 754 1220 1974 ...
and
1 2 4 8 14 26 48 88 162 298 548 1008 1854 3410 6272 ...
seem to be the numbers of ways to toss a coin $n$ times and not get a run of $k$ (see A128588 and A135491).
Conjecture: This holds in general, i.e. for arbitrary $k$.
My question is two-fold:
How to prove this conjecture?
What do the schemes above have to do with tossing a coin and counting runs?
Guess: When you try to calculate the numbers of ways to toss a coin $n$ times and not get a run of $k$ you may come up with those schemes. But how?
Note that the sequence for $k=3$ (A128588) happens to be double the Fibonacci numbers.
The schemes arose when I tried to mimic epidemic spread in a SIR-like discrete model (see here).
AI: Here's another way to construct your sequence. Let $a^k$ be the sequence defined by
$$a^k_n=a^k_{n-1}+a^k_{n-2}+\cdots+a^k_{n-k+1}$$ for $n\geq k$ and
$$a^k_n=2^n$$ for $0\leq n < k$.
Essentially this is a generalization of the Fibonacci sequence where the initial terms are powers of $2$ and successive terms are the sum of the previous $k-1$ entries.
What does this have to do with coins and runs? Let's first look at the case $k=2$.
$$a^2:1,2,2,2,...,2$$
In order to create a sequence of $n$ coin flips without a run of $2$ you must first create a sequence of $n-1$ coin flips without a run of $2$, and then you are forced to pick heads or tails based on the last entry in this $n-1$ sequence.
What happens in the case $k=3$?
$$a^3:1,2,4,6,10,16,...$$
In order to count the number of ways to create a sequence of $n$ coin flips without a run of $3$, you can break this down into two easier questions: 1) How many $n$-sequences without $3$-runs end in a $1$-run? And 2) How many $n$-sequences without $3$-runs end in a $2$-run? The respective answers are 1) the number of ways to create an $(n-1)$-sequence without $3$-runs and 2) the number of ways to create an $(n-2)$-sequence without $3$-runs.
In the general case, in order to count the number of $n$-sequences without a $k$-run, you break the question down into a series of smaller ones: how many $n$-sequences without $k$-runs have a $1$-run at the end? And so on and so forth, until you ask how many $n$-sequences without $k$-runs have a $(k-1)$-run at the end. So counting the number of $n$-sequences without $k$-runs just amounts to summing up the previous $k-1$ terms.
If anything I've written is confusing, please let me know and I will try and explain myself better.
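Here is a small script that makes the correspondence concrete: it brute-forces the number of coin sequences with no run of $k$ and compares the counts with the recurrence above (a sketch; the function names are ad hoc):

```python
from itertools import product

def brute(n, k):
    # count length-n heads/tails sequences whose longest run is < k
    count = 0
    for seq in product('HT', repeat=n):
        run = longest = 1
        for a, b in zip(seq, seq[1:]):
            run = run + 1 if a == b else 1
            longest = max(longest, run)
        if longest < k:
            count += 1
    return count

def recurrence(n, k):
    a = [2**i for i in range(k)]     # a_n = 2^n for 0 <= n < k
    while len(a) <= n:
        a.append(sum(a[-(k - 1):]))  # sum of the previous k-1 terms
    return a[n]

for k in (3, 4, 5):
    print([brute(n, k) for n in range(1, 12)])
    print([recurrence(n, k) for n in range(1, 12)])
```

For $k=3,4,5$ the two rows agree, and the $k=5$ row reproduces the column sums $1, 2, 4, 8, 16, 30, 58, \dots$ from the question.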
|
H: Show that $\sum_{k=1}^\infty \frac{i}{k(k+1)}$ converges. Find its sum.
Show that $\sum_{k=1}^\infty \frac{i}{k(k+1)}$ converges. Find its sum.
The presence of the $i$ throws me off. Our professor taught us to find $a$ (the first term in the sequence) and $z$ (the multiplier to get the next term). Then find the modulus of z, and if $|z|<1$ then the series converges.
Here $a=\frac{i}{2}$, but I'm struggling to find $z$. I think that $z=i-\frac{i}{K+1}$, but I'm not completely confident in that. Additionally, if that $z$ value is correct would I end up with $\sqrt{0^2+(1-\frac{1}{K+1})^2}$? I'm pretty sure there shouldn't be a variable in there...
Also how should I find the sum?
AI: As suggested by @Kavi Rama Murthy, pull out the constant multiple of $i$ and simply do partial fractions:
$$\sum_{k=1}^\infty \frac{i}{k(k+1)}=i\sum_{k=1}^{\infty}\left(\frac{1}{k}-\frac{1}{k+1}\right)$$
So you only need to find the answer to the infinite sum in the brackets, and multiply the result by $i$. Can you take it from here?
|
H: Show $l^p$ is not complete with the $q$ norm
I know the question has been asked here, but I do not understand the solution (Are $\ell_p$ spaces complete under the $q$-norm?)
I came up with my own solution and was wondering if it is correct.
Denote by $l^p_q$ the space $l^p$ with the $q$-norm. Now if it is complete, then the identity map $l^p_p \to l^p_q$ is a continuous bijection (easy to show), and so by the open mapping theorem the inverse mapping is bounded. Therefore $\exists M$ s.t. $\|x\|_q\leq M\|x\|_p$. Now consider $x_n=(2^n,0,0,\dots)$; then $\|x_n\|_q=2^{nq}$ and $\|x_n\|_p=2^{np}$, thus $2^{n(q-p)}\leq M$ for all $n$, which is a contradiction.
Is this correct? Is there an even simpler solution?
AI: There are several mistakes. Continuity of the inverse gives $\|x\|_p \leq M\|x\|_q$ (not $\|x\|_q \leq M\|x\|_p$). Besides, your computation of the norms is wrong: in your example $\|x\|_p=\|x\|_q=2^{n}$.
Consider the sequences $(1, 2^{-1/p},3^{-1/p},\dots, N^{-1/p},0,0,\dots)$ to arrive at a contradiction to the inequality $\|x\|_p \leq M\|x\|_q$.
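To see numerically why these sequences defeat any candidate constant $M$, here is a sketch with $p=1$, $q=2$ (assumed for concreteness; any $p<q$ behaves the same way):

```python
import numpy as np

p, q = 1.0, 2.0
for N in (10, 1000, 1_000_000):
    j = np.arange(1, N + 1, dtype=float)
    x = j ** (-1.0 / p)
    norm_p = (x ** p).sum() ** (1 / p)  # harmonic-type sum: grows like log N
    norm_q = (x ** q).sum() ** (1 / q)  # bounded, since q/p > 1
    print(N, norm_p / norm_q)
```

The ratio $\|x\|_p/\|x\|_q$ grows without bound, so no $M$ with $\|x\|_p \le M\|x\|_q$ can exist.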
|
H: Prove that every set and subset with the cofinite topology is compact
Prove that every set with the cofinite topology is compact as well as every subset
Solution. Let $X$ be a nonempty set with the cofinite topology and let $ \mathscr{U}$ be an open cover of $ X $. Let $ U \in \mathscr{U}$. Then $X\setminus U$ is finite. For every $a \in X\setminus U$ let $U_a$ be an element of $\mathscr{U}$ that contains $a$. Then $\{U\}\cup\{U_a : a \in X\setminus U\}$
is a finite subcover of $\mathscr{U}$.
Now I am missing the part for the subsets $E\subseteq X$. I don't think this refers to the relative topology, but just to any subset of $X$.
How do I go about it?
AI: In the case of compactness it makes no difference whether you use the topology of $X$ or the relative topology on the subset.
Proposition. Let $\langle X,\tau\rangle$ be any space, let $K\subseteq X$, and let $\tau_K$ be the relative topology on $K$; then $K$ is compact with respect to $\tau$ iff it is compact with respect to $\tau_K$.
Proof. Suppose first that $K$ is compact with respect to $\tau$, and let $\mathscr{U}\subseteq\tau_K$ be a $\tau_K$-open cover of $K$. For each $U\in\mathscr{U}$ there is a $V_U\in\tau$ such that $U=K\cap V_U$. Let $\mathscr{V}=\{V_U:U\in\mathscr{U}\}$; clearly $\mathscr{V}$ is a $\tau$-open cover of $K$, so it has a finite subcover $\{V_{U_1},\ldots,V_{U_n}\}$. Let $\mathscr{F}=\{U_1,\ldots,U_n\}$; $\mathscr{F}$ is a finite subset of $\mathscr{U}$, and
$$\bigcup\mathscr{F}=\bigcup_{k=1}^nU_k=\bigcup_{k=1}^n(K\cap V_{U_k})=K\cap\bigcup_{k=1}^nV_{U_k}=K\;,$$
so $\mathscr{F}$ covers $K$. Thus, $K$ is compact with respect to $\tau_K$.
Now suppose that $K$ is compact with respect to $\tau_K$, and let $\mathscr{U}\subseteq\tau$ be a $\tau$-open cover of $K$. For each $U\in\mathscr{U}$ let $V_U=K\cap U$, and let $\mathscr{V}=\{V_U:U\in\mathscr{U}\}$. $\mathscr{V}$ is a $\tau_K$-open cover of $K$, so it has a finite subcover $\{V_{U_1},\ldots,V_{U_n}\}$. Let $\mathscr{F}=\{U_1,\ldots,U_n\}$; $\mathscr{F}$ is a finite subset of $\mathscr{U}$, and
$$\bigcup\mathscr{F}=\bigcup_{k=1}^nU_k\supseteq\bigcup_{k=1}^n(K\cap U_k)=\bigcup_{k=1}^nV_{U_k}=K\;,$$
so $\mathscr{F}$ covers $K$. Thus, $K$ is compact with respect to $\tau$. $\dashv$
|
H: Verify $\tau=\{A \subseteq \mathbb{R}| |\mathbb{R}\setminus A| \leq |\mathbb{N}| \}$ is a topology
Consider the set $\tau=\{A \subseteq \mathbb{R}| |\mathbb{R}\setminus A| \leq |\mathbb{N}| \}$
Verify it is a topology
Obviously $\mathbb{R} \in \tau$ since $|\mathbb{R}\setminus \mathbb{R}|=0 \leq |\mathbb{N}|$,
but $\emptyset\notin \tau$, because $|\mathbb{R}\setminus \emptyset|=|\mathbb{R}| \leq |\mathbb{N}|$ is false.... (1) This is a problem. Should I assume $\emptyset$ is there anyway?
Let $A_\alpha \in \tau$ $ \forall \alpha \in I$ $\implies$ $ |\mathbb{R}\setminus A_\alpha| \leq |\mathbb{N}| $ $\forall \alpha$ now let $U=\bigcup_{\alpha \in I}A_\alpha$
$| \mathbb{R}\setminus\bigcup_{\alpha \in I}A_\alpha|=| \bigcap_{\alpha \in I} \mathbb{R}\setminus A_\alpha| $
...(2) Now here is where I am unsure: how can I justify that this cardinality is $\leq |\mathbb{N}|$? What theorem should I invoke?
Let $A_\alpha \in \tau$ $ \forall \alpha =1,...,m$ $\implies$ $ |\mathbb{R}\setminus A_\alpha| \leq |\mathbb{N}| $ $\forall \alpha$ now let $B=\bigcap_{\alpha=1}^mA_\alpha$
$| \mathbb{R}\setminus\bigcap_{\alpha=1}^mA_\alpha|=|\bigcup_{\alpha=1}^m \mathbb{R}\setminus A_\alpha| $
...(3) Same here: how can I justify that this cardinality is $\leq |\mathbb{N}|$? What theorem should I invoke?
AI: As you’ve noticed, $\tau$ is not actually a topology, since it fails to include the empty set. While this may be a trick question, I suspect that the omission is actually an oversight. I would point out that $\varnothing\notin\tau$, so that in fact $\tau$ is not a topology, but I would go on to prove that $\tau\cup\{\varnothing\}$ is a topology.
For your second question, pick any $\alpha_0\in I$ and observe that
$$\bigcap_{\alpha\in I}(\Bbb R\setminus A_\alpha)\subseteq\Bbb R\setminus A_{\alpha_0}\;,$$
so that automatically
$$\left|\bigcap_{\alpha\in I}(\Bbb R\setminus A_\alpha)\right|\le|\Bbb R\setminus A_{\alpha_0}|\;.$$
Finally, for the third question, what do you know about the union of a finite number of countable sets?
|
H: Calculate $\lim_{x \to 0} f(x)$
Let $f:\mathbb{R}\to \mathbb{R}$ be a function. Suppose $\lim_{x \to 0} \frac{f(x)}{x} = 0$. Calculate $\lim_{x \to 0} f(x)$.
According to the answer key, $\lim_{x \to 0} f(x) = 0$. I see that $f(x)=x^2$ satisfies both limits, but is there a way to "construct" an argument to prove this? I mean, how do I arrive at the answer?
AI: For $x \neq 0$ we have $f(x)=\frac{f(x)}{x}\cdot x$, so $\lim_{x \to 0} f(x)=\lim_{x \to 0} \frac {f(x)} x \cdot x =\left(\lim_{x \to 0}\frac {f(x)} x\right)\left(\lim_{x \to 0}x\right)=(0)(0)=0$.
|
H: Contradicting the non-existence of a linear map $T: \Bbb R^5 \to \Bbb R^5$ and the Fundamental Theorem of Linear Algebra (from Axler Exercise 3.B(5))
I am asked to prove there does not exist a linear map $T:\Bbb R^5 \to \Bbb R^5$ such that $\operatorname{range}(T) = \operatorname{null}(T)$.
I think I understand the proof whereby the Fundamental Theorem of Linear Algebra (aka rank nullity theorem) shows how this isn't possible. The null space and the range can't each have a dimension of 2.5
But I want to ask what is wrong with the following linear map, why this somehow doesn't work:
Let the basis of $\Bbb R^5$ be given by 5 independent vectors $e_1, e_2, e_3, e_4$ and $e_5$.
Define a linear map $T:\Bbb R^5 \to \Bbb R^5$ as follows: $Te_1=0, Te_2=0, Te_3=e_1, Te_4=e_1, Te_5=e_2.$
If I define $T$ this way, it seems that $\operatorname{null}(T) = \operatorname{span}(e_1, e_2)$ and $\operatorname{range}(T) = \operatorname{span}(e_1, e_2)$, i.e. $\operatorname{range}(T) = \operatorname{null}(T)$, which is not supposed to be possible given that the textbook is asking me to prove it isn't.
Where am I going wrong here? (For context, the previous problem in this exercise asked to give an example of a similar linear map $T:\Bbb R^4 \to \Bbb R^4$ where $\operatorname{range}(T) = \operatorname{null}(T)$, and one such linear map $T$ is as follows: $Te_1=e_3, Te_2=e_4, Te_3=0, Te_4=0$ according to the solutions. Notice that doesn't seem too different to what I'm doing in $T:\Bbb R^5 \to \Bbb R^5$.)
AI: $e_3 - e_4$ is also in the null space of your $T$ (since $Te_3 = Te_4 = e_1$), but it is not in the span of $e_1, e_2$. So $\operatorname{null}(T)$ is strictly larger than $\operatorname{span}(e_1,e_2)$: by rank–nullity its dimension is $3$, not $2$.
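A quick numpy check of what rank–nullity forces here (a sketch; the matrix columns are the images $Te_1,\dots,Te_5$ in the standard basis):

```python
import numpy as np

T = np.zeros((5, 5))
T[0, 2] = 1  # Te3 = e1
T[0, 3] = 1  # Te4 = e1
T[1, 4] = 1  # Te5 = e2  (Te1 = Te2 = 0)

rank = np.linalg.matrix_rank(T)
print(rank, 5 - rank)   # rank 2, nullity 3: they can never both be 2.5

v = np.array([0., 0., 1., -1., 0.])  # e3 - e4
print(T @ v)            # zero vector: a null vector outside span(e1, e2)
```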
|
H: What does it really mean for a model to be pointwise definable?
(Note: I'm only an amateur in logic, so I'm sorry for any weird
terminology or notation, or excessive tedious details. Most of what I
know is from Kunen's Foundations of Mathematics.)
I'm trying to learn a little about pointwise definable models. I'm
looking at "Pointwise definable models of set
theory" by Hamkins, Linetsky and
Reitz, and I'm stuck on the really basic question of what
"pointwise definable" really means formally.
Let me give a toy example that I hope will illustrate my problem.
Let's work in $\mathsf{ZFC}-\mathsf{Infinity}$, or
$\newcommand{\ZFCI}{\mathsf{ZFC}-\mathsf{I}}\ZFCI$ for short, and let
$HF$ be the class of hereditarily finite sets, which may not itself be
a set. But there is certainly a first-order formula which says that
$x$ is hereditarily finite, which will be abbreviated as usual by
"$x \in HF$". Note that $HF$ is a model of $\ZFCI$, and given any
first-order sentence $\varphi$, there is a first-order sentence
$HF \vDash \varphi$ which relativizes $\varphi$ to $HF$,
i.e. replacing all $\forall y$ with $\forall y \in HF$ and so on.
It's intuitively obvious that the model $HF$ is pointwise definable,
because I "know" what all the hereditarily finite sets are, and for
each one I can write down a first-order formula in the language of set
theory that uniquely defines it. But if I want to try to prove this,
I need to know where the statement "lives", and what axioms would be
available. I can think of three different possibilities but they each
have problems.
I could try to state and prove "$HF$ is pointwise
definable" as a theorem schema in the metatheory. Now the
best way I've found to understand the metatheory is as a system that
reasons about "strings": its universe of discourse consists of
first-order formulas, sentences, lists of sentences, proofs, etc. So
I would have to say something like
For every set $x \in HF$, there exists a first-order formula
$\varphi(y)$ with $y$ free and a proof from the axioms $\ZFCI$ of the sentence
$$HF \vDash \forall y (y=x \longleftrightarrow \varphi(y) )\tag{1}$$
But I have two
problems with that statement. Sets are not strings, and so the
metatheory can't quantify over them. And the "sentence" (1) is not a
sentence because $x$ is free, and I don't know what to put in its
place. (This feels like the paradox illustrated by Hamkins' young son
in a footnote in the paper: "Tell me any number, and I'll tell you a
description of it.")
I could try to state and prove "$HF$ is pointwise
definable" as a theorem of $\ZFCI$. Now I have the opposite problem
with a statement like "for every set $x \in HF$ there exists a
first-order formula $\varphi$", because first-order formulas are not
sets and set theory cannot quantify over them, at least not as such. But I do know
that I can encode first-order formulas $\varphi$ as sets
$\ulcorner \varphi \urcorner$ using Gödel codes or the like. So I
could try to write down a sentence in the language of set theory, of the form
$$\forall x \in HF \: \exists\, \ulcorner \varphi \urcorner \: \dots $$ But now I am stuck again
because the $\cdots$ needs to say $HF \vDash \forall y (y=x \longleftrightarrow \varphi(y))$, and Tarski's
undefinability of truth tells me that there is no first-order formula
in $\ulcorner \varphi \urcorner$ and $y$ that expresses that.
I could try to state and prove "$HF$ is pointwise
definable" as a theorem of some stronger set theory, say
$\mathsf{ZFC}$. This gives me a way out of the previous dilemma,
because in $\mathsf{ZFC}$, $HF$ is actually a set. And Tarski's
definability of truth tells me that there is indeed a first-order
formula $\Phi(M, \ulcorner \varphi \urcorner, x)$ which says
$M \vDash \varphi(x)$ for set models $M$. So finally I can write a sentence like
$$\forall x \in HF \: \exists \ulcorner \varphi \urcorner \: \Phi(HF, \ulcorner \forall y ( y=x \longleftrightarrow \varphi(y) )\urcorner).$$
But I've paid a price in consistency strength. More generally, if I
want to do this for any other class model $M$ of $\ZFCI$, then by
Gödel's second incompleteness theorem, I am going
to have to work in an axiom system at least as strong as $(\ZFCI)
+ \mathrm{Con}(\ZFCI)$ so that $M$ has some hope of being a set.
So I'm wondering if 3 is really what's meant when we say that a model
is pointwise definable, or if there's some way to salvage 1 or 2, or
yet a fourth interpretation that I haven't thought of (some sort of
meta-meta-theory, or a different logic or set theory altogether?).
Likewise in the HLR paper, I don't know whether the theorem "there
exist pointwise definable models of $\mathsf{ZFC}$" is meant to be
understood as a metatheorem, or a theorem of $\mathsf{ZFC}$, or of
$\mathsf{ZFC}+\mathrm{Con}(\mathsf{ZFC})$ (in which the models in
question are actually sets), or what. I can't figure out how to make
sense of the first two, and if they meant the third, it seems
surprising that they wouldn't say so explicitly.
I did notice a comment on HLR's page 3 that "the property of being
pointwise definable is not first-order expressible", which I don't
quite understand but is maybe a reference to my problem?
AI: The first point is to distinguish between internal and external properties. This is exacerbated by the fact that we're looking specifically at $\mathsf{ZFC}$, which is "doing double duty" in the most confusing way possible.
The short answer to your question, though, is: "$3$."
First, the internal side of things. This is relevant to the end of your question only.
Almost always, when we say "Property X is not first-order expressible" what we mean is "There is no first-order sentence $\varphi$ such that for every appropriate structure $\mathcal{M}$, we have $\mathcal{M}\models\varphi$ iff $\mathcal{M}$ has property X." So, for example, being a torsion group is not first-order expressible.
In particular, "Pointwise definability is not first-order expressible" is a consequence of the following perhaps-simpler result:
Every (infinite) pointwise-definable structure is elementarily equivalent to a non-pointwise-definable structure.
The above statement is made and proved within $\mathsf{ZFC}$. The "nuke" is upwards Lowenheim-Skolem:
$\mathsf{ZFC}$ proves "If $\mathcal{M}$ is a pointwise-definable structure, then $\mathcal{M}$ is countable."
Wait, what? See the very end of this answer.
$\mathsf{ZFC}$ also proves "Every infinite structure $\mathcal{M}$ is elementarily equivalent to an infinite structure of strictly greater cardinality."
Putting these together, we have the desired result.
As a corollary, we have the following (again proved in $\mathsf{ZFC}$):
For every first-order theory $T$, either $T$ has no pointwise-definable models at all or the class of pointwise-definable models of $T$ is not an elementary class.
(We need the first clause in case the relevant class is $\emptyset$. This can indeed happen, even if $T$ is consistent: consider the theory of a pure set with two elements.)
But the bulk of the issue in your OP is about the external side of things. Here it's your third option which holds, as follows:
We state and prove each relevant fact $\mathsf{A}$ inside $\mathsf{ZFC}$.
... Except that sometimes as a matter of practice, we're sloppy - and either (the two options are equivalent) actually use a stronger system $\mathsf{ZFC+X}$ or prove $\mathsf{X}\rightarrow\mathsf{A}$ for some "unspoken but clear from context" (:P) additional principle $\mathsf{X}$. Standard candidates for $\mathsf{X}$ include the standard "generalized consistency" principles ("$\mathsf{ZFC}$ has a model/$\omega$-model/transitive model") and - far less benignly, but unfortunately with nonzero frequency - the whole slew of large cardinal axioms.
However, it seems to me that the HLR paper is actually pretty good on this point. For example, the first bulletpoint of Theorem $3$ is "If $\mathsf{ZFC}$ is consistent, then there are continuum many non-isomorphic pointwise definable models of $\mathsf{ZFC}$," which is indeed a $\mathsf{ZFC}$-theorem. (I could be missing an elision somewhere else, though.)
As a coda, note that above I mentioned that $\mathsf{ZFC}$ proves that every pointwise-definable structure is countable. It does this, moreover, by exactly the "math tea argument." So what gives?
Well, we have to unpack what "Every pointwise-definable structure is countable" means when we phrase it in $\mathsf{ZFC}$. When we say "$\mathcal{M}$ is pointwise-definable," what we mean is that there is an appropriate assignment of truth values to pairs consisting of formulas of the language and tuples of appropriate arity such that [stuff]. This blob of data exists "one level higher" than $\mathcal{M}$ itself, and in particular even the bit of this blob verifying that each element of $\mathcal{M}$ satisfies "$x=x$" is a collection of $\mathcal{M}$-many facts. As such:
Using the "all-at-once" definition of $\models$, which is totally fine for set-sized structures, we have $\mathsf{ZFC}\vdash$ "$V\not\models \forall x(x=x)$."
Hehehehe.
This is because the expression "$V\not\models\forall x(x=x)$," if we try to construe it directly as above, is shorthand for: "There is a function with domain $V\times Formulas(\{\in\})$ such that ...," and that's dead on arrival since there aren't any functions with domain anything like $V$ in the first place.
So actually $\mathsf{ZFC}$ does prove "$V$ is not pointwise definable" - as long as we formulate this blindly. But if we do that, then we have to admit that $\mathsf{ZFC}$ also proves e.g. "There is no sentence which $V$ satisfies." Which ... yeah.
Incidentally, the above should make you worry about a couple things:
Relatively benignly, is the "all-at-once" definition of $\models$ actually appropriate for set-sized structures? As a matter of fact it is, but this is not entirely trivial. Specifically, the $\mathsf{ZFC}$ axioms are strong enough to perform the recursive construction of the theory of a structure, and so prove that for each (set-sized) structure $\mathcal{M}$ there is exactly one relation between formulas and tuples from $\mathcal{M}$ satisfying the desired properties. Weaker theories don't need to be this nice: while any non-totally-stupid theory can prove that at most one "theory-like thing" exists for a given structure, if we go weak enough we lose the ability to carry out Tarski's "algorithm." (Fortunately, we do in fact have to go quite weak; see my answer here.)
More foundationally, why are we so blithe about how we formulate various mathematical claims in the language of set theory? Of course this isn't at all new, and in particular the above observation that in a precise sense $\mathsf{ZFC}$ proves "$V$ doesn't satisfy $\forall x(x=x)$" is just another example of a junk theorem. However, to my mind it's one of the more worrying ones: unlike e.g. "Is $\pi\in 42$?," it's not totally clear that "Does $V\models \forall x(x=x)$?" is something we'd never accidentally ask in day-to-day mathematics. Ultimately I'm still not worried, but I do think that this highlights the seriousness of the question "Is $X$ a faithful translation of $Y$?"
Finally, on a purely technical level: what about set theories which do allow for functions on the universe, and with respect to which therefore the "internal naive $V\models ...$"-situation is nontrivial? Well, per Tarski/Godel(/etc.) we know that things have to still be nasty. See the end of this old answer of mine for a quick analysis of the specific case of $\mathsf{NF}$.
|
H: Show that there exists a metric $d$ on $\mathbb{R}$ such that $(\mathbb{R},d)$ is compact
I've come across this problem here and I've been trying to solve it. I've tried metrics like $d(x,y) = \ln(1+\frac{|x-y|}{1+|x-y|})$ but these end up not working (I believe this one does not give a totally bounded set). My thinking is that if I can get a metric such that $d(x,y)<|x-y|$ then $(\mathbb{R},d)$ should be complete. But for every such metric $d$ I try, it turns out that either $d$ does not satisfy the triangle inequality (hence $d$ is not a metric) or $(\mathbb{R},d)$ is not totally bounded (hence $(\mathbb{R},d)$ is not compact).
My final thinking is that maybe some theorem can used to show existence of such a metric without an explicit construction, but I've not been able to make any progress this way either.
AI: There exists a bijection $f: \mathbb R \to [0,1]$ (both sets have cardinality $\mathfrak{c}$). Define $d(x,y)=|f(x)-f(y)|$. Then $f$ is an isometry of $(\mathbb R,d)$ onto $[0,1]$ with its usual metric, which is compact; hence $(\mathbb R,d)$ is compact.
|
H: Simpler proof of van Kampen's theorem?
I've been trying to understand the proof of van Kampen's theorem in Hatcher's Algebraic Topology, and I'm a bit confused why it's so long and complicated.
Intuitively, the theorem seems obvious to me. Given a path $p$ in $A \cup B$, we can split it up into paths $p_1p_2...p_n$ that alternate between $A$ and $B$. So we have $\pi_1(A, x) * \pi_1(B, x)$, except that certain paths from $A$ and $B$ are equivalent (the ones in $A \cap B$), so we need to quotient by $\pi_1(A \cap B, x)$.
I'm confused about what the proof in Hatcher's book is doing... Is it just a more detailed version of that idea? Or is there something I'm missing?
Thank you for your help.
AI: Well...the first thing you're missing is that your $n$ might not be finite unless you have some nice assumptions. For instance, if $A$ is the closed upper half-plane, and $B$ is the closed lower half-plane, then the graph of
$$
f(x) = \begin{cases}
x^2 \sin \frac{1}{x} & x \ne 0\\
0 & x = 0
\end{cases}
$$
has infinitely many arcs in $A$ and in $B$. "But," I hear you cry, "we have that $A$ and $B$ are open!" Sure you do...and is that enough to prove finiteness of $n$? Well...yes, kind of, but there's work to do...and there's a half-page of your life gone.
"But after that," you say, "The only subtle thing is getting rid of the equivalent things ... that "amalgamation" stuff, right?" Well... yes and no. Showing that curves that are equivalent modulo the relevant stuff in the amalgamated product are in fact homotopic is subtle. But you also have to prove that those are the ONLY ones that are, and that's subtle too. And there's another page or two.
You might want to remember that once you have the van Kampen theorem, you can prove the Jordan Curve Theorem, for which a correct (and understood-to-be-correct) proof took a very long time. So...it's not likely to be an easy proof.
|
H: Triangle problem with a simpler solution
Problem: In the triangle $\mathit{ABC}$ the angle $A$ is 60°. The “interior” circle has center $O$. If $|\mathit{OB}|=8$, $|\mathit{OC}|=7$, how long is $\mathit{OA}$?
“Solution”: Let the radius be $R$. Since $A=60°$, $|\mathit{OA}|=2R$ and (see image below) $|\mathit{AD}|=|\mathit{AF}|=\sqrt{3}R$. Pythagoras’ Theorem on $\triangle\mathit{BDO}$ and $\triangle\mathit{CFO}$ gives $|\mathit{DB}|=\sqrt{64-R^2}$ and $|\mathit{CF}|=\sqrt{49-R^2}$, so we have
\begin{align*}
b&=\sqrt{3}R+\sqrt{64-R^2}\\
c&=\sqrt{3}R+\sqrt{49-R^2}.
\end{align*}
Cosine Theorem on $\triangle\mathit{ABC}$ gives
$$
a
=\sqrt{b^2+c^2-2bc\cos(60°)}
=\sqrt{b^2+c^2-bc}
$$
which can be expressed in terms of $R$.
Two expressions for the area (left: the formula $\tfrac12 bc\sin A$; right: from the inradius, $R = 2A/(a+b+c)$ where $A$ is the area) give
$$\tfrac{1}{2}bc\sin(60°)=\tfrac{1}{2}(a+b+c)R$$
which can be expressed in terms of $R$ and gives the solution $R=28\sqrt{3}/13\approx3.73057$ using Mathematica (the equation is rather ‘complex’ to write out).
I'm not sure this is correct, however. Drawing it in GeoGebra does not ‘match up’ (see image above), and I think the equation is too advanced to be the correct ‘M.O.’ for this problem. Is there a better, simpler (and correct) way to solve this problem? TIA.
AI: Note that,
$$
OA\sin{30^\circ}=7\sin{\frac{C}{2}}=8\sin{\frac{B}{2}}=r
$$
where $r$ is the radius of incircle.
So,
$$
\begin{align}
7\sin{\frac{C}{2}}&=8\sin{\frac{B}{2}} \\
&=8\cos{\frac{A+C}{2}}\\
&=8\left(\cos{30^\circ}\cos{\frac{C}{2}}-\sin{30^\circ}\sin{\frac{C}{2}}\right)\\
\end{align}
$$
Therefore,
$$
\tan{\frac{C}{2}}=\frac{4\sqrt{3}}{11}
$$
Can you continue from here?
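Carrying the hint through numerically (just a sanity check; it reproduces the asker's value $R=28\sqrt3/13$ for the inradius):

```python
import numpy as np

C_half = np.arctan(4 * np.sqrt(3) / 11)  # tan(C/2) = 4*sqrt(3)/11
r = 7 * np.sin(C_half)                   # from 7 sin(C/2) = r
OA = r / np.sin(np.pi / 6)               # from OA sin(30°) = r, i.e. OA = 2r
print(r, 28 * np.sqrt(3) / 13)           # both ≈ 3.7306
print(OA)                                # ≈ 7.4611, i.e. 56*sqrt(3)/13
```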
|
H: Lie Algebras of Covering of a Group is Isomorphic to the Lie Algebra of the Group.
If $\tilde{G} \to G$ is a covering of the Lie group $G$, why are the associated Lie algebras isomorphic? I.e., why is $Lie(\tilde{G}) \cong Lie( G)$?
AI: There is a natural identification $T_eG\cong \mathfrak{g}$, and we know that a covering map of groups $\pi:\widetilde{G}\to G$ is a diffeomorphism in a neighborhood $U$ of the identity, whence it follows that the differential $d\pi_e:T_e\widetilde{G}\to T_eG$ is an isomorphism. Since $\pi$ is moreover a group homomorphism, this linear isomorphism is compatible with the brackets, giving an isomorphism $\widetilde{\mathfrak{g}}\to \mathfrak{g}$ of the associated Lie algebras.
|
H: $\lim_{x \to +\infty} f(x) + g(x) = +\infty$. True or false?
True or false? If true, justify. If false, give counterexample. If $f,g : \mathbb{R} \to \mathbb{R}$ are functions such that $f$ is bounded and $\lim_{x \to +\infty} g(x) = +\infty$, then $\lim_{x \to +\infty} f(x) + g(x) = +\infty$.
I could not think of a counterexample, so I assumed it is true. I am trying to get used to proving these kinds of statements, so I tried the following:
$f$ is bounded, so $\lim_{x \to +\infty} f(x)$ exists and is finite. $g$ increases without bound as $x \to +\infty$. Therefore, $\lim_{x \to +\infty} f(x) + g(x) = +\infty$.
Is it incomplete? Is it rigorous? Is it correct? Any help is appreciated.
Edit: Just noticed that "$f$ is bounded, so $\lim_{x \to +\infty} f(x)$ exists and is finite" is false. For example, $f(x)=\sin{x}$.
AI: Since $f$ is bounded, there exists $M$ for which $|f(x)| < M$ for all $x$. Hence $f(x)+g(x) \geq g(x) - M$ for every $x$, and therefore $\lim_{x \rightarrow+\infty}\big(f(x) + g(x)\big) \geq \lim_{x \rightarrow+\infty} \big(g(x) - M\big) = +\infty$.
|
H: brownian motion unbounded variation
I have been doing a little bit of reading about random processes and probability theory recently for some research, and I have come across the claim in many places that Brownian motion cannot be treated with Riemann–Stieltjes integration due to the fact that it is of unbounded variation. I have been trying to find a rigorous proof of that, but I have been having a difficult time. I intuitively have the idea that since Brownian motion is considered a continuous random walk, it is theoretically possible for it to exceed whatever bound may be placed on it. Is this the right way of thinking about it? And can anyone produce a more rigorous proof of this? Thanks!
AI: One way to rigorously formulate what you are talking about is as follows. As a process, the sample paths of the usual Brownian motion $(B_t)_{t\in\mathbb R_+}$ are not functions of bounded variation. In fact, almost surely, the sample paths have infinite variation on any interval of the form $[0,t]$ with $t\in\mathbb R_+$.
This presents the following issue if we want to try to define what $\int_0^t X_s\,dB_s$ means, where $(X_t)_{t\in\mathbb R_+}$ is a suitable random process. In the Riemann-Stieltjes approach to integration, we can define integrals of the form $\int_0^t f(s)\,d\alpha(s)$ when $\alpha$ is, say, a function of bounded variation on $[0,t]$, and $f$ is a continuous function on $[0,t]$. When $\alpha$ does not have bounded variation, then there are continuous functions $f$ that are not Riemann-Stieltjes integrable with respect to $\alpha$.
So this means we have to look for another way to interpret what $\int_0^tX_s\,dB_s$ should mean, and the way it is treated in Le Gall's book Brownian Motion, Martingales, and Stochastic Calculus,
for instance, is to proceed by building up the theory of martingales and stochastic integration enough so that we can define expressions of the form $\int_0^t X_s\,dB_s$ as stochastic processes known as stochastic integrals, which are very roughly speaking some kind of martingale-like object.
Le Gall's book is a great reference for this and other topics related to Brownian motion after one has a suitable background in probability theory.
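A simulation makes the unbounded-variation claim tangible (a sketch: summing $|$increments$|$ over a mesh-$T/n$ partition approximates the variation of a sample path along that partition):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0
for n in (10**2, 10**4, 10**6):
    dt = T / n
    increments = rng.normal(0.0, np.sqrt(dt), size=n)
    # the expected sum is n * sqrt(2*dt/pi) = sqrt(2*n*T/pi), which diverges
    print(n, np.abs(increments).sum())
```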
|
H: Non-abelian group of order 165 containing $\mathbb{Z}_{55}$.
There is an example in this note http://www.math.mcgill.ca/goren/MATH370.2013/MATH370.notes.pdf (example 27.1.3 p. 56) that I cannot understand.
I attach the example.
Example 27.1.3 Is there a non-abelian group of order $165$ containing $\mathbb{Z}_{55}$?
In such a group $G$, the subgroup $\mathbb{Z}_{55}$ must be normal (its index is the minimal one dividing
the order of $G$). Since there is always a 3-Sylow, we conclude that $G$ is a semi-direct product
$\mathbb{Z}_{55}\rtimes \mathbb{Z}_{3}$. This is determined by a homomorphism $\mathbb{Z}_{3}\to Aut(\mathbb{Z}_{55}) \simeq(\mathbb{Z}_{55})^{\times}$. The
right hand side has order $\varphi(55) = 4\cdot 10 = 40$. Thus, the homomorphism is trivial and $G$ is a direct
product. It follows that $G$ must be commutative.
Question. Why the homomorphism is trivial? I can't see this.
AI: The homomorphism is trivial because its image is a subgroup of the target whose order must divide both $|\mathbb{Z}_3| = 3$ and $|(\mathbb{Z}_{55})^{\times}| = 40$; since $\gcd(3,40)=1$, the image is trivial. Equivalently, the target group has no subgroup of order 3, by Lagrange's theorem.
|
H: Show the sequence $f_n(x)=\frac{1}{n}\chi_{[0,n]}$ has no weakly convergent subsequence in $L^1$.
Show the sequence $f_n(x)=\frac{1}{n}\chi_{[0,n]}$ has no weakly convergent subsequence in $L^1(\mathbb{R})$.
My observations:
Assume $f_{n_k} \to f$ weakly, then:
The sets where $f$ is positive or where it is negative have to have infinite measure. I denote them $A^+, A^-$. Also $\int f=1$.
I am not sure how to proceed.
AI: Suppose that there is a weakly convergent subsequence $f_{n_k}$ with limit $f\in L_1$. Then $\int_{(a,b]}f=\lim_k\langle f_{n_k},\mathbb{1}_{(a,b]}\rangle$, and $0\le\langle f_{n_k},\mathbb{1}_{(a,b]}\rangle\le\frac{1}{n_k}(b-a)\to0$, so $\int_{(a,b]}f=0$. This means that $f\equiv0$ almost surely by monotone class arguments, but
$$\int f=\langle f,\mathbb{1}\rangle=\lim_k\langle f_{n_k},\mathbb{1}\rangle =1$$
which is a contradiction.
Here is a monotone class argument based on Dynkin systems.
Let $\mathcal{L}$ be the collection of all measurable sets $A$ for which $\int_Af=0$. $\mathcal{L}$ contains the multiplicative class (a $\pi$-system) $\mathcal{C}$ of all bounded intervals $(a,b]$. Since $\mathbb{R}=\bigcup_n(-n,n]$, by dominated convergence (with $|f|$ as dominating function) $\int_{\mathbb{R}}f=\lim_n\int_{(-n,n]}f=0$. It follows that $\mathbb{R}\in\mathcal{L}$. Furthermore, it is easy to show that $\mathcal{L}$ is a $d$-system. Thus it contains the $\sigma$-algebra generated by $\mathcal{C}$, which happens to be the whole collection of Borel sets.
In particular, $\int_{\{f>0\}}f=0$ and $\int_{\{f<0\}}f=0$, and so $f=0$ a.s.
|
H: Compute projection of vector onto nullspace of vector span
Say I have a matrix $\pmb{W}$ of $m$ vectors, each of length $n$: $\pmb{W} =\left[ \vec{W}_1, \dots, \vec{W}_m\right]$, where $\vec{W}_i \in \mathbb{R}^n$ for integers $1\leq i\leq m$. How would I go about computing the projection of a new vector, $\vec{V} \in \mathbb{R}^n$, onto the span of the nullspace of $\pmb{W}$?
When I compute the null space of $\pmb{W}$, I find an $n\times m$ matrix. It is my understanding that I must have a square matrix in order to perform a projection. So, I am confused. It seems like $m$ would have to be equal to $n$ to perform the projection.
Finally, I experimented with a QR factorization to find a square projection matrix associated with $\pmb{W}$. After some experimentation, I did not think this was the right path and gave up.
AI: This might be a useful approach to consider.
Given the following form:
$$
A\mathbf{x}=\mathbf{b}
$$
where $A$ is $m \times n$, $\mathbf{x}$ is $n \times 1$, and $\mathbf{b}$ is $m \times 1$, the projection matrix $P$ that projects onto the subspace spanned by the columns of $A$ (assumed to be linearly independent) is given by:
$$
P=A\left(A^{T}A\right)^{-1}A^{T}
$$
which would then be applied to $\mathbf{b}$ as in:
$$
\mathbf{p}=P\mathbf{b}
$$
In the case you are describing, the columns of $A$ would be the vectors which span the null-space that you have separately computed, and $\mathbf{b}$ is the vector $\vec{V}$ that you wish to project onto the null-space.
I hope this helps.
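Putting this together with a null-space computation, a minimal numpy sketch (the matrix `W` and vector `v` are made-up examples, with the vectors stored as rows so that the null space lives in the same $\mathbb{R}^n$ as $\vec{V}$; the null-space basis comes from the SVD):

```python
import numpy as np

def nullspace_basis(W, tol=1e-12):
    # columns of the returned matrix form a basis of {x : W x = 0}
    _, s, Vt = np.linalg.svd(W)
    rank = int((s > tol).sum())
    return Vt[rank:].T

W = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.]])   # m = 2 vectors in R^4, as rows
A = nullspace_basis(W)             # 4 x 2: the null space is 2-dimensional

v = np.array([1., 2., 3., 4.])
p = A @ np.linalg.solve(A.T @ A, A.T @ v)  # p = A (A^T A)^{-1} A^T v
print(p)        # [-1. -1.  1.  1.]
print(W @ p)    # ~ [0, 0]: the projection indeed lies in null(W)
```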
|
H: Area of a Sector of a Circle Question
In the figure, $AB$ and $CD$ are two arcs subtended at center $O$. $r$ is the radius of the sector $AOB$. I was told to find the radius, $x$ (the angle), and the shaded area. I know $2\pi r\cdot\dfrac x{360} = 13$. And $\pi(r+4)^2 \cdot \frac{x}{360} - \pi r^2 \cdot \frac{x}{360}$ = shaded area
AI: Hint:
$$\begin{align}2\pi r\dfrac{x}{360} &= 13\tag{1}\\ 2\pi (r + 4)\dfrac x{360} &= 17\tag{2}\end{align}$$
What happens if you divide $(1)$ with $(2)$?
$$\begin{align}\dfrac{r}{r + 4} &= \dfrac{13}{17}\implies r = 13 \\ x = \dfrac{13\cdot360}{2\pi\cdot 13} &= \dfrac{180}{\pi}\end{align}$$
|
H: $\sum_{k=0} ^\infty (-1)^k \frac{(2k)!!}{(2k+1)!!} a^{2k+1}$ to differential equation
I want to use $S = \sum_{k=0} ^\infty (-1)^k \frac{(2k)!!}{(2k+1)!!} a^{2k+1}$ and derive the relation $(a^2+1) S'=1-aS$. So far I just have $\frac{dS}{da} = \sum_{k=0} ^\infty (-1)^k \frac{(2k)!!}{(2k+1)!!} (2k+1) a^{2k}$, and I am not seeing how to combine this with $S$ to get that relation. What should I do to progress and get the desired relation?
AI: Turns out to be quite straightforward once the double factorials are replaced.
$S(a) = \sum_{k=0} ^\infty (-1)^k \dfrac{(2k)!!}{(2k+1)!!} a^{2k+1}$.
$\begin{array}\\
\dfrac{(2k)!!}{(2k+1)!!}
&=\dfrac{\prod_{j=0}^{k-1}(2k-2j)}{\prod_{j=0}^{k-1}(2k+1-2j)}\\
&=\dfrac{\prod_{j=1}^{k}(2j)}{\prod_{j=1}^{k}(2j+1)}\\
&=\dfrac{\prod_{j=1}^{k}(2j)^2}{\prod_{j=1}^{k}(2j)(2j+1)}\\
&=\dfrac{4^k(k!)^2}{(2k+1)!}\\
\end{array}
$
$\begin{array}\\
S(a)
&= \sum_{k=0} ^\infty (-1)^k \dfrac{(2k)!!}{(2k+1)!!} a^{2k+1}\\
&= \sum_{k=0} ^\infty (-1)^k \dfrac{4^k(k!)^2}{(2k+1)!} a^{2k+1}\\
S'(a)
&= \sum_{k=0} ^\infty (-1)^k \dfrac{4^k(k!)^2}{(2k)!} a^{2k}\\
a^2S'(a)
&= \sum_{k=0} ^\infty (-1)^k \dfrac{4^k(k!)^2}{(2k)!} a^{2k+2}\\
&= \sum_{k=1} ^\infty (-1)^{k-1} \dfrac{4^{k-1}((k-1)!)^2}{(2k-2)!} a^{2k}\\
(a^2+1)S'(a)
&= \sum_{k=1} ^\infty (-1)^{k-1} \dfrac{4^{k-1}((k-1)!)^2}{(2k-2)!} a^{2k}+\sum_{k=0} ^\infty (-1)^k \dfrac{4^k(k!)^2}{(2k)!} a^{2k}\\
&= 1+\sum_{k=1} ^\infty \left((-1)^{k-1}\dfrac{4^{k-1}((k-1)!)^2}{(2k-2)!}+(-1)^k \dfrac{4^k(k!)^2}{(2k)!}\right) a^{2k}\\
&= 1+\sum_{k=1} ^\infty (-1)^{k-1} \dfrac{4^{k-1}((k-1)!)^2}{(2k-2)!}\left(1- \dfrac{4k^2}{(2k)(2k-1)}\right) a^{2k}\\
&= 1+\sum_{k=1} ^\infty (-1)^{k-1} \dfrac{4^{k-1}((k-1)!)^2}{(2k-2)!}\left(\dfrac{2k(2k-1)-4k^2}{(2k)(2k-1)}\right) a^{2k}\\
&= 1+\sum_{k=1} ^\infty (-1)^{k-1} \dfrac{4^{k-1}((k-1)!)^2}{(2k-2)!}\left(\dfrac{-2k}{(2k)(2k-1)}\right) a^{2k}\\
&= 1+\sum_{k=1} ^\infty (-1)^{k} \dfrac{4^{k-1}((k-1)!)^2}{(2k-1)!} a^{2k}\\
1-aS(a)
&= 1-a\sum_{k=0} ^\infty (-1)^k \dfrac{4^k(k!)^2}{(2k+1)!} a^{2k+1}\\
&= 1+\sum_{k=0} ^\infty (-1)^{k+1} \dfrac{4^k(k!)^2}{(2k+1)!} a^{2k+2}\\
&= 1+\sum_{k=1} ^\infty (-1)^{k} \dfrac{4^{k-1}((k-1)!)^2}{(2k-1)!} a^{2k}\\
\end{array}
$
and they match.
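If you want to double-check the identity before proving it, a truncated-series sympy sketch (the truncation order $K$ is arbitrary):

```python
import sympy as sp

a = sp.symbols('a')
K = 25  # truncation order

S = sum((-1)**k * 4**k * sp.factorial(k)**2 / sp.factorial(2*k + 1) * a**(2*k + 1)
        for k in range(K))
diff = sp.expand((a**2 + 1) * sp.diff(S, a) - (1 - a * S))

# every coefficient below the truncation degree must vanish
print(all(diff.coeff(a, j) == 0 for j in range(2 * K - 1)))  # True
```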
|
H: If $f(x)= (x-a)(x-b)$, the minimum number of roots of the equation $\pi(f'(x))^2 \cos(\pi(f(x))) + \sin(\pi(f(x)))f''(x) =0$
If $f(x)= (x-a)(x-b)$ for $a,b \in \mathbb{R}$, then the minimum number of roots of the equation
$$\pi(f'(x))^2 \cos(\pi(f(x))) + \sin(\pi(f(x)))f''(x) =0$$
in $(\alpha,\beta)$, where $f(\alpha) = 3 = f(\beta)$ and $\alpha <a<b<\beta$, will be:
AI: Hint
You can rewrite that equation as
$$\frac{d}{dx}\left(\sin(\pi f(x))f'(x)\right)|_{x=x^*} = 0$$
Consider the function $g(x) = \sin(\pi f(x))\,(2x - (a+b))$, i.e. $g = \sin(\pi f)\, f'$.
Between any two roots of this function there will be at least one point at which its derivative vanishes (Rolle's theorem). Find the number of roots of $g$ in $(\alpha, \beta)$; the answer is then at least one less than that.
|
H: Restriction of endomorphism on its image
Berkeley problems
Problem 7.4.7 Let $V$ be a finite-dimensional vector space and let $f:V\rightarrow V$ be a linear transformation. Let $W$ denote the image of $f$. Prove that the restriction of $f$ to $W$, considered as an endomorphism of $W$, has the same trace as $f:V\rightarrow V$.
Let $v$ be an eigenvector with eigenvalue $\lambda \neq 0$. Since $\lambda v\in W$, $v=\frac{1}{\lambda}(\lambda v)\in W$. So the restriction has the same nonzero eigenvalues. How do I prove that their algebraic multiplicities are also the same?
Please give a hint. Thanks!
AI: I'd go for a direct approach:
Choose a basis $\{w_1, \dots ,w_k\}$ of $\text{im}(f)$ and extend this to a basis $\beta=\{w_1, \dots, w_k,v_{k+1}, \dots ,v_n\}$ of $V$. Write $A$ for the matrix of $f$ w.r.t. the basis $\beta$. Clearly $A$ is of the form
$$\begin{pmatrix}*&*&\dots &*&\bullet&\dots &\bullet\\*&*&\dots &*&\bullet&\dots &\bullet\\ \vdots&\vdots& & \vdots&\vdots&&\vdots\\*&*&\dots &*&\bullet&\dots &\bullet\\0&0&\dots &0&0&\dots&0\\\vdots&\vdots& & \vdots&\vdots&&\vdots\\0&0&\dots &0&0&\dots&0
\end{pmatrix}$$
The result follows by examining this shape: since the image of $f$ is $W$, every column of $A$ lies in the span of the first $k$ basis vectors, so the bottom $n-k$ rows are zero. Hence the trace of $A$ equals the trace of the upper-left $k\times k$ block, which is exactly the matrix of $f|_W$ with respect to $\{w_1,\dots,w_k\}$.
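A numerical illustration of the block argument (a sketch; the rank deficiency is forced by hand, and the orthonormal basis of the image comes from the SVD):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
A[:, 3] = A[:, 0] + A[:, 1]   # force im(A) to be a proper subspace
A[:, 4] = A[:, 0] - A[:, 2]

U, s, _ = np.linalg.svd(A)
k = int((s > 1e-10).sum())
W = U[:, :k]                  # orthonormal basis of im(A)

B = W.T @ A @ W               # matrix of the restriction of A to im(A)
print(np.trace(A), np.trace(B))  # the two traces agree
```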
|
H: Is there a simple rule defining the sequence $\frac 1 2, 1, -\frac 1 2, -1, \frac 1 4, \frac 1 2, \dots$?
I'm revisiting one of my old topology texts: "Introduction to Metric and Topological Spaces" by W.A. Sutherland, 1975 (the 1981 reprint with corrections), Oxford Science Publications.
One of the example illustrating how a sequence can be defined by a rule goes:
"Examples 1.2.1. (c) $\frac 1 2, 1, -\frac 1 2, -1, \frac 1 4, \frac 1 2, \dots,$
"In these examples, there is a simple formula for $s_n$ in terms of $n$, which the reader will spot."
No, I can't see an "obvious" rule to define this sequence.
In the course lecture notes there was a comment to the effect that the tutorial team themselves had not been able to spot the rule either.
It would of course be possible to define any arbitrary rule, but it would be pretty contrived, and there would be no guarantee that it would generate the "correct" continuation.
I understand that Sutherland has more recently run to a second edition (2009) but I have not laid hands on it to check whether this has been amended.
But the immediate question is: can anyone identify what the "simple formula" may actually be?
AI: I guess it is $$\dots,\frac{1}{2^n},\frac{1}{2^{n-1}}, -\frac{1}{2^n}, -\frac{1}{2^{n-1}},\dots,$$ in blocks of four for $n=1,2,\dots$: the block $n=1$ gives $\frac12, 1, -\frac12, -1$, and the block $n=2$ starts $\frac14, \frac12,\dots$
|
H: Parametric solution of a Diophantine equation of three variables
I came across this Diophantine equation $$4x^2+y^4=z^2$$
Primitive solutions of this equation can be found by
\begin{align}
\begin{split}
x&=2ab(a^2+b^2)\\
y&=a^2-b^2\\
z&=a^4+6a^2b^2+b^4\\
\end{split}
\end{align}
where $a$, $b$ are relatively prime, $1 \leq b < a$, and one of the two is odd while the other is even.
I would like to know the intermediate steps that are required to find such a parametrization. I tried to manipulate the equation:
$$4x^2+y^4=z^2 \implies 4x^2=(z+y^2)(z-y^2)$$ and then using parity check to further simplify it.
Another approach was
$$4x^2+y^4=z^2 \implies 4xy^2=(2x+y^2+z)(2x+y^2-z)$$
I could not make any more progress. Some help will be appreciated.
AI: As JCAA's question comment suggests, rewriting the equation as $(2x)^2 + (y^2)^2 = z^2$ shows $2x$, $y^2$ and $z$ form a Pythagorean triple. Primitive solutions are obtained from
$$2x = 2mn \implies x = mn \tag{1}\label{eq1A}$$
$$y^2 = m^2 - n^2 \implies y^2 + n^2 = m^2 \tag{2}\label{eq2A}$$
$$z = m^2 + n^2 \tag{3}\label{eq3A}$$
where $m$, $n$ are relatively prime and $1 \leq n < m$.
Note \eqref{eq2A} shows $y$, $n$ and $m$ form another Pythagorean triple. In this case, it's another primitive solution since $\gcd(m,n) = 1$ means $y$ is also relatively prime to $m$ and $n$. Also, since $y$ is odd (due to $2x$ being even in the original equation), you get
$$y = a^2 - b^2 \tag{4}\label{eq4A}$$
$$n = 2ab \tag{5}\label{eq5A}$$
$$m = a^2 + b^2 \tag{6}\label{eq6A}$$
where $a$, $b$ are relatively prime and $1 \leq b < a$. Plugging \eqref{eq5A} and \eqref{eq6A} into \eqref{eq1A} gives
$$x = 2ab(a^2 + b^2) \tag{7}\label{eq7A}$$
while plugging \eqref{eq5A} and \eqref{eq6A} into \eqref{eq3A} gives
$$\begin{equation}\begin{aligned}
z & = (a^2 + b^2)^2 + (2ab)^2 \\
& = a^4 + 2a^2b^2 + b^4 + 4a^2b^2 \\
& = a^4 + 6a^2b^2 + b^4
\end{aligned}\end{equation}\tag{8}\label{eq8A}$$
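A quick brute-force check of the parametrization (a sketch; it runs over small coprime pairs of opposite parity):

```python
from math import gcd

for a in range(2, 7):
    for b in range(1, a):
        if gcd(a, b) != 1 or (a + b) % 2 == 0:
            continue  # need a, b coprime and of opposite parity
        x = 2 * a * b * (a * a + b * b)
        y = a * a - b * b
        z = a**4 + 6 * a * a * b * b + b**4
        assert 4 * x * x + y**4 == z * z
        print((a, b), (x, y, z))
```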
|
H: How to evaluate $\int _0^{\frac{\pi }{2}}x\ln \left(\sin \left(x\right)\right)\:dx$
How can I evaluate $$\int _0^{\frac{\pi }{2}}x\ln \left(\sin \left(x\right)\right)\:dx$$
I started like this
$$\int _0^{\frac{\pi }{2}}x\ln \left(\sin \left(x\right)\right)\:dx=\frac{x^2\ln \left(\sin \left(x\right)\right)}{2}|^{\frac{\pi }{2}}_0-\frac{1}{2}\int _0^{\frac{\pi }{2}}x^2\cot \left(x\right)\:dx$$
but this doesn't make things any simpler. I also tried using the substitution $t=\tan \left(\frac{x}{2}\right)$ and got this:
$$4\int _0^{1}\arctan \left(t\right)\ln \left(\frac{2t}{1+t^2}\right)\:\frac{1}{1+t^2}\:dt$$
$$=4\ln \left(2\right)\int _0^{1}\frac{\arctan \left(t\right)}{1+t^2}\:dt+4\int _0^{1}\frac{\arctan \left(t\right)\ln \left(t\right)}{1+t^2}\:dt-4\int _0^{1}\frac{\arctan \left(t\right)\ln \left(1+t^2\right)}{1+t^2}\:dt$$
That first integral is very simple, but the rest look very difficult. Could you help me evaluate this one?
AI: $$\int_0^{\pi/2}x\ln(\sin x)dx=\int_0^{\pi/2}x\left(-\ln2-\sum_{n=1}^\infty\frac{\cos(2nx)}{n}\right)dx$$
$$=-\frac{\pi^2}{8}\ln2-\sum_{n=1}^\infty\frac{1}{n}\int_0^{\pi/2}x\cos(2nx)dx$$
$$=-\frac{\pi^2}{8}\ln2-\sum_{n=1}^\infty\frac{1}{n}\left(\frac{\cos(n\pi)}{4n^2}+\frac{\pi\sin(n\pi)}{4n}-\frac{1}{4n^2}\right)$$
$$=-\frac{\pi^2}{8}\ln2-\sum_{n=1}^\infty\frac{1}{n}\left(\frac{(-1)^n}{4n^2}+\frac{0}{4n}-\frac{1}{4n^2}\right)$$
$$=-\frac{\pi^2}{8}\ln2-\frac14\text{Li}_3(-1)+\frac14\zeta(3)$$
$$=-\frac{\pi^2}{8}\ln2+\frac{7}{16}\zeta(3)$$
Bonus: With subbing $x\to \pi/2-x$ we have
$$\int_0^{\pi/2}x\ln(\cos x)dx=\int_0^{\pi/2}(\pi/2-x)\ln(\sin x)dx$$
$$=\frac{\pi}{2}\int_0^{\pi/2}\ln(\sin x)dx-\int_0^{\pi/2}x\ln(\sin x)dx$$
$$=\frac{\pi}{2}\left(-\frac{\pi}{2}\ln2\right)-\left(-\frac{\pi^2}{8}\ln2+\frac{7}{16}\zeta(3)\right)$$
$$=-\frac{\pi^2}{8}\ln(2)-\frac7{16}\zeta(3)$$
Or we can use the Fourier series of $\ \ln(\cos x)=-\ln2-\sum_{n=1}^\infty\frac{(-1)^n\cos(2nx)}{n}$.
Also by subtracting the two integrals gives
$$\int_0^{\pi/2}x\ln(\tan x)dx=\frac78\zeta(3)$$
Or we can use the Fourier series of $\ \ln(\tan x)=-2\sum_{n=1}^\infty\frac{\cos((4n-2)x)}{2n-1}.$
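A numerical confirmation of the main result (a sketch using scipy; the logarithmic singularity at $0$ is integrable, so quad handles it):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

numeric, _ = quad(lambda x: x * np.log(np.sin(x)), 0, np.pi / 2)
closed = 7 * zeta(3) / 16 - np.pi**2 * np.log(2) / 8
print(numeric, closed)  # both ≈ -0.3292
```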
|
H: Showing $X_{(n)}$ is not complete for $\theta \in [1,\infty)$ when $X_i$'s are i.i.d $\text{Unif}(0,\theta)$
I am trying to show that the order statistic $X_{(n)}$ of random variables $\{X_i\}_{1}^{n}$, where $X_i\overset{iid}\sim \text{Unif}(0,\theta)$, is complete when $\theta \in (0,\infty)$ but not when $\theta \in [1,\infty)$.
By completeness, $E_\theta[g(X_{(n)})]=0$ for all $\theta$ implies $g(X_{(n)})=0$ a.e.
The density of $X_{(n)}$ is $n\theta^{-n}x^{n-1}$ on $(0,\theta)$, so if
$$E_\theta[g(X_{(n)})]=\int_{0}^{\theta}g(x)\,n\theta^{-n}x^{n-1}\,dx=0 \quad\text{for all }\theta,$$
this implies
$$g(\theta)=0, \forall \theta \in(0,\infty).$$
Since every possible value of $X_{(n)}$ equals some $\theta\in(0,\infty)$, one can conclude $g(X_{(n)})=0$.
When the parameter space is restricted to $\theta \in [1,\infty)$, then by the argument above one can only conclude
$$g(\theta)=0, \forall \theta \in [1,\infty),$$
which does not guarantee that $g(x)=0$ for $x\in(0,1)$.
I am having trouble using this to justify non completeness.
AI: To demonstrate that $X_{(n)}$ is not complete, you need to come up with a nonzero function $g$ such that $$E_\theta[g(X_{(n)})]=0\quad\text{for all $\theta\ge 1$}.\tag1$$ You have already shown that any $g$ that satisfies (1) must have $g(\theta)=0$ for all $\theta\ge1$, so your job is to think of a $g$ that's nonzero but still satisfies
$$
\int_0^1 g(x)n\theta^{-n}x^{n-1}dx=0\quad\text{for all $\theta\ge1$}.\tag2
$$
(Note that you should be using a consistent variable of integration throughout (2), and not alternating between $x$ and $X_{(n)}$.) You can factor out the $n\theta^{-n}$ from (2), which simplifies your requirement to
$$\int_0^1g(x)x^{n-1}dx=0.\tag3$$
Many choices of $g$ exist for (3). Hint: You could take $g$ to be piecewise constant, for instance. Maybe the quickest choice (making the integration in (3) as painless as possible) would be $$g(x):=\begin{cases}1-cx&0<x<1\\0&x\ge 1\end{cases}$$ for a constant $c$ that you have to determine.
|
H: Lottery probability -> Does winning affect others?
I came up with this question today, since in Italy somebody has won the national lottery:
(I know nothing about statistics.)
There is a town with 10 spots where you can play the lottery, and 1000 people play at each spot.
One of these people wins. The probability that the same person could possibly win again in the future is, I imagine, very small. So the probability that the same person wins twice in their life would be (at least to my understanding) very remote.
But if I take this statement and expand it, I could say the same thing for the lottery spot they played at. I've never heard of any shop in history where the jackpot was cracked twice. So my question is: does this win in any way statistically affect the other 999 people playing at that shop? If so, what about the entire city?
My logical thinking tells me that if I have my numbers, it doesn't make any difference where I play, since the numbers don't change whether I play them at one spot or another... On the other hand, it is very unlikely to have a winner at the same spot twice.
AI: It's very unlikely that any given person wins twice. But given that he has already won once, he is as likely to win a second time as everybody else is to win for the first time.
Same for spots and similar. Thus, winning once does not affect the probability to win a second time, or everybody else's probability to win for the first time.
So why don't you hear more about people winning twice? Think of people as red and blue balls: they are red if they didn't won there lottery, yet, and blue otherwise. Now you draw by random a ball with equal probability (decide on the winner of the next lottery). Even though this means that every single ball has the same chance to be drawn, the color of the ball will most likely be red (=first time winner), since most people didn't yet win the lottery and thus most balls are red. Thus, as long as most people didn't win the lottery, the next winner will most likely be a first time winner, simply because there are way more potential first time winners than potential two time winners.
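A short Monte Carlo sketch makes this concrete (the player count and draw count below are arbitrary illustrative assumptions): the immediately preceding winner wins the next draw at the same rate, about $1/\text{players}$, as any fixed player.

```python
# Sketch: a past winner's chance on the next draw equals everyone else's.
import random

players = 1000
draws = 1_000_000
prev = None
prev_repeats = 0   # previous draw's winner wins again immediately
fixed_wins = 0     # a fixed reference player (player 0) wins
for _ in range(draws):
    w = random.randrange(players)
    if w == prev:
        prev_repeats += 1
    if w == 0:
        fixed_wins += 1
    prev = w
print(prev_repeats / draws, fixed_wins / draws, 1 / players)  # all ~0.001
```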
|
H: Splitting Lemma: cokernel vs kernel being isomorphic
In Algebra: Chapter 0, the author proves the first part of the splitting lemma with the following: Proposition:
And the proof:
However, why do we have that coker $\phi$ $\cong$ ker $\psi$? I see that this would hold when $\phi$ is surjective, as then ker $\psi$ would be the trivial group and so would coker $\phi$; however, I'm not sure what an isomorphism between ker $\psi$ $\subset N$ and coker $\phi$ would be in general.
Thanks in advance
AI: Note that the author shows that $\theta\colon N\to M\oplus \ker(\psi):n\mapsto (\psi(n),n-\phi\psi(n))$ is an isomorphism.
Recall that kernels, cokernels, etc..., are defined up to isomorphism. Thus the cokernels of the maps $\phi$ and $\theta\circ \phi$ are isomorphic, i.e. $\text{coker}(\theta\circ \phi)\cong \text{coker}(\phi)$.
Now $$\theta\circ \phi\colon M\to M\oplus \ker(\psi):m\mapsto (\psi\phi(m), \phi(m)-\phi\psi\phi(m))=(m,\phi(m)-\phi(m))=(m,0).$$
As the cokernel of a module map $f\colon X\to Y$ is isomorphic to $Y/\text{im}(f)$, we find that $\text{coker}(\theta\circ \phi)\cong (M\oplus \ker(\psi))/M\cong \ker(\psi)$.
|
H: Find a subgroup of index 3 of dihedral group $D_{12}$
Find a subgroup of index 3 in the dihedral group $D_{12}$. I know the number of elements in $D_{12}$ is 24, and also that if we have this subgroup of index 3, then $|H|=24/3=8$, where $H$ is our wanted subgroup, but I don't know how to go further.
I am new to this type of problems and I do not have many examples, could you provide a full proof, or at least in the form of an answer, such that it would serve as a model for similar problems I encounter? Thank you very much!!!
AI: $D_{12}$ is generated by $a,b$ with $a^2=b^{12}=1, aba=b^{-1}$. Then $b^3$ generates a subgroup $A$ of order $12/3=4$. That subgroup is normal in $D_{12}$ because $ab^3a=b^{-3}=b^9=(b^3)^3$. Then $a$ and $A$ generate a subgroup $H$ of order 8. The index of that subgroup is then 3.
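If sympy is available, the claim can be machine-checked (a sketch; sympy's DihedralGroup(12) is the order-24 symmetry group of the 12-gon, and its two generators are a rotation and a reflection):

```python
# Sketch: the subgroup generated by b^3 and a has order 8, hence index 3.
from sympy.combinatorics.named_groups import DihedralGroup
from sympy.combinatorics.perm_groups import PermutationGroup

G = DihedralGroup(12)
r, s = G.generators                 # rotation (role of b) and a reflection (role of a)
H = PermutationGroup([r**3, s])
print(G.order(), H.order(), G.order() // H.order())   # 24 8 3
```

Explicitly, $H=\{e, b^3, b^6, b^9, a, ab^3, ab^6, ab^9\}$.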
|
H: Yes/ No Is $(X,T)$ is connected?
Given $X= \{ a, b , c, d , e\}$ and $T= \{ X , \emptyset , \{a\} ,\{c,d\}, \{a , c, d\} , \{b ,c, d, e\} \}$. Is $(X,T)$ is connected ?
My attempt: I think yes; take the two open sets $\{a\}$ and $\{a,c,d\}$: we have $\{a\} \cap \{a,c,d\} \neq \emptyset$, and this implies that $(X,T)$ is connected.
Is it true or not?
AI: It is not connected because it is the union of two disjoint open subsets $\{a\}$ and $\{b,c,d,e\}$.
|
H: Maximum eigenvalue of a square matrix whose rows are normalized (2-norm) to 1
Consider a positive definite square matrix $A$ of size $n\times n$, with rows $A_i$, such that $||A_i||_2=1$. For such a matrix, I have checked that the maximum eigenvalue is upper-bounded by $\sqrt n$.
How do I prove this?
Note: We can add assumption of symmetric $A$ if required
AI: Let $A=(a_{jk})$. That $A$ is positive definite is not needed!
Let $ \lambda$ be an eigenvalue of $A$ and $x$ with $||x||_2=1$ such that $Ax=\lambda x$.
Then there is $j \in \{1,2,...,n\}$ such that $|x_j|^2 \ge 1/n.$
It follows, with Cauchy-Schwarz, that
$$|\lambda| |x_j|=|a_{j1}x_1+...+a_{jn}x_n| \le ||A_j||_2||x||_2=1.$$
Hence
$$| \lambda| \le \frac{1}{|x_j|} \le \sqrt{n}.$$
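A quick numerical sketch of the bound (no symmetry or definiteness assumed; the random matrices below are an illustrative assumption):

```python
# Sketch: matrices with unit 2-norm rows have all eigenvalues within sqrt(n).
import numpy as np

rng = np.random.default_rng(0)
for n in (2, 5, 20):
    worst = 0.0
    for _ in range(200):
        A = rng.standard_normal((n, n))
        A /= np.linalg.norm(A, axis=1, keepdims=True)   # normalize each row
        worst = max(worst, np.abs(np.linalg.eigvals(A)).max())
    print(n, worst, np.sqrt(n))   # observed worst |lambda| stays below sqrt(n)
```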
|
H: Inequality of a linear operator on Hilbert space.
Let $T \colon \mathcal H \to \mathcal H$ be a linear operator and let $x,y \in \mathcal H$. We assume that
$$ \forall \ z \in \mathcal H, \ \left \langle y-Tz,x-z \right \rangle \ge 0. $$
Show that $Tx=y$.
Any hints?
AI: Let $z=x-th$, where $t>0$ and $h \in \mathcal H$; the hypothesis gives $\langle y-Tx+tTh,\, th \rangle \ge 0$, and dividing by $t>0$,
$\langle y-Tx,h \rangle + t \langle Th, h \rangle \ge 0$ for all $h$ and all $t>0$.
Letting $t \to 0^+$ gives $\langle y-Tx,h \rangle \ge 0$ for all $h$; applying this to $-h$ as well shows $\langle y-Tx,h \rangle = 0$ for all $h$, and choosing $h=y-Tx$ we get
$\|y-Tx\|^2 = 0$ or $Tx=y$.
|
H: Evaluate $1-x+x^2-x^3+\cdots$
In some problem, I have to use the expression
$$\sum^\infty_{k=0}(-1)^kx^k=1-x+x^2-x^3+\cdots$$
I know about Taylor series, but I'm not sure how to find a closed form for this one. It's similar to the $\log(1+x)$ series. Any help will be appreciated.
AI: For $|y| < 1$,
$$
1 + y + y^2 + \dots + y^n = \frac{1-y^{n+1}}{1-y}
$$
so taking the limit $ n \to \infty$ we get
$$
\sum_{k=0}^\infty y^k = \frac{1}{1-y}.
$$
Now take $y = -x$ and you get what you want.
Have I answered your question?
|
H: Prove The following inequality $(ax+by)^2 \le ax^2+by^2$ for $a+b=1$
Prove The following inequality $(ax+by)^2 \le ax^2+by^2$ for $a+b=1, 0 \le a,b \le 1$
I tried expanding the equation and substituting $b=1-a$
\begin{equation}
(ax+by)^2=a^2x^2+2abxy+b^2y^2=a^2x^2+2axy-2a^2xy+b^2y^2
\end{equation}
The middle term $2axy-2a^2xy$ is negative only for $a>1$ (assuming $xy>0$), so I'm not sure how to proceed from here.
In the hints I was given, it was also said that it can be done by proving that the quadratic form $q(x,y)=ax^2+by^2-(ax+by)^2$ is always nonnegative. I tried finding the eigenvalues but ended up with a huge equation I couldn't make sense of.
AI: We can use $\frac{x^2}{p}+\frac{y^2}{q}\geq\frac{(x+y)^2}{p+q}$ (Cauchy-Schwarz in Engel form, valid for $p,q>0$),
$$ax^2+by^2=\frac{(ax)^2}{a}+\frac{(by)^2}{b}\geq\frac{(ax+by)^2}{a+b}$$
$$\Rightarrow ax^2+by^2\geq{(ax+by)^2}$$
|
H: Let $X$ be a Banach space, and let $U$ be a finite dimensional subspace, then there is a closed subspace $V$ s.t. $X=U\bigoplus V$
Let $X$ be a Banach space, and let $U$ be a finite dimensional subspace; then there is a closed subspace $V$ s.t. $X=U\bigoplus V$.
MY attempt:
Let $U=\operatorname{Span}\{v_1,...,v_n\}$ and consider the following bounded operator $F:U \to \mathbb{R}^n$, with the $F_k$ being linear functionals,
$F(x)=F(\alpha_1 v_1+...+\alpha_n v_n)=(F_1(x),...,F_n(x))=(\alpha_1,..,\alpha_n)$. Now by Banach extension theorem we can extend each $F_k$ and so $F$ to the entire $X$. Note $F$ still remains bounded.
Now let $V=\ker(F)$. I claim this choice of $V$ works. Let $x\in X$; then $F(x)=(\alpha_1,..,\alpha_n)$ and so $x=\alpha_1v_1...+(x-\alpha_1v_1...)$. Uniqueness is easy to see.
Does this work?
AI: Your idea is almost correct, but as mentioned in the comments you can only apply Hahn-Banach to linear functionals. So consider $\{e_1,...,e_n\}$ a basis for $U$; by the Hahn-Banach Theorem you can get linear functionals $\{f_1,...,f_n\}$ such that $f_i(e_j)=\delta_{ij}$, and then consider $V=\cap_{i=1}^{n}\ker f_i$.
|
H: An easy way to define the sequence $0$, $1$, $0$, $\frac12$, $1$, $0$, $\frac13$, $\frac23$, $1$, $0$, $\frac14$, $\frac24$, $\frac34$, $1$, $\ldots$?
Define $a_0=0$, $a_1=1$, $a_2=0$, $a_3=\frac 1 2$, $a_4=1$, $a_5=0$, $a_6=\frac 1 3$, $a_7=\frac 2 3$, $a_8=1$, $a_9=0$, $a_{10}=\frac 1 4 $, $a_{11}= \frac 2 4$, $a_{12}= \frac 3 4$, $a_{13}=1$ and so on.
How can we define it recursively or by a closed form?
AI: A simple recursion is awkward here, as each term does not depend on the previous terms in a uniform way. You can actually do one better: you can find $a_n$ directly in terms of $n$.
Now, for $n$, we need to find the largest $k$ such that
$$\frac{k(k+1)}{2} -1\leq n$$
$$\implies k^2 +k-2(n+1) \leq 0$$
$$k_{\max} = \left\lfloor\frac{-1 + \sqrt{8n+9}}{2}\right\rfloor$$
Once you find $k_{\max}$, you know that $a_n$ lies in the block $0, \frac{1}{k_{\max}}, \dots, 1$, which starts at index $\frac{k_{\max}(k_{\max}+1)}{2}-1$; hence the term itself is $$a_n = \frac{n - \frac{k_{\max}(k_{\max}+1)}{2} + 1}{k_{\max}}.$$
|
H: Power series involving the Von Mangoldt-Function
I've been studying a proof of the Prime Number Theorem, given by D. V. Widder, where in one part he uses the identity $$\sum_{n=1}^{\infty}\frac{(\Lambda(n)-1)}{1-e^{-nx}}e^{-nx} = \sum_{n=1}^{\infty}(\log(n)-\tau(n))e^{-nx},$$ where $\Lambda(n)$ is the Von Mangoldt-Function and $\tau(n)$ is the number of divisors of n.
Can anybody help or give a hint how to see this identity?
AI: Expand in the LHS $\frac{1}{1-e^{-nx}}$ as $\sum_{m=1}^{\infty}{e^{-mnx}}$.
Then the coefficient in front of $e^{-nx}$, where $n \geq 1$, is $\sum_{d|n}{\Lambda(d)-1}$.
But it’s easy to see that $\sum_{d|n}{\Lambda(d)}=\log(n)$, which concludes the argument.
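The divisor-sum identity behind this coefficient comparison is easy to check numerically (a sketch, assuming sympy is available for factoring):

```python
# Sketch: sum_{d|n} Lambda(d) = log n, so the e^{-nx} coefficient on the left,
# sum_{d|n} (Lambda(d) - 1), equals log(n) - tau(n) as claimed.
import math
from sympy import divisors, factorint

def mangoldt(n):
    f = factorint(n)               # Lambda(n) = log p if n = p^k, else 0
    return math.log(next(iter(f))) if len(f) == 1 else 0.0

for n in range(2, 200):
    assert abs(sum(mangoldt(d) for d in divisors(n)) - math.log(n)) < 1e-9
print("sum_{d|n} Lambda(d) = log n checked for n < 200")
```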
|
H: The tangent line is the best "linear" approximation to the graph of a differentiable function
I wanted to understand what it means that the tangent line is the best linear approximation to the graph of a differentiable function at the point of tangency.
I've looked in several books and I don't understand anything yet.
P.S. Where I wrote "linear approximation", read "affine approximation"... I saw that they don't use this much, but I thought about it like the parabola and its tangent at the point $(0,0)$, which is the $x$-axis... Like: how do I prove it is the best approximation? It's intuitively obvious, but I want a proof.
Thanks in advance to whoever help me. ^^
AI: Let $f$ be a function defined on an open subset $U$ of $\mathbb{R}$, differentiable at $a\in U$. Then for $x\in U$
$$f(x)=f(a)+f'(a)(x-a)+o_a(x-a) \tag{1}$$
What you want to show is that the function $x\mapsto f'(a)(x-a)+f(a)$ is the only affine function verifying this kind of property. To do so, suppose that there is another affine function $g(x)=\alpha (x-a)+\beta$ defined on $\mathbb{R}$ such that
$$\forall x\in U, f(x)=g(x)+o_a(x-a) \tag{2}$$
Evaluating $(2)$ in $a$ yields
$$\beta=g(a)=f(a)$$
Moreover, on the one hand the definition of $g$ gives
$$\forall x\in\mathbb{R}, x\neq a, \frac{g(x)-g(a)}{x-a}=\frac{\alpha(x-a)}{x-a}=\alpha \tag{3}$$
And on the other hand, equation $(2)$ implies that
$$\forall x\in U, x\neq a, \frac{g(x)-g(a)}{x-a}=\frac{f(x)-f(a)}{x-a}+o_a(1)$$
Letting $x\rightarrow a$ in this expression and using $(3)$ gives
$$\alpha = f'(a)$$
Therefore, $g(x)=f'(a)(x-a)+f(a)$. We have shown that $x\mapsto f'(a)(x-a)+f(a)$ is the affine function verifying $(1)$.
|
H: Why is $2$ considered a singular point for $f(x) = \frac{x-2}{x^2-x-2}$?
Let $$g(x) = \frac{1}{x^2-x-2} = \frac{1}{(x-2)(x+1)}$$
The domain of this function is apparently $D(g) = \{x \in \mathbb{R} : x \notin \{2,-1\}\}$
Now let $$f(x) = \frac{x-2}{x^2-x-2} = \frac{x-2}{(x-2)(x+1)}$$
The graph suggests that its domain is also $D(f) = \{x \in \mathbb{R} : x \notin \{2,-1\}\}$
But, if we write $f$ as
$$f(x) = \frac{x-2}{(x-2)(x+1)} = \frac{1}{(x+1)} $$
We actually notice that $2$ is not a singular point. Basically, the denominator is not zero at $2$ therefore $2$ belongs to the domain of $f$.
Apparently, I am making a trivial mistake here but I can't understand what the mistake is. Could someone explain this to me?
AI: A function does not only consist of a functional equation. Its domain and codomain are essential parts of a function, and you can have a function with the same functional equation but different domains and codomains. Often, the domain is not specified, and instead it's implied that the domain is the largest set on which the given functional equation is well-defined. With this in mind, the two functions
$$f(x)=\frac{x-2}{(x-2)(x+1)}$$
and
$$\tilde f(x)=\frac{1}{x+1}$$
are not the same. The expression in the functional equation of $f$ is well defined everywhere except $-1$ and $2$ (at $x=2$ you'd get the expression $\frac{0}{0}$, which is undefined), while the expression in the functional equation of $\tilde f$ is well-defined everywhere except $-1$. You could notice that if you do specify the domain of $f$ to exclude $2$, then their functional equations are the same, while they have different domains. But if you don't specify the domain, then it's implied by the fact that the equations are well-defined on different sets, and those sets are chosen as domains.
|
H: Question about proof of 'There are infinitely many primes $p$ with $p \equiv 2(\text{mod3})$'
I have read other proofs, but I am stuck on the one from my algebra class.
Hope someone could help me. Thanks a lot.
Prove by contradiction. Let $ \{ p_1,\dots p_n\} $ be our finite primes with $p_i \equiv 2 (\text{mod3})$ $\forall i$.
Let $$m=1+p_1^{2}\dots p_n^{2}$$
Then $m\equiv 2 (\text{mod3})$.
By fundamental theorem of arithmetic $m=q_1\dots q_t$, where $q_i$ is prime $\forall i$
How can I prove that $q_i=3$ or $q_i\equiv1 \ (\text{mod } 3)$ for all $i$?
Since if I prove it, then $m=q_1\dots q_t\equiv 0 \;\text{or}\; 1 \ (\text{mod } 3)$ and we get the contradiction.
My professor has defined $q_i'=p_1^2\dots p_i\dots p_n^2$ (the square with one factor of $p_i$ pulled out), so that $m=q_i'p_i+1$. But I have no idea why we have to define this.
AI: Since $m \equiv 2\;(\text{mod}\;3)$, we can't have any $q_i=3$ (else $m$ would be a multiple of $3$).
But if $q_i \equiv 1\;(\text{mod}\;3)$ for all $i$, then the product $q_1\cdots q_t$ would be congruent to $1$ mod $3$, contrary to $m \equiv 2\;(\text{mod}\;3)$.
It follows that $q_k \equiv 2\;(\text{mod}\;3)$ for some $k$.
But $m$ is not divisible by any of $p_1,...,p_n$, so $q_k$ is a new prime congruent to $2$ mod $3$.
That's the contradiction.
|
H: Is $x_n = (−1)^n$, $n ∈ \mathbb{N}$ convergent in $(\mathbb{R}, \cal{T} )$?
Let $\cal{T}$ = {$∅, \mathbb{R}$} ∪ {$(−a, a) : a ∈ (0, ∞)$} be a topology on $\mathbb{R}$.
Is the sequence $(x_n)_{n\in\mathbb{N}}$ defined by $x_n = (-1)^n$, $n \in \mathbb{N}$, convergent in $(\mathbb{R}, \mathcal{T})$? In this case, what does it
converge to?
I think it depends on the parity of n, because it would alternate between -1 and 1, but I'm not sure...
AI: The sequence converges to any number $x$ such that $|x| \geq 1$. Consider any open set $(-a,a)$ such that $x$ lies in this set. Then $(-1)^{n}$ lies in $(-a,a)$ for all $n$ . Thus $x_n \to x$.
|
H: multiplication of measurable functions in $L^p$ spaces
Let $(X, M, \mu)$ be a measure space, $q \in (0, +\infty]$ and $f,g : X \rightarrow \mathbb{C}$ in which $f \in L^{\infty} (\mu)$ and $g \in L^q (\mu)$. I want to show that $fg \in L^q (\mu)$.
For this, I showed that if $g$ is an $L^1$ function on $X$ and $f$ is an $L^\infty$ function on $X$, then $fg$ is Lebesgue integrable. But I'm not sure if this helps me to prove the question above. So, any help and ideas are definitely appreciated.
AI: $fg$ is measurable and $\int |fg|^{q} d\mu \leq \|f\|_{\infty}^{q} \int |g|^{q}d\mu <\infty$ because $|f(x)| \leq \|f\|_{\infty}$ almost everywhere. Hence $fg \in L^{q}(\mu)$.
|
H: Why is there a unique circle passing through a point?
I am trying to solve this problem:
We know that there's a circle with center $(m,h)$, and it passes through the points $(1,0)$, $(-1,0)$.
Show that there's a unique circle passing through the three points:$(1,0),(-1,0),(x_0,y_0)$.
I tried making substitution, and get:
$$h=\frac{{x_0}^2+{y_0}^2-1}{2{y_0}}$$
I said that only one value of $h$ would be produced, so there's a unique value of $h$, implying that a unique circle would pass through the point.
After checking the answer, it said that:
Given:
$${x_0}^2+{y_0}^2=1+2y_0\sqrt{r^2-1}\tag{1}$$
I know how (1) is deduced, but I don't know why it shows that there's a unique circle satisfying the condition. The answer justified this by saying that the equation is uniquely dependent on $x_0, y_0$. I don't know what this means.
Thank you very much for you guys.
Note:
r is the radius of the circle.
AI: The fact that the circle passes through $A=(1,0)$ and $B=(-1,0)$ is equivalent to the fact that its center $C$ (which is necessarily on the perpendicular bisector of $AB$) has coordinates $(0,a)$.
Therefore, the circle being the locus of points $M$ such that
$$CM^2=CA^2,$$
if we take $M=(x,y)$, this relationship is converted into the equation
$$x^2+(y-a)^2=1+a^2$$
or (cancelling the $a^2$)
$$x^2+y^2-2ay-1=0\tag{1}$$
Saying that this circle passes through $(x_0,y_0)$ is equivalent to say that :
$$x_0^2+y_0^2-2ay_0-1=0\tag{2}$$
One obtains a first degree equation in $a$, explaining that there exists a unique solution,
$$a=\dfrac{x_0^2+y_0^2-1}{2y_0}$$
(the equation you mention) with an exceptional case: no solution exists when the denominator $y_0=0$, i.e., when $M_0$ belongs to the $x$-axis, which is natural because no circle passes through three collinear points.
|
H: How to prove there exists a positive integer $1\le i\le n$ so that $p^i(x)=x$ when $p:[n]\to[n]$ is permutation and $x\in[n]$
I am reading the book A Walk Through Combinatorics, and here is a lemma and its proof.
Let $p:[n]\to[n]$ be a permutation, and let $x\in[n]$, then there exists a positive integer $1\le i\le n$ so that $p^i(x)=x$.
Proof: Consider the entries $p(x),p^2(x),\cdots,p^n(x)$. If none of them is equal to $x$, then the Pigeon-hole Principle implies that there are two of them that are equal, say $p^j(x)=p^k(x)$, with $j<k$. Then, applying $p^{-1}$ to both sides of this equation, we get $p^{j-1}(x)=p^{k-1}(x)$. Repeating this step, we get $p^{j-2}(x)=p^{k-2}(x)$, and repeating this step $j-3$ more times, we get $p(x)=p^{k-j+1}(x)$.
In the end, the writer says we get $p(x)=p^{k-j+1}(x)$, but he doesn't say anything about $p^i(x)=x$ or anything like this. Or is there another $x$ which meets the assumption of the lemma? Or does it contradict something? How should I understand this proof?
AI: It probably should say to apply the step $j-2$ more times to get $x = p^{k-j}(x)$, which contradicts that none of the entries were equal to $x$.
|
H: Are the solutions of $f(x+h)=f(x)f(h)$ of the form $a^x$ even if we consider discontinuous functions?
Let $f:\mathbb{R}\to \mathbb{R}$ satisfy $$f(x+h)=f(x)f(h).$$
If $f(x)$ is a continuous function, then we can prove that all solutions (with $f(x)$ not equal to zero at any point) are of the form $a^x$ (where $a^x$ is defined using sequences), simply by using properties of $f(x)$ and continuity.
But is the result still true if we also consider $f$ not continuous (and $f(x)$ not equal to zero at any point), or is there a counter-example?
This function can be constant, and the constant must be $1$; that is the only constant value the function can achieve, and since a constant function $\mathbb{R}\to \mathbb{R}$ is continuous, the previous result covers this case.
If we can prove that such a function must be either monotonically increasing everywhere or monotonically decreasing everywhere whenever it is not identically $1$, then we can use the theorem that a monotone function from $\mathbb{R}$ to $\mathbb{R}$ must be continuous somewhere, which for this function, thanks to the functional equation, would imply the function is continuous everywhere, and we would have proved the question using the previous result. Q: Can the nonexistence of such a pathological function be proven somehow using monotonicity (or some other way), or is there a counter-example?
AI: Clearly, $f(x)=0$ for some $x$ implies $f(x)=0$ for all $x$. Excluding this case (and noting $f(x)=f(x/2)^2>0$, so the logarithm is defined), the question can be translated to Cauchy's equation $g(x+y)=g(x)+g(y)$ by taking logarithms. Here are some facts about $g$: if $g$ is Borel measurable (in particular, if it is monotone), then $g(x)=cx$ for some constant $c$. But there exist non-measurable solutions of this equation. [Proof of the existence of such functions requires the Axiom of Choice.]
Hence $f(x)=a^{x}$ need not be true in general (take $f(x)=e^{g(x)}$).
|
H: How to compute $\int_0^1 \left\lfloor\frac2{x}\right\rfloor-2\left\lfloor\frac1{x}\right\rfloor dx$?
How to compute $$\int_0^1 \left\lfloor\frac2{x}\right\rfloor-2\left\lfloor\frac1{x}\right\rfloor dx\ ?$$
Now, what I did is break the integral so that $$\int_0^1 \left\lfloor\frac2{x}\right\rfloor dx-\int_0^12\left\lfloor\frac1{x}\right\rfloor dx$$
Now, for the first integral , I further break it.
$n < \frac2x < n+1 $, where n is natural number, so we get $\frac2{n+1}< x < \frac2n$ , so we break it to $$\sum_{n=2}^\infty \int_{2/ (n+1)}^{2/ n} n dx = \sum_{n=2}^\infty\frac2{n+1}$$
Similar analysis gives the second integral is $$\sum_{n=1}^\infty\frac2{n+1}$$
So, the net answer should be $-1$, but the answer key gives $2\ln2 -1$.
Where am I wrong?
AI: The integrand function takes the values 0 and 1. In particular, it takes the value 1 in intervals of the form $]\frac{1}{k+1}, \frac{2}{2k+1}[$. The integral will then correspond to the infinite sum of the areas of rectangles whose base is an interval of that type and the height is 1. The value of the integral is given by
$$
\sum_{k\ge 1} \left(\frac{2}{2k+1}-\frac{1}{k+1} \right)=\sum_{k\ge1} \frac{2k+2-2k-1}{(2k+1)(k+1)} = \sum_{k\ge 1}\frac{1}{(2k+1)(k+1)}
$$
which is indeed a convergent series. Now you just have to show that the sum of this series is actually $2 \ln2 -1$. (As for where your computation went wrong: each of $\int_0^1\left\lfloor\frac2x\right\rfloor dx$ and $\int_0^1 2\left\lfloor\frac1x\right\rfloor dx$ diverges, so the integral cannot be split into those two pieces, and subtracting the two divergent series term by term is not justified.)
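Both the integral and the series are easy to check numerically (a crude sketch; the grid and truncation below are arbitrary choices):

```python
# Sketch: a crude Riemann sum of the integrand and a partial sum of the series
# should both land near 2*ln(2) - 1 = 0.386294...
import math
import numpy as np

x = np.linspace(1e-4, 1.0, 2_000_001)
f = np.floor(2.0 / x) - 2.0 * np.floor(1.0 / x)
riemann = float((f[:-1] * np.diff(x)).sum())
series = sum(1.0 / ((2 * k + 1) * (k + 1)) for k in range(1, 100_000))
print(riemann, series, 2 * math.log(2) - 1)
```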
|
H: Pull the limit inside the infinite series in complex analysis?
Let $f: U \mapsto \Bbb C$ a holomorphic function and $U$ an open set of the complex plane. We have
$$f(z)=(z-z_0)^m\sum_{k=0}^{\infty}a_{k+m}(z-z_0)^k$$
with $m\geq 1$. In my course, it is written that the right hand side converges on some ball $B_r(z_0)$ thus :
$$\lim_{z\rightarrow z_0}\frac{f(z)}{(z-z_0)^m}=\lim_{z\rightarrow z_0}\sum_{k=0}^{\infty}a_{k+m}(z-z_0)^k=a_m$$
I don't understand why we can put the limit inside the infinite series... Is it a result from complex analysis?
AI: If a power series $\sum_{k=0}^\infty b_k(z-z_0)^k$ converges on some disk centered at $z_0$, then its sum is a continuous function, and the value that that function takes at $z_0$ is $b_0$. Therefore,$$\lim_{z\to z_0}\sum_{k=0}^\infty b_k(z-z_0)^k=b_0.$$
|
H: Relationship Between Determinant and Matrix Rank
Let $n\in \mathbb{N}$, and $S\in \mathbb{R}^{n\times n}$ be a symmetric positive semi-definite (PSD) matrix with rank $r \triangleq \mathrm{rank(S)}\leq n$. Can $r$ be bounded in terms of the determinant of some function of $S$?
AI: At least a lower bound on $r$ is possible; in fact, the following holds for any PSD matrix $S$:
\begin{align}
(1+\|S\|_{\mathrm{op}})^r \geq \det(I_n +S),
\end{align}
where $\|S\|_{\mathrm{op}} \triangleq \sup_{x\in \mathbb{R}^n\setminus\{0\}} \|S x\|_2/\|x\|_2$ is the operator norm of $S$.
Proof. Since $S$ is PSD, there exists an orthogonal matrix $U\in \mathbb{R}^{n\times n}$, and $\lambda_1\geq \dots \geq \lambda_n\geq 0$ such that $$S= U\mathrm{diag}(\lambda_1,\dots,\lambda_n) U^\top.$$
In this case, the eigenvalues of $S$ are $\lambda_1,\dots, \lambda_n$ in decreasing order. This implies that \begin{align}
\det(I_n +S)&= \det(U (I_n+\mathrm{diag}(\lambda_1,\dots, \lambda_n) )U^\top), \\ &= \det(U) \det(I_n +\mathrm{diag}(\lambda_1,\dots, \lambda_n)) \det(U^\top),\\
&=\det(I_n +\mathrm{diag}(\lambda_1,\dots, \lambda_n)),\\
& =\prod_{i=1}^n(1+\lambda_i),\hspace{6cm} (1)
\end{align}
where the penultimate equality follows by the fact that $\det(U)\det(U^\top)=1$, since $U$ is orthogonal. Now, since $S$ has rank $r$, we must have $\lambda_{n-r+1}=\dots=\lambda_{n}=0$. Furthermore, by the definition of the operator norm, we have $\|S\|_{\mathrm{op}}=\lambda_1\geq \dots\geq \lambda_n$. This, together with (1) implies that
\begin{align}
\det(I_n +S) &=\prod_{i=1}^r(1+\lambda_i),\\
& \leq (1+\|S\|_{\mathrm{op}})^r.
\end{align}
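A quick numerical sketch of the inequality on random low-rank PSD matrices (the sizes below are illustrative assumptions):

```python
# Sketch: (1 + ||S||_op)^r >= det(I + S) for PSD S of rank r.
import numpy as np

rng = np.random.default_rng(1)
n, r = 8, 3
for _ in range(5):
    B = rng.standard_normal((n, r))
    S = B @ B.T                               # PSD with rank r (almost surely)
    lhs = (1 + np.linalg.norm(S, 2)) ** r     # operator norm = top eigenvalue
    rhs = np.linalg.det(np.eye(n) + S)
    print(lhs >= rhs, lhs, rhs)
```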
|
H: Equal roots of a certain polynomial equation by changing the sign?
Are there certain polynomial equations whose roots stay the same when you change the sign of the variable? I wonder, if there are such equations, how can they be constructed using algebraic properties?
AI: In general if
$$f(x)=f(-x)$$
the function $f$ is termed an even function.
Examples,
$$f(x)=x^4+x^2+1$$
$$f(x)=\cos(x)$$
You can recognise such functions from their graphs because they have reflectional symmetry in the y-axis.
Does that answer your question ?
|
H: Evaluate the limit $\lim\sqrt[n]{\frac{1}{n!}\sum(m^m)}$
In some problem, I need to evaluate this limit:
$$\lim_{n\rightarrow \infty}\sqrt[n]{\frac{1}{n!}\sum^n_{m=0}(m^m)}.$$
I know about Taylor series and that kind of stuff. I'm not sure where to start; maybe Stirling, but after using it I still could not solve it. Any help will be appreciated.
Using Stirling's approximation, I get to:
$$\lim_{n\rightarrow \infty}\frac{e}{n}\sqrt[n]{\frac{\sum^n_{m=0}(m^m)}{\sqrt{2\pi n}}}$$
I don't know if this is useful anyway.
AI: Using $n^n\le\sum_mm^m\le nn^n$, $$1\le\frac{1}{n}\left(\sum_{m=0}^nm^m\right)^{1/n}\le n^{1/n}$$
Since $n^{1/n}\to1$, the squeezed quantity also tends to $1$. Hence (using $(2\pi n)^{1/(2n)}\to1$) $$\lim_{n\to\infty}\frac{1}{n}\sqrt[n]{\frac{\sum_mm^m}{\sqrt{2\pi n}}}=1$$ and the original sequence converges to $e$.
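A numerical sketch of the (slow) convergence, done in log space to avoid overflowing floats:

```python
# Sketch: ((sum_{m<=n} m^m) / n!)^(1/n) creeps toward e.
import math

for n in (10, 50, 200, 1000):
    s = sum(m**m for m in range(n + 1))    # Python's 0**0 == 1 covers the m = 0 term
    a = math.exp((math.log(s) - math.lgamma(n + 1)) / n)
    print(n, a)                            # tends to e = 2.71828...
```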
|
H: For an infinite sequence of functions $\Bbb{R}\to\Bbb{R}$, each function is a composition of a certain finite set of functions $\Bbb{R}\to\Bbb{R}$.
Given an infinite sequence of functions $\{g_1, g_2, \ldots, g_n, \ldots\}$ where $ g_n : \Bbb R \to \Bbb R$ prove there's a finite set of functions $ \{ f_1, f_2, \ldots, f_M \} $ such that any $ g_n $ can be represented as a composition of $ f_m $'s.
Honestly, I'm not sure even how to approach this. The intuition is that if the infinite sequence of functions were not defined using a finite set of functions and composition, then the definition of the sequence would itself be infinite, but I don't know how to formalize that.
AI: Fix a bijection $\varphi:\mathbb{R}\to [0,1)$ and define $\psi : \mathbb{R} \to \mathbb{R}$ by
$$ \psi(x) = \begin{cases}
g_n(\varphi^{-1}(x-n)), & \text{if $x \in [n, n+1)$ for some $n \in \mathbb{N}_1$}; \\
0, &\text{otherwise};
\end{cases} $$
where $\mathbb{N}_1 = \{1,2,3,\dots\}$. Finally, set $ f(x) = x+1 $. Then
$$ g_n = \psi \circ f^{\circ n} \circ \varphi $$
for any $n \in \mathbb{N}_1$.
|
H: Why is $f(x)=x^2 \sin \frac{1}{x}$ Lipschitz but not continuously differentiable?
Let $f:[-1,1]\to \mathbb R$ such that $$f(x)=x^2 \sin \frac{1}{x} \quad (x\neq 0)$$ and $$f(0)=0.$$ It is clear to me that for $x\neq 0,$ $f$ is differentiable function (as being a product of two differentiable function). So $f'(x)= 2x \sin \frac{1}{x}-\cos \frac{1}{x}$ for $x\neq 0.$ Also, it clear to me that $f'$ is continuous at $x\neq 0$.
My Questions:
(1) Is $f$ differentiable at $x=0$?
(2) If so, can say that its derivative function $f'$ is continuous at $0$?
(3) Can we say that $f$ is Lipschitz continuous on $[-1,1]$? that is, there exists $M>0$ such that $|f(x)-f(y)| \leq M |x-y|$ for $x, y \in [-1,1]$.
My thoughts: $\lim_{x\to 0} \frac{f(x)-f(0)}{x-0}= \lim_{x\to 0} x \sin \frac{1}{x}$. I do not know how to proceed from here..
AI: Broad hints: Remember that $\sin$ and $\cos$ are bounded. Conclude that $f'(0)=0$. Clearly $f$ is differentiable at all other points. Write down the derivative and conclude that $f'$ is bounded. Use the MVT to show that $f$ is Lipschitz. Use the fact that $\cos (\frac 1 x)$ does not have a limit as $x \to 0$ to conclude that $f'$ is not continuous at $0$.
|
H: Which linear maps on a finite field are field multiplications?
I am mainly interested in the fields $\mathrm{GF}(2^n)$, but the question can be asked for any prime.
We can write out each element $x\in\mathrm{GF}(2^n)$ in base $2$ and note that its additive group combined with multiplication by elements of $\mathrm{GF}(2)$ is isomorphic to the vector space $\left(\mathbb{Z}/(2\mathbb{Z})\right)^n$. Let $v:\mathrm{GF}(2^n)\to\left(\mathbb{Z}/(2\mathbb{Z})\right)^n$ stand for this "vectorisation" operation.
Linear maps on $\left(\mathbb{Z}/(2\mathbb{Z})\right)^n$ may be represented by $n\times n$, $\{0,1\}$-valued matrices.
Since field multiplication is linear for any $x\in\mathrm{GF}(2^n)$ there is a matrix $M_x$ such that for all $y\in\mathrm{GF}(2^n)$
\begin{align}
M_x v(y) = v(x\cdot y),
\end{align}
There are, however, $2^{n^2}$ such matrices and only $2^{n}$ field elements, so the question is: what can we say about the structure of the set of matrices $\{M_x \mid x\in \mathrm{GF}(2^n)\}$ as a subset of the full set of matrices?
Loosely speaking - if I give you a matrix then how can you tell if it represents a field element?
AI: We note that any finite field $GF(p^n)$ can be presented in the form $GF(p^n) = \Bbb Z_p[x]/\langle q(x)\rangle $, where $\Bbb Z_p = \Bbb Z/p\Bbb Z$ and $q$ is an irreducible polynomial of degree $n$. Relative to the basis $\{1,x,\dots,x^{n-1}\}$, we find that the matrix $M_x$ corresponding to multiplication by (the distinguished indeterminate) $x$ is given by the companion matrix $C_q$ of $q$. It follows that a matrix $M$ corresponds to a field element if and only if there exists a polynomial $f$ for which $M = f(C_q)$.
Because the matrix $C_q$ is non-derogatory, it turns out that there is such a polynomial $f$ if and only if $C_q M = MC_q$ (cf. Horn and Johnson Matrix Analysis theorem 3.2.4.2).
We can get another perspective on this if we take the elements of the matrix to themselves be elements of $GF(p^n)$. Any irreducible polynomial over $\Bbb Z_p$ splits into distinct linear factors over its splitting field.
It follows that the polynomial $q$ splits into linear factors with
$$
q(x) = (x - a_1)\cdots (x - a_n), \quad a_i \in GF(p^n).
$$
It follows that the matrix $C_q$ is diagonalizable over $GF(p^n)$. A matrix $M$ will commute with $\operatorname{diag}(a_1,\dots,a_n)$ if and only if it is also diagonal (because the $a_i$ are distinct). So, given a matrix, it suffices to change bases and check whether the transformed matrix is diagonal.
More specifically, the eigenvectors of $C_q$ (that is, of the operator of multiplication by $x$) correspond to the polynomials
$$
q_i(x) = q(x)/(x - a_i), i = 1,\dots,n.
$$
In particular, we can see that $xq_i(x) = a_i q_i(x)$, modulo $q(x)$. $M$ will correspond to multiplication by an element of $GF(p^n)$ if and only if $q_i(x)$ is an eigenvector of (the operator over $\Bbb Z_p[x]/\langle q(x)\rangle$ corresponding to) $M$ for all $i$.
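A tiny brute-force sketch over $GF(4)=\Bbb Z_2[x]/\langle x^2+x+1\rangle$: among all $16$ binary $2\times2$ matrices, exactly the $4$ that commute with $C_q$ are the multiplication matrices, namely $0$, $I$, $C_q$ and $C_q^2=C_q+I$.

```python
# Sketch: the centralizer of the companion matrix over GF(2) has exactly
# |GF(4)| = 4 elements, matching the field-multiplication matrices.
import itertools
import numpy as np

C = np.array([[0, 1],
              [1, 1]])    # companion matrix of q(x) = x^2 + x + 1 over GF(2)
commuting = [M for M in (np.array(bits).reshape(2, 2)
                         for bits in itertools.product((0, 1), repeat=4))
             if ((C @ M) % 2 == (M @ C) % 2).all()]
print(len(commuting))     # 4
```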
|
H: The value of expression $x-y+2x^2y+2xy^2-x^4y+xy^4$
Let $x = \sqrt{3-\sqrt{5}}$ and $y = \sqrt{3+\sqrt{5}}$. If the value of expression $x-y+2x^2y+2xy^2-x^4y+xy^4$ can be expressed in the form $\sqrt{p}+\sqrt{q}$ where $p,q \in N$, then $(p+q)$ is equal to?
I have simplified the expression to $-11x+19y$ but don't know how to express it in the form $\sqrt{p}+\sqrt{q}$, as $-11$ can't be taken into the radical sign like $\sqrt{121(3-\sqrt{5})}$ because of the negative sign. Please help, and also tell me if there is any mistake in the simplification.
AI: Evaluate $x-y+2x^2y+2xy^2-x^4y+xy^4$ if $x=\sqrt{3-\sqrt5}$ and $y=\sqrt{3+\sqrt5}$ in the form $\sqrt p+\sqrt q$.
Note that
$x,y > 0$
$x^2+y^2=6$
$xy=2$
$x=\sqrt{3-\sqrt5}=\sqrt{\frac{5+1-2\sqrt5}{2}}=\frac{\sqrt5-1}{\sqrt2}$
$y=\sqrt{3+\sqrt5}=\sqrt{\frac{5+1+2\sqrt5}{2}}=\frac{\sqrt5+1}{\sqrt2}$
$x+y=\sqrt{10}$
$x-y=-\sqrt2$
\begin{align*}
&\Rightarrow x-y+2x^2y+2xy^2-x^4y+xy^4\\
&=(x-y)+2xy(x+y)-xy(x-y)(x^2+y^2+xy)\\
&=-\sqrt2+2\cdot 2\cdot \sqrt{10}+2\cdot\sqrt2\cdot 8\\
&=4\sqrt{10}+15\sqrt2
\end{align*}
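A one-line numerical confirmation of the closed form (sketch):

```python
# Sketch: the expression evaluates to 4*sqrt(10) + 15*sqrt(2) = 33.8620...
import math

x = math.sqrt(3 - math.sqrt(5))
y = math.sqrt(3 + math.sqrt(5))
expr = x - y + 2*x**2*y + 2*x*y**2 - x**4*y + x*y**4
print(expr, 4*math.sqrt(10) + 15*math.sqrt(2))
```

So the form $\sqrt p+\sqrt q$ is $\sqrt{160}+\sqrt{450}$.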
|
H: Gaussian with zero mean dense in $L^2$
I have found in this article that linear combinations of Gaussians with fixed variance are dense in $L^2$. Can something similar be true for Gaussians of fixed mean and variable variance? Equivalently, can linear combinations of this family of functions
$$
f(x,a) = e^{-(x/a)^2}
\qquad x \in \mathbb{R}^+_0
$$
be dense in $L^2(\mathbb{R}^+_0)$? Intuitively I think that is not possible, as it should be hard to approximate a localized "high" peak far from the origin ($\lambda \chi_{[n,n+1]}$ for large $\lambda \in \mathbb{R}$ and $n \in \mathbb{N}$). Any suggestions (or references, if this is a known fact)?
[EDIT: as pointed in the comments, this is trivial in $\mathbb{R}$ as this functions all belong to the proper closed subspace of symmetric function with respect to the origin]
AI: Yes, it is dense in the space of even $L^2$ functions. To see this we can use a corollary of the Hahn-Banach theorem:
The space would be dense if we have the following implication for every even $f\in L^2$:
$$ \int_\mathbb R f(x) e^{-a x^2} \, dx = 0 \quad \forall a > 0 \Longrightarrow f \equiv 0.$$
Let's take $f$ that satisfies
$$ \int_\mathbb R f(x) e^{-a x^2} \, dx = 0 \quad \forall a > 0.$$
If we differentiate with respect to $a$ we obtain (after changing sign)
$$\int_\mathbb R x^2 f(x) e^{-a x^2} \, dx = 0 \quad \forall a > 0.$$
In order to justify the differentiation under the integral sign, just notice that if $a\in (0,A)$, then
$$ |x^2 f(x) e^{-a x^2}| \leq x^2 |f(x)| e^{-A x^2} , $$
which is integrable.
We can continue this process iteratively to obtain
$$\int_\mathbb R x^{2n} f(x) e^{-a x^2} \, dx = 0 \quad \forall a > 0, n\in \mathbb N.$$
If we choose $a = 1$ we obtain
$$\int_\mathbb R x^{2n} f(x) e^{-x^2} \, dx = 0 \quad \forall n\in \mathbb N, $$
hence
$$\int_\mathbb R H_{2n}(x) f(x) e^{-x^2} \, dx = 0 \quad \forall n\in \mathbb N, $$
where $H_n$ are the Hermite polynomials, which form an orthogonal basis of $L^2(\mathbb R)$ with the weight $w(x) = e^{-x^2}$. Thanks to the fact that $f$ is even, we also have
$$\int_\mathbb R H_{2n + 1}(x) f(x) e^{-x^2} \, dx = 0 \quad \forall n\in \mathbb N, $$
from which we conclude that $f = 0 \in L^2(\mathbb R)$.
Remark: In fact this shows that it is enough to consider Gaussians with variance in a given open (and non empty) interval only.
|
H: What is the cardinality of a vanishing set?
In Wiki's page on Chevalley–Warning theorem, under "Statement of the theorems", it's written that
Chevalley–Warning theorem states that [...] the cardinality of the
vanishing set of ${\displaystyle \{f_{j}\}_{j=1}^{r}}$ [...].
What does "the cardinality of the vanishing set of ${\displaystyle \{f_{j}\}_{j=1}^{r}}$" mean? (What is a vanishing set of polynomials? What is its cardinality?)
AI: This is given by interpreting the multivariate polynomials $f_i(x_1,x_2,..x_m)$ as functions from $\mathbb{F}^m_q$ to $\mathbb{F}_q$, by the evaluation rule
\begin{equation}
(\alpha_1,\alpha_2,...\alpha_m)\mapsto f_i(\alpha_1,..,\alpha_m)
\end{equation}
Where $\mathbb{F}^m_q$ here is just $m$ tuples of elements of $\mathbb{F}_q$.
So in this sense, we can talk about the tuples $(\alpha_1,..,\alpha_m)$ for which all these maps equal zero. This is our vanishing set, and it's a subset of the (finite) set $\mathbb{F}^m_q$. The cardinality of this set is just the number of elements in this finite set.
As an example, take $f(x,y,z)=x^2+y^2+z^2$ over the field $\mathbb{F}_2$. We have solutions $(1,0,1),(1,1,0),(0,1,1)$ and $(0,0,0)$, so we have $4$ solutions, and the solution set has cardinality $4$. This is an instance of the Chevalley-Warning theorem in action, and it would be good to work through the proof with it to gain understanding.
|
H: Reverse order of a ring
When we think of an ordered structure with an order $\le$ we assume there is an opposite order $\le^{op}$ as well:
$a \le^{op} b \iff b \le a$.
I would suggest this is a fundamental principle for all ordered structures:
If a structure is ordered one way $(\le)$ it is also ordered the opposite way $(\le^{op})$.
The reverse order principle works on the simplest algebraic structures:
An ordered set with a unary operation $f(a) = b$:
$a \le b \implies f(a) \le f(b)$ or
$a \le^{op} b \implies f(a) \le^{op} f(b)$ for any $a,b$;
An ordered semigroup with a binary operation $f(a,b) = c$:
$a \le b \implies f(a,c) \le f(b,c) \land f(c,a) \le f(c,b)$ or
$a \le^{op} b \implies f(a,c) \le^{op} f(b,c) \land f(c,a) \le^{op} f(c,b)$ for any $a,b,c$;
An ordered group (same as an ordered semigroup).
Now, let's apply the principle to an ordered ring $R(+,\cdot,0,1,\le)$.
The compatibility rule for addition is fine:
$a \le b \implies a + c \le b + c \land c + a \le c + b$ or
$a \le^{op} b \implies a + c \le^{op} b + c \land c + a \le^{op} c + b$ for any $a,b,c$;
But what about the compatibility rule for multiplication?
$0 \le a \land 0 \le b \implies 0 \le a \cdot b$ or
$0 \le^{op} a \land 0 \le^{op} b \implies 0 \le^{op} a \cdot b$.
Checking the last statement on the ring of integers with the regular order and operations:
Taking $a = -1$ and $b = -1$: $0 \le^{op} -1 \land 0 \le^{op} -1 \implies 0 \le^{op} (-1) \cdot (-1) = 1$ (false).
It looks like the reverse order principle does not work on rings:
If a ring is ordered one way $(\le)$ it may not be ordered the opposite way $(\le^{op})$.
Is this correct?
If yes, why do we ignore the reverse order principle for rings?
Are there (non-standard) definitions of an ordered ring that accept the opposite order?
AI: The reverse order isn't consistent with the compatibility axioms of the order with the ring structure.
Suppose you reverse the order in a ring with identity.
Then $0\leq^{op} -1$, and from the multiplicative axiom $0\leq^{op} (-1)(-1)=1$.
From the additive axiom and $0\leq^{op} -1$, adding $1$ to both sides would yield $1\leq^{op} 0$, and from the previous deduction $1=0$. Presumably this isn't the case for the ring you started with, so you have a contradiction.
Note: I think what's written at the wiki link you cited causes considerable confusion. In my opinion, it's better written at opposite category where they write "$x ≤^{op} y$ if and only if $y ≤ x$." I believe this was the intent of what's written at the partial order wiki, because partial orders can be viewed as categories, and the reverse order ought to be the opposite category.
I think the reason what's written at the order wiki is confusing is that most people will interpret $a\leq b$ and $b\geq a$ as meaning exactly the same thing, that $b$ is the bigger thing. But the whole point of the reverse order is to make big things small and small things big. If we were to follow in the notation suggested at the order wiki, we would say this in the opposite order "$b$ used to be the bigger one, so now in the new order $a$ is bigger, and I will write $b\geq a$."
If you just view the thing in between as a separator, and always read it as "the thing on the left is smaller" then it is a correct description of the reverse order. I just think it is extremely confusing to reverse both the inputs and the direction of the symbol.
|
H: Properties of functions of mean zero
Let $f,g: \mathbb{R} \longrightarrow \mathbb{R}$ be differentiable functions and $a<b$ such that
$$\frac{1}{b-a}\int_{a}^{b}f(x)\;dx=0 \quad \text{and} \quad \frac{1}{b-a}\int_{a}^{b}g(x)\;dx=0 \tag{1}.$$
So, I think that I can conclude that
$$\int_{a}^{b}f'(x)\;dx=0 \tag{2}$$
Moreover, I can conclude that
$$\int_{a}^{b}f'(x)\;dx=0 \Rightarrow f(b)-f(a)=0? \tag{3}$$
And
$$g(b)\cdot f'(b)-g(a)\cdot f'(a)=0? \tag{4}$$
I ask this because I would like to conclude that
$$g(x)\cdot f'(x)\Bigg|_{a}^{b} -\int_{a}^{b}f'(x)g'(x)\;dx=-\int_{a}^{b}f'(x)g'(x)\;dx. \tag{5}$$
These statements are in general true?
AI: Regarding (1) and (2) observe
$$\frac 12 \int_{-1}^1 x \, dx = 0$$
yet
$$ \int_{-1}^1 1\, dx = 2.$$
|
H: How do we define powers of irrational numbers?
Powers with rational exponents, for me, can be defined by multiplying the $n$th root of $x$ by itself $m$ times, because we have:
$$
x^{\frac{m}{n}}
$$
when $m,n \in \Bbb Z$ and $n > 0$.
Is this definition correct? If not, what is the correct one? And if yes, how can I extend this definition to irrational numbers? Because we can't write them in the form $a/b$.
AI: Assume $x^q$ is defined for $q \in\mathbb{Q}$. Given real numbers $x\ge 1$ and $y>0$, set $$x^y = \sup \{x^q : q \in\mathbb{Q},\ q<y \};$$ for $0<x<1$ put $x^y = 1/\left((1/x)^y\right)$, and extend to $y<0$ via $x^y=1/x^{-y}$ for a full definition for every real exponent. (The supremum formula needs $x\ge1$: for $0<x<1$ the map $q\mapsto x^q$ is decreasing, so the supremum would give the wrong value.)
|
H: Fibonacci recurrence relation proof
I've been trying to prove the closed-form solution of the Fibonacci recurrence and to arrive at this:
$a_n=\frac{1}{\sqrt{5}}[(\frac{1+\sqrt{5}}{2})^n−(\frac{1-\sqrt{5}}{2})^n]$
And so far I haven't achieved that; this is how I did it:
$a_n=x(\frac{1+\sqrt{5}}{2})^n+y(\frac{1-\sqrt{5}}{2})^n$
$a_0=0=x+y$
$a_1=1=x(\frac{1+\sqrt{5}}{2})+y(\frac{1-√5}{2})$
thus, I was able to get $x=\frac{\sqrt{5}+5}{10}$ and $y=-\frac{\sqrt{5}+5}{10}$. Then plugging in $x$ and $y$ to the formula this is what I got
$a_n=\frac{\sqrt{5}+5}{10}(\frac{1+\sqrt{5}}{2})^n+(-\frac{\sqrt{5}+5}{10})(\frac{1-\sqrt{5}}{2})^n$
Beyond that, I just can't prove the closed form from above. I'm stuck on this, since I don't know how to further reduce $\frac{\sqrt{5}+5}{10}$.
Did I miss anything, or get something wrong?
AI: From $x+y=0$ we have $y=-x$ and substituting into the second we obtain $x(\frac{1+\sqrt{5}}{2})+(-x)(\frac{1-\sqrt{5}}{2})=x(\frac{1+\sqrt{5}-1+\sqrt{5}}{2})=x\sqrt{5}=1$ and thus $x=\frac{1}{\sqrt{5}}$, so $y=-\frac{1}{\sqrt{5}}$ as required. (Your $x=\frac{\sqrt{5}+5}{10}$ came from a slip when solving the system; note $\frac{1}{\sqrt{5}}=\frac{2\sqrt{5}}{10}\neq\frac{\sqrt{5}+5}{10}$.)
Alternatively, you can use the generating function of the Fibonacci sequence, as @SeraPhim mentioned.
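A quick sketch verifying the corrected coefficients against the recurrence:

```python
# Sketch: Binet's formula with x = -y = 1/sqrt(5) matches the Fibonacci numbers.
import math

phi = (1 + math.sqrt(5)) / 2
psi = (1 - math.sqrt(5)) / 2
binet = lambda n: (phi**n - psi**n) / math.sqrt(5)

a, b = 0, 1
for n in range(20):
    assert round(binet(n)) == a
    a, b = b, a + b
print("closed form matches the first 20 Fibonacci numbers")
```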
|
H: Probability of black sock
This is an easy problem, but I feel I am stuck somewhere conceptually.
A man has 3 pairs of black socks and 2 pairs of brown socks kept together in a box. If he dresses hurriedly in the dark, what is the probability that after he has put on a black sock, he will then put on another black sock?
AI: The probability that the first sock he draws is black is $\frac{6}{6+4}=\frac35$. Given that the first sock was black, the probability that the second sock drawn is also black is $\frac {5}{5+4}=\frac {5}{9}$. Hence the probability that both socks are black is $\frac35\cdot\frac59=\frac13$.
|
H: Restricting a function in the disk algebra
Let $A$ be the disk algebra, i.e. continuous functions on the closed unit disk in $\Bbb{C}$ that are analytic on the interior of the disk. By the maximum-modulus theorem, we have an isometric morphism of algebras:
$$\varphi: A \to C(S^1): f \mapsto f\vert_{S^1}$$
The book I read claims that $\varphi(A)$ is contained in the closed subalgebra $B$ of $C(S^1)$ generated by $1$ and $z$. Why is this the case?
My intuition is that $f \in A$ can be written as $f(z) = \sum_n a_n z^n$ on the interior of the disk. If this also held for $|z| = 1$, what I want to prove would become obvious, but I'm not sure this holds.
Also, do we have $B = \varphi(A)$? Or only the inclusion $\varphi(A) \subseteq B?$
Thanks for any help!
AI: It's enough to show that every $f\in A$ can be uniformly approximated by polynomials. The power series doesn't converge uniformly on the closure, but: Let $\epsilon>0$. Since $f$ is uniformly continuous there exists $r\in(0,1)$ such that if $g(z)=f(rz)$ then $$|g(z)-f(z)|<\epsilon\quad(|z|\le1).$$And since the power series for $f$ converges uniformly on $\{|z|\le r\}$ there exists a polynomial $p$ with $$|p(z)-g(z)|<\epsilon\quad(|z|\le 1).$$ This also answers your last question: $\varphi(A)$ is closed in $C(S^1)$ (being the isometric image of the complete space $A$) and contains all polynomials in $z$, so $\varphi(A)\supseteq B$; combined with the approximation above, $\varphi(A)=B$.
|
H: Bijection Cancellation rule for cartesian product
Suppose $A$, $B$ and $C$ are sets, and that there is a bijection between $C \times A$ and $C \times B$. Is there necessarily a bijection between $A$ and $B$?
I know this should work for finite sets - you can use a size argument to demonstrate $A$ and $B$ have the same size, so there's a bijection between them. And I know that this works the other way around - if there's a bijection between $A$ and $B$, then for any set $C$ there's a bijection between $C \times A$ and $C \times B$. But is this true in general?
AI: No choice is needed for a counterexample: there is a bijection from $\mathbb{Z}\times \mathbf{2}$ to $\mathbb{Z}\times\mathbf{3}$ (both sets are countably infinite) but no bijection from $\mathbf2$ to $\mathbf3$, where $\mathbf{n}$ denotes the set of size $n$. So cancellation can fail once $C$ is infinite.
|
H: Find the value of $\sum_{r=0}^{\infty} \tan^{-1}(\frac{1}{1+r+r^2})$
The given expression can be written as
$$\tan^{-1}(\frac{r+1+(-r)}{1-(-r)(r+1)})$$
$$=\tan^{-1}(r+1)-\tan^{-1}(r)$$
Therefore $$\sum =\left(\tan^{-1}(1)-\tan^{-1}(0)\right)+\left(\tan^{-1}(2)-\tan^{-1}(1)\right)+\cdots$$
Since it goes on to infinity, all the terms except $-\tan^{-1}(0)$ get cancelled. So the answer should be $0$ or $-\pi$. But the right answer is $\frac{\pi}{2}$. What’s wrong with this solution?
I know how to get $\frac{\pi}{2}$, I figured out an alternate for it, but I want to know what went wrong here.
AI: You're almost there. The $k$-th partial sum is given by $\text{tan}^{-1}(k+1) - \text{tan}^{-1}(0) = \text{tan}^{-1}(k+1)$. As $k \rightarrow \infty$, the partial sum converges to $\pi /2$.
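A numerical sketch of the telescoping partial sums:

```python
# Sketch: the partial sum up to r = 999 telescopes to arctan(1000) ~ pi/2.
import math

partial = sum(math.atan(1.0 / (1 + r + r * r)) for r in range(1000))
print(partial, math.atan(1000), math.pi / 2)   # 1.5698..., 1.5698..., 1.5708...
```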
|
H: Proof that two charts on the tangent bundle $TM$ are $C^\infty(M)$-compatible
I'm struggling to understand a proof in the "Construction of the tangent bundle" section of the lecture notes downloadable here https://mathswithphysics.blogspot.com/2016/07/lectures-on-geometric-anatomy-of.html (Frederic Schuller's Lectures on the Geometric Anatomy of Theoretical Physics), on page 86 of the document, regarding the fact that one can construct a smooth atlas on the tangent bundle.
In particular, two charts $(preim_{\pi}(U),\xi)$ and $(preim_{\pi}(\tilde{U}),\tilde{\xi})$ have been constructed on the tangent bundle $TM$ from two charts on the base manifold $M$ denoted $(U,x)$ and $(\tilde{U},\tilde{x})$. I understand that we need to show that the chart transition map $\tilde{\xi}\circ\xi^{-1}$ is smooth, but I don't understand why (as it is written in the notes) $$\tilde{\xi}\circ\xi^{-1} : x(U\cap \tilde{U})\times \mathbb{R}^{dimM} \longrightarrow \tilde{x}(U\cap \tilde{U})\times \mathbb{R}^{dimM}$$ has the domain and target that it does. In my head, the map should just go from $\xi(preim_{\pi}(U))\rightarrow\tilde{\xi}(preim_{\pi}(\tilde{U}))$. Why does it instead go from $x(U\cap \tilde{U})\times \mathbb{R}^{dimM} \rightarrow \tilde{x}(U\cap \tilde{U})\times \mathbb{R}^{dimM}$?
AI: To avoid typing too much I'll just write $\pi^{-1}[U]$ instead of $\text{preim}_{\pi}(U)$. First, recall how the chart $(\pi^{-1}[U], \xi)$ is constructed from $(U,x)$: we define $\xi: \pi^{-1}[U]\to \xi[\pi^{-1}(U)]\subset \Bbb{R}^n \times \Bbb{R}^n$ (where $n := \dim M$) according to
\begin{align}
\xi(X) &:= \left((x^1\circ \pi)(X), \dots (x^n\circ \pi)(X), (dx^1)_{\pi(X)}(X), \dots , (dx^n)_{\pi(X)}(X)\right)
\end{align}
In words, what we're doing is we first take the vector $X \in \pi^{-1}[U]$. Now note that this vector lies in a particular tangent space, namely $X\in T_{\pi(X)}M$; so the base point is $\pi(X)$. So there are two pieces of information we have to keep track of: the first is the base point, and the second is the actual "vectorness" aspect of it, which is why the first $n$ entries are $(x^i\circ\pi)(X)$, which keep track of the chart representative of the base point, while the second $n$ entries $(dx^i)_{\pi(X)}(X)$ keep track of the components of $X$ relative to the chart induced basis.
Now what you can show is that the image of $\xi$, namely $\xi[\pi^{-1}(U)]$ equals exactly $x[U]\times \Bbb{R}^n$. How do we show this? Well, it's very simple, note that by construction, the following set inclusion is "obvious"
\begin{align}
\xi[\pi^{-1}(U)]\subset x[U]\times \Bbb{R}^n
\end{align}
For the reverse inclusion, note that if $(a,v) = (a^1, \dots, a^n, v^1, \dots, v^n)\in x[U]\times \Bbb{R}^n$, then
\begin{align}
X:= \sum_{i=1}^n a^i \dfrac{\partial}{\partial x^i}\bigg|_{x^{-1}(a)}
\end{align}
is a vector which lies in $T_{x^{-1}(a)}M$; and since $\pi(X) = x^{-1}(a) \in U$, this means exactly that $X\in \pi^{-1}(U)$. Also, it is easy to see from the definition of $\xi$ that $\xi(X) = (a,v)$.
What we have just shown is that for every $(a,v) \in x[U]\times \Bbb{R}^n$, there exists an $X \in \pi^{-1}(U)$ such that $\xi(X) = (a,v)$. This is exactly what it means to prove $x[U]\times \Bbb{R}^n \subset \xi[\pi^{-1}(U)]$. Thus, these two sets are actually equal.
Finally, we apply this to your actual question. The domain of the transition map $\tilde{\xi}\circ \xi^{-1}$ should actually be $\xi[\pi^{-1}(U) \cap \pi^{-1}(\tilde{U})]$. Now, we're just going to apply a few simple set theoretic identities:
\begin{align}
\xi[\pi^{-1}(U) \cap \pi^{-1}(\tilde{U})] &= \xi[\pi^{-1}(U \cap \tilde{U})] = x[U \cap \tilde{U}] \times \Bbb{R}^n.
\end{align}
A similar reasoning shows that the target space is $\tilde{\xi}[\pi^{-1}(U)\cap \pi^{-1}(\tilde{U})] = \tilde{x}[U\cap \tilde{U}]\times \Bbb{R}^n$.
|
H: Prove that every subsequence of a convergent real sequence converges to the same limit.
Here's the statement I want to prove:
Let $\{a_n\}_{n=1}^{\infty}$ be a sequence of real numbers that converges to a real number $L$. Then, every subsequence $\{a_{n_k}\}_{k=1}^{\infty}$ converges to $L$.
Proof Attempt:
Let $\epsilon > 0$ be arbitrary but fixed. We are required to prove that:
$$\exists K \in \mathbb{N}: \forall k \geq K: |a_{n_k}-L| < \epsilon$$
We know that there exists an $N_0 \in \mathbb{N}$ such that:
$$\forall n \geq N_0: |a_n-L| < \epsilon$$
Since $\{n_k\}_{k=1}^{\infty}$ is a strictly increasing sequence of natural numbers, then:
$$\exists K \in \mathbb{N}: \forall k \geq K: n_k \geq N_0$$
$$\implies \exists K \in \mathbb{N}: \forall k \geq K: |a_{n_k}-L| < \epsilon$$
which is exactly the assertion that $\lim_{k \to \infty} (a_{n_k}) = L$. That proves the desired result.
Is the proof above correct? If it isn't, why? How can I fix it?
AI: Your proof is correct. (For the step using that $\{n_k\}$ is strictly increasing, note that $n_k \ge k$ for every $k$, so one may even take $K = N_0$.) In fact, you could use your proof to derive a method to find an explicit suitable $K$ for each $\epsilon$, for the subsequence, given a method for the sequence itself.
|
H: Adding differentials
Suppose I have a sum of two indefinite integrals, $\int f(t)dx + \int f(t)dy$. Is it possible to write this in a single form as $\int f(t)\, d(x+y)$, and vice versa?
It looks “okay” to me from a logical point of view, but I obviously have no rigorous reasoning for why this should hold.
Edit:
I now realise that I might have stirred confusion by using a function of $x$. I have now changed $f$ to a function of $t$. The reason I am asking is that I know that $x+y=t$, so if I am able to express the differential in that form, integration would be possible.
AI: First of all, keep in mind that "d" is an operator, not a multiplier. Think of $dx$ meaning $d(x)$ and being similar to $\sin(x)$. So, if you had $f(x)\sin(x) + f(x)\sin(y)$ you could not rewrite that as $f(x)\sin(x + y)$.
That being said, as noted in the comments below, the addition rule states that $d(x + y) = d(x) + d(y)$, and this is true for both derivatives and differentials.
Therefore, let's do the requested transform a step at a time. The starting formula:
$$
\int f(t)\,dx + \int f(t)\,dy
$$
We can use the addition rule to combine the integrals into one:
$$
\int \left(f(t)\,dx + f(t)\,dy\right)
$$
Now we can associate:
$$
\int f(t)(dx + dy)
$$
Now we can use the addition rule in differentials to combine the differentials together:
$$
\int f(t)\,d(x + y)
$$
And that is the result you were looking for.
NOTE - I had originally come out against this method based on the reasoning in the first paragraph, until someone pointed me to the obvious point about differentials and addition in the second paragraph.
|
H: An equation for a graph which resembles a hump of a camel / pulse in a string?
Sorry if this question isn't valid. I just need to know an equation/function for a graph which resembles something close to the picture below: a single smooth, localized hump, like one hump of a camel or a pulse on a string.
AI: $$\frac{a}{1+b(x-c)^2}+d$$Looks close to your graph.Try playing around with the constant to get the desired looka $\rightarrow$represent the maximum value of your function b$\rightarrow$ represents how steep your function isc$\rightarrow$represents the x co-ordinate of your peakd$\rightarrow$represents the minima of the function$$ae^{-b(x-c)^2}+d$$Also looks close to your graph.Constants carry the same meaning
|
H: Find an angle between a triangle and a plane
The hypotenuse $AB$ of triangle $ABC$ lies in plane $Q$. Sides $AC$ and $BC$, respectively, make angles $\alpha$ and $\beta$ with the plane $Q$ (meaning they are tilted towards the plane $Q$ at those angles). Find the angle between plane $Q$ and the plane of the triangle, given $\sin(\alpha) = \frac{1}{3}$ and $\sin(\beta)=\frac{\sqrt5}{6}$.
I'm really struggling with these kinds of problems and I can't seem to find any material in English that covers this topic. Only videos I found about planes use normal vectors and equation of the plane, which is not necessary for this.
The picture wasn't given but Here's my interpretation:
Let $CK$ be the perpendicular from point $C$ to plane $Q$, and let $CD$ be the altitude of triangle $ABC$ from $C$. What I'm struggling to understand is: what will the dihedral angle be in this case? I know that the angle between two planes is the angle between two lines, one in each plane, both perpendicular to their common line; one of them must be $CD$, but what will the other line be? Is it $KD$? How can I know for sure that $KD$ is perpendicular?
Anyway, I don't think I'm understanding the problem clearly. If someone can provide a graphical solution, i'll be very thankful.
AI: Because $AB\perp CD$ and $AB\perp CK$, we get $AB\perp(CDK)$, and from here $AB\perp DK$; so $\measuredangle CDK$ is precisely the dihedral angle between the two planes.
Let $CK=h$ and $\measuredangle CDK=\phi$.
Thus, $$\sin\phi=\frac{h}{DC}=\frac{h}{\frac{AC\cdot BC}{\sqrt{AC^2+BC^2}}}=\frac{1}{\frac{\frac{1}{\sin\alpha}\cdot\frac{1}{\sin\beta}}{\sqrt{\frac{1}{\sin^2\alpha}+\frac{1}{\sin^2\beta}}}}=\sqrt{\sin^2\alpha+\sin^2\beta}.$$
Can you end it now?
I got $\phi=30^{\circ}.$
|
H: Integer solutions of $2a+2b-ab\gt 0$
Let $a\in\mathbb{N}_{\ge 3}$ and $b\in\mathbb{N}_{\ge 3}$. What are the solutions of the Diophantine inequality
$$2a+2b-ab\gt 0?$$
By guessing, I found 5 solutions:
$$\text{1)}\, a=3,\, b=3$$
$$\text{2)}\, a=3,\, b=4$$
$$\text{3)}\, a=4,\, b=3$$
$$\text{4)}\, a=5,\, b=3$$
$$\text{5)}\, a=3,\, b=5.$$
Are these all the solutions? How could I find all the solutions rigorously?
AI: $2a+2b-ab>0$ is equivalent to $(a-2)(b-2)<4$, or to $(a-2)(b-2) \le 3$. If $a \ge 3$ and $b \ge 3$, then $(a-2)(b-2) \ge 1$. Thus, $(a-2)(b-2) \in \{1,2,3\}$.
$\bullet$ If $(a-2)(b-2)=1$, then $a-2=b-2=1$, so that $(a,b)=(3,3)$.
$\bullet$ If $(a-2)(b-2)=2$, then $\{a-2,b-2\}=\{1,2\}$, so that $\{a,b\}=\{3,4\}$.
$\bullet$ If $(a-2)(b-2)=3$, then $\{a-2,b-2\}=\{1,3\}$, so that $\{a,b\}=\{3,5\}$.
Therefore, we have the five solutions $(a,b)=(3,3), (3,4), (4,3), (3,5),\:\text{or}\: (5,3)$. $\blacksquare$
|
H: Invariant transformation's complement
Let $V$ be an inner product space.
Let $T : V \to V$ be linear and $U$ a subspace of $V$. If $T(U) \subseteq U$, then $T(U^\perp)\subseteq U^\perp$.
I began by showing that $\langle T(u), u'\rangle = 0$, but didn't know how to show that $\langle u, T(u')\rangle = 0$.
AI: The statement is not true unless we are given further information about $T$ (e.g. $T$ is normal or self-adjoint). As a counterexample, consider $T: \Bbb R^2 \to \Bbb R^2$, $T(x,y) = (y,0)$. We see that the span of $(1,0)$ is an invariant subspace, but the orthogonal complement (spanned by $(0,1)$) is not.
|
H: Can someone explain how this integral of a third derivative works?
I'm reading some notes on the derivation of the Friedmann equation from Newton's formulas. The paper reads:
The equation of motion for $R_s(t)$ can be obtained from the gravitational acceleration at the outer
edge of the sphere:
$$\frac{d^2R_s}{dt^2}=-\frac{GM_s}{R_s(t)^2}$$
Multiplying both sides by $dR_s/dt$ and integrating converts this "acceleration equation" to an "energy equation":
$$\frac{1}{2}\left(\frac{dR}{dt}\right)^2=\frac{GM_s}{R_s(t)}+U$$
I'm afraid I can't follow the r.h.s. of this derivation. It looks like they took the third integral of the radius and then integrated it over $R_s$, but my intuition tells me that taking the integral of the derivative gets you right back where you started. Could someone please explain this part of the derivation?
Is $\frac{1}{2}\left(\frac{dR}{dt}\right)^2$ just another way of writing $\frac{d^2R_s}{dt^2}$?
AI: Let's work backwards from the end of your question:
Is $\frac{1}{2}\left(\frac{dR}{dt}\right)^2$ just another way of writing $\frac{d^2R_s}{dt^2}$?
No, it's not. The latter expression is (almost, but not quite) the derivative of the former expression.
This might be easier to see if you introduce a change of variable to reduce some of the noise in the formula. Let's write $u(t) = \frac{dR}{dt}$. Then the first expression is $\frac12 \left( u(t) \right)^2$. Its derivative (with respect to $t$) is, by the Chain Rule, $u(t) \cdot u'(t)$. That is,
$$\frac{d}{dt} \left( \frac12 \left(\frac{dR}{dt}\right)^2 \right) = \frac{dR}{dt}\cdot \frac{d^2R}{dt^2}$$
Now let's try to understand what the text is saying.
Multiplying both sides by $dR_s/dt$...
(In some copies of the notes this factor appears mistyped as $dR_s/t$; the intended multiplier is $dR_s/dt$.) If we do this, the original equation becomes
$$\frac{d^2R_s}{dt^2} \cdot \frac{dR_s}{dt} =-\frac{GM_s}{R_s(t)^2} \cdot \frac{dR_s}{dt}$$
Now let's integrate both sides with respect to $t$. We have
$$\int \frac{d^2R_s}{dt^2} \cdot \frac{dR_s}{dt} \, dt =- \int \frac{GM_s}{R_s(t)^2} \cdot \frac{dR_s}{dt} \, dt$$
Let's tackle these two integrals separately. For the left-hand side, we will use the substitution $u(t) = \frac{dR}{dt}$. (This is the same substitution I used at the start of this answer!) Then the left-hand side reads $\int u'(t) \cdot u(t) \, dt$. This is exactly the same thing as $\int u \, du$, which integrates easily to $\frac12 u^2$. In other words, the left-hand side is $\frac{1}{2}\left(\frac{dR}{dt}\right)^2$.
Now for the right-hand side. This time set $v = R_s(t)$. Then (setting aside some constants) the integral is $\int v^{-2} dv$, which is easy to integrate, and we get $-\frac1v$. So the integral on the right is $\frac{GM_s}{R_s(t)}$, up to an arbitrary constant of integration, and that constant is precisely the $U$ appearing in the energy equation.
|
H: Contour Integration to Evaluate Improper Integral
I am working on the problem above and have, for part a, that $D=\{z \in \mathbb{C} | 0< \Re(z) < 1\}$. What I'm working on now is part b.
My attempt so far: I have set up a rectangular contour, $\Gamma$, with its base sitting on the real axis, going from $-R$ to $R$, and with a height of $2\pi$. Using the residue theorem, along with the fact that $e^{pz}/(1+e^z)$ has a pole of order 1 at $z=\pi i$, I get that $$\int_{\Gamma} \frac{e^{pz}}{1+e^z}dz = 2\pi i (-e^{p\pi i})$$ But this must be equal to the sum of the four separate integrals over the four sides of the rectangle. Using the ML inequality we quickly get that as $R\rightarrow \infty$, the contour integrals over the two vertical sides of the rectangular contour go to zero. So it looks like I'm left with: $$\int_{\Gamma} \frac{e^{pz}}{1+e^z}dz = 2\pi i (-e^{p\pi i}) = \lim_{R\to\infty} \Bigg(\int_{-R}^{R} \frac{e^{px}}{1+e^x}dx + \int_{R+2\pi i}^{-R + 2\pi i} \frac{e^{pz}}{1+e^z}dz\Bigg)$$ Using the parametrization $\gamma(t)=2\pi i + t$, where $t \in [-R,R]$, I get that $$\int_{R+2\pi i}^{-R + 2\pi i} \frac{e^{pz}} {1+e^z}dz = -\int_{-R}^{R} \frac{e^{p\gamma(t)}}{1+e^{\gamma(t)}}dt = -e^{p(2\pi i)}\int_{-R}^{R} \frac{e^{px}}{1+e^x}dx$$ Finally, equating what I got from the Residue Theorem to the sum of the individual contour integrals, I get: $$\int_{-\infty}^{\infty} \frac{e^{px}}{1+e^x}dx = \frac{2\pi i (-e^{p\pi i})}{1-e^{p(2\pi i)}}$$ However, this is clearly not right as this will yield non-real answers depending on the value of p. Where did I go wrong here?
AI: Note that for $0<\text{Re}(p)<1$, we have
$$\begin{align}
\int_{-\infty}^\infty\frac{e^{px}}{1+e^x}\,dx &=2\pi i \left(\frac{-e^{i\pi p}}{1-e^{i2\pi p}}\right)\\\\
&=\frac{2\pi i}{e^{i\pi p}-e^{-i\pi p}}\\\\
&=\frac{\pi}{\sin(\pi p)}
\end{align}$$
which is real valued when $p$ is a real number with $0<p<1$.
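As a quick numerical cross-check (a sketch I added; the choice $p = 0.3$ is arbitrary), one can evaluate the integral directly and compare with $\pi/\sin(\pi p)$:

    # Sketch: check ∫ e^{px}/(1+e^x) dx = π/sin(πp) for a sample 0 < p < 1.
    # The integrand is written as exp(p*x - log(1+e^x)) to avoid overflow.
    import numpy as np
    from scipy.integrate import quad

    p = 0.3
    val, _ = quad(lambda x: np.exp(p * x - np.logaddexp(0, x)),
                  -np.inf, np.inf)
    print(val, np.pi / np.sin(np.pi * p))   # both ≈ 3.8833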
|
H: Literature on bounds of Fubini's numbers
If anybody can suggest where I can find a literature for a known upper and lower bounds on Fubini numbers https://en.wikipedia.org/wiki/Ordered_Bell_number
AI: QING ZOU, "THE LOG-CONVEXITY OF THE FUBINI NUMBERS", http://toc.ui.ac.ir/article_21835_684378fec55e5c66c7fccd4321a84637.pdf
gives the bounds on $f_n$, the $n^{\text{th}}$ Fubini number:
$$ 2^n < f_n < \frac{n!}{(\ln 2)^{n+1}} < (n+1)^n \text{.} $$
The lower bound holds for $n \geq 3$ and the upper for $n \geq 1$.
(Zou cites Barthelemy, "AN ASYMPTOTIC EQUIVALENT FOR THE NUMBER OF TOTAL PREORDERS ON A FINITE SET ", from which we determine the intended base of the logarithm.)
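If it helps, here is a small sketch (mine, not from the paper) that computes the Fubini numbers from the standard recurrence $f_n=\sum_{k=1}^n\binom{n}{k} f_{n-k}$ and verifies the chain of bounds for small $n$:

    from math import comb, factorial, log

    f = [1]
    for n in range(1, 15):
        f.append(sum(comb(n, k) * f[n - k] for k in range(1, n + 1)))

    ln2 = log(2)
    for n in range(3, 15):   # the full chain holds from n = 3 on
        assert 2**n < f[n] < factorial(n) / ln2**(n + 1) < (n + 1)**n
    print(f[:6])   # [1, 1, 3, 13, 75, 541]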
|
H: Converse of $(A\rightarrow(B\rightarrow C))\rightarrow((A\rightarrow B)\rightarrow(A\rightarrow C))$
The following proposition in (1) is taken as an axiom in intuitionistic propositional logic.
$$(A\rightarrow(B\rightarrow C))\rightarrow((A\rightarrow B)\rightarrow(A\rightarrow C))\quad\quad(1)$$
What about its converse in (2)?
$$((A\rightarrow B)\rightarrow(A\rightarrow C))\rightarrow(A\rightarrow(B\rightarrow C))\quad\quad (2)$$
It's clear that (2) is also valid in intuitionistic propositional logic. But why is it mentioned less often in the literature than (1)?
AI: Formula (2) is indeed valid intuitionistically. The likely reason for its not being an axiom is that it follows easily from other intuitionistically valid formulas. The details of that would depend on the particular deductive system, but the idea is that $(B\to(A\to C))\to(A\to (B\to C))$ and $B\to(A\to B)$ together imply (2).
|
H: Is $f(x+yi):= \frac{xy}{x^2+y^2}$ continuous at $0$?
Is the function $f(x+yi):= \frac{xy}{x^2+y^2}$ continuous at $0$?
I would start by rewriting it as $f(z) = \frac{xy}{|z|^2}$, but I can't find anything useful for the $xy$ term.
AI: Hint: What is the limit as $x \to 0$ along the line $y=0$? What about the line $y=x$?
|
H: Proving that $\mathbb R^n$ satisfies the second axiom of countability
In a general topology exercise I am asked to prove the following:
A topological space $(X,\tau)$ is said to satisfy the second axiom of countability if there exists a basis $B$ for $\tau$, where $B$ consists of only a countable number of sets.
Prove that $\mathbb R^n$ satisfies the second axiom of countability for every positive integer $n$.
But instead I came up with a proof that seems to prove the opposite, namely that $\mathbb R^n$ does not satisfy the axiom:
My proof:
The basis for the Euclidean topology is $B=\{\{x\in\mathbb R^n:\alpha_i<x_i<\beta_i,\ i \in \{1,\dots,n\}\}: \alpha_i,\beta_i\in\mathbb R\}$.
Let $A\in B$. Then we can define a function $f:B \to \mathbb R^n \times\mathbb R^n$ such that $f(A)=\left((\alpha_1,\dots,\alpha_n),(\beta_1,\dots,\beta_n)\right)$. This function is a bijection, so we have that $B \sim \mathbb R^n \times\mathbb R^n$. $\mathbb R$ is uncountable, so $\mathbb R^n$ is also uncountable. Because of that, $\mathbb R^n \times\mathbb R^n$ is also uncountable. Then $B$ must also be uncountable since $B \sim \mathbb R^n \times\mathbb R^n$. So $\mathbb R^n$ does not satisfy the second axiom of countability.
What did I do wrong in this proof? Where is the mistake?
AI: You are right that that particular base $B$ is uncountable; your mistake is thinking that because $B$ is uncountable, every base for the usual topology on $\Bbb R^n$ must be uncountable. If you keep only those members of $B$ for which the endpoints $\alpha_i$ and $\beta_i$ are all rational, you’ll have a countable family that is still a base for the topology.
|
H: Changing a double integral into a single integral - Volterra-type integral equations
I have a question regarding a calculation that i stumbled upon when proving that a Cauchy problem can be converted in a Volterra-type integral equation. Specifically, this equality:
\begin{equation*}
\int_0^t\int_0^sy(t) dt ds = \int_0^t (t-s) y(s) ds \, .
\end{equation*}
There seems to be some geometric intuition behind this that I am missing. The generalization, which is also a problem for me, is the following:
$$
\int_0^t ds\int_0^s ds_1 ... \int_0^{s_{n-1}}ds_n y(s_n)
= \frac{1}{n!}\int_0^t (t-s)^n y(s)ds \, .
$$
These integrals also come up when trying to say that a Volterra-type integral equation of the second kind has one and only one solution in $C([a,b])$, using the fixed point theorem.
Thanks to everyone for reading.
AI: There is an issue of confusion between the dummy variable of integration $t$ and the upper limit on the outer integral. Instead, write
$$F(t)=\int_0^t\int_0^s y(x)\,dx\,ds\tag1$$
Then, note that the region $0\le x\le s$, for $0\le s\le t$ is a triangular shaped region with vertices in the $(x,s)$-plane at $(0,0)$, $(0,t)$, and $(t,t)$.
So, this triangular region is also defined by $x\le s\le t$, for $0\le x\le t$. Thus, we can write $(1)$ as
$$F(t)=\int_0^t\int_x^t y(x)\,ds\,dx\tag2$$
But note that in $(2)$, $y(x)$ is independent of $s$. So, we can "take $y(x)$ outside the inner integral" to obtain
$$F(t)=\int_0^t y(x)\int_x^t 1\,ds\,dx=\int_0^t (t-x)y(x)\,dx$$
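For what it's worth, a small numerical sketch (with the arbitrary test function $y=\cos$ and $t=2$) confirms the $n=1$ identity:

    # Sketch: check ∫_0^t ∫_0^s y(x) dx ds = ∫_0^t (t-s) y(s) ds numerically.
    import numpy as np
    from scipy.integrate import quad

    y, t = np.cos, 2.0
    lhs, _ = quad(lambda s: quad(y, 0, s)[0], 0, t)
    rhs, _ = quad(lambda s: (t - s) * y(s), 0, t)
    print(lhs, rhs)   # both ≈ 1 - cos(2) ≈ 1.4161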
|
H: How can I approximate an arc of a circle with an ellipse?
If I know the center and radius of a massive circle C, how can I construct a smaller ellipse E to approximate the arc I'm interested in within a range of confidence R?
Approximate an Arc with an Ellipse
Basically, this is a Navigation problem relating to the Circles of Equal Altitude. If we know the angular altitude of the Sun, we can determine a circle around the Globe of our possible location. Since we can only be within a certain range of where we were when we measured Latitude and Longitude, that circle is reduced to just an arc. And since we can't locate the centre of the circle in a map to draw that arc, an ellipse would suffice.
I know there's the curvature of the Earth to take into account, but for now, could such 2D approximation be constructed?
AI: The radius of curvature at the end of the minor axis of an ellipse is $a^2/b$, where $a$ is the semi-major axis and $b$ is the semi-minor axis. Therefore choose any $b$ (perhaps the biggest possible value that will fit on your page); then set $a = \sqrt{br}$, where $r$ is the radius of the circle.
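In code this is a one-liner; the numbers below are purely illustrative assumptions, not values from the question:

    from math import sqrt

    r = 3440.0          # illustrative circle radius
    b = 60.0            # any convenient semi-minor axis for the chart
    a = sqrt(b * r)     # matching semi-major axis
    print(a, a**2 / b)  # a ≈ 454.31, and a^2/b recovers r = 3440.0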
|
H: Find the matrix representation of the operator $A\in\mathcal L(G)$ in the basis $f$.
In the very beginning, I'm going to refer to my previous question where I applied the same method in a bit different vector space.
Let $G\leqslant M_2(\Bbb R)$ be the subspace of the upper-triangular matrices of the order $2$ and let's define a linear operator $A\in\mathcal L(G)$ with:
$$A\left(\begin{bmatrix}a&b\\0&c\end{bmatrix}\right)=\begin{bmatrix}4a+3b-3c&3a-2b-3c\\0&-a+b+2c\end{bmatrix}$$
and let $f=\left\{\begin{bmatrix}1&1\\0&0\end{bmatrix},\begin{bmatrix}0&0\\0&1\end{bmatrix},\begin{bmatrix}1&0\\0&1\end{bmatrix}\right\}$ be a basis for $G$.
Find the matrix representation of the operator $A$ in the basis $f$.
My attempt:
First, I computed the transformation matrix in the standard canonical basis $e=\left\{\begin{bmatrix}1&0\\0&0\end{bmatrix},\begin{bmatrix}0&1\\0&0\end{bmatrix},\begin{bmatrix}0&0\\0&1\end{bmatrix}\right\}$.
$$\begin{aligned}A\left(\begin{bmatrix}1&0\\0&0\end{bmatrix}\right)&=\begin{bmatrix}4&3\\0&-1\end{bmatrix}&=&&\color{red}{4}\cdot\begin{bmatrix}1&0\\0&0\end{bmatrix}+\color{red}{3}\cdot\begin{bmatrix}0&1\\0&0\end{bmatrix}\color{red}{-1}\cdot\begin{bmatrix}0&0\\0&1\end{bmatrix}\\A\left(\begin{bmatrix}0&1\\0&0\end{bmatrix}\right)&=\begin{bmatrix}3&-2\\0&1\end{bmatrix}&=&&\color{red}{3}\cdot\begin{bmatrix}1&0\\0&0\end{bmatrix}\color{red}{-2}\cdot\begin{bmatrix}0&1\\0&0\end{bmatrix}+\color{red}{1}\cdot\begin{bmatrix}0&0\\0&1\end{bmatrix}\\A\left(\begin{bmatrix}0&0\\0&1\end{bmatrix}\right)&=\begin{bmatrix}-3&-3\\0&2\end{bmatrix}&=&\ \color{red}{-}&\color{red}{3}\cdot\begin{bmatrix}1&0\\0&0\end{bmatrix}\color{red}{-3}\cdot\begin{bmatrix}0&1\\0&0\end{bmatrix}+\color{red}{2}\cdot\begin{bmatrix}0&0\\0&1\end{bmatrix}\end{aligned}$$
$$[A]_e=\begin{bmatrix}4&3&-3\\3&-2&-3\\-1&1&2\end{bmatrix}$$
$$\begin{aligned}\begin{bmatrix}1&1\\0&0\end{bmatrix}&=\color{red}{1}\cdot\begin{bmatrix}1&0\\0&0\end{bmatrix}+\color{red}{1}\cdot\begin{bmatrix}0&1\\0&0\end{bmatrix}+\color{red}{0}\cdot\begin{bmatrix}0&0\\0&1\end{bmatrix}\\\begin{bmatrix}0&0\\0&1\end{bmatrix}&=\color{red}{0}\cdot\begin{bmatrix}1&0\\0&0\end{bmatrix}+\color{red}{0}\cdot\begin{bmatrix}0&1\\0&0\end{bmatrix}+\color{red}{1}\cdot\begin{bmatrix}0&0\\0&1\end{bmatrix}\\\begin{bmatrix}1&0\\0&1\end{bmatrix}&=\color{red}{1}\cdot\begin{bmatrix}1&0\\0&0\end{bmatrix}+\color{red}{0}\cdot\begin{bmatrix}0&1\\0&0\end{bmatrix}+\color{red}{1}\cdot\begin{bmatrix}0&0\\0&1\end{bmatrix}\end{aligned}$$
$T=I^{-1}F=F=\begin{bmatrix}1&0&1\\1&0&0\\0&1&1\end{bmatrix}$ will be the transition matrix representing the change of a standard canonical basis $e$ into $f$, so
$$[A]_f=F^{-1}[A]_eF$$
I got $F^{-1}=\begin{bmatrix}0&1&0\\-1&1&1\\1&-1&0\end{bmatrix}$, and then:
$$\begin{aligned}[A]_f=F^{-1}[A]_eF&=\begin{bmatrix}0&1&0\\-1&1&1\\1&-1&0\end{bmatrix}\cdot\begin{bmatrix}4&3&-3\\3&-2&-3\\-1&1&2\end{bmatrix}\cdot\begin{bmatrix}1&0&1\\1&0&0\\0&1&1\end{bmatrix}\\&=\begin{bmatrix}3&-2&-3\\-2&-4&2\\1&5&0\end{bmatrix}\cdot\begin{bmatrix}1&0&1\\1&0&0\\0&1&1\end{bmatrix}\\&=\begin{bmatrix}1&-3&0\\-6&2&0\\6&0&1\end{bmatrix}\end{aligned}$$
Is this correct? If so, how can I improve my answer?
Thank you in advance!
AI: In this particular example, it is a lot easier to work directly with the basis $f$. Just by looking at it,
\begin{align}
Af_1&=f_1-6f_2+6f_3\\
Af_2&=-3f_1+2f_2\\
Af_3&=f_3
\end{align}
If it's not obvious, note that the $1,2$ coordinate can only be determined by $f_1$, so that gives you its coefficient right away. Then you use $f_3$ to adjust the $1,1$ coordinate, and then $f_2$ to adjust the $2,2$.
Now you can read directly that
$$
[A]_f=\begin{bmatrix} 1&-3&0\\-6&2&0\\6&0&1\end{bmatrix}.
$$
|
H: Bijection between $\mathbb{N}$ and $[0,\alpha]$
Suppose $\alpha<\omega_1$ is an ordinal. Can anyone give me an example of a bijection between $\mathbb{N}$ and $[0,\alpha]:=\{\gamma: \gamma\leq \alpha\}$. Is there an order preserving bijection between the two sets?
AI: There is a bijection iff $\omega\le\alpha<\omega_1$. There is no order-preserving bijection for any $\alpha$, since $[0,\alpha]$ has a largest element, and $\Bbb N$ does not. Actually producing a bijection between $\Bbb N$ and $[0,\alpha]$ will depend on the specific $\alpha$. For instance, one bijection from $\Bbb N$ to $[0,\omega+\omega]$ sends $0$ to $\omega+\omega$, $2n$ to $n$ if $n>0$, and $2n+1$ to $\omega+n$:
$$f:\Bbb N\to[0,\omega+\omega]:n\mapsto\begin{cases}
\omega+\omega,&\text{if }n=0\\
k,&\text{if }n=2k>0\\
\omega+k,&\text{if }n=2k+1\;.
\end{cases}$$
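If a concrete implementation helps, here is a sketch encoding each ordinal $\omega\cdot q + k \le \omega+\omega$ as a pair $(q,k)$, with $(2,0)$ standing for $\omega+\omega$ itself:

    def f(n):
        if n == 0:
            return (2, 0)                       # the top element ω+ω
        q, r = divmod(n, 2)
        return (0, q) if r == 0 else (1, q)     # k, resp. ω+k

    print([f(n) for n in range(7)])
    # [(2, 0), (1, 0), (0, 1), (1, 1), (0, 2), (1, 2), (0, 3)]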
|
H: What is the RN derivative of infinite product measure?
Suppose $\mu_k$ and $\nu_k$, $k=1,2,...$ are sigma-finite measures on spaces $(S_k,\mathcal F_k)$ such that $\nu_k<<\mu_k$ for each $k$. Let $f_k=\dfrac{d\nu_k}{d\mu_k}$ for each $k$. Then is it true that $\nu:=\prod_{k=1}^\infty \nu_k<<\prod_{k=1}^\infty \mu_k:=\mu$ with $\dfrac{d\nu}{d\mu}(s_1,s_2,...)=\prod_{k=1}^\infty f_k(s_k)$?
The result is true when you have a finite product.
AI: No, not in general.
For a simple counterexample, let all the $\mu_k \sim N(0,1)$ be the standard normal distribution on $\mathbb{R}$, and $\nu_k \sim N(42, 1)$ be a normal distribution with mean 42 and variance 1. Clearly $\nu_k \ll \mu_k$ for each $k$, since they are both absolutely continuous to Lebesgue measure with strictly positive densities.
Now if $X_i$ are iid $N(0,1)$, the strong law of large numbers says that $\lim_{n \to\infty} \frac{1}{n} \sum_{i=1}^n X_i = 0$ almost surely. Rephrasing this, it says that if $A \subset \mathbb{R}^\infty$ is the set of sequences $x_i$ satisfying $\lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^n x_i = 0$, we have $\mu(A) = 1$. But if $Y_i \sim N(42, 1)$, then $\lim_{n \to\infty} \frac{1}{n} \sum_{i=1}^n Y_i$ equals 42 with probability 1, and thus equals 0 with probability 0; in other words, $\nu(A)=0$. So we see that $\mu, \nu$ are mutually singular.
More generally, Kakutani's "dichotomy" theorem gives a necessary and sufficient condition for $\nu \ll \mu$: this holds if and only if the infinite product $\prod_{i=1}^\infty \int \sqrt{f_k}\,d\mu_k$ is nonzero. In this case we will also have $\mu \ll \nu$, and otherwise the measures will be mutually singular. See for instance Bogachev's Measure Theory, Theorem 10.3.6. (There is also a Wikipedia article but it has a mistake that I will shortly edit.)
One might naively think that $f(s_1, s_2, \dots) := \prod_{k=1}^\infty f_k(s_k)$ "must" be the desired density. And so it must, if it is a density at all; but it can happen that $\int f\,d\mu < 1$. One might naively think that $\int f\,d\mu = 1$ is assured by Fubini's theorem; but Fubini only applies to finite products. If we let $g_n(s_1, s_2, \dots) = \prod_{k=1}^n f_k(s_k)$, then indeed $\int g_n\,d\mu = 1$ by Fubini and $g_n \to f$ $\mu$-a.e., but as we know, that is not enough to conclude that $\int g_n\,d\mu \to \int f\,d\mu$. Kakutani's condition provides uniform integrability of the $g_n$ which allows the argument to go through.
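For the counterexample above, Kakutani's criterion can be evaluated in closed form. For a unit-variance normal shifted by $m$,
$$\int \sqrt{f_k}\,d\mu_k = \int_{\mathbb R} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{(x-m)^2}{4}-\frac{x^2}{4}}\,dx = e^{-m^2/8},$$
so with $m=42$ each factor equals $e^{-220.5}$ and the infinite product vanishes, in agreement with the mutual singularity shown above.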
|
H: $ \lim_{n\to \infty} \int_0^1 e^{i\cdot n\cdot p(x)}~dx=0$ where $p(x)$ is a nonconstant polynomial with real coefficients
If $p(x)$ is a nonconstant polynomial with real coefficients, then how can we show that $$ \lim_{n\to \infty} \int_0^1 e^{i\cdot n \cdot p(x)}~dx=0 ?$$
The integrand $e^{i \cdot n \cdot p(x)}$ is clearly bounded by $1$, but I can't apply the dominated convergence theorem because $\lim_{n\to \infty}e^{i \cdot n \cdot p(x)}$ does not necessarily exists. Any hints?
AI: Given $\delta > 0$, let $[a,b] \subset [0,1]$ be an interval on which $p' > \delta$.
Then using the change of variable $t = p(x)$
$$ \int_a^b \exp(inp(x))\; dx = \int_{p(a)}^{p(b)} \frac{\exp(int)}{p'(p^{-1}(t))} \; dt $$
(where $p^{-1}$ is the inverse function to the restriction of $p$ to the interval $[a,b]$)
and this converges to $0$ as $n \to \infty$ by the Riemann-Lebesgue Lemma. Similarly for intervals on which $p' < -\delta$.
Given $\epsilon > 0$, after excluding a set of measure $< \epsilon$ containing the zeros of $p'$ we cover the rest by finitely many intervals on which, for some $\delta > 0$, $p' > \delta$ or $p' < -\delta$, and conclude that $$\limsup_{n \to \infty} \left|\int_0^1 \exp(inp(x))\; dx \right| < \epsilon$$ Now take $\epsilon \to 0^+$.
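A numerical illustration (a sketch with the sample polynomial $p(x)=x^2$; stationary phase suggests decay of order $n^{-1/2}$ here):

    import numpy as np
    from scipy.integrate import quad

    p = lambda x: x**2
    for n in (10, 100, 1000):
        re, _ = quad(lambda x: np.cos(n * p(x)), 0, 1, limit=1000)
        im, _ = quad(lambda x: np.sin(n * p(x)), 0, 1, limit=1000)
        print(n, abs(re + 1j * im))   # ≈ 0.28, 0.089, 0.028: tends to 0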
|
H: $ f(x)+ \sum \lambda_ig_i(x) \geq f(\bar x), \forall x \in \mathbb{R}^n.$
Suppose that $f,g_i : \mathbb{R}^n \to \mathbb{R}$ $(i=1,\ldots,m)$ are convex functions and $\exists x$ such that
$$g_i(x)<0 , \qquad i=1,\ldots,m.$$
Show that $\bar x$ is an optimal solution of
$$\min f(x)$$
$$\text{s.t. }g_i(x) \leq 0, \qquad i=1,\ldots,m$$
iff $\exists\lambda \geq 0$ such that
$$ f(x)+ \sum_{i=1}^m \lambda_ig_i(x) \geq f(\bar x), \qquad \forall x \in \mathbb{R}^n.$$
AI: Quick answer: Just use the following two facts:
If $\bar x$ and $\lambda^\star$ together satisfy the KKT conditions, then $\bar x$ is primal optimal (and $\lambda^\star$ is dual optimal).
Suppose that Slater's condition is satisfied. Then there exists a dual optimal vector $\lambda^\star$, and if $\bar x$ is primal optimal then $\bar x$ and $\lambda^\star$ together satisfy the KKT conditions.
I'll write a more detailed answer below.
Terminology:
The problem of minimizing $f(x)$ subject to the constraints that $g_i(x) \leq 0$ for $i = 1, \ldots, m$ will be called the "primal problem".
The Lagrangian is
$$
L(x,\lambda) = f(x) + \sum_{i=1}^m \lambda_i g_i(x).
$$
Here $\lambda_i$ is the $i$th component of the vector $\lambda$.
The dual function is
$$
h(\lambda) = \inf_{x \in \mathbb R^n} L(x,\lambda).
$$
The dual problem is to maximize $h(\lambda)$ subject to the constraint that $\lambda \geq 0$.
I'll say that a vector $x \in \mathbb R^n$ is "feasible for the primal problem" if $x$ satisfies $g_i(x) \leq 0$ for $i = 1, \ldots, m$.
Solution:
Let's start with the easier direction. Suppose that there exists a vector $\lambda^\star \geq 0$ such that
$$
f(x) + \sum_{i=1}^m \lambda_i^\star g_i(x) \geq f(\bar x) \quad \text{for all } x \in \mathbb R^n.
$$
If $x$ is feasible for the primal problem, then $\lambda_i^\star g_i(x) \leq 0$ for $i = 1, \ldots, m$, and so
$$
f(x) \geq f(x) + \sum_{i=1}^m \lambda_i^\star g_i(x) \geq f(\bar x) \quad \text{for every feasible } x.
$$
If we additionally assume that $\bar x$ is feasible for the primal problem, then we can conclude that $\bar x$ is primal optimal.
[Note that you did not give us this additional assumption in your question, but I think we need it.]
Conversely, suppose that $\bar x$ is primal optimal.
Here is a key fact: Because Slater's condition is satisfied, it follows that strong duality holds and there exists a dual optimal vector $\lambda^\star$. (A fairly simple visual proof of this fact is given in Boyd and Vandenberghe.)
Thus,
$$
f(\bar x) = h(\lambda^\star) = \inf_{x \in \mathbb R^n} f(x) + \sum_{i=1}^m \lambda_i^\star g_i(x).
$$
So we see that
$$
f(\bar x) \leq f(x) + \sum_{i=1}^m \lambda_i^\star g_i(x) \quad \text{for all } x \in \mathbb R^n.
$$
(The reason I said that this direction is more difficult is that we had to invoke the key fact about Slater's condition.)
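A toy instance (my own example, not part of the original exercise) may make the statement concrete: take $n=m=1$, $f(x)=x^2$, $g_1(x)=1-x$. Slater's condition holds (e.g. $g_1(2)<0$), the primal optimum is $\bar x = 1$ with $f(\bar x)=1$, and the choice $\lambda_1 = 2$ gives
$$
f(x)+\lambda_1 g_1(x) = x^2 - 2x + 2 = (x-1)^2 + 1 \ge 1 = f(\bar x) \quad \text{for all } x\in\mathbb R,
$$
exactly as the theorem predicts.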
|
H: Group action of the Baumslag-Solitar groups
The Baumslag-Solitar groups are defined by
$$G=BS(m,n)=\langle a,b: ba^{m}b^{-1}=a^{n}\rangle\,,$$
where $m,n$ are integers.
My question is: Is there a linear action of $G=BS(1,2)$ over $\mathbb{R}^{2}$ ?
AI: Yes, the matrices $\begin{bmatrix}2&0\\0&1\end{bmatrix}$ and $\begin{bmatrix}1&1\\0&1\end{bmatrix}$ generate a copy of $BS(1,2)$. So this gives an action on $\mathbb{R}^2$.
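A quick machine check of the defining relation (a sketch with numpy):

    import numpy as np

    a = np.array([[1, 1], [0, 1]])
    b = np.array([[2, 0], [0, 1]])
    print(b @ a @ np.linalg.inv(b))   # [[1. 2.] [0. 1.]]
    print(a @ a)                      # [[1 2] [0 1]], so b a b^{-1} = a^2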
|
H: Evaluate a complex integral.
I looked around but I couldn't find if this question has been asked before.
Given two polynomials $P(z) = a_{n-1}z^{n-1}+\cdots+a_0$ and $Q(z)=z^n+b_{n-1}z^{n-1} + \cdots + b_0$, prove that for sufficiently large $r>0$
$$ \int_{|z|=r} \frac{P(z)}{Q(z)} dz = 2 \pi a_{n-1} i$$
My attempt: I tried to compute the integral directly via the residues, handling separately the cases where a root of $Q$ is repeated or not. But I failed. Should I try this method again? Otherwise, a hint for another direction would be appreciated.
EDIT: By fail, I mean there was no reason that they should sum to the desired result.
AI: Hint: $ \dfrac{z^j}{Q(z)} = O(|z|^{j-n})$ as $|z| \to \infty$ for $0 \le j < n-1$,
while $ \dfrac{z^{n-1}}{Q(z)} = \dfrac{1}{z} + O(|z|^{-2})$
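A numerical illustration of the hint (a sketch; the cubic and the radius $r=50$ are arbitrary choices of mine):

    import numpy as np

    P = np.poly1d([2.0, -1.0, 3.0])        # 2z^2 - z + 3, so a_{n-1} = 2
    Q = np.poly1d([1.0, 0.5, -2.0, 1.0])   # a monic cubic
    r = 50.0
    t = np.linspace(0, 2 * np.pi, 20001)
    z = r * np.exp(1j * t)
    val = np.trapz(P(z) / Q(z) * 1j * z, t)   # dz = iz dt on |z| = r
    print(val / (2j * np.pi))                 # ≈ 2 = a_{n-1}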
|
H: Continuity of $a^x+b$ with $a, b \in \mathbb R$
Let $a,b \in \mathbb{R}$ with $a > 0$. Find $a$ and $b$ so that the following function is continuous:
$$
f(x) = \begin{cases} a^x + b, & |x|<1 \\
x, & |x| \geq 1 \end{cases}
$$
I got $b = -a^x+x$ as my answer, but I'm unsure.
AI: Since $f(x) = a^x + b$ will be continuous on $|x| < 1$ for $a > 0$, we only need to match up this portion of $f$ with its definition on $|x| \geq 1$ at the endpoints $x = \pm 1$. Evidently, $f(-1) = -1$ and $f(1) = 1$. So we need $a^{-1} + b = -1$ and $a + b = 1$. Using the latter gives $b = 1 -a$, so substitution yields:
$$
\frac{1}{a} + 1 - a = -1.
$$
This becomes:
$$
a^2 -2a -1 = 0.
$$
A quick application of the quadratic formula yields $a = 1 \pm \sqrt{2}$, and we can discard $a = 1 - \sqrt{2}$ since we require $a > 0$. Thus,
$$
a = 1 + \sqrt{2},
\; \; \; \; \; \;b = -\sqrt{2}.
$$
Indeed, the function:
$$
f(x) = \begin{cases} (1+\sqrt{2})^x - \sqrt{2} & |x| < 1, \\
x & |x| \geq 1, \end{cases}
$$
is continuous, as shown below:
|
H: Find $L=\lim_{n\to \infty }\frac{1}{n}\sum_{k=1}^{n}\left\lfloor 2\sqrt{\frac{n}{k}} \right\rfloor -2\left\lfloor \sqrt{\frac{n}{k}} \right\rfloor$
Question: Find the limit $$L=\lim_{n\to \infty }\frac{1}{n}\sum_{k=1}^{n}\left\lfloor 2\sqrt{\frac{n}{k}} \right\rfloor -2\left\lfloor\sqrt{\frac{n}{k}} \right\rfloor \text , $$ where $\lfloor x \rfloor$ denotes the greatest integer function.
Yesterday, my friend sent me this limit question. The greatest integer function is the biggest problem here. I don't know how to evaluate the summation to find the given limit.
Can anybody help me!!
AI: As this is a Riemann sum, you can convert it into an integral.
This becomes:
$$\int_0^1 \left \lfloor \frac2{\sqrt x} \right \rfloor -2\left \lfloor\frac1{\sqrt x} \right \rfloor\,dx$$
Put $\sqrt x \rightarrow 1/t$ to get:
$$ = 2\int_1^\infty \frac{\left \lfloor 2t \right \rfloor}{t^3} -2\frac{\left \lfloor t \right \rfloor}{t^3}\,dt$$
$$ = 2\left(\sum_{r=1}^\infty\int_{(r+1)/2}^{r/2 + 1}\frac{r+1}{t^3}\,dt - 2\sum_{r=1}^\infty\int_{r}^{r + 1}\frac{r}{t^3}\,dt\right)$$
$$ = 2\sum_{r=1}^\infty\left(\frac{2(2r+3)}{(1+r)(2+r)^2} - \frac{2r+1}{r(1+r)^2}\right)$$
$$ = 2\sum_{r=1}^\infty\left(\frac{4}{(r+1)(r+2)} - \frac{2}{(r+1)(r+2)^2} - \frac{2}{r(r+1)} + \frac{1}{r(1+r)^2}\right)$$
$$ = 2\sum_{r=1}^\infty\left(\frac{1}{r(1+r)^2}-\frac{2}{(r+1)(r+2)^2}\right)$$
$$ = 1 - 2\sum_{r=1}^\infty\left(\frac{1}{r(1+r)^2}\right)$$
$$= \boxed{\frac{\pi^2}3 - 3}$$
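As a check on the closed form, one can evaluate the Riemann sum directly (a small sketch; the cutoff $n=10^5$ is arbitrary):

    from math import floor, sqrt, pi

    def S(n):
        return sum(floor(2 * sqrt(n / k)) - 2 * floor(sqrt(n / k))
                   for k in range(1, n + 1)) / n

    print(S(10**5), pi**2 / 3 - 3)   # both ≈ 0.2899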
|
H: What is the fraction of customers lost in a finite queue with one server, M/M/1/k? k = four places and s = 1 server
What is the fraction of customers lost in a finite queue with one server, M/M/1/k? $k =$ four places and $s = 1$ server
$k=4, \lambda=\dfrac 1 {30}$, $\mu=\dfrac 1 {25}$
The steady-state probabilities are $p_0 = 0.2786$, $p_1 = 0.2322$, $p_2 = 0.1935$, $p_3 = 0.1612$, $p_4 = 0.1343$. The theory says that a client doesn't join the queue if the system is in state $4$. Thus the portion of clients that won't join the queue is $p_4$. Is that correct?
AI: Yes, you are right, Luis. Many people get confused by that one.
The steady-state probabilities are the probabilities that a client arriving at the system finds it in that state.
In particular, $p_4$ is the probability that an arrival finds the system in state $4$ and is rejected.
Hence, the answer is simply $p_4$.
This is due to PASTA (Poisson Arrivals See Time Averages)
https://en.wikipedia.org/wiki/Arrival_theorem
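For reference, the steady-state probabilities come from the truncated geometric formula $p_n=\rho^n\,\frac{1-\rho}{1-\rho^{k+1}}$ with $\rho=\lambda/\mu$; a small sketch reproduces your numbers up to rounding:

    lam, mu, k = 1 / 30, 1 / 25, 4
    rho = lam / mu                   # = 5/6
    p = [rho**n * (1 - rho) / (1 - rho**(k + 1)) for n in range(k + 1)]
    print([round(x, 4) for x in p])  # [0.2787, 0.2322, 0.1935, 0.1613, 0.1344]
    print(p[k])                      # fraction of arrivals lost ≈ 0.1344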
|
H: Dominated convergence theorem and Cauchy's integral formula
Let $U\subseteq \mathbb{C}$ be open and $\bar B(a,r) \subseteq U$. Let $\gamma(t) =a+ re^{it}$ with $t \in [0,2\pi]$ be the boundary path of $B(a,r)$. By Cauchy's integral formula $f(w) = \frac{1}{2 \pi i}\int_{\gamma} \frac{f(z)}{(z-w)} dz$, where $w \in B(a,r)$.
I want to prove $\frac{d f(w)}{dw} = \frac{1}{2 \pi i} \int_{\gamma} \frac{f(z)}{(z-w)^2}dz$.
The usual argument is to interchange the order of differentiation and integration and this is justified by uniform convergence.
Is it possible to justify this interchange with the DCT?
My attempt:
For DCT to apply, I need to check that $\frac{d}{dw}(\frac{f(\gamma(t))}{\gamma(t)-w}\gamma'(t)) = \frac{d}{dw}(\frac{f(re^{it})}{re^{it}-w}ire^{it}) = \frac{f(re^{it})}{(re^{it}-w)^2}ire^{it} $ is dominated by some function which is integrable over $[0,2\pi]$. Because $f$ is continuous over a compact set it is bounded by some $M$, and therefore $\frac{Mr}{(r- |w-a|)^2}$ should be the desired dominating function.
For the bounty:
I am happy with the accepted answer. I would just like to know if my attempt is wrong and if it is necessary to consider the real and imaginary parts of $w$. Many thanks!
AI: I think the only possible issue with the proposed solution in the OP is a careful proof that the proposed dominating function is a dominating function and a reference to the appropriate version of the DCT (or bounded convergence theorem, see below) that the OP would like to use.
There are at least two ways to approach this problem with the DCT. In each approach, we must identify which parameter serves as the parameter that we take a limit in, and identify an appropriate dominating function, prove it is a dominating function, and then quote the appropriate version of the DCT.
Note that the DCT is most commonly stated in terms of sequences of functions, so in any application of the "sequential" DCT to problems involving limits with a continuous parameter, we must use a characterization of limits in terms of sequences—see the second approach below. (Also see this old answer of mine regarding DCT with respect to continuous and discrete parameters for more on this.)
Now, we want to justify the equation:
\begin{align*}
\frac{\partial}{\partial w} \oint_{\partial B(a,r)}\frac{f(z)}{z-w}\,dz= \oint_{\partial B(a,r)}\frac{\partial }{\partial w}\frac{f(z)}{z-w}\,dz.
\end{align*}
The first approach using real parameters:
We will use that $\frac{\partial}{\partial w} = \frac12\left(\frac\partial{\partial w_1}-i\frac\partial{\partial w_2}\right)$, and can show that
\begin{align*}
\frac{\partial}{\partial w_j} \oint_{\partial B(a,r)}\frac{f(z)}{z-w}\,dz= \oint_{\partial B(a,r)}\frac{\partial }{\partial w_j}\frac{f(z)}{z-w}\,dz\qquad(j=1,2).
\end{align*}
Because then by linearity and the definition of $\partial/\partial w$, we will have the equality we are after. With this approach, the parameters $w_j$ are the parameters we take limits in, and they are real parameters, which has the advantage that we can use the version of differentiating under the integral sign quoted here:
Differentating under the integral sign. Suppose that $F(x,t)$ is integrable as a function of $x \in \mathbb{R}^d$ for each value of $t \in \mathbb{R}$ and differentiable as a function of $t$ for each value of $x$. Assume also that
$$\bigg| \frac{\partial}{\partial t} F(x,t) \bigg| \le G(x),$$
for all $x,t$, where $G(x)$ is an integrable function of $x$. Then $\frac{\partial}{\partial t} F(x,t)$ is integrable as a function of $x$ for each $t$ and
$$\frac{d}{dt} \int F(x,t)\, dx = \int \frac{\partial}{\partial t} F(x,t)\,dx.$$
To prove this, you can mimic the second approach to the problem below, involving the characterization of limits I mentioned (to prove the theorem quoted above, the mean value theorem is also useful). To apply this, write out $\frac{f(z)}{z-w}$ as a function $F_j = F(t,w_j)$ where $t$ can be the parameter for $\partial B(a,r)$ for each $j = 1,2$ and apply this result to each of $F_1$ and $F_2$ separately.
A second approach from first principles:
Without separating the integral into real and imaginary parts and quoting the theorem on differentiating under the integral that we are familiar with from real variables, we can choose to write the integral in a form that lets us apply the DCT for sequences of functions $([0,2\pi],\mathrm{Borel},dt)\to(\mathbb C,\mathrm{Borel})$ from first principles. We still would like to show
\begin{align*}
\frac{\partial}{\partial w} \oint_{\partial B(a,r)}\frac{f(z)}{z-w}\,dz= \oint_{\partial B(a,r)}\frac{\partial }{\partial w}\frac{f(z)}{z-w}\,dz.
\end{align*}
The DCT is stated for sequences of functions, so recall the following characterization of limits in a metric space:
\begin{align*}
\lim_{h\to a}g(h) = L \iff \text{for all sequences $h_j\to a$,}\ \lim_{j\to\infty}g(h_j) = L.
\end{align*}
(Cf. Rudin's Principles of Mathematical Analysis p. 84.) Thus, let $h_j\to 0$ be an arbitrary sequence of complex numbers and write the difference quotient corresponding to the left-hand side as (after skipping some algebra):
\begin{align*}
\int_0^{2\pi}\frac{f(re^{it})}{(re^{it}-w)^2-h_j(re^{it}-w)}ire^{it}\,dt.
\end{align*}
By continuity, $f$ is bounded by $M$ say on $\partial B(a,r)$. To bound the expression in the denominator, we use the reverse triangle inequality,
\begin{align*}
|(re^{it}-w)^2-h_j(re^{it}-w)| &\ge |re^{it}-w|\big(|re^{it}-w|-|h_j|\big).
\end{align*}
Because the distance $\delta$ from $w$ to the boundary of the disk is positive, we have $|re^{it}-w|\ge \delta > 0$ for all $t$, so if $j$ is so large that $|h_j|<\frac\delta2$, then the right-hand side of the last inequality is bounded below by
$$
|re^{it}-w|\big(|re^{it}-w|-\frac\delta2\big)\ge \delta\big(\frac\delta2\big).
$$
Hence we see that for $j\gg1$,
$$
\bigg|\frac{f(re^{it})}{(re^{it}-w)^2-h_j(re^{it}-w)}ire^{it}\bigg| \le \frac{2M}{\delta^2}r,
$$
which is bounded and hence belongs to $L^1([0,2\pi],dt)$. By the DCT (in fact, merely bounded convergence theorem will do here),
\begin{align*}
\lim_{j\to\infty}\int_0^{2\pi}\frac{f(re^{it})}{(re^{it}-w)^2-h_j(re^{it}-w)}ire^{it}\,dt &= \int_{0}^{2\pi}\lim_{j\to\infty}\frac{f(re^{it})}{(re^{it}-w)^2-h_j(re^{it}-w)}ire^{it}\,dt \\
&= \int_{0}^{2\pi}\frac{f(re^{it})}{(re^{it}-w)^2}ire^{it}\,dt\\
&= \oint_{\partial B(a,r)}\frac{f(z)}{(z-w)^2}\,dz\\
&= \oint_{\partial B(a,r)}\frac{\partial }{\partial w}\frac{f(z)}{z-w}\,dz.
\end{align*}
As the sequence $h_j\to 0$ we chose was arbitrary, we have the desired conclusion by the characterization of limits we stated.
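As an aside, a numerical sketch (with the arbitrary choices $f=\exp$, $a=0$, $r=1$, $w=0.3+0.1i$) illustrates the formula just justified:

    # Sketch: check f'(w) = (1/2πi) ∮ f(z)/(z-w)^2 dz for f = exp.
    import numpy as np

    r, w = 1.0, 0.3 + 0.1j
    t = np.linspace(0, 2 * np.pi, 4001)
    z = r * np.exp(1j * t)
    dz = 1j * r * np.exp(1j * t)
    integral = np.trapz(np.exp(z) / (z - w) ** 2 * dz, t) / (2j * np.pi)
    print(integral, np.exp(w))   # both ≈ exp(0.3 + 0.1i)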
|
H: Local diffeomorphism between a disk and a sphere
This may be a silly question, but I’ll make it anyway. Let $f: D^2 \to S^2$ be a local diffeomorphism between the closed unit disk and the unit sphere. Is it necessarily injective?
AI: No, I don't think so. For instance, think about stretching out $D^2$ into a very long and thin oval and wrapping it twice around the equator of $S^2$. This wrapping is locally a diffeomorphism, but it is not injective. If this is unclear, I can try to attach a picture.
You could imagine wrapping $[0,4\pi]\times [\pi/4,3\pi/4]$ around the equator of $S^2$ using the usual spherical coordinate parametrization where we view $[0,4\pi]$ as the $\theta$ coordinate and $[\pi/4,3\pi/4]$ as the azimuthal coordinate $\phi$.
|
H: A question based on property of a function satisfying $f(1/n) =0$ for every $n \in\mathbb{N} $
I am trying quiz questions of senior year and was unable to solve this particular question.
Unfortunately, I couldn't think of which result in analysis I could use. I am totally confused and would really appreciate it if someone tells me what should be done in this kind of problem.
Kindly help.
Answers :
A, B, C
AI: It's obvious that $D$ is not true since there are infinitely many roots, which a non-zero polynomial can't have.
$A$ is true by continuity, as $\frac1n$ approachs $0$.
By the Mean Value Theorem, for each interval $\left[\frac1{n+1},\frac1n\right]$, there is a point in the interval with derivative $0$. As these points approach $0$, by continuity, $B$ is true.
The same argument using MVT twice shows that $C$ is true.
|
H: How do smooth manifolds differ from manifolds embedded in $\mathbb{R}^n$?
Instead of defining a smooth manifold to be a manifold whose gluing functions are smooth, what would happen if we defined it as an $n$-manifold $M$ which has an embedding into $\mathbb{R}^{n +1}$?
A smooth map between manifolds $e_M : M \hookrightarrow \mathbb{R}^{n+1}$ and $e_N : N \hookrightarrow \mathbb{R}^{n+1}$ would be a continuous function $f : M \to N$ along with a smooth function $g : \mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$ such that $g \circ e_M = e_N \circ f$.
Would defining them this way be equivalent?
AI: For a start, the Klein bottle would no longer be a smooth manifold, as it has no embedding in $\Bbb R^3$. Nor would any non-orientable closed $2$-manifold.
|
H: Solving $f(x)$ in a functional equation
Find the general form of $f(x)$ given $f(x)+xf\left(\displaystyle\frac{3}{x}\right)=x.$
I think we need to substitute something for $x$, but I'm not sure. Will the substitution $x\mapsto\displaystyle\frac{3}{x}$ help me?
AI: Yes, it helps, as follows:
From
$$f(x)+xf(\frac{3}{x})=x\tag{*}$$
we get
$$f(\frac{3}{x})+\frac{3}{x}f(x)=\frac{3}{x}$$
or
$$xf(\frac{3}{x})+3f(x)=3\tag{**}$$
From (*) and (**), we have:
$$-2f(x)=x-3$$
or $$f(x)=\frac{3-x}{2}$$
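As a check: $f\!\left(\frac3x\right)=\frac{3-\frac3x}{2}$, so $f(x)+xf\!\left(\frac3x\right)=\frac{3-x}{2}+\frac{3x-3}{2}=x$, as required.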
|
H: Let $S = \{1/n : n \in\mathbb N\}$ and define a function $f : \{0\} \cup S \to \mathbb R$ by the formula below. Is this function continuous at $0$?
the function is $f(x) = \begin{cases} \sin(\pi/x) & \text{ if } x\neq0 \\
0 & \text{ if } x=0 \end{cases}$
I know that it is proven using the definition of continuity at a point; however, I do not know how to go about it.
AI: As Don Thousand says, realize that
$$
\forall x\in S:\
f(x)=\sin\left(\frac{\pi}{x}\right)= \sin(n \pi)=0
$$
for some $n\in \mathbb{N}^*$. On the other hand, $f(0)=0$. Therefore $f(x)=0$ at every point of its domain, so it is constant, hence continuous everywhere, including at $0$.
|
H: Find a probability given specific cdf values
Given that $X$ is a uniform random variable, $P\{X>1\} = 0.6$ and $F(2) = 0.5$.
Find $P\{-1\leq X < 3\}$.
My solution is: $P\{X>1\} = F(\infty) - F(1) = 0.6$. So $F(1) = 0.4$
And now I assume that $F$ grows linearly. I need to find $F(3) - F(-1)$. Using the linearity I can say that $F(3) - F(-1) = 4 \cdot [F(2) - F(1)]$.
My answer is $4\times(0.5 - 0.4) = 0.4$.
Am I right?
AI: If $X$ is uniform on some support $a \le X \le b$, then the density is $$f_X(x) = \begin{cases} \frac{1}{b-a}, & a \le x \le b, \\ 0, & \text{otherwise}. \end{cases}$$ The CDF is then $$F_X(x) = \begin{cases} 0, & x < a \\ \frac{x-a}{b-a}, & a \le x \le b \\ 1, & b < x. \end{cases}$$ We are given $$F_X(1) = 1 - \Pr[X > 1] = 1 - 0.6 = 0.4,$$ and $$F_X(2) = 0.5.$$ From these, we determine $$\frac{1 - a}{b-a} = 0.4, \\ \frac{2 - a}{b-a} = 0.5.$$ Hence $$\frac{0.4}{0.5} = \frac{1-a}{2-a},$$ so $a = -3$; we then substitute back to obtain $b = 7$, and $$\Pr[-1 \le X < 3] = F_X(3) - F_X(-1) = \frac{3-(-3)}{7-(-3)} - \frac{-1-(-3)}{7-(-3)} = \frac{2}{5}.$$
The reason why your solution works is that the given probabilities happen to make the support $[a,b]$ "large enough" to contain the desired event $-1 \le X < 3$. For example, now that we know $[a,b] = [-3,7]$, if I asked you for $\Pr[-5 < X < 0]$, the naive linear computation $5\times 0.1 = 0.5$ would be wrong. The answer is $0.3$ because $\Pr[-5 < X < 0] = \Pr[-3 \le X < 0]$, since the intersection $$(-5,0) \cap [-3,7] = [-3, 0).$$ So you need an extra step to justify that the "shortcut" you used is valid.
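If you want to confirm this mechanically, here is a sketch that solves the two CDF equations for the support and recomputes the probability (the initial guess is arbitrary):

    from scipy.optimize import fsolve

    def eqs(v):
        a, b = v
        return [(1 - a) / (b - a) - 0.4, (2 - a) / (b - a) - 0.5]

    a, b = fsolve(eqs, [0.0, 10.0])
    F = lambda x: min(max((x - a) / (b - a), 0.0), 1.0)
    print(a, b, F(3) - F(-1))   # -3.0 7.0 0.4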
|