Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
How to prove $\sum_{k=1}^{n}kx^{k}\binom{n}{k}= nx(1+x)^{n-1}$?
I can see the LHS has some structure to it: if we ignore the factor of $k$, then we have the binomial expansion of $(1+x)^{n}$, but the $k$ complicates things for me. I tried shifting the $k$ values around, but I am not sure how to use that. I also feel like we should use the fact that the RHS has $(1+x)^{n-1}$. But how?
Any hints/suggestions?
| \begin{align}
\sum\limits_{k = 1}^n {kx^k \binom{n}{k}} &= \sum\limits_{k = 1}^n {kx^k \frac{{n!}}{{(n - k)!k!}}}\\
&= nx\sum\limits_{k = 1}^n {x^{k - 1} \frac{{(n - 1)!}}{{(n - k)!(k - 1)!}}} \\
&= nx\sum\limits_{k = 1}^n {x^{k - 1} \frac{{(n - 1)!}}{{(n - 1 - (k - 1))!(k - 1)!}}}\\
&= nx\sum\limits_{k = 0}^{n-1} {x^{k} \frac{{(n - 1)!}}{{(n - 1 - k)!k!}}} \\ &= nx\sum\limits_{k = 0}^{n-1} {x^{k} \binom{n-1}{k}}\\
&= nx(1 + x)^{n - 1} .
\end{align}
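The identity is also easy to sanity-check numerically; here is a minimal pure-Python sketch (the helper names `lhs`/`rhs` are mine):

```python
from math import comb  # Python 3.8+

def lhs(n, x):
    # sum_{k=1}^{n} k * x^k * C(n, k)
    return sum(k * x**k * comb(n, k) for k in range(1, n + 1))

def rhs(n, x):
    # n * x * (1 + x)^(n - 1)
    return n * x * (1 + x) ** (n - 1)

# check the identity for several n and x, including negative x
for n in range(1, 8):
    for x in (-0.5, 0.25, 1.0, 2.0):
        assert abs(lhs(n, x) - rhs(n, x)) <= 1e-9 * max(1.0, abs(rhs(n, x)))
```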
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3852677",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Probability of drawing all hearts before all spades Let's say we have a 26-card deck (13 hearts and 13 spades), and we want to find the probability that all hearts are drawn before all spades. What is this value? What if we have 52 cards?
For the first case, the way I thought about it was the following. With 26 cards there are $26!$ arrangements, and if we want all the hearts to come before all the spades, this can be done in $13!\,13!$ ways, since there are $13!$ ways to arrange each suit of 13 cards. So I get $13!\,13!/26!$, but this does not match the probability of $1/2$ given in the book.
The book also mentions that the probability in the second case is the same. This I don't understand at all. If someone could please explain the answer to this question, as well as why my approach doesn't work, I would be grateful! Thank you so much!
| They’re not asking for the probability that all of the hearts are drawn before any spades are drawn: they’re asking for the probability that all of the hearts are drawn before the spades have been completely exhausted, i.e., before the last spade is drawn. Do you see now why the probability is $\frac12$ regardless of how many other cards are in the deck?
Added: Suppose that you have deck of $n$ cards that consists of $13$ hearts, $13$ spades, and $n-26$ other cards that are neither hearts nor spades. Let $H$ be the set of permutations of the deck in which the last heart comes before the last spade, and let $S$ be the set of permutations of the deck in which the last spade comes before the last heart. Every permutation of the deck is in exactly one of the sets $H$ and $S$, and we want to know the probability that a randomly chosen permutation is in $H$.
Let $p$ be a permutation in $H$. Go through the deck and interchange the first heart with the first spade, the second heart with the second spade, and so on, until you’ve completely interchanged the hearts and spades. Call the new permutation $p'$; it will be in $S$. And if you perform the same operation on $p'$, you’ll get $p$ back. In other words, we can pair each permutation $p$ in $H$ with a unique permutation $p'$ in $S$: we have a bijection between $H$ and $S$. $H$ and $S$ must therefore be the same size, so exactly half of the $n!$ permutations of the deck are in $H$, and the probability that a randomly chosen one is in $H$ must be $\frac12$.
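The pairing argument can also be checked by a seeded simulation; a small sketch (the helper names are mine), which also covers the 52-card case by padding with other cards:

```python
import random

random.seed(0)

def hearts_exhausted_first(deck):
    # True iff the last heart appears before the last spade,
    # i.e. the hearts run out before the spades do
    last_heart = max(i for i, c in enumerate(deck) if c == 'H')
    last_spade = max(i for i, c in enumerate(deck) if c == 'S')
    return last_heart < last_spade

def estimate(n_other, trials=50_000):
    deck = ['H'] * 13 + ['S'] * 13 + ['X'] * n_other
    hits = 0
    for _ in range(trials):
        random.shuffle(deck)
        hits += hearts_exhausted_first(deck)
    return hits / trials

p26 = estimate(0)    # 26-card deck
p52 = estimate(26)   # 52-card deck: 26 extra non-heart, non-spade cards
```

Both estimates come out near $1/2$, regardless of the padding, as the bijection predicts.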
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3852895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Is $TM \cong M \times \mathbb{R}^n$ as sets? To be clear, I know these sets are not diffeomorphic or even homeomorphic in general. However, I've been told that there doesn't even exist a bijection between these sets.
But suppose $M$ is an $n$-dimensional manifold and let $\{\partial_1|_p, \ldots, \partial_n|_p\}$ be the basis of $T_p M$ with respect to some chart containing $p \in M$. If $v_p \in T_p M$ we have $v_p = v_p^i \partial_i|_p$ for unique real numbers $v_p^i$. Define the function $\lambda: TM \to M \times \mathbb{R}^n$ by $\lambda(p, v_p)=(p, v_p^1, \ldots, v_p^n)$.
Surely this is a well-defined bijection?
| Sure: The tangent bundle on $M^n$ is by definition the disjoint union (with a more complicated topology) of the $T_p M$ for $p\in M$, and each $T_p M$ is an $n$-dimensional vector space by definition. Of course, the isomorphisms $T_p M \to \mathbb{R}^n$ are not canonical and can't be made continuous in $p$ for a nontrivial bundle, but that's not a problem if you're just looking for a map of sets.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3853120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Wronskian of functions $\sin(nx), n=1,2,...,k$. Is it true that the Wronskian of the functions $\sin(nx), n=1,...,k$ is equal to $c(\sin(x))^p$ where $c$ is a constant and $p=1+2+...+k=k(k+1)/2$?
That is true for $k=1,2,3,4,5$. If it is true, how to find the constant $c=c(k)$?
| Let $W(f_1, \ldots, f_n)$ denote the Wronskian determinant of the functions $f_1, \ldots, f_n$. We can show that
$$
W (\sin(x), \sin(2x), \ldots, \sin(nx)) =
1!2! \cdots (n-1)! (-2)^{n(n-1)/2} \sin(x)^{n(n+1)/2} \, .
$$
For example,
$$
\begin{align}
W (\sin(x), \sin(2x)) &= -2 (\sin(x))^3 \, ,\\
W (\sin(x), \sin(2x), \sin(3x)) &= -16 (\sin(x))^{6} \, ,\\
W (\sin(x), \ldots, \sin(4x)) &= 768 (\sin(x))^{10} \, .
\end{align}
$$
The proof uses that
$$
\sin(k x) = U_{k-1}(\cos (x)) \sin(x)
$$
where $U_k$ are the Chebyshev polynomials of the second kind, and two identities for Wronskians: A “product rule”
$$
W(h f_1, \ldots, h f_n) = h^n \cdot W(f_1, \ldots, f_n)
$$
which is a consequence of the Leibniz rule for the $n$th derivative of a product (see also Why does the Wronskian satisfy $W(yy_1,\ldots,yy_n)=y^n W(y_1,\ldots,y_n)$?), and a “chain rule”
$$
W(f_1 \circ g, \ldots, f_n \circ g)(x) = W(f_1, \ldots f_n)(g(x)) \cdot (g'(x))^{n(n-1)/2} \, .
$$
which is a consequence of Faà di Bruno's formula for the $n$th derivative of a composite function (compare also About a chain rule for Wronskians).
Now we can argue as follows:
$$
\begin{align}
&W (\sin(x), \sin(2x), \ldots, \sin(nx)) \\
&\quad = W(U_0(\cos(x))\sin(x), U_1(\cos (x)) \sin(x), \ldots, U_{n-1}(\cos (x)) \sin(x)) \\
&\quad = (\sin(x))^n W(U_0(\cos (x)), U_1(\cos (x)), \ldots, U_{n-1}(\cos (x))) \, \\
&\quad = (\sin(x))^n W(U_0(t), U_1(t), \ldots, U_{n-1}(t)) |_{t=\cos(x)} (-\sin(x))^{n(n-1)/2} \, .
\end{align}
$$
Each $U_k$ is a polynomial of degree $k$ with the leading coefficient $2^k$, so that $W(U_0, U_1, \ldots, U_{n-1})$ is the determinant of a triangular matrix with the entries $U_k^{(k)}(t) = k!2^k$, $k=0, \ldots, n-1$ on the diagonal. It follows that
$$
W (\sin(x), \sin(2x), \ldots, \sin(nx)) = (\sin(x))^n \cdot (-\sin(x))^{n(n-1)/2}
\cdot \prod_{k=0}^{n-1} k!2^k
$$
and that is the claimed formula.
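The closed form can be cross-checked numerically, using $\frac{d^m}{dx^m}\sin(kx)=k^m\sin(kx+m\pi/2)$ to fill the Wronskian matrix exactly; the sketch below (names mine) evaluates the determinant with a hand-rolled elimination:

```python
from math import sin, pi, factorial

def wronskian_sin(n, x):
    # W(sin x, sin 2x, ..., sin nx) at x; row m holds the m-th derivatives,
    # using d^m/dx^m sin(kx) = k^m sin(kx + m*pi/2)
    M = [[(k ** m) * sin(k * x + m * pi / 2) for k in range(1, n + 1)]
         for m in range(n)]
    # determinant by Gaussian elimination with partial pivoting
    det = 1.0
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            det = -det
        det *= M[col][col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
    return det

def closed_form(n, x):
    # 1! 2! ... (n-1)! * (-2)^(n(n-1)/2) * sin(x)^(n(n+1)/2)
    c = 1.0
    for k in range(1, n):
        c *= factorial(k)
    return c * (-2.0) ** (n * (n - 1) // 2) * sin(x) ** (n * (n + 1) // 2)

for n in (2, 3, 4, 5):
    for x in (0.3, 1.1, 2.0):
        w, cf = wronskian_sin(n, x), closed_form(n, x)
        assert abs(w - cf) < 1e-6 * max(1.0, abs(cf))
```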
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3853211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
How to evaluate $\int _0^{\pi }x\sin \left(x\right)\operatorname{Li}_2\left(\cos \left(2x\right)\right)\:dx$. How can I evaluate
$$\int _0^{\pi }x\sin \left(x\right)\operatorname{Li}_2\left(\cos \left(2x\right)\right)\:dx$$
$$=\frac{\pi ^3}{6}-\frac{\pi ^3}{6\sqrt{2}}-4\pi +6\pi \ln \left(2\right)-\frac{\pi }{2\sqrt{2}}\ln ^2\left(2\sqrt{2}+3\right)-\sqrt{2}\pi \operatorname{Li}_2\left(2\sqrt{2}-3\right)$$
This is what I've done thus far.
\begin{align*}
&\int _0^{\pi }x\sin \left(x\right)\operatorname{Li}_2\left(\cos \left(2x\right)\right)\:dx\\
&=\frac{\pi }{2}\int _0^{\pi }\sin \left(x\right)\operatorname{Li}_2\left(\cos \left(2x\right)\right)\:dx\\[2mm]
&=\frac{\pi }{4}\int _0^{\pi }\sin \left(\frac{x}{2}\right)\operatorname{Li}_2\left(\cos \left(x\right)\right)\:dx+\frac{\pi }{4}\int _{\pi }^{2\pi }\sin \left(\frac{x}{2}\right)\operatorname{Li}_2\left(\cos \left(x\right)\right)\:dx\\[2mm]
&=\frac{\pi }{2}\int _0^{\pi }\sqrt{\frac{1+\cos \left(x\right)}{2}}\operatorname{Li}_2\left(-\cos \left(x\right)\right)\:dx=\pi \int _0^{\infty }\frac{\operatorname{Li}_2\left(\frac{t^2-1}{1+t^2}\right)}{\left(1+t^2\right)\sqrt{1+t^2}}\:dt\\[2mm]
&=\frac{\pi }{2}\int _1^{\infty }\frac{\operatorname{Li}_2\left(\frac{x-2}{x}\right)}{x\sqrt{x}\sqrt{x-1}}\:dx=\frac{\pi }{2}\int _0^1\frac{\operatorname{Li}_2\left(1-2x\right)}{\sqrt{1-x}}\:dx\\[2mm]
&=\pi \zeta \left(2\right)+\frac{\pi }{\sqrt{2}}\int _{-1}^1\frac{\sqrt{1+x}\ln \left(1-x\right)}{x}\:dx
\end{align*}
But I'm not sure how to proceed with either the polylogarithmic integral on the $5$th line or the last one.
I'd appreciate any hints or ideas, thanks.
| $$\small \int _0^{\pi }x\sin \left(x\right)\operatorname{Li}_2\left(\cos \left(2x\right)\right)dx=\frac{\pi^3}{6}+6\pi \ln 2-4\pi +\frac{\pi}{\sqrt 2}\left(\operatorname{Li}_2(-(3+2\sqrt 2))-\operatorname{Li}_2(-(3-2\sqrt 2))\right)$$
$$\small\int_{-1}^1\frac{\sqrt{1+x}\ln(1-x)}{x}dx=6\sqrt 2 \ln 2 -4\sqrt 2+\operatorname{Li}_2(-(3+2\sqrt 2))-\operatorname{Li}_2(-(3-2\sqrt 2))$$
To show this result we can continue with the integral left at the end of the question body.
$$\int_{-1}^1\frac{\sqrt{1+x}\ln(1-x)}{x}dx=\int_{-1}^1 \frac{\ln(1-x)}{\sqrt{1+x}}dx+\int_{-1}^1 \frac{\ln(1-x)}{x\sqrt{1+x}}dx=\mathcal I+\mathcal J$$
$$\mathcal I=\int_{-1}^1 \frac{\ln(1-x)}{\sqrt{1+x}}dx \overset{1+x=t^2}=2\int_0^\sqrt 2\ln(2-x^2)dx=2\int_0^\sqrt 2 (x-\sqrt 2)'\ln(2-x^2)dx$$
$$\overset{IBP}=2\sqrt 2\ln 2 -4\int_0^\sqrt 2 \frac{x}{x+\sqrt 2}dx=6\sqrt 2 \ln 2 -4\sqrt 2$$
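As a quick numeric check of $\mathcal I$ (my own sketch): after the same substitution $1+x=t^2$ used above, only a mild log singularity at $t=\sqrt 2$ remains, which a midpoint rule handles since its nodes avoid the endpoint.

```python
from math import log, sqrt

def I_numeric(n=200_000):
    # I = ∫_{-1}^{1} ln(1-x)/sqrt(1+x) dx = 2 ∫_0^{√2} ln(2 - t²) dt
    # after the substitution 1 + x = t²
    b = sqrt(2.0)
    h = b / n
    return 2.0 * h * sum(log(2.0 - ((i + 0.5) * h) ** 2) for i in range(n))

closed = 6 * sqrt(2) * log(2) - 4 * sqrt(2)   # the claimed value of I
assert abs(I_numeric() - closed) < 1e-3
```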
$$\mathcal J(t)=\int_{-1}^1 \frac{\ln(1-tx)}{x\sqrt{1+x}}dx\Rightarrow \mathcal J'(t)=-\int_{-1}^1 \frac{1}{(1-tx)\sqrt{1+x}}dx$$
$$\overset{1+x\to x^2}=-\frac{2}{t}\int_0^\sqrt 2\frac{1}{\frac{1+t}{t}-x^2}dx=-\frac{2\operatorname{arctanh}\left(\sqrt 2 \sqrt{\frac{t}{1+t}}\right)}{\sqrt t\sqrt{1+t}}$$
$$\mathcal J(0)=0\Rightarrow \mathcal J=-2\int_0^1 \frac{\operatorname{arctanh}\left(\sqrt 2 \sqrt{\frac{t}{1+t}}\right)}{\sqrt t\sqrt{1+t}}dt\overset{\frac{t}{1+t}=x^2}=-4\int_0^\frac{1}{\sqrt 2}\frac{\operatorname{arctanh}(\sqrt 2 x)}{1-x^2}dx$$
$$\overset{\sqrt 2 x\to x}=-4\sqrt 2 \int_0^1 \frac{\operatorname{arctanh} x}{2-x^2}dx\overset{x=\frac{1-t}{1+t}}=\int_0^1 \frac{\ln t}{3-2\sqrt 2+t}dt-\int_0^1\frac{\ln t}{3+2\sqrt 2 +t}dt$$
$$=\operatorname{Li}_2(-(3+2\sqrt 2))-\operatorname{Li}_2(-(3-2\sqrt 2))$$
Putting this together gives the announced result.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3853327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
Find if a metric space complete Let $(\mathbb{R},d)$ be a metric space with a distance function $d(x,y)=|f(x)-f(y)|,$ where
$$
f(x) =
\begin{cases}
&\displaystyle\frac{1}{x-2}, \quad &\text{$x < 2$} \\
&\ln \ x, \quad &\text{$x\ge 2$.}
\end{cases}
$$
Is this metric space a complete metric space? If it is not what is the completion of it?
First of all we consider our particular case. This space is not complete; an example is the Cauchy sequence $\{-n+2\}_n$. This sequence is indeed Cauchy:
if $n,m\geq M $, then $|f(-n+2)-f(-m+2)|=\left|\frac{1}{m}-\frac{1}{n}\right|\leq \frac{2}{M}$.
However, it is not a convergent sequence in $(\mathbb{R},d)$. By contradiction, suppose $\{-n+2\}_n$ converges to some $\alpha \in \mathbb{R}$. Then one gets
$0=\lim_{n\to \infty}d(-n+2, \alpha)=\lim_{n\to\infty } \left|-\frac{1}{n}-f(\alpha)\right|=|0-f(\alpha)|=|f(\alpha)|.$
This means $f(\alpha)=0$, which is impossible because $f^{-1}(0)=\emptyset$.
Thus $(\mathbb{R},d)$ is not a complete space.
What is the obstruction to completeness? The problem is that the image of $f$ is not a closed subset of $\mathbb{R}$: it is only the subset $(-\infty ,0)\cup [\ln 2, \infty)$.
If one wants to complete this space, it is more convenient to study the problem in a more general way, in order to understand what happens in a general situation.
Consider a bijective function $f: X\to Y$. If $Y$ is a topological space, then $X$ inherits a natural topological structure via $f$, with respect to which $X$ and $Y$ are homeomorphic topological spaces and $f$ is a homeomorphism.
Moreover, if $(Y,d_Y)$ is a metric space inducing that topology on $Y$, then one gets a metric $d_X$ on $X$ whose induced topology is the one inherited by $X$ via $f$, and $f$ becomes not only a homeomorphism but also an isometry with respect to the two metrics fixed respectively on $X$ and $Y$. The metric induced on $X$ is clearly
$d_X(x,y):=d_Y(f(x),f(y)).$
In this way one observes that $(Y,d_Y)$ is a complete metric space if and only if $(X,d_X)$ is a complete metric space.
At this point we can return to our particular case. The bijection here is
$f: \mathbb{R} \to (-\infty ,0)\cup [\ln 2, \infty)\subseteq \mathbb{R},$
and so $(\mathbb{R}, d)$ cannot be a complete metric space, because $((-\infty ,0)\cup [\ln 2, \infty), |\cdot |)$ is not complete. Indeed, a subset of a complete metric space (here $(\mathbb{R}, |\cdot|)$) is complete if and only if it is closed, and in our case $(-\infty ,0)\cup [\ln 2, \infty)$ is not closed.
It is now clear what a completion of $(\mathbb{R}, d)$ is: simply the complete space
$\operatorname{cl} ((-\infty ,0)\cup [\ln 2, \infty))=(-\infty ,0]\cup [\ln 2, \infty)$
with the metric $|\cdot |$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3853477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is the BPSW-test always correct for special kinds of numbers? Here
https://en.wikipedia.org/wiki/Baillie%E2%80%93PSW_primality_test
a powerful primality test is mentioned for which no counterexample is known, and it is claimed that none exists up to $2^{64}$.
Can we prove that the test never fails for particular kinds of numbers: Fermat numbers, Mersenne numbers, numbers of the form $n^2+1$, $\cdots$?
Any reference would be welcome.
This page by Thomas R. Nicely has a lot of information, including the source of the claim.
https://faculty.lynchburg.edu/~nicely/misc/bpsw.html
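One remark that is easy to verify: for Fermat numbers, the strong base-2 half of Baillie-PSW is provably blind, because $2^{2^n}\equiv -1 \pmod{F_n}$ makes every $F_n$ a strong probable prime to base 2, so any compositeness detection must come from the Lucas half. A pure-Python sketch (function name mine):

```python
def strong_prp(n, a=2):
    # strong (Miller-Rabin) probable-prime test to base a --
    # the base-2 half of Baillie-PSW, without the Lucas half
    if n % 2 == 0:
        return n == 2
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

F5 = 2**32 + 1                      # Fermat number F_5
assert F5 == 641 * 6700417          # ... which is composite (Euler, 1732)
assert strong_prp(F5)               # yet it passes the base-2 strong test
assert not strong_prp(341)          # 341 = 11*31 is caught by the strong test
```

Since $F_5 < 2^{64}$ and BPSW has no counterexample below that bound, the Lucas half necessarily rejects $F_5$.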
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3853593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How can we explain the fact that $\mathbb E\left( {\mathbb E\left( {X|Y} \right)} \right) = \mathbb E(X)$? We know that the following property of conditional expectation holds, assuming $\mathbb E[|X|]<\infty$:
$\mathbb E\left( {\mathbb E\left( {X|Y} \right)} \right) = \mathbb E(X)$
Could anyone give me some intuition for this? Why does knowing the random variable $Y$ not affect the expected value of $X$ when we take the expected value again?
| I don't know if these diagrams will be of any use to you. I find it useful to think of conditioning as putting a transparency over the sample space.
Let $X_1, X_2$ be two fair, independent coinflips. Denote the outcome of heads by $0$ and the outcome of tails by $1$. Let $S = X_1 + X_2$. The sample space $\Omega$ has four points: $\{(0,0), (0,1), (1,0), (1,1)\} = \{\omega_1, \omega_2, \omega_3, \omega_4\}$.
First consider the inner conditional expectation, $Z = E[S | X_2]$. Note that $Z$ is a random variable: for each $\omega \in \Omega$, $Z(\omega)$ is a real number. It's simply that $Z(\omega)$ is constant on the sets $X_2^{-1}(\{0\}) = \{(0,0), (1,0)\} = \{\omega_1, \omega_3\}$ and $X_2^{-1}(\{1\}) = \{(0,1),(1,1)\} = \{\omega_2, \omega_4\}$.
What is the constant value of $Z$ when $X_2 = 0$? It is the conditional expectation of $S$ given $X_2 = 0$, which is an average over the $\omega$'s in the set $\{\omega : X_2(\omega) = 0\}$:
$$E[S | X_2](\omega_1) = E[S|X_2](\omega_3) = 0.5.$$
Similarly,
$$E[S | X_2](\omega_2) = E[S|X_2](\omega_4) = 1.5.$$
Now what happens when you do $E[E[S|X_2]]$? You average again. The rule $E[E[S|X_2]] = E[S]$ can be (roughly) read as "the average of the partial averages is the full average".
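The two-coin example can be checked exactly by enumerating the four sample points; a sketch of mine, using exact rationals:

```python
from fractions import Fraction

# the four equally likely outcomes (x1, x2)
omega = [(0, 0), (0, 1), (1, 0), (1, 1)]
p = Fraction(1, 4)

S = {w: w[0] + w[1] for w in omega}

def Z(w):
    # Z = E[S | X2] is constant on each level set of X2:
    # average S over the outcomes sharing w's value of X2
    same_x2 = [v for v in omega if v[1] == w[1]]
    return Fraction(sum(S[v] for v in same_x2), len(same_x2))

E_S = sum(p * S[w] for w in omega)   # full average
E_Z = sum(p * Z(w) for w in omega)   # average of the partial averages
assert E_S == E_Z == 1
assert Z((0, 0)) == Fraction(1, 2) and Z((0, 1)) == Fraction(3, 2)
```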
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3853744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
If $X,Y \sim N(0, 1)$, what is $P(X > 3Y \mid Y > 0)$? Given that $X,Y \sim N(0, 1)$ and IID, what is $P(X > 3Y \mid Y > 0)$? I think the answer is $1/12$.
If we look at the $X$-$Y$ plane of the joint distribution of $X,Y$, we see that conditioning on $Y > 0$ gets rid of the bottom half. The area under the line $Y = X/3$ is $1/12$ of the area of the top half. However, when I use Monte Carlo, I see about $0.102$ as I increase the number of trials.
Is my solution wrong?
My other question regarding this is, what if $X,Y$ are not independent? How would you compute the conditional probability in this case?
| The conditional probability can be rewritten as:
$$P(X > 3Y | Y > 0) = \frac{P(X > 3Y \wedge Y > 0)}{P(Y > 0)} = 2P(X > 3Y \wedge Y > 0),$$
since obviously $P(Y>0)= \frac{1}{2}.$
Now, we need to integrate:
$$P(X > 3Y \wedge Y > 0) = \int_{0}^{+\infty}\int_{3y}^{+\infty} \frac{1}{2\pi}e^{-\frac{1}{2}(x^2 + y^2)}dx dy$$
We can use polar coordinates, where:
$$\begin{cases}
x = r \cos\theta\\
y = r \sin \theta
\end{cases},$$
and
$$dxdy = r d\theta dr.$$
$$P(X > 3Y \wedge Y > 0) = \int_{0}^{+\infty} \int_{0}^{\arctan\frac{1}{3}} \frac{1}{2\pi}e^{-\frac{1}{2}r^2}rd\theta dr = \\
= \frac{1}{2\pi}\left(\int_{0}^{+\infty}e^{-\frac{1}{2}r^2}rdr \right)\left(\int_{0}^{\arctan\frac{1}{3}} d\theta\right) =\\= \frac{1}{2\pi}\cdot 1\cdot \arctan\frac{1}{3} = \frac{1}{2\pi}\arctan\frac{1}{3} \simeq 0.0512$$
Finally, the probability you are looking for is:
$$P(X > 3Y | Y > 0) = \frac{1}{\pi}\arctan\frac{1}{3} \simeq 0.1024.$$
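This closed form matches the asker's Monte Carlo figure of about $0.102$; a seeded simulation sketch (names mine):

```python
import math
import random

random.seed(1)

trials = 200_000
cond = hit = 0
for _ in range(trials):
    x, y = random.gauss(0, 1), random.gauss(0, 1)
    if y > 0:                 # condition on Y > 0
        cond += 1
        hit += x > 3 * y
est = hit / cond
exact = math.atan(1 / 3) / math.pi   # the closed form above

assert abs(est - exact) < 0.01
```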
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3853887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Question about bounded subsequences Let $(x_n)$ be a sequence of real numbers satisfying the following:
Every bounded subsequence $(x_{n_j})$ of $(x_n)$ is convergent.
Prove:
1. It is possible for $(x_n)$ to have convergent subsequences while $(x_n)$ itself is not convergent.
2. If $(x_n)$ is bounded, then it is convergent.
Number (2) follows from the property because every sequence is a subsequence of itself. For number (1) I do not know if it is enough to find an example or if I have to prove it in general.
thank you
For the first claim, yes, it is possible, but note that such a sequence must be unbounded. An example like $((-1)^n)$ does not satisfy the standing hypothesis: it is a bounded subsequence of itself, and it does not converge, since its even-indexed terms converge to $1$ while its odd-indexed terms converge to $-1$, and the limit of a sequence is unique.
Instead take $x_n = 0, 1, 0, 2, 0, 3, \ldots$. Any bounded subsequence contains only finitely many nonzero terms and hence converges to $0$, so the hypothesis holds; the subsequence of zeros converges, yet $(x_n)$ is unbounded and therefore not convergent.
For the second claim, your argument is correct: a bounded sequence is a bounded subsequence of itself, so by hypothesis it converges.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3853987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Expected number of samples to get a number greater than `X`? Consider a population of people. Each person's height is IID (you're not given the distribution). You randomly choose a person and observe that their height is $X$. Let $N$ be the number of additional samples needed to randomly select a person whose height is greater than $X$. What is $E[N]$?
Apparently, $E[N] = \infty$ and the logic is that $X$ is a random variable.
My first concern is that I am not sure why $X$ is a random variable. Once you obtain a sample with a value $X$, isn't $X$ now fixed/observed (the instructions state the height is observed) and no longer random? In that case, the expectation should be bounded?
If we assume $X$ is indeed a random variable, then I believe you do the following:
$$
E[N] = \sum_i E[N|X = X_i] P(X_i).
$$
We apply the law of total expectation and condition on all possible values of $X$. Is the expectation infinity because $X$ can take on the value $\max(\text{heights})$, in which case you will never get a person with a height greater than $\max(\text{heights})$ no matter how long you sample? If we knew that $X$ isn't $\max(\text{heights})$, would $E[N]$ then be bounded?
| Suppose you select the tallest person in the population; there is an assumption that the population is finite. There is a nonzero probability that this happens. If you select the tallest person, you'll never find a higher one, so $N=\infty$. Since $\infty×p=\infty$ if $p>0$, when $X$ is treated as a random variable on which $N$ depends, we have $E[N]=\infty$.
If you do not select the tallest person, then the expectation would indeed be bounded.
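To see the divergence concretely, assume for illustration a continuous Uniform(0,1) height distribution (my assumption; the original problem leaves the distribution unspecified). Given $X=x$, $N$ is geometric with success probability $1-x$, so $E[N\mid X=x]=\frac{1}{1-x}$ and $E[N]=\int_0^1 \frac{dx}{1-x}$ diverges; truncating the integral at $1-\varepsilon$ gives $-\ln\varepsilon$, which grows without bound:

```python
from math import log

def truncated_expectation(eps, n=100_000):
    # ∫_0^{1-eps} dx / (1 - x) by the midpoint rule; exact value is -ln(eps)
    b = 1.0 - eps
    h = b / n
    return h * sum(1.0 / (1.0 - (i + 0.5) * h) for i in range(n))

# the truncated expectation matches -ln(eps) and blows up as eps -> 0
for eps in (1e-1, 1e-2, 1e-3):
    assert abs(truncated_expectation(eps) + log(eps)) < 1e-3
```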
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3854089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proving that $\|f*g\|_1\leq\|f\|_1\|g\|_1$ for all $f,g\in\ell^1(\mathbb{Z})$? Let $f,g\in\ell^1(\mathbb{Z})$, where $$\ell^1(\mathbb{Z}):=\lbrace f:\mathbb{Z}\rightarrow\mathbb{C}:\sum_{n=-\infty}^\infty|f(n)|<\infty\rbrace$$ and convolution is defined as: $$(f*g)(n):=\sum_{m=-\infty}^\infty f(m)g(n-m).$$ Let $n\in\mathbb{Z}$. I have proved that $f*g:\mathbb{Z}\rightarrow\mathbb{C}$ is well-defined. So I need to prove that $$\sum_{n=-\infty}^\infty|(f*g)(n)|\leq\Big{(}\sum_{n=-\infty}^\infty|f(n)|\Big{)}\Big{(}\sum_{n=-\infty}^\infty|g(n)|\Big{)}.$$
I have that $$\sum_{n=-\infty}^\infty|(f*g)(n)|\leq\sum_{n=-\infty}^\infty\sum_{m=-\infty}^\infty|f(m)|\,|g(n-m)|,\:\:\:\:\:\:\:\:(*)$$ provided that the RHS converges. Since $$\sum_{j=-n}^n\sum_{i=-m}^m|f(i)|\,|g(j-i)|\leq\sum_{i=-m-n}^{m+n}\sum_{j=-m-n}^{m+n}|f(i)|\,|g(j)|=(\sum_{i=-m-n}^{m+n}|f(i)|)(\sum_{j=-m-n}^{m+n}|g(j)|)$$ Is it possibly to simply take limits as $m,n\rightarrow\infty$ to get the result? Something doesn't seem quite right...
| We can simply reorder summands because they are all non-negative.
$$\Bigg(\sum_{m=-\infty}^\infty |f(m)|\Bigg)\cdot\Bigg(\sum_{n=-\infty}^\infty |g(n)|\Bigg)=\sum_{(m,n)\in\Bbb Z\times\Bbb Z}|f(m)||g(n)|=\sum_{n=-\infty}^\infty\sum_{m=-\infty}^\infty |f(m)||g(n-m)|$$
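For finitely supported sequences the inequality $\|f*g\|_1\le\|f\|_1\|g\|_1$ can be spot-checked directly; a sketch (helper names mine), with sequences stored as sparse dicts on $\mathbb{Z}$:

```python
import random

random.seed(2)

def conv(f, g):
    # (f*g)(n) = sum_m f(m) g(n-m) for finitely supported f, g : Z -> R
    out = {}
    for m, fm in f.items():
        for k, gk in g.items():
            out[m + k] = out.get(m + k, 0.0) + fm * gk
    return out

def l1(f):
    return sum(abs(v) for v in f.values())

for _ in range(100):
    f = {random.randint(-5, 5): random.uniform(-1, 1) for _ in range(6)}
    g = {random.randint(-5, 5): random.uniform(-1, 1) for _ in range(6)}
    assert l1(conv(f, g)) <= l1(f) * l1(g) + 1e-12
```

Signed entries make the inequality strict in general; equality holds for non-negative sequences, mirroring the reordering argument above.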
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3854284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does an identity exist for distributing the inverse for a product including nonsquare matrices? For example, if $A$ and $B$ are invertible square matrices, we can write $(AB)^{-1} = B^{-1} A^{-1}$.
Now, consider $A$ is an $n \times n$ matrix and $C$ is an $n \times m$ matrix. If $A$ is invertible, does an identity exist for distributing the inverse inside parenthesis of a product of matrices including a nonsquare matrix such as $C$?
For example, if $(C^T A C)^{-1}$ exists, does some identity exist for $(C^T A C)^{-1}$?
| A common generalization of an inverse to a non-square matrix $A$ is a Moore–Penrose inverse $A^+$. An equality $(AB)^+=B^+A^+$ holds in special cases, but not in general.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3854388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Primes in $\mathbb Z[i]$ I've been trying to proof that if $\pi\in Z[i]$ and $\pi$ is prime, then $\pi\overline \pi$ is a prime in $\mathbb Z$ or is the square of a prime in $\mathbb Z$.
I took a prime $p$ in $\mathbb Z$ such that $\pi\mid p$; then $\overline \pi \mid p$, which is why $\pi\overline\pi \mid p^2$. I am stuck there: I need to show that $\pi\overline \pi$ equals $p$ or $p^2$.
Thanks in advance :).
Let $\pi=a+bi$, with $a,b$ integers, and suppose that it is a prime. First check that $\bar\pi=a-bi$ is also a prime.
Let $n=\pi\bar\pi=a^2+b^2$ and let $r$ be a (rational) prime dividing the integer $n$.
If $r=2$, then the Gaussian prime $1+i$ divides $2$, hence divides $\pi\bar\pi$, so it divides $\pi$ or $\bar\pi$; since those are prime, $\pi$ is an associate of $1\pm i$ and $n=2$ is a prime.
If $r\equiv 1 \pmod 4$, then $r=c^2+d^2=(c+di)(c-di)$ for some integers $c,d$, and $c+di$ is a Gaussian prime dividing $(a+bi)(a-bi)$. Hence $c+di$ divides $a+bi$ or $a-bi$, and since those are prime it must be an associate of one of them; taking norms gives $n=r$, a prime.
If $r\equiv 3\pmod 4$, then $a^2\equiv -b^2 \pmod r$ forces $a\equiv b\equiv 0\pmod r$, because $-1$ is not a square modulo $r$. So $r$ divides $a+bi$; but $r$ itself is a Gaussian prime when $r\equiv 3\pmod 4$, so $\pi$ is an associate of $r$ and $n=r^2$.
Thus $n$ is either a prime or the square of a prime.
The opposite implication is obvious.
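The statement can be spot-checked for small Gaussian primes, using the standard classification of Gaussian primes (a sketch; helper names mine):

```python
def is_prime(n):
    # trial division, fine for small n
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_gaussian_prime(a, b):
    # standard classification: a+bi is a Gaussian prime iff
    #  - a^2 + b^2 is a rational prime (necessarily 2 or = 1 mod 4), or
    #  - one of a, b is 0 and the other is +-q with q prime, q = 3 mod 4
    if a == 0:
        return is_prime(abs(b)) and abs(b) % 4 == 3
    if b == 0:
        return is_prime(abs(a)) and abs(a) % 4 == 3
    return is_prime(a * a + b * b)

# for every Gaussian prime pi in a small box, pi * conj(pi) = a^2 + b^2
# is either a rational prime or the square of one
for a in range(-20, 21):
    for b in range(-20, 21):
        if is_gaussian_prime(a, b):
            n = a * a + b * b
            r = round(n ** 0.5)
            assert is_prime(n) or (r * r == n and is_prime(r))
```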
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3854526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
If $\operatorname{rank} T = \operatorname{rank} T^2$, how can I prove $\ker(T) \cap \text{Im}(T) = \{0\}$? Let $T: V \rightarrow V$ be a linear transformation. Assume that
$$
\operatorname{rank} T=\operatorname{rank} T^{2}
$$
Show that $\operatorname{Ker} T \cap \operatorname{Im} T=\{0\}$.
Actually, I have an idea.
I remember a corollary: $\dim V = \dim W$ if and only if $V$ and $W$ are isomorphic. Therefore $T$ is an isomorphism, and $T(v) = 0$ if and only if $v = 0$, so $\ker(T) = \{0\}$.
Also, $\operatorname{Im} T$ is a subspace of $V$, so $\operatorname{Im} T$ contains the zero vector. Then we get $\ker(T) \cap \operatorname{Im} T = \{0\}$.
But if I prove it this way, the assumption $\operatorname{rank} T = \operatorname{rank} T^2$ is never used.
One more thing: I am taking the linear algebra course here.
I have tried the homework but cannot find the solution. Has anyone else found it?
Well, you nearly got it right. The condition $\mathrm{rank}\;T = \mathrm{rank}\;T^2$ implies that if you restrict $T$ to the subspace $\mathrm{Im}\;T$, which of course is invariant, you get an isomorphism.
Thus, any vector in $\mathrm{Im}\;T$ except $\mathbf{0}$ is mapped by $T$ to a nonzero vector. This is equivalent to saying that $\mathrm{Im}\;T\;\cap\;\mathrm{Ker}\;T=\{\mathbf{0}\}$.
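The statement is field-independent, so it can be verified exhaustively over the smallest field; the brute-force sketch below (mine) checks all $512$ linear maps on $\mathbb{F}_2^3$, using that $|\operatorname{Im} T| = 2^{\operatorname{rank} T}$ over $\mathbb{F}_2$:

```python
from itertools import product

def matvec(T, v):
    # matrix-vector product over F_2
    return tuple(sum(t * x for t, x in zip(row, v)) % 2 for row in T)

def check(T):
    n = len(T)
    vecs = list(product((0, 1), repeat=n))
    zero = (0,) * n
    im1 = {matvec(T, v) for v in vecs}                 # Im T
    im2 = {matvec(T, matvec(T, v)) for v in vecs}      # Im T^2
    ker = {v for v in vecs if matvec(T, v) == zero}    # Ker T
    # over F_2, |Im T| = 2^rank, so equal sizes means equal ranks
    if len(im1) == len(im2):
        assert ker & im1 == {zero}

# exhaustively check all 2^9 = 512 linear maps on F_2^3
for bits in product((0, 1), repeat=9):
    check((bits[0:3], bits[3:6], bits[6:9]))
```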
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3854666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Assume $(G,\times)$ is a group and for $a,b \in G$: $ab=ba$, $\text{ord}(a)=n$, $\text{ord}(b)=m$.
1. Show that if $\gcd(m,n)=1$ then $G$ has an element of order $nm$.
2. If $m,n$ are arbitrary, then $G$ has an element of order $\text{lcm}(m,n)$.
Since $G$ is not necessarily cyclic, I don't have any idea how to start; any help is appreciated.
Lemma: Assume $(G,\times)$ is a group and $a,b \in G$, Moreover $ab=ba$.
let $\text{ord}(a)=n$ and $\text{ord}(b)=m$,then $\text{ord}(ab)\mid \text{lcm}(n,m)$.
$\text{lcm}(n,m)=ns$ and $\text{lcm}(n,m)=mr$ for some $r,s \in \mathbb Z^+$,then:
$$(ab)^{\text{lcm}(n,m)}$$ Since $ab=ba$ ,hence $$=a^{\text{lcm}(n,m)}b^{\text{lcm}(n,m)}$$
$$=a^{ns}b^{mr}=(a^n)^s(b^m)^r$$
$$=e^se^r=e$$
Follows $\text{ord}(ab)\mid \text{lcm}(n,m)$.
1. Since $\text{ord}(ab) \mid \text{lcm}(n,m)=\frac{nm}{\gcd(n,m)}$ and by assumption $\gcd(n,m)=1$, we get $\text{ord}(ab) \mid nm$.
2. If $\text{ord}(ab) \mid \text{lcm}(n,m)$, then there is $g \in G$ such that $g^{\text{lcm}(n,m)}=e$.
For the first question, this trick can help you: if $(ab)^k=a^kb^k=1$, then $a^k = b^{-k}$, so $a^{mk}=b^{-mk}=1$ and hence $n \mid mk$; since $\gcd(m,n)=1$ we have $n \mid k$, and by symmetry $m \mid k$. Because $\gcd(m,n)=1$, these together give $nm \mid k$, so $\text{ord}(ab)=nm$.
For the second one, write $\text{lcm}(m,n)=m'n'$ with $m' \mid m$, $n' \mid n$ and $\gcd(m',n')=1$: for each prime $p$, put the factor $p^{\max(v_p(m),v_p(n))}$ into $m'$ if $v_p(m)\ge v_p(n)$ and into $n'$ otherwise. Then $a^{m/m'}$ has order $m'$, $b^{n/n'}$ has order $n'$, and these two elements commute and have coprime orders, so by $(1)$ their product has order
$$
m'n'=\text{lcm}(m,n).
$$
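A concrete sanity check in $\mathbb{Z}_{12}\times\mathbb{Z}_{18}$, written additively (the names and the specific split $\text{lcm}(12,18)=36=4\cdot 9$ are my own illustration):

```python
from math import gcd

def order(g, mod_pair):
    # order of g != identity in Z_m x Z_n (additive), mod_pair = (m, n)
    m, n = mod_pair
    k, x = 1, g
    while x != (0, 0):
        x = ((x[0] + g[0]) % m, (x[1] + g[1]) % n)
        k += 1
    return k

m, n = 12, 18
a, b = (1, 0), (0, 1)                      # commuting, orders 12 and 18
assert order(a, (m, n)) == 12 and order(b, (m, n)) == 18

# split lcm(12,18) = 36 = m' * n' with m' | 12, n' | 18, gcd(m', n') = 1:
# the 2-part comes from 12 (m' = 4), the 3-part from 18 (n' = 9)
mp, np_ = 4, 9
assert m % mp == 0 and n % np_ == 0 and gcd(mp, np_) == 1 and mp * np_ == 36

ap = ((m // mp) * a[0] % m, (m // mp) * a[1] % n)    # a^(m/m'), order m'
bp = ((n // np_) * b[0] % m, (n // np_) * b[1] % n)  # b^(n/n'), order n'
c = ((ap[0] + bp[0]) % m, (ap[1] + bp[1]) % n)
assert order(c, (m, n)) == 36                        # = lcm(12, 18)
```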
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3854868",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Solving Riccati equation $y^{\prime}=x^2-y^2$. I was trying to draw the solution curves of the Riccati equation $y^{\prime}=x^2-y^2$, and I ran into a problem:
On the one hand, I tried substitution
$$
\left\{\begin{array}{l}
u=x+y
\\ v=x-y
\end{array}\right.
,
$$
which leads to
$$
\dfrac{\mathrm du}{\mathrm dv}
=\dfrac{\mathrm du}{\mathrm dy}\cdot\dfrac{\mathrm dy}{\mathrm dx}\cdot\dfrac{\mathrm dx}{\mathrm dv}
=\dfrac{\mathrm dy}{\mathrm dx}
=(x+y)(x-y)
=uv.
$$
This is separable and the solutions are
$$
u=0\text{ and }\ln\left|\dfrac{u}{C}\right|=\dfrac{1}{2}v^2\Rightarrow y=-x\text{ and }x+y=C\mathrm e^{\frac{(x-y)^2}{2}},
$$
where $C$ is an arbitrary constant.
On the other hand, I tried to draw the slope field, and the solution curve found above does not fit the field, since on $y=x$ the slope should be $0$.
I wonder if I've made any mistake. Any help would be appreciated:)
| $$y'=x^2-y^2$$
Why not the usual way to solve Riccati ODEs ?
Let $y=\frac{u'}{u}$
$$y'=\frac{u''}{u}-\frac{(u')^2}{u^2}=x^2-\left(\frac{u'}{u}\right)^2$$
$$u''-x^2u=0$$
This is a second-order ODE of Bessel type; see https://mathworld.wolfram.com/BesselDifferentialEquation.html From Eqs. (6)-(7) there, the solution is:
$$u=c_1x^{1/2}I_{1/4}(x^2/2)+c_2x^{1/2}I_{-1/4}(x^2/2)$$
$I_{\nu}(z)$ denotes the modified Bessel functions of first kind https://mathworld.wolfram.com/ModifiedBesselFunctionoftheFirstKind.html
Then differentiate with respect to $x$ to find $u'(x)$, and $y(x)=u'(x)/u(x)$.
Note: in the case of $I_{\pm 1/4}$ one can also use the parabolic cylinder functions. https://mathworld.wolfram.com/ParabolicCylinderFunction.html
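One way to sanity-check this method without Bessel routines (a sketch; names mine): build a solution of $u''=x^2u$ with $u(0)=1$, $u'(0)=0$ from its power series, whose coefficients satisfy $k(k-1)a_k=a_{k-4}$, and compare $y=u'/u$ against a direct RK4 integration of the Riccati equation.

```python
def u_and_up(x, terms=30):
    # power series of u'' = x^2 u with u(0) = 1, u'(0) = 0:
    # only k = 0 mod 4 contributes, a_k = a_{k-4} / (k (k - 1))
    a = {0: 1.0}
    for k in range(4, 4 * terms, 4):
        a[k] = a[k - 4] / (k * (k - 1))
    u = sum(c * x**k for k, c in a.items())
    up = sum(k * c * x**(k - 1) for k, c in a.items() if k > 0)
    return u, up

def riccati_rk4(x_end, h=1e-4):
    # integrate y' = x^2 - y^2 from y(0) = 0 by classical RK4
    f = lambda x, y: x * x - y * y
    x, y = 0.0, 0.0
    for _ in range(int(round(x_end / h))):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

u1, up1 = u_and_up(1.0)
assert abs(riccati_rk4(1.0) - up1 / u1) < 1e-8
```

The initial condition $y(0)=0$ corresponds to this particular series solution; other $c_1,c_2$ combinations give the other Riccati solutions.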
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3854981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Relation between reproducing kernel and kernel matrix Let $\mathcal{R} : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ denote the reproducing kernel of a Hilbert space $\mathcal{H}$ of functions on $[0,1]$ endowed with inner product $ \langle \cdot, \cdot \rangle_{\mathcal{H}}$. For my particular setting I am able to show that
$$ \sup_{x \in [0,1]} ||\mathcal{R}(x, \cdot)||_{\mathcal{H}} \leq C,$$
for some $C>0$.
Now let us imagine that we have a set of $n$ values $t_j$ in $[0,1]$, from which we evaluate $\mathcal{R}(t_i, t_j)$ and form the corresponding matrix $\mathbf{R}$.
What I am wondering is, if we can also establish a similar bound for the largest eigenvalue of this positive-semidefinite matrix. That is, if for a vector $\mathbf{a} \in \mathbb{R}^n$ we can have
$$ \sum_j^n \sum_i^n a_i a_j \mathcal{R}(t_i, t_j) \leq B C \sum_{j=1}^n a_j^2,$$
for another constant $B$.
All help is greatly appreciated, thank you.
Using the reproducing property and the triangle inequality:
\begin{aligned}
\sum_i^n \sum_j^n a_i a_j \mathcal{R}(t_i, t_j) &= \sum_i^n \sum_j^n a_i a_j \langle\mathcal{R}(t_i, \cdot), \mathcal{R}(t_j, \cdot)\rangle_\mathcal{H} \\
&= \Big\langle\sum_i^n a_i \mathcal{R}(t_i, \cdot), \sum_j^n a_j \mathcal{R}(t_j, \cdot)\Big\rangle_\mathcal{H}\\
&= \left\Vert \sum_i^n a_i \mathcal{R}(t_i, \cdot) \right\Vert^2_\mathcal{H}\\
&\leq \Bigg(\sum_i^n |a_i| \left\Vert \mathcal{R}(t_i, \cdot) \right\Vert_\mathcal{H}\Bigg)^2 \\
&\leq C^2 \left\Vert a \right\Vert_1^2 \\
&\leq n\, C^2 \left\Vert a \right\Vert_2^2,
\end{aligned}
where the last step uses $\left\Vert a \right\Vert_1 \leq \sqrt{n}\,\left\Vert a \right\Vert_2$ (Cauchy-Schwarz). So the quadratic form is at most $nC^2\sum_{j=1}^n a_j^2$; in the notation of the question one can take $B = nC$, and the largest eigenvalue of $\mathbf{R}$ is at most $nC^2$.
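A quick numeric check of the resulting bound $\lambda_{\max}(\mathbf{R}) \le nC^2$, using the Gaussian kernel for illustration (my choice of kernel, not from the question), for which $\Vert\mathcal R(x,\cdot)\Vert_{\mathcal H}^2 = \mathcal R(x,x) = 1$, i.e. $C=1$:

```python
import random
from math import exp

random.seed(3)

def R(x, y):
    # Gaussian kernel: ||R(x,.)||_H^2 = R(x, x) = 1, so C = 1 here
    return exp(-(x - y) ** 2)

n, C = 30, 1.0
t = [random.random() for _ in range(n)]
Rm = [[R(ti, tj) for tj in t] for ti in t]   # the kernel matrix

# the quadratic form stays between 0 (PSD) and n * C^2 * ||a||_2^2
for _ in range(200):
    a = [random.uniform(-1, 1) for _ in range(n)]
    quad = sum(a[i] * a[j] * Rm[i][j] for i in range(n) for j in range(n))
    assert -1e-9 <= quad <= n * C**2 * sum(ai * ai for ai in a) + 1e-9
```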
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3855150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Computing the Correlation Coefficient of Two Random Variables I am a bit confused on computing the $\rho$ for two random variables. Let $f(X,Y) = 1$ with support given by $-x < y < x$ and $0 < x < 1$. I know that the definition of $\rho$ is simply $\rho = \frac{cov(X,Y)}{\sigma_X\sigma_Y}$. Hence, I have computed \begin{equation} E[XY] =
\int_{0}^{1} \int_{-x}^{x} xy \,dy\,dx
\end{equation}
But clearly the above quantity is just $0$, and $E[Y] = 0$. Hence, by the definition of covariance $cov(X,Y) = E[XY] - E[X]E[Y]$, but both terms in the difference are zero, hence covariance is 0 and this implies $\rho = 0$. Have I done this computation incorrectly? I feel like there is something not quite right, because there is a dependency in the support of the joint PDF, and hence it seems like there should be non-zero correlation between these random variables. Just looking for what I did wrong here if I did in fact do something wrong. Thanks for reading!
| Your reasoning to support zero correlation is correct.
And yes, you're also correct in your assertion that $X,Y$ are dependent.
Independence implies uncorrelated, but uncorrelated doesn't imply independence.
$\;\;\;\;\;$
https://en.wikipedia.org/wiki/Independence_(probability_theory)#Expectation_and_covariance
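To see both points numerically, here is a small midpoint-quadrature check (pure Python; the grid sizes and the choice of $X^2,Y^2$ as witnesses of dependence are mine): the covariance terms vanish, while $\operatorname{Cov}(X^2,Y^2)=1/36\neq 0$, so $X$ and $Y$ are uncorrelated yet dependent.

```python
# Midpoint-rule expectations over the region -x < y < x, 0 < x < 1.
# Note f = 1 is a genuine density here: the region has area 1.
n = m = 400  # grid resolution (my choice)

def E(g):
    total = 0.0
    for i in range(n):
        x = (i + 0.5) / n            # midpoint in x, cell width 1/n
        wy = 2 * x / m               # y-cell width at this x
        for j in range(m):
            y = -x + (j + 0.5) * wy  # midpoint in y
            total += g(x, y) * wy / n
    return total

assert abs(E(lambda x, y: 1.0) - 1.0) < 1e-3        # density integrates to 1
assert abs(E(lambda x, y: x * y)) < 1e-9            # E[XY] = 0
assert abs(E(lambda x, y: y)) < 1e-9                # E[Y]  = 0
cov22 = E(lambda x, y: x*x*y*y) - E(lambda x, y: x*x) * E(lambda x, y: y*y)
assert abs(cov22 - 1/36) < 1e-3                     # Cov(X^2, Y^2) = 1/36 != 0
```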
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3855436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Problems with interesting, non-trivial analogues in finite fields I am wondering what problems* have interesting and non-trivial analogues in finite fields. For example, the Kakeya needle problem, which is usually stated in $\mathbb{R}^n$, can be asked in $\mathbb{F}_q^n$ with delightful results.
Kakeya Conjecture. The Kakeya Conjecture asserts that every set in $\mathbb{R}^n$ which contains a unit line segment in every direction has Hausdorff and Minkowski dimension $n$; this has been proven only for $n=1,2$. What about in $\mathbb{F}_q^n$? Rather than ask about dimension, we should ask for the minimum size of subset of $\mathbb{F}_q^n$ that contains a line in every direction; and it turns out this number is bounded below by $C_nq^n$, where $C_n$ is a constant dependent only on $n$.
*I use 'problems' as a shortening of 'problems, conjectures, theorems, etc.' for a more concise title; but I am interested in all of the above.
| The theory of group representations seeks to describe group elements as linear transformations of vector spaces. In the first instance, these were vector spaces over the field of complex numbers, but nowadays vector spaces over finite fields are of similar prominence.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3855605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 5,
"answer_id": 4
} |
There's a deeper concept behind this fake proof Whenever I start to get cocky about my math knowledge, some basic property of the complex plane puts me back in my place. I just came across this fake proof that 2 = 0:
In the shallowest sense, I think I know what the problem is here. The author deceptively exploits how squaring is a 2-to-1 mapping on $\mathbb{C} \setminus \{0\}$, and flips the sign of a root somewhere. If we use $-i$ as the square root of $-1$ instead of $i$, we get a tautology rather than a contradiction.
But what I'd like is a fuller description of the phenomenon underlying the trick. My guess is that the proof smuggles in some special property of $\mathbb{R}$ that is so basic that the naive reader assumes it holds in $\mathbb{C}$, even though it doesn't. Could anyone state what this property is? Is my guess on the right track?
| The proof tacitly assumes that there is a function $sqrt:\mathbb{C}\rightarrow\mathbb{C}$ (which it calls "$\sqrt{\cdot}$") with the following two properties: | {
"language": "en",
"url": "https://math.stackexchange.com/questions/3855767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Finding the smooth inverse of a function Problem: The mapping $\phi: S^2 \longrightarrow S^2 $ by $$\phi(x,y,z)=(x\cos z+y\sin z,x\sin z-y \cos z,z)$$ is a diffeomorphism. Where $S^2$ is a unit sphere in $\mathbb{R}^3$.
I've already shown that $\phi$ is smooth and bijective. The only thing I can't find is $\phi^{-1}$ that is smooth.
Any help would be much appreciated!
| Let $a=(x,y,z)\in S^2$. If we know that the derivative $d\phi_a: T_aS^2\to T_{\phi(a)}S^2$ is nonsingular, then we know that $\phi$ has a local inverse, defined and smooth in a neighborhood of $\phi(a)$.
Now, thinking for the moment of $\phi$ as a map on $\Bbb R^3$, we have
$$d\phi_a = \begin{bmatrix} \cos z & \sin z & -x\sin z + y\cos z \\
\sin z & -\cos z & x\cos z+y\sin z \\
0 & 0 & 1\end{bmatrix}.$$
The determinant of $d\phi_a$ is $-1$ and hence $d\phi_a$ is invertible as a map $T_a \Bbb R^3\to T_{\phi(a)}\Bbb R^3$. It follows that the restriction of $d\phi_a$ to a map $T_a S^2\to T_{\phi(a)}S^2$ must be nonsingular (why?).
Therefore, it follows that $\phi$ has a local smooth inverse mapping a neighborhood of $\phi(a)\in S^2$ to $a\in S^2$. [If you haven't seen this application before, you can deduce it by using charts or parametrizations at $a$ and $\phi(a)$ and reducing to a question about the mapping of an open set in $\Bbb R^2$ to an open set in $\Bbb R^2$.]
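In fact (this is easy to check from the formula, though not stated above): the $2\times2$ block of $\phi$ in the $(x,y)$-plane is a reflection for each fixed $z$, and a reflection squares to the identity, so $\phi\circ\phi=\mathrm{id}$ and $\phi^{-1}=\phi$ itself, which is smooth for the same reason $\phi$ is. A quick numerical sanity check:

```python
import math

def phi(x, y, z):
    # the map from the problem statement, viewed on R^3
    return (x * math.cos(z) + y * math.sin(z),
            x * math.sin(z) - y * math.cos(z),
            z)

# check phi(phi(p)) == p on a few points of the unit sphere
pts = [(1.0, 0.0, 0.0), (0.0, 0.6, 0.8), (0.36, 0.48, 0.8), (-0.6, 0.64, -0.48)]
for p in pts:
    q = phi(*phi(*p))
    assert all(abs(a - b) < 1e-12 for a, b in zip(p, q))
```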
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3855974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Prove $\frac{a^n−b^n}{a−b}=\sum _{k=0} ^{n−1} a^kb^{n−1−k}$ by Induction Let $a\ne b\in\Bbb R$. Show that for all $n\in\mathbb{N}$:
$$\frac{a^n−b^n}{a−b}=\sum _{k=0} ^{n−1} a^kb^{n−1−k}$$
I know this calls for a proof by induction; I can do the base case, but I am having trouble proving that if the statement holds for $n$ then it holds for $n+1$.
Please give me some hints on this. I tried subtracting the $n$ case from the $n+1$ case, but I can't make the left side equal the right side.
My knowledge is only induction proofs and a little bit of binomial theory.
| Hint: if $$f_n=\frac{a^n-b^n}{a-b},$$ you need a way to go from $f_n$ to $f_{n+1}$. Now
$$a^{n+1}-b^{n+1}=a\,\left(a^n-b^n\right)+(a-b)\,b^n,$$i.e.
$$f_{n+1}=a\,f_n+b^n.$$
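A quick numerical spot-check of both the recursion and the target identity (a sanity check, not a substitute for the induction; the values of $a,b$ are arbitrary):

```python
# f_n = (a^n - b^n)/(a - b); check the recursion f_{n+1} = a*f_n + b^n
# and the closed sum from the statement, for arbitrary a != b
a, b = 3.0, 2.0

def f(n):
    return (a ** n - b ** n) / (a - b)

for n in range(1, 10):
    assert abs(f(n + 1) - (a * f(n) + b ** n)) < 1e-9
    assert abs(f(n) - sum(a ** k * b ** (n - 1 - k) for k in range(n))) < 1e-9
```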
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3856294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How can $\frac{(-1)^{n+1}}{n^s} = \frac{1}{(2n-1)^s}-\frac{1}{(2n)^s}$? I am working through a paper here that demonstrates Riemann's analytic continuation of the zeta function $\zeta(s)=\sum_{n=1}^\infty \frac{1}{n^s}$ to the complex plane (except for the pole at $s=1$). At the bottom of page 5 in equation 13, the paper asserts (in the middle of a chain of equations) that
$$\begin{aligned} \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n^s}+\frac{2}{2^s}\sum_{n=1}^\infty \frac{1}{n^s} &= \sum_{n=1}^\infty \bigl(\frac{1}{(2n-1)^s}-\frac{1}{(2n)^s}+\frac{2}{(2n)^s}\bigr) \end{aligned}$$
Could someone please explain this step? This much is immediately obvious:
$$\begin{aligned} \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n^s}+\frac{2}{2^s}\sum_{n=1}^\infty \frac{1}{n^s} &= \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n^s}+\sum_{n=1}^\infty \frac{2}{(2n)^s} \\ &= \sum_{n=1}^\infty \biggl(\frac{(-1)^{n+1}}{n^s}+\frac{2}{(2n)^s}\biggr) \end{aligned}$$
But I am not at all clear why it should be the case that
$$\begin{aligned} \frac{(-1)^{n+1}}{n^s} &= \frac{1}{(2n-1)^s}-\frac{1}{(2n)^s} \end{aligned}$$
as the equation seems to imply. Clearly, I am missing something fairly fundamental, or have made some embarrassingly stupid error. Can anyone explain?
| When $n=2k$ you have
$$
\frac{(-1)^{n+1}}{n^s}=\frac{(-1)^{2k+1}}{(2k)^s}=-\frac{1}{(2k)^s}
$$
otherwise $n=2k-1$ and you have
$$
\frac{(-1)^{n+1}}{n^s}=\frac{(-1)^{2k}}{(2k-1)^s}=\frac{1}{(2k-1)^s}
$$
so you can rewrite your series as follows
$$
\sum_{n=1}^{+\infty}\frac{(-1)^{n+1}}{n^s}
=\sum_{k=1}^{+\infty}\left(\frac{1}{(2k-1)^s}
-\frac{1}{(2k)^s}\right)
$$
and relabeling $k$ as $n$ your final computation reads as
\begin{align}
\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n^s}+\frac{2}{2^s}\sum_{n=1}^\infty \frac{1}{n^s}
&= \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n^s}+\sum_{n=1}^\infty \frac{2}{(2n)^s} \\ &= \sum_{n=1}^\infty \left(\frac{1}{(2n-1)^s}-\frac{1}{(2n)^s}+\frac{2}{(2n)^s}\right)\\
&= \sum_{n=1}^\infty \left(\frac{1}{(2n-1)^s}+\frac{1}{(2n)^s}\right)\\
&= \sum_{n=1}^\infty \frac{1}{n^s}=\zeta(s)
\end{align}
The step you missed is the second equality above: you separate the odd and even values of $n$, in order to control the sign.
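The odd/even split is in fact an exact identity already at the level of partial sums, which makes it easy to sanity-check numerically (here with $s=2$; the truncation point $N$ is arbitrary):

```python
# exact finite identity behind the odd/even split:
#   sum_{n<=2N} (-1)^(n+1)/n^s  +  sum_{n<=N} 2/(2n)^s  =  sum_{n<=2N} 1/n^s
s, N = 2.0, 50
lhs = (sum((-1) ** (n + 1) / n ** s for n in range(1, 2 * N + 1))
       + sum(2 / (2 * n) ** s for n in range(1, N + 1)))
rhs = sum(1 / n ** s for n in range(1, 2 * N + 1))
assert abs(lhs - rhs) < 1e-12
```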
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3856673",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Solving Quadratic, Cubic and Higher Degree Congruence Equations I have a question about solving polynomial equations modulo some number.
Say we were to solve the following quadratic congruence equation:
$$x^2+x + 2 \equiv 0 \pmod{4}$$
We could of course just try the values $0, 1, 2, 3$ and check whether or not they solve the equation. But what is the proper procedure for something like: $$x^4+x^3+7x^2+x+3 \equiv 0 \pmod{45}$$
Do we keep using a brute force method or is there a more simple way of thinking about this that I don't know of.
I have seen that, for instance, quadratic congruence equations can be solved using the quadratic formula, given that you can find values to satisfy the root and division parts. However, according to a teacher this does not work in every case. An example is the aforementioned quadratic equation.
Using the quadratic formula we would arrive at $$x=\frac{-b\pm\sqrt{b^2-4c}}{2}$$
Firstly, finding the multiplicative inverse of $2$ would require solving $2y \equiv 1 \pmod{4}$, which has no solutions. This is despite the fact that the original quadratic congruence equation does have solutions.
To clarify my question: what is the best way to go about solving these kinds of equations when we cannot simply rely on checking every single case? Is there any way to reduce these equations? Also, does it matter whether we work modulo a prime or a composite number in these equations?
| Here’s a trick that helps when your modulus $m$ is composite (for example, in your example, $m=45=5\cdot 9$).
Suppose $P(x)$ is some polynomial and the modulus $m=ab$ is composite, and you wish to solve the congruence
$$P(x) \equiv 0 \pmod m \tag{i}$$
It follows that any solution $x$ to this congruence also satisfies
$$P(x) \equiv 0 \pmod a \tag{ii}$$
To find the solutions to this congruence by brute force, you only need to test $a$ numbers. Once you've found those solutions, you know that every solution to (i) must take the form $x=ay+x'$ where $x'$ is a solution to (ii) and $y$ ranges from $0$ to $b-1$. Now you only need to check $b$ possible values of $x$ in (i) for each such $x'$.
So if you’re dealing with a composite modulus $m=ab$, you can use this method to reduce the number of brute force checks from $ab$ to $a+b$. (In your example, you would only need to make $5+9 = 14$ calculations rather than $45$.)
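Here is a small Python sketch of this reduction, applied to the quartic from the question with $45 = 5\cdot 9$ (the function names are mine; the full brute-force solver is kept only to confirm that the two-stage method finds exactly the same roots):

```python
def P(x):
    # the quartic from the question
    return x**4 + x**3 + 7*x**2 + x + 3

def roots_brute(f, m):
    # all solutions of f(x) = 0 (mod m), testing every residue
    return [x for x in range(m) if f(x) % m == 0]

def roots_lifted(f, a, b):
    # step 1: solve mod a (a tests); step 2: for each solution x0,
    # test only the b candidates x = a*y + x0 modulo a*b
    out = []
    for x0 in roots_brute(f, a):
        for y in range(b):
            x = a * y + x0
            if f(x) % (a * b) == 0:
                out.append(x)
    return sorted(out)

# the two methods agree for m = 45 = 5 * 9
assert roots_lifted(P, 5, 9) == roots_brute(P, 45)
```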
When $m$ is prime, on the other hand...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3856851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Verify if the following limit exists (by the formal definition): $\lim_{(x,y)\to(0,0)}\frac{x^3-y^3}{x^2+y^2}$ First, I tried approaching the point through a few curves $(x=0,\,y=0,\, y=m\cdot x,\, y=x^2,\, x=y^2)$, and along all of those I got $0$ for my limit. Since this isn't enough to prove that the limit actually exists, I need to move to the formal definition.
The definition:
If $0<\sqrt{(x-x_0)^2+(y-y_0)^2}<\delta$ then $\left|f(x,y)-L\right|<\epsilon$
But I'm having a lot of difficulty in using this definition of limits to prove if it exists or not.
| Let $\delta = \varepsilon$. Then if $0<\sqrt{x^{2}+y^{2}}< \delta$ we have $\left | \frac{x^{3}-y^{3}}{x^{2}+y^{2}} \right |=\left | x\frac{x^{2}}{x^{2}+y^{2}}-y\frac{y^{2}}{x^{2}+y^{2}} \right |\leq \left | x \right |\frac{x^{2}}{x^{2}+y^{2}}+\left | y \right |\frac{y^{2}}{x^{2}+y^{2}}\leq \sqrt{x^{2}+y^{2}}\cdot\frac{x^{2}+y^{2}}{x^{2}+y^{2}}=\sqrt{x^{2}+y^{2}}< \delta = \varepsilon$, so $\left | f(x,y)-0 \right |< \varepsilon$ and the limit is $0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3857048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Prove $ \int_{\mathbb{R}^d} \frac{|e^{i\langle \xi, y \rangle} + e^{- i\langle \xi, y \rangle} - 2|^2 }{|y|^{d+2}}dy = c_d |\xi|^2 $ My question is: Prove that there exists some constant $c_d$ such that for any $\xi \in \mathbb{R}^d$:
$$\displaystyle \int_{\mathbb{R}^d} \dfrac{|e^{i\langle \xi, y \rangle} + e^{- i\langle \xi, y \rangle} - 2|^2 }{|y|^{d+2}}dy = c_d |\xi|^2 $$
where $\langle \xi, x \rangle = \displaystyle \sum_{j=1}^{d} \xi_j y_j$. I have
$$ \dfrac{|e^{i\langle \xi, y \rangle} + e^{- i\langle \xi, y \rangle} - 2|^2 }{|y|^{d+2}} = \dfrac{|2\cos\langle \xi, y \rangle - 2|^2 }{|y|^{d+2}} = \dfrac{16|\sin^2\dfrac{\langle \xi, y \rangle}{2} |^2 }{|y|^{d+2}} \leq \dfrac{16}{|y|^{d+2}}$$
The bound $\dfrac{16}{|y|^{d+2}}$ is integrable away from the origin since $d+2 > d$, and near the origin the numerator vanishes like $|\langle \xi, y \rangle|^4$, so the integrand is $O(|y|^{2-d})$ there; hence the mapping $y \mapsto \dfrac{|e^{i\langle \xi, y \rangle} + e^{- i\langle \xi, y \rangle} - 2|^2 }{|y|^{d+2}}$ is integrable on $\mathbb{R}^d$. But I don't know how to prove the equality above. Are there any ideas for this problem?
Thank you so much.
| Requested Form of the Integral
Let $T$ be an orthogonal linear transformation where $T(\xi)=|\xi|(1,0,0,\dots,0)$.
$$
\begin{align}
\int_{\mathbb{R}^d}\frac{\left|\,e^{i\langle\xi,y\rangle}+e^{-i\langle\xi,y\rangle}-2\,\right|^2}{|y|^{d+2}}\,\mathrm{d}y
&=\int_{\mathbb{R}^d}\frac{\left|\,e^{i|\xi|y_1}+e^{-i|\xi|y_1}-2\,\right|^2}{|y|^{d+2}}\,\mathrm{d}y\tag{1a}\\
&=|\xi|^2\underbrace{\int_{\mathbb{R}^d}\frac{\left|\,e^{iy_1}+e^{-iy_1}-2\,\right|^2}{|y|^{d+2}}\,\mathrm{d}y}_{c_d}\tag{1b}
\end{align}
$$
Explanation:
$\text{(1a)}$: substitute $y\mapsto T^{-1}(y)$, noting that
$\phantom{\text{(1a):}}$ the Jacobian of $T$ is $1$, so $\mathrm{d}T^{-1}(y)=\mathrm{d}y$
$\phantom{\text{(1a):}}$ $T$ is an isometry, so $\left|T^{-1}(y)\right|=|y|$
$\phantom{\text{(1a):}}$ $T$ is orthogonal, so $\left\langle\xi,T^{-1}(y)\right\rangle=\left\langle T(\xi),y\right\rangle=|\xi|y_1$
$\text{(1b)}$: substitute $y\mapsto y/|\xi|$
Computing the Constant
$$
\begin{align}
\int_{x_n=t}\frac{\mathrm{d}x}{\left(t^2+|x|^2\right)^{\frac{d+2}2}}
&=\int_0^\infty\frac{\omega_{d-2}r^{d-2}\,\mathrm{d}r}{\left(t^2+r^2\right)^{\frac{d+2}2}}\tag{2a}\\[3pt]
&=\frac{\omega_{d-2}}{|t|^3}\int_0^\infty\frac{r^{d-2}\,\mathrm{d}r}{\left(1+r^2\right)^{\frac{d+2}2}}\tag{2b}\\
&=\frac{\omega_{d-2}}{|t|^3}\frac{\sqrt\pi}4\frac{\Gamma\!\left(\frac{d-1}2\right)}{\Gamma\!\left(\frac{d+2}2\right)}\tag{2c}\\[6pt]
&=\frac1{|t|^3}\frac{\pi^{d/2}}{d\,\Gamma\!\left(\frac{d}2\right)}\tag{2d}
\end{align}
$$
Explanation:
$\text{(2a)}$: convert from rectangular to polar in $x_n=t$
$\text{(2b)}$: substitute $r\mapsto rt$
$\text{(2c)}$: Beta integral
$\text{(2d)}$: $\omega_{d-1}=\frac{2\pi^{d/2}}{\Gamma\left(\frac{d}2\right)}$
Since $e^{i\langle\xi,y\rangle}+e^{-i\langle\xi,y\rangle}\in\mathbb{R}$,
$$
\begin{align}
\left|\,e^{i\langle\xi,y\rangle}+e^{-i\langle\xi,y\rangle}-2\,\right|^2
&=\left(\,e^{i\langle\xi,y\rangle}+e^{-i\langle\xi,y\rangle}-2\,\right)^2\\
&=16\sin^4(\langle\xi,y\rangle/2)\tag3
\end{align}
$$
Therefore,
$$
\begin{align}
\int_{\mathbb{R}^d}\frac{\left|\,e^{i\langle\xi,y\rangle}+e^{-i\langle\xi,y\rangle}-2\,\right|^2}{|y|^{d+2}}\,\mathrm{d}y
&=\frac{16\pi^{d/2}}{d\,\Gamma\!\left(\frac{d}2\right)}\int_{-\infty}^\infty\frac{\sin^4(|\xi|t/2)}{|t|^3}\,\mathrm{d}t\tag{4a}\\
&=\frac{8\pi^{d/2}|\xi|^2}{d\,\Gamma\!\left(\frac{d}2\right)}\int_0^\infty\frac{\sin^4(t)}{t^3}\,\mathrm{d}t\tag{4b}\\
&=\frac{8\pi^{d/2}|\xi|^2}{d\,\Gamma\!\left(\frac{d}2\right)}\int_0^\infty\frac{\cos(2t)-\cos(4t)}{t}\,\mathrm{d}t\tag{4c}\\
&=\frac{8\pi^{d/2}|\xi|^2}{d\,\Gamma\!\left(\frac{d}2\right)}\,\log(2)\tag{4d}
\end{align}
$$
Explanation:
$\text{(4a)}$: apply $(2)$ and $(3)$
$\text{(4b)}$: substitute $t\mapsto2t/|\xi|$ and apply symmetry
$\text{(4c)}$: integrate by parts twice
$\text{(4d)}$: Frullani integral
Thus,
$$
c_d=\frac{8\pi^{d/2}\log(2)}{d\,\Gamma\!\left(\frac{d}2\right)}\tag5
$$
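Step $\text{(4d)}$ can be sanity-checked numerically: the midpoint rule on a long interval plus an explicit tail estimate (since $\sin^4 t$ averages $3/8$, the tail beyond $T$ contributes about $\frac{3}{16T^2}$) reproduces $\int_0^\infty\frac{\sin^4 t}{t^3}\,\mathrm{d}t=\log 2$. The cutoff $T$ and step count below are my choices:

```python
import math

def f(t):
    # integrand of (4b)-(4d); extended by 0 at t = 0 (it vanishes like t there)
    return math.sin(t) ** 4 / t ** 3 if t > 0 else 0.0

T, n = 200.0, 200_000
h = T / n
approx = h * sum(f((i + 0.5) * h) for i in range(n))  # midpoint rule on [0, T]
approx += 3.0 / (16.0 * T ** 2)  # tail: sin^4 averages 3/8, so int_T^inf ~ (3/8)/(2T^2)
assert abs(approx - math.log(2)) < 1e-4
```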
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3857248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Is the restriction on this system necessary? I'm trying to solve the following non-linear system:
$\sqrt{(x-x_1)^2+(y-y_1)^2}+s(t_2-t_1) = \sqrt{(x-x_2)^2 + (y-y_2)^2}$
$\sqrt{(x-x_2)^2+(y-y_2)^2}+s(t_3-t_2) = \sqrt{(x-x_3)^2 + (y-y_3)^2}$
$\sqrt{(x-x_3)^2+(y-y_3)^2}+s(t_1-t_3) = \sqrt{(x-x_1)^2 + (y-y_1)^2}$
For the unknowns, $x$ and $y$. The system is the solution to a localization problem, where, given the coordinates $[x_i,y_i]$ of three parties, the time at which each party "saw" some signal ($t_i$), and the speed of that signal, $s$, the coordinates of the source are given by $[x,y]$. Here, we assume that the coordinates and the source are coplanar.
I read somewhere that, to write the system above, one must have that $t_1 < t_2 < t_3$. Is this the case? And if so, why?
My goal right now is to solve for $[x,y]$ algorithmically, by setting the equations equal to 0 and plugging them into a root-finder. I want to do this hundreds of times, so as to localize hundreds of events, and in many cases this constraint is not met. Can I still simply plug the values in?
| The restriction that $t_1 < t_2 < t_3$ need not be met to solve these equations and get a meaningful answer. If the restriction were necessary, this still wouldn't be a problem from an "algorithmic" point of view. We could always just rename the observers in an order aligned with $t_1 < t_2 < t_3 \; .$ In other words we would arbitrarily state that the observer that saw the signal first has coordinates $(x_1, y_1) \;$ and continue on from there.
Once again though, the equations can be solved no matter which numbered observer sees the signal first. This is true in 3d as well.
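Since no ordering of the $t_i$ is needed, you can feed the three equations to any least-squares or Newton-type root finder. Below is a minimal, dependency-free sketch (a plain Gauss-Newton iteration with a finite-difference Jacobian; all names and the synthetic test geometry are mine, and in the test the signal reaches observers 1 and 3 simultaneously, so no strict ordering $t_1<t_2<t_3$ even exists):

```python
import math

def residuals(p, stations, times, s):
    # the three TDOA equations, written as r_i = d_{i+1} - d_i - s*(t_{i+1} - t_i)
    x, y = p
    d = [math.hypot(x - sx, y - sy) for sx, sy in stations]
    return [d[(i + 1) % 3] - d[i] - s * (times[(i + 1) % 3] - times[i])
            for i in range(3)]

def solve_tdoa(stations, times, s, x0, iters=100):
    # plain Gauss-Newton with a forward-difference Jacobian (2 unknowns, 3 residuals)
    p, h = list(x0), 1e-6
    for _ in range(iters):
        r = residuals(p, stations, times, s)
        J = []  # J[j][i] = d r_i / d p_j
        for j in range(2):
            q = list(p)
            q[j] += h
            rq = residuals(q, stations, times, s)
            J.append([(rq[i] - r[i]) / h for i in range(3)])
        A = [[sum(J[a][i] * J[b][i] for i in range(3)) for b in range(2)]
             for a in range(2)]
        g = [sum(J[a][i] * r[i] for i in range(3)) for a in range(2)]
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        if abs(det) < 1e-15:
            break
        # solve (J^T J) dp = -J^T r for the 2-vector dp
        dp = [(-A[1][1] * g[0] + A[0][1] * g[1]) / det,
              (A[1][0] * g[0] - A[0][0] * g[1]) / det]
        p = [p[0] + dp[0], p[1] + dp[1]]
        if math.hypot(dp[0], dp[1]) < 1e-12:
            break
    return p

# synthetic test: note t_1 = t_3 here, so the times are NOT strictly ordered
stations = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
source = (1.0, 1.5)
speed = 1.0
times = [math.hypot(source[0] - sx, source[1] - sy) / speed for sx, sy in stations]
est = solve_tdoa(stations, times, speed, x0=(1.2, 1.4))
assert abs(est[0] - source[0]) < 1e-5 and abs(est[1] - source[1]) < 1e-5
```

The initial guess matters (two hyperbola branches can intersect twice); in practice one would seed the iteration from a coarse grid search.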
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3857480",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$T(\phi)=T(\psi)$ if $\phi$ and $\psi$ agree on the support of $T$? I just started learning about distributions. I learned what the support of a distribution $T$ is. It is the complement of the biggest set $E$ such that for every test function $\phi$ supported in the set $E$ we have $T(\phi)=0$ (is this right?). Now I have a question. If $\phi, \psi$ are two test functions that are different but agree on the support of the distribution $T$, then $T(\phi)=T(\psi)$, right? Although I observed it is true for distributions like $T(\phi)=\int f\phi \,dx$, I don't know how to prove that in the general case.
| The support is rather the complement of the biggest open set $E$ such that ... (the rest is correct). This small change implies that the support is always a closed set.
The answer to your question is yes. I will rather work with $\phi_1$ and $\phi_2$. Since these two functions agree on the support $\Omega$ of $T$, we can write $\phi_1 = \psi + \tilde \phi_1$ and $\phi_2 = \psi + \tilde \phi_2$ where $\tilde \phi_i \equiv 0$ in $\Omega$ (using a smooth partition of unity over a set possibly slightly bigger than $\Omega$). Since each $\tilde \phi_i$ vanishes on the support of $T$, we have $T(\tilde \phi_i)=0$, and thus for both $i$ :
$$T(\phi_i)=T(\psi + \tilde \phi_i)=T(\psi)+T(\tilde \phi_i) = T(\psi).$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3857639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Solve $n < e^{6 \sqrt{n}}$
Find for which values of $n \in \mathbb{N}$ it holds that $$n < e^{6 \sqrt{n}}.$$
I tried to use the inequality $(1 + x) \leq e^x$, but from this, I can only find that the inequality holds for $n > 36$. But I need to get $n$ as small as possible.
I also tried induction on $n$, but I got stuck in the induction step; in particular, in showing that $e^{6\sqrt{n}} + 1 \leq e^{6\sqrt{n+1}}$.
I appreciate any help and suggestions.
| The inequality holds for all positive $x$, so easily that a proof almost seems overkill :-)
(Notice that $0<e^{6\sqrt0}$ and $1<\dfrac6{2\sqrt x}e^{6\sqrt x}$.)
Or using Taylor,
$$n<1+6\sqrt n+18n+\cdots$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3857782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Where am I going wrong with my proof $\int( \sin x +\cos x) dx=0$ Let's make the substitution $x=\frac{\pi}{2}-u$ to evaluate
$$I=\int (\sin x +\cos x) dx$$
After substituting, the integral becomes
$$
I=-\int (\cos u +\sin u)du=-I \implies I=0$$
What was wrong?
| With an indefinite integral, you cannot simply treat the new variable after substitution as a dummy variable.
Let the primitive you get after integration be $F$.
So what you basically showed is that $F(x) = -F(u)$, which is completely correct once you work out the form of the primitive and put $u =\frac{\pi}{2} -x$.
Actually, there is another aspect to indefinite integration, and that's the constant of integration, which can change after a substitution. So what you really should write is that $F(x) +c_1 = -F(u)+c_2$ but that doesn't matter in this particular case because you'll find that $c_1 = c_2$. But in another case, they may be different.
You might be getting confused with definite integrals, where you have bounds (which also get transformed with the substitution). In that case, you can treat the variable of integration as a dummy variable and doing algebraic manipulations and comparisons as you did would be correct because the form of the primitive would be the same, and what you're asserting is $F(b) - F(a) = - (F(B) - F(A))$, where $a, b$ are the initial bounds and $A, B$ are the respective transformed bounds after the substitution.
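To make this concrete for the integral in the question (a worked check; the constant of integration is suppressed): with the primitive $F(x)=\sin x-\cos x$,

```latex
F\!\left(\tfrac{\pi}{2}-u\right)
  = \sin\!\left(\tfrac{\pi}{2}-u\right)-\cos\!\left(\tfrac{\pi}{2}-u\right)
  = \cos u-\sin u
  = -F(u),
```

so the substitution computation $I=-I$ is really just the identity $F(x)=-F(u)$ under $x=\tfrac{\pi}{2}-u$, not a statement that the integral vanishes.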
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3857958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
} |
Quickly Computing the Dimension of a Tangent Space What is the fastest way to compute the dimension of the Zariski tangent space at the origin of $\operatorname{Spec} A$, where $A$ is the ring
$$A = k[w, x, y, z] /(wz - xy)?$$
We know it suffices to compute the dimension of the cotangent space $$\mathfrak m / \mathfrak m^2$$
where $\mathfrak m$ is the unique maximal ideal of $A_{(w, x, y, z)}$. Is there a nice way to quickly find this maximal ideal and the resulting dimension?
We know the maximal ideal of $A_{(w, x, y, z)}$ corresponds to the maximal ideal in $A$ contained in $(w, x, y, z)$ which corresponds to the maximal ideal of $k[w, x, y, z]$ containing $(wz - xy)$ contained in $(w, x, y, z)$, which should be just $(w, x, y, z)$, which feels morally wrong to me somehow. What am I missing?
| The tangent space of $V(I)$ at a point $p$ is the kernel of the Jacobian matrix with entries evaluated at $p$ (see here for more details if you need them). This makes computing the dimension of the tangent space fairly straightforward: pick generators for $I$, take derivatives, evaluate at $p$, and apply linear algebra to compute the nullity of the matrix. In this case, $I$ has a single generator $wz-xy$, and the Jacobian matrix is $$\begin{pmatrix} -y & -x & w & z \end{pmatrix}$$ which, evaluated at the point corresponding to the maximal ideal $(x,y,z,w)$ (the origin), is the zero matrix. So the tangent space is 4-dimensional at $(x,y,z,w)$. (For the connection between this and the Taylor-series approach mentioned in the comments, recall that taking the derivative and evaluating at a point is exactly what you do to get the coefficient of the linear term of the Taylor series of a function at that point.)
If you're looking for a more hands-on calculation, you can pick vector space bases of everything in sight and see what happens. $(x,y,z,w)/(x,y,z,w)^2$ has a vector space basis $\{x,y,z,w\}$, while $((x,y,z,w)/(wz-xy))/((x,y,z,w)^2/(wz-xy))$ has the same vector space basis.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3858119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Understand how to evaluate $\lim _{x\to 2}\frac{\sqrt{6-x}-2}{\sqrt{3-x}-1}$ We are given this limit to evaluate:
$$\lim _{x\to 2}\frac{\sqrt{6-x}-2}{\sqrt{3-x}-1}$$
In this example, if we try substitution it will lead to an indeterminate form $\frac{0}{0}$. So, in order to evaluate this limit, we can multiply this expression by the conjugate of the denominator.
$$\lim _{x\to 2}\left(\dfrac{\sqrt{6-x}-2}{\sqrt{3-x}-1} \cdot \dfrac{\sqrt{3-x}+1}{\sqrt{3-x}+1} \right) = \lim _{x\to 2}\left(\dfrac{(\sqrt{6-x}-2)(\sqrt{3-x}+1)}{2-x}\right) $$
But it still gives the indeterminate form $\frac{0}{0}$ .
But multiplying the expression by the conjugate of the demoninator and numerator we get
$$\lim _{x\to 2}\left(\dfrac{\sqrt{6-x}-2}{\sqrt{3-x}-1} \cdot \dfrac{\sqrt{3-x}+1}{\sqrt{3-x}+1} \cdot \dfrac{\sqrt{6-x}+2}{\sqrt{6-x}+2}\right) $$
$$\lim _{x\to 2}\left(\dfrac{6-x-4}{3-x-1} \cdot \dfrac{\sqrt{3-x}+1}{1} \cdot \dfrac{1}{\sqrt{6-x}+2}\right)$$
$$\lim _{x\to 2}\left(\dfrac{6-x-4}{3-x-1} \cdot \dfrac{\sqrt{3-x}+1}{\sqrt{6-x}+2}\right)$$
$$\lim _{x\to 2}\left(\dfrac{2-x}{2-x} \cdot \dfrac{\sqrt{3-x}+1}{\sqrt{6-x}+2}\right)$$
$$\lim _{x\to 2}\left(\dfrac{\sqrt{3-x}+1}{\sqrt{6-x}+2}\right)$$
Now we can evaluate the limit:
$$\lim _{x\to 2}\left(\dfrac{\sqrt{3-2}+1}{\sqrt{6-2}+2}\right) = \dfrac{1}{2}$$
Taking this example, I would like to understand why rationalization was used. What did it change in the expression so that the evaluation became possible? In particular, why multiply by the conjugates of the numerator and denominator?
I am still new to limits and Calculus, so anything concerning concepts I'm missing is appreciated. I still couldn't understand how a limit supposedly tending to $\frac{0}{0}$ turned out to be $\frac{1}{2}$; I really want to understand it.
Thanks in advance for your answer.
| If you have an expression $$\frac{f(x)}{g(x)}$$where both $\lim_{x\to a}f(x)=0$ and $\lim_{x\to a}g(x)=0$, you get a limit that looks like $\frac 00$. Now in this case, you write $$f(x)=(x-a)f_1(x)\\g(x)=(x-a)g_1(x)$$
Then
$$\lim_{x\to a}\frac{f(x)}{g(x)}=\lim_{x\to a}\frac{(x-a)f_1(x)}{(x-a)g_1(x)}=\lim_{x\to a}\frac{f_1(x)}{g_1(x)}$$
We can cancel the factor $x-a$ because it is nonzero for every $x\neq a$, no matter how close $x$ is to $a$. If the new limit exists, then you are done.
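A numerical sanity check of both the limit and the algebra (the sample points are arbitrary):

```python
import math

def f(x):
    # the raw expression
    return (math.sqrt(6 - x) - 2) / (math.sqrt(3 - x) - 1)

def g(x):
    # the simplified form after rationalising
    return (math.sqrt(3 - x) + 1) / (math.sqrt(6 - x) + 2)

for h in (1e-2, 1e-4, 1e-6):
    assert abs(f(2 - h) - 0.5) < 10 * h        # raw form approaches 1/2
    assert abs(f(2 - h) - g(2 - h)) < 1e-6     # the two forms agree for x != 2

assert g(2) == 0.5                              # simplified form is defined at x = 2
```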
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3858398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 7,
"answer_id": 5
} |
Counting binary strings of length $n$ that contain no two adjacent blocks of 1s of the same length? Is it possible to count exactly the number of binary strings of length $n$ that contain no two adjacent blocks of 1s of the same length? More precisely, if we represent the string as $0^{x_1}1^{y_1}0^{x_2}1^{y_2}\cdots 0^{x_{k-1}}1^{y_{k-1}}0^{x_k}$ where all $x_i,y_i \geq 1$ (except perhaps $x_1$ and $x_k$ which might be zero if the string starts or ends with a block of 1's), we should count a string as valid if $y_i\neq y_{i+1}$ for every $1\leq i \leq k-2$.
Positive examples : 1101011 (block sizes are 2-1-2), 00011001011 (block sizes are 2-1-2), 1001100011101 (block sizes are 1-2-3-1)
Negative examples : 1100011 (block sizes are 2-2), 0001010011 (block sizes are 1-1-2), 1101011011 (block sizes are 2-1-2-2)
The sequence for the first $16$ integers $n$ is: 2, 4, 7, 13, 24, 45, 83, 154, 285, 528, 979, 1815, 3364, 6235, 11555, 21414. For $n=3$, only the string 101 is invalid, whereas for $n=4$, the invalid strings are 1010, 0101 and 1001.
| An approximation for large $n$
The runs of $0$s and $1$s can be approximated by iid geometric random variables (with $p=1/2$, mean $2$). Hence we have in average $n/2$ runs, of which $n/4$ are runs of $1$s.
Then, the problem is asymptotically equivalent to: given $m=n/4$ iid Geometric variables $X_1, X_2 \cdots X_m$, find $P_m=$ the probability that $X_{i+1} \ne X_i$ for all $i$.
This does not seem a trivial problem, though (and I haven't found any reference).
A crude aproximation would be to assume that the events $X_{i+1} \ne X_i$ are independent. Under this assumption we get
$$P_m \approx P_2^{m-1}= (2/3)^{m-1} \tag 1$$
This approximation is not justified, and it does not seem to improve with $n$ increasing.
The exact value can be obtained by a recursion on the probabilities for each final value, which together with a GF gives me this recursion :
$$P_m = r(1,m) $$
$$r(z,m)= \frac{1}{2z-1} r(1,m-1) - r(2z,m-1) \tag 2$$
with the initial value $r(z,1)=\frac{1}{2z-1}$
Finally, the total number of valid sequences is $C_m = P_m \, 2^n$ ($n=4m$)
I've not yet found an explicit or asymptotic form for $(2)$.
Some values of $C_m$
n m r(2) iid(1) exact
4 1 16 16 13
8 2 170.6 170.6 154
12 3 1950.5 1820.4 1815
16 4 21637.3 19418.1 21414
20 5 243540.2 207126.1 252680
24 6 2720810.9 2209345.3 2981452
28 7 30515606.3 23566350.0 35179282
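The `exact` column above (and the sequence stated in the question) can be reproduced by a direct brute-force count over all $2^n$ strings, e.g.:

```python
from itertools import groupby

def is_valid(s):
    # lengths of the maximal runs of 1s, in order
    runs = [len(list(g)) for bit, g in groupby(s) if bit == '1']
    return all(runs[i] != runs[i + 1] for i in range(len(runs) - 1))

def count(n):
    # number of valid binary strings of length n
    return sum(is_valid(format(v, '0%db' % n)) for v in range(2 ** n))

assert [count(n) for n in range(1, 11)] == [2, 4, 7, 13, 24, 45, 83, 154, 285, 528]
```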
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3858517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 0
} |
Let $a_1,a_2,...,a_n>0$ be such that $\sum_{k=1}^n a_k=\frac{1}{2}n(n+1)$
Let $a_1,a_2,...,a_n>0$ be such that $\sum_{k=1}^n a_k=\frac{1}{2}n(n+1)$, then least value of
$$\sum_{k=1}^n\frac{(k^2-1)a_k+k^2+2k}{a_k^2+a_k+1}\text{ is,}$$
What I tried:
$$\sum_{k=1}^{n}k=\frac{1}{2}n(n+1)\implies k=a_k\tag{$\because\sum_{k=1}^n a_k=\frac{1}{2}n(n+1)$}$$
So replacing $a_k$ with $k$,
$$\begin{aligned}\require{cancel}
\sum_{k=1}^n\frac{(k^2-1)a_k+k^2+2k}{a_k^2+a_k+1}&=\sum_{k=1}^n\frac{(k^2-1)k+k^2+2k}{k^2+k+1}\\
&=\sum_{k=1}^{n}\frac{k^3-k+k^2+2k}{k^2+k+1}\\
&=\sum_{k=1}^{n}\frac{k^3+k+k^2}{k^2+k+1}\\
&=\sum_{k=1}^{n}\frac{k\color{red}{\bcancel{(k^2+k+1)}}}{\color{red}{\bcancel{(k^2+k+1)}}}\\
&=\sum_{k=1}^{n}k=\frac{1}{2}n(n+1)
\end{aligned}$$
Is this a correct method to solve this problem? because something feels off with this as the question asks for the least value but I get a direct result.
Please provide a correct method to solve if this is wrong
| Tips:
$$\frac{(k^2-1)a_k+k^2+2k}{a_k^2+a_k+1}=2k-a_k+\frac{(a_k+1)(a_k-k)^2}{a_k^2+a_k+1}\ge2k-a_k$$
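Summing the bound and using $\sum_k a_k=\frac{1}{2}n(n+1)$ gives $\sum_{k=1}^n\frac{(k^2-1)a_k+k^2+2k}{a_k^2+a_k+1}\ge \sum_{k=1}^n(2k-a_k)=n(n+1)-\frac{1}{2}n(n+1)=\frac{1}{2}n(n+1)$, with equality when $a_k=k$, matching the value computed in the question. The identity behind the hint is easy to verify numerically (the random sample points are my choice):

```python
import random

def lhs(a, k):
    return ((k * k - 1) * a + k * k + 2 * k) / (a * a + a + 1)

def rhs(a, k):
    return 2 * k - a + (a + 1) * (a - k) ** 2 / (a * a + a + 1)

random.seed(0)
for _ in range(1000):
    a = random.uniform(0.01, 10.0)
    k = random.randint(1, 20)
    assert abs(lhs(a, k) - rhs(a, k)) < 1e-9   # the algebraic identity
    assert lhs(a, k) >= 2 * k - a - 1e-12      # hence the lower bound
```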
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3858721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
In ∆ ABC, the altitude AD and the median BM are equal in length and they intersect inside ∆ ABC. If ABC is not an equilateral triangle, ∠MBC is I have no clue how to proceed with this question, since nothing is given except that the median to one side equals the altitude to another; moreover, the possibility of an equilateral triangle has been excluded.
|
Area can be found by: $\frac{1}{2}ab\sin\theta$, where $\theta$ is the angle between $a$ and $b$
$$\text{Area of $\Delta$ABC}=\text{$2\times$ Area of $\Delta$MBC}$$
$$\require{cancel}\frac{1}{2}\cdot\bcancel{\text{AD}}\cdot\text{BC}=2\times\frac{1}{2}\cdot\bcancel{\text{BM}}\cdot\text{BC}\cdot(\sin\angle\text{MBC})\tag{$\because$ AD=BM}$$
$$\therefore\sin\angle\text{MBC}=\tfrac{1}{2}\implies\fbox{$\angle\text{MBC}=30^\circ$}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3858865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Any alternate proof for $2^n>n$? The normal approach for these kinds of problems is to use mathematical induction and prove that $2^n>n$ for any natural number $n$.
Case 1: $(n=1)$
$2^1=2>1$, thus the formula holds for $n=1$
Case 2: (let us assume that this statement holds for any arbitrary natural number $m$)
That implies $2^m>m$ for some natural number $m$.
Then,
$2^{m+1}=2^m \cdot 2>m \cdot 2 \geq m+1$
Thus as the statement holds for an arbitrary natural number $m$ implies that it holds for $m+1$ and thus by mathematical induction, it is proved that $2^n>n$ for any natural number $n$.
Are there any other ways to prove this?
I tried proving this by contradiction, assuming initially $2^n \leq n$, but couldn't get very far.
Any help or idea would be very much appreciated.
| By a combinatoric argument, $2^n$ is the number of subsets of a set with $n$ elements which is always greater than the number of elements.
Refer to the related
*
*The total number of subsets is $2^n$ for $n$ elements
As an alternative, by calculus, define
$$f(x)=2^x-x \implies f'(x) =2^x \log 2-1$$
with $f(1)=1>0$ and $f'(x)\ge 2\log 2-1>0$ for all $x\ge 1$, so $f$ is increasing there; therefore $\forall x\ge 1$
$$2^x-x \ge 1>0 \implies 2^x> x$$
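The combinatorial argument can be checked directly for small $n$ by enumerating subsets:

```python
from itertools import combinations

def num_subsets(n):
    # Count all subsets of an n-element set by brute-force enumeration
    return sum(1 for k in range(n + 1)
               for _ in combinations(range(n), k))

for n in range(12):
    assert num_subsets(n) == 2 ** n   # a set of n elements has 2^n subsets
    assert 2 ** n > n                 # and that count exceeds n itself
```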
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3859134",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
For $n \in \mathbb{N}$ and $W \leq \mathbb{F}^n$, there exists a homogeneous system of linear equations whose solution space is $W$
For $n \in \mathbb{N}$ and $W \leq \mathbb{F}^n$, show that there exists a homogeneous system of linear equations whose solution space is $W$.
Here's my work:
Since $W \leq \mathbb{F}^n$, $k = dim(W) \leq dim(\mathbb{F}^n)$. Let's say that $\{w_1,w_2,...,w_k\}$ is a basis of $W$. Now, construct a matrix $A$ (of size $k \times n$) such that its rows are elements from the basis of $W$, stacked together. The row space of $A$ is $W$, so the row space of its row-echelon form is $W$ too. At this point, I'm stuck! I'm trying to come up with a homogeneous system with the help of $A$, though there may exist other easier ways of approaching this problem.
Could someone show me the light?
P.S. $W \leq \mathbb{F}^n$ stands for $W$ is a subspace of $\mathbb{F}^n$.
P.P.S. Isn't this equivalent to saying that $W$ is the null-space of some matrix? Can we go ahead along these lines, and construct a matrix $P$ such that $Pw = 0$ for all $w \in W$?
| I figured something out myself, so I'll post it. Let $\{w_1,w_2,...,w_k\}$ be a basis of $W$ and let's extend this set to a basis of $\mathbb{F}^n$, to obtain $\{w_1,w_2,...,w_n\}$.
Now, if we define a linear map $T: \mathbb{F^n} \to \mathbb{F^n}$, such that $T(w_i) = 0$ for $1 \leq i \leq k$ and $T(w_j) = w_j$ for $k+1 \leq j \leq n$. As a side-note, we can see that $\rm{dim}(\rm{null}(T)) = k$ & $\rm{dim}(\rm{range}(T)) = n-k$. Consider the matrix $A$ corresponding to this linear map $T$. Clearly, $Ax = 0$ is the desired system of homogeneous equations!
It remains to verify that this construction of $A$ actually works, i.e. the solution space of $Ax = 0$ is $W$ and only $W$ - but I'll not include that here for brevity.
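Over $\mathbb{R}$ this can be made concrete with a slightly different (orthogonal-complement) construction than the linear-map one above: take the rows of $A$ to be a basis of $W^\perp$, obtained here from the SVD. A sketch, with an example subspace of my own choosing:

```python
import numpy as np

def system_for_subspace(basis_rows, tol=1e-10):
    """Given a basis of W (rows of a k x n matrix), return a matrix A
    such that the solution space of Ax = 0 is exactly W (over the reals)."""
    B = np.asarray(basis_rows, dtype=float)
    # Right singular vectors beyond the rank of B span the orthogonal
    # complement of the row space of B, i.e. of W.
    _, s, Vt = np.linalg.svd(B)
    rank = int(np.sum(s > tol))
    return Vt[rank:]               # (n - k) x n coefficient matrix

# Example: W = span{(1,1,0), (0,1,1)} in R^3
A = system_for_subspace([[1, 1, 0], [0, 1, 1]])
assert np.allclose(A @ np.array([1, 1, 0]), 0)   # basis vectors solve Ax = 0
assert np.allclose(A @ np.array([0, 1, 1]), 0)
assert A.shape == (1, 3)           # n - k = 1 equation suffices here
```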
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3859288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
Limit to infinity rule for fractions? I am reading a book and it says to solve limits to infinity with a fraction such as:
$$\frac{5X^2 + 8X - 3}{3X^2 + 2}$$
We divide the numerator and denominator by the highest power of X in the DENOMINATOR so in this case it is $X^2$. I get this helps simplify the equation, but what is to prevent someone from dividing by a higher power like $X^3$? All components would evaluate to 0.
Is there another rule for limits that I am not aware of?
Thanks!
| Yes we can divide by $X^3$ but we obtain
$$\frac{5X^2 + 8X - 3}{3X^2 + 2}=\frac{\frac 5 X + \frac 8{X^2} - \frac 3{X^3}}{\frac 3X + \frac2{X^3}}$$
which is again an indeterminate form.
In general, to avoid this, the standard approach is to factor out the dominating term from the numerator and the denominator.
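A quick numerical check that the dominating-term heuristic gives the right limit $5/3$ here:

```python
def f(x):
    return (5 * x**2 + 8 * x - 3) / (3 * x**2 + 2)

# The x^2 terms dominate, so f(x) -> 5/3; the gap shrinks like 1/x.
for x in (1e3, 1e6, 1e9):
    assert abs(f(x) - 5 / 3) < 10 / x
```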
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3859443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
possible bin packing question What is known: you are given 3 packs of boards of different lengths, i.e. pack A is 3.9m, B is 3m and C is 2.4m, and an arbitrary distance to cover, d. How do I pose and then solve this question?
3.9A + 3B + 2.4C ~= d ??
Obviously A, B and C are positive integer values and the aim is to get as close to d as possible.
N.B. this is a real-life problem I encounter most days and normally solve by trial and error
| You can solve the problem via mixed integer linear programming as follows. Let $S\ge 0$ and $T \ge 0$ be surplus and slack variables, respectively. The problem is to minimize $S+T$ subject to
$$3.9 A + 3 B + 2.4 C - S + T = d.$$
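Before reaching for a MILP solver, a brute-force search is enough for lengths this small. This sketch uses my own example target d = 10 m, works in integer tenths of a metre to avoid floating-point drift, and allows a count of zero for any board type:

```python
from itertools import product

def best_combo(d_tenths, lengths_tenths=(39, 30, 24)):
    """Minimize |39A + 30B + 24C - d| over non-negative integers A, B, C."""
    best = None
    # Each count is bounded by the first value that overshoots d.
    bounds = [d_tenths // L + 2 for L in lengths_tenths]
    for counts in product(*(range(b) for b in bounds)):
        total = sum(c * L for c, L in zip(counts, lengths_tenths))
        gap = abs(total - d_tenths)
        if best is None or gap < best[0]:
            best = (gap, counts, total)
    return best

gap, (A, B, C), total = best_combo(100)   # d = 10.0 m
assert gap == 1 and total == 99           # best: 1 x 3.9 + 2 x 3.0 = 9.9 m
```

(An exact 10.0 m is impossible here: 39A + 30B + 24C is always divisible by 3, while 100 is not.)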
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3859632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Inequality sign change with logarithm
Why does the inequality sign change when applying a logarithm on both sides, with the base less than $1$?
I came across the following math which I solved if 2 ways,
$$
\left(\frac{1}{2}\right)^n < \frac{1}{4}\\
n\log\left(\frac{1}{2}\right)< \log\left(\frac{1}{4}\right)\\
-0.301n < -0.602 \\
n > 2
$$
The second method is,
$$
\left(\frac{1}{2}\right)^n < \frac{1}{4}\\
n\log_{\frac{1}{2}} \left(\frac{1}{2}\right)< \log_{\frac{1}{2}} \left(\frac{1}{4}\right)\\
n < 2 \\
$$
Now I know the first one is the correct answer, but what I don't understand why the second method failed to give the correct inequality. Could someone please explain?
Another general question: if instead of values they were variables, meaning instead of $\frac{1}{2}$ it was $A$, and instead of $\frac{1}{4}$ it was $B$, how would I attempt to solve it, since with the first method I wouldn't know whether $\log\left(A\right)$ was negative or positive.
| You are actually computing:
$$n \log \frac{1}{2} < \log \frac{1}{4}$$
$$\frac{n \log 1/2}{\log 1/2} \color{red}{>} \frac{\log 1/4}{\log 1/2}$$
since $\log \frac{1}{2}$ is negative, and multiplying / dividing by a negative number flips the sign. Since $f(x) = \log x$ is monotonically increasing ($x > 0$) and $\log 1 = 0$, when the base is less than $1$, $\log b$ is negative and you will need to flip the sign.
By the change of base formula, this is equivalent to:
$$n\log_{\frac{1}{2}} \left(\frac{1}{2}\right) > \log_{\frac{1}{2}} \left(\frac{1}{4}\right)$$
$$n > 2$$
as you have already said.
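A two-line numerical check that the solution set really is $n>2$ for integers (note $n=2$ gives equality, not strict inequality):

```python
# Smallest positive integer n with (1/2)^n < 1/4: consistent with n > 2.
smallest = next(n for n in range(1, 50) if (1 / 2) ** n < 1 / 4)
assert smallest == 3
assert not (1 / 2) ** 2 < 1 / 4   # n = 2: (1/2)^2 equals 1/4 exactly
```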
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3860116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Does $M\otimes_R N \cong N \otimes_R M$ hold for modules $M, N$ over noncommutative ring $R$? Here $M$ and $N$ are both left $R$-module. I have seen that $M\otimes_R N \cong N \otimes_R M$ is meaningful only when $R$ is commutative, but I can't see the reason.
In the noncommutative case, the tensor product of two left $R$-modules $M,N$ could be defined as a left $R$-module $M\otimes_R N$, right (although it seems that it's useless)? And then we could ask whether $M\otimes_R N \cong N \otimes_R M$ always holds as left $R$-modules. I think it's true but I can't see why this is meaningless. Could you give some hints? Thanks in advance.
| First of all, note that $\Bbb Z$ acts naturally on the other side on every left or right module, and that if $R$ is a commutative ring, then we can regard any $R$-module as an $R$-$R$-bimodule.
That said, every module can be regarded as a bimodule.
The tensor product in the noncommutative setting rather serves as a composition-like operation of bimodules:
If $M$ is an $A$-$B$-bimodule and $N$ is a $B$-$C$-bimodule, then the thing we can naturally obtain is the tensor product $M\otimes_BN$ as an $A$-$C$-bimodule.
Its construction is similar, we just need to take care on the left and right actions, so that the free Abelian group on $M\times N$ can be quotiented out by $(mb,\,n)\sim (m,\,bn)$ (among the other rules to ensure distributivity).
Note that in this setting, the actions of $B$ are 'swallowed' by the tensor product, but the actions of $A$ (from left on $M$) and of $C$ (from right on $N$) are naturally preserved.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3860255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why is this particular case often used to introduce the property $P(A \cup B) = P(A) + P(B) - P(A \cap B)$? I'm taking up a probability course and my teacher, when explaining the fundamental properties of probability, listed this one:
$$\text{If } A \text{ and }B \text{ are mutually exclusive events, then }P(A \cup B) = P(A) + P(B)$$
Then, later, under an "other properties" list, you had:
$$P(A \cup B) = P(A) + P(B) - P(A \cap B)$$
which, per my understanding, is the general case of the first statement.
I have seen this approach taken in some books and by other teachers as well, where the particular case where the probability of the intersection is $0$ is treated as the "main" case, and the general case is listed separately, as if the two had no correlation, whereas the former is really just the latter with the additional hypothesis that $A \cap B = \emptyset$.
Is there any reason why this is? Are there any cases in which, mathematically, it makes more sense to treat a particular case as the main instance of a property or theorem?
This is best viewed by drawing some Venn diagrams.
In the first case, if the events are said to be mutually exclusive (never touching or intersecting), then we can simply add up the elements in A and B. The Venn diagram for this would look like two circles labeled A and B, separated from each other.
Otherwise, if nothing is said about being mutually exclusive, the events may overlap, and so we need to subtract the overlapping part so as not to double count the middle area (because we already include all of A and all of B). The Venn diagram for this would be two circles labeled A and B with some overlap in the middle.
The two instances mean two different things, and one usually starts by teaching the non-overlapping property before learning the general one.
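Both cases can be checked concretely by counting with finite sets; here is a one-die example of my own:

```python
from fractions import Fraction

omega = set(range(1, 7))               # one roll of a fair die
A = {x for x in omega if x % 2 == 0}   # even rolls: {2, 4, 6}
B = {x for x in omega if x > 3}        # high rolls: {4, 5, 6}

def P(event):
    return Fraction(len(event), len(omega))

# General rule: the overlap is subtracted so it is not counted twice.
assert P(A | B) == P(A) + P(B) - P(A & B)   # 2/3 = 1/2 + 1/2 - 1/3

# Mutually exclusive special case: empty intersection, correction term 0.
C, D = {1}, {6}
assert C & D == set()
assert P(C | D) == P(C) + P(D)
```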
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3860441",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Rotating vectors in planes Suppose I have a free vector $ \vec{w}$ and I have a plane $ P$ described the following way:
$$ \vec{r} = \vec{r_o} + a \vec{u} + b \vec{v}$$
Where $a,b$ are parameters to vary and $ \vec{u}$ and $ \vec{v}$ are vectors in the plane and $ \vec{r_o}$ is position vector to some vector in the plane
Suppose I wish to rotate the component of $ \vec{w}$ in the plane $P$ about an axis parallel to the normal of $P$. How would I write out the rotated new vector $ \vec{w'}$, which has the same component as $\vec{w}$ perpendicular to the plane, with the part parallel to the plane rotated?
I know to start I'd have to split up $ \vec{w}$ into components perpendicular and parallel to plane as follows;
$$ \vec{w} = \vec{w}_{\parallel} + \vec{w}_{\perp}$$
Not sure what I do after this
Visual depiction:
Legend:
Black=original vector
Orange= vector part parallel to plane
Green= vector part parallel to plane which is rotated
Red= the new vector with the same perpendicular component by parallel part along plane rotated
| Given $\vec w_\parallel$ and $\vec w_\perp,$ let
$$\hat n = \frac{1}{\|\vec{w}_\perp\|} \vec{w}_\perp. $$
Then $\hat n$ is a unit normal vector to the plane.
Further, let
$$ \vec w_= = \hat n \times \vec w_\parallel.$$
(The subscript $=$ here has no particular significance except that it looks somewhat like $\parallel$ rotated ninety degrees.)
Then since $\hat n$ and $\vec w_\parallel$ are orthogonal,
$\vec w_=$ is a vector in the plane $P$ orthogonal to $\vec w_\parallel$.
Further, since $\hat n$ is a unit vector, $\vec w_=$ has the same magnitude as $\vec w_\parallel$.
Now to rotate $\vec w$ by angle $\theta$ around an axis perpendicular to $P,$ let
$$ \vec w' = \vec w_\perp + (\cos \theta)\vec w_\parallel + (\sin\theta)\vec w_=.$$
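A NumPy sketch of this recipe (one deliberate change: I take the unit normal from $\vec u\times\vec v$ rather than from $\vec w_\perp$, since the latter fails when $\vec w$ already lies in the plane):

```python
import numpy as np

def rotate_in_plane(w, u, v, theta):
    """Rotate the component of w lying in the plane spanned by u, v
    by angle theta about the plane's normal; the perpendicular
    component of w is left unchanged."""
    n_hat = np.cross(u, v)
    n_hat = n_hat / np.linalg.norm(n_hat)      # unit normal to the plane
    w_perp = np.dot(w, n_hat) * n_hat          # component along the normal
    w_par = w - w_perp                         # component in the plane
    w_eq = np.cross(n_hat, w_par)              # w_par rotated 90 deg in-plane
    return w_perp + np.cos(theta) * w_par + np.sin(theta) * w_eq

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])                  # plane z = const, normal z-hat
w = np.array([3.0, 0.0, 4.0])

w2 = rotate_in_plane(w, u, v, np.pi / 2)
assert np.allclose(w2, [0.0, 3.0, 4.0])        # in-plane part (3,0) -> (0,3)
assert np.isclose(np.linalg.norm(w2), np.linalg.norm(w))  # length preserved
```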
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3860609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Writing Zeta Function In Terms Of The J-Function I am reading through John Derbyshire's "Prime Obsession" and am struggling to understand his argument for why $\frac{1}{s} \log{\zeta(s)}=\int_{0}^{\infty} J(x)x^{-s-1}dx$ where $J(x)$ is defined as $\pi(x)+\frac{1}{2}\pi(\sqrt{x})+\frac{1}{3}\pi(\sqrt[3]{x})+\frac{1}{4}\pi(\sqrt[4]{x})+\frac{1}{5}\pi(\sqrt[5]{x})+...$
Here is what I am getting so far:
*
*I know $\zeta(s)={ \prod_{p} \left(1-p^{-s}\right)^{-1}}$.
*Taking the logarithm, $\log\left(\zeta(s)\right)=-\log(1-\frac{1}{2^s})-\log(1-\frac{1}{3^s})-\log(1-\frac{1}{5^s})+...$
*Recall $S=\sum_{k=0}^{n-1}a\cdot r^k=\frac{1}{1-r}$ whenever $a=1$ and $r\in(-1,1)$. Taking the integral, we have $\int{\frac{1}{1-r}}=\int{1+r+r^2+r^3+...}$, and $-\log(1-r)=r+\frac{r^2}{2}+\frac{r^3}{3}+\frac{r^4}{4}+...$. Then since $0 < \lvert \frac{1}{p^s} \rvert<1$, we can write each term in Euler's product formula as an infinite sum. For example, $-\log(1-\frac{1}{2^s})=\frac{1}{2^s}+\left(\frac{1}{2}\cdot\left(\frac{1}{2^s}\right)^2\right)+\left(\frac{1}{3}\cdot\left(\frac{1}{2^s}\right)^3\right)+\left(\frac{1}{4}\cdot\left(\frac{1}{2^s}\right)^4\right)\dots$
*Any term in this infinite sum of infinite sums can be written as an integral. For example, $\left(\frac{1}{3}\cdot\left(\frac{1}{2^s}\right)^3\right)=\frac{1}{3}\times\frac{1}{2^{3s}}=\frac{1}{3}\cdot{s}\cdot \int_{2^3}^{\infty}x^{-s-1}\: dx$ since $\int_{2^3}^{\infty} x^{-s-1}dx=\left(\frac{1}{s}\cdot\frac{-1}{x^s}\right)\biggr\rvert_{8}^{\infty}=\left(0\right)-\left(\frac{1}{s}\cdot\frac{-1}{8^s}\right)=\frac{1}{s}\times\frac{1}{8^s}$ which is precisely $\frac{s}{3}$ multiples of $\frac{1}{3}\times\frac{1}{2^{3s}}$.
*This is where I am not following. Derbyshire says that this specific term forms a "strip" under the J-Function. Even though the J-Function is a step function, if you think of the integral as area under the curve, the example in the previous step should not be rectangular. Another point that I don't understand is why $\int_{0}^{\infty} J(x)x^{-s-1}dx=\left[\int_{2}^{\infty} \left(\frac{1}{1}\cdot x^{-s-1} dx\right)+\int_{2^2}^{\infty} \left(\frac{1}{2}\cdot x^{-s-1} dx\right)+\int_{2^3}^{\infty} \left(\frac{1}{3}\cdot x^{-s-1} dx\right)+...\right]+\left[\int_{3}^{\infty} \left(\frac{1}{1}\cdot x^{-s-1} dx\right)+\int_{3^2}^{\infty} \left(\frac{1}{2}\cdot x^{-s-1} dx\right)+\int_{3^3}^{\infty} \left(\frac{1}{3}\cdot x^{-s-1} dx\right)+...\right]+\left[\int_{5}^{\infty} \left(\frac{1}{1}\cdot x^{-s-1} dx\right)+\int_{5^2}^{\infty} \left(\frac{1}{2}\cdot x^{-s-1} dx\right)+\int_{5^3}^{\infty} \left(\frac{1}{3}\cdot x^{-s-1} dx\right)+...\right]+...$.
Any insights into this problem?
Let me explain a little of what reuns was intimating in his/her answer (if it's still of benefit now). Note that we have
$$\int_n^\infty x^{-s-1}dx=\left.\frac{x^{-s}}{-s}\right|_n^\infty=\frac{1}{s}n^{-s}\,.$$
It then follows that
$$\sum_na_nn^{-s}=s \sum_na_n \int_n^\infty x^{-s-1}dx\,.$$
The next step —which I believe is what you’re interested in understanding—is getting the summation inside the integral. To see that in a step-wise fashion, define the following step-function
$$\chi(n,x):=\left\lbrace
\begin{array}{ll}
1& \mbox{if $n\le x$}\\
0& \mbox{if $n>x$}
\end{array}
\right.$$
then observe that
$$\int_1^\infty\chi(n,x)x^{-s-1}dx= \int_n^\infty\chi(n,x)x^{-s-1}dx+ \int_1^n\chi(n,x)x^{-s-1}dx$$
but the rightmost integral is simply equal to $0$ since $\chi(n,\cdot)$ vanishes on the interval $x\in(1,n)$ and we obtain
$$\int_1^\infty\chi(n,x)x^{-s-1}dx= \int_n^\infty x^{-s-1}dx$$
We can therefore conveniently rewrite our original integral as
$$\sum_na_nn^{-s}=s \sum_na_n \int_1^\infty\chi(n,x)x^{-s-1}dx$$
$$= s \int_1^\infty \sum_na_n \chi(n,x)x^{-s-1}dx$$
$$= s \int_1^\infty \sum_{n\le x}a_n x^{-s-1}dx\,.$$
We can now apply this identity to the $J$ function; we recall that the $J$ function is the same as
$$J(x)=\sum_{k\ge 1}\sum_{p^k\le x}\frac{1}{k}$$
So from the logarithm of the Riemann function, and using the definition that $a_n=\frac{1}{k}$ if $n=p^k$ in our just-derived identity, we obtain
$$\log\zeta(s)=\sum_{k\ge 1}\sum_{p~\text{prime}}\frac{1}{k}p^{-sk}$$
$$= \sum_{k\ge 1} s \int_1^\infty \sum_{p^k\le x}\frac{1}{k}x^{-s-1}dx$$
$$= s \int_1^\infty \sum_{k\ge 1}\sum_{p^k\le x}\frac{1}{k}x^{-s-1}dx$$
$$=s\int_1^\infty J(x) x^{-s-1}dx\,,$$
as desired.
Hope this helps!
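The prime-power expansion $\log\zeta(s)=\sum_{k\ge 1}\sum_p \frac{1}{k}p^{-ks}$ used above can be sanity-checked numerically at $s=2$, where $\zeta(2)=\pi^2/6$. A sketch with primes truncated at 1000, so agreement is only to roughly $10^{-4}$:

```python
from math import log, pi

def primes_up_to(n):
    # Simple sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(n + 1) if sieve[p]]

# log zeta(s) = sum over primes p and k >= 1 of p^(-ks) / k, at s = 2
s = 2
total = sum(p ** (-k * s) / k
            for p in primes_up_to(1000)
            for k in range(1, 60))
assert abs(total - log(pi ** 2 / 6)) < 1e-3   # zeta(2) = pi^2 / 6
```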
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3860722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
How is $\int \frac{\frac{d y(x)}{dx}}{y(x)^2}dx = -\frac{1}{y(x)}$ I just saw wolfram-alpha making following statement...
$\int \frac{\frac{d y(x)}{dx}}{y(x)^2}dx = -\frac{1}{y(x)}$
But I am very unsure how to get there.
I always thought I can cancel out the $dx$ and call it a day. Like so:
$\int \frac{\frac{d y(x)}{dx}}{y(x)^2}dx = \int \frac{d y(x)}{y(x)^2}=\int \frac{d 1}{y(x)}=\frac{1}{y(x)}$
I suppose that's wrong, so where does the minus come from?
| I like to think of it as cancelling out the $dx$ but what is actually happening is this:
$$I=\int\frac{dy}{dx}\frac{1}{y^2}dx$$
$u=y(x)\Rightarrow du=y'dx\therefore dx=\frac{du}{y'}$
$$I=\int\frac{y'}{y'}\frac{1}{u^2}du=\int\frac{1}{u^2}du=-\frac{1}{u}+C=-\frac 1y+C$$
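A numerical spot-check with a concrete choice $y(x)=x^2+1$ (my example): the antiderivative $-1/y$ predicts $\int_0^1 y'/y^2\,dx = -\tfrac{1}{2}+1=\tfrac{1}{2}$.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100_001)
y = x ** 2 + 1
dy = 2 * x
f = dy / y ** 2                    # the integrand y'(x) / y(x)^2

# Composite trapezoid rule on a fine grid
h = x[1] - x[0]
integral = float(h * (f.sum() - 0.5 * (f[0] + f[-1])))
assert abs(integral - 0.5) < 1e-8  # matches -1/y(1) + 1/y(0) = 1/2
```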
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3860922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Let $n=p^mr$ where $p$ is prime and $r\in\mathbb{Z_{>1}}$ such that $p\nmid r$. If $G$ is a simple group of order $n$ then $p^m\mid(r-1)!$ Question: Let $n=p^mr$ where $p$ is prime and $r\in\mathbb{Z}_{>1}$ such that $p\nmid r$. If $G$ is a simple group of order $n$ then $p^m\mid(r-1)!$.
My Ideas: I feel like this should be a rather straightforward argument but I keep getting tripped up. Since $G$ is simple, the only normal subgroups of $G$ are $\langle1\rangle$ and $G$. So, we have a homomorphism $\phi:G\rightarrow S_n$, where $n=1$ or $n=p^mr$. Since $G$ is simple, $\ker\phi=\langle1\rangle$. If $n=1$, then we would violate the assumption that $r\in\mathbb{Z}_{>1}$. .....and I am sort of stuck from here... maybe this isn't the best way to go about it? Any help is greatly appreciated! Thank you
| You get a sharper bound by letting $G$ act on the conjugates of $P$ (the Sylow $p$-subgroups) instead of on the cosets; then there are $n_p = \frac{|G|}{|N_G(P)|} = \frac{r}{s}$ of these, where $s = \frac{|N_G(P)|}{|P|}$, and you get that $p^m r$ divides $n_p!$. If $r \not \equiv 1 \bmod p$ you're guaranteed that $n_p < r$ by the Sylow theorems so this bound will be strictly sharper in that case.
For example, if $|G| = 300$ and $p = 5$ then $r = 12$ and the given bound gives $25 \mid 11!$, which is true, but the sharper bound gives that $n_p = 6$ (it can't be equal to $1$ if $G$ is simple) so $25 \mid 5!$, which is a contradiction. So there's no simple group of order $300$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3861011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
How to prove the cumulative distribution function of the maximum of a pair of values drawn from random variable X is the square of the original CDF Suppose $X=Y^2$, where $Y\sim U(0,1)$. Now we need to find the cumulative distribution function of $X$.
$$P(X<x)=P(Y^2<x)=P(Y<\sqrt x)=F_Y(\sqrt x)$$
That's all?
| If $X$ and $Y$ are independent and identically distributed then
$P(\max(X,Y) \le z)$
$= P(X \le z \text{ and } Y \le z) $
$= P(X\le z) P(Y \le z) \text{ because they are independent}$
$= \left(P(X\le z)\right)^2 \text{ because they are identically distributed}$
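A quick Monte Carlo check of the squaring, with $X,Y\sim U(0,1)$ so that $F(z)=z$ and $P(\max(X,Y)\le z)=z^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.random(n)            # X ~ U(0, 1)
y = rng.random(n)            # Y ~ U(0, 1), independent of X
z = 0.7

empirical = np.mean(np.maximum(x, y) <= z)
assert abs(empirical - z ** 2) < 0.01   # should be close to 0.49
```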
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3861307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Combination of pairing elements or keeping them single
Given a number $n$, find the number of ways to keep the $n$ elements single or pair them up.
The elements in any arrangement may or may not be paired.
If $n=3$, then the number of ways would be $4$ namely $:$ $(1, 23), (12, 3), (13, 2), (1, 2, 3)$.
If $n = 4$ then the $10$ possible arrangements would be $(1, 2, 3, 4), (12, 3, 4), (12, 34), (13, 2, 4) etc..$
I'm aware that this problem is related to the formula $\frac{(2k)!}{2^{k}k!}$ where $k$ is the number of pairs. However that formula is strictly only for pairs. I'm also trying to implement this in a simple program. Can I get an explanation on how solve this. I also want to know how to treat odd numbers.
I've tried using a simple summation but I'm unable to resolve overcounting at each stage
I'm sorry if that sounded confusing. I don't know of any other way to represent pairs.
Thanks!
| You say that a summation is an acceptable answer, and there is a nice summation for these numbers:
$$
\text{# ways to pair up some of $n$ elements}=\sum_{k=0}^{\lfloor n/2\rfloor}\binom{n}{2k}\frac{(2k)!}{2^kk!}
$$The index $k$ represents the number of pairs, the $\binom{n}{2k}$ is the number of ways to choose the people to be put in pairs, and $\frac{(2k)!}{2^kk!}$ is the number of ways to pair all of the chosen people.
Perhaps this generalization is of interest: the number of ways to divide some of the people into groups of size $m$, and leave the others in singletons, is
$$
\sum_{k=0}^{\lfloor n/m\rfloor}\binom{n}{mk}\frac{(mk)!}{m^kk!}
$$
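These counts are the involution (telephone) numbers; a short script checks the summation against the question's examples and against the standard recurrence $a(n)=a(n-1)+(n-1)a(n-2)$ (either element $n$ stays single, or it pairs with one of the other $n-1$ elements):

```python
from math import comb, factorial

def involutions(n):
    # Sum over k pairs: C(n, 2k) * (2k)! / (2^k k!)
    return sum(comb(n, 2 * k) * factorial(2 * k) // (2 ** k * factorial(k))
               for k in range(n // 2 + 1))

# Matches the examples in the question: 4 ways for n = 3, 10 for n = 4.
assert involutions(3) == 4
assert involutions(4) == 10

# The same numbers satisfy a(n) = a(n-1) + (n-1) a(n-2).
for n in range(2, 15):
    assert involutions(n) == involutions(n - 1) + (n - 1) * involutions(n - 2)
```

The recurrence is also the easiest way to implement this in a program: no factorials needed, just two running values.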
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3861417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
The equation $x^4-x^3-1=0$ has roots $\alpha,\beta,\gamma,\delta$. Find $\alpha^6+\beta^6+\gamma^6+\delta^6$ The equation $x^4-x^3-1=0$ has roots $\alpha,\beta,\gamma,\delta$.
By using the substitution $y=x^3$, or by any other method, find the exact value of
$\alpha^6+\beta^6+\gamma^6+\delta^6$
This is a problem from Further Mathematics(9231) Paper 1, Question 1, 2009. I tried to solve it but was unable to figure it out, especially how to find the value of $\alpha^6+\beta^6+\gamma^6+\delta^6$. Could anyone try to solve this question and explain how they got the value?
| Let $P_n:=\alpha^n+\beta^n+\gamma^n+\delta^n$. Then, Newton's Sums tell us that $$1\cdot P_1-1=0\iff P_1=1\\1\cdot P_2-1\cdot P_1+2\cdot0=0\iff P_2=1\\1\cdot P_3-1\cdot P_2+0\cdot P_1+3\cdot 0=0\iff P_3=1\\1\cdot P_4-1\cdot P_3+0\cdot P_2+0\cdot P_1+4\cdot(-1)=0\iff P_4=5\\P_5-P_4+(-1)\cdot P_1=0\iff P_5=6\\P_6-P_5+(-1)\cdot P_2=0\iff P_6=7$$ so $\alpha^6+\beta^6+\gamma^6+\delta^6=7$.
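As a sanity check, the power sums can also be computed numerically from the roots themselves (NumPy is my choice of tool here); continuing Newton's sums past the entries shown gives $P_4=5$, $P_5=6$, $P_6=7$:

```python
import numpy as np

# Roots of x^4 - x^3 - 1 = 0
roots = np.roots([1, -1, 0, 0, -1])
P = [np.sum(roots ** k).real for k in range(1, 7)]  # sums are real numbers

assert np.allclose(P[:3], [1, 1, 1])   # P1 = P2 = P3 = 1, as derived
assert np.allclose(P[3:], [5, 6, 7])   # P4 = 5, P5 = 6, and P6 = 7
```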
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3861550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
Largest square inscribed in a rectangle The side length of the largest square inscribed in a rectangle of width $a$ and height $b$ is surely $\min(a,b)$. How is this proved? (Or if it be wrong, what is the correct result?)
| The side length of a square is equal to the length of the diameter of its inscribed circle (which touches the square at the midpoints of its four sides). If the square lies inside the rectangle, so does this circle. One diameter of the circle is parallel to the short side of the rectangle, hence cannot be longer than that side, or else the circle (and thus the square it's inscribed in) would lie partly outside the rectangle. Note that this argument makes no assumption about the square's orientation, so it covers tilted squares as well; and a side length of $\min(a,b)$ is attained by an axis-aligned square.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3861692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Find common number divisible by six different numbers If there is a recipe to find this, I would like to find the first common number divisible by the following six numbers: 260,
380, 460, 560, 760 and 960.
How does one calculate the number I need?
Any direction would be useful.
| $\DeclareMathOperator{\lcm}{lcm}$
As I said, you have to calculate the l.c.m. of the six numbers. The l.c.m. is associative, so you may group some numbers when it's easier to compute. Furthermore, all these numbers are divisible by $20$, so
$$\lcm(260,380,460,560,760,960)=20\lcm(13,19,23,28,38,48)$$
On the other hand, $\lcm(19,38)=38$, $\;\lcm(28,48)=4\lcm(7,12)=4\cdot 7\cdot12=336$, so
$$\lcm(260,\dots,960)=20\lcm(13,23,38,336)$$
Continuing, we have $\:\lcm(38, 336)=2\lcm(19,168)=2\cdot19\cdot168=6384$, $\;\lcm(13,23)=13\cdot23=299$, and finally
$$\lcm(260,\dots,960)=20\lcm(299,6384)=20\cdot299\cdot 6384=20\cdot1\,908\,816=\color{red}{38\,176\,320}$$
if my computations are correct.
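Since lcm is associative, the whole hand computation can be cross-checked in a few lines of Python by folding a pairwise lcm over the list:

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    # lcm via the identity lcm(a, b) * gcd(a, b) = a * b
    return a * b // gcd(a, b)

numbers = [260, 380, 460, 560, 760, 960]
assert reduce(lcm, numbers) == 38_176_320
```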
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3861814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Definite integral evaluation of $\frac{\sin^2 x}{2^x + 1}$
Compute
$$\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \frac{\sin^2 x}{2^x + 1} \ dx.$$
So looking at the limits I first checked whether the function is even or odd by substituting $x = -x$ but this gave me:
$$\frac{\sin^2 x}{2^x + 1} \cdot 2^x$$
so the function is clearly neither odd or even and I don't know what to infer from this. I can't think of a good substitution either.
Any help would be appreciated!
| You did the right thing. Using your substitution, you have
$$
I = \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \frac{\sin^2 x}{2^x + 1} \ dx = \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \frac{\sin^2 x}{2^x + 1} 2^x \ dx.
$$
Adding the two equivalent formulations you have
$$
2 I = \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \frac{\sin^2 x}{2^x + 1} (2^x +1) \ dx
= \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}{\sin^2 x} \ dx = \int_0^{\frac{\pi}{2}} (1-\cos(2x))\ dx=\frac{\pi}{2}$$
which gives $I = \frac{\pi}{4}$ as the answer.
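A numerical cross-check of the result, using a composite trapezoid rule in plain NumPy (no SciPy assumed):

```python
import numpy as np

x = np.linspace(-np.pi / 2, np.pi / 2, 200_001)
f = np.sin(x) ** 2 / (2 ** x + 1)

# Composite trapezoid rule on a fine grid
h = x[1] - x[0]
integral = float(h * (f.sum() - 0.5 * (f[0] + f[-1])))
assert abs(integral - np.pi / 4) < 1e-8
```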
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3861958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Counting Problem Double Check For this problem we are given that a password is a string of $6$ characters. The password must contain exactly $6$ characters and can contain uppercase and lowercase letters of the alphabet, numbers 0 through 9, and an underscore.
*
*How many passwords cannot have a number character?
Solution: We know that there are $26$ upper and lower case words, so $52$ letters to choose from, and we can include our underscore. Therefore, we must have $53^6$ passwords.
*How many passwords have exactly one underscore and that is not at the beginning or the end of the password.
Solution: We have one underscore in between the start and the end of the password. $62\cdot 63\cdot 63\cdot 63\cdot 63\cdot 62 = 62^2 \cdot 63^4$
*Password must have at least one number.
Solution: We take all possible passwords and subtract out the number of passwords that don't have a number: $63^6-53^6$
Is my thinking on this correct? I am new to counting and was wondering if my intuition on these problems are on the right track or completely wrong.
| Your answers to the first and third questions are correct.
As for the second question: Choose which of the four middle positions will be filled with an underscore. Since that is the only underscore, each of the remaining five positions may be filled with one of the other $26 + 26 + 10 = 62$ characters. Hence, there are
$$4 \cdot 62^5$$
passwords which have exactly one underscore and the underscore is not at the beginning or the end of the password.
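The counting logic can be verified by exhaustive enumeration on a scaled-down analog (my example: alphabet {a, b, _} and length-3 "passwords", small enough to list every string):

```python
from itertools import product

alphabet = "ab_"
length = 3
words = ["".join(p) for p in product(alphabet, repeat=length)]

# Exactly one underscore, and not in the first or last position
brute = sum(1 for w in words
            if w.count("_") == 1 and w[0] != "_" and w[-1] != "_")

# Same formula as in the answer, scaled down:
# (interior positions) * (non-underscore characters)^(length - 1)
middle_positions = length - 2          # 1
other_chars = len(alphabet) - 1        # 2
assert brute == middle_positions * other_chars ** (length - 1) == 4
```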
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3862356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A Transformation of a cross-shaped grid filled with 1s (Proof of impossibility?) Consider a cross-shaped grid of size 7 as shown in the figure (compared to one of size 3). Each cell contains a 1. Let's define a transformation $\pi$ of the grid as follows: take any size-3 sub-cross of the grid and multiply all the cells inside by $-1$.
How many $\pi$ transformations are required to transform a cross-shaped grid of size 2017 that contains a 1 in each cell into a grid that contains $-1$ in every cell?
Any ideas on how to proceed? I was trying to solve the particular case for 7 but even for that I found it quite hard.
| First, the case for grid size 2017.
Consider a grid of size $n > 3$. Reusing your drawing, consider the cells colored in red and yellow for any of the four sides of the grid:
Let us number those colored cells starting from one red cell and ending to the other red cell with indexes $1, \ldots, \frac{n-1}{2}$, so that cell $1$ and $\frac{n-1}{2}$ are the red ones.
Now define $\pi_1, \ldots, \pi_{\frac{n-1}{2}}$ the required number of transformations applied on the cells $1, \ldots, \frac{n-1}{2}$ (with the center of the 3 sized sub-cross on the cell).
$\pi_1$ and $\pi_{\frac{n-1}{2}}$ must be odd, because the corner cells are reachable only from cells $1$ and $\frac{n-1}{2}$ respectively. Then $\pi_2$ and $\pi_{\frac{n-3}{2}}$ must be even, because e.g. the border cell reachable from cell $1$ and $2$ must total an odd number of transformations, thus $\pi_1+\pi_2$ must be odd and similarly on the other side. We can continue the process along the side alternating even and odd transformations.
There are $\frac{n-1}{2}-2 = \frac{n-5}{2}$ yellow cells between the two red cells. If that number is even and it is for $n=2017$ but not for $n=7$, we will end up with the two cells $\frac{n-1}{4}$ and $\frac{n+3}{4}$ with $\pi_{\frac{n-1}{4}}$ and $\pi_{\frac{n+3}{4}}$ both even or both odd and thus $\pi_{\frac{n-1}{4}} + \pi_{\frac{n+3}{4}}$ even, so that the corresponding border cell, reachable from those cells, cannot be changed to $-1$.
Regarding the case $n=7$, consider the cells colored as below:
and with the usual notation, define $\pi_r$ the number of transformations applied on the red cells, and similarly $\pi_y$ for the yellow cells, $\pi_{p1}$ to $\pi_{p4}$ for the pink cells (choose whatever order you like), $\pi_g$ for the green cell.
$\pi_r$ must be odd, then $\pi_y$ must be even, as said above. Then the only way to have the pink cell $1$ at $-1$ is to have both $\pi_{p1}$ and $\pi_g$ odd or even, and similarly for pink cells $2,3,4$, therefore all pink cells must be odd or even, but this makes impossible changing the yellow cells to $-1$.
Maybe with a little more effort this can be extended for any other odd $n > 3$ with $\frac{n-1}{2}$ odd.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3863481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Convergence Rate of Sample Variance Question: Given $X_1,X_2,...$ as an sequence of iid distributed random variables with $E(X_i)=0 $ and $V(X_i)=σ^2$ and the fourth order moment $E(X_i^4)<\infty$. Show that:
$\sqrt{n}(S_n^2-\sigma^2)\xrightarrow{d}N(0,E[(X_i^2-\sigma^2)^2])$, where $S_n^2$ is the sample variance.
I am sure that we have to employ the fact that $S^2_n\xrightarrow{p} \sigma^2$ together with the Central Limit Theorem, but I still cannot figure out the exact proof of this problem.
| $S_n^2 = \frac{1}{n-1} \sum_{i=1}^n (X_i - \bar{X}_n)^2 = \frac{1}{n-1} \sum_{i=1}^n X_i^2 - \frac{n}{n-1}\bar{X}_n^2$.
The second term can be ignored: $\sqrt{n}\,\bar{X}_n^2 = (\sqrt{n}\,\bar{X}_n)\cdot\bar{X}_n \xrightarrow{p} 0$, since $\sqrt{n}\,\bar{X}_n \xrightarrow{d} N(0,\sigma^2)$ by the CLT while $\bar{X}_n \to 0$ almost surely, so Slutsky's theorem applies. Apply the central limit theorem to the first term.
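As a quick numerical sanity check of the decomposition above (not part of the proof; the sample values are made up):

```python
from statistics import mean, variance

x = [0.3, -1.2, 2.5, 0.0, -0.7, 1.1]   # arbitrary sample
n = len(x)
xbar = mean(x)

# S_n^2 = (1/(n-1)) * sum(X_i^2) - (n/(n-1)) * Xbar^2
decomposed = sum(v * v for v in x) / (n - 1) - n / (n - 1) * xbar ** 2

assert abs(decomposed - variance(x)) < 1e-12
```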
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3863648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that f(x)=0 has exactly 3 solutions I need to prove that $\tan(x)=10x$ has exactly 3 solutions in $[-\pi/2,\pi/2]$:
I declared $f(x)=\tan(x)-10x$
*
*I have proved that $f(x)=0$ has at least 3 solutions using the fact that it's an odd, continuous function.
*Now I need to prove that there can't be 4 solutions. I remember a rule that says if $f(x)=0$ at four points then $f'(x)=0$ at three points in $[-\pi/2,\pi/2]$, but how may I continue from here?
Note: $f'(x)=\frac1{\cos^2 x}-10$
| On $[0, \frac {\pi} 2]$ there is a unique point where $\cos^{2} x=\frac 1{10}$ [$\cos x$ decreases strictly from $1$ to $0$ on this interval]. The same is true on $[- \frac {\pi} 2, 0]$. So there are exactly two points in $[- \frac {\pi} 2,\frac {\pi} 2]$ where $f'(x)=0$, and hence, by Rolle's theorem, at most three points where $f(x)=0$.
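A quick numerical cross-check (a Python sketch) that counts sign changes of $f(x)=\tan x-10x$ on a fine grid strictly inside $(-\pi/2,\pi/2)$; since $f'$ vanishes at only two points, each sign change corresponds to exactly one root:

```python
import math

def f(x):
    return math.tan(x) - 10 * x

eps = 1e-3
a, b = -math.pi / 2 + eps, math.pi / 2 - eps
N = 100000
xs = [a + (b - a) * i / N for i in range(N + 1)]
signs = [f(x) > 0 for x in xs]
sign_changes = sum(signs[i] != signs[i + 1] for i in range(N))

assert sign_changes == 3   # roots near -1.48, 0, and +1.48
```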
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3863817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
If $X \times X \simeq \mathbb{R}^{2}$, then $X \simeq \mathbb{R}$? I came across this question few years ago, but I have not yet reached a satisfactory answer.
If a product of a topological space $X$ with itself is homeomorphic to the real plane $\mathbb{R}^{2}$, must $X$ be homeomorphic to the real line $\mathbb{R}$? Here I am not assuming a priori that $X$ is a manifold.
| Note that I am not an expert in this, but I think this paper is what you are looking for:
Non-manifold factors of Euclidean spaces, Fund. Math. 68 (1970), 159–177, MR275396 describes classes $\Gamma_m$ and $\Gamma_n$ where if $X \in \Gamma_m$ and $Y \in \Gamma_n$, then $X \times Y \cong \mathbb{R}^{n+m}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3863995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Prove that $y_n$ = $\frac{6^n}{n!}$ is contractive Prove that $y_n$ = $\frac{6^n}{n!}$ is contractive.
My attempt at this question is to compare $\frac{|y_{n+2}-y_{n+1}|}{|y_{n+1}-y_{n}|}$. Doing this got me to $\frac{|24-6n|}{|-n^2+3n+10|}$, but I don't know how to get a constant to prove that it is contractive.
| $$\left| \frac{y_{n+2}-y_{n+1}}{y_{n+1}-y_n}\right|\le 1$$
simplified gives
$$\frac{n-4}{(n-5) (n+2)}\leq \frac{1}{6}$$
which is verified, provided that $n\ge 7$.
Edit
Thus it is not contractive, since the inequality fails for small $n$ (it only holds from $n \ge 7$ on), while contractivity requires a single constant valid for all $n\ge 1$.
It can easily be made contractive by shifting $n$ as follows
$$y_n=\frac{6^{n+7}}{(n+7)!}$$
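A quick check (a Python sketch using exact rational arithmetic) that the consecutive-difference ratios of the shifted sequence stay below $1$:

```python
from fractions import Fraction
from math import factorial

def y(n):
    # shifted sequence y_n = 6**(n+7) / (n+7)!
    return Fraction(6 ** (n + 7), factorial(n + 7))

ratios = []
for n in range(1, 50):
    num = abs(y(n + 2) - y(n + 1))
    den = abs(y(n + 1) - y(n))
    ratios.append(num / den)

# the largest ratio occurs at n = 1 and equals 4/5 < 1, so the shifted
# sequence is contractive with constant c = 4/5
assert max(ratios) < 1
```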
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3864210",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Do the series converge? Let $g \colon \mathbb{N} \rightarrow \{0,1\}$ such that $g(n)=0$ if $n \equiv 0$ or $n \equiv 1 ~(\bmod~4)$, and $g(n)=1$ if $n \equiv 2$ or $n \equiv 3 ~(\bmod~4)$.
Does $\sum_{n=1}^\infty \frac{(-1)^{g(n)}}{\sqrt{n}}$ converge?
The series is something like this
*
*If $n=1$, then $\frac{(-1)^{g(n)}}{\sqrt{n}} = \frac{(-1)^{0}}{\sqrt{1}} = 1$, partial sum of 1
*If $n=2$, then $\frac{(-1)^{g(n)}}{\sqrt{n}} = \frac{(-1)^{1}}{\sqrt{2}} \approx -0.707$, partial sum of $\approx$ 0.293
*If $n=3$, then $\frac{(-1)^{g(n)}}{\sqrt{n}} = \frac{(-1)^{1}}{\sqrt{3}} \approx -0.577$, partial sum of $\approx$ -0.284
*If $n=4$, then $\frac{(-1)^{g(n)}}{\sqrt{n}} = \frac{(-1)^{0}}{\sqrt{4}} = 0.5$, partial sum of $\approx$ 0.216
*$\cdots$ and so on
What I thought of doing?
*
*Break the series (call it $\sum a_n$) into two ($\sum b_n$ and $\sum c_n$), where $b_n$ keeps the positive terms (with the negative ones set to $0$) and $c_n$ keeps the absolute values of the negative terms (with the others set to $0$). So we have $a_n = b_n - c_n$. If I show that either $\sum b_n$ or $\sum c_n$ diverges, does $\sum a_n$ diverge?
*I know, by the Alternating Series Test, that $\sum _{n=1}^{\infty }\frac{(-1)^n}{\sqrt{n}}$ converges, but does this also mean $\sum_{n=1}^\infty \frac{(-1)^{g(n)}}{\sqrt{n}}$ converges? It is not properly a reordering, so I have my doubts.
I am not sure of any of the ways, so that's why I am here. Thank you.
| The series converges by Dirichlet's test. In the notation of the Wikipedia article, let $a_n=\frac1{\sqrt n}$, and let $b_n=(-1)^{g(n)}$. Then $\mid\sum_{k=1}^n b_k\mid\leq2$ and the test applies.
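A small numerical check (Python sketch) of the bounded-partial-sums hypothesis that Dirichlet's test needs:

```python
def g(n):
    return 0 if n % 4 in (0, 1) else 1

# partial sums of b_n = (-1)^g(n) follow the sign pattern + - - + + - - + ...
# and therefore stay bounded, which is the key hypothesis of Dirichlet's test
b_partial, max_abs = 0, 0
for n in range(1, 10001):
    b_partial += (-1) ** g(n)
    max_abs = max(max_abs, abs(b_partial))

assert max_abs <= 2
```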
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3864364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Finding natural numbers $n$ such that $n^\frac{1}{n-2}\ $ is also a natural number
Find all natural numbers $n$ such that $n^\frac{1}{n-2}\ $ is also a natural number.
I know that the numbers will be $1, 3$ and $4,$ but I am not quite sure how to approach the proof. Any suggestions on how can I prove this?
| Just to give a different approach, let's write $n=m+2$ and look for cases where $m+2=k^m$ for some (positive) integer $k$. We see that
$$m+2=k^m\implies m+1=k^m-1=(k-1)(k^{m-1}+\cdots+1)\ge(k-1)m\implies 1\ge(k-2)m$$
This limits $m$ to be less than or equal to $1$ unless $k=1$ or $2$, so, on checking cases, we get $(m,k)=(1,3)$, $(-1,1)$ and $(2,2)$ as the only solutions, corresponding to $n=3$, $1$, and $4$, respectively.
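A brute-force check of the conclusion for small $n$ (a Python sketch; the cutoff $10^4$ is arbitrary, and $n=2$ is skipped because the exponent is undefined there):

```python
solutions = []
for n in range(1, 10001):
    if n == 2:
        continue
    if n == 1:                      # 1**(1/(1-2)) = 1**(-1) = 1 is natural
        solutions.append(n)
        continue
    # n > 2: look for an integer k with k**(n-2) == n, using exact powers
    k = round(n ** (1 / (n - 2)))
    if any(c ** (n - 2) == n for c in (k - 1, k, k + 1) if c > 0):
        solutions.append(n)

assert solutions == [1, 3, 4]
```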
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3864494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Why is the sum to infinity of a geometric distribution equal to 1? I know that the geometric distribution follows the rules of a geometric progression thus by using the sum to infinity formula (which I know its proof and is really convinced by it),
$$\frac {a}{1-r}$$
We can easily arrive at this:
$\frac {p}{1-(1-p)} =\frac {p}{1-1+p} = \frac pp = 1$
But the thing is, when my teacher asked my class what we would expect the sum to be no one hesitated to say 1. They all knew it by logic. However I couldn't understand how they arrived at that.
One guess was that maybe the sum is 1 because $p$, the probability of getting my success at the first trial, will be added to small fractions of it; but I still couldn't arrive at anything. After all, these fractions keep decreasing to very tiny numbers, so I couldn't get my head around the fact that, together with $p$, they add up to exactly $1$.
Can someone explain to me the logic or intuition behind this?
| They are probably using the fact that the sum of the probabilities of all possible outcomes must equal $1$. This is true for any [discrete] random variable.
They are probably not doing the computation $p + p(1-p) + p(1-p)^2 + \cdots = \frac{p}{1-(1-p)} = 1$ in their heads instantaneously.
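For intuition, a quick numerical check (a Python sketch, with a made-up sample value of $p$) that the probabilities do add up to $1$:

```python
import math

p = 0.3
# partial sums of p*(1-p)**(k-1) approach 1 as more terms are added
total = sum(p * (1 - p) ** (k - 1) for k in range(1, 200))

assert math.isclose(total, 1.0, rel_tol=1e-9)
```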
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3864648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Supposing you have a uniformly continuous function between metric spaces, is the epsilon-delta relationship continuous? More precisely, for $X,Y$ metric, given $f :X \rightarrow Y$, uniformly continuous, if for a given $\varepsilon>0$ we define $\delta(\varepsilon)$ to be the sup of all $\delta>0$ s.t. $ d_X(x,y)< \delta \Rightarrow d_Y(f(x),f(y))< \varepsilon $, is $g:\mathbb{R}^+ \rightarrow\mathbb{R}^+$ given by $g(\varepsilon)= \sup\{ \delta >0 \text{ that work} \}$ a continuous function?
Edit: It occurs to me that the sup defined here may be infinity, as for constant functions. If we exclude these cases, is the above true?
| Consider a sawtooth wave from the reals to the reals, but let's make the rising slope be 1, and the falling slope be 2. That's uniformly continuous, with $\delta = \epsilon/2$ working everywhere. But on the rising segments, your $\delta_{max}$ will be $1$, and on the falling segments, it'll be $1/2$, and hence it must be discontinuous at each "breakpoint". Actually, the same argument works for
$$
f(x) = \begin{cases}
x & -1 \le x \le 0 \\
-2x & 0 \le x \le \frac12
\end{cases}
$$
which is a uniformly continuous function from $[-1, 0.5]$ to $[-1, 0]$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3864801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Standard Form Ellipse Through Three Points and Parallel to X and Y Axes I want the general form $\frac{(x-x_0)^2}{a^2}+\frac{(y-y_0)^2}{b^2}=1$ for an ellipse with a specified eccentricity $e$ that passes through three (non-collinear) points $(x_1,y_1), (x_2, y_2), (x_3, y_3)$ and is parallel to the X and Y axes (i.e. major axis of ellipse parallel to the X axis and minor axis parallel to the Y axis).
I found this gem on Wikipedia:
$$
\frac{({\color{red}x} - x_1)({\color{red}x} - x_2) + {\color{blue}q}\;({\color{red}y} - y_1)({\color{red}y} - y_2)}
{({\color{red}y} - y_1)({\color{red}x} - x_2) - ({\color{red}y} - y_2)({\color{red}x} - x_1)} =
\frac{(x_3 - x_1)(x_3 - x_2) + {\color{blue}q}\;(y_3 - y_1)(y_3 - y_2)}
{(y_3 - y_1)(x_3 - x_2) - (y_3 - y_2)(x_3 - x_1)}\ .
$$
where ${\color{blue}q} = \frac{a^2}{b^2} = \frac{1}{1 - e^2}$, which I think is supposed to work, but a) converting this equation to standard form is a bear (and maybe isn't doable?), and b) seems to introduce $xy$ terms which leads me to believe the ellipse will be tilted with respect to the X and Y axes.
Is this the right equation to be working with? If so, is there a standard form of the equation? Is there a different/better way to accomplish the task?
P.S. Having the standard form is pretty important: I'm going to use this with a graphics app where knowing $x_0, y_0, a,$ and $b$ is required.
| Alternatively, the equation can be re-arranged in a compact form:
$$
\begin{vmatrix}
(1-e^2)x^2+y^2 & x & y & 1 \\
(1-e^2)x_1^2+y_1^2 & x_1 & y_1 & 1 \\
(1-e^2)x_2^2+y_2^2 & x_2 & y_2 & 1 \\
(1-e^2)x_3^2+y_3^2 & x_3 & y_3 & 1
\end{vmatrix}=0$$
where $e\ne 1$ and comparing with the general form
$$Ax^2+Bxy+Cy^2+Dx+Ey+F=0$$
Now,
\begin{align}
A &= (1-e^2) C \\ \\
B &= 0 \\ \\
C &=
\begin{vmatrix}
x_1 & y_1 & 1 \\
x_2 & y_2 & 1 \\
x_3 & y_3 & 1
\end{vmatrix} \\ \\
D &= -
\begin{vmatrix}
(1-e^2)x_1^2+y_1^2 & y_1 & 1 \\
(1-e^2)x_2^2+y_2^2 & y_2 & 1 \\
(1-e^2)x_3^2+y_3^2 & y_3 & 1
\end{vmatrix} \\ \\
E &=
\begin{vmatrix}
(1-e^2)x_1^2+y_1^2 & x_1 & 1 \\
(1-e^2)x_2^2+y_2^2 & x_2 & 1 \\
(1-e^2)x_3^2+y_3^2 & x_3 & 1
\end{vmatrix} \\ \\
F &= -
\begin{vmatrix}
(1-e^2)x_1^2+y_1^2 & x_1 & y_1 \\
(1-e^2)x_2^2+y_2^2 & x_2 & y_2 \\
(1-e^2)x_3^2+y_3^2 & x_3 & y_3
\end{vmatrix} \\
\end{align}
Re-arrange the equation as
$$A
\left( x+\frac{D}{2A} \right)^2+
C
\left( y+\frac{E}{2C} \right)^2=
\frac{D^2}{4A}+\frac{E^2}{4C}-F$$
implying the centre is
$$\left( -\frac{D}{2A}, -\frac{E}{2C} \right)$$
and the semi-axes
$$
(a,b)=
\left(
\sqrt{\frac{D^2}{4A^2}+\frac{E^2}{4AC}-\frac{F}{A}},
\sqrt{\frac{D^2}{4AC}+\frac{E^2}{4C^2}-\frac{F}{C}}
\right)$$
for $0 \le e<1$.
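To sanity-check these formulas, here is a small Python sketch (the three points and the eccentricity $e=1/2$ are made-up sample values):

```python
def det3(M):
    # determinant of a 3x3 matrix given as a list of rows
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

e = 0.5
pts = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
s = [(1 - e**2) * x*x + y*y for x, y in pts]
(x1, y1), (x2, y2), (x3, y3) = pts

C = det3([[x1, y1, 1], [x2, y2, 1], [x3, y3, 1]])
A = (1 - e**2) * C
D = -det3([[s[0], y1, 1], [s[1], y2, 1], [s[2], y3, 1]])
E = det3([[s[0], x1, 1], [s[1], x2, 1], [s[2], x3, 1]])
F = -det3([[s[0], x1, y1], [s[1], x2, y2], [s[2], x3, y3]])

# every sample point must satisfy A x^2 + C y^2 + D x + E y + F = 0
for x, y in pts:
    assert abs(A*x*x + C*y*y + D*x + E*y + F) < 1e-12

a = (D*D / (4*A*A) + E*E / (4*A*C) - F/A) ** 0.5
b = (D*D / (4*A*C) + E*E / (4*C*C) - F/C) ** 0.5
# the recovered semi-axes reproduce the requested eccentricity
assert abs((a*a) / (b*b) - 1 / (1 - e**2)) < 1e-12
```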
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3864927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Finding the highest point for a 3d function The height of something at point $(x,y)$ is given by a function $z(x,y)$
$z(x,y)=z_0\exp(-\alpha (x^2+y^2))$ where $z_0$ and $\alpha$ are positive constants
a) At what location in the plane is the height the greatest.
My solution:
$\nabla z = -2\alpha z_0\, x\exp(-\alpha (x^2+y^2))\, \hat{e}_x-2\alpha z_0\, y\exp(-\alpha (x^2+y^2))\, \hat{e}_y =0 $
Then solve for values of $x$ and $y$. However I cannot do that, as we have two unknown variables and only one equation. When I graph the function, I can see the function is highest at $(0,0)$.
b) Find the maximum slope at the point $(x,y)=(3,4)$ and find the unit vector in the xy-plane that points in the direction of maximum upward slope.
My solution:
In this part you just plug the values of $y$ and $x$ into $\nabla z$, but I was wondering how we find the unit vector that points in the direction of maximum upward slope.
| a) You do not have simply one equation and two unknowns. You have one vector equation that can be written as two scalar equations with two unknowns. Observe that $\nabla z = \vec 0$ directly implies that
$$\frac{\partial z}{\partial x} = 0$$
$$\text{and}$$
$$\frac{\partial z}{\partial y} = 0$$
individually. You can thus solve the system
$$\begin{align} -2\alpha z_0 x\exp(-\alpha (x^2+y^2)) &= 0 \\-2\alpha z_0 y\exp(-\alpha (x^2+y^2)) &= 0 \end{align}$$
which is trivially $$\begin{align}x&=0 \\ y&=0 \;. \end{align}$$
b) To find the maximal slope, you need to compute the directional derivative with respect to the direction of maximum increase. The gradient already points in the direction of maximal increase, thus you can dot the gradient vector with itself and divide by the magnitude of the gradient vector; this is easily shown to equal the magnitude of the gradient vector.
The unit vector in the $x\text{-}y$ plane that points in the direction of maximal increase is the normalized gradient vector. Compute the gradient vector at your point and divide by its magnitude.
Let me know if you need any more clarification.
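Here is a small Python sketch of part b); the values $z_0=2$ and $\alpha=0.1$ are made-up sample constants:

```python
import math

z0, alpha = 2.0, 0.1   # hypothetical sample values of the constants

def grad_z(x, y):
    common = -2 * alpha * z0 * math.exp(-alpha * (x*x + y*y))
    return (common * x, common * y)

gx, gy = grad_z(3.0, 4.0)
slope_max = math.hypot(gx, gy)            # maximal slope = |grad z|
ux, uy = gx / slope_max, gy / slope_max   # unit vector of steepest ascent

# for this radially symmetric hill the steepest ascent at (3, 4) points
# straight back toward the origin: (-3/5, -4/5)
assert math.isclose(ux, -0.6) and math.isclose(uy, -0.8)
```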
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3865090",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Spectrum of a matrix operator on $L^2$ product space I am interested in the spectrum of a simple operator, effectively given by a matrix $A$, acting on a space $U$ that is the $n$-th power of the same base space $V$, $U = \underbrace{V \times \dots \times V}_{n\ \text{times}}$. Let's take $U = V \times V$, $V = L^2$ and
\begin{align}
&A = \begin{bmatrix} a &b \\ b &d \end{bmatrix} \quad \quad a,b,d \in \mathbb R, \\
&A \begin{pmatrix} f_1 \\ f_2 \end{pmatrix} = \begin{pmatrix} a f_1 + b f_2 \\ b f_1 + d f_2 \end{pmatrix} \quad \quad \forall \begin{pmatrix} f_1 \\ f_2 \end{pmatrix} \in U.
\end{align}
I would like to show that the spectrum of $A: U \to U$ is the same as the spectrum of $A: \mathbb{R}^2 \to \mathbb{R}^2$.
My idea is to go from the definition and check conditions under which $A - \lambda I$ is not onto and not one-to-one. Checking the one-to-one property seems easy since that means solving a linear system
$$
(A - \lambda I) v = 0 \quad v \in U
$$
which can be done e.g. by Gaussian elimination and gives the same condition, that $\lambda$ is a root of the characteristic polynomial, as in the linear-algebra case.
I have trouble showing when $A-\lambda I$ fails to be onto. Being onto means that the system
$$
(A-\lambda I) x = b
$$
has a solution for all $b \in U$.
The standard argument from linear algebra, that a matrix is onto when its columns are linearly independent, does not translate well here. If we denote the columns of $A-\lambda I$ as $c_1, c_2 \in \mathbb{R}^2$, $x = (x_1, x_2)$, then the system can be rewritten as
\begin{align}
x_1 c_1 + x_2 c_2 = b.
\end{align}
The trouble with this is that $x_1, x_2$, which play the role of coefficients in the linear algebra case, are now elements of $V$ and not $\mathbb R$, so this approach seems to lead nowhere. I think that the condition of $c_1, c_2$ being linearly independent is necessary and sufficient, but I am not able to find the right argument or framework which would make this problem trivial. (The space $U$ looks a bit like $\mathbb{R}^2 \otimes V$ but I did not find any helpful reference for that.) I would be grateful for any direction or a suitable book/paper to follow.
| $A-\lambda I\in\Bbb R^{2\times 2}$ is either invertible, whence $(A-\lambda I)^{-1}$ will act as the (bounded) inverse of $(A-\lambda I):V\times V\to V\times V$,
or, its kernel is nontrivial, whence $\lambda$ is an eigenvalue of $A$, and as you correctly deduced, the action of $A-\lambda I$ on $V\times V$ is not injective either.
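A finite-dimensional illustration of this (a Python sketch assuming NumPy is available; $V$ is crudely replaced by $\mathbb R^k$, and the sample matrix is made up):

```python
import numpy as np  # assumed available

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # symmetric sample matrix, eigenvalues 1 and 3

k = 40                          # finite-dimensional stand-in for V
M = np.kron(A, np.eye(k))       # blockwise action of A on V x V

eigs = np.sort(np.linalg.eigvalsh(M))
# the spectrum is still {1, 3}; only the multiplicities grow with k
assert np.allclose(eigs, [1.0] * k + [3.0] * k)
```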
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3865251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Using Fermat's Little Theorem to find the smallest $k$ for which $a^k \equiv 1 \pmod{11}$ for $a = 1, 2, \dots, 10$. We know that $a^{11-1} \equiv 1 \pmod{11}$ for all positive integers $a$ less than $11$, by Fermat's Little Theorem.
But $11-1 = 10$ need not be the smallest $k$ for which $a^k \equiv 1 \pmod{11}$ for each such $a$. How do we find the smallest such $k$ for each $a = 1, 2, \dots, 10$?
[Examples: (i) $3^5 \equiv 1 \pmod{11}$; here $k = 5$ for $a = 3$. (ii) $10^2 \equiv 1 \pmod{11}$; here $k = 2$ for $a = 10$.]
How are the values of $k$ related to $11-1 = 10$?
| The multiplicative group $\mathbf F^\times_{11}$ is cyclic, so you can determine a generator of this group (i.e. a primitive root modulo $11$). It happens that (as is often the case) $2$ is such a generator:$\DeclareMathOperator{\ord}{ord}$
\begin{array}{|r|cccccccccc|}
\hline
n=& 1&2&3&4&5&6&7&8&9&10 \\\hline
2^n& 2 & 4 & 8 & 5 & 10 & 9 & 7 & 3 & 6 & 1 \\\hline
\end{array}
For the orders of the other elements, write each of them as $2^k$ and use that the order of such an element is
$$\ord(2^k)=\frac{\ord(2)}{\gcd(k,\ord(2))}.$$
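A quick computational check (Python sketch) of the orders and of this formula:

```python
from math import gcd

def order(a, p=11):
    # multiplicative order of a modulo p (a not divisible by p)
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

orders = {a: order(a) for a in range(1, 11)}
assert orders[2] == 10                       # 2 generates the whole group
assert orders[3] == 5 and orders[10] == 2    # the examples in the question

# ord(2^k) = ord(2) / gcd(k, ord(2)) reproduces every order
for k in range(1, 11):
    assert order(pow(2, k, 11)) == 10 // gcd(k, 10)
```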
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3865409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Prove by L'Hospital's rule that $\lim\limits_{x\to\infty}(\sqrt{x}(1-\delta^2)^x)=0$, $\delta\in(0,1]$ Prove the following with the help of L'Hospital's rule:
$$\lim\limits_{x\to\infty}\sqrt{x}(1-\delta^2)^x=0$$ where $\delta\in(0,1]$ is a constant.
I broke up $\sqrt{x}(1-\delta^2)^x$ into $\sqrt{x}$ and $(1-\delta^2)^x$. Computing respective derivatives yields $$\frac{d(\sqrt{x})}{dx} = \frac{1}{2\sqrt{x}}$$ and $$\frac{d((1-\delta^2)^x)}{dx} = (1-\delta^2)^x\ln(1-\delta^2)$$ and then I am stuck as to what to do with the power of $x$.
Any help is appreciated!
| Since we may assume $\delta\in(0,1)$ (for $\delta=1$ the expression is identically $0$), we have $1-\delta^2<1$. Defining $\Delta:=(1-\delta^2)^{-1}>1$, L'Hopital gives $$\lim\limits_{x\to\infty}\frac{x^{1/2}}{\Delta^x}=\lim_{x\to\infty}\frac{(2x^{1/2})^{-1}}{\Delta^x\log\Delta}=\frac1{2\log\Delta}\lim_{x\to\infty}\frac1{\Delta^x\sqrt x}=0.$$
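A quick numerical illustration of the decay (a Python sketch, with the made-up sample value $\delta=0.6$):

```python
import math

delta = 0.6                    # sample value in (0, 1)
vals = [math.sqrt(x) * (1 - delta ** 2) ** x for x in (1, 10, 100, 1000)]

assert all(u > v for u, v in zip(vals, vals[1:]))  # decreasing along these points
assert vals[-1] < 1e-100                           # exponential decay wins
```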
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3865526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
This algorithm seems to be consistent with counting the real interval, what's wrong with it? So I've been imagining infinite sets, specifically the real numbers,
Here's the algorithm,
start at .1, then as follows, .2, ... .9, .01, .11 ... .99, .001, ... .999, etc.
Why wouldn't this generate the reals at infinity? I realize this is a countable set; it's informally an inversion of $1 \dots \infty$, so it's countable. Yet if you describe $.1$ as existing in column 1 and $.99$ as column 99, that would imply that if you choose a $.2516$ at column 1 not equal to $.1$, it would show up roughly in column 6152.
Where am I mistaken?
| What you have described is a van der Corput sequence, which does get arbitrarily close to each real number in the interval, but the sequence doesn't contain numbers like $\frac19$ that don't eventually terminate. Every number in the sequence has a terminating decimal expansion.
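One can make the enumeration explicit by reversing the decimal digits of $k$ across the decimal point (a Python sketch of the construction described in the question):

```python
from fractions import Fraction

def term(k):
    # k-th term of the sequence: reverse k's decimal digits after the point
    digits = str(k)
    return Fraction(int(digits[::-1]), 10 ** len(digits))

assert [float(term(k)) for k in (1, 2, 3, 9)] == [0.1, 0.2, 0.3, 0.9]
assert float(term(10)) == 0.01 and float(term(11)) == 0.11

# every term has a power-of-ten denominator, so e.g. 1/9 = 0.111... is
# never hit exactly, even though terms get arbitrarily close to it
assert all(term(k) != Fraction(1, 9) for k in range(1, 10000))
```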
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3865657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
$(\varepsilon, \delta)$ for continuity of a multivariable function
Define $f : \Bbb R^2 \to \Bbb R$, $$f(x,y) = \begin{cases}
0 & (x,y) = (0,0) \\
\frac{xy(x^2-y^2)}{x^2+y^2} & (x,y) \ne (0,0)
\end{cases} $$ Determine if $f$ is continuous.
On $\Bbb R^2 \setminus \{(0,0)\}$, $f$ is continuous as a quotient of continuous functions with nonvanishing denominator. In order to see if it's continuous at the origin I was approaching it using epsilon-delta. We need $|f(x,y) - f(0,0)| < \varepsilon$ whenever $\sqrt{(x-0)^2+(y-0)^2} = \sqrt{x^2+y^2} < \delta$.
Now $$|f(x,y)-f(0,0)| = \frac{|xy||x^2-y^2|}{|x^2+y^2|} \leqslant \frac{|xy||x^2+y^2|}{|x^2+y^2|} =|xy|.$$
However I'm losing $x^2+y^2$ here since that's what I would need for $\delta$. What am I missing in my approach here? Is there another way I could bound this?
| For all $(x,y)\in\mathbb{R}^2$ we have
$$
\left\lbrace
\begin{align*}
&2|xy|\le x^2+y^2\\[4pt]
&|x^2-y^2|\le x^2+y^2\\[4pt]
\end{align*}
\right.
$$
hence for all $(x,y)\in\mathbb{R}^2{\setminus}\{(0,0)\}$ we have
$$
2|f(x,y)|
=
\frac
{(2|xy|)(|x^2-y^2|)}
{x^2+y^2}
\le
\frac
{(x^2+y^2)(x^2+y^2)}
{x^2+y^2}
=
x^2+y^2
$$
which approaches $0$ as $(x,y)$ approaches $(0,0)$.
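A small grid check (Python sketch) of the key inequality $2|f(x,y)|\le x^2+y^2$:

```python
def f(x, y):
    return x * y * (x * x - y * y) / (x * x + y * y)

# margin of the inequality 2|f(x,y)| <= x^2 + y^2 on a grid near the origin
worst = float("-inf")
for i in range(-100, 101):
    for j in range(-100, 101):
        if (i, j) == (0, 0):
            continue
        x, y = i / 100, j / 100
        worst = max(worst, 2 * abs(f(x, y)) - (x * x + y * y))

assert worst <= 0   # the bound is never violated
```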
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3865778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Let $S = \{x \in \mathbb{R}| x < a_n \hspace{2mm}\text{for infinitely many $a_n$}\}$. Find a subsequence converging to $\sup S$ Let $(a_n)_{n\in\mathbb N}$ be a bounded sequence. Let $S = \{x \in \mathbb{R}| x < a_n \hspace{2mm}\text{for infinitely many $a_n$}\}$. Prove there exists a subsequence converging to supremum of $S$.
The part about "infinitely many terms" is confusing me a bit. Any help would be appreciated.
| Define
$$
\xi_N:=\sup_{n\ge N}a_n
$$
So $\{\xi_N\}_{N\in\Bbb N}$ is a decreasing sequence, thus it admits a limit $L$, which is finite since $\{a_n\}_n$ is bounded by hypothesis.
Prove by yourself that $L=\sup S$, and that a subsequence of $(a_n)$ converging to $L$ can be extracted: for each $N$ pick an index $k_N\ge N$ with $a_{k_N}>\xi_N-\frac1N$, which is possible by the definition of the supremum.
Usually one refers to such a limit as $\limsup_{n\to\infty}a_n$.
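A quick numerical illustration (Python sketch) with the sample sequence $a_n=(-1)^n(1+1/n)$, whose limsup is $1$:

```python
N = 2000
a = [(-1) ** n * (1 + 1 / n) for n in range(1, N + 1)]

xi = [max(a[k:]) for k in range(N)]   # xi_N = sup of the tail starting at N

assert all(u >= v for u, v in zip(xi, xi[1:]))   # xi is decreasing
assert abs(xi[-1] - 1) < 1e-2                    # and it converges to 1
```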
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3865933",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
The order of elements in infinite quotient groups There are two statements that my professor made today that I'm hoping I can get some more clarification on.
The first is that $\mathbb{Q}/\mathbb{Z}$ is an infinite quotient group where every element has finite order.
The second is that $\mathbb{R}/ \mathbb{Q}$ is also an infinite quotient group but every element except the identity has infinite order.
I'm having trouble even imagining an infinite quotient group...I'm familiar with groups like $\mathbb{Z} / n\mathbb{Z}$, but how would you even notate these other groups? I understand that in order for an element $xH$, where $H$ is the subgroup, to have finite order, $x^n$ must be in $H$ for some $n$. So if every element of $\mathbb{Q}/\mathbb{Z}$ has finite order, does that imply that every rational number is in $\mathbb{Z}$? Obviously that is not true, but I'm having trouble figuring out where I'm going wrong.
| Mark's comment explains the case of $\mathbb{Q}/\mathbb{Z}$. For the case $\mathbb{R}/\mathbb{Q}$, you just have to note that every nontrivial element of $\mathbb{R}/\mathbb{Q}$ is of the form $x+\mathbb{Q}$ where $x$ is an irrational number. Thus, if there exists a positive integer $n$ such that $n(x+\mathbb{Q})=nx+\mathbb{Q}=0$, then there exist $a,b\in\mathbb{Z}$, $b\neq0$ such that $nx=\frac{a}{b}$. But then we get $x=\frac{a}{nb}\in\mathbb{Q}$, a contradiction. Therefore, every nontrivial element of $\mathbb{R}/\mathbb{Q}$ has infinite order.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3866087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Counting Problem Involving a Word of Size N Imagine a programmer is creating a unique password consisting of $n$ letters. Among the letters, exactly two must be a's and exactly three must be r's. The rest of the password is formed using the remaining English letters. How many ways are there to arrange the letters in this password to create a unique string without having two a's in a row?
So far I have gotten:
Having $5$ positions already accounted for, the remaining positions are $n-5$. We have taken two letters from our alphabet, so we have $24$ more letters to choose from. Choosing spots for the a's and r's gives $\binom{n}{5}$, because there are $5$ positions that are taken. Ways to arrange these two letters (a and r) are $\binom{5}{3}$, because we have a total of $5$ positions and the r's take $3$ of them? We are left with $24$ available letters, and there are $24^{n-5}$ ways to put the other letters in. So far I have: $\binom{n}{5}\cdot\binom{5}{3}\cdot24^{n-5}$
My question is where do I get the a's not in a row? Also, I don't know if $\binom{5}{3}$ is right, the one I left with a question mark.
| That is right. To go on, first notice that $\binom{5}{3}=\binom{5}{2}$, because selecting where the a's go is the same as selecting where the r's go. Notice, further, that $\binom{n}{5}\binom{5}{2}=\binom{n}{2}\binom{n-2}{5-2}=\binom{n}{2}\binom{n-2}{3}$, which means: select first the positions of the a's, then those of the r's. To avoid two a's in a row, first pick the positions of the a's, without placing anything else, and notice that if they are adjacent, you can merge them into one block. So, from $\binom{n}{2}$ take out the merged pairs. There are $n-1$ such pairs, so $\binom{n}{2}- (n-1).$ Your formula becomes
$$\left (\binom{n}{2}-(n-1)\right )\cdot \binom{n-2}{3}\cdot 24^{n-5}.$$
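A brute-force cross-check of this count for small $n$ (a Python sketch; positions are enumerated directly, with the $24^{n-5}$ factor attached per placement):

```python
from itertools import combinations
from math import comb

def brute(n):
    # enumerate positions of the two a's (non-adjacent) and three r's;
    # each remaining position can hold any of the other 24 letters
    count = 0
    for ai, aj in combinations(range(n), 2):
        if aj - ai == 1:            # the two a's would be adjacent
            continue
        rest = [p for p in range(n) if p not in (ai, aj)]
        count += sum(1 for _ in combinations(rest, 3)) * 24 ** (n - 5)
    return count

def formula(n):
    return (comb(n, 2) - (n - 1)) * comb(n - 2, 3) * 24 ** (n - 5)

assert all(brute(n) == formula(n) for n in range(5, 10))
```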
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3866273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How can I prove that $A$ and $B$ are not similar matrices?
Prove that $A$ and $B$ have the same characteristic polynomial, but $A$ and $B$ are not similar: \begin{align*} A=\begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1 \\ -2 & 3 & 0 \end{pmatrix} \qquad B=\begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 0 & -2 \end{pmatrix} \end{align*}
I've already proved that $\det(A-tI)=-t^3+3t-2=\det(B-tI)$, so $A$ and $B$ have the same characteristic polynomial.
I'm not sure how can I prove that A and B are not similar. I know that if they were similar then it means that there exists a invertible matrix such that $A=M^{-1}BM$. Any idea to prove that $A$ and $B$ are not similar?
| Suppose they are similar.
Then, since $B$ is diagonal, that means $A$ is diagonalizable with eigenvalues $1, 1, -2$. But $A$ is a companion matrix, which is diagonalizable over $\mathbb C$ iff all its eigenvalues are distinct; this is a contradiction.
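One can also certify non-similarity computationally (a Python sketch assuming NumPy is available): similar matrices must have equal $\operatorname{rank}(M-\lambda I)$ for every $\lambda$, and here the ranks differ at $\lambda=1$:

```python
import numpy as np  # assumed available

A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [-2., 3., 0.]])
B = np.diag([1., 1., -2.])

# same characteristic polynomial ...
assert np.allclose(np.poly(A), np.poly(B))

# ... but different geometric multiplicity for the eigenvalue 1
rank_A = np.linalg.matrix_rank(A - np.eye(3))
rank_B = np.linalg.matrix_rank(B - np.eye(3))
assert (rank_A, rank_B) == (2, 1)
```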
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3866439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Prove that $S = \{ f: [0,1]\rightarrow \mathbb{R} \ \text{continuous} : x\in\mathbb{Q}\implies f(x) \in \mathbb{Q}\}$ is uncountable Prove that
$$S = \{ f: [0,1]\rightarrow \mathbb{R} \ \text{continuous} : x\in\mathbb{Q}\implies f(x) \in \mathbb{Q}\}$$ is uncountable.
I know that the reals are uncountable, so I look for an injection from the reals into the set $S$; i.e., if we can define a distinct such function for each real number, then we are done.
Am I going in the right direction?
Please help with some hint/solution.
| HINT:
For every $a\in \{0,1\}^{\mathbb{N}}$ consider
$f_a\left(\frac{1}{n}\right) = \frac{1}{2n + a_n}$ for $n\ge 2$, $1\mapsto 1$, $0\mapsto 0$, and extend by piecewise-linear interpolation between consecutive points.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3866678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 3
} |
Method of undetermined coefficients for ODEs to. find particular solutions I have hit a conceptual barrier. So let's say we had the following ODE:
$$\frac{d^{4}u}{dt^{4}} - 16u = te^{2t}.$$
The general solution of the associated homogeneous equation is:
$$u_h(t) = c_{1}e^{-2t} + c_{2}e^{2t} + c_{3}\cos(2t) + c_{4}\sin(2t)$$
Now to guess the particular solution, I was following the reasoning presented in class:
We try to guess $e^{2t}$ but it is part of the homogeneous solution, so we guess $te^{2t}$; but since this is the RHS, we go one power higher, and our guess is $At^{2}e^{2t} + Bt^{}e^{2t}$.
I really just do not understand the reasoning behind this. Why do we care what the RHS is to increase powers? Why do we go one power higher than the RHS? Also how are these "guesses" being made?
| A simple way of seeing what is behind these guesses is to use the annihilator polynomial method. You want to solve an equation of the form $P(D) y = f(t)$, where $P$ is a polynomial in the differentiation operator $D$ (for instance, the differential equation $y'''+y''-y = e^t$ would be written as $(D^3+D^2-1)y = e^t$). If you are able to find a polynomial $Q(D)$ such that $Q(D) f(t)=0$, the original equation can be reduced to
$$
P(D) y = f(t) \Rightarrow Q(D)P(D) y = 0.
$$
So, you reduce the original equation to a homogeneous equation of higher order (of course this is not possible for all $f$, just for the ones that can themselves be a solution to a homogeneous constant-coefficient equation).
The solution to this higher-order but homogeneous problem is obtained and decomposed into two parts: i. the general solution to the original homogeneous equation; ii. the rest.
The "rest" is what you should use as a particular solution.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3866801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Infection in a village Consider the following problem:
Suppose a lonely wanderer infected with a virus came into an isolated village with $M$ villagers and stayed there. Every week each of the infected villagers coughs onto $n$ random other villagers (each of them chosen uniformly and independently among everyone) and then develops antibodies becoming immune to it. All villagers who are coughed upon become infected if they are not immune. Nobody left or entered the village after the arrival of the lonely wanderer. Consider time to be discrete and measured in weeks. We say, that the virus survives as long as someone is infected with it. For what $n$ is the expected time of its survival the longest?
The extremum clearly is not achieved in the border cases here.
Indeed, if $n = 0$ the lonely wanderer becomes immune before being able to infect anyone else, thus the virus will survive only for $1$ week.
If $n \to \infty$ the probability that the lonely wanderer infects everyone in the first week tends to $1$. Thus the expected time of the survival of the virus tends to $2$ in this case.
So, we must look for optimal $n$ somewhere in between. However, I have no idea how to find it (or even its asymptotic for large $M$)…
At first glance the problem looked somewhat similar to two well studied problems: branching processes (villagers infected by a given infected villager being their descendants in terms of branching processes) and the coupon collector problem (uninfected villagers as coupons to be collected). However, it is different from both of them (the number of 'descendants' changes each turn here, which makes it different from a Galton-Watson branching process, and the number of 'coupons collected per turn' depends on the number of 'coupons collected on the previous turn', which makes it different from a classical coupon collector), and methods similar to the ones used to solve them are unlikely to work here.
| Suppose that at week $w$ we have the following numbers of infected, susceptible (infectable), and immunized villagers:
$$
I(w) = {\rm infected}\quad S(w) = {\rm infectable}\quad H(w) = {\rm immunized}
$$
then, allowing for the numbers to be rationals, we shall have
$$
\left\{ \matrix{
I(0) = 1\quad S(0) = m\quad H(0) = 0 \hfill \cr
I(w) + S(w) + H(w) = m + 1 \hfill \cr
H(w) = I(w - 1) \hfill \cr
I(w+1) = {n \over {m + 1}}S(w) \hfill \cr} \right.
$$
Solving for $I$ (excluding the banal case $n=0$)
$$
I(w) + {{m + 1} \over n}I(w + 1) + I(w - 1) = m + 1\quad \left| \matrix{
\;I(0) = 1 \hfill \cr
\;I(1) = n{m \over {m + 1}} \hfill \cr} \right.
$$
and recasting it by including the initial conditions, so that the recursion is valid for any $0 \le w$, under the convention that $w <0 \; \Rightarrow \; I(w)=0 $
$$
I(w) + {n \over {m + 1}}I(w - 1) + {n \over {m + 1}}I(w - 2)
+ \left[ {0 = w} \right]\left( {n - 1} \right) = n\quad \left| {\;0 \le w} \right.
$$
We can solve the above via ordinary generating function
$$
F(z) = \sum\limits_{0\, \le \,w} {I(w)\,z^{\,w} }
$$
readily obtaining
$$
\eqalign{
& F(z) = {{{n \over {1 - z}} - \left( {n - 1} \right)} \over {1 + {n \over {m + 1}}\left( {z + z^{\,2} } \right)}}
= {{m + 1} \over n}{{1 + \left( {n - 1} \right)z}
\over {\left( {1 - z} \right)\left( {z^{\,2} + z + {{m + 1} \over n}} \right)}} = \cr
& = {{m + 1} \over n}{{1 + \left( {n - 1} \right)z}
\over {\left( {1 - z} \right)\left( {z + {1 \over 2} + {{\sqrt {1 - 4\left( {m + 1} \right)/n} } \over 2}} \right)
\left( {z + {1 \over 2} - {{\sqrt {1 - 4\left( {m + 1} \right)/n} } \over 2}} \right)}} = \cr
& = {A \over {\left( {1 - z} \right)}} + {B \over {\left( {z + {1 \over 2}
+ {{\sqrt {1 - 4\left( {m + 1} \right)/n} } \over 2}} \right)}}
+ {C \over {\left( {z + {1 \over 2} - {{\sqrt {1 - 4\left( {m + 1} \right)/n} } \over 2}} \right)}} \cr}
$$
where, however, $A,B,C$ are rather complicated expressions
of $m,n$.
Normally it would be $n \le m$, so that the square roots will turn into imaginary values.
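As a sanity check on the question's setup (not part of the derivation above), here is a small Monte Carlo sketch of mine; it assumes each infected villager makes $n$ independent uniform picks among the others each week, and the helper name `survival_time` is my own:

```python
import random

def survival_time(M, n, trials=500, seed=1):
    """Average number of weeks the virus survives: M villagers plus the
    wanderer; each infected coughs on n independently chosen others."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        state = ["S"] * M + ["I"]      # index M is the lonely wanderer
        weeks = 0
        while "I" in state:
            weeks += 1
            newly = set()
            for i, s in enumerate(state):
                if s != "I":
                    continue
                for _ in range(n):
                    j = rng.randrange(len(state) - 1)
                    if j >= i:         # pick among the *other* villagers
                        j += 1
                    if state[j] == "S":
                        newly.add(j)
            for i, s in enumerate(state):
                if s == "I":
                    state[i] = "H"     # infected develop antibodies
            for j in newly:
                state[j] = "I"
        total += weeks
    return total / trials

# n = 0: the virus survives exactly one week, as noted above
base = survival_time(20, 0, trials=50)
# compare survival for a small and a large n in a small village
few = survival_time(20, 1, trials=500)
many = survival_time(20, 15, trials=500)
```

Plotting `survival_time(M, n)` over a range of `n` for fixed `M` is a cheap way to guess where the optimum sits before attempting asymptotics.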
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3866952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 1
} |
How to explain this limit exists? I am trying to show the $\lim_{(x,y)\to (0,0)}\frac{x^4y}{x^2-y^2}$ exists or doesn't exist and if it exists what it is. I have chosen a path $y = x - kx^4$ and am able to show that this goes to $\frac{1}{2k}$. I believe it is appropriate to say this limit exists using this path but is it true to say that the $\lim_{(x,y)\to (0,0)} f(x,y)$ is equal to $\frac{1}{2k}$? The problem is I am not exactly sure how to explain why and what exactly this means. I have seen something proved like this before but I really don't understand why this proves the limit exists. Any help in understanding why would be appreciated.
| When you compute the limit along the path $y=x-kx^4$ and come to the conclusion that it is $\frac{1}{2k}$, you are showing that the limit does not exist. If it existed, you would obtain the same limit along every possible path.
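Numerically this is easy to see (my own check, not part of the answer itself): along $y=x-kx^4$ the value of $f$ near the origin depends on $k$, so no single limit can exist.

```python
def f(x, y):
    return x**4 * y / (x**2 - y**2)

def along_path(k, x=1e-2):
    """Evaluate f on the path y = x - k*x**4 close to the origin."""
    return f(x, x - k * x**4)

v1 = along_path(1.0)   # close to 1/2
v2 = along_path(2.0)   # close to 1/4 -- a different value
```

Two paths, two different limiting values: the two-variable limit does not exist.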
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3867120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Polynomial with root $α = \sqrt{2}+\sqrt{5}$ and using it to simplify $α^6$ Find a polynomial $\space P(X) \in \mathbb{Q}[X]\space$ of degree 4 such that
$$\alpha = \sqrt{2} + \sqrt{5}$$
Is a root of $P$.
Using this polynomial, find numbers $\space a, b, c, d \space$ such that
$$\alpha^{6} = a + b\alpha + c\alpha^{2} + d\alpha^{3}$$
What have I tried so far?
I know obviously that for $\alpha$ to be a root of $P$, then $(x-\alpha)$ must be part of the polynomial. Hence, $(x-\sqrt{2} - \sqrt{5})$ will be a factor of the polynomial. Where I’m getting stuck is what to do next in order to find the other factors of the polynomial such that I get values $a, b, c$ and $d$ that satisfy the equation with $\alpha^{6}$.
Any help would be greatly appreciated!
| I think you're looking at this from the wrong perspective. Some basic Galois Theory will tell you what the other roots of the polynomial are (hint: consider how it's constructed. Whenever we conduct a squaring operation, we 'spawn' another solution corresponding to the negative of whatever quantity is being squared). But those other roots aren't actually relevant to solving the question of how to write $\alpha^6$.
Instead, consider doing some 'modular' algebra: you know that $\alpha^4=14\alpha^2-9$ (as other answers have shown). You can use this to 'reduce' any term of size $\alpha^4$ or larger in a polynomial expression to a term that's of smaller degree. For instance, by multiplying by $\alpha$ we see that $\alpha^5=14\alpha^3-9\alpha$. Now, multiply both sides of this by $\alpha$; can you see a spot to use the relation again?
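Following the hint through (this completion is my own, and easy to verify numerically): $\alpha^6 = 14\alpha^4 - 9\alpha^2 = 14(14\alpha^2-9) - 9\alpha^2 = 187\alpha^2 - 126$, so one valid choice is $a=-126,\ b=0,\ c=187,\ d=0$.

```python
import math

alpha = math.sqrt(2) + math.sqrt(5)

# the degree-4 relation used in the answer: alpha^4 = 14*alpha^2 - 9
rel_err = alpha**4 - (14 * alpha**2 - 9)

# applying it twice: alpha^6 = 187*alpha^2 - 126 (a=-126, b=0, c=187, d=0)
red_err = alpha**6 - (187 * alpha**2 - 126)
```

Both residuals vanish up to floating-point error.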
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3867201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
} |
Invertibility of the $2\times 2$ symmetric matrix
Let $a, b, c \in \mathbb{R} \setminus \{0\}$ be nonzero, real constants. For which values of $a, b, c$ does the matrix $$A = \begin{pmatrix}a & b\\ b & c\end{pmatrix} $$
have two non-zero eigenvalues?
I already found the fact that $ac \neq b^2$ but I am wondering if there is anything more.
| Note the characteristic polynomial of $A$ is
\begin{align}
P\left (\lambda\right ) &= \left |\lambda I - A \right |\\
&= \left (\lambda - a\right )\left (\lambda - c \right ) - b^2\\
&= \lambda^2 - \left (a + c\right )\lambda + \left (ac - b^2\right )
\end{align}
In order for $A$ to have two distinct non-zero eigenvalues, we must have $ac - b^2 \neq 0$ and $\Delta_\lambda \neq 0$. Hence, we have
\begin{align}
\left (a+c\right )^2 - 4\left (ac - b^2\right ) &\neq 0\\
\left (a - c\right )^2 + 4b^2 &\neq 0
\end{align}
Note that this means we cannot have $a = c$ and $b = 0$ simultaneously (i.e. $A$ cannot be a multiple of the identity).
Therefore, we require $b^2 \neq ac$, and $A \neq kI$ for some real $k$.
We can easily check to confirm that these conditions are sufficient.
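A quick numeric spot-check of the two conditions (a sketch of mine, using the quadratic formula on the $2\times2$ characteristic polynomial):

```python
import math

def eigenvalues(a, b, c):
    """Eigenvalues of [[a, b], [b, c]] (real, since the matrix is symmetric)."""
    disc = math.sqrt((a - c) ** 2 + 4 * b * b)
    return (a + c + disc) / 2, (a + c - disc) / 2

l1, l2 = eigenvalues(1.0, 2.0, 3.0)   # b^2 != ac: two distinct non-zero values
m1, m2 = eigenvalues(1.0, 2.0, 4.0)   # b^2 == ac: zero is an eigenvalue
```

The first example satisfies both conditions and indeed yields two distinct non-zero eigenvalues; the second violates $b^2 \neq ac$ and picks up a zero eigenvalue.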
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3867407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to check if a graph is biplanar (thickness =2)? Given a graph G how does one check if it can be decomposed into 2 planar graphs other than enumeration.
Note: my given graph has 52 edges and 11 nodes.
Quick edit: the graph has an edge from 4 to 1*
| Lemma:
The thickness of $K_n$ is $\lfloor\frac{n+7}{6}\rfloor$ when $n\neq 9,10$ and for cases $n=9,10$ it is $3$.
So if there exists a $K_a$ with $a\geq 9$ in your graph (let's call it $G$), that subgraph has a thickness of at least $3$, so $G$ has a thickness of at least $3$
So for your graph to have a thickness of $2$, it must not have any $K_9$s as subgraphs. Using Turan's theorem, the maximum number of edges of a graph with $11$ vertices which is $K_9$-free is $52$ and if this bound is touched by some graph, it is the Turan graph $T(11,8)$.
Thus, your graph does not have a thickness of $2$ if it is not the Turan graph. If your graph is the Turan graph $T(11,8)$, then observe that it doesn't actually have $52$ edges so again, there is a $K_9$, so your graph has a thickness of at least $3$.
So your graph has a thickness $\geq 3$.
Some closing remarks:
It is NP-hard to compute the thickness of a given graph, and NP-complete to test whether the thickness is at most two.
It is pretty hard to just test the thickness of a graph, as cited above from Wikipedia, especially if you do not have the graph and only the number of vertices and edges are given.
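For what it's worth, the Turán bound quoted above checks out by direct count (a sketch; `turan_edges` is a helper name of my own):

```python
from math import comb

def turan_edges(n, r):
    """Edges of the Turan graph T(n, r): the complete r-partite graph
    with part sizes as equal as possible."""
    q, s = divmod(n, r)                 # s parts of size q+1, r-s parts of size q
    sizes = [q + 1] * s + [q] * (r - s)
    return comb(n, 2) - sum(comb(k, 2) for k in sizes)

edges = turan_edges(11, 8)   # max edges of a K_9-free graph on 11 vertices
```

`T(11,8)` has three parts of size 2 and five of size 1, giving $\binom{11}{2}-3=52$ edges, as stated.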
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3867600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Existence of points Let $X, Y$ be two closed subspaces of $\mathbb{R}^n$ with $X$ bounded. Show the existence of points $x_0 \in X, y_0 \in Y$ such that $d(x_0,y_0) = d(X,Y)=\inf_{x \in X, y \in Y} d(x,y)$.
My idea so far: I want to start by showing that $X_r = \{ y \in \mathbb{R}^n | d(y,X) \leq r\}$ is compact. With this subspace, without losing generality, I would replace $Y$ by $Y \cap X_r$ for $d(X,Y) \leq r$ where the intersection is nonempty.
| We can assume that $X$ and $Y$ are disjoint. Suppose that $Y$ is bounded. Then, both $X$ and $Y$ are compact, and the result is obvious: Take a sequence $(x_n, y_n)$ in $X\times Y$ such that $d(x_n,y_n)$ converges to $D=d(X,Y).$ By compactness, after passing to subsequences, we may set $x_0=\lim x_n$ and $y_0 = \lim y_n.$
Now consider the original question. Let $C\in X$ satisfy $d(C,Y)\le 2D$ and let $Y_1=Y\cap B,$ where $B$ is the ball of center $C$ and radius $4D.$ Clearly, $d(X,Y)=d(X,Y_1).$ Since $Y_1$ is closed and bounded, we have reduced to the case solved in the first paragraph.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3867758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Interpretation of the Weil pairing in the complex torus The point addition on an elliptic curve corresponds to the vector addition on a complex torus (with suitable choice of the lattice and of the base point). Is there a similar interpretation for the Weil pairing? And for the Tate pairing?
Furthemore, the determinant of two vectors in $\mathbb{C}$ (considered as $\mathbb{R}^2$) is also an non-degenerate alternating form. Is there a corresponding pairing?
| Here are two possible answers:
*
*If you write $E=\mathbb{C}/\Lambda$ then the standard polarization
gives us an alternating map $$\langle -,-\rangle:\Lambda\times
\Lambda\to \mathbb{Z}$$
One can obviously extend this to an alternating pairing
$$\Lambda_\mathbb{Q}\times \Lambda_\mathbb{Q}\to \mathbb{Q}$$
(where $\Lambda_\mathbb{Q}:=\Lambda\otimes_\mathbb{Z}\mathbb{Q}$).
Let us then note that
$$E[N]\subseteq \Lambda_\mathbb{Q}$$
and thus we can restrict to obtain a pairing
$$\langle -,-\rangle:E[N]\times E[N]\to \mathbb{Q}$$
We then can define
$$\langle \alpha,\beta\rangle_\text{Weil}:=\exp(2\pi i N \langle
\alpha,\beta\rangle)$$
One can then show, as the notation suggests, that $\langle
-,-\rangle_\text{Weil}$ is the Weil pairing.
*One can show that Weil pairing is nothing but the cup product in
cohomology under the identifications
$$H_1(E,\mathbb{Z}/N\mathbb{Z})=E[N],\qquad
H^2_\text{sing}(E,\mu_N)\cong \mathbb{Z}/N\mathbb{Z}$$
This perspective is nice since it also extends to etale cohomology.
Both of these discussion are contained, I'm pretty sure, in Mumford's book on abelian varieties.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3868062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is true that every finite set has a positively linearly independent subset that positively spans itself? Since it seems that there is no definition for what is positively linearly independent on this site I will try to be concise defining it here. Let $P$ be $n$ by $m$ a matrix and $V$ be a $n$ by $o$ matrix. We say that that $P \cup V$ is positively independent if and only if for every pair of vectors $({\boldsymbol \mu,{\boldsymbol \lambda}})$ with ${ \boldsymbol \mu}$ with non-negative coordinates and
$$
P { \boldsymbol \mu} + V { \boldsymbol \lambda} =\textbf{0},
$$ implies ${ \boldsymbol \mu}={ \boldsymbol 0}$ and ${ \boldsymbol \lambda}= { \boldsymbol 0}.$ Furthermore, we say that $P \cup V$ positively spans its elements if every column ${\boldsymbol u}$ of $P\cup V$ can be written as
$$
P { \boldsymbol \mu} + V { \boldsymbol \lambda} = u,
$$ for some ${ \boldsymbol \mu}$ with non-negative coordinates.
I'm asking if every matrix $P'$ and $V'$ has a positively linearly independent submatrix $P \cup V$ that positively spans the set of all ${\boldsymbol u'}$ such that
$$
u'= P' { \boldsymbol \mu'} + V' { \boldsymbol \lambda'}
$$ with ${ \boldsymbol \mu'}$ with non-negative coordinates. Is that true? I strongly suppose that it is true. Any hints?
P.S.: If it is needed, assume $V = {\boldsymbol 0}.$
| Consider $P = \begin{bmatrix} 1&-1\end{bmatrix}$ and $V = 0$. The columns of $P$ are not positively linearly independent. We have $P\begin{bmatrix} 2&1\end{bmatrix}^T = [1]$ while $P\begin{bmatrix}1&2\end{bmatrix}^T = [-1]$.
If we were to obtain a positively linearly independent submatrix of $P$, then its positive span will either have nonnegative entries or have nonpositive entries, because such submatrices are $[1]$ and $[-1]$. Therefore, no such positively linearly independent submatrix exists. Does this answer your question? If you want $V \neq 0$, then I think you can take $P = [1], V = [-1]$ and repeat the argument above.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3868301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$ABCD$ is a square. $E$ is the midpoint of $CB$, $AF$ is drawn perpendicular to $DE$. If the side of the square is $2016$ cm , find $BF$.
$ABCD$ is a square. $E$ is the midpoint of $CB$, $AF$ is drawn perpendicular to $DE$. If the side of the square is $2016$ cm , find $BF$.
What I Tried: Here is a picture,
I used a really peculiar way to solve this problem (I would say so). I found a lot of right-angled triangles in it and immediately used Pythagoras' Theorem to find the lengths of $AE$ and $DE$ and found them to be $1008\sqrt{5}$ each.
Now I assumed $DF$ to be $x$; then $FE$ comes out to $(1008\sqrt{5} - x)$.
From $AD$ and $DF$ again by Pythagoras Theorem I get $AF = \sqrt{2016^2 - x^2}$ .
Now comes the main part. From $AF$ and $EF$ along with $AE$ in right-angled $\Delta AFE$, I get :-
$$(2016^2 - x^2) + (1008\sqrt{5} - x)^2 = (1008\sqrt{5})^2$$
$$\rightarrow 2016^2 - x^2 + (1008\sqrt{5})^2 - 2016\sqrt{5}x + x^2 = (1008\sqrt{5})^2 $$
$$\rightarrow 2016^2 = 2016\sqrt{5}x $$
$$\rightarrow x = \frac{2016}{\sqrt{5}}$$
From here I get $FE = \frac{3024}{\sqrt{5}}$ .
Now I used Ptolemy's Theorem on $\square AFEB$, noting that it is cyclic.
$$AE * BF = (AB * EF) + (AF * BE) $$
$$ 1008\sqrt{5} * BF = (1008\sqrt{5} * 2016) + (\sqrt{2016^2 - \frac{2016^2}{\sqrt{5}}} - 1008^2)$$
Everything except $BF$ is known, so I am getting $BF$ as :- $$\frac{1270709}{630} - \frac{1008}{\sqrt{5}}$$
But to my surprise, the correct answer to my problem is simply $2016$ .
So my question is, was there any calculation errors? Or the method I used had a flaw in some way and so was not correct?
Can anyone help?
Alternate solutions are welcome too, but if someone can point out the flaw in my solution, then it will be better.
| The error in your solution is in the Ptolemy Theorem step.
First, $AB\times EF$ is not $1008\sqrt{5} \times 2016$ because $EF$ is not $1008\sqrt{5}$, in fact as you calculated it is $\frac{3024}{\sqrt{5}}$.
Second, $AF\times BE$ is not $\sqrt{2016^2 - \frac{2016^2}{\sqrt{5}}} - 1008^2$, it should be $\sqrt{2016^2 - \frac{2016^2}{\sqrt{5}}} \times 1008$.
Alternatively, there is a pretty easy way:
First, since $\angle DAF=\angle CDE=\angle EAB$ we know $\angle DAE=\angle FAB$. Second, since $A,F,E,B$ are concyclic we know $\angle FBA=\angle FEA$. Therefore triangles $\triangle DEA$ and $\triangle FBA$ are similar, so $FB=AB$.
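Coordinates make the conclusion $FB = AB$ easy to confirm numerically (my own check, with $D$ placed at the origin):

```python
import math

s = 2016.0
D, C, B, A = (0.0, 0.0), (s, 0.0), (s, s), (0.0, s)
E = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)           # midpoint of CB

# F = foot of the perpendicular from A onto the line DE
t = (A[0] * E[0] + A[1] * E[1]) / (E[0] ** 2 + E[1] ** 2)
F = (t * E[0], t * E[1])

perp = (F[0] - A[0]) * E[0] + (F[1] - A[1]) * E[1]   # ~ 0: AF really is _|_ DE
BF = math.hypot(B[0] - F[0], B[1] - F[1])            # equals the side, 2016
```

The computed $BF$ agrees with the book's answer of $2016$.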
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3868418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Inverse M matrices sufficient condition for reversing inequality I am asking out of curiosity, from a related question: Element-wise ordering of the inverse of two M-matrices.
I know that in general the converse is not true.
But suppose that we are given an inverse M matrix $A$ and identity matrix $I$ with $A \leq I$ (entrywise ordering), when can we say $A^{-1}\geq I$ holds true?
Try: Because $A$ is an inverse M matrix, I tried by assuming, suppose $A^{-1}$ is strictly diagonally dominant, then manipulate to show that the inequality will reverse, but so far no luck. Any hint or idea to try pursue will be really helpful.
| If $A\le I$, $A$ is a diagonal matrix. Since $A$ is the inverse of an $M$-matrix, it is a positive diagonal matrix. The assumption $A\le I$ thus implies that all diagonal entries of $A$ lie inside $(0,1]$. Hence $A^{-1}\ge I$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3868581",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is $\frac{X}{Y} = \frac{X}{\vert Y \rvert}$ for $X, Y \stackrel{\text{i.i.d}}{\sim} \mathcal{N}(0,1)$? The justification provided was "symmetry of the Normal", but as far as I understand it $Y$ is not equivalent to $\lvert Y \rvert$, most obviously because the supports are no longer the same!
My Spidey-senses seem to suggest that it has something to do with the interaction of the signs of $X$ and $Y$ (being symmetric), but I can't quite put my finger on it. Below is my attempt to make sense of it in my head.
Thinking aloud (to hopefully illustrate my confusion/rationalization):
Suspending disbelief that densities aren't probabilities for a second (for intuition's sake), the proposition states that the likelihood of every value $\frac{X}{Y}$ is the exact same for the corresponding $\frac{X}{\lvert Y \rvert}$ (that equals $\frac{X}{Y}$).
Let's take $\frac{X}{Y} = 1$ (for concreteness), the following two disjoint events qualify (for all $t$ in $[-\infty, \infty]$):
*
*$X = t, Y = t$
or
*
*$X = -t, Y = -t$
...but if $\frac{X}{\lvert Y \rvert}$, the likelihood of a $+t$ in the denominator has doubled (and the likelihood of $-t$ has gone to $0$), but the likelihood of the numerator being $+t$ (so our ratio is $1$) is the same as it was without the absolute value in the denominator.
Rationalization: Since we've doubled our chances of seeing a denominator of $+t$ and the only remaining qualifying case is having both $X=t$ and $Y=t$ (since $\lvert Y \rvert \nless 0$) and $X, Y$ are independent, this precisely makes up for the loss of the second case above since $P(Y=t) = P(Y=-t)$ from the symmetry of the normal.
Writing that out helped a lot...so I hope the reasoning is sound!
| $$\bigg\{\frac{X}{Y} \leq t\bigg\} = \{X \geq tY, Y < 0\} \cup \{X \leq tY, Y > 0\}$$
and
$$\bigg\{\frac{X}{\lvert Y \rvert} \leq t\bigg\} = \{-X \geq tY, Y < 0\} \cup \{X \leq tY, Y > 0\}.$$
Since the above sets are disjoint,
$$\mathbb{P}\bigg(\bigg\{\frac{X}{Y} \leq t\bigg\}\bigg) = \mathbb{P}(\{X \geq tY, Y < 0\}) + \mathbb{P}(\{X \leq tY, Y > 0\}) \tag{1}$$
and
$$\mathbb{P}\bigg(\bigg\{\frac{X}{\lvert Y \rvert} \leq t\bigg\}\bigg) = \mathbb{P}(\{-X \geq tY, Y < 0\}) + \mathbb{P}(\{X \leq tY, Y > 0\}) \tag{2}.$$
Since $X$ and $Y$ are iid $\text{normal}(0,1)$,
$$\mathbb{P}(\{X \geq tY, Y < 0\}) = \mathbb{P}(\{-X \geq tY, Y < 0\}).$$
To see the symmetry, consider the case $t = 1$ below. $(1)$ and $(2)$ above are obtained by integrating over the shaded regions.
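A simulation also illustrates the equality in distribution (both ratios are in fact standard Cauchy, whose quartiles are $-1, 0, 1$); this is just an illustration I'm adding, with a fixed seed:

```python
import random

rng = random.Random(42)
N = 100_000
plain, folded = [], []
for _ in range(N):
    x, y = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    plain.append(x / y)          # X / Y
    folded.append(x / abs(y))    # X / |Y|

def quartiles(data):
    d = sorted(data)
    return d[N // 4], d[N // 2], d[3 * N // 4]

q_plain = quartiles(plain)    # approximately (-1, 0, 1)
q_folded = quartiles(folded)  # approximately the same
```

The two empirical quartile triples agree up to sampling noise, as the distributional identity predicts.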
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3868695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Real part of function vanishes Consider the function
$$
f(a+bi) = e^{-bi} + e^{\left(\frac{a\sqrt3}{2} + \frac{b}{2}\right)i} + e^{\left(-\frac{a\sqrt3}{2} + \frac{b}{2}\right)i}.
$$
Using a Mathematica plot, I can see that the real part of $f$ vanishes on circles in the complex plane, see below. Is there a way to rigorously show this?
| They look like circles, but they are not. If you look very closely, they are a little bit flattened at their vertical extremes. I will present a far more convincing mathematical argument below.
Note that
$$
f(z) = e^{-Im(z)i} + e^{-Im(z\bar\omega)i} + e^{-Im(z\omega)i}.
$$
Then translate it to cartesian coordinates, obtaining
$$
f(a+bi) = e^{-bi} + e^{\left(\frac{a\sqrt3}{2} + \frac{b}{2}\right)i} + e^{\left(-\frac{a\sqrt3}{2} + \frac{b}{2}\right)i}.
$$
Then, the real part will be
\begin{align*}
Re(f(a+bi)) & = \cos(-b) + \cos\left(\frac{a\sqrt3}{2} + \frac{b}{2}\right) + \cos\left(-\frac{a\sqrt3}{2} + \frac{b}{2}\right) \\
& = \cos(b) + 2\cos\left(\frac{a\sqrt3}{2}\right)\cos\left(\frac{b}{2}\right).
\end{align*}
So we are looking for $(a,b)\in\mathbb R^2$ such that
$$
\cos(b) + 2\cos\left(\frac{a\sqrt3}{2}\right)\cos\left(\frac{b}{2}\right)=0.
$$
If we solve it for $a=0$, according to Wolfram alpha, we get
$$
b = 4\left(\pi n \pm \arctan\left(\sqrt{2\sqrt{3}-3}\right)\right)
$$
where the solutions closest to $0$ are
$$
b^{\pm} = \pm 4\arctan\left(\sqrt{2\sqrt{3}-3}\right).
$$
If we solve it for $b=0$, according to Wolfram alpha, we get
$$
a = \frac{4(3\pi n \pm \pi)}{3\sqrt{3}}
$$
where the solutions closest to $0$ are
$$
a^{\pm} = \pm \frac{4\pi}{3\sqrt{3}}.
$$
By the plot, it seems that the points $(a^{\pm},0)$, $(0,b^{\pm})$ lie on the same circle. Let us see that they do not.
Suppose the points $(a^{\pm},0)$ and $(0,b^{\pm})$ lie on a circle $C$ with equation $(x-x_0)^2 + (y-y_0)^2 = r^2$ (a priori, we are not even assuming that $C$ is centered at the origin).
Using that $(a^{\pm},0)$ are in $C$ and $a^-=-a^+\neq0$, we get
$$
\left\{\begin{array}{l} (a^+-x_0)^2 + y_0^2 = r^2 \\
(-a^+-x_0)^2 + y_0^2 = r^2\end{array}\right. \Rightarrow a^+x_0 = 0 \Rightarrow x_0=0,
$$
and substituting it, we get
$$
(a^+)^2 + y_0^2 = r^2.
$$
Analogously, we show that $y_0=0$ and
$$
(b^+)^2 = r^2.
$$
Putting it all together, we conclude that $x_0=y_0=0$, so $C$ is centered at the origin, and its radius satisfies
$$
r= a^+ = b^+\ \ \Rightarrow\ \ \arctan\left(\sqrt{2\sqrt{3}-3}\right) = \frac{\pi}{3\sqrt{3}}.
$$
A simple computation (I used Wolfram alpha again) shows that the last equality is false.
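The final inequality, and the fact that both crossing points really lie on the zero set, can be confirmed numerically (a check of my own):

```python
import math

def re_f(a, b):
    return math.cos(b) + 2 * math.cos(a * math.sqrt(3) / 2) * math.cos(b / 2)

a_plus = 4 * math.pi / (3 * math.sqrt(3))                 # horizontal crossing
b_plus = 4 * math.atan(math.sqrt(2 * math.sqrt(3) - 3))   # vertical crossing

on_curve_1 = re_f(a_plus, 0.0)   # ~ 0: the horizontal crossing is on the curve
on_curve_2 = re_f(0.0, b_plus)   # ~ 0: so is the vertical crossing
gap = a_plus - b_plus            # ~ 0.027 > 0: the two radii differ
```

So the curve's horizontal and vertical extents from the origin disagree by about $0.027$, ruling out a circle.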
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3868839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Good Source(s) of Old Math Puzzlers for Students I am looking for good tried and true (mathematics, science, logic) problems to present to high school math students from time to time for entertainment, as well as for the purpose of trying to sharpen their critical reasoning skills.
Tractable problems with unexpected outcomes, such as the "Rope Around the Earth Problem", is what I have in mind.
Are there any sources that anyone knows of (perhaps archived on the Internet) that contain plenty of "puzzlers" that talented students are apt to find interesting? Many thanks.
| For reasons mentioned here, I recommend Tatham's Puzzles and Manufactoria. Tatham's puzzles are automatically generated, so they can be played over and over again, and many people who like logical puzzles will definitely like them. Unfortunately, they do not have "unexpected outcomes", but I hope you find them a good puzzle source!
I want to say that many typical puzzle collections (even those claimed to be logic puzzles) do not actually rely on logic, but on pattern matching, which is not logic. Hence if your goal is to sharpen logical reasoning, then these puzzles are pointless. I have not read Peter Winkler's books mentioned in panini's answer, but it is a good sign that the author says that the problems "should suggest some general mathematical truth". Just to give an example of what is not logical, completing a sequence is not logical, and arguably most such puzzles are wrong from an information-theoretic viewpoint.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3869034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Topological space, manifold, open sets Consider a homeomorphism $f$ sending an open set $X$ of one manifold to an open set $Y$ of another manifold:
$$
f\,:~X \longrightarrow Y~~.
$$
Being a homeomorphism, $f$ is continuous -- and so is its inverse $g = f^{-1}~$:
$$
g\,:~ Y\longrightarrow X ~~.
$$
Then a (not necessarily open) subset
$$
A \subset X
$$
is mapped by $f$ to a (not necessarily open) subset $f A\subset Y$.
The resulting restrictions of $f$ and $g = f^{-1}$ are:
$$
f_{~|A}\,: ~~~~ A \longrightarrow f(A)~~,
$$
$$
g_{~|f(A)}\,: ~~~ f(A) \longrightarrow A ~~.
$$
Now, if I postulate that the image $f(A)~$ ${\underline{\mbox{is}}}$ open, will that imply that the preimage $A$ is open also?
Stated shortly: is a restriction of a continuous map continuous too?
/It certainly is in calculus -- but how to show this in topology?/
| If $f:X\to Y$ is a homeomorphism and $A\subseteq X$, then $f_{|A}:A\to f(A)$ is a homeomorphism as well. Regardless of what $A$ is, open or not.
Indeed, being a homeomorphism means that there is a continuous inverse $g:Y\to X$. Now a restriction of a continuous function is continuous and so $f_{|A}$ is continuous. Obviously $g_{|f(A)}:f(A)\to A$ is well defined (because $g(f(A))=A$) and continuous as well. This $g_{|f(A)}$ is the inverse of $f_{|A}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3869224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Representing projection operator in terms of orthonormal basis The question given is :
Let $P: V \rightarrow V$ be a projection operator, i.e., $P^{2}=P .$ If $V$ is finite dimensional, then show that $\operatorname{tr}(P)$ is the dimension of the subspace being projected onto.
The solution is:
*
*Let $P$ be an orthogonal projection operator onto an $M$-dimensional subspace $V$
*Let $b_{1}, \ldots, b_{M}$ be an orthonormal basis for $V .$ Then
$$
P=\sum_{m=1}^{M} b_{m} b_{m}^{\prime}
$$
$$
\begin{aligned}
\operatorname{Trace}(P) &=\operatorname{Trace}\left(\sum_{m=1}^{M} b_{m} b_{m}^{\prime}\right) \\
&=\sum_{m=1}^{M} \operatorname{Trace}\left(b_{m} b_{m}^{\prime}\right)=\sum_{m=1}^{M} \operatorname{Trace}\left(b_{m}^{\prime} b_{m}\right) \\
&=\sum_{m=1}^{M} \operatorname{Trace}(1)=\sum_{m=1}^{M} 1=M
\end{aligned}
$$
Can anyone explain the first step of the proof, i.e. how the projection operator is represented as $P=\sum_{m=1}^{M} b_{m} b_{m}^{\prime}$ where $b_{1}, \ldots, b_{M}$ be an orthonormal basis for $V .$
| If $P$ is also assumed to be symmetric ($P^T=P$), and $b_1,\dots,b_M$ is an orthonormal basis of the image of $P$, then indeed one has $P=\sum_ib_ib_i^T=:B$:
Note that since $P$ is symmetric, its eigenspaces (i.e. the kernel and the range) are orthogonal.
Furthermore, it's easy to verify that if $v\in{\rm im}(P)={\rm span}(b_1,\dots,b_M)$, then $Bv=v$, and that if $v\perp{\rm im}(P)$, then $Bv=0$, and these properties characterize $P$.
Nevertheless, the statement is true for general projections as well:
If $P^2=P$, we still have $V={\rm im}(P)\oplus\ker(P)$, though these subspaces might not be orthogonal to each other in the general case.
Choosing bases $v_1,\dots,v_M$ in ${\rm im}(P)$ and $u_1,\dots,u_K$ in $\ker(P)$, we find that the matrix of $P$ in basis $v_1,\dots,v_M,u_1,\dots,u_K$ is diagonal with $M$ entries of $1$ and $K$ entries of $0$, because $Pv_i=v_i$ and $Pu_i=0$.
Consequently, ${\rm tr}(P)=M$.
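The general (non-orthogonal) case is easy to see on a concrete idempotent matrix — a small exact-arithmetic illustration of mine:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# a non-orthogonal projection of R^3 onto the xy-plane:
# P sends (x, y, z) to (x + 2z, y + 3z, 0), killing the line through (2, 3, -1)
P = [[1, 0, 2],
     [0, 1, 3],
     [0, 0, 0]]

assert matmul(P, P) == P      # idempotent: P^2 = P
t = trace(P)                  # 2 = dimension of the image
```

Note `P` is not symmetric, so this projection is not orthogonal — yet the trace still counts the dimension of the image.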
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3869419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Wirtinger derivative form of Cauchy–Riemann equations I'm trying to understand the Cauchy-Riemann equations using the traditional $u, v$ form and the Wirtinger derivative form.
Taking $\ln|z|$ as an example function, for the normal $u, v$ form I have:
$$\begin{align}u(x,y) &= \ln|x + iy|\\ v(x,y) &= 0\end{align}$$
so the Cauchy-Riemann equations are not satisfied:
$$\frac{\partial}{\partial x} u(x,y) = \frac{1}{x + iy} \neq \frac{\partial}{\partial y} v(x,y) = 0$$
$$\frac{\partial}{\partial y} u(x,y) = \frac{i}{x + iy} \neq -\frac{\partial}{\partial x} v(x,y) = 0$$
So far so good, I didn't expect them to be. But there's another form for the Cauchy-Riemann equations using Wirtinger derivatives:
$$\frac{\partial}{\partial \overline{z}} f(z) = 0$$
Doing it this way I get
$$\begin{align} \frac{\partial}{\partial \overline{z}} \ln|z| &= \\
&= \frac{1}{2} (\frac{\partial}{\partial x} + i \frac{\partial}{\partial y}) \ln|x + iy| \\
&= \frac{1}{2} (\frac{1}{x + iy} + i \frac{i}{x + iy}) \\
&= \frac{1}{2} (\frac{1}{z} - \frac{1}{z}) \\
&= 0\end{align}$$
So using the Wirtinger derivative form it would seem that $\ln|z|$ is holomorphic? I don't think that's right; I thought real valued functions should only be holomorphic if they're constant. What am I doing wrong?
| As commented, you got some of the partial derivatives wrong.
A trick that's often useful: $$\log|z|=\frac12\log|z|^2=\frac12\log(x^2+y^2).$$ Useful because we know how to differentiate polynomials: $$\frac{\partial}{\partial x}\log|z|
=\frac12\frac{\partial}{\partial x}\log(x^2+y^2)=\frac x{x^2+y^2}.$$
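A finite-difference check (my own) confirms the corrected partial and shows that the Wirtinger derivative is $1/(2\bar z) \neq 0$, so $\log|z|$ is not holomorphic:

```python
import math

def u(x, y):
    return math.log(math.hypot(x, y))      # log|z| for z = x + iy

x0, y0, h = 0.7, -0.4, 1e-6
ux = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)   # central differences
uy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)

r2 = x0 * x0 + y0 * y0
# correct real partials: x/(x^2+y^2) and y/(x^2+y^2), so the Wirtinger
# derivative (ux + i*uy)/2 equals 1/(2*conj(z)), which never vanishes
dzbar = complex(ux, uy) / 2
target = 1 / (2 * complex(x0, -y0))
```

The numerical derivatives match $x/(x^2+y^2)$ and $y/(x^2+y^2)$, not $1/(x+iy)$, which is exactly where the original computation went wrong.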
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3869530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Help with Cayley-Hamilton determinant trick theorem from Matsumura's Commutative Algebra. Theorem 2.1. Suppose that $M$ is an $A$-module generated by $n$ elements, and that $\varphi \in \text{Hom}_A(M,M)$; let $I$ be an ideal of $A$ such that $\varphi(M) \subset IM$. Then there is a relation of the form:
$$
\varphi^n + a_1 \varphi^{n-1} + \dots + a_{n-1} \varphi + a_n = 0,
$$
with $a_i \in I$ for $1 \leq i \leq n$ (where both sides are considered as endomorphisms of $M$).
Proof. Let $M = A w_1 + \dots + Aw_n$. By the assumption $\varphi(M) \subset IM$ there exist $a_{ij} \in I$ such that $\varphi(w_i) = \sum_{j=1}^n a_{ij} w_j$. This can be rewritten $\sum_{j=1}^n (\varphi \delta_{ij} - a_{ij}) w_j = 0, \ i=1..n$. The coefficients of this system of linear equations can be viewed as a square matrix $(\varphi \delta_{ij} - a_{ij})$ of elements of $A'[\varphi]$, the commutative subring of the endomorphism ring $E = \text{Hom}_A(M,M)$ generated by the image $A'$ of $A$ under $a \mapsto (x \mapsto ax)$, together with $\varphi$. Let $b_{ij}$ denote its $(i,j)$th cofactor, and $d$ its determinant. Multiply the equation through by $b_{ik}$ and sum over $i$:
$$
\sum_{i=1}^n b_{ik}(\sum_{j=1}^n (\varphi \delta_{ij} - a_{ij}) w_j ) = 0
$$
For example, if $B = (\varphi \delta_{ij} - a_{ij})$ is a $3 \times 3$ matrix we have:
$$
B = -\begin{pmatrix}
a_{11} - \varphi & a_{12} & a_{13} \\
a_{21} & a_{22} -\varphi & a_{23} \\
a_{31} & a_{32} & a_{33} - \varphi
\end{pmatrix}
$$
Then for example at $i = 1$ we have the following cofactors:
$$
b_{11} = -\det\begin{pmatrix} a_{22} - \varphi & a_{23} \\ a_{32} & a_{33} - \varphi \end{pmatrix}, b_{12} = \det\begin{pmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} - \varphi \end{pmatrix}, b_{13} = -\det\begin{pmatrix} a_{21} & a_{22} - \varphi \\ a_{31} & a_{32} \end{pmatrix}
$$
Then the sum:
$$
\sum_{i=1}^n b_{ik} \sum_{j=1}^n B_{ij} w_j = 0 \iff \\
\sum_{j=1}^n \sum_{i=1}^n b_{ik} B_{ij} w_j = 0 \iff \\
\sum_{j=1}^n (\det B) w_j = 0 \iff \\
(\det B) \sum_{j=1}^n w_j = 0 \ \ \textbf{( wrong here )}
$$
I believe it should be $(\det B) w_j = 0, \ \forall j=1..n$ instead, as in this lecture video.
I followed the recipe to multiply by $b_{ik}$ and sum over $i$ and I get the wrong answer. Please help me find where I've made a mistake.
| I shall differ slightly from your notation: $b_{ij}$ denotes the $(i,j)$th entry of $\operatorname{Adj}B$.
We know that $\displaystyle{}\sum_{j=1}^n B_{ij}w_j=0 \ \forall \ i=1,2,\dots, n$. Then as you did , we get $\displaystyle{}\sum_{i=1}^nb_{ki}\sum_{j=1}^n B_{ij}w_j=0 \ \forall \ k=1,2,\dots, n \\
\implies \displaystyle{}\sum_{j=1}^n\left( \sum_{i=1}^n b_{ki}B_{ij}\right )w_j=0 \ \forall \ k=1,2,\dots, n \\
\implies\displaystyle{ \sum_{j=1}^n(\det B)\delta_{kj}w_j=0 \ \forall \ k=1,2,\dots, n }\\
\implies \displaystyle{(\det B)w_k=0 \ \forall \ k=1,2,\dots, n }$
which is what you wanted.
Edit: Your mistake was writing $\displaystyle {\sum_{i}b_{ik}B_{ij}=\det B}$ instead of $(\det B)\delta_{jk}$. After all, you should expect the sum to depend on $k$ in some way since you are not summing over $k$.
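The step $\sum_i b_{ki}B_{ij} = (\det B)\,\delta_{kj}$ is just the adjugate identity $\operatorname{Adj}(B)\,B = (\det B)\,I$. As a quick numerical sanity check (my addition, using an arbitrary integer matrix), it can be verified directly:

```python
# b_{ki} in the answer is the (k,i) entry of Adj(B), i.e. the (i,k) cofactor of B.

def cofactor(m, i, j):
    # signed determinant of the 2x2 minor obtained by deleting row i, column j
    minor = [[m[r][c] for c in range(3) if c != j] for r in range(3) if r != i]
    return (-1) ** (i + j) * (minor[0][0] * minor[1][1] - minor[0][1] * minor[1][0])

def det3(m):
    # Laplace expansion along the first row
    return sum(m[0][j] * cofactor(m, 0, j) for j in range(3))

B = [[2, 1, 0],
     [1, 3, 1],
     [0, 1, 2]]
d = det3(B)

# sum_i b_{ki} B_{ij} == det(B) * delta_{kj}
for k in range(3):
    for j in range(3):
        s = sum(cofactor(B, i, k) * B[i][j] for i in range(3))
        assert s == (d if k == j else 0)
print("Adj(B) B = det(B) I verified; det(B) =", d)
```

Note how the sum depends on $k$ through the Kronecker delta, which is exactly the point of the Edit above.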
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3869675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do I find the normal vector?
I think to find a tangent plane I have to find a normal vector to the plane, and know a point on the plane. I know a point is $(-2,3\pi/4, \pi/4)$ but I don't know how to find a normal vector.
Also, I am a bit confused - isn't $f(x,y,z)$ function of 3 variables which means the graph is 4D? So is it tangent "hyperplane"? Or is it regular 2D tangent plane?
Thanks for help
| I found the answer from here: https://ocw.mit.edu/courses/mathematics/18-02sc-multivariable-calculus-fall-2010/2.-partial-derivatives/part-b-chain-rule-gradient-and-directional-derivatives/session-37-example/MIT18_02SC_we_17_comb.pdf
The "level surface" is $$x^2 \sin(2y) \cos^2 (z) = -2$$
Then put a new variable $w$ so that $$w = x^2 \sin(2y) \cos^2(z)$$
Now have to find gradient on $w$:
$$\nabla w = \left(\frac{\partial w}{\partial x},\frac{\partial w}{\partial y},\frac{\partial w}{\partial z}\right) = \left(2x\sin(2y)\cos^2(z), 2x^2\cos(2y)\cos^2(z), -2x^2\sin(2y)\sin(z)\cos(z)\right)$$
Now to plug in point $P (-2, 3\pi/4, \pi/4) = (x,y,z)$ to this: $$\nabla w | P = \left(-4\sin(3\pi/2)\cos^2(\pi/4), 8\cos(3\pi/2)\cos^2(\pi/4), -8\sin(3\pi/2)\sin(\pi/4)\cos(\pi/4)\right)$$
so it is $$\nabla w | P = \left(2,0,4\right)$$
this means tangent plane is: $$2(x+2)+4(z-\pi/4) = 0$$
which is the same as $$2x + 4z = \pi -4 $$
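As a sanity check (my addition, not part of the original solution), the gradient value $(2,0,4)$ at $P$ can be reproduced numerically with central finite differences:

```python
import math

def w(x, y, z):
    # the level-surface function w = x^2 sin(2y) cos^2(z)
    return x**2 * math.sin(2 * y) * math.cos(z)**2

def grad(f, p, h=1e-6):
    # central-difference approximation of the gradient of f at point p
    g = []
    for k in range(3):
        hi = list(p); lo = list(p)
        hi[k] += h; lo[k] -= h
        g.append((f(*hi) - f(*lo)) / (2 * h))
    return g

P = (-2.0, 3 * math.pi / 4, math.pi / 4)
assert abs(w(*P) + 2) < 1e-12   # P does lie on the level surface w = -2
g = grad(w, P)
print([round(v, 4) for v in g])  # numerically close to (2, 0, 4)
```

This confirms both that $P$ lies on the level surface and that $\nabla w|_P = (2,0,4)$, the normal vector used for the tangent plane.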
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3869832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
estimate of iterates of a polynomial
Let $P\in\mathbb R[x]$ be of degree $d\ge2$. I want to prove that for all $x\in\mathbb R$, except possibly for a set of zero Lebesgue measure, one has $\lim\limits_{n\to+\infty}\frac{\ln(P^{[n]}(x))}{d^n\ln(|x|)}=1$, where $P^{[n]}(x)$ is defined inductively by $P^{[n+1]}(x)=P(P^{[n]}(x))$ and $P^{[0]}(x)=x$. For $P(x)=x^d$, it is obviously true. But in the general case, I did not manage to prove it.
Thanks in advance for a solution or any hint.
| Fix an integer $d \geq 2$. Given a polynomial $P \in \mathbb{R}[x]$ of degree $d$ and $x \in \mathbb{R}$, it is in fact unlikely that $$\lim_{n \rightarrow +\infty} \frac{\log\left( \left\lvert P^{[n]}(x) \right\rvert \right)}{d^{n}} = \log\left( \lvert x \rvert \right) \, \text{.}$$
*
*For example, consider the polynomial $$P(x) = (x -1)^{d} +1 \in \mathbb{R}[x] \, \text{.}$$ For every $n \geq 0$, we have $P^{[n]}(x) = (x -1)^{d^{n}} +1$. Therefore, for every $x \in \mathbb{R}$, we have $$\lim_{n \rightarrow +\infty} \frac{\log\left( \left\lvert P^{[n]}(x) \right\rvert \right)}{d^{n}} = \begin{cases} 0 & \text{if } x \in (0, 2] \text{ or } (x = 0 \text{ and } d \text{ is even})\\ -\infty & \text{if } x = 0 \text{ and } d \text{ is odd}\\ \log\left( \lvert x -1 \rvert \right) & \text{if } x \in (-\infty, 0) \cup (2, +\infty) \end{cases} \, \text{.}$$
*In fact, assume that $P \in \mathbb{R}[x]$ has degree $d$ and there exists $x_{0} \in (-1, 0) \cup (0, 1)$ such that $$\lim_{n \rightarrow +\infty} \frac{\log\left( \left\lvert P^{[n]}\left( x_{0} \right) \right\rvert \right)}{d^{n}} = \log\left( \left\lvert x_{0} \right\rvert \right) \, \text{,}$$ and let us prove that $P(x) = \pm x^{d}$. Write $$P(x) = \sum_{j = 0}^{d} a_{j} x^{j} \, \text{,} \quad \text{with} \quad a_{0}, \dotsc, a_{d} \in \mathbb{R} \, \text{,} \quad \text{and} \quad m = \min\left\lbrace j \in \lbrace 0, \dotsc, d \rbrace : a_{j} \neq 0 \right\rbrace \, \text{,}$$ and let us prove that $m = d$ and $a_{d} = \pm 1$. We have $$\log\left( \left\lvert P(x) \right\rvert \right) = m \log\left( \lvert x \rvert \right) +\log\left( \left\lvert \sum_{j = m}^{d} a_{j} x^{j -m} \right\rvert \right) = m \log\left( \lvert x \rvert \right) +O_{0}(1) \, \text{,}$$ and hence $$\lim_{\substack{x \rightarrow 0\\ x \neq 0}} \frac{\log\left( \left\lvert P(x) \right\rvert \right)}{d \log\left( \lvert x \rvert \right)} = \frac{m}{d} \, \text{.}$$ For $n \geq 0$, set $$x_{n} = P^{[n]}\left( x_{0} \right) \in \mathbb{R} \quad \text{and} \quad u_{n} = \frac{\log\left( \left\lvert x_{n} \right\rvert \right)}{d^{n}} \in [-\infty, +\infty) \, \text{.}$$ Since $\lim\limits_{n \rightarrow +\infty} u_{n} = \log\left( \left\vert x_{0} \right\rvert \right)$ and $\log\left( \left\lvert x_{0} \right\rvert \right) \in (-\infty, 0)$ by hypothesis, we have $$x_{n} \neq 0 \text{ for large } n \, \text{,} \quad \lim_{n \rightarrow +\infty} x_{n} = 0 \quad \text{and} \quad \lim_{n \rightarrow +\infty} \frac{\log\left( \left\lvert P\left( x_{n} \right) \right\rvert \right)}{d \log\left( \left\lvert x_{n} \right\rvert \right)} = \lim_{n \rightarrow +\infty} \frac{u_{n +1}}{u_{n}} = 1 \, \text{.}$$ Therefore, we have $\frac{m}{d} = 1$, and hence $P(x) = a_{d} x^{d}$. 
It follows by induction that $$P^{[n]}(x) = a_{d}^{\frac{d^{n} -1}{d -1}} x^{d^{n}}$$ for all $n \geq 0$, and in particular $$\lim_{n \rightarrow +\infty} \frac{\log\left( \left\lvert P^{[n]}\left( x_{0} \right) \right\rvert \right)}{d^{n}} = \lim_{n \rightarrow +\infty} \left( \log\left( \left\lvert x_{0} \right\rvert \right) +\frac{1 -\frac{1}{d^{n}}}{d -1} \log\left( \left\lvert a_{d} \right\rvert \right) \right) = \log\left( \left\lvert x_{0} \right\rvert \right) +\frac{\log\left( \left\lvert a_{d} \right\rvert \right)}{d -1} \, \text{.}$$ Therefore, we have $a_{d} = \pm 1$, and hence $P(x) = \pm x^{d}$.
*Finally, let me mention that this is very related to the notion of Green function of a polynomial in complex dynamics. Given a polynomial $P \in \mathbb{C}[z]$ of degree $d$ and $z \in \mathbb{C}$, set $$g_{P}(z) = \lim_{n \rightarrow +\infty} \frac{\log^{+}\left( \left\lvert P^{[n]}(z) \right\rvert \right)}{d^{n}} \, \text{,} \quad \text{where} \quad \log^{+} = \max\lbrace \log, 0 \rbrace \, \text{.}$$ This gives a well-defined map $g_{P} \colon \mathbb{C} \rightarrow \mathbb{R}_{\geq 0}$. Now, define the filled-in Julia set of $P$ to be $$K_{P} = \left\lbrace z \in \mathbb{C} : \left( P^{[n]}(z) \right)_{n \geq 0} \text{ is bounded} \right\rbrace \, \text{,}$$ which is a compact subset of $\mathbb{C}$. The Green function $g_{P}$ of $P$ has the following properties:
*
*the map $g_{P}$ is continuous on $\mathbb{C}$;
*for every $z \in \mathbb{C}$, we have $g_{P}(z) = 0$ if and only if $z \in K_{P}$;
*the map $g_{P}$ is harmonic on $\mathbb{C} \setminus K_{P}$;
*we have $$g_{P}(z) = \log\left( \lvert z \rvert \right) +\frac{1}{d -1} \log\left( \left\lvert a_{d} \right\rvert \right) +o_{\infty}(1) \, \text{,} \quad \text{where} \quad P(z) = \sum_{j = 0}^{d} a_{j} z^{j} \, \text{.}$$
Furthermore, these properties characterize the map $g_{P}$.
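To illustrate the counterexample in the first bullet point numerically (my addition), one can check the closed form $P^{[n]}(x) = (x-1)^{d^n}+1$ for $P(x)=(x-1)^d+1$ with exact rational arithmetic:

```python
from fractions import Fraction

d = 3

def P(t):
    # the counterexample polynomial P(t) = (t - 1)^d + 1
    return (t - 1)**d + 1

x = Fraction(3, 2)   # a sample starting point
val = x              # val = P^[n](x), starting with n = 0
for n in range(5):
    # closed form: P^[n](x) = (x - 1)^(d^n) + 1
    assert val == (x - 1)**(d**n) + 1
    val = P(val)
print("closed form verified for n = 0..4 at x = 3/2")
```

Since $x - 1 = 1/2$ here, $P^{[n]}(x) \to 1$, consistent with the limit being $0$ rather than $\log(|x|)$ on $(0,2]$.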
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3870247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Understanding the proof of Prop 13.11 of Joy of Cats
Theorem. Embeddings of dense subcategories preserve limits.
Part of the Proof. Let $\bf{A}$ be a dense subcategory of $\mathbf{B}$ with embedding $E : \bf{A}\to\bf{B}$, and let $D : \bf{I}\to \bf{A}$ be a diagram with a limit $\mathcal{L} = (L\overset{l_i}{\to}D(i))_{i\in \text{Ob}(\mathbf{I})}$. Then $\mathcal{L}$ is a cone over $E\circ D$. Let $\mathcal{S} = (B\overset{f_i}{\to}D(i))_{i\in \text{Ob}(\mathbf{I})}$ be an arbitrary cone over $E\circ D$. By density there exists a diagram $G : \bf{J}\to \bf{A}$ and a colimit $(G(j)\overset{c_j}{\to} B)_{j\in \text{Ob}(\mathbf{J})}$ of $E \circ G$. For each object $j$ of $\text{Ob}(\bf{J})$, $(G(j)\overset{f_i\circ c_j}{\to} D(i))_{i\in \text{Ob}(\mathbf{I})}$ is a cone over $D$. Hence for each $j \in \text{Ob}(\bf{J})$ there exists a unique morphism $g_j : G(j)\to L$ with $f_i \circ c_j = \ell_i \circ g_j$ for each $i \in \text{Ob}(\bf{I})$....
In this proof of Proposition 13.11 of Joy of Cats, I don't understand the reason for writing the last line. Of course this will happen if $f_i \circ c_j$ is an $\mathbf{A}$-morphism. But I am not sure how that is true unless $\mathbf{A}$ is a full subcategory of $\mathbf{B}$. Could anyone explain?
| By definition a (colimit-)dense subcategory is a full subcategory, see definition 12.10 in the linked file. So since $f_i \circ c_j$ is an arrow between two objects in $\mathbf{A}$, that arrow must also be in $\mathbf{A}$. So indeed, the existence of $g_j$ follows from the universal property of the limit $\mathcal{L}$ in $\mathbf{A}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3870396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to solve a second-order PDE with initial conditions using Wolfram Mathematica?
I have the following task:
$$
\frac{\partial^2 u}{\partial x \partial y} = 0,~ u(x,x^2) = 0,~ \frac{\partial u}{\partial x}(x, x^2) = \sqrt{|x|},~|x| < 1
$$
I write this, but it don't work:
weqn = D[u[x,y],x,y] == 0
ic = {u[x,x^2] == 0, Derivative[0,1][u][x,x^2] == Sqrt[Abs[x]], Abs[x] < 1}
sol = DSolve[{weqn, ic},u,{x,y}]
And I got next error:
DSolve: Equation or list of equations expected instead of Abs[x]<1 in the first argument {$\mathrm u^{(1,1)}[x,y] == 0$, {$\mathrm{u[x,x^2] == 0, u^{(0,1)}[x,x^2]==\sqrt{Abs[x]},~ Abs[x]<1}$}}.
When I try to find a solution to a similar problem, when in conditionals the second argument like $ \mathrm x ^ 2 $, I can't do it.
Where did I go wrong?
I want to use Wolfram Mathematica to solve the problem.
| $$\frac{\partial^2 u}{\partial x \partial y} =0$$
integrating with respect to $y$ we get $$\frac{\partial u}{\partial x} = f(x) $$ and similarly, integrating with respect to $x$, we get $$u(x,y) =F(x) +G(y)$$
We have $$f(x)=\frac{\partial u}{\partial x} (x, x^2 ) =\sqrt{|x|}$$
therefore $$F(x) = \int \sqrt{|x|} dx $$ and hence $$F(x) =\frac{2}{3}\text{sign} (x)|x|^{\frac{3}{2}}$$
and since $$0=u(x,x^2 ) =\frac{2}{3}\text{sign}(x)|x|^{\frac{3}{2}} +G(x^2 )$$
we get $$G(y) =-\frac{2}{3}\text{sign}(y)|y|^{\frac{3}{4}}$$
Hence the solution is $$ u(x,y) =\frac{2}{3}\text{sign}(x)|x|^{\frac{3}{2}} -\frac{2}{3}\text{sign}(y)|y|^{\frac{3}{4}}$$
(Note: this satisfies $u(x,x^2)=0$ only for $0\le x<1$. For $x<0$ the requirement $F(x)+G(x^2)=0$ would force $F(x)=F(-x)$, which is impossible since $F'(x)=\sqrt{|x|}>0$ for $x\ne 0$ makes $F$ strictly increasing; so no single pair $F,G$ satisfies the boundary condition on all of $|x|<1$.)
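As a quick numerical check (my addition), the stated $u$ satisfies both conditions on sample points with $0<x<1$ (for $x<0$ the boundary condition cannot hold, since $F$ is odd while $x\mapsto x^2$ identifies $x$ and $-x$):

```python
import math

def sign(t):
    return (t > 0) - (t < 0)

def u(x, y):
    # u(x, y) = (2/3) sign(x) |x|^(3/2) - (2/3) sign(y) |y|^(3/4)
    return (2 / 3) * sign(x) * abs(x)**1.5 - (2 / 3) * sign(y) * abs(y)**0.75

xs = [0.1, 0.3, 0.5, 0.9]

# boundary condition u(x, x^2) = 0 on the sample points
for x in xs:
    assert abs(u(x, x * x)) < 1e-12

# derivative condition du/dx (x, x^2) = sqrt(|x|): hold y fixed at x^2
# and differentiate in x by central differences
h = 1e-6
for x in xs:
    y = x * x
    dudx = (u(x + h, y) - u(x - h, y)) / (2 * h)
    assert abs(dudx - math.sqrt(abs(x))) < 1e-6
print("both conditions hold on the sampled points with 0 < x < 1")
```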
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3870524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Show that a Hermitian operator is represented by a Hermitian matrix.
Let $\hat{A}$ be a Hermitian operator.
I have to show that $\overline{A_{\alpha\beta}}=A_{\beta\alpha}$
So, if an operator is Hermitian then $\hat{A} = \hat{A}^*$
I started with this:
$\hat{A}^*=\sum_{\alpha\beta}\overline{|u_{\alpha}\rangle A_{\alpha\beta}\langle u_{\beta}|}=\sum_{\alpha\beta}|u_{\beta}\rangle\overline{A_{\alpha\beta}}\langle u_{\alpha}|$
I don't know how to continue
| Now note that $$\hat A=\sum_{\alpha\beta}|u_\beta\rangle A_{\beta\alpha}\langle u_\alpha|$$This must be equal to your final expression $\sum_{\alpha\beta}|u_{\beta}\rangle\overline{A_{\alpha\beta}}\langle u_{\alpha}|$ by the Hermiticity of $\hat A$. Compare coefficients one by one, for instance by multiplying the two sums from the left by arbitrary basis bras $\langle u|$ and from the right by arbitrary basis kets $|u\rangle$.
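As a concrete illustration (my addition; the $2\times 2$ matrix below is an arbitrary example), both forms of the condition, $\overline{A_{\alpha\beta}} = A_{\beta\alpha}$ and $\langle u_\alpha, \hat A u_\beta\rangle = \langle \hat A u_\alpha, u_\beta\rangle$, can be checked numerically:

```python
# A concrete 2x2 Hermitian matrix: real diagonal, off-diagonal entries
# that are complex conjugates of each other.
A = [[complex(2, 0), complex(1, -3)],
     [complex(1, 3), complex(5, 0)]]
n = len(A)

# matrix condition: conj(A[a][b]) == A[b][a]
for a in range(n):
    for b in range(n):
        assert A[a][b].conjugate() == A[b][a]

def inner(x, y):
    # physics convention: conjugate-linear in the first argument
    return sum(xi.conjugate() * yi for xi, yi in zip(x, y))

def apply(A, v):
    return [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]

# operator condition: <u_a, A u_b> == <A u_a, u_b> on basis vectors
basis = [[1, 0], [0, 1]]
for a in range(n):
    for b in range(n):
        assert inner(basis[a], apply(A, basis[b])) == inner(apply(A, basis[a]), basis[b])
print("matrix elements of a Hermitian operator satisfy conj(A_ab) == A_ba")
```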
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3870710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $A$ is a closed set with $\mathring{A}=\emptyset$, it is the boundary of another set
I have to decide whether the following statement is true:
If $A\neq\emptyset$ is a closed set and $\mathring{A}=\emptyset$, then $\exists D$ such that $A=\partial D$, where $A, D \subseteq X$.
I know that if $X=\mathbb{R}^2$ (for example) this is true and it's easy to prove it, since I just have to take $D$ to be the rational elements in $A$. However, for an arbitrary space $X$ this method doesn't work. Can someone help me with this?
| Yes, it is true since $$\partial A = \overline{A} \setminus \overset{o}{A} =\overline{A} \setminus \emptyset =A.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3870839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do I find a function $f(3n) = 3n$ such that it is different from the identity function?
This is my first time posting. I'm sorry if I'm neglecting some good etiquette practices; I tried to read everything that's been sent my way, but I probably missed something anyway. Also, English is not my first language, so I'm relying on Google to translate math-specific terminology. If something isn't clear, please let me know!
I'm a Computer Science student at University, and I've been requested to find a function $f: \Bbb{N}\to\Bbb{N}$ such that $\forall n \in \Bbb{N}$, $f(3n) = 3n \land f\neq \mathrm{id}_\Bbb {N}$. I absolutely cannot find a solution, as $f(x) = x$ (and, as such, $f(3n) = 3n$ too) literally is the definition of identity function as far as I know... Am I missing something? Thanks in advance.
EDIT: Thanks a lot everyone!
| Your function is from $\Bbb{N}$ to $\Bbb{N}$, so the equation
$$f(3n) =3n\tag1$$
means
\begin{align}
&f(3) = 3\\
&f(6) = 6\\
&f(9) = 9\\
&\cdots \cdots \cdots
\end{align}
so for natural numbers other than multiples of $3$, the function $f$ may take any natural value and still satisfy the equation $(1)$.
So it is sufficient to assign e.g. $f(2) = 100$, and for all other natural numbers $f(n) = n$.
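The construction above can be written out as a short sketch (my addition; the value $100$ at $2$ is an arbitrary choice):

```python
# A minimal example of such a function: the identity everywhere except
# at a single non-multiple of 3 (here 2 -> 100).
def f(n):
    return 100 if n == 2 else n

# f agrees with the identity on every multiple of 3 ...
assert all(f(3 * k) == 3 * k for k in range(1000))
# ... but f is not the identity function
assert f(2) != 2
print("f(3n) = 3n for every n, yet f != id_N")
```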
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3870972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |