H: Show that X1 and X2 are not independent
Let $X_1 \sim N(0,1)$ and $\xi$ be an independent symmetric random sign, i.e. $P(\xi = \pm1) = \frac{1}{2}$
Define $X_2= \xi X_1$. This random variable is normal, since for any $s\in\mathbb{R}$ (???)
Show that $X_1$ and $X_2$ are not independent.
I’ve no idea how to solve it, any suggestions?
AI: If $X_1$ and $X_2$ were independent, we would have $P(X_1>1, X_2 > 1) = P(X_1 > 1) P(X_2 > 1)$. We will prove that this is not true, hence, they are dependent. (The event $\{X_1<0,\,X_2<0\}$ is not enough here: both sides of the corresponding identity come out to $\frac14$.)
First, using the symmetry of $X_1$,
$$P(X_2 > 1)=P(\xi X_1 > 1)=P(X_1>1, \xi =1) + P(X_1<-1, \xi =-1)=$$
$$P(X_1>1)P(\xi =1) + P(X_1<-1)P(\xi =-1) = \frac{1}{2}P(X_1>1)+ \frac{1}{2} P(X_1<-1) = P(X_1>1)$$
But,
$$P(X_1>1, X_2>1)=P(X_1>1, \xi X_1 > 1) = P(X_1 > 1, \xi = 1)=$$
$$=P(X_1 > 1)P(\xi = 1)=\frac{1}{2}P(X_1>1)$$
Therefore, $P(X_1>1, X_2>1) = \frac{1}{2}P(X_1>1)\neq P(X_1>1)^2 = P(X_1>1)P(X_2 >1)$, since $P(X_1>1)\neq\frac12$.
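As a quick numerical sanity check, here is a small Monte Carlo sketch in Python (the sample size is an arbitrary choice):
```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
x1 = rng.standard_normal(n)
xi = rng.choice([-1.0, 1.0], size=n)
x2 = xi * x1

joint = np.mean((x1 > 1) & (x2 > 1))         # ~ (1/2) P(X1 > 1)
product = np.mean(x1 > 1) * np.mean(x2 > 1)  # ~ P(X1 > 1)^2
print(joint, product)  # noticeably different, so X1 and X2 are dependent
```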
|
H: Prove $a^k\equiv 1 \pmod k$ has no solution for infinitely many even integers k
Prove that there are infinitely many EVEN positive integers $k$ such that for each of
those $k$, the equation $\varphi(n) = k$ has no solution in positive integers $n$.
I believe there might be a way to approach this using Euler's Theorem where one proves
$a^k\equiv 1 \pmod k$
has no solution for infinitely many even numbers.
AI: Note: This is an answer to your "actual" question and not the title question.
Claim: There is no positive integer $n$ such that $\varphi(n)=2p$, where $p$ is a prime and $2p+1$ is composite.
Proof
Suppose there is an $n$ such that $\varphi(n)=2p$. Let $q$ be a prime such that $q | n$, then by definition $q-1 | \varphi(n)$. This implies $q-1 | 2p$. With $p$ being a prime, the only positive divisors of $2p$ are $\{1,2,p,2p\}$. Thus $q-1 \in \{1,2,p,2p\}$.
Observe that $q-1 \neq p,2p$, because if it were then $q=p+1$ or $q=2p+1$ respectively. In the first case, $q=p+1$ can be prime only for $p=2$ (for odd $p$, $p+1$ is an even number greater than $2$), but $2\cdot 2+1=5$ is not composite, contradicting the hypothesis; in the second case, $q=2p+1$ is composite by hypothesis, so it cannot be prime. Thus the only possibilities left are $q=2$ or $q=3$. This means should such an $n$ exist, it must be of the form $n=2^a \cdot 3^b$, where $a,b \geq 0$. But then
$$\varphi(n)=
\begin{cases}
2^{a} \cdot 3^{b-1} & \text{ if } a,b \geq 1\\
2^{a-1} & \text{ if } a \geq 1, b=0\\
2 \cdot 3^{b-1} & \text{ if } a=0, b \geq 1\\
1 & \text{ if } \text{otherwise}
\end{cases}
$$
So for $\varphi(n)=2p$, we will have that $p \in \{2,3\}$. But this cannot happen as $2p+1$ is not composite in both scenarios. Thus such an $n$ cannot exist.
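As a sanity check, one can search for preimages of $2p$ for a sample prime $p$ with $2p+1$ composite; here is a small Python/SymPy sketch (using the standard bound $\varphi(n)\ge\sqrt{n/2}$, so searching $n \le 2k^2$ is exhaustive):
```python
from sympy import totient, isprime

# Sample check: p = 47 is prime and 2p + 1 = 95 = 5 * 19 is composite,
# so k = 2p = 94 should be a nontotient. Since phi(n) >= sqrt(n/2),
# phi(n) = k forces n <= 2k^2, so this search covers all candidates.
p = 47
assert isprime(p) and not isprime(2 * p + 1)
k = 2 * p
print(any(totient(n) == k for n in range(1, 2 * k * k + 1)))  # False
```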
|
H: Relation between generator polynomial and codeword in a cyclic code
I'm trying to solve the following exercise, but I can't figure out how to use one of the hypotheses.
Let $F$ be a finite field and $a(x)$ a polynomial of degree $n$ over $F$. Let $C$ be the smallest cyclic code of length $n$ over $F$ having $a(x)$ as a codeword, and let $g(x)$ be its generator polynomial. Show that $g(x)=\gcd(a(x),x^n-1)$.
By definition, I know that $g\mid x^n - 1$ and $g$ has degree $n-k$. Also, since $a(x)$ is a codeword, $a(x)=f(x) g(x)$ for some $f(x)$, and hence $g\mid a(x)$.
Now I'm supposed to use the fact that $C$ is the smallest cyclic code that contains $a$ and that $a$ has degree $n$, but I really can't figure out how to use it properly.
AI: (By the way, I think the original problem is misstated. I think they mean "the degree of $a(x)$ is less than $n$.")
This is all just the picture that cyclic codes of length $n$ correspond to ideals in $F[x]/(x^n-1)$, and the fact that $F[x]$ is a principal ideal domain.
The interpretation of the GCD in a PID is exactly: $g=\gcd(a,b)$ if and only if $(g)$ is the smallest ideal containing both $(a)$ and $(b)$.
And so you have it:
$(g(x))$ is the smallest ideal containing $(a(x))$ and $(x^n-1)$,
but this is also just saying
$(g(x))/(x^n-1)=(a(x), x^n-1)/(x^n-1)$.
So $(g(x))/(x^n-1)$ is the smallest cyclic code of length $n$ containing the word corresponding to $a(x)$.
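For intuition, here is a small computational sketch in Python/SymPy over $GF(2)$ with $n=7$ (the particular choice of $a(x)$ is just an example):
```python
from sympy import symbols, Poly, gcd

x = symbols('x')
n = 7
# a(x) = (x + 1)(x^3 + x + 1) over GF(2); both factors divide x^7 - 1
a = Poly(x**4 + x**3 + x**2 + 1, x, modulus=2)
g = gcd(a, Poly(x**n - 1, x, modulus=2))
print(g)  # x^4 + x^3 + x^2 + 1: a(x) itself, since a(x) | x^7 - 1 here
```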
|
H: Interesting Partition Questions
There is a good question here.
My question is;
"x is a positive integer and
$\lfloor x\rfloor$ denote the largest integer smaller than or equal to $x$. Prove that $\lfloor n / 3\rfloor+1$ is the number of partitions of $n$ into distinct parts where each part is either a power of two or three times a power of two."
There is a Theorem related with this question.
Theorem:
$
p(n \mid \text {parts in } N)=p(n \mid \text { distinct parts in } M) \quad \text { for } n \geq 1
$
where $N$ is any set of integers such that no element of $N$ is a power of two times another element of $N$, and $M$ is the set containing all elements of $N$ together with all their multiples by powers of two.
Can anyone help? thanks.
AI: Let’s use a generating function.
If $p(n)$ is the number of partitions of $n$ into numbers of the form $2^k$ or $3\cdot 2^k$, then we have the following generating function:
$$\sum_{n=0}^\infty p(n)x^n = \prod_{k=0}^\infty (1+x^{2^k})(1+x^{3\cdot 2^k})$$
Recall the following identity, which follows from the fact that every nonnegative integer has a unique binary representation:
$$\prod_{k=0}^\infty (1+x^{2^k})=1+x+x^2+...=\frac{1}{1-x}$$
From this, it follows that our generating function is given by
$$\sum_{n=0}^\infty p(n)x^n=\frac{1}{(1-x)(1-x^3)}$$
On the other hand, we have that
$$\begin{align}
\sum_{n=0}^\infty (\lfloor n/3\rfloor +1)x^n
&= 1+x+x^2+2x^3+2x^4+2x^5+3x^6+... \\
&= (1+x+x^2)(1+2x^3+3x^6+4x^9+...) \\
&= \frac{1+x+x^2}{(1-x^3)^2} \\
&= \frac{1}{(1-x)(1-x^3)}
\end{align}$$
Well, whaddaya know?! The two generating functions are equal to each other! Thus, we have the desired result:
$$p(n)=\lfloor n/3\rfloor +1$$
QED! Thanks for the fun problem!
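For the skeptical, here is a brute-force check of the identity $p(n)=\lfloor n/3\rfloor+1$ (a small Python sketch using a 0/1-knapsack count of partitions into distinct allowed parts):
```python
def count_partitions(n):
    # distinct parts drawn from {2^k} union {3 * 2^k}
    parts = sorted({p for k in range(n.bit_length() + 2)
                    for p in (2**k, 3 * 2**k) if p <= n})
    ways = [1] + [0] * n
    for p in parts:                      # each part used at most once
        for s in range(n, p - 1, -1):
            ways[s] += ways[s - p]
    return ways[n]

print(all(count_partitions(n) == n // 3 + 1 for n in range(1, 200)))  # True
```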
|
H: How to find count of numbers in a range that satisfies both X mod N = A and X mod M = B?
For example how many numbers X are there from [0,100] that satisfies both X mod 8 = 2 and X mod 5 = 1.
My first approach was to find X such that X mod LCM(A,B) = (X mod A) + (X mod B), because if a number is divisible by LCM(A,B), it is a multiple of A as well as of B; the assumption that X mod LCM(A,B) equals the sum of the individual remainders X mod A and X mod B was made out of thin air. For example, to find X's where X mod 3 = 2 and X mod 2 = 0, it works. But for X mod 8 = 2 and X mod 5 = 1, LCM(8,5) = 40, and although 26 mod 8 = 2 and 26 mod 5 = 1, with this approach 26 will never be an answer. So obviously it's not correct.
AI: Note that if you find one solution for $x \equiv 2 \pmod{8}$ and $x \equiv 1 \pmod{5}$, the other solutions will cycle every $40$. This is because the LCM of the two moduli is $40$, and therefore every solution differs from yours by a multiple of $40$.
You can find that $26$ is one solution. From there, all other solutions are in the form $26+40n$, where $n$ is an integer. Obviously in the domain from $0$ to $100$, there are two solutions, namely $26$ and $66$.
Now, take the example that you don't get a trivial solution. You have two congruences: $$x \equiv 2 \pmod{8}$$ $$x \equiv 1 \pmod{5}$$
Let $x = 8k+2$, by the first equation. Plugging it into the second equation gets us $8k+2 \equiv 1 \pmod{5}$, or $8k \equiv -1 \pmod{5}$, or $$8k \equiv 4 \pmod{5}.$$ Dividing both sides by $4$ gets us $2k \equiv 1 \pmod{5}$. From here, you still need to find a solution but luckily it's only one modulo, not two.
Note that $k = 3$ works. Substituting that into $x = 8k + 2$ gets us $x = 26$, and it cycles every $40$ for a final answer of $x = 26 + 40n$.
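A brute-force check of the final count over $[0,100]$ (a small Python sketch):
```python
solutions = [x for x in range(0, 101) if x % 8 == 2 and x % 5 == 1]
print(len(solutions), solutions)  # 2 [26, 66], i.e. x = 26 + 40n
```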
-FruDe
|
H: Centering charts about a point in a Euclidean space.
Let $M$ be a locally Euclidean space. Let $p\in M$. Then there exists some chart $(V,\psi)$ such that $\psi(p)=0$.
Now, I first thought of breaking it into cases, but I didn't like that approach, so I was able to think of another way:
By definition of $M$, $p$ is in the coordinate domain of some chart $(U,\phi)$. Define another map $g:U\rightarrow \mathbb{R}^n$ by $g(x)=\phi(x)-\phi(p)$ for all $x$. But the issue is that I'm not completely sure why $g(U)$ is open in $\mathbb{R}^n$.
AI: By definition, $g(U) = \{\phi(x) - \phi(p) \ \vert \ x \in U\} = \phi(U) - \phi(p)$. Since $\phi(U)$ is open, so is $g(U)$ (because translations trivially preserve openness).
|
H: How to choose an infinite number of different values from an infinite set of infinite sets.
Let $ \aleph_{\alpha} $ be a cardinal and assume that $ \left\{ A_{\beta}:\beta<\aleph_{\alpha}\right\} $ is a set of sets, such that $ |A_{\beta}|=\aleph_{\alpha} $ for any $ \beta<\aleph_{\alpha} $.
Prove that exists set of sets $ \left\{ D_{\beta}:\beta<\aleph_{\alpha}\right\} $ such that :
$ D_{\beta}\subseteq A_{\beta} $
$ |D_{\beta}|=\aleph_{\alpha} $
if $ \beta\neq\gamma $ then it follows that $ D_{\beta}\cap D_{\gamma}=\emptyset $.
So, actually what I need to do is to choose $ \aleph_{\alpha} $ different values from each $ A_{\beta} $.
It's a bit complicated for me. If the task were to choose just one different value from each set, then I would know how to solve it. But here I have to choose $ \aleph_{\alpha} $ values and make sure that they are all different. So any ideas would be highly appreciated.
AI: Hint: You need to pick a total of $\aleph_\alpha\cdot\aleph_\alpha=\aleph_\alpha$ elements to form all the sets $D_\beta$. Pick them one by one in a transfinite recursion of length $\omega_\alpha$.
More details are hidden below.
Let $f=(f_0,f_1):\omega_\alpha\to\omega_\alpha\times\omega_\alpha$ be a bijection. The idea is then we do a recursion of length $\omega_\alpha$ such that in the $\beta$th step, we pick the $f_1(\beta)$th element of $D_{f_0(\beta)}$. More precisely, recursively define a sequence $(x_\beta)_{\beta<\omega_\alpha}$ such that each $x_\beta$ is an element of $A_{f_0(\beta)}$ and is different from $x_\gamma$ for all $\gamma<\beta$. We can do this because $|A_{f_0(\beta)}|=\aleph_\alpha$ and there are only $|\beta|<\aleph_\alpha$ such $x_\gamma$. Now take $D_\beta=\{x_\gamma:f_0(\gamma)=\beta\}$.
|
H: Holomorphic function in complex analysis
For a holomorphic function $f$ we know that:
$f(x) \in R$, for every real number $x$
$f(1+i)=-1-i$
Determine $f(1-i).$
I'm really not seeing how this exercise can be solved. My best guess is that the answer would be $f(1-i)=-1+i$, but only because the function sends real numbers to real numbers and $f(1+i)=-(1+i)$; the map $f(z)=-z$ satisfies both of these conditions.
But how do I prove this correctly? Is this connected in any way to the identity theorem?
AI: Let $g(z)=\overline{f\left(\overline z\right)}$. Then $g$ is holomorphic too and, if $x\in\Bbb R$, $g(x)=f(x)$. So, by the identity theorem, $g=f$. But then$$f(1-i)=f\left(\overline{1+i}\right)=\overline{f(1+i)}=\overline{-1-i}=-1+i.$$
|
H: Tangent Hyperplane $H$ to $X$ at $p \in X$ and hyperplane divisor $\operatorname{div}(H)$.
According to Rick Miranda (Algebraic Curves and Riemann Surfaces) we have the following Lemmas:
Lemma 3.7 (page 219): Suppose that $X \subset \mathbb{P}^n$ is a nondegenerate smooth curve (with $n \ge 2$). Then $X$ has only finitely many flex points.
Lemma 3.8 (page 220): Suppose that $X \subset \mathbb{P}^n$ is a nondegenerate smooth curve (with $n \ge 2$). Then there are only finitely many pairs of distinct points $p$ and $q$ with the same tangent line.
The above lemmas combine to give the following corollary:
Corollary 3.9 (page 221): Suppose that $X \subset \mathbb{P}^n$ is a nondegenerate smooth curve of degree $d$ (with $n \ge 2$).
a) The general hyperplane $H$ in $\mathbb{P}^n$ is such that its divisor $\operatorname{div}(H)$ consists of $d$ distinct points $\{p_i\}$, each having
$\operatorname{div}(H)(p_i)=1$ (i.e., the general hyperplane is transverse to $X$).
b) For all but finitely many points $p$ of $X$, the general tangent hyperplane
$H$ to $X$ at $p$ is such that
$\operatorname{div}(H) = 2\cdot p + q_3 + \cdots + q_d$ with all $q_i$ distinct
and unequal to $p$ (i.e., $H$ is neither a flexed tangent nor a bitangent
hyperplane).
My question is:
Affirmation: From these results, it is correct to state that for a point $p$ in $X$ and a tangent hyperplane $H$ to $X$ at $p$, we have the following possibilities for the hyperplane divisor $\operatorname{div}(H)$:
$\operatorname{div}(H)=2\cdot p + q_3 + \cdots + q_d$ with all $q_i$ distinct
and unequal to $p$; in case $H$ is general tangent;
$\operatorname{div}(H)=k\cdot p + q_{k+1} + \cdots + q_d$ with all $q_i$ distinct and unequal to $p$; in case $p \in X$ is a flex point, i.e., $k \ge 3$;
$\operatorname{div}(H)=k_1\cdot p_1+ k_2\cdot p_2 + q_{k_1 + k_2 +1} + \cdots + q_d$ with all $q_i$ distinct and unequal to $p_1$ and $p_2$; in case $H$ is bitangent to $X$.
Is this Affirmation correct? Does it follow from Lemma 3.7, Lemma 3.8 and Corollary 3.9?
AI: No, this is not correct. Here is a counterexample: $$x^6+y^6+z^6=10(x^3y^3+y^3z^3+z^3x^3).$$
This is smooth, and it has 72 tritangents (lines tangent to the curve at 3 points).
|
H: Let $T$ be the linear operator on $M_{n}(\textbf{R})$ defined by $T(A) = A^{t}$. Find a basis $\mathcal{B}$ s.t. $[T]_{\mathcal{B}}$ is diagonal.
Let $T$ be the linear operator on $M_{n\times n}(\textbf{R})$ defined by $T(A) = A^{t}$.
(a) Show that $\pm 1$ are the only eigenvalues of $T$.
(b) Describe the eigenvectors corresponding to each eigenvalue of $T$.
(c) Find an ordered basis $\mathcal{B}$ for $M_{2\times 2}(\textbf{R})$ such that $[T]_{\mathcal{B}}$ is a diagonal matrix.
(d) Find an ordered basis $\mathcal{B}$ for $M_{n\times n}(\textbf{R})$ such that $[T]_{\mathcal{B}}$ is a diagonal matrix for $n > 2$.
MY (NEW) ATTEMPT
(a) Let $A\in M_{n\times n}(\textbf{R})$ s.t. $A \neq 0$. Then $A$ is an eigenvector of $T$ iff $T(A) = \lambda A$, that is to say, $A^{t} = \lambda A$.
Based on it, we conclude that $A = \lambda A^{t}$. Finally, one has that
\begin{align*}
A^{t} = \lambda(\lambda A^{t}) \Rightarrow A^{t} = \lambda^{2}A^{t} \Rightarrow \det(A^{t}) = \lambda^{2n}\det(A^{t}) \Rightarrow \lambda^{2n} = 1 \Rightarrow \lambda = \pm 1
\end{align*}
But I still do not know how to justify the implication where we consider $\det(A^{t}) \neq 0$
Any help in this sense is appreciated.
(b) For $\lambda = 1$, we have that $T(A) = A^{t} = A$.
Thus the eigenvectors corresponding to $\lambda = 1$ belong to the subspace of symmetric $n\times n$ matrices.
For $\lambda = -1$, we have that $T(A) = A^{t} = -A$.
Thus the eigenvectors corresponding to $\lambda = -1$ belong to the subspace of skew-symmetric $n\times n$ matrices.
(c) Let us determine the eigenspace associated to each eigenvalue. We shall start with $\lambda =1$:
\begin{align*}
T(A) = 1\cdot A \Rightarrow A^{t} = A \Rightarrow
\begin{bmatrix}
a & c\\
b & d
\end{bmatrix} =
\begin{bmatrix}
a & b\\
c & d
\end{bmatrix} \Rightarrow b = c
\end{align*}
Thus the eigenspace associated to the eigenvalue $\lambda = 1$ is spanned by
\begin{align*}
E_{1} = \left\{\begin{bmatrix}
1 & 0\\
0 & 0\\
\end{bmatrix},
\begin{bmatrix}
0 & 1\\
1 & 0\\
\end{bmatrix},
\begin{bmatrix}
0 & 0\\
0 & 1\\
\end{bmatrix}\right\}
\end{align*}
We may now proceed and determine the eigenspace associated to $\lambda = -1$:
\begin{align*}
T(A) = -1\cdot A \Rightarrow A^{t} = -A \Rightarrow \begin{bmatrix}
a & c\\
b & d
\end{bmatrix} =
\begin{bmatrix}
-a & -b\\
-c & -d
\end{bmatrix} \Rightarrow b = -c,\,a = 0,\,d = 0.
\end{align*}
Thus the eigenspace associated to the eigenvalue $\lambda = -1$ is spanned by
\begin{align*}
E_{-1} = \left\{\begin{bmatrix}
0 & 1\\
-1 & 0\\
\end{bmatrix}\right\}
\end{align*}
Since $E_{1}\cup E_{-1}$ is LI and it spans $M_{2\times 2}(\textbf{R})$, we conclude that $\mathcal{B} = E_{1}\cup E_{-1}$ is a basis indeed.
(d) The same reasoning applies to this case. Precisely speaking, the corresponding basis consists of $\displaystyle\dim\text{span}\{E_{1}\} = \frac{n(n+1)}{2}$ symmetric matrices and $\displaystyle\dim\text{span}\{E_{-1}\} = \frac{n(n-1)}{2}$ skew-symmetric matrices.
COMMENT
This question has already been asked here, but I'd like to know if my new approach is fine. Any comments on the wording of my solution would be appreciated.
AI: From $A^t=\lambda A$ you can deduce $\lambda=1$ provided $A$ has a nonzero diagonal entry. But there can be eigenvectors that have zero along the diagonal and you would need to consider other cases.
There's a simpler way: every matrix $A$ can be uniquely written as the sum of a symmetric and an antisymmetric matrix: $A=A_++A_-$, where
$$
A_+=\frac{1}{2}(A+A^t),\qquad A_-=\frac{1}{2}(A-A^t)
$$
Then $A^t=A_+-A_-$ and from $A^t=\lambda A$ we get
$$
A_+ - A_-=\lambda A_+ + \lambda A_-
$$
By uniqueness, we obtain that $A_+=\lambda A_+$ and $-A_-=\lambda A_-$. Since $A\ne0$ (being an eigenvector), we see that either $\lambda=1$ or $\lambda=-1$ and, moreover, that an eigenvector is either a symmetric matrix (for the eigenvalue $1$) or an antisymmetric matrix (for the eigenvalue $-1$). Conversely, any (nonzero) symmetric matrix is an eigenvector for $1$ and any (nonzero) antisymmetric matrix is an eigenvector for $-1$.
Since the sum of the eigenspaces is $M_n(\mathbf{R})$, the endomorphism $T$ is diagonalizable and a basis of eigenvectors can be obtained by a basis of the symmetric matrices, with dimension $n(n+1)/2$, and of the antisymmetric matrices, with dimension $n(n-1)/2$. In the case $n=2$ a basis is
$$
\biggl\{
\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix},
\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix},
\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix},
\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}
\biggr\}
$$
About your attempt: $\lambda^{2n}=1$ doesn't imply by itself that $\lambda=\pm1$. You need to exclude complex eigenvalues and the fact you're working with real matrices is not relevant: the matrix
$$
\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}
$$
has real entries, but no real eigenvalue. On the other hand, for its eigenvalues $\lambda$ it is true that $\lambda^4=1$. Of course it's a different situation than yours, but it shows you have to be careful when jumping to conclusions.
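As a numerical illustration, here is a small NumPy sketch (for $n=2$) that builds the matrix of $T$ acting on $\operatorname{vec}(A)$ and confirms the spectrum $\{1,1,1,-1\}$:
```python
import numpy as np

n = 2
# Matrix of T(A) = A^t acting on vec(A) (row-major), the commutation matrix
K = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        K[i * n + j, j * n + i] = 1.0

eigvals = np.linalg.eigvalsh(K)  # K is symmetric, so eigenvalues are real
print(np.sort(eigvals))  # [-1. 1. 1. 1.]: one antisymmetric, three symmetric directions
```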
|
H: Finding a diagonalizable endomorphism $f : \mathbb{R}^4 \to \mathbb{R}^4$ such that $\text{ker}(f) = \text{im}(f)$
I've been struggling all day with this question. I tried to come up with a proof which shows that such an endomorphism does NOT exist, but I'm not sure it is correct.
Let $ B = (b_1, b_2, b_3, b_4)$ be a basis of $ \mathbb{R}^4$;
Since $ \text{dim}(\mathbb{R}^4) = \text{dim}(\text{ker}f) + \text{dim}(\text{im}f)$ , then $\text{dim}(\text{ker}f) = \text{dim}(\text{im}f) = 2$;
We begin building an endomorphism such that $f(b_1) = 0_{\mathbb{R}^4}$ and $f(b_2) = 0_{\mathbb{R}^4}$ (I don't think it matters which vectors we choose to span the kernel). Consequently, $\text{mg}(0) = 2$;
If $\text{dim}(\text{im}f) = 2$ and $\text{im}(f) = \text{ker}(f)$, then it must be that $f(b_3) = b_1$ and $f(b_4) = b_2$;
By doing so, $b_1$ and $b_2$ are the only eigenvectors of $f$, and thus we can't find a basis of $\mathbb{R}^4$ made of eigenvectors.
Please bear in mind that my knowledge of linear algebra stops at diagonalization, and that this is my very first attempt at making a proof.
If I'm wrong and building such an endomorphism is actually possible, then what am I missing?
AI: Here is a very quick proof: show that because $\ker (f) = \operatorname{im}(f)$, it must hold that $f \neq 0$ but $f^2 = 0$. However, the only diagonalizable endomorphism $f$ for which $f^2 = 0$ is the zero endomorphism. So, $f$ cannot be diagonalizable.
Regarding the points that you have written: first of all, you have not proved that $b_1,b_2$ are the only eigenvectors of $f$. Second, showing that the example you tried to make failed to be diagonalizable while satisfying the condition does not prove that there are no such endomorphisms.
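To make the quick proof concrete, here is one hypothetical example matrix checked in NumPy; it satisfies $\ker(f)=\operatorname{im}(f)$ yet cannot be diagonalizable:
```python
import numpy as np

# A concrete endomorphism of R^4 with ker(f) = im(f): f(e3) = e1, f(e4) = e2
N = np.array([[0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.],
              [0., 0., 0., 0.]])

print(np.allclose(N @ N, 0))     # True: f^2 = 0
print(np.linalg.matrix_rank(N))  # 2, so dim im = dim ker = 2
print(np.linalg.eigvals(N))      # all eigenvalues 0, yet N != 0,
                                 # so N cannot be diagonalizable
```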
|
H: Is it true that $\sqrt{ab}\le \frac{a-b}{\ln a - \ln b}$ for any $a\neq b>0$?
Is it true that $\sqrt{ab}\le \frac{a-b}{\ln a - \ln b}$ for any $a\neq b>0$?
If so, any thoughts on how to prove this?
AI: The assertion is true. When $a>b>0$, it follows by taking $x:=\frac ab$ in the inequality
$$ \log x\le \sqrt x - \frac1{\sqrt x}\qquad\text{for all $x\ge 1$}
$$
For $b>a$, just flip the roles of $a$ and $b$.
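A quick numerical spot check of the inequality over random pairs (a Python sketch; the ranges and tolerance are arbitrary):
```python
import math, random

random.seed(0)
ok = True
for _ in range(10**5):
    a, b = sorted((random.uniform(1e-3, 1e3), random.uniform(1e-3, 1e3)),
                  reverse=True)
    ok &= math.sqrt(a * b) <= (a - b) / (math.log(a) - math.log(b)) + 1e-9
print(ok)  # True
```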
|
H: Connection between cross product and determinant
When I calculate a cross product of two vectors in Cartesian coordinates, I calculate something that seems like the determinant of a 2x2 matrix.
Is there any connection between the determinant and the cross product?
AI: If $\vec{i},\vec{j},\vec{k}$ are the three basic vectors of $\mathbb{R}^3$ then the cross product of vectors $(a,b,c), (p,q,r)$ is the determinant of the matrix
$$\left(\begin{array}{lll}\vec{i}&\vec{j}&\vec{k}\\ a &b & c\\ p&q &r\end{array}\right)$$ by definition. The coordinates of that vector are obtained by expanding this determinant along the first row.
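A numerical spot check of this expansion against NumPy's built-in cross product (the sample vectors are arbitrary):
```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])   # (a, b, c)
v = np.array([4.0, 5.0, 6.0])   # (p, q, r)

# Expansion along the first row (i, j, k) of the 3x3 determinant above
w = np.array([u[1] * v[2] - u[2] * v[1],
              -(u[0] * v[2] - u[2] * v[0]),
              u[0] * v[1] - u[1] * v[0]])
print(np.allclose(w, np.cross(u, v)))  # True
```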
|
H: Convergence of $\sum_{n=1}^\infty \frac{1}{n}\frac{x^n}{1-x^n}$,$\sum_{n=1}^\infty \frac{(-1)^n}{x^2-n^2}$
I am having problems to find the values of x for which the following series converge. I understand the use of root test or ratio test to find the radius of convergence when the series is a power series, but in these cases I don't see how to do it, or if I have to use the definition.
(a)$\sum_{n=1}^\infty \frac{1}{n}\frac{x^n}{1-x^n}$
(b)$\sum_{n=1}^\infty \frac{(-1)^n}{x^2-n^2}$
Thanks in advance for your help, and sorry if one of these series has already been solved here. I tried to find it, but I had no luck.
I was just thinking that usually we use those tests for power series or series with factorial terms, since in such cases terms usually cancel and we have something nice to work with. In this exercise the ratio test gives us
$\frac{n}{n+1}\frac{x(1-x^n)}{1-x^{n+1}}$
which I don't see how it helps (in the comment section jimjim showed me that this means the ratio test fails, since the limit goes to $1$). Something similar happens when I use the root test, and with both tests in the second case. I end up with something that is not easier to work with. Or maybe I am missing something.
AI: Hints:
First series: For $|x| <1$ compare with $\sum x^{n}$ to prove convergence. For $|x| >1$ use the fact that $|\frac {x^{n}} {1-x^{n}}| \geq \frac {|x|^{n}} {1+|x|^{n}} \geq \frac 12$ and compare with $ \sum \frac 1 n$ to prove divergence. The series is not defined at $x=\pm 1$.
Second series: Not defined when $x$ is an integer. For other $x$ compare with $\sum \frac 1 {n^{2}}$ to prove convergence.
|
H: Abbott's proof that any rearrangement of an absolutely convergent series converges to the same limit as the original
Here is his proof in full:
Assume $\sum\limits_{k = 1}^{\infty} a_k$ converges absolutely to $A$, and let $\sum\limits_{k = 1}^{\infty} b_k$ be a rearrangement of $\sum\limits_{k = 1}^{\infty} a_k$. Let's use $s_n$ to denote the partial sums of the original series and $t_m$ for the partial sums of the rearranged series. Thus, we want to show that $(t_m) \to A$.
Let $\epsilon > 0$. By hypothesis, $(s_n) \to A$, so choose $N_1$ such that $|s_n - A| < \frac{\epsilon}{2}$ for all $n \geq N_1$. Because the convergence is absolute, we can choose $N_2$ so that $\sum\limits_{k = m + 1}^{n} |a_k| < \frac{\epsilon}{2}$ for all $n > m \geq N_2$. Now take $N = \max \{N_1, N_2\}$. We know that the finite set of terms $\{a_1, \ldots, a_N\}$ must all appear in the rearranged series, and we want to move far enough out in the series $\sum\limits_{n = 1}^{\infty} b_n$ so that we have included all of these terms. Thus, choose $M = \max \{f(k) : 1 \leq k \leq N \}$.
It should now be evident that if $m \geq M$, then $(t_m - s_N)$ consists of a finite set of terms, the absolute values of which appear in the tail $\sum\limits_{k = N + 1}^{\infty} |a_k|$. Our choice of $N_2$ earlier then guarantees $|t_m - s_N| < \frac{\epsilon}{2}$, and so
$|t_m - A| = |t_m - s_N + s_N - A| \leq |t_m - s_N| + |s_N - A| < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$ whenever $m \geq M$
I don't understand why the absolute values of $(t_m - s_N)$ must appear in the tail of this sequence.
AI: What should be written is "$t_m - s_N$ consists of a finite sum of terms, the absolute values of which appear in $\sum_{k=N+1}^\infty|a_k|$." This is simply because $t_m=s_N + \text{other terms $a_i$ where $i\notin\{1,\dots,N\}$.}$ This was the point of picking a sufficiently large partial sum of the series $\sum b_k$, so that we have included the summands in $s_N$.
|
H: Is this proof that $0x = 0$ correct?
I was wondering if this proof I wrote for $0x = 0$ (using only the field axioms) is correct. The proof is as follows:
$$0x = (1-1)x = x-x = 0$$
AI: There are a lot of steps you skipped. First, $-1$ is defined as the additive inverse of $1$, so a more rigorous notation is $(-1)$.
$0 x = (1 + (-1))x = x + (-1)x$
The first equality is the axiom of the inverse element, the second the distributive axiom. Now you should prove $(-1) x = -x$: by the definition of the additive inverse, you'll have to prove that $x + (-1)x = 0$. After that, $0x = x + (-1)x = x + (-x) = 0$, where the last equality is the axiom of the inverse element.
|
H: Property of cyclic quadrilaterals
Suppose $ABCD$ is a cyclic quadrilateral and $P$ is the intersection of the lines determined by $AB$ and $CD$. Show that $PA·PB= PD·PC$
Could you help me please? I have no idea how to use the property that the quadrilateral is cyclic. I have reviewed other posts about cyclic quadrilaterals, but honestly I have not managed to understand how to attack this problem. Thank you.
AI: This is a case of the power of a point theorem. You can prove it using similar triangles. By the inscribed angle theorem, $\angle BAC=\angle BDC$ and so $$\angle PAC=\angle PDB.$$ Again by the inscribed angle theorem, $$\angle ABD=\angle ACD.$$ By AA similarity, this establishes that $$\triangle PCA \sim \triangle PBD.$$ By similarity ratios, $$\frac{PA}{PD}=\frac{PC}{PB},$$ which is what we wanted.
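A numerical spot check with random points on a circle (a NumPy sketch; the configuration is arbitrary):
```python
import numpy as np

rng = np.random.default_rng(1)
# Four points on the unit circle, in order, forming a cyclic quadrilateral
t = np.sort(rng.uniform(0, 2 * np.pi, 4))
A, B, C, D = (np.array([np.cos(s), np.sin(s)]) for s in t)

# Intersection P of lines AB and CD: solve A + a(B - A) = C + b(D - C)
M = np.column_stack([B - A, C - D])
a, b = np.linalg.solve(M, C - A)
P = A + a * (B - A)

d = np.linalg.norm
print(d(P - A) * d(P - B), d(P - C) * d(P - D))  # equal up to rounding
```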
|
H: Angle between $n + 1$ equidistant unit vectors in $\mathbb{R}^n$
My question is basically as posed in the title. Suppose we are given $n + 1$ unit vectors in $\mathbb{R}^n$ so that the angle between any pair of them is the same. What is that angle? I have (unfounded) reason to believe that the angle is arccos$(\frac{-1}{n})$, but I don't know how to prove it. For example, we can find three unit vectors $\begin{pmatrix} 1 \\ 0\end{pmatrix}$, $\begin{pmatrix} \frac{-1}{2} \\ \frac{\sqrt{3}}{2}\end{pmatrix}$, $\begin{pmatrix} \frac{-1}{2} \\ \frac{-\sqrt{3}}{2}\end{pmatrix}$, and the angle between any pair of them is arccos$(\frac{-1}{2})$, since the dot product of any pair of them is $\frac{-1}{2}$. Can anyone supply the reasoning for general $\mathbb{R}^n$, or correct me if I'm incorrect?
AI: Since they're spaced symmetrically,
$$u_1+u_2+u_3+\cdots+u_n+u_{n+1}=0.$$
Now take the dot product with $u_1$:
$$1+u_1\cdot u_2+u_1\cdot u_3+\cdots+u_1\cdot u_n+u_1\cdot u_{n+1}=0.$$
Again by symmetry, these last $n$ dot products should be the same:
$$1+u_1\cdot u_2+u_1\cdot u_2+\cdots+u_1\cdot u_2+u_1\cdot u_2=0$$
$$1+nu_1\cdot u_2=0$$
$$u_1\cdot u_2=-1/n.$$
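A numerical check using the standard simplex construction (project the $n+1$ standard basis vectors of $\mathbb{R}^{n+1}$ around their centroid; $n=5$ is an arbitrary choice):
```python
import numpy as np

n = 5
e = np.eye(n + 1)
c = e.mean(axis=0)          # centroid of the n+1 standard basis vectors
u = e - c                   # these n+1 vectors span an n-dimensional hyperplane
u /= np.linalg.norm(u, axis=1, keepdims=True)

G = u @ u.T                 # Gram matrix of pairwise dot products
off_diag = -1.0 / n * (1 - np.eye(n + 1))
print(np.allclose(G - np.eye(n + 1), off_diag))  # True: all off-diagonal = -1/n
```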
|
H: Using negation of Uniqueness Quantifier to show a relation is not a function.
$A = \{1,2\}$
$B = \{2,3\}$
$R \subseteq A \times B$
$R = \{(1,2), (1,3), (2,3)\}$
I want to prove $R$ is not a function. That is, I want to show:
$$
\lnot \forall x(x \in A \to \exists y(y \in B \land (x,y) \in R \land \forall z((z \in B \land (x,z) \in R) \to y = z)))
$$
and using logical equivalences I arrive at:
$$
\exists x(x \in A \land \forall y(y \in B \land (x,y) \in R \to \exists z((z \in B \land (x,z) \in R) \land y \neq z))) \tag{1}
$$
What I do not understand:
If I find an $x \in A$ such that $(x,y), (x,z) \in R$ and $y \neq z$, isn't it enough to conclude $R$ is not a function ?
I think this would be the symbolization:
$$
\exists x(x \in A \land \exists y(y \in B \land (x,y) \in R \land \exists z((z \in B \land (x,z) \in R) \land y \neq z))) \tag{2}
$$
Perhaps, I am reading in an incorrect manner the Negation of Uniqueness Quantifier. But, I cannot see what would be the difference with my last symbolization and also cannot think a case where my last symbolization fails to represent that $R$ is not a function. I will appreciate any insight.
AI: Your last statement assumes that $\operatorname{dom}(R) = A$. Yes, establishing that would prove that $R$ isn't a function. The broader statement with the $\forall y$ quantifier allows for the possibility that $\operatorname{dom}(R) \subsetneq A$: if $x\in A$ but $x \notin \operatorname{dom}(R)$, then that $x$ satisfies the statement trivially, as there are no $y\in B$ for which $(x, y)\in R$.
|
H: Rings with no zero divisors and an additional Hypothesis are commutative?
So, inspired by the question Finite integral domains are commutative?, I was wondering if the following generalization is also true:
Let $A$ be an integral domain (using the terminology of the linked question), such that every element $a \in A$ generates a finite subring. Then, is $A$ commutative? (WLOG, we can suppose that these subrings are proper, because otherwise the answer is clear.)
Now, here is what I tried (with little to no success):
Strengthening the hypothesis by requiring that every proper subring of $A$ is finite, by following the answer to the linked question, $a,b \in A$ commute if the subring generated by these two elements is proper. Not an answer, but at least something.
If we also ask that $A$ has a unit element $1$, then the subring $\widetilde{u}$ generated by $u \in A-\left \{ 0_A \right \}$ is finite and integral, so by https://proofwiki.org/wiki/Finite_Ring_with_No_Proper_Zero_Divisors_is_Field (this link was also mentioned in the question linked above) it is a field, so $u$ is invertible in $\widetilde{u}$, and so it is invertible in $A$. Then, $A$ would be a division ring. Still, not what I am looking for.
Any help would be appreciated.
AI: If the ring generated by $a\ne0 \in A$ is finite, then $a^k=a^{k+d}$ for some $k,d\ge 1$. Since $a(a^{k-1}-a^{k-1+d})=0$ and there are no zero divisors, $a^{k-1}=a^{k-1+d}$; repeating this argument, $a=a^{d+1}$. Then for every $b\ne0 \in A$, $ab-a^{d+1}b=0$, i.e. $a(b-a^db)=0$, hence $a^db=b$. Similarly, $ba^d=b$. So $a^d$ is the identity element $1$. Thus for every $a\ne0\in A$ there exists $d=d(a)$ such that $a^d=1$; in particular $A$ is a division ring in which every element satisfies $a^{d(a)+1}=a$. By Jacobson's theorem, such a ring is commutative, so $A$ is a field.
|
H: What is meant by "dot product between random variables?"
I was having a discussion with a colleague today about correlation coefficients, and I was told that correlation coefficient between 2 random variables $X$ and $Y$ is proportional to the dot product of the two random variables.
I asked him what he means by this, and I was told that you can view random variables as vectors. I don't think I agree with that, but I don't have a sufficient background to really argue my point, but now I want to revisit this.
How can a random variable be viewed as a vector? What is meant by the dot product between 2 random variables -- is this actually formal terminology or something loosely used?
AI: For two joint discrete variables, the expectation of their product is a weighted dot product of their value vectors (all diagonal values are positive making the diagonal matrix positive definite):
$$ \mathbf{E}[XY] = \sum_{i=1}^n p_i x_i y_i = (x_1,...,x_n) \begin{pmatrix}
p_1 & ... & 0\\
\vdots & \ddots & \vdots \\
0 & ...& p_n
\end{pmatrix} (y_1,...,y_n)^T$$
Here, $(X,Y)$ has $n$ possible realizations $(x_i, y_i)$ with probabilities $p_i$, $i=1,...,n$.
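A tiny numerical illustration of this identity (the toy distribution below is made up):
```python
import numpy as np

# Toy joint distribution with n = 3 realizations (arbitrary values)
x = np.array([1.0, -2.0, 0.5])
y = np.array([0.0, 1.0, 4.0])
p = np.array([0.2, 0.5, 0.3])

print(x @ np.diag(p) @ y)  # weighted dot product
print(np.sum(p * x * y))   # E[XY] computed directly -- same number
```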
|
H: If there exists a positive $K$ such that $|f(x)| \leq K \int_a^x |f(t)|dt$ then $f(x) = 0$
Let $f$ be continuous on $[a,b]$, and suppose there exists a positive $K$ such that $|f(x)| \leq K \int_a^x |f(t)|dt$ for all $x\in[a,b]$. Then $f(x) = 0$.
I was trying to prove the statement above by taking the smallest number $c$ such that $f(x) = 0$ for any $x<c$ and examining the integral near $c$. But I got stuck; can somebody give me a hint or an idea?
AI: Let $$F(x)=\int_a^x|f(t)|dt$$
Now,
$$e^{-Kx}(F'(x)-KF(x))\le0\qquad\forall x\in[a,b]\\
\implies \Big(e^{-Kx}F(x)\Big)'\le0$$
Therefore, $e^{-Kx}F(x)$ is a non-increasing function taking the value $0$ at $x=a$. So it is non-positive $\forall x\in[a,b]$; since both $e^{-Kx}$ and $F$ are non-negative, it must be the constant zero function, and hence $f\equiv 0$.
|
H: Finding zeroeth coefficient of a Chebyshev polynomial expansion
Let $v_\theta = (\cos\theta,\sin\theta)$ be a unit vector in the plane. I have a kernel $p(\theta,\theta') = p(v_\theta\cdot v_{\theta'})$ that satisfies
$$\int_0^{2\pi} p(v_\theta\cdot v_{\theta'})\,d\theta' = 1\;\;\;(*)$$
for all $\theta\in [0,2\pi]$. I also have Chebyshev polynomials $T_0,T_1,\dots$ such that $T_k(\cos\theta) = \cos(k\theta)$, normalized such that
$$\{T_0/\sqrt{\pi}\}\cup\{\sqrt{2/\pi}T_k\}_{k=1}^\infty$$
form an orthonormal basis of $L^2(-1,1)$ with weight $1/\sqrt{1-t^2}$.
Now I write the Chebyshev expansion of my kernel:
$$p(t) = \sum_{k=0}^\infty p_kT_k(t), \;\;\;\;\; t\in(-1,1)$$
and I want to show that $p_0 = \frac{1}{2\pi}$.
My progress so far: by orthonormality, we have
\begin{align*}
\int_0^{2\pi} &\frac{1}{\sqrt{\pi}}T_0(v_\theta\cdot v_{\theta'})\frac{\sqrt{2}}{\sqrt{\pi}}p(v_\theta\cdot v_{\theta'})\frac{1}{\sqrt{1 - (v_\theta\cdot v_{\theta'})^2}}\,d\theta'\\
&= \sum_{k=0}^\infty\int_0^{2\pi} T_0(v_\theta\cdot v_{\theta'})\frac{\sqrt{2}}{\sqrt{\pi}}p_kT_k(v_\theta\cdot v_{\theta'})\frac{1}{\sqrt{1-(v_\theta\cdot v_{\theta'})^2}}\,d\theta'\\
&= \sqrt{2}p_0.
\end{align*}
Also, noting that $T_0\equiv 1$, I know
\begin{align*}
\int_0^{2\pi} &\frac{1}{\sqrt{\pi}}T_0(v_\theta\cdot v_{\theta'})\frac{\sqrt{2}}{\sqrt{\pi}}p(v_\theta\cdot v_{\theta'})\frac{1}{\sqrt{1 - (v_\theta\cdot v_{\theta'})^2}}\,d\theta'\\
&=\int_0^{2\pi} \frac{\sqrt{2}}{\pi} p(v_\theta\cdot v_{\theta'})\frac{1}{\sqrt{1 - (v_\theta\cdot v_{\theta'})^2}}\,d\theta'.
\end{align*}
Then it would suffice to show
$$\int_0^{2\pi} p(v_\theta\cdot v_{\theta'})\frac{1}{\sqrt{1 - (v_\theta\cdot v_{\theta'})^2}}\,d\theta' = \frac{1}{2}.$$
This is where I'm stuck: I'm not sure how to use (*) in the expression above. Indeed, since the expression above is constant in $v_\theta$ as we showed earlier, we are free to pick a particular value, say, $v_\theta = (1,0)$, to make this
$$\int_0^{2\pi} \frac{p(\cos\theta')}{|\sin\theta'|}\,d\theta',$$
but still I am not sure what to do with this.
AI: As it turns out, using orthonormality was a red herring, and the solution is actually quite simple. Choosing $v_\theta = (1,0)$, we compute
\begin{align*}
1 &= \int_0^{2\pi} p(v_\theta\cdot v_{\theta'})\,d\theta'\\
&= \sum_{k=0}^\infty \int_0^{2\pi} p_kT_k(v_\theta\cdot v_{\theta'})\,d\theta'\\
&= \sum_{k=0}^\infty \int_0^{2\pi} p_kT_k(\cos\theta')\,d\theta'\\
&= \sum_{k=0}^\infty \int_0^{2\pi} p_k\cos(k\theta')\,d\theta'\\
&= 2\pi p_0 + \sum_{k=1}^\infty \underbrace{\int_0^{2\pi} p_k\cos(k\theta')\,d\theta}_{= 0},
\end{align*}
and so $p_0 = 1/2\pi$.
|
H: How to prove that the second condition for Leibniz test is met for the series?
So, here is the series: $\sum^{\infty}_{k=2}\frac{k}{(k\ln x +x^2)^2}$. I need to show that $\frac{k}{(k\ln x +x^2)^2} - \frac{k+1}{((k+1)\ln x +x^2)^2} \ge 0 \ \ \ \forall x\in (0, \infty), \forall k \ge 2 $
How to do that?
AI: HINT:
As written, the series diverges for all $x$. We shall assume that the series of interest is $\sum_{k=2}^\infty \frac{(-1)^k k}{(k\log(x)+x^2)^2}$.
Then, we can write the positive part of the general terms of the alternating series as
$$\begin{align}
\frac{k}{(k\log(x)+x^2)^2}&=\frac1{\log(x)}\left(\frac{k\log(x)+x^2-x^2}{(k\log(x)+x^2)^2}\right)\\\\
&=\frac1{\log(x)}\left(\frac1{k\log(x)+x^2}-\frac{x^2}{(k\log(x)+x^2)^2}\right)
\end{align}$$
Note that we have split the original series into the difference of two series, both of which have positive parts that are monotonically decreasing by inspection.
Can you finish now?
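One caveat worth checking numerically: for a sample value such as $x=2$, the original positive terms $\frac{k}{(k\log x+x^2)^2}$ are not decreasing from the start, while the two split pieces are; a small Python sketch:
```python
import math

x = 2.0                    # a sample point with log(x) > 0 (arbitrary choice)
c, d = math.log(x), x * x

raw = [k / (k * c + d)**2 for k in range(2, 40)]     # original positive terms
s1  = [1 / (k * c + d) for k in range(2, 40)]        # first split piece
s2  = [d / (k * c + d)**2 for k in range(2, 40)]     # second split piece

dec = lambda a: all(a[i] >= a[i + 1] for i in range(len(a) - 1))
print(dec(raw), dec(s1), dec(s2))  # False True True: only the pieces are monotone
```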
|
H: $\frac{z}{e^z - 1}$ power series at $z = 0$
I have a the following question:
Show that $f(z) = \frac{z}{e^z - 1}$ at has a removable singularity at $z = 0$ and that $f$ has power series expansion $\sum_{n=0}^\infty c_nz^n$. Calculate $c_0$ and $c_1$ and show that $c_{2n+1} = 0$ for $n \geq 1$. Find the radius of convergence.
It is pretty clear that this function has a removable singularity at $0$ and can be analytically continued to all of $\mathbb{C}$ by defining $f(0) = 1$. Thus, $c_0 = 1$. I also calculated $f'(0)$ by dividing power series and got $-1/2$.
Now, we have
$$f(z) - 1 - \frac{1}{2}z = \sum_{n=2}^\infty c_nz^n $$
My plan was to look at how $f(x) - 1 - (1/2) x$ behaves for real $x \neq 0$. Specifically, I was thinking of showing that $f(x) = f(-x)$ which would imply that all of the odd coefficients in the power series would be $0$. I am wondering if this is a valid strategy? If so, this is not true. I have calculated a few values of $f(x) - 1 - (1/2)x$ and it is not true that this function is odd. What am I doing wrong?
AI: HINT:
Let $g(z)$ be given by
$$g(z)=\frac{z}{e^z-1}-\left(1-\frac z2\right)$$
Then note that
$$\begin{align}
g(-z)&=\frac{-z}{e^{-z}-1}-1-\frac z2\\\\
&=\frac{z(e^z-1)+z}{e^z-1}-1-\frac z2
\end{align}$$
Can you wrap this up?
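If you want to confirm the coefficients independently, here is a one-line SymPy check (a side computation, not part of the intended pen-and-paper solution):
```python
from sympy import symbols, exp, series

z = symbols('z')
print(series(z / (exp(z) - 1), z, 0, 8))
# 1 - z/2 + z**2/12 - z**4/720 + z**6/30240 + O(z**8)
# so c0 = 1, c1 = -1/2, and the odd coefficients vanish from z^3 on
```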
|
H: Tossing a fair coin 3 times
If a fair coin is tossed 3 times, what is the probability that it turn
up heads exactly twice?
Without having to list the outcomes like HHH, HHT, HTH, etc. to get to $P=3/8$, I would like to ask if there is a mathematical way to calculate this probability.
Please help, thank you!
AI: Since it is a binomial experiment, the probability can be found quickly using
$$P(k)={n\choose k}(1/2)^n$$
"if a fair die is tossed 3 times, what is the probability that it turn up six exactly twice"
$$P(2)={3 \choose 2}(1/6)^2(5/6)^1.$$
The general formula is
$$P(k)={n\choose k}p^k (1-p)^{n-k},$$
where $p$ is the probability of success.
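In Python, the two computations above are one-liners (a small sketch):
```python
from math import comb

print(comb(3, 2) * (1 / 2) ** 3)                 # 0.375 = 3/8, the coin case
print(comb(3, 2) * (1 / 6) ** 2 * (5 / 6) ** 1)  # the die example
```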
|
H: Is it possible to build a $8×8×9$ block using $32$ bricks of dimensions $2×3×3$?
Is it possible to build a $8×8×9$ block using $32$ bricks of dimensions $2×3×3$?
I tried to show that an $8×8×9$ block can't contain $32$ blocks of dimensions $2×3×3$. For that I tried to colour the $1×1×1$ cubes.
(It would give me something like dominoes on a chessboard with one cell removed, so blacks outnumber whites, but each domino covers the same number of blacks and whites.)
I thought that I would have to colour them in $18$ colours, but that's too many, and impossible to visualise in three-dimensional space.
AI: The large block has an $8\times 8$ face. The bricks meeting that face must tile it with $2\times3$ and $3\times3$ faces. So each small face has area divisible by $3$, yet the total area of that face, $64$, is not divisible by $3$.
So it cannot be done.
|
H: Does real*real*real... = imaginary? $x\cdot x\cdot x\cdot x\cdot x\ ...\ =\ i, x \in \mathbb{R}$
Please be advised that, as is pointed out below, the video is incorrect and this claim:
$$ x=e^{\frac{\pi}{2}} \Rightarrow x^{x^{x^{x^{...}}}} = i$$
is completely false!
I recently watched the video real^real^real^... = imaginary? by blackpenredpen, in which he shows that this is possible:
$$ x=e^{\frac{\pi}{2}} \Rightarrow x^{x^{x^{x^{...}}}} = i$$
This made me wonder if it is possible to find a similar real number for repeated multiplication rather than exponentiation?
$$x\cdot x\cdot x\cdot x\cdot x\ ...\ =\ i, x \in \mathbb{R}$$
My initial thoughts were that repeated multiplication is just exponentiation so maybe we could look at the problem like this:
$$\lim_{n\rightarrow \infty}x^n = i, x \in \mathbb{R} $$
So is this possible? If not it would be nice to see a proof.
AI: The youtube video is wrong.
$x^{x^{x^{x^{...}}}}$ is a so-called power-tower, or hyperpower function, also known as infinite tetration.
It should be noted that the power tower only converges for $x \in [e^{-e}, e^{\frac 1e}]$.
To solve $x^{x^{x^{x^{...}}}} = k$, a non-rigorous "trick" is to write $x^k = k \implies x = k^{\frac 1k}$.
But for $x^{x^{x^{x^{...}}}} = k$ to give the valid solution $x = k^{\frac 1k}$, you must have $k < e$ (based on the radius of convergence I gave above).
So, the solution ($x = \sqrt 2$) is valid for $k=2$, but the solution $x = 3^{\frac 13}$ is not valid for $k = 3$ (as $3 > e$).
Similarly, saying $x^{x^{x^{x^{...}}}} = i \implies x = e^{\frac{\pi}2}$ is simply nonsense. The video is wrong.
An "infinite product" of reals (assuming convergence) has to be real. But it makes little sense to speak of $x\cdot x\cdot x \dots$ because that value is either $0$ for $|x|<1$, $1$ for $x = 1$ and undefined otherwise.
|
H: If $ |f(x)-f(y)| \leq 7|x-y|^{201} $ Then,
Let $f: \mathbb{R} \rightarrow \mathbb{R}$ be a continuous function such that for any two real numbers $x$ and $y$
$$
|f(x)-f(y)| \leq 7|x-y|^{201}
$$
Then,
(A) $f(101)=f(202)+8$
(B) $f(101)=f(201)+1$
(C) $f(101)=f(200)+2$
(D) None of the above.
My approach:-
$$|f(x)-f(y)|\leq 7|x-y|^{201}$$
$$
\begin{array}{l}\dfrac{|f(x)-f(y)|}{|x-y|}\leq 7|x-y|^{200} \\ \lim_{x\to y}\left|\dfrac{f(x)-f(y)}{x-y}\right| \leqslant \lim_{x\to y}7|x-y|^{200}=0 \\ f^{\prime}(y) = 0 \ \text{ for every } y \\ f(x)=C, \quad \text{ and accordingly } f(y)=C\end{array}
So, option D is correct
AI: Just for entertainment,
\begin{eqnarray}
|f(y)-f(x)| &\le& \sum_{k=0}^{n-1} |f( x+{k+1 \over n} (y-x)) - f( x+{k \over n} (y-x)) | \\
&\le& 7 \sum_{k=0}^{n-1} ({|y-x| \over n} )^{201} \\
&=& 7 {|y-x|^{201} \over n^{200}}
\end{eqnarray}
Letting $n \to \infty$ we see that $f(y)=f(x)$.
|
H: Applying chain rule in $f(x)=\sin(x)\cdot x\ln(x)$
Can we apply the chain rule to the function
$f(x)= \sin(x)\cdot x\ln(x)$
What I tried: the chain rule
$$\frac{d}{dx}\bigg(f(g(x))\bigg)=f'(g(x))\cdot g'(x)$$
So $$\frac{d}{dx}\bigg(\sin (x)\cdot x\ln(x)\bigg)=\sin(x)\cdot \frac{d}{dx}\bigg(x\ln(x)\bigg)+x\ln(x)\frac{d}{dx}(\sin x)$$
It seems that we cannot apply the chain rule here. Please confirm whether I am right or not. Thanks
AI: You are correct, this is just applying the product rule twice. What is the derivative of $x\ln(x)$?
|
H: Percent on 100 tries?
If I know that something happens 50% of the time in 1 try,
what is the probability of it happening at least once in 100 tries?
My friend told me that the chances are 50%, but I don't understand how that is possible.
Could you explain with more details please?
AI: Let us consider tossing a fair coin (equivalent to your problem). Now, the probability of getting no heads in 100 tosses is
$$P(\text{No Heads}) = \frac{1}{2^{100}}$$
Now, any other case other than this case has at least one head. Hence, since the total probability of all events occurring is one, we have
$$P(\text{At Least One Head}) = 1 - \frac{1}{2^{100}}$$
This is much much bigger than 50%
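Computed exactly (a tiny Python sketch using rationals):
```python
from fractions import Fraction

p_at_least_once = 1 - Fraction(1, 2) ** 100
print(float(p_at_least_once))  # 1.0 to double precision; exactly 1 - 1/2**100
```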
|
H: Metacyclic p-group
While studying metacyclic $p$-groups, I came across an interesting class of metacyclic groups which can be written as a semi-direct product of two cyclic $p$-groups, of orders $p^m$ and $p^n$ respectively. These kinds of groups are called split metacyclic $p$-groups. I am trying to write their presentation. Can someone help me present them in a nice way?
AI: A split metacyclic group is a split extension of a cyclic group by a cyclic group: a group $G$ for which the short exact sequence$$\displaystyle \{e\}\rightarrow K\xrightarrow{\text{f}} G\xrightarrow{g} H\rightarrow \{e\}$$ splits, i.e. for the epimorphism $ g : G \rightarrow H $ there exists a map $ h : H \rightarrow G $ such that the composition of $g$ and $h$ is the identity on $H$. Using this, $G$ can be written as a semi-direct product of $K$ by $H$.
Any non-abelian split metacyclic-p-group has presentation either
$$ <\ a, b\ : a^{2^m} = b^{2^n} = e, bab^{-1} = a^{-1+2^k}> $$
where $max\{2,m-n\} \leq k \leq m$ for $p=2$ or
$$ <\ a, b\ : a^{p^m} = b^{p^n} = e, bab^{-1} = a^{1+p^k}>$$
where $(m-n) \leq k < m$ with $k \geq 1, $ for all primes $p$.
Reference Article:
Golasiński, Marek; Gonçalves, Daciberg Lima, On automorphisms of split metacyclic groups., Manuscr. Math. 128, No. 2, 251-273 (2009). ZBL1160.20017.
|
H: Polynomials question Part (a) and (b)
(a) In this multi-part problem, we will consider this system of simultaneous equations:
$$\begin{array}{r@{~}c@{~}l l}
3x+5y-6z &=&2, & \textrm{(i)} \\
5xy-10yz-6xz &=& -41, & \textrm{(ii)} \\
xyz&=&6. & \textrm{(iii)}
\end{array}$$
Let $a=3x$, $b=5y$, and $c=-6z$.
Determine the monic cubic polynomial in terms of a variable $t$ whose roots are $t=a$, $t=b$, and $t=c$.
(b) Given that $(x,y,z)$ is a solution to the original system of equations, determine all distinct possible values of $x+y$.
I've figured out the polynomial for Part (a), but I'm not sure how to solve Part (b).
AI: The polynomial from part (a) is
$$P(t) = t^3 - 2t^2 -123t + 540$$
Now, factorizing
$$P(t) = (t-5)(t-9)(t+12)$$
Since it has no repeated roots, the original system has distinct solutions corresponding to the following 6 match-ups:
a = 5, b = 9, c = -12 $\implies$ $x+y = \frac{5}{3} + \frac{9}{5}$
a = 9, b = 5, c = -12 $\implies x+y = 3 + 1$
and so on for the remaining permutations; then check for duplicate values of $x+y$.
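A complete enumeration of the six match-ups (a small Python sketch using exact rationals):
```python
from fractions import Fraction
from itertools import permutations

# x = a/3, y = b/5; the value of c = -6z does not affect x + y
values = {Fraction(a, 3) + Fraction(b, 5) for a, b, _ in permutations([5, 9, -12])}
print(sorted(values))  # 6 distinct values, so there are no duplicates here
```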
|
H: Does convexity around a point imply the function is above the tangent at that point?
Let $\phi: [0,\infty) \to [0,\infty)$ be a $C^2$ function, and let $c>0$ be a constant.
Suppose that for any $x_1,x_2>0, \alpha \in [0,1]$ satisfying $\alpha x_1 + (1- \alpha)x_2 =c$, we have
$$
\phi(c)=\phi\left(\alpha x_1 + (1- \alpha)x_2 \right) \leq \alpha \phi(x_1) + (1-\alpha)\phi(x_2) \tag{1}
$$
Is it true that
$$
\phi(x) \ge \phi(c)+\phi'(c) (x-c) \tag{2}
$$
for every $x$ in a neighbourhood $c$?
Does the answer change if we assume in addition that $\phi$ is strictly decreasing?
The classical proofs of "convexity of $\phi$ implies $\phi$ is above its tangents" do not seem to adapt to this setup.
The converse claim holds:
Indeed,
$$
\phi(x_1) \ge \phi(c)+\phi'(c) (x_1-c) , \phi(x_2) \ge \phi(c)+\phi'(c) (x_2-c)
$$
thus
$$
\alpha \phi(x_1) + (1-\alpha)\phi(x_2) \ge \phi(c)+\phi'(c)(\alpha x_1 + (1- \alpha)x_2 -c)=\phi(c).
$$
This formula here implies that $\phi''(c) \ge 0$.
AI: It's true. The most general formulation would be like this: Suppose your condition (1) is satisfied, and $$\phi'_{-}(c)=\lim_{h\to 0^-}\frac{\phi(c+h)-\phi(c)}{h}$$
exists. Then, $$\phi(x) \ge \phi(c)+\phi'_{-}(c) (x-c) \tag{3}$$ for $x>c.$ If
$$\phi'_{+}(c)=\lim_{h\to 0^+}\frac{\phi(c+h)-\phi(c)}{h}$$ exists, then
$$\phi(x) \ge \phi(c)+\phi'_{+}(c) (x-c) \tag{4}$$ for $0<x<c.$
Proof: Let $x>c,$ first. Then, for $h>0,$ we can choose $\alpha$ so that
$$c=\alpha\,(c-h)+(1-\alpha)\,x$$ i.e.
$$\alpha=\frac{x-c}{x-c+h},\quad 1-\alpha=\frac{h}{x-c+h}.$$ With $x_1=c-h$ and $x_2=x,$ your condition (1) becomes (after some algebra)
$$\phi(x)\ge(x-c)\,\frac{\phi(c)-\phi(c-h)}{h}+\phi(c),$$ and $h\to0$ proves (3). The proof of (4) (with $x_1=x<c$ and $x_2=c+h>c$) is completely analogous and left to the reader. ;-)
|
H: Linear transformation from complex to polynomial of degree 2
Is there a linear transformation $T:\mathbb{C}^4 \to P_2(\mathbb{C})$ such that $\operatorname{im}(T)= P_2(\mathbb{C})$?
AI: Something like
$$T(w_1,w_2,w_3,w_4)=w_1t^2+w_2t+w_3$$
can work.
|
H: Does bounded $L^2$-norm of $r$-th derivative imply bounded $L^2$-norm of $(r-1)$-th derivative?
Let $f:[0,1]\to [0,\infty)$ such that $\int_0^1 \left|\frac{d^r}{d x^r} f(x)\right|^2 d x<\infty$ for some $1\leq r<\infty, r\in\mathbb{N}$. Does it hold that $\int_0^1 \left|\frac{d^{r-1}}{d x^{r-1}} f(x)\right|^2 d x<\infty$?
AI: If you mean that $f$ is weakly differentiable $r$ times: set $g:=d^{r-1}f/dx^{r-1}$. Your assumption says that $g'\in L^2$. This implies by Sobolev embedding in dimension 1 that $g\in C^\alpha$ for some $\alpha$; in particular, it is in $L^2$.
If you mean $f$ is differentiable a.e.: no. Let $C$ be the devil's staircase, take $r=1$, and consider $$f(x)=\begin{cases} 1/C(x)^s & C(x)\neq 0 \\ 0 & C(x)=0\end{cases},$$ for some $s>0$. Then $f'=0$ a.e., so $\int |f'|^2=0$, but by choosing $s$ correctly, I'm quite sure we can make $\int |f|^2 = \infty$.
|
H: Determinant equal to zero, what does it mean?
If I understood correctly, a determinant equal to $0$ means that the parallelepiped spanned by the matrix's rows has no area/volume/etc. But if a system of equations, say $Ax = b$, has a unique solution $x$, does that mean that the determinant can't be zero in that case?
AI: Yes, because if the determinant is zero, then the system is either inconsistent (no solutions), or it has infinitely many solutions.
Assuming the determinant is defined (which means that you must have a square matrix), you can just take the inverse of the matrix. If the inverse exists, this will give you exactly one, unique solution.
See this other Math SE post: what does it mean if the determinant equals 0? for more on this topic.
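A quick NumPy illustration (the matrices below are arbitrary examples):
```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # det = -2, invertible
print(np.linalg.det(A), np.linalg.solve(A, np.array([1.0, 1.0])))

S = np.array([[1.0, 2.0], [2.0, 4.0]])   # det = 0, singular
try:
    np.linalg.solve(S, np.array([1.0, 1.0]))
except np.linalg.LinAlgError as e:
    print("singular:", e)                # no unique solution exists
```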
|
H: Having the dual basis $\beta^*=$ {$\phi_1,\phi_2,\phi_3$} where $\phi_1(x,y,z)=x-y$, which is $\beta=$ { $v_1,v_2,v_3$ }?
Let's suppose that I have $\beta=$ { $v_1,v_2,v_3$ }, a basis of $\mathbb{R^3}$, and its dual basis $\beta^*=$ {$\phi_1,\phi_2,\phi_3$} where $\phi_1(x,y,z)=x-y$.
Which is a basis $\beta=$ { $v_1,v_2,v_3$ }? (As $\beta$ is not uniquely determined, there is more than one such basis, so I have to give one of them.) I mean, I have to find which are the values of $v_1,v_2$ and $v_3$.
What I have done is:
Defining $v_1=(a,b,c), v_2=(d,e,f)$ and $v_3=(g,h,i)$
$(x,y,z)=\lambda_1 (a,b,c)+\lambda_2(d,e,f)+\lambda_3(g,h,i)$
And from here I tried to solve the system by giving arbitrary values to $a,b,c,d,e,f,g,h$ and $i$ (knowing that $\lambda_1=x-y$). But I didn't reach any solid conclusion... How can I solve this problem?
AI: The only thing we can deduce from the given information is that there are numbers $a,b,c,d,e,f$ for which
$$
v_1 = (a+1,a,b), \quad v_2 = (c,c,d), \quad v_3 = (e,e,f),
$$
and the vectors $v_1,v_2,v_3$ are linearly independent. Note that these vectors will be independent for "most" (or "randomly selected") numbers $a,b,c,d,e,f$.
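One concrete choice, checked numerically (a small NumPy sketch; the values of $a,\dots,f$ are arbitrary):
```python
import numpy as np

phi1 = lambda v: v[0] - v[1]

# a = b = 0, c = 1, d = 0, e = 0, f = 1 in the parametrization above
v1, v2, v3 = np.array([1.0, 0, 0]), np.array([1.0, 1, 0]), np.array([0.0, 0, 1])

print([phi1(v) for v in (v1, v2, v3)])               # [1.0, 0.0, 0.0], as required
print(np.linalg.det(np.column_stack([v1, v2, v3])))  # nonzero: a genuine basis
```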
|
H: Finding All Solutions For $\sin(x) = x^2$
Hello everyone, how can I find the number of solutions of $\sin(x) = x^2$?
I know there is one solution at $x = 0$, and for the other solutions I tried to find the extreme points of the function $y = x^2 - \sin(x)$; its derivative is
$y' = 2x -\cos(x)$, but I don't know how to solve this equation.
AI: It's quite obvious that there are no solutions when $x<0$, so we will look for $x\ge0$. You have found that $x=0$ satisfies the equation. Let's analyze for $x>0$:
Take $f(x)=x^2$ and $g(x)=\sin(x)$.
For $x=\frac{\pi}{4}$, some calculations give $f(\frac{\pi}{4})\approx 0.62$ while $g(\frac{\pi}{4}) \approx 0.71$: $$f(\frac{\pi}{4}) < g(\frac{\pi}{4})$$
For $x=1$, $f(1)=1$ but $g(1)<1$ since $\sin(x)$ is increasing for $x\in[0,\pi/2]$ and $\sin(\pi/2)=1$, then $$f(1)>g(1)$$
which means $f(x)$ exceeds $g(x)$ somewhere in $(\pi/4,1)$, so the graphs intersect in this interval. Now you just need to prove that they can't intersect more than once.
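Numerically, the intersection point can be located with a standard bracketing root finder (a SciPy sketch, assuming SciPy is available):
```python
import math
from scipy.optimize import brentq

f = lambda x: x * x - math.sin(x)
root = brentq(f, math.pi / 4, 1.0)  # the sign change guarantees a root here
print(root)  # ~0.8767
```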
|
H: Check this proof: If two columns/rows of a matrix are the same, the determinant is $0$.
I have written this proof stating that if two rows or columns of a matrix are the same, then the determinant of the matrix is equal to 0. Is it correct?
Let us say we have an n x n matrix A, shown below:
For some $i,n \in \mathbb{N}$.
If we say that $r_{i} = [a_{i1}, a_{i2}, a_{i3} ... a_{in}]$, a row vector, then we can rewrite the matrix A as:
If we then also create the swapped $S_{ij}$ matrix, i.e. swap rows $i$ and $j$ around, we have the matrix:
We know that if we swap two rows of a determinant, in this case rows $i$ and $j$, then the determinant will simply be the negative of the original determinant. We can say that $\det(A) = -\det(S)$. But, if these two rows are identical, then $\det(S) = \det(A)$, so this means that $\det(A) = -\det(A)$, so $\det(A)$ must be equal to $0$.
AI: In essence, yes, your proof is very correct, and thorough.
For further clarification you can see this video https://www.khanacademy.org/math/linear-algebra/matrix-transformations/determinant-depth/v/linear-algebra-duplicate-row-determinant which gives a very thorough breakdown of your proof.
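A quick numerical illustration of the swap argument (a NumPy sketch with an arbitrary random matrix):
```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A[2] = A[0]                  # duplicate a row

S = A.copy()
S[[0, 2]] = S[[2, 0]]        # swapping rows 0 and 2 leaves A unchanged here

print(np.linalg.det(A))      # ~0 (up to rounding)
print(np.allclose(A, S))     # True, yet a swap negates det => det = -det = 0
```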
|
H: Distributing $n$ unique presents to $k$ kids
Question:
You have 9 presents to give to your 4 kids. How many ways can this be done if:
The presents are unique and each kid gets at least one present?
I know the solution is by using Principle of Inclusion and Exclusion.
$4^9 - [{4 \choose 1}3^9 - {4 \choose 2}2^9 + {4 \choose 3}1^9] = 186480$ <-- Correct Solution
But, initially, when I approach this problem, my thought is this:
1.) First, make sure every kid has at least one present by finding the permutation $P(9,4)$.
$9 * 8 * 7 * 6 = 3,024$
2.) We are now left with 5 presents, since each present can be match to 4 different kids.
$4^5 = 1024$
3.) Each permutation in 1.) can be combined with each permutation in 2.) to form unique permutation.
$3024 * 1024 = 3,096,576$
Need help figuring out how this reasoning is wrong, because I still can't see what's wrong with it.
AI: You count part of the distributions multiple times.
Say David gets a ball and a doll. He can get the ball in step 1 and the doll in step 2 or the other way around. In the second method, you count this giveaway as two different ones, instead of one.
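A brute-force confirmation of the inclusion-exclusion count (a small Python sketch; $4^9 = 262144$ assignments is easily enumerable):
```python
from itertools import product

count = sum(1 for assign in product(range(4), repeat=9)
            if len(set(assign)) == 4)   # every kid gets at least one present
print(count)  # 186480, matching inclusion-exclusion
```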
|
H: Why can $\frac{{{x^2}}}{{\sqrt y }} \le t$ be represented as the following equivalent second order cone constraint?
I am quite new to the field of convex optimization, and in a research paper that I have read, the authors represent the constraint $\frac{{{x^2}}}{{\sqrt y }} \le t$ as an equivalent SOC constraint like this, without explanation.
Could you kindly explain to me why this is possible, and how they came up with such a clever use of the auxiliary variable $v$ to turn the original constraint into SOC form?
Thank you for your enthusiasm!
AI: That's because you can use SOC to write a constraint $$pq\geq r^2,$$ namely as $$\|2r,p-q\|_2\leq p+q,$$
and that makes it possible to write various simple inequalities between monomials, and then put them together to get more complicated monomials.
Since you want to model $t\sqrt{y}\geq x^2$ it feels natural to start with $$tv\geq x^2$$ and to complete the model you are forced to take $$\sqrt{y}\geq v,$$ which is also fine because it is equivalent to the same type of SOC: $$1\cdot y\geq v^2.$$
These are rather well-known tricks you can read about in https://docs.mosek.com/modeling-cookbook/cqo.html#simple-sets-involving-power-functions.
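The algebra behind the basic SOC identity $pq\geq r^2 \Leftrightarrow \|(2r,\,p-q)\|_2\leq p+q$ (for $p,q\ge0$) can be confirmed symbolically (a one-line SymPy sketch):
```python
from sympy import symbols, expand

p, q, r = symbols('p q r', nonnegative=True)
# (p + q)^2 - (p - q)^2 - 4r^2 = 4(pq - r^2), so the two conditions agree
print(expand((p + q)**2 - (p - q)**2 - 4 * r**2))  # 4*p*q - 4*r**2
```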
PS. The second inequality from your post looks wrong, $v$ and $y$ should be switched.
|
H: Curvature of Frenet curve on a sphere
The question is, how to prove that the curvature of any Frenet curve on a sphere with radius $R$ is bigger or equal to $1/R$.
I have managed to prove so far that the Gauss curvature of the sphere $x^2+y^2+z^2=R^2$ is $1/R^2$, but I don't know if this helps at all.
AI: Suppose $\alpha(s)$ is a unit speed curve lying in the sphere of radius $R$ centered at the origin. Then
$\alpha(s) \cdot \alpha (s) = R^2, \tag 1$
whence
$\dot \alpha(s) \cdot \alpha(s) = 0; \tag 2$
since
$\dot \alpha(s) = T(s), \tag 3$
the unit tangent vector to $\alpha(s)$, (2) becomes
$T(s) \cdot \alpha (s) = 0; \tag 4$
differentiating this equation yields
$\dot T(s) \cdot \alpha(s) + T(s) \cdot \dot \alpha(s) = 0; \tag 5$
we now recall (3), viz.
$\dot \alpha(s) = T(s) \tag 6$
and the Frenet-Serret equation
$\dot T(s) = \kappa(s) N(s); \tag 7$
then (5) yields
$\kappa(s) N(s) \cdot \alpha(s) + T(s) \cdot T(s) = 0; \tag 8$
also,
$T(s) \cdot T(s) = 1, \tag 9$
$T(s)$ being a unit vector. (8) may now be written
$\kappa(s) N(s) \cdot \alpha(s) = -1; \tag{10}$
note this forces
$\kappa(s) \ne 0; \tag{11}$
taking absolute values in (10) we find
$\kappa(s) \vert N(s) \cdot \alpha(s) \vert = 1; \tag{12}$
by Cauchy-Schwarz,
$ \vert N(s) \cdot \alpha(s) \vert \le \vert \alpha(s) \vert \vert N(s) \vert = R, \tag{13}$
since
$\vert \alpha(s) \vert = R \tag{14}$
and
$\vert N(s) \vert = 1; \tag{15}$
assembling (12) and (13) together we have
$\kappa(s) R \ge 1, \tag{16}$
or
$\kappa(s) \ge \dfrac{1}{R}, \tag{17}$
$OE\Delta$.
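A numerical spot check on a latitude circle (a small NumPy sketch; the radius and latitude are arbitrary sample values):
```python
import numpy as np

R, phi = 2.0, 0.9            # sphere radius and a latitude angle
r = R * np.cos(phi)          # radius of the latitude circle on the sphere

t = 0.7                      # evaluate kappa = |a' x a''| / |a'|^3 at one point
a1 = np.array([-r * np.sin(t),  r * np.cos(t), 0.0])   # alpha'(t)
a2 = np.array([-r * np.cos(t), -r * np.sin(t), 0.0])   # alpha''(t)

kappa = np.linalg.norm(np.cross(a1, a2)) / np.linalg.norm(a1)**3
print(kappa, 1 / R, kappa >= 1 / R)  # kappa = 1/r >= 1/R, since r <= R
```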
|
H: Why do we say that probability of an individual event in a continuous distribution is 0?
So I understand that the probability that $a<x<b$ is the definite integral from $a$ to $b$ of the probability density function, and that makes sense. If we use that same definition to define the probability that $x$ is equal to some value, then we get $0$. And yet, it doesn't really make sense to say that the probability of something is $0$ if it's still possible for it to happen. Furthermore, this introduces the paradox that adding up a bunch of zero probabilities somehow gives us a total probability of $1$. You can't add up zero things and get a nonzero result. That makes no sense. However, you can add up an arbitrarily large number of arbitrarily tiny pieces and get a nonzero real number result. This is precisely what integration does. So why don't we just say that the probability of a single event in a continuous distribution is simply undefined? That is, why not define the probability function as $p(a<x<b)=\int_a^b P(x)\,dx$ for $a \neq b$, with $p(a<x<b)$ undefined for $a=b$, where $P(x)$ is the probability density function?
I suppose, to an extent, this is just a matter of semantics: we can certainly redefine the word "probability" so that the sum of the individual probabilities doesn't have to equal $1$, but instead the integral of the probability density function must equal $1$, but what's the point? The way I see it, it doesn't really make sense to ask for the probability of a specific event when we're dealing with a continuous probability distribution, and I don't see how it would be useful. So why not just leave those probabilities undefined?
AI: I'll rephrase your concerns for a slightly different, but very related concept: volume.
The volume of a set of points is the volume integral of 1 over the given set. Using this definition, the volume of a single point in space is 0. Yet it doesn't make sense to say that the volume of a set is 0 if there's still something in it. Furthermore, it introduces the paradox that adding up a bunch of sets with volume 0 gives a set with positive volume, since every set is just a union of sets with a single point in it.
So wouldn't it be better to just leave the volume of a point undefined?
Here, the answer should be no, a volume of 0 for point-like sets is perfectly fine. There's also no paradox here, since adding only a countable number of zeroes actually produces zero again: any countable set has volume 0. And adding an uncountable number of numbers is undefined, so the volume of uncountable sets need not be 0, since it's an uncountable union of sets with volume zero, so we can't just add their volumes. The closest thing to an uncountable sum we have is the integral, which actually does give us the desired non-zero volume.
There is no difference between the volume case and the probability case. And I want to stress this: the underlying concept behind volume and probability is exactly the same: both probability and volume are defined via so-called measures, which were specifically defined to model the behavior of volumes. Any intuition you have about volume should be carried over to probability as is, with the single exception that probability must be no larger than 1. But that has no bearing on the question whether single points should have a defined volume/probability or not.
|
H: Function of 'Max Width' of Crossing Rectangular Bars
(Looking at the Visual Example image below should greatly help with your understanding of my problem)
I'm trying to find the function for the increase of the 'maximum width' of two (identically sized, although their width is all that matters here) rectangular bars, with respect to a changing angle between these two bars.
I say 'maximum width' because I'm not sure what to call it, in the Visual Example it is the red line. When rotating the bars, the pivot point is the exact centre of the red line, so at 180 or 0 degrees, the bars are perfectly aligned on top of each other, so the 'maximum width' is the width of the bars naturally.
Visual Example
Graph
This is a quick Desmos graph of my (very roughly) found data points, the x values are obviously the angle between the rectangular bars, and the y-values represent the 'max width' of the bars (both bars have width of 1 unit). Black line is the closest function I've got, seems to indicate the real function is not a simple exponential.
I'm trying to find this function for a small coding project of mine, and if it is too complicated, I'll have to settle with the inaccurate exponential I quickly came up with due to performance reasons. Any help is appreciated. Thank you!
AI: This is an extended comment intended to help OP describe their problem in sufficient detail for others to help.
Let's consider only the intersection of the two bars, parametrized by the angle $\theta$ between them. In the $\theta = 0$ and $\theta = 180°$ cases, the intersection is the unit-width infinite-length bar. In the $\theta = 90°$ case, the intersection is a unit square. For $0 \lt \theta \lt 90°$ and $90° \lt \theta \lt 180°$, the intersection is a parallelogram.
If the width is measured as a diagonal of the intersection – i.e., between the two points where the edges on the same side of the two bars intersect –, then you'd have
$$\begin{array}{r|c}
\theta ~ ~ & \text{Width} \\
\hline
0° & 1.00 \\
45° & 1.08 \\
90° & 1.41 \\
135° & 1.08 \\
180° & 1.00 \\
\end{array}$$
where the maximum width is $\sqrt{2} \approx 1.41$.
OP's data is wildly different, indicating the "width" is not measured between the two points where the bars' edges on the same side intersect, but in some other way. How? At a fixed angle $\varphi$ with respect to one of the bars, or some other way? OP needs to clarify this.
If we use $\theta$ as the angle between the two bars, such that at $\theta = 0°$ $w = 1$, and at $\theta = 180°$ $w = +\infty$, we have the following situation:
In the yellow right triangle, the hypotenuse is half the width, $\frac{w}{2}$, the horizontal side is half the bar width, $\frac{1}{2}$, and therefore
$$\frac{ ~ \frac{1}{2} ~ }{ ~ \frac{w}{2} ~ } = \frac{1}{w} = \cos\left(\frac{\theta}{2}\right) \quad \iff \quad w = \frac{1}{\cos\left(\frac{\theta}{2}\right)}$$
Some numerical values:
$$\begin{array}{r|r}
\theta ~ ~ & w ~ ~ ~ ~ ~ ~ \\
\hline
0° & 1.000000 \\
15° & 1.008629 \\
30° & 1.035276 \\
45° & 1.082392 \\
60° & 1.154701 \\
75° & 1.260472 \\
90° & 1.414214 \\
105° & 1.642680 \\
120° & 2.000000 \\
135° & 2.613126 \\
150° & 3.863703 \\
165° & 7.661298 \\
\end{array}$$
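Since OP mentions a coding project, here is a minimal Python sketch of the $w = 1/\cos(\theta/2)$ model above (the function name and degree-based interface are my own choices):

```python
import math

def max_width(theta_deg: float, bar_width: float = 1.0) -> float:
    """'Max width' of two crossing bars of equal width, per w = 1/cos(theta/2)."""
    return bar_width / math.cos(math.radians(theta_deg) / 2.0)

for theta in (0, 45, 90, 120, 150, 165):
    print(f"{theta:>3}  {max_width(theta):.6f}")   # reproduces the table above
```

Note that the model blows up at $\theta = 180°$, where $\cos(\theta/2) = 0$.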
|
H: Approximation in the integral to calculate the age of the universe
I'm computing the following integral: $$T=\frac{1}{H_{0}} \int^{1}_{0} \frac{da}{\sqrt{\frac{\Omega_{M}(t_{0})}{a}}\left( \sqrt{1+\frac{\Omega_{R}(t_0)}{\Omega_{M}(t_0) \,a}+\frac{\Omega_{\Lambda}(t_0) \,a^3}{\Omega_{M}(t_0) }} \right)}$$
with: $\Omega_{\Lambda}(t_0)= 0.73,\Omega_{M}(t_0)= 0.27,\Omega_{R}(t_0)=8.51 \cdot 10^{-5},H_0=2.26\cdot 10^{-18}s^{-1}$
At this passage I want to do:
$$T=\frac{1}{H_{0}} \int^{1}_{0} \frac{da}{\sqrt{\frac{\Omega_{M}(t_{0})}{a}}\left( \sqrt{1+\frac{\Omega_{R}(t_0)}{\Omega_{M}(t_0) \,a}+\frac{\Omega_{\Lambda}(t_0) \,a^3}{\Omega_{M}(t_0) }} \right)} \simeq\frac{1}{H_{0}} \int^{1}_{0} \frac{da}{\sqrt{\frac{\Omega_{M}(t_{0})}{a}}\left( \sqrt{1+\frac{\Omega_{\Lambda}(t_0) \,a^3}{\Omega_{M}(t_0) }} \right)} $$
I can't see why this approximation makes sense.
AI: Using the mean value theorem, we have that for any $0 < x < y$, $$\dfrac{1}{\sqrt{1+x}}-\dfrac{1}{\sqrt{1+y}} = \dfrac{y-x}{2(1+\xi_{x,y})^{3/2}}$$ for some $\xi_{x,y}$ between $x$ and $y$. Hence, $$\left|\dfrac{1}{\sqrt{1+x}}-\dfrac{1}{\sqrt{1+y}}\right| \le \dfrac{|y-x|}{2}$$ for all $x,y > 0$.
Using this bound, we have:
\begin{align*}
&\left|\frac{1}{\sqrt{\frac{\Omega_{M}(t_{0})}{a}}\left( \sqrt{1+\frac{\Omega_{\Lambda}(t_0) \,a^3}{\Omega_{M}(t_0) }}{} \right)} - \frac{1}{\sqrt{\frac{\Omega_{M}(t_{0})}{a}}\left( \sqrt{1+\frac{\Omega_{R}(t_0)}{\Omega_{M}(t_0) \,a}+\frac{\Omega_{\Lambda}(t_0) \,a^3}{\Omega_{M}(t_0) }}{} \right)}\right|
\\
&= \frac{1}{\sqrt{\frac{\Omega_{M}(t_{0})}{a}}}\left| \dfrac{1}{\sqrt{1+\frac{\Omega_{\Lambda}(t_0) \,a^3}{\Omega_{M}(t_0)}}} - \dfrac{1}{\sqrt{1+\frac{\Omega_{R}(t_0)}{\Omega_{M}(t_0) \,a}+\frac{\Omega_{\Lambda}(t_0) \,a^3}{\Omega_{M}(t_0)}}}\right|
\\
&\le \frac{1}{\sqrt{\frac{\Omega_{M}(t_{0})}{a}}} \cdot \frac{\Omega_{R}(t_0)}{2\Omega_{M}(t_0) \,a}
\\
&= \dfrac{\Omega_R(t_0)}{\Omega_M(t_0)^{3/2}} \cdot \dfrac{1}{2\sqrt{a}}
\end{align*}
Hence, the two integrals differ by less than $$\dfrac{1}{H_0}\displaystyle\int_{0}^{1}\dfrac{\Omega_R(t_0)}{\Omega_M(t_0)^{3/2}} \cdot \dfrac{1}{2\sqrt{a}}\,da = \dfrac{\Omega_R(t_0)}{H_0\Omega_M(t_0)^{3/2}} \approx \dfrac{6.07 \cdot 10^{-4}}{H_0}$$
The second integral is at least $$\dfrac{1}{H_0}\int_{0}^{1}\dfrac{da}{\sqrt{\tfrac{\Omega_M(t_0)}{a}} \sqrt{1+\tfrac{\Omega_{\Lambda}(t_0) \cdot 1^3}{\Omega_M(t_0)}}} = \dfrac{1}{H_0}\int_{0}^{1}\dfrac{\sqrt{a}\,da}{\sqrt{\Omega_M(t_0)+\Omega_{\Lambda}(t_0)}} = \dfrac{2/3}{H_0}$$
So the second integral is within $0.1\%$ of the first integral.
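A numerical cross-check of this bound, assuming SciPy is available (the setup below is my own sketch):

```python
import numpy as np
from scipy.integrate import quad

H0 = 2.26e-18                       # s^-1
Om, Or, Ol = 0.27, 8.51e-5, 0.73

full = lambda a: 1 / (np.sqrt(Om / a) * np.sqrt(1 + Or/(Om*a) + Ol*a**3/Om))
appr = lambda a: 1 / (np.sqrt(Om / a) * np.sqrt(1 + Ol*a**3/Om))

Tf = quad(full, 0, 1)[0] / H0       # age with the radiation term
Ta = quad(appr, 0, 1)[0] / H0       # age without it
print(Tf, Ta, abs(Tf - Ta) / Tf)    # relative difference well below 0.1%
```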
|
H: What's the kernel of $w\mapsto w \bullet \bullet$?
For $v\in K^n$, the dot product defines a linear transformation
$-\bullet v: K^n\to K, w\mapsto w\bullet v$. Let $e_i$ be the $i$-th basis vector of $K^n$. What is the kernel of $-\bullet e_i$?
I know that the dot product of two vectors is zero if they are orthogonal. Can someone help me?
AI: Let $f:=-\bullet e_i$. If $x=(x_1,...,x_n) \in K^n$, then
$$x \in ker(f) \iff f(x)=0 \iff x \bullet e_i=0 \iff x_i=0.$$
|
H: A sufficient condition for a space not to be $T_1$ by a collection generating the topology.
I've thought about the following result, which I wanted to verify:
Let $X$ be a topological space where $\vert X\vert>1$ generated by a collection of subsets $\{ S_\alpha \}_{\alpha\in \Lambda}$, such that $\cup_{\alpha\in \Lambda} S_\alpha \neq X$, then $X$ is not a $T_1$ space.
My reasoning for this is that if $X$ is a topological space with more than one point, and it has a point $x\in X$ without a proper neighbourhood, then $\overline{ \{x \} }=X$. This follows from the fact that $y\in X$ is in $\overline{E}$ if and only if $U\cap E\neq \emptyset$ for any neighbourhood $U$ of $y$.
Is this logic sound? I am currently trying to deduce conditions from a generating collection about whether it does not satisfy separation axioms, so if there is a similar stronger result I would be happy to be informed of it.
AI: My reasoning for this is that if $X$ is a topological space with more than one point, and it has a point $x\in X$ without a proper neighbourhood, then $\overline{ \{x \} }=X$.
Let's see, consider $X=\{1,2\}$ with topology $\tau=\big\{\emptyset,\{1\},\{1,2\}\big\}$. Note that $(X,\tau)$ satisfies your assumption, since it is generated by $\big\{\{1\}\big\}$. Clearly the only open neighbourhood of $2$ is $X$ itself. But $\{2\}=X\backslash\{1\}$ is closed, i.e. $\overline{\{2\}}=\{2\}$.
It's the other way around: under your assumption the only point that has a chance of being closed is the special one you've chosen.
Assume that $x_0\in X$ is a point such that $X$ is the only open neighbourhood of $x_0$ (such point exists by your condition). It follows that $x_0$ belongs to the closure of any subset of $X$. In particular $x_0\in\overline{\{x\}}$ for any $x\in X$. Thus almost all (possibly except $x_0$) points are not closed. Hence $X$ is not $T_1$.
|
H: Is a homogeneous space $X$ of a compact Hausdorff group $G$ with closed stabilizer $G_x$ itself Hausdorff?
Assume $G \times X \to X$ is a continuous group action of a compact Hausdorff topological group $G$ such that the action is transitive, i.e. $X$ is a homogeneous space. Let $G_x$ be the stabilizer subgroup of $x \in X$ and assume $G_x$ is closed. Is it then true that $X$ is a Hausdorff space?
My ideas:
We can define the map $\varphi: G/H \to X$, $[g] \mapsto gx$, which is a bijective continuous function. Then it is well-known that $G/H$ is Hausdorff since $H$ is closed. However, I'm not sure if we can show that $\varphi$ is a homeomorphism. To deduce this directly would already require the Hausdorff property for $X$.
AI: Let $G=S_{3}$, the symmetric group over $3$ elements, equipped with the discrete topology (so it's a compact Hausdorff group). There is a natural action $\varphi\colon G\curvearrowright X=\{1,2,3\}$ given by $\sigma\cdot x=\sigma(x)$. The action is transitive, and for $x=1$, $G_{x}=\{e,(23)\}$ is closed. Nevertheless, if we give $X$ the trivial topology, $X$ cannot be Hausdorff (and the action is still continuous because the topology on $X$ is trivial).
|
H: Linear Algebra - direction of ball after bounce off a plane
In an orthonormal system, a ball is thrown from a point $P=(2,6,5)$ towards a plane with
equation $x-z=-2$, in such a way that after it bounces off the plane, it passes through the point $Q=(3,3,7)$. What is the ball's direction after the bounce?
This question has been posed here before but with a wrong answer.
Reflection of a line through a plane: ball bouncing
I had managed to solve the problem a few weeks ago, obtaining the correct answer, which is
that the ball's direction vector is $v = (1,-1,0)$. But somehow I can't remember how I solved it
last time. Although, I remember being able to find the point of impact on the plane, which I
believe is the key to solving the problem.
At the moment, my line of thinking is:
The plane's normal vector is $(1,0,-1)$. We could consider the reflection point $P'$ on the other side of the plane, which maybe could be calculated as a multiple of the plane's normal that extends to $P'$. The parametric equation of the line passing through $P=(2,6,5)$ and normal to the plane is $(x,y,z) = (2,6,5) + t(1,0,-1)$, i.e. $x = 2 + t$, $y = 6$, $z = 5 - t$.
For a certain $t$, this line will cross the plane, at which point the equation of the plane must be satisfied; thus $(2+t) - (5-t) = -2 \Rightarrow t = 0.5$. So it crosses the plane at
point $M = (2.5,6,4.5)$, and for double the $t$ amount we get the reflection point $P' = (3,6,4)$.
From this I thought maybe I can get the ball's direction vector by $v = Q - P' = (0,-3,3)$.
But I know this is wrong. Last time, constructing the direction vector by
taking $v = Q - (\text{point of impact})$ gave me the correct answer.
AI: I strongly believe that your answer is correct and the given answer is incorrect.
We can check the given solution as follows. If the direction vector is $(1,-1,0)$, then the trajectory after the bounce follows the line
$$
L(t) = Q + t(1,-1,0) = (3 + t,3-t,7).
$$
The point where the ball would bounce of the plane satisfies
$$
(3 + t) - 7 = -2 \implies t = 2 \implies L(t) = (5,1,7).
$$
From here, we already have an indication something is wrong: the solution $t \geq 0$ implies that we hit the plane after continuing along the direction of travel rather than by going backwards.
In any case, the direction from the point $A = (5,1,7)$ to $Q$ is $(-1,1,0)$, and the direction from $A$ to $P$ is $(2,6,5) - (5,1,7) = (-3,5,-2)$.
For this to make physical sense, the angle between $(-1,1,0)$ and the normal should be half the angle between $(-1,1,0)$ and $(-3,5,-2)$. The first angle is given by
$$
\cos \theta = \frac{(-1,1,0) \cdot (1,0,-1)}{\sqrt{2}\cdot{\sqrt{2}}} = \frac{-1}{2}
$$
so $\theta = 120^\circ$, which is already problematic. The other angle satisfies
$$
\cos \theta = \frac{(-1,1,0) \cdot (-3,5,-2)}{\sqrt{2}\cdot{\sqrt{38}}} = \frac{8}{2 \sqrt{19}},
$$
so $\theta \approx 23.41^\circ$. So, nothing about the solution makes sense.
An alternative way to find $P'$:
We note that $A = (0,0,2)$ is a point on the plane; any point from the plane will do; this one was computed by setting $x = y = 0$. To calculate the reflected point $P'$, first calculate the projection of $P - A$ onto the plane's normal as follows. Let $n$ denote the normal vector $n = (1,0,-1)$. The projection of $P - A$ onto the normal is given by
$$
v = \operatorname{proj}_n (P - A) =
\frac{(P - A) \cdot n}{n \cdot n} n
= \frac{(2,6,3) \cdot (1,0,-1)}{n \cdot n} (1,0,-1)
\\ = \frac{-1}{2}\cdot (1,0,-1) = (-1/2,0,1/2).
$$
Since $P - v$ is the projection of $P$ onto the plane, $P - 2v$ is the reflection of $P$ across the plane. That is,
$$
P' = P - 2v = (2,6,5) - 2(-1/2,0,1/2) = (3,6,4).
$$
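The same computation, as a minimal numpy sketch (assuming numpy; the variable names are mine):

```python
import numpy as np

P = np.array([2.0, 6.0, 5.0])
n = np.array([1.0, 0.0, -1.0])   # normal of the plane x - z = -2
A = np.array([0.0, 0.0, 2.0])    # a point on the plane

v = (P - A) @ n / (n @ n) * n    # projection of P - A onto the normal
P_prime = P - 2 * v              # reflection of P across the plane
print(P_prime)                   # [3. 6. 4.]
```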
|
H: Well-defineness/existence of an integral
I am reading on page 2, it says
For a finite Borel measure $m$ on the real line $\mathbb{R}$, let us recall that its Cauchy transform $G_m$ is defined by
$$
G_m(z)=\int_{\mathbb{R}}\frac{1}{z-x}\mathrm{d}m(x),\qquad \textrm{for } z\in \mathbb{C}\setminus \mathbb{R}=\{z\in \mathbb{C}:\Im z\neq 0\}
$$
Question: How to justify if this integral is well-defined (in case when $m$ is complex)?
AI: $\int|\frac 1 {z-x}| d|m|(x) \leq \frac 1 {|\Im z|} |m|(\mathbb R) <\infty$. [All you have to know is that any bounded measurable function is integrable w.r.t. any complex measure].
|
H: What is the conjugate of $[i + e^{iπt}]$?
What is the conjugate of the following complex number?
$$ Z(t) = i + e^{iπt} $$
Is it $Z(t) = i - e^{iπt}$ or $Z(t) = i - e^{-iπt}$? $t\in [0,1]$
AI: $$Z(t) = i + \cos(\pi t) + i\sin(\pi t)$$
$$Z(t) = \cos(\pi t) + i(1 + \sin(\pi t))$$
$$\overline{Z(t)} = \bar{Z}(t) = \cos(\pi t) - i(1 + \sin(\pi t))$$
$$\bar{Z}(t) = -i + \cos(-\pi t) + i\sin(-\pi t)$$
$$\bar{Z}(t) = -i + e^{-i\pi t}$$
|
H: Show that $X$ must be degenerate at $n$.
For $X$ which is an integer-valued random variable
$$\mathbb{E}[X(X-1)(X-2)...(X-(k-1))] = \begin{cases}
k!\ {n \choose k},& \text{if } k \in \mathbb{N}\\
0, & \text{otherwise}
\end{cases}$$.
which can be rewritten as
$$\mathbb{E}\bigg[\frac{X!}{(X-k)!}\bigg] = \begin{cases}
k!\ {n \choose k},& \text{if } k \in \mathbb{N}\\
0, & \text{otherwise}
\end{cases}$$.
Though it was not mentioned, $n \in \mathbb{N}$ and $n \geq k$ can be assumed.
Show that X must be degenerate at n.
AI: For $k=1$, we get $E(X)=n$. For $k=2$ we get $E(X(X-1))=n(n-1)$ so $E(X^2)=n^2$. But this implies that the variance is $0$ and $X=n$ with probability $1$.
|
H: Möbius transformation that carries the real axis to the unit circle
It is known that any (invertible) Möbius transformation carries lines and circles in the complex plane into lines and circles. Which Möbius transformations
$$
T(z)=\frac{az+b}{cz+d}
$$
carries the real axis into the unit circle? More generally, which Möbius transformations carry lines into circles? Thanks.
AI: Select any $3$ distinct points $z_1,z_2,z_3$ on the unit circle. We note that the transformation
$$
S(z) = \frac{(z - z_1)(z_2 - z_3)}{(z - z_3)(z_2 - z_1)} = \frac{(z_2 - z_3)z + z_1(z_3 - z_2)}{(z_2 - z_1)z - z_3(z_2 - z_1)}
$$
maps $z_1,z_2,z_3$ to $0,1,\infty$. It follows that the inverse transformation
$$
T(z) = S^{-1}(z) = \frac{z_{3}(z_1 - z_2)\,z + z_1(z_2 - z_3)}{(z_1 - z_2)\,z + (z_2 - z_3)}
$$
maps $0,1,\infty$ to the unit circle. It is clear that selecting distinct $z_1,z_2,z_3$ leads to distinct transformations. Moreover, because any bilinear transformation sends the points $0,1,\infty$ somewhere, we can conclude that every bilinear transformation that takes $\Bbb R$ to the unit circle has the above form for some choice of $z_1,z_2,z_3$ on the unit circle. In other words, the above defines a one-to-one correspondence between the maps we want and the triples of distinct points in $\Bbb T^3$.
One particularly "nice" example of such a transformation is
$$
f(z) = \frac{z + i}{z - i},
$$
which corresponds to selecting $z_1 = -1,z_2 = i, z_3 = 1$. Indeed, substituting these $z_j$ into the above yields
$$
\frac{1(-1-i)z + (-1)(i-1)}{(-1 - i)z + (i - 1)} =
\frac{-(1 + i)z + (1-i)}{-(1 + i)z - (1 - i)}\\
= \frac{-(1 + i)z + (1-i)}{-(1 + i)z - (1 - i)} \cdot \frac {-(1-i)}{-(1-i)}
\\ = \frac{2z + 2i}{2z - 2i} = \frac{z + i}{z - i}.
$$
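A quick numeric check of this "nice" example (assuming numpy):

```python
import numpy as np

f = lambda z: (z + 1j) / (z - 1j)
xs = np.linspace(-10.0, 10.0, 9)
print(np.abs(f(xs)))   # all 1.0: real inputs land on the unit circle
```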
|
H: What Goes Wrong in the Application of Lax-Milgram?
I have the following equation
$$\int\nabla u\nabla v =\int fv$$
So I want to find $f$ for which there is an unique $u$ so the above is satisfied for all $v$ in the space (say $H^1$).
So define bilinear map $B$ from $H^1$ to $\mathbb R$
$$B(u,v):=\int\nabla u \nabla v$$
Clearly by the Cauchy-Schwarz and Poincaré inequalities
$$B(u,v)\leq C||u||||v||$$
$$B(u,u) \geq C||u||^2$$
So now by Lax-Milgram for any functional $f$ there exists $u_f$, so that for all $v$
$$B(u_f,v)=(f,v)$$
So we should be done by Riesz representation, choosing any functional $f$ on $H^1$. However, if we choose $f=1$ we find that when $v=1$, regardless of $u$, we get $$\int\nabla u\cdot\nabla v=0, \quad\text{while}\quad \int f v \neq 0$$
So this should only work when the integral of $f$ is zero. What went wrong?
AI: You used the Poincaré inequality, but this only holds (in this form) on suitable subspaces of $H^1$. For example, it is wrong for $u=1$.
|
H: What is the value of $\int_0^1\pi e^{i\pi t}dt$?
What is the integration of the following equation?
$$\int_0^1Z(t)dt =\int_0^1\pi e^{i\pi t}dt$$
AI: Using the substitution $u = i\pi t$, with $du/dt= i\pi$, the integral $\int_0^1 \pi e^{i\pi t}\, dt$ becomes (also substituting the bounds) $\frac{\pi}{i\pi}\int_0^{i\pi}e^u\,du$. Since the antiderivative of $e^u$ is $e^u$, again substituting in the bounds and using Euler's identity, we get $\frac1i\left(e^{i\pi}-e^0\right)=-\frac1i-\frac1i=2i$.
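The result can be double-checked symbolically, assuming sympy is available:

```python
import sympy as sp

t = sp.symbols('t', real=True)
print(sp.integrate(sp.pi * sp.exp(sp.I * sp.pi * t), (t, 0, 1)))  # prints 2*I
```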
|
H: Is the function $f : \mathbb{R} \to \mathbb{R}$ such that $f(x) = x$ for $x \leq 0$ and $x+1$ for $x>0$ continuous?
Is the function $f : \mathbb{R} \to \mathbb{R}$ such that $f(x) = x$ for $x \leq 0$ and $x+1$ for $x>0$ continuous at $x = 0$?
If we consider the limit definition, then $\lim_{x \to 0+} f(x) = 1$ while $\lim_{x \to 0-} f(x) = 0$ and $f(0)=0$.
Is $f(0) = 0$ a necessity for continuity?
How do I contradict the continuity of $f$ at $0$ using the epsilon-delta definition?
AI: To answer $(1)$, for continuity at $x=0$, first the limit needs to exist at $x=0$, i.e. the left and right hand limits need to be equal, which it is not. If that condition holds, only then you check if $$\lim_{x\rightarrow 0} f(x)=f(0)$$, because you first need to ensure that the above LHS actually exists. And yes, the above condition is necessary to check continuity at $0$.
For $(2)$, let's remind ourselves of the epsilon-delta definition for continuity, which is "$f(x)$ is said to be continuous at $x=c$" if $$\forall \epsilon>0, \exists \delta_{\epsilon,c} >0 \text{ such that } \mid x-c \mid < \delta_{\epsilon,c} \implies \mid f(x)-f(c)\mid <\epsilon$$ (The subscripts $\epsilon,c$ denote that the choice of $\delta$ usually depends on the values of $\epsilon$ and $c$.)
For our $f(x)$, it is enough to show that there is an $\epsilon>0$ such that there is no $\delta$ for it such that the above condition holds.
Choose $\epsilon=0.0001$ and $c=0$ (the point where we want to check continuity), and suppose $\exists \delta>0$ such that the condition mentioned above holds.
Then for $\mid x\mid < \delta$ we should have $$\mid f(x)-f(0)\mid <0.0001 (*)$$
However, $x=\dfrac{\delta}2>0$ satisfies $\mid x\mid < \delta$, but $$\mid f\left( \dfrac{\delta}2 \right) - 0 \mid = \dfrac{\delta}2+1>0.0001 \text{ violates } (*)$$
So there doesn't exist any choice of $\delta>0$, which proves the discontinuity.
|
H: Probabilities of Bivariate Normal Distribution
I have the following normal distribution that all of the parameter are known
$$\begin{pmatrix}
X_1\\
X_2\\
\end{pmatrix}
\sim N\left[\begin{pmatrix}
\mu_1 \\
\mu_2 \\
\end{pmatrix},\begin{pmatrix}\sigma^2 & \rho \\
\rho & \sigma^2\\\end{pmatrix}
\right]$$
$1.$ $\Bbb P(X_1\le\mu_2) = \Phi(\frac{\mu_2 - \mu_1}{\sigma})$
is that correct?
$2.$ How do I calculate $\Bbb P(X_1< X_2)$
AI: Correct!
It is easy to verify that the marginal distributions are
$X_1\sim N(\mu_1;\sigma^2)$
$X_2\sim N(\mu_2;\sigma^2)$
with $Cov(X_1;X_2)=\rho$
Then it is immediate to calculate
$\mathbb{P}[X_1<X_2]=\mathbb{P}[X_1-X_2<0]$
via the distribution of $Z=X_1-X_2$, which is known: $Z\sim N(\mu_1-\mu_2,\; 2\sigma^2-2\rho)$, so $\mathbb{P}[X_1<X_2]=\Phi\left(\frac{\mu_2-\mu_1}{\sqrt{2\sigma^2-2\rho}}\right)$
|
H: Is a stochastic matrix always diagonalizable?
Let $A$ be a $n$-by-$n$ left stochastic matrix.
The followings are the properties I found so far:
Algebraic Property
$A$ has left eigenvector $[1,1,\cdots,1]$ ($n$ 1s) with corresponding eigenvalue $1$.
Geometric Property
Let $S$ be a $(n-1)$-simplex whose vertices are at $[1,0,\cdots,0],[0,1,\cdots,0],\cdots,[0,0,\cdots,1]$, border inclusive.
Then $A$ maps $S$ within $S$. (Linearly, of course)
Question
By the geometric property, It seems intuitively clear that:
$A$ has a right eigenvector within $S$ with corresponding eigenvalue $1$.
Along with the vector above, $n-1$ vectors from the (hyper)plane that passes through the origin and is orthogonal to $[1,1,\cdots,1]^T$ complete a basis of the eigenspace of $A$.
Is this true?
AI: The answer to your first question is yes. In general, for any square matrix $A$ (not just the stochastic ones), $A$ has a left eigenvector corresponding to an eigenvalue $\lambda$ if and only if it has a right eigenvector corresponding to the same eigenvalue. In fact, the two eigenspaces have the same dimension, because $\lambda I-A$ has the same nullity as $\lambda I-A^\top$.
The answer to your second question is negative. See Gerry Myerson's answer to another question for a counterexample.
|
H: What are specific proofs of Jacobi Triple Product Identity?
I am looking for specific proofs.
Here is a reference from MSE.
Motivation for/history of Jacobi's triple product identity
I also know a simple proof via a functional equation from the book An Invitation to q-Series by Hei-Chi Chan; it has a very nice explanation, step by step. I have also learnt Zeilberger's recursive proof.
What other interesting proofs of this identity do you know? Thanks for your answers.
AI: For a combinatorial proof of Jacobi's Triple Identity see the article by Kolitsch.
For an elementary number-theoretic proof see the article by Herbert S. Wilf, which also has references for other proofs.
For a standard proof see for example here.
Another well-known proof is by Andrews, using Euler's identities.
|
H: Let $\langle x_n\rangle$ be a recursive relation. Find $\lim_{n\to\infty}\frac {x_n}{n^2}.$
Let $\langle x_n\rangle$ be a recursive relation given by $$x_{n+1}=x_n+a+\sqrt {b^2+4ax_n}, n\geq0, x_0 =0$$ and $a$ and $b$ are fixed positive integers. Find $$\lim_{n\to\infty}\frac {x_n}{n^2}.$$
AI: Clearly, $\lim_{n\to \infty} x_n = \infty$: since $x_{n+1} \ge x_n + a$ and $x_0 = 0$, we have $x_n \ge na$. We have
\begin{align}
\sqrt{x_{n+1}} - \sqrt{x_n} &= \sqrt{x_n + a + \sqrt{b^2 + 4ax_n}} - \sqrt{x_n}\\[6pt]
&= \frac{a + \sqrt{b^2 + 4ax_n}}{\sqrt{x_n + a + \sqrt{b^2 + 4ax_n}} + \sqrt{x_n}}\\[6pt]
&\to \sqrt{a} \quad \mathrm{as}\quad n \to \infty.
\end{align}
By the Stolz-Cesaro theorem, we have
$$\lim_{n\to \infty} \frac{\sqrt{x_n}}{n} = \sqrt{a}$$
and thus
$$\lim_{n\to \infty} \frac{x_n}{n^2} = a.$$
We are done.
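A quick numerical check of the limit, with assumed sample values of $a$ and $b$:

```python
import math

a, b = 3.0, 2.0            # assumed sample values of the fixed positive integers
x, N = 0.0, 200_000
for _ in range(N):
    x = x + a + math.sqrt(b * b + 4.0 * a * x)
print(x / N**2)            # close to 3.0, matching the limit a
```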
|
H: Find the x-coordinate of the stationary point on the curve $\tan(x)\cos(2x)$ for $0 < x < \pi/2$
Can someone please show me how to find the x-coordinate for the stationary point for this curve?
$y=\tan(x)\cos(2x)$ for $0 < x < \pi/2$
This is what I've done so far:
$$\frac{dy}{dx}=\cos(2x)\sec^2(x)-2\tan(x)\sin(2x)$$
$$\cos(2x)\sec^2(x)-2\tan(x)\sin(2x)=0$$
$$1-\tan^2(x)=4\sin^2(x)$$
I don't know how to continue this to find the answer.
The final answer should be $x\approx 0.452$
AI: $\cos (2x)=2\tan x \sin (2x)\cos^{2}x$. So $\cos (2x)=\sin^{2} (2x)=1-\cos^{2} (2x)$. Solve this quadratic for $\cos (2x)$.
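Solving the quadratic $\cos(2x) = 1 - \cos^2(2x)$ gives $\cos(2x)=\frac{\sqrt5-1}{2}$ (taking the root in $(0,1)$), which can be checked numerically in plain Python:

```python
import math

c = (math.sqrt(5) - 1) / 2   # positive root of c**2 + c - 1 = 0
x = math.acos(c) / 2
print(round(x, 3))           # 0.452, matching the expected answer
```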
|
H: Change the scale of sigmoid function to get a value between -0.5 to +.0.5?
Right now the sigmoid function usually gives a value between 0 and 1. I want to scale it down by 0.5 so it gives a value between -0.5 and 0.5.
How can I do it?
AI: You said it yourself: you want to scale it down by 0.5. So take your
$$
f(x) = \frac{1}{1+e^{-x}}
$$
and move it down by $0.5$:
$$
g(x) = \frac{1}{1+e^{-x}} - \frac{1}{2}
$$
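A minimal sketch in Python (the function name is my own); note that $g$ is in fact $\tanh(x/2)/2$:

```python
import math

def shifted_sigmoid(x: float) -> float:
    """Sigmoid shifted down by 0.5, so its range is (-0.5, 0.5)."""
    return 1.0 / (1.0 + math.exp(-x)) - 0.5

for x in (-10.0, 0.0, 10.0):
    print(x, shifted_sigmoid(x), math.tanh(x / 2) / 2)   # the two columns agree
```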
|
H: How do I evaluate $\lim_{n\to\infty} \,\sum_{k=1}^n\left(\frac{k}{n^2}\right)^{\frac{k}{n^2}+1}$?
I came across the following problem recently in a problem sheet aimed at high school students:
Evaluate $$\lim_{n\to\infty} \,\sum_{k=1}^n\left(\frac{k}{n^2}\right)^{\frac{k}{n^2}+1}.$$
I tried to rewrite the inner sum as a Riemann sum hoping that the limit would become a definite integral, but no gain because of the extra $1/n$'s.
AI: We can rewrite the sum as
$$S = \lim_{n\to\infty}\sum_{k=1}^n \left(\frac{k}{n^2}\right)^{\frac{k}{n^2}}\cdot\frac{k}{n}\cdot\frac{1}{n}$$
We also have that for $1\leq k \leq n$
$$\left(\frac{1}{n}\right)^{\frac{1}{n}} \leq \left(\frac{k}{n^2}\right)^{\frac{k}{n^2}} \leq \left(\frac{1}{n^2}\right)^{\frac{1}{n^2}}$$
for $n > e$. Thus we can sandwich the original limit
$$\lim_{n\to\infty} \left(\frac{1}{n}\right)^{\frac{1}{n}} \cdot \sum_{k=1}^n \frac{k}{n}\cdot\frac{1}{n} \leq S \leq \lim_{n\to\infty} \left(\frac{1}{n^2}\right)^{\frac{1}{n^2}} \cdot \sum_{k=1}^n \frac{k}{n}\cdot\frac{1}{n}$$
which means that
$$S = \lim_{n\to\infty} \sum_{k=1}^n \frac{k}{n}\cdot\frac{1}{n} = \int_0^1x\:dx = \frac{1}{2}$$
by squeeze theorem.
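A brute-force check of the limit for a moderately large $n$ (plain Python; the value of $n$ is an assumption):

```python
n = 10_000
s = sum((k / n**2) ** (k / n**2 + 1) for k in range(1, n + 1))
print(s)   # close to 0.5, consistent with the limit 1/2
```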
|
H: Graph homomorphism, but edges can be mapped to paths
Let $G$ and $H$ be graphs. Is there a name for a function $f$ which
Maps each vertex $x$ of $G$ to a vertex $f(x)$ of $H$
Maps each edge $e \in E(G)$ with endpoints $x$ and $y$ to a path $f(e)$ between $f(x)$ and $f(y)$
In other words, $f$ is like a graph homomorphism, but edges can be mapped to any path with the right endpoints.
AI: A functor. If you treat the graphs as categories, where the objects are vertices, morphisms are paths, and composition is path concatenation, then what you describe is a functor between the graphs.
You also say in the comments:
The idea is that the edges in the graph represent basic transformations between certain states, and paths represent more complex transformations that are composed of several basic transformations.
This is exactly what categories are used for.
|
H: If $x+y+z=1$, prove that $9xyz+1\ge 4(xy+yz+zx)$
If $x+y+z=1$, prove that $9xyz+1\ge 4(xy+yz+zx)$ for $x,y,z\in \Bbb R^+$
I tried to solve this by splitting $9xyz$ as $3xyz+3xyz+3xyz$ and taking all the terms to the LHS before factoring, but I was unable to.
Also tried using Schur's inequality, but that didn't work either.
AI: Rearranging $$9xyz+(x+y+z)^3-4(x+y+z)(xy+xz+yz)$$ gives us
$$\sum_{cyc} (x^3+xyz-x^2y-x^2z) $$
which is $\ge 0$ because it is Schur's inequality
|
H: Amount of homomorphisms from $V$ to $S_4$
I am trying to count the number of homomorphisms from the Klein four-group to $S_4$, i.e. the homomorphisms $f: V_4\to S_4$.
I think I am almost there, but just wanted to let you guys know my way of reasoning, and ask if it is correct: the elements of $V_4$ all have order 2 (except the identity). So they all need to go to an element of order 2 in $S_4$ (or to the identity), right?
The elements of $S_4$, with order 2, are: (1 2), (1 3), (1 4), (2 3), (2 4), (3 4), (1 2)(3 4), (1 3)(2 4), (1 4)(2 3). So that is 9 elements.
Then I am a bit stuck. Am I at least going in the right direction?
AI: Hint: $V \cong C_2 \times C_2$. Consider the possible homomorphisms $C_2 \to S_4$.
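Following the hint, a homomorphism is determined by the images $x, y$ of the two generators of $C_2 \times C_2$; these must satisfy $x^2 = y^2 = e$ and commute (so that $(xy)^2 = e$). A brute-force count along these lines (plain Python, my own sketch):

```python
from itertools import permutations

def comp(p, q):                 # composition: (p . q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

S4 = list(permutations(range(4)))
e = tuple(range(4))
sq1 = [p for p in S4 if comp(p, p) == e]   # identity plus the 9 involutions
homs = sum(comp(x, y) == comp(y, x) for x in sq1 for y in sq1)
print(homs)                     # prints 52
```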
|
H: Equivalence of $\sigma$-field $\sigma(X,Y)$ and $\sigma (X+Y,X-Y)$
Consider any two random variables $X$ and $Y$. Is it correct to say that the $\sigma$-fields $\sigma(X,Y)$ and $\sigma (X+Y,X-Y)$ are equal?
In my logic this is correct, because,
$$
\sigma(X+Y,X-Y) = \sigma\{\omega:(X+Y, X-Y)(\omega)\in H\}
$$
where $H\subseteq \mathbb{R}^2$ is a Borel set.
Now
$$
\sigma\{\omega:(X+Y, X-Y)(\omega)\in H\} = \sigma\{(X(\omega),Y(\omega))\left[\begin{matrix}
1 & 1\\
1 & -1\\
\end{matrix}\right] \in H\} \\
= \sigma\{\omega:(X(\omega), Y(\omega))\in H'\} = \sigma(X,Y)
$$
where $H'\subseteq \mathbb{R}^2$; $H'$ is nothing but the image of $H$ under an invertible linear map (a scaled orthogonal transformation).
Hence the proof.
Is my logic correct? If not what should I do?
AI: Here is a more general result.
If $X : \Omega \rightarrow (E ,\mathcal E)$ and $Y : \Omega \rightarrow (F ,\mathcal F)$ are random variables such that $Y = f(X)$, where $f : (E ,\mathcal E) \rightarrow (F ,\mathcal F)$ is a measurable function. Then $\sigma(Y) \subset \sigma(X)$.
proof: By definition $\sigma(Y) = \{ Y^{-1}(B) : B \in \mathcal F\}$. Let $B \in \mathcal F$. Since $f$ is measurable, we have $A = f^{-1}(B) \in \mathcal E$. Moreover, $Y^{-1}(B) = (f \circ X)^{-1} (B) = X^{-1}(f^{-1}(B )) = X^{-1}(A)$ so $Y^{-1}(B) \in \sigma(X)$. Therefore $\sigma(Y) \subset \sigma(X)$.
Now if you take $(X,Y)$ and $(X+Y,X-Y)$ respectively instead of the $X$ and the $Y$ in the result above, you have
$$
(X+Y,X-Y) = f(X,Y)
$$
where $f : (x,y) \in \mathbb R^2 \mapsto (x+y,x-y)$ is continuous and thus measurable so
$$
\sigma(X+Y,X-Y) \subset \sigma(X,Y).
$$
You also have $(X,Y) = g(X+Y,X-Y)$ with
$g : (a,b) \in \mathbb R^2 \mapsto (\frac{a+b}{2},\frac{a-b}{2})$ so
$$
\sigma(X,Y) \subset \sigma(X+Y,X-Y)
$$
which gives you
$$
\sigma(X,Y) =\sigma(X+Y,X-Y).
$$
|
H: Proof that no integers satisfy $x^2+2y^2 = p$
Suppose $p$ is a prime such that $p\equiv 5,7 \ \pmod{8}$, then I want to show that there exist no integral solutions $(x,y)$ such that $x^2+2y^2=p$.
I tried a simple approach of just computing with $x,y=0,1,\dots,7$. But I want to know a more technical approach with a good explanation. I hope someone can help me.
AI: Hint: Since $p$ is odd, $x$ is odd, which directly gives $x^2 \equiv 1 \pmod 8$. Then what can you say about $$2y^2 = p - x^2?$$
|
H: Ratio of two infinite cardinal numbers
Suppose $G$ is the group of all functions $[0,1]\to\mathbb{Z}$ (under pointwise addition). Let $H$ be the subgroup defined as $H=\{f\in G: f(0)=0\}$. Then, what can be said about the cardinality of $H$ and its index in $G$?
I think the cardinality of $H=G=2^{c^2}$, where $c$ is the cardinality of the natural numbers. Hence, I think the index is countable. Am I right? Any hints ? Thanks beforehand.
AI: Hint. Prove that $G/H\simeq\mathbb{Z}$ by exhibiting a surjective group morphism $G\to \mathbb{Z}$ with kernel $H$ (there is an obvious one).
Then deduce that $G/H$ has cardinality $\aleph_0$ ($c$, with your weird notation).
|
H: Modification of a reduced homology
Background:
In Hatcher's Algebraic Topology Chapter 2, reduced homology with coefficient $R$ of a space $X$ is defined as the homology groups of a chain complex
$$
...\to C_{2}(X)
\overset{\partial_{2}}\to C_{1}(X)
\overset{\partial_{1}}\to C_{0}(X)
\overset{\epsilon}\to R
\to 0
$$
where $\epsilon(\Sigma_{i}n_{i}\sigma_{i}):=\Sigma_i n_{i}$.
If my understanding is correct, reduced homology is a better-behaved alternative to the usual simplicial/singular homology.
Detail
I came up with a further modification of singular homology for an $n$-dimensional orientable, compact manifold $X$, defined as the homology of the chain complex
$$
R
\overset{\eta}\to C_{n}(X)
\to ...\to C_{2}(X)
\overset{\partial_{2}}\to C_{1}(X)
\overset{\partial_{1}}\to C_{0}(X)
\overset{\epsilon}\to R
\to 0\\
\text{where }\eta(r) := r\cdot [X]
$$
which I refer to as reduced-reduced homology $H_{\bullet}'(X)$ of $X$.
Dually, one can also define the reduced-reduced cohomology ${H^{\bullet}}'(X)$ of $X$ by modifying the de Rham complex of $X$;
$$
R
\overset{\Delta}\to\Omega^{0}(X)
\to ...\overset{d_{n-1}}\to \Omega^{n-1}(X)
\overset{d_{n}}\to \Omega^{n}(X)
\overset{\int}\to R
\to 0
$$
where $\Delta(r)$ is a constant scalar field on $X$ and $\int$ is the usual integration of a volume form on $X$.
It is easily checked that the above defined (co)chains are indeed (co)chain complexes.
Question:
My question is:
Do the reduced-reduced (co)homologies behave well? In particular, does Poincaré duality $H_{n-p}'(X)\cong {H^{p}}'(X)$ hold?
Are there any references/articles about this concept?
Thank you in advance.
AI: I think you lose functoriality altogether, but I'm not entirely sure (feel free to correct me if I'm wrong). Suppose you have a map $X \to Y$. Then you have chain maps, in particular a homomorphism $C_i(X) \to C_i(Y)$, and this also works for the augmented $R$ (the square with $C_0(X)$, $C_0(Y)$ and the two $R$'s commutes). However, it isn't apparent to me what happens with the square on the left. In the case where the dimensions of $X$ and $Y$ are the same, we are trying to show commutativity of a square with two $R$'s, and $C_n(X)$ and $C_n(Y)$. For this to commute you might need some really strong conditions, such as that $f$ maps the fundamental class of $X$ to the fundamental class of $Y$. Now, when $X$ and $Y$ are of different dimensions, the left square doesn't even make sense because you have mismatched groups.
|
H: Most significant bit; bit of the greatest value
A binary number is a number expressed in the binary numeral system. Let $n$ be a binary number. Each digit in the number is called a bit. The definition of the most significant bit is the following: the most significant bit is the bit position in a binary number with the greatest value.
What does "the greatest value" mean here? Could someone explain with an example? (I.e., what is the bit of greatest value in a binary number?)
AI: This is usually used in computer programming. There we usually store numbers in words of some size. As we write the numbers, the leftmost bit is the most significant. In unsigned binary with $8$ bit words we can store $0_{10}=00000000_2$ to $255_{10}=11111111_2$. The leftmost bit has a value of $128_{10}$ while all the others have less value, ranging from $64$ down to $1$.
When we borrow that for math, we do not use a fixed word length, so the most significant bit is the first $1$. We have $77_{10}=1001101_2$ and the most significant bit is the first $1$, with a value of $64$.
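In code, the most significant bit's value is easy to extract (plain Python, matching the $77 \to 64$ example above):

```python
n = 0b1001101                         # 77
msb_value = 1 << (n.bit_length() - 1)
print(bin(n), msb_value)              # 0b1001101 64
```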
|
H: Solve the following ODE
They ask me to solve
$$y' +2y + \int_{0}^{x} y(t)dt = f(x)$$ with $y(0)=0$ and
$$
f(x) =
\begin{cases}
0, & x < 5 \\
2, & x \geq 5
\end{cases}
$$
I don't know how to do it.
AI: Solve the following differential equation:
$$y^\prime +2y + \int_{0}^{x} y(t)dt = f(x)\tag{1}$$
where $y(0)=0$, and
$$f(x)= \begin{cases} 0 & x < 5 \\ 2 & x \geq 5 \end{cases}$$
Solution:
Assuming $t^\prime=\frac{dt}{dx}$ for any function $t$!
Substitute
\begin{align*}
u&=y+\int_{0}^{x} y(t)dt\\
u^\prime&=y^\prime+y(x)\frac{dx}{dx}-y(0)\frac{d(0)}{dx}\\
u^\prime&=y^\prime+y\\
\end{align*}
to get
\begin{align*}
u^\prime+u&=0& \forall\ x<5\tag{2}\\
u^\prime+u&=2& \forall\ x\ge5\tag{3}\\
\end{align*}
Solving equation $(2)$:
\begin{align*}
\frac{du}{dx}&=-u\\
-\frac{du}{u}&=dx\\
-\int\frac{du}{u}&=\int dx\\
\ln u&=-x+c_1\\
u&=e^{-x+c_1}\\
u^\prime&=-e^{-x+c_1}\\
y^\prime+y&=-e^{-x+c_1}\\
y^\prime+y&=-e^{-x+\displaystyle\lim_{c_1\to-\infty}c_1}&(\because y^\prime(0)=y(0)=0\text{ from equation }1)\\
y^\prime+y&=0\\
y&=e^{-x+c_2}\\
y&=-e^{-x+\displaystyle\lim_{c_2\to-\infty}c_2}&(\because y(0)=0)\\
\Rightarrow y(x)&=0\ \forall\ x<5\tag{4}
\end{align*}
Solving equation $(3)$:
\begin{align*}
\frac{du}{dx}&=-u+2\\
-\frac{du}{u-2}&=dx\\
-\int\frac{du}{u-2}&=\int dx\\
\ln (u-2)&=-x+c_3\\
u&=e^{-x+c_3}+2\\
u^\prime&=-e^{-x+c_3}\\
y^\prime+y&=-e^{-x+c_3}\\
y^\prime(5)+y(5)&=-e^{-5+c_3}\\
\frac{d}{dx}(e^xy)&=-e^{c_3}\\
ye^x&=-e^{c_3}x+c_4\\
y&=-e^{c_3}xe^{-x}+c_4e^{-x}\\
y^\prime&=e^{c_3}e^{-x}(x-1)-c_4e^{-x}\\
\text{We know that }\qquad\qquad\qquad\qquad y^\prime(5)+2y(5)+\int_0^5 y(t)dt&=2\\
y^\prime(5)+2y(5)&=2&
\end{align*}
which is just one equation to obtain two variables $c_3$ and $c_4$.
The question seems incomplete or I am mistaken. Please comment!
|
H: How can i prove that if some set is a subset of every set in a family of sets, then it is a subset of the intersection of family too.
The question is stated as:
Prove: $(\forall B)(B \in F \Rightarrow C \subseteq B) \Rightarrow C \subseteq \bigcap_{A \in F}A$
That's what I thought, in words: if we assume that for every $B$, if it belongs to $F$ then $C$ is a subset of $B$, we have that all elements of $C$ are in every set of $F$; thus the intersection of $F$ contains at least the elements of $C$, and then $C \subseteq \bigcap F$.
I tried to do something with symbols but I got stuck at the second line:
$$(\forall B)(B \in F \Rightarrow C \subseteq B)$$
$$(\forall B)(B \in F \Rightarrow (\forall x)(x \in C \Rightarrow x \in B))$$
$$(\forall B)(B \notin F \lor (\forall x)(x \notin C \lor x \in B))$$
I don't know how to proceed, so I want to know if the textual part is correct, how this proof should be done, and if it's possible to do some operation with the quantifiers.
AI: Assume that the LHS holds. We want to show the RHS, i.e. if $x\in C$, then $x\in A$ for all $A\in F$.
For this, let us take $x\in C$ and fix some $A\in F$. The LHS now implies that $x\in C\subset A$, so $x\in A$. As this holds for arbitrary $A\in F$, we are done.
|
H: Why can a deterministic first-mover always be exploited in 2-player zero-sum games?
I am reading a book about Bandit Algorithms in which the authors make the following observation. I am wondering if anyone could point me to an explanation of why the first-mover can always be exploited unless following a randomized strategy:
Readers familiar with game theory will not be surprised by the need for randomisation. The interaction between learner and adversarial bandit can be framed as a two-player zero-sum game between the learner and environment. The moves for the environment are the possible reward sequences, and for the player they are the policies. The pay-off for the environment/learner is the regret and its negation respectively. Since the player goes first, the only way to avoid being exploited is to choose a randomised policy.
AI: The problem is that the idea of a zero-sum game is being conflated with a dynamic or sequential move game.
In a zero-sum game with a finite number of strategies, the players move simultaneously. But if my payoff is the negative of your payoff and vice versa, you can think of me as moving to minimize your payoff given that you are maximizing. Conversely, since $-\max = \min-$, you can think of me as maximizing my payoff subject to you acting to minimize my payoff. Since the game is zero sum, we must be randomizing (or if there are many equilibria, like the game in which all our payoffs are zero, there is one that involves randomization). Think of it like rock-paper-scissors: if I strictly prefer some strategy, that means you must be losing out, and you can make me worse off and you better off by adjusting your strategy. That must mean we are all indifferent over our pure strategies, given our expectations of how our opponents will behave. That means I pick my mixed strategy to make you indifferent over your pure strategies, and vice versa. This is usually called the minimax theorem: https://mathworld.wolfram.com/MinimaxTheorem.html .
All of that has no "timing". It is a simultaneous move game, and neither player moves first or second. It is only because the operators in the minimax theorem appear in "order" that you might get that impression. In games that genuinely have timing, the correct solution concept is Subgame Perfect Nash Equilibrium. In an SPNE, randomization only happens when players are indifferent over some of their payoffs at terminal histories in the game, and this is a fragile and somewhat rare situation (if you perturb the payoffs a very small amount, such indifferences are broken, and there will typically be a pure strategy SPNE). In your example, in the $\min \max$, the second mover does not get to observe the outcome of the first mover's randomization before picking their strategy, only the randomization itself (or else the second mover could always pick an optimal response to the first mover, which they typically do not; they simply know the distribution of states the first mover picked). To keep the first mover uncertain and maximize their payoff, the second-mover therefore plays randomly, just as in rock-paper-scissors.
|
H: How to prove using Zorn's lemma that if $X$ has at least two elements, then there is a 1-1 function $H\colon X\to X$ such that $H(x)\neq x$ for all $x$?
This is the problem.
Let $X$ be an arbitrary set having at least two elements. Show that there is 1-1 function $H: X \to X$ such that for all $x \in X$, $H(x) \neq x$, by using Zorn's Lemma.
To prove this by using Zorn's Lemma, I have to construct an ordered set.
AI: Use Zorn's lemma to get a maximal family of pairwise disjoint two-element subsets of $X$. If you're lucky, these pairs will cover all of $X$ and you can define $H$ to be the function that maps each $x\in X$ to the other element of its pair. If you're unlucky and there's some $a\in X$ not covered by your maximal family of pairs, then there's only one such $a$ by maximality. Adjoin $a$ to one of your pairs, say $\{p,q\}$, define $H$ on the $3$-element set $\{a,p,q\}$ by $a\mapsto p\mapsto q\mapsto a$, and define $H$ on all your other pairs as in the "lucky" case.
|
H: Convergence of series problem
Consider the sequence given by $$x_n=\sqrt{-1+\sqrt{1+\frac{1}{n^{\alpha}}}}$$
For what values of $\alpha>0$ is $(x_n) \in \ell^1$?
I feel that for $\alpha>4$, $(x_n)\in \ell^1$, and for other values of $\alpha$ the series $\sum |{x_n}|$ diverges.
How to prove it rigorously?
AI: As a hint: maybe you know that $$x\to 0 \implies (1+x)^k\sim1+kx,$$ so you can start from here with $x=\frac{1}{n^{\alpha}}\to 0$:
$$n\to \infty\\x_n=\sqrt{-1+\sqrt{1+\frac{1}{n^{\alpha}}}}=\sqrt{-1+{(1+\frac{1}{n^{\alpha}})^{\frac12}}}\\\sim \sqrt{-1+{(1+\frac{1}{2n^{\alpha}})}} \\= \sqrt{\frac{1}{2n^{\alpha}}}=\frac{1}{\sqrt 2}\cdot\frac{1}{n^{\frac{\alpha}{2}}}$$ so by limit comparison with $\sum n^{-\alpha/2}$, the series $\sum |{x_n}|$ converges for $\alpha>2$ and diverges for $0<\alpha\le 2$.
|
H: A number is selected from each of these sets,say $p$, $q$, $r$ respectively. The probability that $r=|p-q|$ is?
Let $P$, $Q$, $R$ be the sets of first $8$ natural numbers, first $12$ natural numbers,first $20$ natural numbers respectively.
A number is selected from each of these sets,say $p$, $q$, $r$ respectively.
The probability that $r=|p-q|$ is?
As $r$ cannot be zero, $\Pr (p \ne q) = 1-\frac 8 {12 \times 8} = \frac{11}{12}$.
How do I proceed further?
(natural numbers start from $1$)
AI: For any choice of $p$ there are exactly $11$ choices of $q$ for which $p\neq q$. Thus the probability that $|p-q|\neq 0$ is $\frac {11}{12}$.
For any choice of $p,q$ such that $|p-q|\neq 0$ there is a unique choice of $r$ such that $r=|p-q|$. Thus the desired answer is $$\boxed {\frac {11}{12}\times \frac 1{20}}$$
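The answer $\frac{11}{12}\cdot\frac1{20}=\frac{11}{240}$ can be confirmed by brute force (plain Python):

```python
from fractions import Fraction
from itertools import product

P, Q, R = range(1, 9), range(1, 13), range(1, 21)
hits = sum(r == abs(p - q) for p, q, r in product(P, Q, R))
print(Fraction(hits, 8 * 12 * 20))   # 11/240
```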
|
H: How many elements of $M$ are similar to the following matrix?
Let k be the field with exactly $7$ elements. Let $M$ be the set of all $2\times 2$ matrices with entries in k. How many elements of $M$ are similar to the following matrix?
$ \begin{pmatrix}
0 & 0 \\
0 & 1
\end{pmatrix}$
I know that a matrix in $M$ is similar to the given matrix if and only if its characteristic polynomial equals $x(x-1)$ (this criterion works here because the roots $0,1$ are distinct, so any such matrix is diagonalizable), i.e., $A$ is similar to the given matrix if and only if $\operatorname{tr}(A)=1$ and $\det(A)=0$. We can take all matrices of the form:
$$
\begin{pmatrix} a & b \cr c & 1-a\end{pmatrix}
$$
with $a(1-a)=bc$ in k. But this method takes long time for calculation but is full-proof.
One of the comments here says: Do you know about eigenvalues and eigenvectors? If you do you can quite quickly count that there are $\left(\frac{p^2-1}{p-1}\right)\left(\frac{p^2-p}{p-1}\right)$ of them, where $p=7$.
Can someone please explain the proof of how the above result comes? Thanks for the help.
AI: Let $B$ denote the matrix
$$
B = \pmatrix{0&0\\0&1}.
$$
Regarding the last paragraph: we note that $A$ is similar to $B$ if and only if it is diagonalizable with eigenvalues $0,1$. With that established, we can completely specify such a matrix by selecting a one-dimensional eigenspace associated with $0$ and a one-dimensional eigenspace associated with $1$. Because the total number of one-dimensional subspaces of $K^2$ is $n = \frac{p^2 - 1}{p - 1} = 8$, the total number of ways that a matrix similar to $B$ can be constructed is $n(n-1) = 56$, which is the desired answer.
Regarding your method: it is indeed possible to count the matrices of the form
$$
A = \pmatrix{a & b\\c & 1-a}.
$$
We divide these matrices into two cases.
Case 1: $a = 1$ or $a = 0$. We must have $bc = 0$, so $b = 0$ or $c = 0$. Each matrix that satisfies this description can be built by
Selecting $a = 0$ or $a = 1$ ($2$ possibilities),
Selecting $b,c$ with one equal to $0$ ($7^2 - 6^2 = 13$ possibilities).
Case 2: We can build the remaining matrices as follows:
Select $a$ ($5$ possibilities)
Select a non-zero $b$ ($6$ possibilities)
From there, we are forced to select $c = \frac{a(1-a)}{b}$.
In total, we have
$$
2 \times 13 + 5 \times 6 = 56
$$
matrices.
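Both counts agree with a direct enumeration over the field with $7$ elements (plain Python sketch):

```python
from itertools import product

p = 7
count = sum((a * (1 - a) - b * c) % p == 0        # det = 0, given trace = 1
            for a, b, c in product(range(p), repeat=3))
print(count)   # 56
```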
|
H: If $ 3a+2b+c=7$ then find minimum value of $ a^2+b^2+c^2$
Question:- If $ 3a+2b+c=7$ then find the minimum value of $ a^2+b^2+c^2 $.
I used vectors to solve this problem.
Let $$α=3\hat{i}+2\hat{j}+\hat{k}$$
$$β=a\hat{i}+b\hat{j}+c\hat{k}$$
Using Cauchy-Schwarz inequality
we have, $|α.β|\le |α| |β|$
$=|3a+2b+c|\le\sqrt{14}\sqrt{a^2+b^2+c^2}$
$= 7\le\sqrt{14}\sqrt{a^2+b^2+c^2}$
So, $a^2+b^2+c^2\ge \frac72$
Therefore, the minimum value of $a^2+b^2+c^2$ is $\frac72$
I want to know whether there are any other methods to find the minimum value of
$a^2+b^2+c^2$, such as using inequalities, or calculus by considering the function $f(x,y,z)=x^2+y^2+z^2$.
AI: There is a variant of the Cauchy-Schwarz inequality known among a certain community of problem solvers as Titu's lemma:
$a^2+b^2+c^2 = \dfrac{(3a)^2}{9}+\dfrac{(2b)^2}{4}+\dfrac{c^2}{1} \ge \dfrac{(3a+2b+c)^2}{9+4+1} = \dfrac{49}{14} = \dfrac{7}{2}$.
|
H: Number sequence with a given formula
$a_{n}$ is a sequence given by the formula $a_{n+1}=\log_2(a_n)$, where $a_1=30$. What is the maximum number of elements of the sequence?
I tried to approximate each number, but I think there must be a more systematic way to solve this problem. Can you explain this to me? Note that the sequence only contains real numbers.
AI: $$16<30<65536.$$
Taking the logarithm once,
$$4<\log_230<16.$$
Twice,
$$2<\log_2\log_230<4.$$
Thrice,
$$1<\log_2\log_2\log_230<2.$$
Four times,
$$0<\log_2\log_2\log_2\log_230<1.$$
Five times,
$$\log_2\log_2\log_2\log_2\log_230<0.$$
Hence $a_6<0$, so $a_7=\log_2 a_6$ is not real: the sequence has at most six elements.
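The count can also be confirmed by iterating directly (plain Python):

```python
import math

a, terms = 30.0, [30.0]
while a > 0:
    a = math.log2(a)
    terms.append(a)
print(len(terms), [round(t, 3) for t in terms])
# 6 terms; the sixth is negative, so a seventh real term does not exist
```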
|
H: Is it a convex set?
I have the following set
$$A = \{ (x,y) \in \Bbb R^{2} : \log x + y^{2}\ge 1, x \ge 1, y \ge 0 \}$$
and I need to know if it's convex or not. I tried to have a look at the function $-\log x-y^{2}$, but its Hessian matrix is indefinite and I don't know what else to do.
AI: $(e,0)$ is in the set.
$(1,1)$ is also in the set.
The corresponding straight line connecting the two points are
$$y-1 = \frac{-1}{e-1}(x-1)$$
A point on that line segment is $x=2$ and $y=1-\frac1{e-1}$.
$$\ln 2+(1-\frac1{e-1})^2=\ln 2 + 1 - \frac2{e-1}+\frac1{(e-1)^2} \approx 0.693 + 1 -1.164+0.339 < 1$$
|
H: Does a continuous bijection from a compact, hausdorff space imply it is an homeomorphism?
Let $f:X \rightarrow Y$ be a bijective, continuous map between two topological spaces. Does $X$ being compact and Hausdorff imply that $f$ must be a homeomorphism? I think it doesn't, but I cannot find an example.
AI: Let $X$ be your favourite compact Hausdorff space with at least $2$ points, let $Y$ be $X$ with the trivial topology, consider the identity as $f$.
|
H: Proof that Hadamard matrices of order $4k+2$ don't exist
It's known that Hadamard matrices can only exist for orders $1$, $2$ and $4k$. It's easy to show that there are no Hadamard matrices of order $2k+1$. But what is the proof that there are no Hadamard matrices of order $4k+2$?
AI: Assume the Hadamard matrix has $\ge3$ rows.
Consider the top row. You may as well assume it is all ones. (otherwise change the signs
of various columns). Then row two and row three each
consist of $n/2$ ones and $n/2$ minus ones.
So $n$ is even. As row $2$ and row $3$ are orthogonal, then they agree in $n/2$
entries. So if row $2$ and row $3$ both have ones in $k$ columns, then in $n/2-k$
columns, row $2$ has a one and row $3$ a $-1$ and so in $k$ columns, row $2$
and row $3$ both have $-1$s. So they agree in $2k$ entries: $2k=n/2$ and $n$
is a multiple of $4$.
|
H: How many of the subsets of set $A$ are also subsets of set $B$?
$A=\left\{2,3,4,5,7,8\right\}$
$B=\left\{3,4,5,7,10\right\}$
How many of the subsets of set $A$ are also subsets of set $B$?
I find $2^4=16$, because $A\cap B=\left\{3,4,5,7\right\}$.
But the given answer is $32$. Why? Am I wrong?
AI: Here are your $2^{\#A\cap B}= 2^4=16,$ subsets
$$\emptyset$$
$$\{3\},\{4\},\{5\},\{7\}$$
$$\{3,4\},\{3,5\},\{3,7\}$$
$$\{4,5\},\{4,7\},\{5,7\}$$
$$\{3,4,5\},\{3,4,7\},\{3,5,7\},\{4,5,7\}$$
$$\{3,4,5,7\}=A\cap B$$
|
H: Let $(G,\cdot)$ be a set with an associative operation. Show that the following two Axioms are equivalent
Let $(G,\cdot)$ be a set with an associative operation. Show that the following two Axioms are equivalent:
(a) : there exists a left-hand neutral element $e'$, so that $\forall a \in G: e'a=a$
(b): There exists a neutral element $e$, so that $\forall a\in G:ea=ae=a$
My attempt:
$(a)\Longrightarrow (b) :$
Let $e'$ be a left-hand neutral element of $(G,\cdot)$.
Now let's take $a,b \in G$:
$$ab=a(e'b)=(ae')b=ab.$$
So in order for the associativity on $(G,\cdot)$ to hold, $e'$ has to be right-hand neutral as well.
$(b) \Longrightarrow (a):$
Is obvious ?
Is this correct? I mean, it's quite obvious; that's why I suspect myself of jumping to conclusions.
AI: The two statements are not equivalent. Although (b) implies (a), it is not the case that (a) implies (b).
To verify this, let $G=\{e,a\}$, and define the operation as follows: $ea=a$, $aa=a$, $ae=e$, $ee=e$. That is, the result of multiplying $x$ by $y$ is always $y$.
This is easily seen to be associative, since $x(yz) = yz = z$ and $(xy)z=z$.
It is also clear that both $e$ and $a$ are left-neutral elements, since $ee=e$, $ea=a$ (and also $ae=e$ and $aa=a$). However, neither $e$ nor $a$ is a two-sided neutral element.
The flaw in your attempt, as has been pointed out, is that associativity does not imply cancellativity. You cannot go from $xy=xz$ to $y=z$, or from $xy=zy$ to $x=z$, from just knowing the operation is associative. But that is what you are attempting to do when claiming that $(ae’)b = ab$ requires $ae’=a$.
|
H: Missing piece for combinatorial proof
I am working on a combinatorial proof, given that $n-k$ is divisible by $g$ (that is, $(n-k) \bmod g = 0$). I am missing a piece for this step:
$\frac{k-1}{n-1}{n+\frac{n-k}{g}-2 \choose n-2}+\frac{k+g}{n}{n+\frac{n-k}{g}-2 \choose n-1}=\frac{k}{n}{n+\frac{n-k}{g}-1 \choose n-1}$
AI: For convenience let $\ell=\frac{n-k}g$; the desired equality is then
$$\frac{k-1}{n-1}\binom{n+\ell-2}{n-2}+\frac{k+g}n\binom{n+\ell-2}{n-1}=\frac{k}n\binom{n+\ell-1}{n-1}\;.\tag{1}$$
We can use the identity $$\binom{n+\ell-1}{n-1}=\binom{n+\ell-2}{n-2}+\binom{n+\ell-2}{n-1}$$ to expand the righthand side of $(1)$ and rearrange it to get
$$\frac{k+g}n\binom{n+\ell-2}{n-1}-\frac{k}n\binom{n+\ell-2}{n-1}=\frac{k}n\binom{n+\ell-2}{n-2}-\frac{k-1}{n-1}\binom{n+\ell-2}{n-2}\;,$$
which immediately simplifies to
$$\frac{g}n\binom{n+\ell-2}{n-1}=\frac{n-k}{n(n-1)}\binom{n+\ell-2}{n-2}\;.$$
Now $n-k=g\ell$, so we can multiply both sides by $\frac{n}g$ to get
$$\binom{n+\ell-2}{n-1}=\frac{\ell}{n-1}\binom{n+\ell-2}{n-2}\;.$$
And
$$\frac{\ell}{n-1}\binom{n+\ell-2}{n-2}=\frac{(n+\ell-2)!}{(\ell-1)!(n-1)!}=\binom{n+\ell-2}{n-1}\;,\tag{2}$$
so all that you have to do to prove $(1)$ is start with $(2)$ and reverse the steps that I used to get from $(1)$ to $(2)$; they are all reversible.
|
H: How can I prove $\int_{0}^{\infty}\frac{x^2+1}{(x^4+2ax^2+1)(x^s+1)} dx=\frac{π}{2\sqrt{2a+2}}$
Question:- Prove that $$\int_{0}^{\infty}\frac{x^2+1}{(x^4+2ax^2+1)(x^s+1)} dx=\frac{π}{2\sqrt{2a+2}}$$
I recently got stuck on evaluating this integral; the result is independent of $s$. With $s=2$, I verified the result. I don't know how to start evaluating the integral, since known methods such as substitution or the Beta function (which often shows up in these kinds of problems) don't seem to help here.
Could someone help me?
AI: Let $$I = \int_{0}^{\infty}\frac{x^2+1}{(x^4+2ax^2+1)(x^s+1)} dx$$
Start by substituting $x = 1/t$
$$\implies I = \int_{0}^{\infty}\frac{t^2+1}{(t^4+2at^2+1)(1+\frac1{t^s})} dt$$
$$ = \int_{0}^{\infty}\frac{(t^2+1)t^s}{(t^4+2at^2+1)(t^s+1)} dt$$
$$ = \int_{0}^{\infty}\frac{(t^2+1)(t^s+1)-(t^2+1)}{(t^4+2at^2+1)(t^s+1)} dt$$
$$ = \int_{0}^{\infty}\frac{t^2+1}{(t^4+2at^2+1)} dt - I$$
$$\implies 2I = \int_{0}^{\infty}\frac{t^2+1}{(t^4+2at^2+1)} dt$$
Now divide the numerator and denomintor by $t^2$
$$\implies 2I = \int_{0}^{\infty}\frac{1+\frac{1}{t^2}}{(t-\frac1t)^2+2a+2} dt$$
Now the substitution $u = t-\frac1t$ (so $du = (1+\frac1{t^2})\,dt$, with $u$ ranging over all of $\mathbb{R}$) gives $$2I=\int_{-\infty}^{\infty}\frac{du}{u^2+2a+2}=\frac{\pi}{\sqrt{2a+2}},$$ and hence $I=\frac{\pi}{2\sqrt{2a+2}}$.
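A numerical check of the $s$-independence, assuming SciPy (the value of $a$ is an arbitrary sample):

```python
import math
from scipy.integrate import quad

a = 1.3                                # any a with 2a + 2 > 0 (assumed value)
for s in (0.5, 2.0, 7.0):              # the result should not depend on s
    f = lambda x: (x*x + 1) / ((x**4 + 2*a*x*x + 1) * (x**s + 1))
    print(s, quad(f, 0, math.inf)[0], math.pi / (2 * math.sqrt(2*a + 2)))
```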
|
H: Show one diagonal of $B D E C$ divides the other diagonal in the ratio?
Consider a triangle $A B C$. The sides $A B$ and $A C$ are extended to points $D$ and $E,$ respectively, such that $A D=3 A B$ and $A B=3 A C$. Then one diagonal of $B D E C$ divides the other diagonal in the ratio ?
My approach
I am trying to find the relation between $\triangle BOC$ and $\triangle DOE$ ($O$ is the intersection point of the diagonals).
But I am unable to do so. Although $AD/AB = AE/AC$, I can't show that $\triangle ABC$ and $\triangle ADE$ are similar.
Am I going wrong?
AI: Presumably you mean that $AE=3AC.$ If that is the case, then $\triangle ABC\sim \triangle ADE,$ so $BC$ is parallel to $DE.$ By the theorem on a transversal intersecting parallel lines, $\angle BCD=\angle CDE$ and $\angle CBE=\angle BED.$ So $$\triangle BOC\sim \triangle EOD.$$ By similarity ratios, $$\frac{BO}{EO}=\frac{CO}{DO}.$$ Is that what you were seeking?
|
H: Example of matrix $A\neq I$ such that $A^3=I$
Let $A \neq I$ be a $3 \times 3$ matrix. I need to find an example of $A$ that satisfies $A^3 = I$.
Is there any "smart" way to do this? All I can think of is to either multiply $A$ for 3 times and then try to guess the factors or try to solve $A^2 = A^{-1}$. In either case I feel that it can be solved in a smarter way. Any ideas?
AI: Fix $3$ linearly independent vectors $e_1$, $e_2$, and $e_3$ of $\Bbb R^3$ (the vectors of the standard basis, say) and consider the map $f\colon\Bbb R^3\longrightarrow\Bbb R^3$ such that $f(e_1)=e_2$, $f(e_2)=e_3$ and that $f(e_3)=e_1$. Now, let $A$ be the matrix of $f$ with respect to some basis of $\Bbb R^3$. Can you check that such a matrix will work?
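For instance, with the standard basis this construction yields the cyclic permutation matrix, which is easy to verify numerically (assuming numpy):

```python
import numpy as np

A = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])             # sends e1 -> e2 -> e3 -> e1
print(np.linalg.matrix_power(A, 3))   # the identity matrix, while A != I
```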
|
H: What is $P(\min\{X, Y\} = 1)$?
If $y=1,2,3$ and $x=0,1,2$ where $P(X=x, Y=y) = k(2x+3y)$
I need to find $P(\min\{X, Y\} = 1)$.
I thought I need to use that the CDF of the minimum involves $1-F_X$, and maybe to find $k$ by summing the pmf up to $1$?
would love any direction on this.
AI: Given that $P(X=x, Y=y) = k(2x+3y)$
$\sum_{(x,y)}P(X=x,Y=y) = k(0+3) +k(2 + 3) + k(4 +3) + k(0+6) + k(2+6) + k(4+6) + k(0+9) + k(2+9) + k(4+9) = 72k = 1$
Hence $k = \frac{1}{72}$
Now, $P(\min(X,Y)=1) = P(X=1,\ Y\in\{1,2,3\}) + P(X=2,\ Y=1) = 24k + 7k = \dfrac{31}{72}$
|
H: If $X$ follows an $\operatorname{Exp}(\theta)$, does $1/X$ follow an $\operatorname{Exp}(1/ \theta)$?
I heard a teacher say that if $$X \sim \operatorname{Exp}(\theta)$$
then $$\frac{1}{X} \sim \operatorname{Exp}\left(\frac{1}{\theta}\right)$$
I don't trust this teacher because he has given us wrong answers before.
I tried to obtain this myself:
If $y = g(x) = \frac{1}{x}$, then $$f_Y(y) = f_X(g^{-1}(y)) \cdot \left|\frac{d}{dy}g^{-1}(y)\right|$$
$$f_Y(y) = \theta \cdot e^{-\theta/y} \cdot \frac{1}{y^2}$$
where $y \in (0, +\infty)$.
I don't see an $\operatorname{Exp}(1/\theta)$ here. Am I doing something wrong?
AI: You are right and your transformation is correct!
But even without doing any calculation, what is true is that $X\sim \operatorname{Exp}(\theta)=\Gamma(1,\theta)$ (shape $1$, rate $\theta$),
and this means that $\frac{1}{X}\sim \text{Inv-Gamma}(1,\theta)$, which is not an exponential distribution.
See "Related distributions" in this link
|
H: Find a vector that is as proportional as possible to a given vector under a set of linear constraints
Let $d\in \mathbb{R}^n, \ b\in \mathbb{R}^n, \ A \in \mathbb{R}^{m\times n},\ \lambda\in \mathbb{R}$.
Let $x=\lambda d+\varepsilon $, where $\varepsilon\in \mathbb{R}^n$.
Let $E_\lambda =\left \{\varepsilon\in \mathbb{R}^n\ \vert \ A(\lambda d+\varepsilon)\leq b \right\}$.
Find $\varepsilon_o\in E_\lambda$ such that $(\forall \varepsilon^{\prime} \in E_\lambda) \Vert \varepsilon_o \Vert \le \Vert\varepsilon^{\prime}\Vert$.
AI: So $d$, $b$, $A$, $\lambda$ are fixed and $x$ is variable. You can solve this as a quadratic programming problem: minimize $\|x-\lambda d\|^2$ subject to $Ax \le b$, then set $\varepsilon_o = x^\star - \lambda d$ for the minimizer $x^\star$.
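A minimal sketch of that QP using cvxpy (an assumed solver choice, not prescribed by the problem; the data below is synthetic, with $b$ constructed so the feasible set is nonempty):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 3
A = rng.standard_normal((m, n))
d = rng.standard_normal(n)
lam = 0.5
b = A @ rng.standard_normal(n) + 1.0  # offset guarantees {x : Ax <= b} is nonempty

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.sum_squares(x - lam * d)), [A @ x <= b])
prob.solve()
eps_opt = x.value - lam * d  # the minimum-norm perturbation epsilon_o
print(np.linalg.norm(eps_opt))
```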
|
H: inequality involving exponentials and square root function
I came across the following inequality from these notes. It states that
$\frac{e^{-\lambda}}{\sqrt{1-2\lambda}} \le e^{2\lambda^2}$ for $\lambda < \frac{1}{4}$.
Is there a way to show this using calculus? I tried setting $f(\lambda) = e^{2\lambda^2} -\frac{e^{-\lambda}}{\sqrt{1-2\lambda}} $. Since $f(0) = 0$, I tried showing that at least for $\lambda \in (0,\frac{1}{4}), f'(\lambda) > 0$ so that $f(\lambda) > 0$. However, it doesn't seem straightforward to show this.
Are there other alternative approaches to show this inequality?
AI: You can take
$$
g(\lambda)=\sqrt{1-2\lambda}e^{2\lambda^2+\lambda}
$$
and we want to show that $g(\lambda)\ge 1$ for $\lambda\in (0,1/4)$. Note that $g(0)=1$. Then we use the product rule to calculate
$$
g'(\lambda)=\frac{-g(\lambda)}{1-2\lambda}+(4\lambda+1)g(\lambda)
$$
and we want to show that this is positive, that is
$$
(4\lambda+1)\ge \frac{1}{1-2\lambda}
$$
We multiply to clear the denominator, and we get that we need $\lambda$ to satisfy
$$
-8\lambda^2+2\lambda\ge 0
$$
which it does for $\lambda\in (0,1/4)$. Hence $g'(\lambda)\ge 0$ there, so $g$ is nondecreasing and $g(\lambda)\ge g(0)=1$, which is the claimed inequality.
|
H: How do I prove both arcs are equal?
As in the following image, the segments AD, DB, BE and EC make the same angle (x) relative to the diameter of the circle QP. How can I prove the arcs L1 (AB) and L2 (BC) are equal?
AI: This question is answered by simply extending $AD$ and $CE$ and observing that they meet the circle at a common point $B'$. Indeed, reflect $A, B, C$ across the diameter $QP$ to $A', B', C'$; the given angles force $D$ to be collinear with $AB'$ and $E$ to be collinear with $CB'$. Thus $\angle AB'C$ is an inscribed angle and $\triangle DB'E$ is isosceles; therefore $\angle AB'B = \angle BB'C$, and by the inscribed angle theorem the subtended arcs $L_1$ and $L_2$ are equal.
|
H: Condition for splitting the integrals
I wanted to ask what are the conditions for splitting a definite improper integral?
For example, is it true that $$ \int_{0}^{\infty}(f(x)+g(x)) ~dx=\int_{0}^{\infty}f(x)~dx+\int_{0}^{\infty}g(x)~dx $$
Is it enough that both integrals are individually convergent and the integrands are continuous throughout the domain? If not, what are the conditions?
AI: If $f(x)$ and $g(x)$ are each Riemann integrable on $[0,b]$ for every $b > 0$, then $f(x) + g(x)$ is also Riemann integrable there, and $$ \int_0^b (f(x) + g(x))\; dx = \int_0^b f(x)\; dx + \int_0^b g(x)\; dx$$
By definition,
$$\eqalign{\int_0^\infty (f(x) + g(x))\; dx &= \lim_{b \to \infty} \int_0^b (f(x) + g(x))\; dx\cr
&= \lim_{b \to \infty} \left(\int_0^b f(x)\; dx + \int_0^b g(x)\; dx \right)\cr
&= \lim_{b \to \infty} \int_0^b f(x)\; dx + \lim_{b \to \infty}\int_0^b g(x)\; dx \cr
&= \int_0^\infty f(x)\; dx + \int_0^\infty g(x)\; dx}$$
if those last two improper integrals exist.
However, it can happen that the improper integrals of $f$ and $g$ don't exist, but that of $f+g$ does. For example, with $f(x)=\frac{1}{1+x}$ and $g(x)=-\frac{1}{1+x}$, each improper integral diverges, yet $f+g\equiv 0$ integrates to $0$.
|
H: Consider the unbounded region below $y=e^{2x}$ and $y =e^{-2x}$ , and above $y =0$ over interval $(−\infty, \infty)$. What is the area of this region?
Consider the unbounded region below $y=e^{2x}$ and $y =e^{-2x}$ , and above $y =0$ over
interval $(−\infty, \infty)$. What is the area of this region?
How can the area be computed without knowing the terminals ?
AI: Due to the obvious symmetry, we have:
$$A=2\int_{-\infty}^0 e^{2x}\,dx= 2 \lim_{a \to -\infty} \int_a^0 e^{2x}\,dx=2\lim_{a \to -\infty} \frac12\left[e^{2x}\right]_a^0 = 1.$$
|
H: Find the area of the square $ABCD$ in terms of $u$ and $v$.
QUESTION: Given a square $ABCD$ with two consecutive vertices, say $A$ and $B$ on the positive $x$-axis and positive $y$-axis respectively. Suppose the other vertex $C$ lying in the first quadrant has coordinates $(u , v)$. Then find the area of the square $ABCD$ in terms of $u$ and $v$.
MY APPROACH: I was trying to solve it out using complex numbers, but I need a minor help. I have assumed $A$ to be $(x_1+0i)$, $B$ to be $(0+y_2i)$ and $C$ is $(u+vi)$. We know that multiplying a point by $i$ basically rotates it by $90°$, about the origin. Here, $C$ is nothing but the reflection of $A$ about the line $BD$. So if I can somehow rotate $A$ about $B$ by $90°$ then we will get $x_1$ and $y_2$ in terms of $u$ and $v$.
This is where I am stuck. How to rotate a point with respect to another?
Note that this question has been asked before. But I want to know how to solve it using complex numbers..
Any answers, possibly with a diagram will be much helpful..
Thank you so much..
AI: You can think of changing your frame of reference so that you're rotating around $B$. Writing $A = x$ and $B = yi$ (dropping the subscripts), look at the vector $BA = A - B = x - yi$. Rotating it by $90°$ gives $BC = (BA)\,i = y + xi$.
In other words, $C = B + BC = (yi) + (y + xi) = y + (x+y)i$. Therefore $u = y$ and $v = x+y$.
You can calculate the area of the square in terms of $x$ and $y$, then convert that to $u$ and $v$.
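Carrying this out: the side length satisfies $|AB|^2 = x^2+y^2$, and inverting $u=y$, $v=x+y$ gives $x=v-u$, $y=u$, so the area is $(v-u)^2+u^2 = 2u^2-2uv+v^2$.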
|
H: Lebesgue points
Lebesgue points are defined by:
$$\lim_{r \to 0}\frac{1}{Leb(B(x,r))}\int_{B(x,r)}|f(y)-f(x)|dy=0 \text{ for a.e. } x \in \mathbb{R} \text{ w.r.t. } Leb$$
where $f\in L^1(\mathbb{R}), B(x,r)$ is the ball with center $x$ and radius $r$.
My question is: what if we replace $Leb$ by a general measure $\mu$? Do we still have the same result?
AI: There are results of the type you mention for Radon measures on $\mathbb{R}^d$. There are analytic methods (Besicovitch's coverings, Hardy-Littlewood maximal inequalities) to approach them as well as probabilistic methods (Martingales). Here is one such result:
Theorem:
Suppose $\nu$ and $\mu$ are Radon measures on $\mathbb{R}^d$, with $\mu\geq0$. Define, wherever the limit exists,
\begin{aligned}
D_\mu\nu(x)=\lim_{r\rightarrow0}\frac{\nu(\overline{B(x;r)})}{\mu(\overline{B(x;r)})}.
\end{aligned}
Let
\begin{aligned}
\nu=\nu_a+\nu_\perp =\frac{d\nu_a}{d\mu}\cdot\mu + \nu_\perp
\end{aligned}
be the Radon--Nikodym decomposition of $\nu$ with respect to $\mu$. Then
(i). $D_\mu\nu_a$ exists $\mu$--a.s. and
\begin{aligned}
f= \frac{d\nu_a}{d\mu}= D_\mu\nu = D_\mu\nu_a \qquad \text{$\mu$--a.s.},
\end{aligned}
and for $\mu$--a.a. $x\in \mathbb{R}^d$,
\begin{aligned}
\lim_{r\rightarrow0}\frac{1}{\mu\big(\overline{B(x;r)}\big)}\int_{\overline{B(x;r)}}|f(y)-f(x)|\,\mu(dy)=0.
\end{aligned}
(ii). $D_\mu\nu_\perp$ exists $\mu$--a.s. and
\begin{align}
D_\mu\nu_\perp=0\qquad\text{$\mu$--a.s.}
\end{align}
|
H: A set as an algebraic structure
A set is a collection of distinct objects: https://en.wikipedia.org/wiki/Set_(mathematics).
The word distinct implies the identity relation on each set: an element of a set is equal to itself, or $a = a$
(Does the word 'distinct' in the definition of Set implies an equivalence relation between the objects of the collection?).
Thus, every set by the definition is a relational structure with the identity relation.
The identity relation is a binary relation $(a, a)$ that implies the unary identity operation $f(a) = a$
(https://math.stackexchange.com/a/3719648/427611).
Thus, every set by the definition is an algebraic structure with the identity operation.
But Wikipedia states that a set is "a degenerate algebraic structure S having no operations"
(https://en.wikipedia.org/wiki/Algebraic_structure#One_set_with_operations).
Which statement is correct?
If a set is an algebraic structure, can we apply algebraic terms in set theory and vice versa?
E.g., can we call equinumerous sets isomorphic?
Update
Based on the discussion:
The concept of identity (https://en.wikipedia.org/wiki/Identity_(philosophy)) is more fundamental than the concept of a set.
Without the concept of identity it is not possible to introduce a set, since if an element $a$ of a set $A$ is not equal to itself, then the statements "$a$ is an element of $A$" and "$a$ is not an element of $A$" are true at the same time.
Therefore, no matter how exactly we define a set, it has the identity relation.
The binary identity relation "an element is equal to itself" is the unary identity operation "an element is paired with itself".
Therefore, no matter how exactly we define a set, it is an algebraic structure with the identity operation.
Would it be correct?
And many thanks to all who responded.
AI: I'm going to answer in terms of ZF set theory since that is what most of us need.
In ZF, there is no definition of a set. It is a primitive idea. All you have are the notions of set and membership, and those give you equality of sets, and in turn equality of members.
Whether or not two elements of a set are equal is again a matter of set equality. You don't need anything extra to tell you when two elements are equal.
It certainly can't be part of the definition of a set, because an equivalence relation is essentially a special subset of $X\times X$, and if you haven't accepted what a set is yet, you shouldn't be discussing things like subsets of $X\times X$. You will just be going around in circles.
You can grant $X$ an equivalence relation given by the partition of the set into singletons, so that you get the "identity relation", but it doesn't tell you anything new.
E.g., can we call equinumerous sets isomorphic?
Sure you can. In the category of sets, the isomorphisms are precisely the bijections, so equinumerous sets are exactly those that are "isomorphic in the category of sets."
You don't need operations to do this... a category can be made up of non-algebraic objects. That is, isomorphism and homomorphism are not algebra-only concepts.
Thus, every set by the definition is an algebraic structure with the identity operation.
It would be more plausible to say that a set is an algebraic structure with no operations. I don't know whether universal algebra accepts this degenerate case, but it may.
|