H: Does $\{ (1, 2, 3, ..., n), (2, 3, ..., n, 1), ..., (n, 1, 2, ..., n-1) \}$ span $\mathbb{R}^n$?
$\{(1, 2), (2, 1)\}$ spans $\mathbb{R}^2$ and $\{(1,2,3), (2,3,1), (3,1,2) \}$ spans $\mathbb{R}^3$.
In general, does $\{ (1, 2, 3, ..., n), (2, 3, ..., n, 1), ..., (n, 1, 2, ..., n-1) \}$ span $\mathbb{R}^n$?
AI: I'll denote your vectors by $v_1,\ldots,v_n$ where $v_i$ is the one with $1$ in the $i$-th component (e.g. $v_2=(n,1,2,\ldots,n-1)$).
Since we have $n$ vectors in $\mathbb R^n$, it suffices to prove they're linearly independent. To do this, we put the vectors as the columns of a matrix: $$C:=\begin{bmatrix}v_1 & v_2 & \cdots & v_n\end{bmatrix}.$$ Then $C$ is a circulant matrix, whose rank is $n-d$ where $d$ is the degree of $\gcd(1+2x+3x^2+\cdots+nx^{n-1},\,x^n-1)$. This $\gcd$ is $1$: at $x=1$ the first polynomial takes the value $n(n+1)/2\neq 0$, and at any other $n$-th root of unity $\zeta$ it takes the value $\sum_{k=1}^n k\zeta^{k-1}=\frac{n}{\zeta-1}\neq 0$. So $d=0$ and thus $C$ has full rank. Therefore, the columns of $C$ are linearly independent.
There are other properties of circulant matrices that allow you to conclude $C$ is invertible (if you don't like the rank approach above).
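For a quick sanity check, here is a minimal numpy sketch confirming the full-rank claim for small $n$ (the shift indexing below is one convenient convention):

```python
import numpy as np

def cyclic_shift_rank(n):
    base = np.arange(1, n + 1)                        # (1, 2, ..., n)
    # Columns are the n cyclic shifts of (1, 2, ..., n).
    C = np.column_stack([np.roll(base, i) for i in range(n)])
    return np.linalg.matrix_rank(C)

for n in range(2, 9):
    print(n, cyclic_shift_rank(n) == n)               # True for every n tested
```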
|
H: $\sup\{\omega,\omega_\omega,\omega_{\omega_\omega},...\}=\text{?}$
In "Ordinal $\alpha$ such that $\alpha=\omega_\alpha$?" the question is asked whether fixed points of $\alpha=\omega_\alpha$ exist.
I'm aware that $\varepsilon_0=\sup\{\omega, \omega^\omega, \omega^{\omega^\omega},\ldots\}$.
I'm not sure what the first fixed point of $\alpha=\omega_\alpha$ is called.
$$\sup\{\omega,\omega_\omega,\omega_{\omega_\omega},\ldots\}=\text{?}$$
AI: I don't think it has a standard notation - it's generally just called "the first fixed point of the $\aleph$-function" (for some reason I haven't heard the term "$\omega$-function" nearly as frequently).
Some texts will introduce notation like "$\mathit{lfp}(G)$" for least fixed points of a continuous unbounded ordinal function $G$, at which point this can be written as e.g. "$\mathit{lfp}(\omega_-)$," but I don't think any of these are standard.
|
H: Suppose that $x_{n}\to x\in X$ and $y_{n}\to y\in X$. Show that $\displaystyle\lim_{n\rightarrow\infty}d(x_{n},y_{n}) = d(x,y)$.
Let $x_{n}$ and $y_{n}$ be two sequences in a metric space $(X,d)$. Suppose that $x_{n}$ converges to a point $x\in X$ and $y_{n}$ converges to a point $y\in X$. Show that $\displaystyle\lim_{n\rightarrow\infty}d(x_{n},y_{n}) = d(x,y)$.
MY ATTEMPT
Lemma
Given any three points $x,y,z\in X$, one has that $d(x,y) \geq |d(y,z) - d(x,z)|$.
Proof
According to the triangle inequality, one has that $d(x,y) + d(y,z) \geq d(x,z)$.
Similarly, $d(z,x) + d(x,y) \geq d(z,y)$. Thus
\begin{align*}
\begin{cases}
d(x,y) \geq d(x,z) - d(y,z)\\\\
d(x,y) \geq d(y,z) - d(x,z)
\end{cases} \Rightarrow d(x,y) \geq |d(x,z) - d(y,z)|
\end{align*}
My solution
According to the definition of convergence, for every $\varepsilon > 0$ there corresponds a natural number $N_{1}\geq 0$ such that
\begin{align*}
n\geq N_{1} \Rightarrow d(x_{n},x) < \varepsilon/2
\end{align*}
Similarly, there corresponds a natural number $N_{2}\geq 0$ such that
\begin{align*}
n\geq N_{2} \Rightarrow d(y_{n},y) < \varepsilon/2
\end{align*}
Gathering both results, we conclude that for every $\varepsilon > 0$ there is a natural number $N = \max\{N_{1},N_{2}\}$ such that
\begin{align*}
n\geq N \Rightarrow |d(x_{n},y_{n}) - d(x,y)| & \leq |d(x_{n},y_{n}) - d(x_{n},y)| + |d(x_{n},y) - d(x,y)|\\\\
& \leq d(y_{n},y) + d(x_{n},x) < \varepsilon
\end{align*}
and we are done. Thus we conclude that any metric is continuous, since $d$ is arbitrary.
Is the wording of my proof correct? Should I fix any theoretical flaw?
AI: Looks fine. You can also forgo the lemma, incorporating it in the proof of the result: for sufficiently large $n$ we have $d(x_n,x)<\frac{\epsilon}2$ and $d(y_n,y)<\frac{\epsilon}2$, so
$$d(x_n,y_n)\le d(x_n,x)+d(x,y)+d(y,y_n)<d(x,y)+\epsilon$$
and
$$d(x,y)\le d(x,x_n)+d(x_n,y_n)+d(y_n,y)<d(x_n,y_n)+\epsilon\;,$$
and therefore $|d(x,y)-d(x_n,y_n)|<\epsilon$.
|
H: Estimation of squared normal distribution
I am given $w \sim N(0,I_n)$ with $w \in \mathbb{R}^n$, and a matrix $X \in \mathbb{R}^{n \times d}$ whose columns $X_1,\ldots, X_d \in \mathbb{R}^n$ satisfy $\|X_i\|^2 = n$.
From here I am supposed to derive that $z_i = (w^TX)_i \sim N(0,n)$.
So far I find that $E[(w^TX)_i] = 0$ from mean of $w$.
But I don't understand how I should get the variance $E[(w^TX)_i((w^TX)_i)^T] = n$.
I get that:
$$E[(w^TX)_i((w^TX)_i)^T] = E[w^TX_i(w^TX_i)^T] = E[w^TX_iX_i^Tw] $$
From $\|X_i\|^2 = X_iX_i^T = n$ , I get that:
$$ E[w^TX_iX_i^Tw] = E[w^Tnw] = nE[w^Tw]$$
From here I don't understand how I should come to the $E[w^Tw] = 1$, and I guess that is something that I should get, but I don't understand how can this be derived and on which basis this works.
AI: Both $w^TX_i$ and $X_i^Tw$ are scalars, so $$w^TX_i X_i^Tw = (w^TX_i)( X_i^Tw) = (X_i^Tw)( w^TX_i)=X_i^Tw w^TX_i$$ and hence $$E[w^TX_i X_i^Tw] = E[X_i^Tw w^TX_i] = X_i^T E[ww^T]X_i = X_i^T I_n X_i = X_i^T X_i = \|X_i\|^2 = n.$$
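A short Monte Carlo sketch of this computation, assuming an arbitrary direction rescaled so that $\|X_i\|^2=n$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 25
X_i = rng.standard_normal(n)
X_i *= np.sqrt(n) / np.linalg.norm(X_i)      # enforce ||X_i||^2 = n

W = rng.standard_normal((200_000, n))        # each row is a draw of w ~ N(0, I_n)
z = W @ X_i                                  # samples of (w^T X)_i
print(z.mean(), z.var())                     # approximately 0 and n = 25
```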
|
H: Assigning tasks to workers - How many ways to do it?
Below is a problem I did. I would like somebody to check it for me.
Problem:
A manager needs four tasks of type A done. He also needs two tasks of type B done. He has 10 employees. The manager wants to assign one task each to six employees. This means that four employees will not get a task. In how many ways can the six tasks be assigned?
Answer:
Let $c$ be the number we are looking for.
\begin{align*}
c &= { 10 \choose 2}{10 \choose 4} \\
{ 10 \choose 2} &= \frac{10(9)}{2} = 45 \\
{ 10 \choose 4} &= \frac{10(9)(8)(7)} {4(3)(2)} = \frac{10(9)(7)}{3} = 10(3)(7) = 210 \\
c &= 45(210) \\
c &= 9450
\end{align*}
Here is my second attempt at solving the problem:
\begin{align*}
c &= { 10 \choose 2}{8 \choose 4} \\
{ 10 \choose 2} &= \frac{10(9)}{2} = 45 \\
{ 8 \choose 4} &= \frac{8(7)(6)(5)} {4(3)(2)} = \frac{7(6)(5)}{3} = 70 \\
c &= 45(70) \\
c &= 3150
\end{align*}
I believe I have it right now. Could somebody confirm that I do?
AI: What's the significance of the two types of tasks? If one worker can be assigned at most one task, the total number of ways to pick the six workers is $\binom{10}{6}$. If you then want to separate those workers into the groups performing each type of task, the count is $\binom{10}{2} \binom{8}{4}$, which confirms your second attempt.
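A brute-force sketch confirming the count (treating tasks of the same type as interchangeable):

```python
from itertools import combinations
from math import comb

workers = set(range(10))
count = sum(1 for type_b in combinations(sorted(workers), 2)
              for type_a in combinations(sorted(workers - set(type_b)), 4))
print(count, comb(10, 2) * comb(8, 4))   # both 3150
```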
|
H: $j$-invariants for elliptic curves over $\mathbb{F}_p$
I'm reading an article about elliptic curve volcanoes. I know how to compute the $j$-invariant given a curve in Weierstrass form, but I don't have any idea how to compute every possible $j$-invariant for curves defined over $\mathbb{F}_p$, other than brute-forcing every Weierstrass-form curve.
In the paper the number of $j$-invariants is finite and every one of them is smaller than $p$.
How were those computed?
AI: For a curve defined over a given field $K$, the $j$-invariant is an element of that field. Therefore, for a finite field of prime order, the $j$-invariant can be represented by a number less than $p$.
As for which $j$-invariants are possible, they all are! The curve
$$ y^2 + xy = x^3 - \frac{36}{j_0 - 1728} x - \frac{1}{j_0 - 1728} $$
is well known, and you can calculate its $j$-invariant to be $j_0$. The edge cases are $j_0 = 1728$, where the formula above is undefined, and $j_0 = 0$, where the resulting curve is singular; nevertheless, for $p \ne 2,3$ those two $j$-invariants are realized by $y^2 = x^3 - x$ (with $j = 1728$) and $y^2 = x^3 + 1$ (with $j = 0$).
In particular, I assume the diagram does not imply that those are all the cordilleras, just a few examples.
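A sketch verifying this over a small prime field with the standard $b_i$/$c_i$ invariants of a general Weierstrass equation (any prime $p>3$ works here):

```python
p = 101   # any prime > 3

def j_invariant(a1, a2, a3, a4, a6):
    # Standard quantities for y^2 + a1*x*y + a3*y = x^3 + a2*x^2 + a4*x + a6.
    b2 = (a1 * a1 + 4 * a2) % p
    b4 = (2 * a4 + a1 * a3) % p
    b6 = (a3 * a3 + 4 * a6) % p
    c4 = (b2 * b2 - 24 * b4) % p
    c6 = (-b2 ** 3 + 36 * b2 * b4 - 216 * b6) % p
    disc = (c4 ** 3 - c6 * c6) * pow(1728, -1, p) % p
    return c4 ** 3 * pow(disc, -1, p) % p

for j0 in range(p):
    if j0 in (0, 1728 % p):
        continue                                   # the two edge cases
    t_inv = pow(j0 - 1728, -1, p)
    assert j_invariant(1, 0, 0, -36 * t_inv % p, -t_inv % p) == j0
print("every j-invariant other than 0 and 1728 is realized mod", p)
```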
|
H: Prove that $u(x,y) = \frac{x}{x^2+y^2}$ is harmonic in $\mathbb{R}^2\setminus\{(0,0)\}$
I want to check if I did this right. I reached the conclusion that $u$ is not harmonic. We know that a function is harmonic if $$\displaystyle\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$$
For $u(x,y)$, I obtained that $$ \displaystyle\frac{\partial^2 u}{\partial x^2} = -\frac{2x(3y^2 - x^2)}{(x^2+y^2)^3}$$ and $$ \displaystyle \frac{\partial^2 u}{\partial y^2} = -\frac{4x(x^2 - 4y^2)}{(x^2+y^2)^3} $$ Then, $$ \displaystyle\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = \frac{-6xy^2+2x^3-4x^3+16xy^2}{(x^2+y^2)^3} = \frac{10xy^2-2x^3}{(x^2+y^2)^3} = \frac{-2x(x^2 - 5y^2)}{(x^2+y^2)^3} $$
Therefore, $u$ is not harmonic.
AI: Your mistake comes from the computation of $\frac{\partial^2u}{\partial y^2}$.
Indeed, one has :
$$\frac{\partial u}{\partial y}=\frac{0\times(x^2+y^2)-x\times2y}{(x^2+y^2)^2}=\frac{-2xy}{(x^2+y^2)^2}$$
Therefore :
$$
\frac{\partial^2 u}{\partial y^2}=\frac{-2x\times(x^2+y^2)^2-(-2xy)\times(2\times 2y \times (x^2+y^2))}{(x^2+y^2)^4}\\
=\frac{-2x(x^2+y^2)+2xy\times4y}{(x^2+y^2)^3}=\frac{6xy^2-2x^3}{(x^2+y^2)^3}=\frac{2x(3y^2-x^2)}{(x^2+y^2)^3}
$$
Alternate solution :
Put $z=x+iy$. Then :
$u(x,y)=\frac{x}{x^2+y^2}= Re(\frac{x}{x^2+y^2}-i\frac{y}{x^2+y^2})=Re(\frac{\bar{z}}{|z|^2})=Re(\frac{1}{z})$
So $u$ is harmonic (it's the real part of a holomorphic function).
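A two-line sympy check of the corrected computation:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = x / (x**2 + y**2)
print(sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2)))   # 0: u is harmonic
```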
|
H: Determine the convergence of the series $\sum_{n=1} ^{\infty} \frac{5^{n}-2^{n}}{7^{n}-6^{n}}$
Does
$$\sum_{n=1} ^{\infty} \frac{5^{n}-2^{n}}{7^{n}-6^{n}}$$
converge?
I tried the ratio test but I failed.
AI: Ratio test:
$$
\frac{\displaystyle\frac{5^{n+1}-2^{n+1}}{7^{n+1}-6^{n+1}}}{\displaystyle\frac{5^{n}-2^{n}}{7^{n}-6^{n}}}
=\frac{7^{n}-6^{n}}{{7^{n+1}-6^{n+1}}}\,\frac{5^{n+1}-2^{n+1}}{5^{n}-2^{n}}
=\frac{1-(6/7)^n}{7-6\,(6/7)^{n}}\,\frac{5-2\,(2/5)^{n}}{1-(2/5)^n}\to\frac57<1,
$$
so the series converges.
|
H: Checking irreducibility of polynomials in two variables
There are a few exercises in Hartshorne about checking singularity of an affine curve. For example, $Y$ defined by $x^2 = x^4 + y^4$ over a field $k$ (with ${\mathrm{char}}k \neq 2$). This is easy.
However, I was wondering how easy it is to prove that it is a curve; or equivalently $f(x,y) = x^4 + y^4 - x^2$ is irreducible.
AI: I just figured it out (Sorry to bother). If it is reducible then we have two choices:
(1) $y^4 + x^4 - x^2 = (y + f(x))(y^3 + g_2(x)y^2 + g_1(x)y + g_0(x))$. In this case either $f(x) = 0$ or $g_0(x) = 0$. Neither is possible as the polynomial is not in the ideal $\langle y \rangle$.
(2) $y^4 + x^4 - x^2 = (y^2 + f_1(x)y + f_0(x))(y^2 + g_1(x)y + g_0(x))$. Here comparing the terms either $f_1(x) = 0$ or $f_0(x) = 0$. In the first case $g_1(x) = 0 = g_0(x)$. In the second case $f_1(x) = 0$. None of them are possible as the polynomial is not in the ideal $\langle y^2 \rangle$.
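For a consistency check over $\mathbb{Q}$ (the exercise is over a general field $k$, so this is only a sanity check), sympy agrees:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = y**4 + x**4 - x**2
print(sp.factor(f))                          # stays y**4 + x**4 - x**2
print(sp.Poly(f, x, y).is_irreducible)       # True
```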
|
H: $\operatorname{tr}(AB)$ in terms of $\operatorname{tr}(A)$
I have two symmetric and positive semi-definite matrices $A$ and $B$. I know $\operatorname{tr}(AB) \neq \operatorname{tr}(A)\cdot \operatorname{tr}(B)$.
Are there any ways to think of $\operatorname{tr}(AB)$ in terms of $\operatorname{tr}(A)$?
AI: No. Choose any symmetric positive semi-definite matrix $A_0$ and form
$$
A=\begin{bmatrix} A_0&0\\0&0\end{bmatrix}.
$$
Now, with $$B=\begin{bmatrix} I&0\\0&0\end{bmatrix},$$ you have $\operatorname{Tr}(AB)=\operatorname{Tr}(A)$. But if
$$
B=\begin{bmatrix} 0&0\\0&I\end{bmatrix},
$$
now $\operatorname{Tr}(AB)=0$. Same $A$.
|
H: How do I prove the floor identity $⌊x + n⌋ = ⌊x⌋ + n$ in a more precise way?
I am having trouble understanding the proof provided by the author for the property stated after "Goal:".
Except from the text here is a list of useful properties:
(PROPERTY 1a) $⌊x⌋ = n$ if and only if $n ≤ x < n + 1$
(1b) $⌈x⌉ = n$ if and only if $n − 1 < x ≤ n$
(1c) $⌊x⌋ = n$ if and only if $x − 1 < n ≤ x$
(1d) $⌈x⌉ = n$ if and only if $x ≤ n < x + 1$
(2) $x − 1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x + 1$
(3a) $⌊−x⌋ = −⌈x⌉$
(3b) $⌈−x⌉ = −⌊x⌋$
(4a) $⌊x + n⌋ = ⌊x⌋ + n$
(4b) $⌈x + n⌉ = ⌈x⌉ + n$
Goal: As an exercise, prove the property $⌊x + n⌋ = ⌊x⌋ + n$
Proof: We will prove the property using a direct proof. Suppose that
$⌊x⌋ = m$, where $m$ is a positive integer. By property (1a), it follows
that $m ≤ x < m + 1$. Adding $n$ to all three quantities in this chain of
two inequalities shows that $m + n ≤ x + n < m + n + 1$. Using property
(1a) again, we see that $⌊x + n⌋ = m + n = ⌊x⌋ + n$. This completes the
proof. Proofs of the other properties are left as exercises.
From: Rosen, Kenneth. Discrete Mathematics and Its Applications (p. 159)
My understanding of the property we are tasked with proving is that it is an identity that is stating the proposition $∀x∀n(⌊x + n⌋ = ⌊x⌋ + n)$ where the domains of discourse for $x$ and $n$ are the set of all real numbers and the set of all integers, respectively. With that in mind, since this is a direct proof, shouldn't we start with arbitrary values of the domains (represented by real number $x$ and integer $n$, and NOT JUST POSITIVE INTEGER $n$) and try to show that their properties imply the equation $⌊x + n⌋ = ⌊x⌋ + n$? (Ignoring the positive-n issue, how could we even do this if the only properties we are allowed to assume
about $x$ and $n$ are that $x$ is real and $n$ is an integer?) Also, why was he able to start with and assume $⌊x⌋ = m$ ("suppose $⌊x⌋ = m$") out of nowhere?
If anyone can provide a version of his proof that is more precise and doesn't skip steps of reasoning or explanations, that would answer my question.
AI: You’re correct in thinking that $m$ should not have been limited to positive integers. However, once the word positive is removed, the proof is fine. In particular, the author is simply assuming that $x$ is a real number and that every real number has a floor. Ignoring positive, which is an error, we should understand his first sentence as being equivalent to the following sentence; where $m$ is an integer is just a reminder to the reader, since the floor function necessarily evaluates to an integer.
Let $x$ be a real number, and let $m=\lfloor x\rfloor$.
Nothing there comes ‘out of nowhere’: we want to show something about all real numbers, so we start with an arbitrary real number $x$. We want to be able to talk easily about its floor, so we give the floor a name, $m$.
The rest of the proof says everything that needs to be said. At most one might add a little explanation after adding $n$, something like this:
Now $m+n$ and $m+n+1$ are consecutive integers, and $$m+n\le x+n<m+n+1\;,$$ so by $(1a)$ we know that $$\lfloor x+n\rfloor=m+n=\lfloor x\rfloor+n\;.$$
But I would do that only when the floor function is first introduced, and then only in a very elementary course — and I say that as one who tends to give more rather than less detail.
|
H: Use cases for interval notation
The notations $x \in [a,b]$ and $a \le x \le b$ are equivalent in how they describe the variable $x$. When writing mathematics, when does the use of one take precedence over the other? In the former case we explicitly point out that $x$ is an element of some set whose shorthand is given by $[a,b]$. In the latter we can infer the set $x$ belongs to, but it is more of an ephemeral reference to what $x$ is "greater" and "less" than in a passing context.
The question: Is there an appropriate use for each notation or is it purely a matter of taste? Can you provide use and non-use examples for each case?
AI: Of course, either can be used in a particular place, but there may (need to) be some change of language around the usage going from one to the other.
"Let ...". If we write "Let $x \in [a,b]$", it is clear that $x$ is the variable being bound. If we write "Let $a \leq x \leq b$", it is less immediate which of the three symbols is the bound variable. (Surely, $a$ and $b$ are already bound, but the newly bound variable is not the first symbol, so the cognitive load of parsing the sentence increases.) Resolving this is the wordier "Let $x$ be such that $a \leq x \leq b$ ...".
If the interval containing $x$ depends on prior variables... "Let $x \in [a,b] \subset (a',b')$" is equivalent to "Let $x,a,b \in \Bbb{R}$ satisfy $a' < a \leq x \leq b < b'$". Now the difference is emphasis -- do you intend to emphasize that the closed subset is in the open subset or that the variables are ordered?
Do you need a set operation? "Let $x \in [a,b] \cap \Bbb{Z}$" versus "Let $x$ be an integer, $a \leq x \leq b$." The more complex the operation, the more likely it is unambiguously expressed by set operations. The order notation leads one to list the various sets in the set expression with ever-more-conflatable meaning. Try
$$ x \in [a,b] \cap \left(\Bbb{Z} \cup \left[ \frac{a+b}{2}, \frac{a+3b}{2}\right)\right) $$
starting with "Let $x$ be such that $a \leq x \leq b$ ...".
There's no hope of "$a \leq x \leq b$" when $x$ is an element of a not linearly ordered set (for instance, $\Bbb{C}$). If you are using many unordered or not linearly ordered sets, use of the order notation stands out loudly. Do you intend that much emphasis? And, you have to specify from which ordered set come $a$, $x$, and $b$.
|
H: Find orthonormal $\{b_i\}$ such that $\mathcal{A} = \sum_i \sum_j \lambda_i \mu_j b_i b_j^T$
Suppose $\{b_i\}_{i = 1}^d \subset \mathbb{R}^n$ is an orthonormal set of vectors, $d \leq n$, and assume that a matrix $\mathcal{A} \in \mathbb{R}^{n \times n}$ has the following form:
$$\mathcal{A} = \sum_{i = 1}^d \sum_{j = 1}^d \lambda_i \mu_j \; b_i b_j^T = \sum_{i = 1}^d \sum_{j = 1}^d \lambda_i \mu_j \; b_i \otimes b_j$$
where $\lambda_i, \mu_j \in \mathbb{R}$.
Then the problem I am interested in is how one can recover the vectors $b_i$ (at least some set of orthonormal vectors, since such a decomposition might not be unique).
My question is whether this is a problem people have already looked at before. If not, here is an idea that I have:
ATTEMPT:
Notice that if $\{b_i\}$ is an orthonormal set, then any matrix of the
form $b_i b_j^T$ will either have its columns or rows summing to $0$,
meaning $\mathbf{1}_n^T b_i b_j^T \mathbf{1}_n = 0$, where
$\mathbf{1}_n \in \mathbb{R}^n$ is a vector of ones.
If we find a unit vector $x \in \mathbb{R}^n$ where $x = b_k$ for some
$k$, then:
$$\mathcal{A}xx^T = \mathcal{A}b_k b_k^T = \sum_{i = 1}^d \sum_{j = 1}^d \lambda_i \mu_j \; (b_i b_j^T)(b_k b_k^T) = \sum_{i = 1}^d \lambda_i \mu_k \; b_i b_k^T$$
which is a sum of matrices of the form mentioned above. Thus, a
minimizer of the problem:
$$\text{minimize} \; \; \mathbf{1}_n^T \mathcal{A} x x^T \mathbf{1}_n$$ $$\text{subject to} \; \; \|x\| = 1$$
exists, and the minimum value is $0$.
By solving this problem, we can find at least one of the $b_i$'s and
maybe find the others somehow.
The potential problem that I see is that this problem might not be
convex. And this is as far as I have gone so far.
AI: If $\mathcal A$ is non-zero, your sum can be written as
$$
\mathcal A = \sum_{i = 1}^d \sum_{j = 1}^d \lambda_i \mu_j \; b_i b_j^T
\\=
\pmatrix{b_1 &b_2 & \cdots & b_d} \pmatrix{\lambda_1 \mu_1 & \lambda_1 \mu_2 & \cdots & \lambda_1 \mu_d\\
\lambda_2 \mu_1 & \lambda_2\mu_2 & \cdots & \lambda_2 \mu_d\\
\vdots & \vdots & \ddots & \vdots \\
\lambda_d \mu_1 & \lambda_d\mu_2 & \cdots & \lambda_d \mu_d}\pmatrix{b_1 &b_2 & \cdots & b_d}^T\\
= B \lambda \mu^T B^T = (B\lambda)(B\mu)^T
$$
Here, $B$ is the matrix with columns $b_i$, $\lambda,\mu$ are the column-vectors with entries $\lambda_i,\mu_i$, where we define $\mu_i = \lambda_i = 0$ for $i > d$.
It follows that $\mathcal A$ is necessarily a rank-1 matrix.
Suppose that the $(i,j)$ entry $a_{ij}$ of $\mathcal A$ is non-zero. One choice of $B,\lambda,\mu$ that works is as follows: take $b_1,b_2$ to be orthonormal vectors whose span contains $\operatorname{col}(\mathcal A) + \operatorname{col}(\mathcal A^T)$. Extend to an orthonormal basis. Select $\lambda_1,\lambda_2$ so that $\lambda_1 b_1 + \lambda_2 b_2$ is the $j$th column of $\mathcal A$, and select $\mu_1,\mu_2$ so that $\mu_1 b_1 + \mu_2 b_2$ is the $i$th row of $\frac 1{a_{ij}}\mathcal A$. Extend the columns of $B$ to form an orthonormal set, and set the remaining entries of $\mu,\lambda$ to be zero.
The choices of $\mu,\lambda,B$ here are highly non-unique. For any $d \times d$ orthogonal matrix $U$, we find that
$$
\mathcal A = (BU)(U^T\lambda)(\mu^T U)(BU)^T
$$
is an alternative representation.
|
H: Given topological space X, t, is there an elegant way to express Int(X), the interior of X?
Given the topological space X, t, is there an elegant way to express Int(X), the interior of X - the entire space?
I know one way: $\mathrm{Int}(X) = \bigcup F_i$ where the $F_i$ are the open sets in $X$.
But this comes by definition. Is there a way using other derived sets, like closure, frontier, exterior, and weak derived set?
AI: Since $X$ is an open subset of $X$, when we take the union $\bigcup U_i$ over all open $U_i\subseteq X$, we get $\bigcup U_i=X$. Hence $\mathrm{Int}\left(X\right)=X$.
|
H: Expected value of $X$ when an unbiased coin is tossed $5$ times
An unbiased coin is tossed $5$ times. Suppose that a variable $X$ is assigned the value $k$ when $k$
consecutive heads are obtained for $k = 3, 4, 5$ otherwise $X$ takes the value $–1$. Then the expected value of $X$, is :
For $k=0, X=-1, P(X)={5\choose0}(\frac12)^5$
$k=1, X=-1, P(X)={5\choose1}(\frac12)^5$
$k=2, X=-1, P(X)={5\choose2}(\frac12)^5$
$k=3, X=3, P(X)={5\choose3}(\frac12)^5$
$k=4, X=4, P(X)={5\choose4}(\frac12)^5$
$k=5, X=5, P(X)={5\choose5}(\frac12)^5$
So, Expected value $E(X)=(-1\cdot1+-1\cdot5+-1\cdot10+3\cdot10+4\cdot5+5\cdot1)(\frac1{32})=\frac{39}{32}$
The answer is given as $\frac18$.
AI: You’ve counted the number of ways to get $k$ heads, but what’s needed for each $k$ is the number of ways to get $k$ consecutive heads. For example, there are just $5$ ways to get $3$ consecutive heads, not $10$: HHHTT, HHHTH, THHHT, TTHHH, and HTHHH. There are just $2$ ways to get $4$ consecutive heads: HHHHT and THHHH. And there is one way to get $5$ consecutive heads. Each of the other $2^5-(5+2+1)=24$ outcomes results in a score of $-1$, so
$$E(X)=\frac{5\cdot3+2\cdot4+1\cdot5-24}{32}=\frac4{32}=\frac18\;.$$
This problem is small enough that it really is easiest simply to list the possibilities.
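A full enumeration sketch, interpreting $X$ as the longest run of heads when that run is at least $3$ and $-1$ otherwise:

```python
from itertools import product
from fractions import Fraction

def score(seq):
    best = run = 0
    for toss in seq:                       # longest run of consecutive heads
        run = run + 1 if toss == 'H' else 0
        best = max(best, run)
    return best if best >= 3 else -1

print(sum(Fraction(score(s), 32) for s in product('HT', repeat=5)))   # 1/8
```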
|
H: Rudin theorem 9.21 converse part proof
Theorem: Suppose f maps an open set $E \subset R^n$ into $R^m$. Then $f \in \mathcal{C'}(E)$ if and only if the partial derivatives $D_j f_i$ exist and are continuous on E for $1 \leq i \leq m, 1 \leq j \leq n$.
For the converse $\text{it suffices to consider the case m=1} \tag 1$
Fix $x \in E$ and $\epsilon >0$. Since $E$ is open, there is an open ball $S \subset E$ with center at $x$ and radius $r$, and the continuity of the functions $D_jf$ shows that $r$ can be chosen so that $|(D_jf)(y)-(D_jf)(x)| < \frac{\epsilon}{n}$ $(y \in S,\ 1 \leq j \leq n)$.
Suppose $h=\sum h_j e_j$, $|h|<r$. Put $v_0=0$ and $v_k=h_1e_1+...+h_ke_k$ for $1 \leq k \leq n$. Then
$$f(x+h)-f(x)=\sum_{j=1}^n [f(x+v_j)-f(x+v_{j-1})] \tag 2$$.
Since $|v_k|<r$ for $1 \leq k \leq n$ and since $S$ is convex, the segments with end points $x+v_{j-1}$ and $x+v_j$ lie in $S$. Since $v_j=v_{j-1}+h_je_j$, the mean value theorem shows that the $j$th summand in $f(x+h)-f(x)=\sum_{j=1}^n [f(x+v_j)-f(x+v_{j-1})]$ is equal to $$h_j(D_jf)(x+v_{j-1}+\theta_jh_je_j) \tag 3$$ for some $\theta_j \in (0,1)$, and this differs from $h_j(D_jf)(x)$ by less than $|h_j| \epsilon/n$, using $|(D_jf)(y)-(D_jf)(x)| < \frac{\epsilon}{n}$. By (2), it follows that $$|f(x+h)-f(x)-\sum_{j=1}^n h_j(D_jf)(x)| \leq \frac{1}{n} \sum_{j=1}^n |h_j| \epsilon \leq |h| \epsilon \tag 4$$ for all $h$ such that $|h|<r$. This says that $f$ is differentiable at $x$ and that $f'(x)$ is the linear function which assigns the number $\sum h_j(D_jf)(x)$ to the vector $h=\sum h_j e_j$. The matrix $[f'(x)]$ consists of the row $(D_1f)(x),...,(D_nf)(x)$. Since $D_1f,...,D_nf$ are continuous functions on $E$, we have proved the statement.
I am not sure about the idea of the whole proof, especially those tags; can someone help explain? This is the converse part of Rudin's Theorem 9.21.
AI: To prove $f$ is differentiable at $x$ you need to show the estimate $|f(x + h) - f(x) - Lh|=o(|h|)$ as $h \to 0$, for some linear transformation $L: \mathbb{R}^n \to \mathbb{R}$.
At (2), he's rewriting $f(x + h) - f(x)$ as a telescoping sum in which each increment changes at most one coordinate at a time (i.e. moves parallel to some axis). Think of $n=2$, i.e., a rectangle.
At (3) he's merely applying the usual one-variable Mean Value Theorem in each direction.
At (4) he's showing that $Lh=\sum_j h_j(D_jf)(x)$, i.e. $L=[\nabla f(x)]$, is the linear transformation we're after.
|
H: Distribution of the product of a standard normal and uniform distribution
Given that $X\sim N(0,1)$ and $Z\sim \operatorname{Unif}(\{\pm1\})$, prove that $Y = XZ$ has a standard Gaussian distribution.
My approach:
I started trying to find the pdf of them. I found that:
$$f_Y(y)=\int_{-\infty}^\infty f_Z(z)f_X\left(\frac{y}{x}\right) \frac{1}{x} \, dx=\int_{-\infty}^\infty \frac{\exp(-\frac{(x/y)^2}{2})}{x\sqrt{2\pi}}\,dx$$
and I can't seem to finish the calculation.
So I moved on to the MGF approach with:
$$M_{ZX}(t)=E[e^{zxt}] = \int_{-1}^{1}\int_{-\infty}^\infty \exp(zxt)\frac{\exp(-x^2/2)}{\sqrt{2\pi}}\frac{1}{2} \, dx \, dz = \frac{1}{2}\int_{-1}^{1}\exp(\frac{1}{2}(zt)^2) \, dz$$
and again, I can't finish it...
All of the answers seem to lead to erfi functions...
edit: additional info: $Z$ and $X$ are independent of each other
AI: Reading the text, I understand that $Z$ is a discrete rv taking the values $\pm1$ with probability $\frac{1}{2}$ each, so
simply observe that $-X\sim N(0,1)$ (it's trivial, but you can also derive the result with the fundamental transformation theorem),
and hence, conditioning on $Z$, $f_Y(y)=\frac12\phi(y)+\frac12\phi(y)=\phi(y)$.
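A simulation sketch of this conditioning argument:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.standard_normal(500_000)
z = rng.choice([-1.0, 1.0], size=x.size)     # Z independent of X
y = x * z

print(stats.kstest(y, 'norm'))               # large p-value: consistent with N(0,1)
```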
|
H: What is the probability of 7 numbers picked from 1-30 ending with a different digit?
Between 1 and 30, each last digit occurs 3 times.
i) 30C1 ways to pick the first number,
ii) 27C1 ways to pick the second (excluding the 2 others with the same last digit as the first),
iii) 24C1 ways to pick the third,
iv) 21C1,
v) 18C1,
vi) 15C1,
vii) 12C1.
Possible outcomes: 30C7
Favorable outcomes: 30C1.27C1.24C1.21C1.18C1.15C1.12C1
Probability: 30C1.27C1.24C1.21C1.18C1.15C1.12C1/30C7
Is my answer correct?
AI: Your calculation is on the right track, but there is one oversight. It yields the number of ways to pick one number, then a number with a different last digit, then a third number with yet another last digit, and so on. Thus, you’re counting each set of $7$ numbers $7!$ times, once for each of the $7!$ orders in which it could have been picked. In your denominator, however, you’re just counting sets of $7$ numbers, without regard for order. You must either divide the numerator by $7!$ to get rid of the overcounting or, equivalently, multiply the denominator by $7!$ to take into account the order in which each set of $7$ numbers is picked. When you do that, you get
$$\begin{align*}
\frac{\binom{30}1\binom{27}1\binom{24}1\binom{21}1\binom{18}1\binom{15}1\binom{12}1}{7!\binom{30}7}&=\frac{30\cdot27\cdot24\cdot21\cdot18\cdot15\cdot12}{7!\binom{30}7}\\
&=\frac{3^7\cdot10\cdot9\cdot8\cdot7\cdot6\cdot5\cdot4}{7!\binom{30}7}\\
&=\frac{3^7\cdot10!}{3!7!\binom{30}7}\\
&=\frac{3^7\binom{10}7}{\binom{30}7}\;.
\end{align*}$$
There is a shorter way to arrive at this result. There are $10$ possible last digits, so there are $\binom{10}7$ ways to choose $7$ of them to be the last digits of a ‘good’ set. There are $3$ numbers having each of those $7$ last digits, so once we know what the $7$ last digits are, there are $3^7$ ways to choose the specific $7$ numbers having those last digits. Thus, there are $3^7\binom{10}7$ possible ‘good’ sets.
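Both counts can be cross-checked by brute force (the full enumeration over $\binom{30}{7}$ subsets takes a few seconds):

```python
from itertools import combinations
from math import comb

exact = 3**7 * comb(10, 7) / comb(30, 7)
good = sum(1 for s in combinations(range(1, 31), 7)
             if len({m % 10 for m in s}) == 7)
print(exact, good / comb(30, 7))             # both ~0.1289
```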
|
H: Quadratics with Integer Intercepts and Extrema
I'm a math teacher and was creating an exam for my Algebra 1 students when I tried picking an equation that had integer intercepts (both $x$- and $y$-), as well as integer extrema. I wanted to do so because I figured integer values would be most easily identifiable and comfortable for my students.
But that got me wondering... How would one guarantee that a given quadratic in standard form meets those three constraints?
I've got some ideas, but curious to know of other approaches that might be out there.
AI: Let $f(x)=a(x-r)(x-s)$, where $a,r,s$ are integers and $a\ne 0$.
The vertex has $x$-coordinate ${\Large{\frac{r+s}{2}}}\!$,$\;$so the only additional condition you need is for $r+s$ to be even.
For example, using $a,r,s=1,3,-5$, you get
$$f(x)=1(x-3)(x+5)=x^2+2x-15$$
which has $x$-intercepts $(3,0),(-5,0)$,$\;y$-intercept $(0,-15)$, and vertex $(-1,-16)$.
|
H: What is the interval of $U$ and $V$?
If the joint distribution of X and Y is given by
$$f_{x,y}(x,y)=2e^{-(x+y)} \hspace{.3cm} I_{(0,y)} (x) I_{(0,\infty)}(y) $$
find the joint distribution of $X$ and $X + Y$. Find the marginal distributions of
$X$ and $X + Y$.
Let $U=X$ and $V=X+Y$
doing the Jacobian
$|J|=1 $
so $f(u,v)= 2e^{-v}$, but I don't know what limits $U$ and $V$ have.
AI: Observe that the original support is
$0<x<y<\infty$
that means also
$0<u<v-u<\infty$ or also
$0<2u<v<\infty$
So your joint density becomes
$f_{UV}(u,v)=2e^{-v}\mathbb{1}_{(0;\infty)}(u)\mathbb{1}_{(2u;\infty)}(v)=2e^{-v}\mathbb{1}_{(0;\infty)}(v)\mathbb{1}_{(0;\frac{v}{2})}(u)$
Now it is very easy to determine the marginals
$U\sim Exp(2)$
$V\sim Gamma(2;1)$
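The two marginal integrations, sketched in sympy:

```python
import sympy as sp

u, v = sp.symbols('u v', positive=True)
f_uv = 2 * sp.exp(-v)

print(sp.integrate(f_uv, (v, 2*u, sp.oo)))   # 2*exp(-2*u): U ~ Exp(rate 2)
print(sp.integrate(f_uv, (u, 0, v/2)))       # v*exp(-v):   V ~ Gamma(2, 1)
```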
|
H: A linear algebra problem related to system of linear difference equations
In the theory of systems of linear difference equations, I have the following fact:
If $A\alpha − B$ is singular for all $\alpha$, then its rows are linearly dependent, and
hence there exists a matrix polynomial $\psi(\alpha)$ such that:
$$\psi(\alpha) (A\alpha − B) = 0 \quad \quad \quad \quad \quad \quad \quad \quad(*)$$
identically in $\alpha$.
How can I prove the existence of $\psi$ and the $(*)$ equation?
AI: To make sense of this, I'm assuming that $A$ and $B$ are $n \times n$ matrices over a field $\mathbb F$ with more than $n$ elements, and $A \alpha - B$ is singular for every $\alpha \in \mathbb F$. Thus $\det(A\alpha - B)$, which is a polynomial in $\alpha$ of degree at most $n$, has more than $n$ zeros, and therefore is identically $0$. Thus, as a matrix over the field $\mathbb F(\alpha)$ of rational functions in the indeterminate $\alpha$ over $\mathbb F$, $A \alpha - B$ is singular, and therefore it has linearly dependent rows, i.e. there is a row vector $v(\alpha)$ with entries in $\mathbb F(\alpha)$, not all $0$, such that $v(\alpha) (A \alpha - B) = 0$. Multiplying by a common denominator, we can get $\psi(\alpha)$ whose entries are polynomials in $\alpha$.
|
H: Confusion regarding cross product formula
I understand that the cross product formula is
$$\mathbf{A} \times \mathbf{B} =\left\|\mathbf{A} \right\|\left\|\mathbf{B} \right\|\sin(\theta)\ \mathbf{n},$$
where $\mathbf{n}$ is a unit vector perpendicular to the plane containing $\mathbf{A}$ and $\mathbf{B}$ in the direction given by the right-hand rule.
I started doing the following exercise from chapter 1.1.2 Vector Algebra: Component Form of Introduction to Electrodynamics, fourth edition, by David J. Griffiths:
Example 1.2. Find the angle between the face diagonals of a cube
I begin by using a cube of side $1$, as shown here in the textbook:
Let $\mathbf{A} = (0, 1, 1)$ and $\mathbf{B} = (1, 0, 1)$.
Then the cross product formula gives us that
$$(1, 1, -1) = (\sqrt{2})(\sqrt{2}) \sin(\theta) \mathbf{n}.$$
So we need to solve for $\theta$, which means I need to deduce $\mathbf{n}$ first, right?
But then I remember: Isn't $\mathbf{A} \times \mathbf{B}$ itself supposed to be the "normal/perpendicular vector" to the plane in which $\mathbf{A}$ and $\mathbf{B}$ lie? So, in that case, how does the formula $\mathbf{A} \times \mathbf{B} =\left\|\mathbf{A} \right\|\left\|\mathbf{B} \right\|\sin(\theta)\ \mathbf{n}$ make sense? And, furthermore, how do we now find $\mathbf{n}$ in $(1, 1, -1) = (\sqrt{2})(\sqrt{2}) \sin(\theta) \mathbf{n}$? This is odd, because the cross product formula is actually often written as $\mathbf{A} \times \mathbf{B} =\left\|\mathbf{A} \right\|\left\|\mathbf{B} \right\|\sin(\theta)$ (with the absence of $\mathbf{n}$), no? As you can see, I became very confused.
I would greatly appreciate it if people would please take the time to clarify my confusion/thoughts here.
AI: $\mathbf A=(0,1,1)$
$\mathbf B=(1,0,1)$
$\mathbf A\times\mathbf B=(1,1,-1)$
$\mathbf n=\dfrac{(1,1,-1)}{\|(1,1,-1)\|}=\left(\dfrac1{\sqrt3},\dfrac1{\sqrt3},-\dfrac1{\sqrt3}\right)$
Can you take it from here?
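A numeric sketch of the remaining steps; note the dot product pins down $\cos\theta$, so $\mathbf n$ is never actually needed to extract the angle:

```python
import numpy as np

A = np.array([0.0, 1.0, 1.0])
B = np.array([1.0, 0.0, 1.0])

sin_theta = np.linalg.norm(np.cross(A, B)) / (np.linalg.norm(A) * np.linalg.norm(B))
cos_theta = (A @ B) / (np.linalg.norm(A) * np.linalg.norm(B))
print(np.degrees(np.arctan2(sin_theta, cos_theta)))   # 60.0 degrees
```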
|
H: Spivak's Calculus Chapter 1, Question 19a
I found this post as solution to the question. Here's a quote for easy reference.
Supposing $y_1$ and $y_2$ are not both $0$, and that there is no number $\lambda$ such that $x_1=\lambda y_1$ and $x_2=\lambda y_2$, then $$\begin{array}{tcl}0 &<& (\lambda y_1-x_1)^2 + (\lambda y_2-x_2)^2 \\ &=& \lambda^2 (y_1^2+y_2^2)-2\lambda(x_1y_1+x_2y_2)+(x_1^2+x_2^2),\end{array}$$ and the equation $$\lambda^2 (y_1^2+y_2^2)-2\lambda(x_1y_1+x_2y_2)+(x_1^2+x_2^2)=0 \\$$
has no solution $\lambda$. So by problem 18(a) we must have $$\Bigg[\frac{2(x_1y_1+x_2y_2)}{({y_1}^2+{y_2}^2)}\Bigg]^2-\frac{4({x_1}^2+{x_2}^2)}{({y_1}^2+{y_2}^2)} < 0,****** \\$$ which yields the Schwarz inequality.
Notice the heavily asterisked line. I don't understand how we derive this. I recognize that this is "completing the square." Question 18 emphasized that $b^2 - 4c < 0$ means $x^2 + bx + c > 0$. Except, in this problem, it's not clear why he chose the $b$ the way he did. Where does the 2 come from? In the sense that, isn't $b = \frac{-2(x_1y_1+x_2y_2)}{({y_1}^2+{y_2}^2)}$?
AI: You yourself quote that $b^2-4c<0 \iff x^2 + bx + c > 0$. You have the equation
$$
\lambda^2 (y_1^2+y_2^2)-2\lambda(x_1y_1+x_2y_2)+(x_1^2+x_2^2)=0
$$
which is equivalent to
$$
\lambda^2
-\left(2 \frac{x_1y_1+x_2y_2}{y_1^2+y_2^2} \right) \lambda
+ \frac{x_1^2+x_2^2}{y_1^2+y_2^2}
= 0
$$
So choosing
$$
b = -2 \frac{x_1y_1+x_2y_2}{y_1^2+y_2^2},
c = \frac{x_1^2+x_2^2}{y_1^2+y_2^2}
$$
and plugging into $b^2-4c < 0$ will give you the result you are seeking. Note that the sign of $b$ makes no difference here since you are only looking at $b^2$.
|
H: Solution to the integral equation with constant parameter
I wanted to solve this integral equation; I'll be grateful for an explanation with a step-by-step solution.
$$y(x)=1+\alpha\int_{0}^x \sin(x-p)\,y(p)\,dp$$
AI: Hint for first steps.
\begin{align*}y(x)&= 1+\alpha \int_{0}^x \sin(x-p)y(p) dp \\
&= 1+\alpha \int_{0}^x (\sin x \cos p - \cos x \sin p)y(p) dp \\
&= 1+\alpha \sin x\int_{0}^x (\cos p) y(p) dp - \alpha \cos x \int_0^x (\sin p) y(p) dp \\
& = 1 + \alpha \sin x C(x) - \alpha \cos x S(x)
\end{align*}
where $C(x) = \int_{0}^x (\cos p) y(p) dp $ and $S(x) = \int_{0}^x (\sin p) y(p) dp $.
Take derivative $d/dx$ to have
\begin{align*}y'(x) &= \alpha \cos x C(x) + \alpha (\sin x \cos x) y(x)+ \alpha \sin x S(x) - \alpha (\cos x \sin x) y(x) \\
& = \alpha \left( \cos x C(x) + \sin x S(x) \right)
\end{align*}
Take derivative $d/dx$ once more to have
\begin{align*} y''(x) & = \alpha \left( -\sin x C(x) + (\cos x \cos x) y(x) + \cos x S(x) + (\sin x \sin x) y(x)\right) \\
& = \alpha y(x) - \alpha \sin x C(x) + \alpha \cos x S(x)\\
& = \alpha y(x) + 1 -y(x) = 1 - (1- \alpha) y(x)
\end{align*}
Can you proceed from here?
Edit. I will proceed.
This is linear differential equation with constant coefficient and is not homogeneous. There is homogeneous solution $y_h$ for $y'' = (\alpha-1)y$, and a particular solution $y_p$ satisfying $y'' = (\alpha-1) y +1$, which are summed up to form a solution $y= y_h + y_p$.
By assuming $\alpha \ne 1$, it is easy to see that $y_h = A e^{\sqrt{\alpha-1}x} + B e^{-\sqrt{\alpha-1}x}$ for constants $A, B$. (Is it easy for you?)
For $y_p$, there is one trivial solution $y_p(x) = \frac{1}{1-\alpha}$. So the solution is of the form $$y(x) = A e^{\sqrt{\alpha-1}x} + B e^{-\sqrt{\alpha-1}x} +\frac{1}{1-\alpha}.$$
If $\alpha <1$, one might feel more comfortable with
$$y(x) = A_0 \cos{\sqrt{1-\alpha}x} + B_0 \sin{\sqrt{1-\alpha}x} +\frac{1}{1-\alpha}.$$
Initial conditions are implicitly given; one can compute $y(0)$ and $y'(0)$. I will leave this part to you.
Finally, if $\alpha = 1$, we have $y''(x) = 1$ and solution is simply $y = \frac{x^2}{2} + Ax + B$.
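A numeric sketch of the final answer with the arbitrary choice $\alpha = 1/2$; the implicit initial data $y(0)=1$, $y'(0)=0$ give $A_0 = 1-\frac{1}{1-\alpha}$ and $B_0=0$:

```python
import numpy as np
from scipy.integrate import quad

alpha = 0.5
w = np.sqrt(1 - alpha)
y = lambda x: (1 - 1 / (1 - alpha)) * np.cos(w * x) + 1 / (1 - alpha)

for x in [0.5, 1.0, 2.0, 5.0]:
    integral, _ = quad(lambda p: np.sin(x - p) * y(p), 0, x)
    print(x, y(x) - (1 + alpha * integral))          # residual ~ 0 each time
```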
|
H: How to find the probability of one sample variance is two times larger than another?
I have two normal distributions, $X\sim N(\mu_{x}, 40^{2})$ and $Y\sim N(\mu_{y}, 50^{2})$. $8$ samples from $X$ and $16$ samples from $Y$ are drawn. How do I determine the probability that the variance of the first sample is more than twice that of the second?
At first I was thinking about Chi-square distribution, but it seems that it only works for checking one variance.
Any help would be appreciated.
Edit:
I now reached to something like $\frac{25S_{X}^{2}}{16S_{Y}^2}\sim F_{7, 15}$,then I am trying to calculate $P(S_{X}^{2}>2S_{Y}^2)$
$$\begin{eqnarray}
P(S_{X}^{2}>2S_{Y}^2) &=& P(\frac{S_{X}^{2}}{S_{Y}^2}>2) \\
&=& P(\frac{25S_{X}^{2}}{16S_{Y}^2}>2(\frac{25}{16})) \\
&=& P(\frac{25S_{X}^{2}}{16S_{Y}^2}>3.125)\\
&=& 1 - F_{7, 15}(3.125)
\end{eqnarray}$$
Am I doing it correctly? I am trying to look up the value from the table, but where's the alpha-value (significance level)?
AI: First of all I underline that X and Y must be independent.
Hint:
$U=\frac{7S_X^2}{40^2}\sim \chi_{(7)}^2$
$V=\frac{15S_Y^2}{50^2}\sim \chi_{(15)}^2$
...but you know that $\frac{\frac{U}{7}}{\frac{V}{15}}=\frac{15}{7}\frac{U}{V}$ follows a Snedecor's F distribution....
...now I think you can go on by yourself
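No significance level is involved here: the table is just a coarse inverse of the CDF, and the tail probability can be evaluated directly, e.g. with scipy:

```python
from scipy.stats import f

print(f.sf(3.125, 7, 15))    # P(F_{7,15} > 3.125), the survival function
```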
|
H: some confusion regarding empty set
Let $X= \{ a, b, c\}$.
Then choose the correct option
$1.$ $\varnothing \in X $
$2.$ $\{\varnothing\} \in X $
$3.$$\varnothing \subseteq X $
$4.$ $\{\varnothing\} \subseteq X $
My attempt: I think options $1$, $2$, $3$, and $4$ are all true because the empty set contains no elements.
AI: The empty set is still an object. It contains nothing, but it is something.
It sometimes help to think of sets as plastic bags. We can put all sorts of objects into plastic bags. We can even put plastic bags into plastic bags. The empty set is an empty plastic bag. On the other hand, $X$ is a plastic bag containing letters $a$, $b$, $c$.
Option 1 asks us: is there an empty plastic bag in $X$? No, there isn't: there are three items in there, $a$, $b$, $c$, but no empty plastic bag. So, this is false.
Option 2 presents us with $\{ \emptyset\}$ a plastic bag containing one thing: another empty plastic bag. Can we find such an object in $X$? Again, we can't: there are three items in there, none of which are plastic bags.
Option 3 presents us with our empty plastic bag again, but asks us about subsets. The question now is, can we find everything in our empty plastic bag inside $X$? Well, there's nothing in our empty plastic bag and all of that "nothing" lies inside $X$. For every one of the $0$ objects we could pull out of the empty plastic bag, they all belong to $X$. This makes this option true!
Option 4 asks us if our plastic bag $\{\emptyset\}$, containing just an empty plastic bag, also has this property: that no matter what object we pull from it, this object will belong in $X$. In this case, we can pull only the empty plastic bag from $\{ \emptyset\}$. Can we find this empty plastic bag in $X$? No; indeed this option is logically equivalent to option 1.
I don't know if this analogy will help you, but it sometimes helps my students.
|
H: Prove that the following quadratic form is positive semi-definite. (Simplification)
The quadratic form that was given is below:
$$n\sum_{i = 1}^{n} x_i^2 - (\sum_{i = 1}^{n} x_i)^2 \quad (1)$$
So far, I've simplified it to
$$(n-1)x_1^2 - 2x_1x_2 - ... - 2x_1x_n \\
+ (n-1)x_2^2 - 2x_2x_3 - ... - 2x_2x_n\\
+ ... \\
+ (n-1)x_n^2 \quad (2)$$
which I know is equal to the following:
$$\sum_{1 \leq i< j\leq n} (x_i - x_j)^2 \quad (3)$$
I can see how they're all equivalent, but how would one take the steps to simplify from (2) to (3)? What kind of intuition or techniques are involved?
Thank you for any help.
AI: One approach is as follows. We can write
$$
\begin{align}
n\sum_{i = 1}^{n} x_i^2 - \left(\sum_{i = 1}^{n} x_i\right)^2 &=
n\sum_{i=1}^n x_i^2 - \sum_{i,j=1}^n x_ix_j
= n\sum_{i=1}^n x_i^2 - \sum_{i=1}^n x_i^2 - \sum_{i\neq j}^n x_ix_j
\\ & = (n-1)\sum_{i=1}^n x_i^2 - 2\sum_{i< j}^n x_ix_j
\\ & = \left(\sum_{i<j} x_i^2 + x_j^2\right) - 2\sum_{i< j}^n x_ix_j
\\ & = \sum_{i<j} (x_i^2 - 2x_ix_j + x_j^2) = \sum_{i<j}(x_i - x_j)^2.
\end{align}
$$
|
H: How to find the number of spanning trees?
Suppose $G$ is a $k$-regular graph with $n$ vertices and with eigenvalues
$k = λ_1 > λ_2 ≥ \cdots ≥ λ_n.$
Find the number of spanning trees in $G$.
AI: You can use Kirchhoff's Theorem, which states that the number of spanning trees is
$$ t(G) = \frac{1}{n} \lambda_1 ' \lambda_2 '... \lambda_{n-1} '$$
Notice that only $n-1$ eigenvalues are taken in the formula. The eigenvalue which is missing is the smallest eigenvalue. All eigenvalues must be real since it is a symmetric matrix.
Here the eigenvalues are of the Laplacian Matrix that is equal to the difference between the graph's degree matrix (a diagonal matrix with vertex degrees on the diagonals) and its adjacency matrix (a $(0,1)$-matrix with $1$'s at places corresponding to entries where the vertices are adjacent and $0$'s otherwise).
Since this is a $k$ regular graph, the degree matrix is basically $kI$ where $I$ is the identity matrix. Now we have to figure out how eigenvalues change when (i) they are multiplied by $-1$ (scalar) and (ii) when a constant is added to each diagonal element. You can try to answer this yourself. The answers are (i) unchanged except for their sign which flips and (ii) eigenvalues are increased by same constant. After this the correct answer should be evident,
$$ \boxed{ t(G) = \frac{1}{n} (k - \lambda_2) (k - \lambda_3) ... (k - \lambda_n)}$$
But here the missing eigenvalue is the largest among the given eigenvalues of the graph, since the signs flipped when multiplying by $-1$.
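A quick sketch validating the boxed formula on $K_5$ (4-regular, adjacency eigenvalues $4,-1,-1,-1,-1$) against Cayley's formula $t(K_n)=n^{n-2}$:

```python
import numpy as np

n, k = 5, 4
A = np.ones((n, n)) - np.eye(n)              # adjacency matrix of K_5
lam = np.sort(np.linalg.eigvalsh(A))[::-1]   # lam[0] = k = 4 is the one dropped

t = np.prod(k - lam[1:]) / n
print(round(t), n ** (n - 2))                # both 125
```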
|
H: Is there some relationship between a square constraint and $l_\infty$ constraint?
A square constraint is a constraint of the type $$\{x \in \mathbb{R}| x \in [a,b], a < b\}$$ (the generalization to arbitrary dimension uses Cartesian products, with $<$ applied componentwise)
$$\{x \in \mathbb{R}^n| x \in [a,b]^n, a < b\}$$
We know that the $l_\infty$ norm of a vector $x$ has sublevel sets (preimages of $[0,a]$) that look like squares
$$\|x\|_\infty = \max_{i = 1, \ldots, n} |x_i|$$
Is there some connection between the $l_\infty$ ball and the square constraint (similar to how there is a relationship between the Euclidean ball and the circle constraint)?
AI: Indeed there is a connection. For $x \in \mathbb R^n$, the $\ell_\infty$ ball
$$\{x \in \mathbb R^n: \| x\|_\infty \leq a\}$$
is equivalent to saying that $|x_i| \leq a$ for every $i \in \{1, \dots, n\}$. This in turn corresponds to the square constraint
$$\{x \in \mathbb R^n: x \in [-a,a]^n\}. $$
|
H: Are null sets the only ones that are disjoint but not distinct?
While studying analysis (which includes a chapter about set theory) I read this:
Two sets $A,B$ are said to be disjoint if $A ∩ B = ∅$. Note that this is not the same concept as being distinct, $A \neq B$. For instance, the sets $\{1, 2, 3\}$ and $\{2, 3, 4\}$ are distinct (there are elements of one set which are not elements of the other) but not disjoint (because their intersection is non-empty). Meanwhile, the sets $∅$ and $∅$ are disjoint but not distinct.
Note: Not including curly braces for the empty sets isn't a typo on my part, I copied everything as it is.
While I understand why these $2$ sets are disjoint but not distinct, it seems very illogical to me that any sets with non-null elements would have this property.
I think the same would apply to sets such as $\{∅,∅\}$ and $\{∅,∅\}$ (someone correct me if I am wrong), but I can't think of any examples such that the set has at least one element that is not null. So is it the case that null sets are the only ones that are disjoint but not distinct (as this wasn't mentioned in the textbook), or am I missing something?
AI: I think you are a bit confused about set notation. There is a difference between the set $\emptyset$ and the set $\{\emptyset\}$. The first one is a set with no elements, and it has cardinality $0$. The second one is a set with one element, namely the empty set, and it has cardinality $1$.
Moreover, a set is a collection of distinct objects. Sets cannot include the same element twice. That is to say, the set $\{\emptyset, \emptyset\}$ is non other than just $\{\emptyset\}$. It has cardinality $1$.
Now, regarding your question, as noted in the comments, if two sets $A$ and $B$ are disjoint but not distinct then they both have to be the empty set $\emptyset$.
Be careful, the sets $\{\emptyset\}$ and $\{\emptyset\}$ are neither distinct, nor disjoint. We have that
$$\{\emptyset\}\cap\{\emptyset\} = \{\emptyset\} \neq \emptyset$$
This might sound a bit confusing at first, but it is absolutely necessary to take the time to understand all of the above.
|
H: Proof that two iid Gaussian random variables are conditionally independent of their mean given their sum
My question is a simplification of a statement in this book that i.i.d. Gaussian random variables $X_1, X_2, ..., X_n \sim \mathcal{N}(\Theta, 1)$ are conditionally independent of $\Theta$ given their sum $X_1 + ... + X_n$.
I started working on the statement with $n=2$, aiming to show that the conditional distribution of $X_1,X_2$ does not involve the parameter $\Theta$. Immediately, I run into the problem of having the region $X_1 + X_2 = c$ having zero area in $\mathbb{R}^2$. Because of this, I cannot talk about
$$
\int f_{X_1,X_2|Z}(x,y|z) dxdy
$$
where $Z=X_1+X_2$ because any neighborhood of any point in $X_1+X_2=z$ for a fixed $z$ will contain points outside of the restricted region. Working with a conditional distribution does not look promising to me.
My question is: is there an alternative way to show conditional independence of $X_1,X_2$ with the mean $\Theta$ given their sum $X_1+X_2$?
AI: Yes, there is! It is enough to prove that $T=\sum_i X_i$ is a sufficient statistic for $\theta$
(you can prove it in many ways, e.g. the factorization theorem).
Moreover, as the Gaussian belongs to the exponential family, $T$ is not only sufficient but also complete and minimal.
Now what you are looking to prove is exactly the definition of sufficiency of $T$.
|
H: If $\lim_{x\to\infty} {\{x-f(x)\}}=2$, then find $\lim_{x\to\infty} \frac{\sqrt{x+1}-\sqrt{f(x)}}{\sqrt{x}-\sqrt{f(x)}}$
Question: If $\lim_{x\to\infty} {\{x-f(x)\}}=2$, then find $$\lim_{x\to\infty} \frac{\sqrt{x+1}-\sqrt{f(x)}}{\sqrt{x}-\sqrt{f(x)}}$$
I changed the form to use the condition:
$$\lim_{x\to\infty} \frac{x+1-f(x)}{x-f(x)}\cdot\frac{\sqrt{x}+\sqrt{f(x)}}{\sqrt{x+1}+\sqrt{f(x)}}$$
Even though I can get a value of
$\lim_{x\to\infty}\frac{x+1-f(x)}{x-f(x)}$ which is 3/2 I guess, I couldn't find the infinite limit of the second fraction. I also thought about using squeeze like
$$\frac{\sqrt{x}}{\sqrt{x+1}+\sqrt{f(x)}}\le\frac{\sqrt{x}+\sqrt{f(x)}}{\sqrt{x+1}+\sqrt{f(x)}}\le\frac{\sqrt{x}+\sqrt{f(x)}}{\sqrt{x+1}}$$
but still did not work well.
I also thought of changing the condition as
$$\lim_{x\to\infty} {x\left(1-\frac{f(x)}{x}\right)}=2$$
Since $\lim_{x\to\infty}x=\infty$, it might follow that $$\lim_{x\to\infty} {f(x)\over x}=1$$
but is this procedure always right? If so, then I think I can get a infinity limit of the second one.
I could predict the answer as just putting $f(x)=x-2$, which in turn we get 3/2. But, as you know, there can exist other possible answers, so I need precise solutions.
Could you please give me some ideas for the question? Thanks.
AI: Since $\displaystyle \lim_{x \to \infty} \left(x-f(x)\right) = 2$, there exists $N\gg 0$ such that $x > N \implies 1< x-f(x) <3$, i.e. $x-3 < f(x) < x-1$. Then for $x>N$ we have
$$ \dfrac{\sqrt x + \sqrt{x-3}}{\sqrt{x+1} + \sqrt{x-1} } \le \dfrac{\sqrt x + \sqrt{f(x)}}{\sqrt{x+1} + \sqrt{f(x)} } \le \dfrac{\sqrt x + \sqrt{x-1}}{\sqrt{x+1} + \sqrt{x-3} }$$ and here you can proceed with squeezing.
|
H: In how many ways can Miyad do the trick?
The numbers 1 to 6 are written on six balls. Miyad took four balls from there such that if he multiplies the numbers on the balls and then tells the product to Mun, then Mun can’t determine the numbers on the balls with certainty. In how many ways can Miyad do this?
This is a Math Olympiad question and I'm slightly confused by line 2. Can someone please help?
AI: The prime $5$ occurs in only one of the numbers, so Mun can tell whether it’s one of the $4$ balls, and if it is, divide it out of the product. Thus, we can split the problem into two cases: Choose $4$ numbers without $5$ such that their product is not unique, or choose $3$ numbers in addition to $5$ such that their product is not unique.
The first case is impossible, since only one of the other $5$ numbers is omitted, so Mun can find that number by dividing the product of all $5$ other numbers by the given product.
For the second case, if we choose two different triples out of $5$ numbers, they have at least one number in common. They can’t have the same product if they differ only in one number; so they must have exactly one number in common. The only two pairs of numbers up to $6$ that have the same product are $2\cdot3=1\cdot6$ and $2\cdot6=3\cdot4$.
Thus, there are exactly four ways in which Miyad can choose the numbers such that their product doesn’t uniquely identify them, namely $\{1,4,5,6\}$ and $\{2,3,4,5\}$ (for $2\cdot3=1\cdot6$) and $\{1,2,5,6\}$ and $\{1,3,4,5\}$ (for $2\cdot6=3\cdot4$).
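A brute-force sketch confirming these four sets:

```python
from itertools import combinations
from math import prod
from collections import Counter

subsets = list(combinations(range(1, 7), 4))
freq = Counter(prod(s) for s in subsets)
ambiguous = [s for s in subsets if freq[prod(s)] > 1]
print(ambiguous)   # (1,2,5,6), (1,3,4,5), (1,4,5,6), (2,3,4,5)
```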
|
H: Find $\lim \limits_{x \to 2} {sin(x)cos(x)e^{cos(x)}\over ln(x)}$
The limit can be found if I use a calculator, but how do I find it without using one? I tried to use the Taylor series and this is what I have so far, but it doesn't look right:
$${(x-{x^3 \over 3} + o(x^5)) (1 - {x^2\over2} + o(x^4))(1 + cos(x) + o(cos(x))^2)\over ln(x)}$$
How do I proceed further?
AI: The first step in finding a limit is plugging in the value your variable tends to (here your variable is $x$ and it is tending to the number $2$); sometimes you are lucky and it yields a number instead of an indeterminate form.
$\lim \limits_{x \to 2} {\sin(x)\cos(x)e^{\cos(x)}\over \ln(x)} = {\sin(2)\cos(2)e^{\cos(2)}\over \ln(2)}$
Putting this into a calculator would yield $-0.360078...$, you will never be asked to find the result of such a fraction without a calculator.
|
H: A problem in sequences and series
Three numbers in AP are removed from the first $n$ consecutive natural numbers and the average of the remaining numbers is found to be $\frac{43}{4}$. Find $n$ as well as the removed numbers if one of the removed numbers is a perfect square.
AI: If we let $k-d$, $k$, $k+d$ be the 3 terms in AP which are removed, then we are trying to find integer solutions to $$\dfrac{1}{n-3} \left( \dfrac{n(n+1)}{2} - 3k \right) = \dfrac{43}{4}$$
We can conclude that $n \geq 18$, because if $n = 17$, the greatest average we could get would be to remove $1$, $2$, $3$, giving an average of $10.5 < 43/4$. And of course, if $n < 17$ the average would be even smaller. Similarly $n \leq 23$.
Now, in the above equation, since $n(n+1)/2 - 3k$ is an integer, and we need the denominator to be $4$, we must have that $n-3$ is divisible by $4$. This gives only two possible values: $n=19$ and $n=23$. Putting $n=23$ into our original equation gives $k = 61/3$, which is bogus. However, $n=19$ gives $k=6$. Restricting one of the terms removed to be a square, any of $d=2,3,5$ will work.
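A brute-force sketch over a generous range of $n$ (the equation and constraints translate directly):

```python
from fractions import Fraction
from math import isqrt

for n in range(4, 60):
    for k in range(2, n):                    # middle removed term
        if Fraction(n * (n + 1) // 2 - 3 * k, n - 3) != Fraction(43, 4):
            continue
        for d in range(1, min(k - 1, n - k) + 1):
            terms = [k - d, k, k + d]
            if any(isqrt(t) ** 2 == t for t in terms):
                print(n, terms)              # n = 19: [4,6,8], [3,6,9], [1,6,11]
```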
|
H: Reciprocal of square root of a binomial to series.
By the square root algorithm and long division (or by the binomial theorem) it is a simple matter to find $1/\sqrt{1-x^2} = 1+x^2/2 + 3x^4/8 + 5x^6/16 + \cdots$
I am new to this kind of thing. Can someone explain, step by step, the method to convert this reciprocal of a radical to a series?
I know there is a formula for the binomial expansion, but how do you apply it to a reciprocal, and how does it become an infinite series?
Thanks.
AI: If $\alpha\in\Bbb R$ and if $|x|<1$, we have$$(1+x)^\alpha=1+\binom\alpha1x+\binom\alpha2x^2+\cdots,$$where$$\binom\alpha n=\frac{\alpha(\alpha-1)\cdots(\alpha-(n-1))}{n!}.$$Applying this (with $-x^2$ instead of $x$, which is not a problem, since $|x|<1\implies|-x^2|<1$) and $\alpha=-\frac12$, we have the equality that you're after.
|
H: Approximate solution to a transcendental equation in the limit of a variable
I have the following transcendental equation:
$$2\cot{x}=\frac{kx}{hL}-\frac{hL}{kx}\tag1$$
I would like to inquire whether an approximate solution to $(1)$ can be developed in the limit $h\rightarrow0$.
Some typical values are $k=16$, $L=0.25$, $h=0.001$. Using numerical methods, the following solutions are obtained for this set of parameters:
x = 0.00559016, 3.1416, 6.28319, 9.42478, 12.5664, 15.708, ....
AI: Changing notation, you want to find the zeros of the function
$$f(x)=\cot(x)-a x+\frac 1{ax}$$ where $a$ is potentially large.
Instead, in order to remove the discontinuities, I prefer to consider the function
$$g(x)=x \cos (x)-a \left(x^2-1\right) \sin (x)$$ Developing as series around $x=k \pi$ and using later series reversion, we have as an approximation
$$\color{blue}{x_k=t+\frac{t}{a \left(t^2-1\right)}-\frac{t \left(t^2+1\right)}{a^2 \left(t^2-1\right)^3}-\frac{t \left(t^6-8 t^4-14 t^2-3\right)}{3 a^3 \left(t^2-1\right)^5}+O\left(\frac{1}{a^4}\right)}$$ where $\color{red}{t=k \pi}$.
Using your numbers $(k=16, L=0.25,h=0.001)$ gives $a=32000$. Let us use only the first and second term since $a$ is effectively very large. The first roots will then be
$$\{0,3.1416037,6.2831904,9.4247813,12.566373,15.707965,18.849558\}$$ which seems to be OK.
Notice that I missed the very small first root because of the transformation from $f(x)$ to $g(x)$.
Edit
For the small root, we need a special treatment
$$\cot(x)-a x+\frac 1{ax}=0 \implies a=\frac{\left(\frac{\sqrt{5 x^2-3 x^2 \cos (2 x)}}{\sqrt{2}}+x \cos (x)\right) \csc (x)}{2 x^2}$$ Developing as series around $x=0$ gives
$$a=\frac{1}{x^2}+\frac{2}{3}-\frac{31 x^2}{45}+\frac{1069 x^4}{945}+O\left(x^6\right)$$ Now, series reversion
$$x=\frac 1{\sqrt a} \left( 1+\frac{1}{3 a}-\frac{8}{45 a^2}+\frac{53}{630
a^3}+O\left(\frac{1}{a^4}\right)\right)$$ which, for $a=32000$ gives $x=0.005590228174$ while the exact solution is exactly the same.
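A sketch comparing the two-term approximation against numerically computed roots of $g$, bracketing each root around $k\pi$:

```python
import numpy as np
from scipy.optimize import brentq

k, L, h = 16, 0.25, 0.001
a = k / (2 * h * L)                           # a = 32000

g = lambda x: x * np.cos(x) - a * (x**2 - 1) * np.sin(x)

for m in range(1, 7):
    t = m * np.pi
    root = brentq(g, t - 0.5, t + 0.5)
    print(root, t + t / (a * (t**2 - 1)))     # agree to many digits
```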
|
H: What are $f\circ g(x)$, $f(g(x))$, $f^n(x)$, $f\circ f^{n-1}(x)$, and $(f\circ g)^{2012}(x)$?
I found a math problem assuming $f\circ g(x)=f(g(x))$, $f^n(x)=f\circ f^{n-1}(x)$, and $(f\circ g)^{2012}(x)=0$.
What do they mean actually? Please describe in a way that I understand.
AI: $(f \circ g)(x)$ is the composition of two functions. It is described by the first equation
$$(f \circ g)(x)=f(g(x))$$
As an example:
If $f(x)=x^2$ and $g(x)=2x$ then
$$(f \circ g)(x)=f(g(x))=f(2x)=(2x)^2=4x^2$$
In other words, you use the value of the inner function as the new argument for the outer function and evaluate there.
The other two equations seem strange. They are probably just true for this exercise and you need to work with them further. But with the knowledge of composition, hopefully it's clearer now what they mean. :)
|
H: Let $z, w \in \mathbb{C}$. Prove that if $zw$ and $z + w \in \mathbb{R}$, then $z = \bar{w}$
Let $z, w \in \mathbb{C}$. Prove that if $zw$ and $z + w \in \mathbb{R}$, then $z = \bar{w}$. So I have started to see complex numbers in class and by reading I found this problem. I know that $$Re(z)= \frac{z + \bar{z}}{2},$$ but I don't seem to get to the answer. I also tried using other basic properties of $\mathbb{C}$.
Any help/hints?
AI: Let $z=a+ib$, $\omega = c + id$, with $a,b,c,d \in \mathbb{R}$. We know that:
$$ z \omega = ac + iad + ibc - bd \in \mathbb{R} $$
This implies that:
$$ ad+bc=0 $$
that is, $ad=-bc$. Moreover, we also know that:
$$ z + \omega = a+c + i(b+d) \in \mathbb{R}$$
which implies that
$$ -b=d $$
Consequently:
$$ ad= dc $$
Suppose $d \neq 0$. Then, this implies that $a=c$, and thus:
$$ z= a +ib = c - id = \overline{\omega} $$
If $d=0$, then $\omega = c$ and:
$$ b=-d=0 $$
that is, $z=a$. It is then clear that the result does not hold in this case: indeed, if it did, it would mean that just because the sum of two real numbers is real and their product is real, the numbers are certainly equal, which is absurd.
|
H: Proving doubly stochastic matrix limit
Let $n > 0$ and let $(X_n)$ be an irreducible aperiodic Markov chain having a doubly stochastic transition matrix.
By definition, $\sum_{y∈S} P(x,y) = 1$ and $\sum_{x∈S} P(x,y) = 1$ for all $x, y ∈ S$.
I want to show that if $|S| < \infty$ then $P^{n}(x,y)\rightarrow 1/|S|$ as $n\rightarrow \infty$.
Could somebody please help? Thanks in advance!!!
AI: A basic theorem in MC theory shows that the limit exists and is independent of $x$. Let $\pi (y)=\lim P^{n}(x,y)$. Now $\sum_{x \in S} P(x,y)=1$ also gives $\sum_{x \in S} P^{n}(x,y)=1$ for all $n$. Since this is a finite sum we can take limit as $n \to \infty$ to see that $\sum_{x \in S} \pi (y)=1$ which means $|S| \pi (y)=1$ or $\pi (y)=\frac 1 {|S|}$.
Reference for the limit theorem: Theorem III.2.1, p. 67 of 'Markov Chains: Theory and Applications' by Isaacson and Madsen.
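A quick numerical illustration (my addition, with an arbitrarily chosen doubly stochastic matrix):

    # Powers of a doubly stochastic, irreducible, aperiodic transition
    # matrix approach the matrix with every entry 1/|S|.
    import numpy as np

    P = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.5, 0.3],
                  [0.3, 0.2, 0.5]])   # rows and columns both sum to 1

    Pn = np.linalg.matrix_power(P, 50)
    print(Pn)   # every entry is ~1/3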
|
H: Where is the loophole in this homotopy argument?
Suppose $f,g:[0,1]\to [0,1]$ are two maps. Then $f$ is homotopic to $g$, since $[0,1]$ is a convex space, and if $h:[0,1] \to X$ is a path in an arbitrary space $X$, it follows that $h \circ f$ is homotopic to $h \circ g$, since homotopy is preserved under composition.
Now, if I take $f=Id_{[0,1]}$ and $g=0$, then this implies that $h \circ f = h$ is homotopic to $h \circ g = c_{h(0)}$, i.e. any path in $X$ is homotopic to a constant map, which is not true in general.
What am I missing here ?
AI: To close the question: any path, being a map out of the contractible space $[0,1]$, is homotopic to a constant map, which implies that all paths into a path-connected space are homotopic. However, this is no longer true if we require the endpoints to be fixed during the homotopy.
|
H: Circular Bayes Theorem?
I came across a seemingly simple Bayes Theorem question, but I am going in circles currently to try to figure it out.
I am given only that $$P(A) = .78$$ $$P(B|A) = .66$$ $$ P(B'|A') = .45$$
I am trying to find P(A'|B'). Obviously the first step is to find P(B).
So far I have shown that $$P(B|A) = \frac{P(B)*P(A|B)}{P(A)} \rightarrow P(B)*P(A|B) = .5148$$ So to find $P(B)$ we need to find $P(A|B)$:
$$P(A|B) = \frac{P(A)*P(B|A)}{P(B)} = \frac{.5148}{P(B)}$$ so this doesn't help at all.
The other way to find P(B) is by $$P(B) = P(A)*P(B|A) + P(A')*P(B|A') $$ but we are not given P(B|A') or P(B'|A) so we can't find either.
How would you go about solving this?
AI: Hint: $P(B'|A) = 1 - P(B|A)$.
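Carried through with the given numbers (my worked check of where the hint leads):
$$P(B'|A)=1-0.66=0.34,\qquad P(B')=P(A)P(B'|A)+P(A')P(B'|A')=0.78\cdot 0.34+0.22\cdot 0.45=0.3642,$$
$$P(A'|B')=\frac{P(B'|A')P(A')}{P(B')}=\frac{0.45\cdot 0.22}{0.3642}\approx 0.272.$$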
|
H: Are computer integers a finite group (under addition with overflow)?
The integers and the integers modulo a prime are both groups under addition.
What about the computer representation of integers (e.g. int64)?
It's closed under addition, since a sum that is too large wraps around to the negatives. It also inherits the other group properties from the integers (associativity, identity, inverse).
So int64 seems like a finite group, but am I missing anything?
AI: If you just let overflows happen without doing anything about it, and in particular with 2's complement representation (or unsigned), a computer's $n$-bit integer representation becomes the integers modulo $2^n$. So yes, you're entirely right: It is a finite group (and with multiplication becomes a finite ring).
(As a side note, working with and thinking about 2's complement became a lot easier for me once I realized this. No one really told me during my education, so for ages I was stuck having to remember all the details in the algorithm for taking negatives, i.e. actually taking the 2's complement. Now that I have the algebraic knowledge of what's actually going on, I can just deduce the algorithm on the fly whenever I need it.)
It's not entirely obvious to check explicitly that they satisfy, say, associativity when overflow is in the picture. It's easier to set up the obvious bijection with the integers modulo $2^n$ and show that addition stays the same, and prove the group properties that way.
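A minimal sketch of the wraparound in code (my addition; Python integers are unbounded, so the $n$-bit behavior is emulated explicitly, and the bit width and examples are arbitrary):

    # n-bit two's complement addition is addition in Z/2^n, reinterpreted
    # into the signed range [-2^(n-1), 2^(n-1)).
    N = 8  # bits

    def wrap(x, n=N):
        x %= 2**n                                  # reduce mod 2^n
        return x - 2**n if x >= 2**(n-1) else x    # map to signed range

    print(wrap(127 + 1))    # -128: overflow wraps around
    print(wrap(-1 + -1))    # -2:   inverses behave as in Z/2^n
    print(wrap(100 + 100))  # -56,  i.e. 200 - 256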
|
H: If A is a non-proper subset of B, can we deduct A=B?
I have the following definition:
Let $A,B$ be sets. We say that $A$ is a subset of $B$, denoted $A ⊆ B$, iff every element of A is also an element of B, i.e.
For any object $x$, x$ ∈ A ⇒ x ∈ B$.
We say that $A$ is a proper subset of $B$, denoted $A \subsetneq B$, if$ A ⊆ B$ and
$A\neq B$.
Note: I know that "$⊆$" is usually used to denote a proper subset and "$\subset$ " is used to denote a subset instead but I guess that is beyond the point of the question, I just copied the textbook content.
My question:
Logically speaking (according to the definitions), $A$ being a subset of $B$ shouldn't imply $A=B$: it only means that for any object $x$, $x \in A \Rightarrow x \in B$, and to conclude $A=B$ we would also need that for any object $y$, $y \in B \Rightarrow y \in A$. However, if $A$ is a subset of $B$ and $A \neq B$, then we would just call it a proper subset instead of merely a subset, so I find this kind of contradictory.
AI: By definition, $A$ is a proper subset of $B$ if $A \subseteq B$ and $A \neq B$, so if $A$ is a subset of $B$ that is not proper, then $A=B$.
Note that $A = B$ if and only if $A \subseteq B$ and $B \subseteq A$.
|
H: Can the following conditional value be expressed using the min and max functions?
For example, consider $y$ such that
$$
y=\cases{c, & if $c\le0$,\\0, & if $c>0$.}
$$
Then, it can be easily expressed as $$y=\min\{c, 0\}.$$
Using the above trick, can I express the following $y$ using $\max$ or $\min$ ?
$$
y = \cases{c, & if $c\le d$,\\ 0, & if $c>d$.}$$
AI: The second function is discontinuous; you will need an extra "step" term, like
$$y=\min\{c-d,0\}+d\cdot(c\le d).$$
If you don't mind having an undefined value at $c=d$, you can "emulate" with
$$y=\min\{c-d,0\}\frac c{c-d}.$$
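A brute-force check of both formulas over a small grid (my addition):

    # Verify both expressions against the piecewise definition,
    # using (c <= d) as a 0/1 value.
    from itertools import product

    def target(c, d): return c if c <= d else 0
    def f1(c, d): return min(c - d, 0) + d * (c <= d)
    def f2(c, d): return min(c - d, 0) * c / (c - d)   # undefined at c == d

    for c, d in product(range(-3, 4), repeat=2):
        assert f1(c, d) == target(c, d)
        if c != d:
            assert f2(c, d) == target(c, d)
    print("both formulas agree")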
|
H: measurable finite-valued function
I am reading the measurable function in the book of stein's real analysis. It is stated that
The finite-valued function $f$ is measurable if and only if $f^{-1}(O)$ is measurable for every open set $O$.
I am not sure whether the 'finite-valued' condition is necessary. Is there any counter example to verify this condition?
AI: Let $E$ be a non-measurable set. If $f=\infty I_E -\infty I_{E^{c}}$ then the inverse image of any open subset of $\mathbb R$ is empty, hence measurable. However $f$ is not measurable: for instance, $\{f > 0\} = E$ is not measurable.
|
H: Is this version of the dominated convergence theorem true?
Let $I$ be a real interval, $t_0 \in I,$ $E$ a measurable set in $\mathbb{R}$ and $f \colon I \times E \to \mathbb{R}$ a function such that:
$f(t, \cdot) \in L^1(E)$ for every $t \in I;$
There exists $\lim_{t \to t_0} f(t, x) =: F(x) \in \mathbb{R}$ for almost every $x \in E;$
$F \in L^1(E).$
Can we conclude that
$$\lim_{t \to t_0} \int_E f(t, x) \, dx = \int_E F(x) \, dx?$$
It seems to me that it should be true, but I don't see how it can be a consequence of the dominated convergence theorem.
AI: Let $I=E=[0,1]$ and $f(t,x)=\frac 1 t x^{1/t}$ for $t >0$ and $f(0,x)=0$. Take $t_0=0$. Then $F(x)=0$, which is integrable. However, a simple calculation shows that $\int_0^1 f(t,x)\, dx = \frac1t\cdot\frac{1}{1/t+1} = \frac{1}{1+t} \to 1 \neq 0$ as $t \to t_0$.
|
H: $x^2+3x+3$ is irreducible in $\mathbb{F}_{25}[x]$
Give an example of an irreducible non-linear polynomial in $\mathbb{F}_{25}[x]$.
I know that $x^2+3x+3$ is irreducible in $\mathbb{F}_{25}[x]$ but I know no shorter proof than the exhaustive search (since $\mathbb{F}_{25}[x]$ is not a factorial ring we cannot use Eisenstein's criterion).
Can you please help me to find a more elegant idea?
AI: The quadratic formula works very well in any non-characteristic-$2$ field. So the roots of your $x^2+3x+3$ are
$$
\frac{-3\pm \sqrt{3^2 - 4\cdot 1\cdot 3}}{2\cdot 1} = 1\pm3\sqrt2
$$
Since $\Bbb F_{25}$ is a quadratic extension of $\Bbb F_5$, it contains all the square roots of any element in $\Bbb F_5$. So $\sqrt 2$ actually exists in $\Bbb F_{25}$, and your polynomial has roots. In fact, we get that any quadratic polynomial over $\Bbb F_5$ has roots in $\Bbb F_{25}$. So in order to find irreducible quadratics, you need to use at least one of the twenty non-$\Bbb F_5$ elements of $\Bbb F_{25}$.
For instance, we could take $x^2 + \sqrt2$. There is no square root to $\sqrt2$ in $\Bbb F_{25}$ (proof below), so this is irreducible.
All elements in $\Bbb F_{25}$ may be written in a unique way as $a + b\sqrt2$, where $a, b\in \Bbb F_5$. If $\sqrt2$ has a square root in $\Bbb F_{25}$, then we have a solution to
$$
(a + b\sqrt2)^2 = \sqrt2\\
a^2 + 2b^2 + 2ab\sqrt2 = \sqrt2\\
a^2 + 2b^2 = 0\quad\land\quad 2ab = 1
$$
The only solution in $\Bbb F_5$ to $a^2 + 2b^2 = 0$ is $a = b = 0$, but that doesn't solve $2ab = 1$, so $\sqrt{\sqrt2}$ doesn't exist in $\Bbb F_{25}$.
|
H: finding $h$ that make absolute instability for Euler method
I have an Euler method that has this form:
$$\hat{I}(t_{n+1}) = \hat{I}(t_{n})+h\beta \hat{I}(t_{n})[1-\frac{\hat {I}(t_{n})}{N}]$$
which can also be written like
$$\hat{I}(t_{n+1})=\phi (\hat{I}(t_{n}))$$
where $\phi (x)$ is the iteration function down below:
$$\phi(x) = x + h \beta x (1- \frac{x}{N})$$
I use $h=6$ in this method, but if I use an $h$ which is a little bit bigger (for example $h=20$), I get an absolute instability error. I want to find the value of $h$ at which this instability first appears.
Thanks in advance
AI: Divide the equation by $N$ to get a new equation in $x=I/N$ that does not contain $N$. Set $k=h\beta$, so that what remains is the simpler equation
$$
x_{n+1}=x_n+kx_n(1-x_n)=x_n(1+k-kx_n)
$$
Now finally divide by $(1+k)$ and set $y=kx/(1+k)$ to get the discrete logistic map
$$
y_{n+1}=(1+k)y_n(1-y_n)
$$
You can read off the stability of that iteration from any plot of the Feigenbaum diagram.
In summary, for $k\in[0,2]$ you get convergent behavior; after that, periodic solutions, looking increasingly chaotic beyond the Feigenbaum accumulation point $k\approx 2.57$ (i.e. $1+k\approx 3.57$).
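A rough numerical experiment along these lines (my sketch; the parameter values are arbitrary):

    # Iterate x_{n+1} = x_n + k*x_n*(1 - x_n) for a few values of k = h*beta
    # and watch the long-run behavior of x_n = I_n / N.
    def orbit(k, x0=0.5, burn=500, keep=4):
        x = x0
        for _ in range(burn):
            x = x + k * x * (1 - x)
        return [round(x := x + k * x * (1 - x), 4) for _ in range(keep)]

    for k in (0.5, 1.9, 2.3, 2.8):
        print(k, orbit(k))
    # k < 2   : settles on the fixed point x = 1
    # k = 2.3 : a period-2 oscillation
    # k = 2.8 : irregular, chaotic-looking values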
|
H: Lebesgue integral, Is the solution right?
I'am trying to understand Lebesgue integration
Compute
$$\int_{0}^{\pi} f(x)\,dx$$
Where
$$f(x) = \begin{cases}
\sin x & \text{ if } x \in \mathbb{I} \\
\cos x & \text{ if } x \in \mathbb{Q}
\end{cases}$$
I tried this
(L)$\int_{[0,\pi]} f(x)\, dx = (L)\int_{[0,\pi] \cap \mathbb{Q}} f(x)\,dx + (L)\int_{[0,\pi] \cap \mathbb{I}} f(x)\,dx = (L)\int_{[0,\pi] \cap \mathbb{Q}} \cos x\, dx + (L)\int_{[0,\pi] \cap \mathbb{I}} \sin x\,dx = (R)\int_{[0,\pi]} \cos x\, dx + (R)\int_{[0,\pi]} \sin x\,dx = \int_{0}^{\pi} (\sin x+\cos x)\,dx = \sin(\pi) - \sin(0) - (\cos(\pi) - \cos(0)) = 0 - (-2) = 2$
Is my solution correct?
Can you recommend any good materials with examples of Lebesgue integration?
Thank you!
AI: Note that the set $\Bbb Q$ has Lebesgue measure zero. Thus, we outright have
$$\int_{[0,\pi] \cap \Bbb Q} \cos(x)dx = 0$$
since a subset of the rationals will obviously have measure zero. Similarly, $\Bbb I \cap [0,\pi]$ has Lebesgue measure $\pi$, since $\Bbb I = \Bbb R \setminus \Bbb Q$. Consequently, we notice that
$$\int_{[0,\pi]} f(x)dx = \int_0^\pi \sin(x)dx$$
as a result. Your answer still ends up being correct, but only by chance: the working is wrong.
|
H: RVs that are uncorrelated but not independent
Given that $X\sim N(0,1)$, $Z\sim Unif(\{\pm 1\})$, and $Y=XZ$.
$Z$ is independent of $X$. After calculation we found that $Y\sim N(0,1)$.
Show that X and Y are uncorrelated but not independent.
I can only assume that since $Y=XZ$ then, $Y$ must be dependent of $X$.
AI: If $X$ and $Y$ are independent then so are $X^{2}$ and $Y^{2}$. But $Y^{2}=X^{2}$ since $Z^{2}=1$. This makes $X^{2}$ independent of itself, which is possible only when it is a constant. This contradicts the fact that $X \sim N(0,1)$.
By independence $EXY=EX^{2}Z=EX^{2}\,EZ=(1)(0)=0$, and $EX\,EY=(0)\,EY=0$, so the covariance of $X$ and $Y$ is $0$.
|
H: Vector triple product: BAC-CAB rule
I am currently studying Introduction to Electrodynamics, fourth edition, by David J. Griffiths. Chapter 1.1.3 Triple Products introduces the vector triple product as follows:
(ii) Vector triple product: $\mathbf{A} \times (\mathbf{B} \times \mathbf{C})$. The vector triple product can be simplified by the so-called BAC-CAB rule:
$$\mathbf{A} \times (\mathbf{B} \times \mathbf{C}) = \mathbf{B}(\mathbf{A} \cdot \mathbf{C}) - \mathbf{C}(\mathbf{A} \cdot \mathbf{B}). \tag{1.17}$$
Notice that
$$(\mathbf{A} \times \mathbf{B}) \times \mathbf{C} = - \mathbf{C} \times (\mathbf{A} \times \mathbf{B}) = - \mathbf{A}(\mathbf{B} \cdot \mathbf{C}) + \mathbf{B}(\mathbf{A} \cdot \mathbf{C})$$
is an entirely different vector (cross-products are not associative). All higher vector products can be similarly reduced, often by repeated application of Eq. 1.17, so it is never necessary for an expression to contain more than one cross product in any term. For instance,
$$(\mathbf{A} \times \mathbf{B}) \cdot (\mathbf{C} \times \mathbf{D}) = (\mathbf{A} \cdot \mathbf{C})(\mathbf{B} \cdot \mathbf{D}) - (\mathbf{A} \cdot \mathbf{D})(\mathbf{B} \cdot \mathbf{C});$$
$$\mathbf{A} \times [ \mathbf{B} \times (\mathbf{C} \times \mathbf{D})] = \mathbf{B}[\mathbf{A} \cdot (\mathbf{C} \times \mathbf{D})] - (\mathbf{A} \cdot \mathbf{B})(\mathbf{C} \times \mathbf{D}). \tag{1.18}$$
This all seems like total gibberish to me. For vectors $\mathbf{A}$ and $\mathbf{B}$, the expression $\mathbf{A} (\mathbf{B})$ does not make sense. Furthermore, the author claims that $(\mathbf{A} \times \mathbf{B}) \times \mathbf{C} = - \mathbf{C} \times (\mathbf{A} \times \mathbf{B}) = - \mathbf{A}(\mathbf{B} \cdot \mathbf{C}) + \mathbf{B}(\mathbf{A} \cdot \mathbf{C})$; although, it is not clear to me that this is true, nor does the author justify their claim. I do not understand what the "BAC-CAB rule" is supposed to be, nor do I understand the broader points that the author is trying to make in this section.
I would greatly appreciate it if people would please take the time to clarify this.
AI: The notation can be slightly confusing. Note that in the RHS of the following equation
$$\mathbf{A} \times (\mathbf{B} \times \mathbf{C}) = \mathbf{B}(\mathbf{A} \cdot \mathbf{C}) - \mathbf{C}(\mathbf{A} \cdot \mathbf{B})$$
$\mathbf{A} \cdot \mathbf{C}$ is a scalar (because dot products are scalars). This means that $\mathbf{B}(\mathbf{A} \cdot \mathbf{C})$ is just the vector $\mathbf{B}$ scaled by a real number. This operation is well defined. While the proof is slightly involved, some sanity checks can be instructive. For example, we expect that $(\mathbf{A} \times (\mathbf{B} \times \mathbf{C})) \cdot \mathbf{A} = 0$ since cross product of a vector is perpendicular to the vector itself. Indeed, taking the dot product of the RHS with $\mathbf{A}$ yields,
$$(\mathbf{B}\cdot \mathbf{A})(\mathbf{A} \cdot \mathbf{C}) - (\mathbf{A}\cdot \mathbf{C})(\mathbf{A} \cdot \mathbf{B})$$
which is clearly zero since the dot product is commutative. To convince yourself I would suggest
Doing more of these sanity checks on the other equations you've written
Evaluating both sides of these equations by hand for concrete values of $\mathbf{A}, \mathbf{B}, \mathbf{C}$
EDIT: To prove the identity
$$(\mathbf{A} \times \mathbf{B}) \cdot (\mathbf{C} \times \mathbf{D}) = (\mathbf{A} \cdot \mathbf{C})(\mathbf{B} \cdot \mathbf{D}) - (\mathbf{A} \cdot \mathbf{D})(\mathbf{B} \cdot \mathbf{C})$$
use the cyclic property of the scalar triple product
$$(\mathbf{A} \times \mathbf{B}) \cdot (\mathbf{C} \times \mathbf{D}) = \left[(\mathbf{A} \times \mathbf{B}), \mathbf{C}, \mathbf{D}\right] = \left[\mathbf{C}, \mathbf{D}, (\mathbf{A} \times \mathbf{B})\right] = \mathbf{C}\cdot (\mathbf{D}\times(\mathbf{A} \times \mathbf{B}))$$
You can expand the vector triple product using the BAC-CAB rule to get the RHS.
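A numerical spot-check of these identities (my addition, using random vectors):

    # Verify BAC-CAB and the dot product of two cross products numerically.
    import numpy as np

    rng = np.random.default_rng(0)
    A, B, C, D = rng.standard_normal((4, 3))

    bac_cab = np.cross(A, np.cross(B, C))
    assert np.allclose(bac_cab, B * np.dot(A, C) - C * np.dot(A, B))

    lhs = np.dot(np.cross(A, B), np.cross(C, D))
    rhs = np.dot(A, C) * np.dot(B, D) - np.dot(A, D) * np.dot(B, C)
    assert np.allclose(lhs, rhs)
    print("identities hold")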
|
H: Show that the set in the ring of polynomials in $\mathbb Q[x]$ is an ideal of $\mathbb Q[x]$.
In the ring of polynomials with rational coefficients $\mathbb Q[x]$, let
$$I = \{h(x) \in \mathbb Q[x] : (x^2 - 2) \ \text{ divides }\ \ h(x)\}$$
Show $I$ is an ideal of $\mathbb Q[x]$.
I know I must show that the set is an additive subgroup and that if $s\in I$ and $r\in R$ then $rs$ and $sr \in I$, but I am getting confused with the divisibility condition in the first steps. Any help with this proof would be appreciated.
Also, how would I be able to show if this was a maximal ideal?
Thank you.
AI: The condition that $x^2 - 2$ divides a polynomial $h(x) \in \mathbb{Q}[x]$ means that there exists $g(x) \in \mathbb{Q}[x]$ such that $h(x) = (x^2 - 2)g(x)$, i.e. $x^2 - 2$ is a factor of $h(x)$. Now if you have two polynomials $h_1(x)$ and $h_2(x)$ in $I$, then $(x^2 -2 )$ is a factor of both, a common factor. So we can write $h_1(x) + h_2(x) = (x^2 - 2)f(x)$ for some $f(x)$ and you can see the sum is in $I$. The other conditions you need to check follow similarly.
To see that $I$ is maximal, you just need to show that $x^2 - 2$ is irreducible (see why?). This follows from Eisenstein's criterion.
Edit: To go from irreducible polynomial to maximal ideal, you can proceed as follows: Suppose $J\subseteq \mathbb{Q}[x]$ is an ideal that properly contains $I$, so $I \subsetneq J$, and let $f \in J \setminus I$. Then $x^2 - 2$ doesn't divide $f$ (since $f \not\in I$) so their GCD is 1. By the Euclidean algorithm, you can find polynomials $s$ and $t$ such that $1 = f\cdot s + (x^2 - 2)\cdot t$. From here (check!) you get that $1 \in J$, hence $J = \mathbb{Q}[x]$. Thus the only ideal properly containing $I$ is the whole ring, so $I$ is maximal.
|
H: Each face of a rectangle Q ha measure zero in Jordan measure
Definition
Let $A$ a subset of $\Bbb{R}^n$. We say $A$ has measure zero in $\Bbb{R}^n$ if for every $\epsilon>0$, there is a covering $Q_1,Q_2,...$ of $A$ by countably many rectangles such that
$$
\sum_{i=1}^\infty v(Q_i)<\epsilon
$$
Definition
If $Q=[a_1,b_1]\times...\times[a_n,b_n]$ is a rectangle of $\Bbb{R}^n$ then for any $i=1,...,n$ the sets $F_{a_i}=\{x\in Q:x_i=a_i\}$ and $F_{b_i}=\{x\in Q:x_i=b_i\}$ are called the $i$-th faces of $Q$.
Statement
Any face of $Q$ has measure zero in $\Bbb{R}^n$
Unfortunately I can't prove the statement, so I am asking for a proof. I specify that I use the Jordan measure. Could someone help me, please?
AI: Let $F_{a_i}= \{x \in Q \: | \: x_i = a_i\}$ be an i'th face of $Q$. Then for any $\delta > 0$ the rectangle
$$Q_\delta = [a_1,b_1] \times \dots \times [a_i - \delta,a_i + \delta]\times \dots\times [a_n,b_n]$$
is a covering of $F_{a_i}$. The Jordan measure of $Q_\delta$ is
$$v(Q_\delta) = 2\delta \prod_{j\neq i} (b_j-a_j)$$
so for any $\epsilon>0$, we can choose
$$\delta < \frac{\epsilon}{2\prod_{j\neq i} (b_j-a_j)}$$
such that
$$v(Q_\delta) < \epsilon .$$
Alternatively one could argue (depending on the definition of rectangle), that
$$F_{a_i} = [a_1,b_1] \times \dots \times \{a_i\} \times \dots \times [a_n,b_n]$$
is in fact a rectangle, and that $v(F_{a_i})=0$, and that $F_{a_i}$ is clearly a covering of itself.
|
H: Convex conjugate of a function
I am working on the convex conjugate, defined as $\forall y \in \mathbb{R}^n, f^*(y)=\sup_{x \in dom(f)}\langle x,y\rangle-f(x)$, where $f:\mathbb{R}^n \rightarrow \mathbb{R} \cup \{+\infty\} $.
I have to show that for $f(x)=\frac{1}{2}\langle Ax, x \rangle-\langle b,x \rangle, $ where $A$ is a symmetric matrix and $b\in \mathbb{R}^n$, $f^*(y)=\frac{1}{2}\langle A^{-1}(b+y),b+y \rangle $
I tried using the equivalence $y\in\partial f(x)\iff f^*(y)= \langle x,y\rangle-f(x)$, but I am not sure if this works.
Could you help me with that? Thanks a lot
AI: I don't know if there is an easier approach, using some theorems on the convex conjugate maybe, but below I am just working with the definitions you gave:
If we write $$\langle x,y\rangle - f(x) = \langle y - \frac{A}{2}x + b, x\rangle$$
Then we see that we just have to show that, for a fixed $y$, the maximum is attained at $$x = A^{-1} (b+y)$$
Thus, we fix $y$ and differentiate
$$\frac{\partial}{\partial x} \langle y - \frac{A}{2}x + b, x\rangle = y - \frac{A}{2} x + b - \frac{A}{2} x $$
Setting this to 0 will yield the solution as required, provided that A is positive definite.
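For completeness (a step the answer leaves implicit): substituting $x=A^{-1}(b+y)$ back and using the symmetry of $A$,
$$f^*(y)=\left\langle y+b-\tfrac{A}{2}A^{-1}(b+y),\,A^{-1}(b+y)\right\rangle=\tfrac12\left\langle b+y,\,A^{-1}(b+y)\right\rangle=\tfrac12\left\langle A^{-1}(b+y),\,b+y\right\rangle.$$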
|
H: Where is $f(x) = |4x-4-x^2|$ differentiable and where is it $C^1$ and $C^2$.
I'm a university student taking a real analysis paper. I'm currently working through my problem sheet and have arrived at a series of questions regarding smoothness and the like. I'm still getting the hang of these types of questions, so if anyone has any tips or tricks to steer me in the right direction it would be much appreciated.
Regarding the above question: the function is an absolute value, so it is defined piece-wise.
$$f(x)= \begin{cases}
(x-2)^2 & x < 0 \\
-(x-2)^2 & x \geq 0 \\
\end{cases}
$$
$$f(x)= \begin{cases}
4x-4-x^2 & x < 0 \\
x^2-4x+4 & x \geq 0 \\
\end{cases}
$$
$$f'(x)= \begin{cases}
4-2x & x < 0 \\
2x-4 & x \geq 0 \\
\end{cases}
$$
$$f''(x)= \begin{cases}
-2 & x < 0 \\
2 & x \geq 0 \\
\end{cases}
$$
I am confused as to the classification of $C^1$ and $C^2$, as well as general differentiability. I would assume the absolute value ensures the derivative does not exist at $x=2$ for all three, except that the second derivative appears to fail to exist at $x=0$; therefore the function cannot be $C^1$ or $C^2$? I am very new to this, so if anyone can help me out it would be much appreciated!
AI: Note the function is
\begin{align}
f(x) &= |4x - 4 - x^2|\\
&= |x^2 - 4x + 4|\\
&= |(x-2)^2|.
\end{align}
Observe that $(x-2)^2 \ge 0$ for all $x \in \Bbb R$ and so, the $|\cdot|$ can be dropped and we get $$f(x) = (x-2)^2.$$
Now, it is clear that $f$ is infinitely differentiable everywhere.
It can be seen that your original piecewise definition is incorrect because you write $-(x-2)^2$ for $x \ge 0$. If you plug something like $x = 1$, you get a negative value whereas $|\text{something}|$ can never be negative.
|
H: Summation of binomial-like terms
Simplification of the two expressions
$$S_1=n\sum_{k=0}^{n-1} {n-1 \choose k} \frac{(-1)^k}{k+1}$$
and
$$S_2=n\sum_{k=0}^{n-1} {n-1 \choose k} \frac{(-1)^k}{k+3}$$
It seems that $S_1=1$, for example; are these familiar forms,
available in a list of summations?
AI: Hint:
$$\binom{n-1}k\cdot\dfrac1{k+3}=\dfrac{(k+1)(k+2)}{n(n+1)(n+2)}\binom{n+2}{k+3}$$
Let $(k+1)(k+2)=(k+3)(k+2)+A(k+3)+B$
$k=-3\implies(-2)(-1)=B\iff B=2$
$k=-2\implies 0=A(-2+3)+B=A+2\iff A=-2$
$$\implies(k+1)(k+2)\binom{n+2}{k+3}$$
$$=(k+3)(k+2)\binom{n+2}{k+3}+(-2)(k+3)\binom{n+2}{k+3}+2\binom{n+2}{k+3}$$
$$=(n+2)(n+1)\binom n{k+1}-2(n+2)\binom{n+1}{k+2}+2\binom{n+2}{k+3}$$
Now $\displaystyle\sum_{k=0}^{n-1}\binom{n+2}{k+3}(-1)^k$
$=\displaystyle\sum_{r=3}^{n+2}\binom{n+2}r(-1)^{r-3}$
$=-\displaystyle\sum_{r=3}^{n+2}\binom{n+2}r(-1)^r$
$\displaystyle=\sum_{r=0}^2\binom{n+2}r(-1)^r-(1-1)^{n+2}$
Can you take it from here?
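A quick check of the hint with exact arithmetic (my addition; $n=9$ is arbitrary):

    # Verify the binomial identity and the partial fraction with A = -2, B = 2.
    from math import comb
    from fractions import Fraction

    n = 9
    for k in range(n):
        lhs = Fraction(comb(n - 1, k), k + 3)
        rhs = Fraction((k + 1) * (k + 2), n * (n + 1) * (n + 2)) * comb(n + 2, k + 3)
        assert lhs == rhs
        assert (k + 1) * (k + 2) == (k + 3) * (k + 2) - 2 * (k + 3) + 2
    print("hint verified for n =", n)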
|
H: Nine cards $1$ to $9$, drawn twice simultaneously , whats the expected value of the score?
Nine cards numbered $1$ to $9$; two are drawn simultaneously, and $S$ denotes the sum of the $2$ numbers drawn. When $S \leq 5$ the score is $10-S$, and when $S>5$ the score is $2$. What's the expected value of the score?
I don't understand the question "what's the expected value of the score?", so I can't solve it. So far I found the probability of getting a sum $S\leq 5$, which is $1/9$.
AI: There are $\dbinom92=36$ possible draws. Of these, only $4$ have $S\le5$, namely $(1,2),(1,3),(1,4),(2,3)$, scoring $7,6,5,5$ respectively. Everything else ($32$ draws) scores $2$.
The expected value is the average value, and is:
$$\frac{7+6+5+5+32\cdot2}{36}$$
$$=\frac{87}{36}=\frac{29}{12}=2\frac{5}{12}$$
|
H: Help on calculating this integral and changing variables
Calculate the integral $I=\iint_D e^\frac{x^3+y^3}{xy}dA$ where $D$ is the region bounded by $x=y^2$, $x^2=y$ and $x^2=2y$.
Attempt: Let $u=\frac{x^2}{y}$ and $v=\frac{y^2}{x}$. Then we have $1≤u≤2$ and $0≤v≤1$.
$$\frac{∂(u,v)}{∂(x,y)} = \begin{vmatrix} \frac{2x}{y} & -\frac{x^2}{y^2} \\-\frac{y^2}{x^2} & \frac{2y}{x} \end{vmatrix}=3⇒\frac{∂(x,y)}{∂(u,v)}=\frac{1}{3}$$
$$I=\frac{1}{3}\int\limits_{1}^{2}du\int\limits_{0}^{1}e^{u+v}dv=\frac{1}{3}\int_{1}^{2}e^udu\int\limits_{0}^{1}e^vdv$$
Now how should I continue this? Is my approach correct?
AI: Using $u = \frac{x^2}{y}$ and $v = \frac{y^2}{x}$ instead we get a Jacobian of $\frac{1}{3}$ by the same process. The boundaries convert into constants:
$$\begin{cases} y = x^2 \\ 2y = x^2 \\ x = y^2 \\ \end{cases} \implies \begin{cases} u = 1 \\ u = 2 \\ v = 1 \\ \end{cases}$$
However, this does not define a closed region. Given that the original region of integration was strictly in the first quadrant, $v > 0$ which means our last choice of bounds for $v$ could either be some function above or below the line $v = 1$ in the $uv$ plane.
Taking into account that the origin in the $xy$ plane is where the two $u$ curves intersect, we have that
$$uv = xy = 0$$
which is the equation for the axes exactly (a degenerate hyperbola). Thus the missing boundary could either be $u=0$ or $v=0$, but only $v=0$ fits the region (the curves for $0\leq u \leq 1$ aren't between the two parabolas in the $xy$ plane, but the curves for $0\leq v \leq 1$ are). Thus the integral becomes
$$\frac{1}{3} \int_0^1 \int_1^2 e^{u+v}\:du\:dv = \frac{1}{3}(e^2-e)(e-1) = \frac{e(e-1)^2}{3}$$
|
H: Solving $C^{12}_{x} + C^{12}_{x+1} = C^{13}_{2x}$.
$C^{12}_{x} + C^{12}_{x+1} = C^{13}_{2x}$
I did find by brute force the solutions $x=1$ and $x=4$, through the inequalities $2x \le 13, x \ge 0 \implies x \in \{0,1,2,3,4,5,6\}$
But is there a more analytical way to solve this?
Here is my attempt:
$C^{12}_{x} + C^{12}_{x+1} = C^{13}_{2x}$
$C^{13}_{x+1} = C^{13}_{2x}$
$\dfrac{13!}{(x+1)!(13-x-1)!}=\dfrac{13!}{(2x)!(13-2x)!}$
$\dfrac{(2x)!}{(x+1)!}=\dfrac{(13-x-1)!}{(13-2x)!}$
$\dfrac{(2x)(2x-1)...(2x-x)(2x-(x-1))!}{(x+1)!}=\dfrac{(13-x-1)!}{(13-2x)!}$
$(2x)(2x-1)...x=\dfrac{(13-x-1)(13-x-2)...(13-x-(x+1))(13-2x)!}{(13-2x)!}$
$(2x)(2x-1)...x=(13-x-1)(13-x-2)...(13-2x+1)$
Wolfram says these products can be written in terms of Pochhammer symbols as:
$x(x+1)_{x} = \dfrac{2(x-6)(13-2x)_{x+1}}{x-13}$
But I have no idea how to solve this.
Also, are numerical methods the only way to get the non-natural solutions $-9,-8,-7,-6,-5,-4,-3,-2$?
Thanks.
AI: Using the property
$$C_r^n+C_{r-1}^n=C_r^{n+1}$$
On the LHS, with $n=12,r=x+1$ you have
$$C^{13}_{x+1}=C^{13}_{2x}$$
You now have either $x+1=2x$ or $2x+(x+1)=13$.
From here, it's fairly easy to get $x=1,4$.
|
H: Convergence of random variables series
Let $\xi_1, \dots, \xi_n, \dots$ be independent random variables such that $\mathbb{P}(\xi_n=\pm 1/n)=1/2$.
I want to prove that if $\phi : \mathbb{N} \rightarrow \mathbb{N}$ is any bijection of natural numbers, then the series
$\sum_{n=1}^{\infty} \xi_{\phi(n)}$ converges a.s.
In particular, that $\sum_{n=1}^{\infty} \xi_n$ converges a.s
I think I can use Borel-Cantelli lemma, but I can't figure out how exactly. Please help.
AI: This is an easy consequence of Kolmogorov's Three Series Theorem. Note that $E\xi_{\phi (n)}=0$ for all $n$ and $E\xi^{2}_{\phi (n)}=\frac 1{\phi(n)^{2}}$. Since $\sum _n \frac 1{\phi(n)^{2}} <\infty$ we get almost sure convergence of $\sum \xi_{\phi (n)}$.
[Ref: https://en.wikipedia.org/wiki/Kolmogorov%27s_three-series_theorem
Take $A=1$ in this theorem].
|
H: exponentiation of cardinal number
How does one prove that $a^{x}\leq b^{y}$ for any cardinal numbers $a, b, x, y$ with $a\leq b$, $x\leq y$?
AI: We can assume that $b > 0$: if $b = 0$ then $a = 0$, and $0^x \le 0^y$ holds automatically unless $0 = x < y$ (where $0^0 = 1 > 0 = 0^y$), a degenerate case that must be excluded.
Let $A, B, X, Y$ be sets with cardinalities $a, b, x, y$. We may identify $A$ and $X$ with subsets of $B$ and $Y$, respectively. Thus, we may consider $A \subset B$ and $X \subset Y$.
Let $b_0 \in B$ and fix $b_0$. (We can do this since $b > 0$.)
Now we define an injection $\Phi:A^X \to B^Y$.
Let $f \in A^X$. That is, let $f: X\to A$ be a function. We define $\Phi(f):Y\to B$ as
$$\Phi(f)(t) = \begin{cases}f(t) & t \in X\\ b_0 & t \notin X \end{cases}$$
It is easy to see that $\Phi(f)$ is well-defined for all $f\in A^X$ and that $\Phi$ is an injection.
|
H: How would you simplify the following boolean expression $(!A B)+(B !C)+(BC)+(A !B !C)$?
How would you simplify the following boolean expression $(!A B)+(B !C)+(BC)+(A !B !C)$?
I factorised B and managed to get $B(!A+!C+C)+(A !B !C) = B+(A !B !C)$, but I do not know how to continue.
Using a K-map, I managed to get the result of $B+A!C$ and I am trying to achieve the same result using regular identities and laws of boolean algebra. By the way, sorry for poor formatting, but I do not know how I could paste an expression from word to make it look better and easier to read.
AI: Actually, there is a non-intuitive (doesn't hold in ordinary algebra) Boolean algebra law that can be applied here: distributivity of disjunction over conjunction expressed as follows
$$x+yz=(x+y)(x+z).$$
Using your notation $x=B$, $y=!B$, $z=A!C$. Hence we have
$$B+!BA!C=(B+!B)(B+A!C)=1\cdot(B+A!C)=B+A!C.$$
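An exhaustive truth-table check (my addition):

    # Compare the original expression with the simplification B + A!C
    # over all eight input combinations.
    from itertools import product

    for A, B, C in product((0, 1), repeat=3):
        lhs = ((not A) and B) or (B and not C) or (B and C) or (A and not B and not C)
        rhs = B or (A and not C)
        assert bool(lhs) == bool(rhs)
    print("B + A!C is equivalent")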
|
H: pigeonhole principle cube problem
I got this problem: we need to prove that we cannot pick 28 points that are (at least) 1.75 cm from each other in a cube where each edge is 3 cm long.
I tried dividing each square face of the cube into $4$; that way each edge is $1.5$ cm and the diagonal is $2.12$ $\left(\frac{3\sqrt 2}{2}\right)$. I did this because I think it's the smallest distance we can get.
Now, since a cube has 6 square faces, and each of my squares is divided into $4$ smaller squares, we get $4\cdot6=24$, which leaves us 4 extra points, which means some squares will have $2$ points in them. But I cannot tell if I did this right, or even close to it, because of the information in the question (at least 1.75 distance between each point).
I tried to demonstrate what I meant.
Thanks to any helpers, and sorry if there are translation mistakes.
AI: The hint is the number 28, which is one more than 27, which is $3^3$
Hint: Divide the cube into 27 smaller cubes of side length $1$ cm. Any two points in the same small cube are at most $\sqrt{3} \approx 1.73 < 1.75$ cm apart.
|
H: Given an affine space, will an orthogonal projection have 0 inner product?
I know that if we have a vector space $V$ and orthogonally project $x$ onto it to get $\hat x$, then we have $\langle v, x-\hat x \rangle = 0$ for any $v \in V$. However, I do not know whether the same holds for an affine space $S$. It seems that it does not: for example, consider $S = \left\{s \in \mathcal{L}^2(\mathbb{R}) ~|~ s(t) = y(t) \text{ for } t \in [\alpha,\beta]\right\}$, where $\mathcal{L}^2(\mathbb{R})$ denotes the set of finite-energy continuous signals, $y$ is an arbitrary signal in $\mathcal{L}^2(\mathbb{R})$, and $\alpha$ and $\beta$ are two constants. To project $x$ onto $S$, we set $x(t) = y(t)$ for $t \in [\alpha,\beta]$ and leave the rest of $x$ untouched. However, when I looked at the inner product of $s$ and $x - \hat x$, the result was nonzero. Am I making a mistake with the inner product/orthogonal projection onto $S$, or is my guess correct?
Thanks in advance.
AI: For some arbitrary vector space (not necessarily $\mathcal L^2(\mathbb R)$),
let $S$ be an affine space and $s_1$ a vector such that $S = s_1 + V$ where $V$ is a subspace.
Let $s_0$ be the orthogonal projection of the zero vector onto $S.$
Then $s_0 \neq 0$ (assuming $0 \notin S$, i.e. $S$ is not a subspace) and for $x = 0,$ we have $x - \hat x = -s_0$.
Moreover, for $v = s_0$ we have
$\langle v, (x - \hat x)\rangle = \langle s_0, -s_0\rangle \neq 0.$
What you do have is that the inner product $\langle v, (x - \hat x)\rangle$ is the same for every vector $v \in S.$
|
H: Wave Equation: what do Characteristic Curves mean?
This is my question:
Compute the characteristic curves of the following wave equation
$$
\frac{\partial^{2} u}{\partial t^{2}}-a^{2} \frac{\partial^{2} u}{\partial x^{2}}=0
$$
and draw them on an $x-t$ coordinate system.
I couldn't figure out what "charachteristic curves" means. I found solutions about wave quation but couldn't found anything about this "characteristic curve". I will appreciate any help.
Thanks for your care, (Stay home and keep dealing with math )
AI: The wave equation admits traveling-wave solutions of the form (in your notation)
$$u(x,t) = A\sin(kx-\omega t)$$
Here, $k$ denotes the wavenumber in $m^{-1}$ and $\omega$ the wave frequency in $rad/s$, such that
$$a = \omega/k$$
Now, a characteristic for a wave is a curve along which the deflection is constant with respect to both space and time, i.e. it corresponds to the following family of lines in the $x$-$t$ plane
$$x-at = c$$
where $c$ is a constant (and, for left-moving waves, similarly $x+at=c$). The physical intuition behind defining these is how you would define a wavefront. If you study acoustics or any wave propagation, a wave is drawn via its characteristics. This allows you to visualise the geometry of the wave, and is especially useful for boundary conditions and reflection/refraction.
|
H: Positive continuous supermartingale is a proper martingale
Let $M$ be continuous positive supermartingale with $\mathbb{E}[M_0]< \infty$. By the supermartingale convergence theorem $M_\infty = \lim M_t$ exists almost surely. How do I show that, if $\mathbb{E}[M_0]=\mathbb{E}[M_\infty]$, $M$ is a proper martingale?
AI: By Fatou's lemma,
$$\mathbb{E}(M_0) = \mathbb{E}(M_{\infty}) \leq \liminf_{t \to \infty} \mathbb{E}(M_t).$$
Since $(M_t)_{t \geq 0}$ is a supermartingale, we have $\mathbb{E}(M_t) \leq \mathbb{E}(M_T) \leq \mathbb{E}(M_0)$ for all $t \geq T$, and so
$$\mathbb{E}(M_0) = \mathbb{E}(M_{\infty}) \leq \mathbb{E}(M_T) \leq \mathbb{E}(M_0).$$
Thus,
$$\mathbb{E}(M_0) = \mathbb{E}(M_{\infty}) = \mathbb{E}(M_T)$$
for all $T \geq 0$, i.e. the supermartingale has constant expectation.
By the supermartingale property,
$$\int_F M_t \, d\mathbb{P} \leq \int_F M_s \, d\mathbb{P} \tag{1}$$
for all $s \leq t$ and $F \in \mathcal{F}_s$. Replacing $F$ by $F^c$ we get
$$\underbrace{\mathbb{E}(M_t)}_{=\mathbb{E}M_0} - \int_F M_t \, d\mathbb{P} \leq \underbrace{\mathbb{E}(M_s)}_{\mathbb{E}(M_0)} - \int_F M_s \, d\mathbb{P},$$
i.e.
$$\int_F M_t \, d\mathbb{P} \geq \int_F M_s \, d\mathbb{P} \tag{2}$$
Combining $(1)$ and $(2)$ gives
$$\int_F M_t \, d\mathbb{P} = \int_F M_s \, d\mathbb{P}, \qquad F \in \mathcal{F}_s, s \leq t,$$
i.e. $\mathbb{E}(M_t \mid \mathcal{F}_s) =M_s$.
|
H: Is the set $I$ a maximal ideal?
If $I = \{f(x) ∈ Q[x] \ | \ \text{f(x) is divisible by} \ x^2 - 1 \}$
is an ideal.
Is $I$ a maximal ideal?
I know the definition of a maximal ideal, but I am struggling to understand how to prove it.
An ideal $I$ of a ring $R$ is maximal if and only if
i) $I \subsetneq R$; and
ii) there does not exist an ideal $J$ of $R$ such that $I \subsetneq J \subsetneq R$
If anyone could show me how this works with this example it would be a great help!
Also, the next step is to show if the quotient ring Q[x]/I is an integral domain. Any tips or hints for this part will also be appreciated.
Thanks!
AI: A useful characterisation of maximal ideals is the following: the ideal $I$ of $A$ is maximal if and only if $A/I$ is a field. In your case, the polynomial $x^2-1 = (x-1)(x+1)$ is not irreducible, so the ideal it generates is not even prime, that is, $\mathbb{Q}[X]/I$ is not an integral domain (basically because $x-1\neq 0$ and $x+1\neq 0$ in $\mathbb{Q}[X]/I$, but their product is $0$).
|
H: Is there a general solution to $\int f \cdot g$ if I know that $\dfrac{d^n}{dx^n}f = c \cdot f$ for some $n \in \mathbb{N}$
These types of integrals seem to pop up a lot in physics, and it got me wondering whether a closed-form expression exists, even if it has some limitations (only for even $n$, for example). I'm asking about integrals that are usually evaluated by repeated application of integration by parts, like $$\int \sin(x) \cdot e^x\, dx.$$
AI: If you know that $f$ is $n$-times differentiable and satisfies $f^{(n)}=cf$ for some $c\in\mathbb{C}$ then applying integration by parts $n$ times gives
\begin{align}
&\int f(x)\cdot e^x\mathrm{d}x\\
&=e^x\sum_{k=0}^{n-1}(-1)^kf^{(k)}(x)+(-1)^nc\int f(x)e^x\mathrm{d}x\end{align}
which implies that
\begin{align}
&\int f(x)\cdot e^x\mathrm{d}x\\
&=\frac{e^x}{1-(-1)^nc}\sum_{k=0}^{n-1}(-1)^kf^{(k)}(x)+\text{constant}
\end{align}
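Applied to the motivating example $\int \sin(x)e^x\,\mathrm{d}x$ (my worked check): here $f=\sin$, $f''=-f$, so $n=2$ and $c=-1$, and the formula gives
$$\int \sin(x)e^x\,\mathrm{d}x=\frac{e^x}{1-(-1)^2(-1)}\left(\sin x-\cos x\right)+C=\frac{e^x(\sin x-\cos x)}{2}+C,$$
matching the usual integration-by-parts result.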
|
H: Sum $\sum \frac{1}{(4k-3)(4k-2)(4k-1)(4k)}$
I am stuck on this problem for quite a while now, and I don't seem any closer to the solution. So, here it is:
$S = 1/4! + 4!/8! + 8!/12! + 12!/16! + ......$
I crossed out the factorials first, and it could be easily represented by the general term,
$T = \frac{1}{(4n-3)(4n-2)(4n-1)(4n)}$
It looked like it could be expressed as the difference of two expressions which could come useful to find the sum, what we call the 'diagonal cancellation',
$T = \frac{1}{3}(\frac{1}{(4n-1)(4n-2)(4n-3)} - \frac{1}{4n(4n-1)(4n-2)})$
but unfortunately it doesn't work. I even unintentionally split it further into subtractions of two more expressions for both terms and so on, which finally leads to a harmonic series, which is certainly not expressible in "closed-form'.
Please help by suggesting a simple math solution, this is merely a class notes illustration and I am not an advanced maths student.
AI: \begin{align*}
& \frac{1}{(4n-3)(4n-2)(4n-1)(4n)} \\
&= \frac{1}{3}\left(\frac{1}{(4n-1)(4n-2)(4n-3)} - \frac{1}{4n(4n-1)(4n-2)}\right) \\
& = \frac{1}{3\cdot 2}\left( \frac{1}{(4n-2)(4n-3)} - \frac{2}{(4n-1)(4n-2)} + \frac{1}{4n(4n-1)}\right) \\
& = \frac{1}{3\cdot 2\cdot 1}\left( \frac{1}{4n-3} - \frac{3}{4n-2} + \frac{3}{4n-1} - \frac{1}{4n}\right) \\
\end{align*}
So the sum is \begin{align*}
&\frac{1}{6}\sum_{n=1}^{\infty}\left( \frac{1}{4n-3} - \frac{3}{4n-2} + \frac{3}{4n-1} - \frac{1}{4n}\right) =\\
& =\frac{1}{6}\sum_{n=1}^{\infty}\left( \int_0^1 x^{4n-4} dx -3 \int_0^1 x^{4n-3} dx+ 3\int_0^1 x^{4n-2} dx - \int_0^1 x^{4n-1} dx\right) = \\
& = \frac{1}{6}\sum_{n=1}^{\infty}\left( \int_0^1 x^{4n-4} - 3 x^{4n-3} +3x^{4n-2} - x^{4n-1} dx\right) = \\
& \stackrel{*}{=} \frac{1}{6} \int_0^1 \sum_{n=1}^{\infty} \left(x^{4n-4} - 3 x^{4n-3} +3x^{4n-2} - x^{4n-1}\right) dx = \\
& = \frac{1}{6} \int_0^1 \frac{1 - 3x + 3x^2 - x^3}{1-x^4} dx = \\
& = \frac{1}{6} \int_0^1 \frac{(1-x)^2}{(1+x)(1+x^2)} dx = \cdots = \frac{1}{24}(6\ln2 - \pi)\\
\end{align*}
Here the step $\stackrel{*}{=}$ should be justified; the integrand is of the form $x^{4n-4}(1-x)^3$, which is nonnegative on $[0,1]$, so Fubini-Tonelli is applicable.
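A numerical confirmation of the closed form (my addition):

    # Compare a long partial sum with (6*ln(2) - pi)/24.
    from math import log, pi

    s = sum(1 / ((4*k - 3) * (4*k - 2) * (4*k - 1) * (4*k))
            for k in range(1, 10**6))
    print(s, (6 * log(2) - pi) / 24)   # both ~0.0423871...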
|
H: Any reference(a book) that defines the $n$-dimensional rotation matrix?
I want to refer to a mathematics book that explains the n-dimensional rotation matrix or rotation transformation.
Wikipedia concentrates most on 2D or 3D.
There are definitions here and there that one could cite, but I don't think it is a good idea to use them; actually, they don't seem to be proper definitions.
Strang's "Linear Algebra", Barret O'neill's "Elementary Differential Geometry" deal only with 2D or 3D cases. I think physicist are more interested in the general case, due to the theory of relativity. I found one explanation in "Geometrical Methods of Mathematical Physics" by Bernard Schutz. But I think it doesn't define the rotation matrix.
Artin's "Geometric Alegebra" defines the rotation group as an isometry $\sigma:V\to V$ such that $\det\sigma=1$.
But the language there is so abstract that I can't catch any of them.
Can anyone give a reference that defines rotation transformation on $\mathbb R^n$ and state as a property that $A$ is a rotation matrix if and only if $A\in SO(n)$?
This is the end of the question and the below is what I wanted to do.
I wanted to prove that if $A\in SO(n)$, then $A$ is a rotation about a line through the origin in $\mathbb R^n$.
So I need to define the rotation transformation(or matrix) in $n$ dimensional Euclidean space.
AI: The definition Artin gives is probably the most transparent and standard:
It is a linear isometry, meaning that it preserves distances and lengths as given by a bilinear form (You may want to restrict yourself
to the dot product in $\mathbb R^n$).
It has determinant $1$, meaning that it doesn't change the orientation of the space by reflecting it ($\mathbb R$-linear isometries must have determinant $\pm 1$, so we are excluding half with this condition.)
I think you should spend time trying to understand this rather than discarding it as "too abstract." It's really quite concrete.
But I think it doesn't define the rotation matrix.
To split hairs for a second, the most important nature of a rotation is that it is a transformation, not just a matrix. A matrix is just a particular way to represent a transformation. That's why the definition above emphasizes the qualities that make it a rotation: it does not mess with distances, and it does not mess with orientation. (Out of infinitely many choices of bases it can have infinitely many different matrix representations.)
I wanted to prove that if $A\in SO(n)$, then $A$ is a rotation about a line through the origin in $\mathbb R^n$.
That isn't even true in $\mathbb R^2$, as I'm sure you can see.
It happens to be true for rotations in $\mathbb R^n$ for odd $n$ due to the fundamental theorem of algebra, because it says each rotation (like in Artin's definition) has an eigenvector. But even then I doubt it's what you wanted: some rotations in $\mathbb R^n$ preserve much more than just one line through the origin.
And for even $n$'s, you still might not preserve any line: take, for example
$\begin{bmatrix}0&-1&0&0 \\
1&0&0&0 \\
0&0&0&-1\\
0&0&1&0\end{bmatrix}
$
as an $\mathbb R$ linear transformation.
|
H: How can i prove Delta function property with derivative?
How can i proof below delta function property?
$$\int_{-\infty}^{\infty}f(t)\delta^{(n)}(t-a)dt=(-1)^{n}f^{(n)}(a)$$, where (n) denotes nth derivate.
AI: If you don't know the formal definition of distributions and derivatives of such, just do integration by parts $n$ times. You will end up with $\int_{-\infty}^{\infty} f^{(n)}(t) \, \delta(t-a) \, dt,$ which equals $f^{(n)}(a).$
|
H: Matrix multiplication commutativity
We know that if $A$ is a $2\times 2$ square matrix $\begin{bmatrix}a&b\\c&d\end{bmatrix}$ such that $A$ commutes under multiplication with any $2\times 2$ matrix, then $A$ is a scalar matrix.
To prove that, I tried to rely on the matrix multiplication, and then I got $4$ equations to solve. I calculated the product of $A$ and $B=\begin{bmatrix}e&f\\g&h\end{bmatrix}$ in both orders.
So $$ce + dg = ga + hc \ \ \text{ and } \ \ af + bh = eb + fd.$$
And for the diagonal elements we get $$gb = cf.$$ So equating the two products gives these conditions on the entries. That's all I could get.
I just needed to prove that the diagonal elements of $A$ are equal and the off-diagonal entries are zero, but I hit a dead end, because I kept going in circles.
Can you help?!
AI: The fact that $A$ is supposed to commute with all matrices means we don't have to be general about it. We can pick a few nice matrices and just work with those.
Consider multiplying $A$ with the matrix
$$
E = \begin{bmatrix}1&0\\0&0\end{bmatrix}
$$
We have
$$
AE = \begin{bmatrix}a&0\\c&0\end{bmatrix}, \quad EA = \begin{bmatrix}a&b\\0&0\end{bmatrix}
$$
so for $A$ to commute with $E$, we must have that $b = 0$ and $c = 0$, so $A$ is diagonal.
Now that we know that $A$ is diagonal, can you think of a matrix that in a similar way demonstrates that we must have $a = d$?
|
H: Union of arbitrary path connected sets is necessarily a path connected?
Assume $X$ is a topological space that does not have a countable dense set. Let $\{ A_\alpha \}$ be an arbitrary family of path-connected sets such that if $\alpha$ and $\beta$ are two different indices then $A_\alpha$ is different from $A_\beta$, and for every $\alpha$ there exists $\beta$ such that $A_\alpha$ intersects $A_\beta$. Does it necessarily follow that the union of these $A_\alpha$'s is path-connected? If it is not, please provide a counterexample, and also a hint as to how to strengthen the conditions so that the result holds, without removing the condition that $X$ does not have any countable dense set, if possible.
AI: Take four closed path-connected sets $A_1$, $A_2$, $A_3$, $A_4$ such that $A_1$ intersects $A_2$ and $A_3$ intersects $A_4$, but $A_1 \cup A_2$ is disjoint from $A_3 \cup A_4$. These satisfy your conditions, but $A_1 \cup A_2 \cup A_3 \cup A_4$ is not connected.
|
H: If R is a local ring, then its maximal left-ideal is a left and right ideal
I have been trying to prove the statement in the title, however I seem to get stuck at a certain point. Let $M$ be the maximal left-ideal of $R$. Then consider $M.r$ for $r \in R$. If $M.r \neq R$, we have that $M.r \subseteq M$. Now if this assumption would be true, we would have that $M$ is a right ideal as well. However when trying to prove that $M.r \neq R$ for all $r \in R$ I don't get very far. I know that if we assume equality, one would get $m.r=1$ for one $m \in M$. However I don't know how to go from here. Thank you in advance!
AI: If $R$ has one maximal left ideal, then it is the Jacobson radical. Do you know the Jacobson radical is two-sided?
Different idea:
Lemma: a local ring has only trivial idempotents. (Proof: if $e$ were a nontrivial idempotent, then how could $Re$ and $R(1-e)$ both be contained in the maximal left ideal?)
Hint: Now if $mr=1$ for some $m\in M$, $r\in R$, then $rm$ is an idempotent...
|
H: Probability solution doubts
I have a doubt on an exercise and this is the request:
A family is going to have 5 children; what is the probability that all the children are female (so 5 female children) OR all the children are male (so 5 male children)?
Probability female = 0.55
Probability male= 0.45
Any help is appreciated.
Thank you
AI: Hint: What is the probability that the desired outcome happens once? Twice? etc.
This problem is an application of repeated events.
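Concretely, assuming the births are independent (my worked check of where the hint leads):
$$P(\text{all girls})+P(\text{all boys})=0.55^5+0.45^5\approx 0.0503+0.0185=0.0688.$$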
|
H: Russian Roulette Probability Puzzle understanding
"We are to play a version of Russian Roulette, the revolver is a standard six shooter but I will put one bullet in the gun. I spin the chambers, put the gun to my head, pull the trigger and survive. I hand you the gun and give you a choice...
You may put the gun straight to your head and pull the trigger, or you may re-spin the gun
before you do the same.
What is your choice and why?
"
My question here is the probability of shooting myself if I don't randomly spin.
The answer for this seems to be 1/5 as mentioned here.
The explanation for the same sounds fair. But, I have a different approach which gives a different answer.
I have following cases:
1. B-----
2. -B----
3. --B---
4. ---B--
5. ----B-
6. -----B
Now, I will shoot myself if the second case occurs, i.e. the first trigger pull doesn't do anything and the second one fires. Also, the probability of that case occurring is 1/6. So, shouldn't the probability be 1/6 instead of 1/5?
AI: Coloring blue the chamber used by the first person and red the following chamber which would be used by the second person if it were not spun again, $O$'s for empty chambers and $X$'s for filled chambers, when initially spinning the chambers the following six outcomes are indeed equally likely:
$\color{blue}{X}\color{red}{O}OOOO$
$\color{blue}{O}\color{red}{X}OOOO$
$\color{blue}{O}\color{red}{O}XOOO$
$\color{blue}{O}\color{red}{O}OXOO$
$\color{blue}{O}\color{red}{O}OOXO$
$\color{blue}{O}\color{red}{O}OOOX$
We are told that the first player just grabbed the gun and pulled the trigger. The first player survives and then hands the gun off to the second player. We are asked... given this information what is the probability that the second player will shoot themselves.
Well... since we are told that the first player survived... we know that the original state of the revolver was not $\color{blue}{X}\color{red}{O}OOOO$ as if it were this case then the first player would have shot themselves. We can imagine then that this was not included in the list. The remaining five outcomes are still equally likely to have occurred, only one of which result in us shooting ourselves.
From the comments "Let B be the event that my partner didn't shoot himself, and let A be the event that I shoot myself. Then P(A|B)=1/5. Got it. But, the probability of case 2 occurring should be 1/6 without any conditions i.e P(A∩B)=1/6, right ? Although, I will have to compare 1/6 with 5/36 (non conditional prob) instead of 1/5 (conditional prob), right ?"
Do not lose sight of what the problem asked. The problem asks whether you should spin again or not spin. The implication being that we want to avoid shooting ourselves and the game will end immediately after the next time the trigger is pulled.
If we weren't to spin again, we will shoot ourselves with probability $Pr(A\mid B) = \frac{1}{5}$. If we were to spin again we will shoot ourselves with probability $\frac{1}{6}$ which is less than $\frac{1}{5}$. Since spinning again reduces the chance that we will shoot ourselves this is what we should choose to do (assuming that the spins were truly random). End of story.
|
H: Prove ${\bf u}.{\bf Tv} = ({\bf u} \otimes {\bf v}) : {\bf T}$
Example 8 (on Page 6) of these lecture notes appear to suggest that for any vectors $\bf u$ and $\bf v$ and tensor $\bf T$
$${\bf u}.{\bf Tv} = ({\bf u} \otimes {\bf v}) : {\bf T}$$
I'd appreciate help proving this.
AI: To reframe this in terms of matrix multiplication, note that for column-vectors $x,y$ and matrices $A,B$ of the same shape, we have
$x \cdot y = x^\top y$,
$u \otimes v = uv^\top $,
$A : B = \operatorname{Tr}(A^\top B)$.
With that, we can prove this result as a consequence of the cyclic property of trace. Or, if you prefer, we could use the slightly simpler result $\operatorname{Tr}(xy^\top) = x^\top y$ for column-vectors $x,y$. In any case, we have
$$
(u \otimes v): T = \operatorname{Tr}((uv^\top)^\top T) =
\operatorname{Tr}(v u^\top T) =
\operatorname{Tr}(u^\top Tv) = u^\top (Tv) = u \cdot Tv
$$
as desired.
If we want to stick to dyadic notation, we might state the following: $T$ can necessarily be written as a sum $T = x_i \otimes y_i$ for some vectors $x_i,y_i$. It follows that
$$
(u \otimes v) : T = (u \otimes v):(x_i \otimes y_i) = (u \cdot x_i)(v \cdot y_i),\\
u \cdot Tv = u \cdot (x_i \otimes y_i)v = (u \cdot x_i)(v \cdot y_i).
$$
It follows that the two expressions are indeed equal.
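A numerical spot-check (my addition, with random data):

    # Verify u . Tv == (u (x) v) : T, using A : B = tr(A^T B).
    import numpy as np

    rng = np.random.default_rng(1)
    u, v = rng.standard_normal((2, 3))
    T = rng.standard_normal((3, 3))

    lhs = u @ (T @ v)
    rhs = np.trace(np.outer(u, v).T @ T)   # (u v^T) : T
    assert np.isclose(lhs, rhs)
    print("identity holds")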
|
H: An interesting property of a particular set of triples - multiplying two and adding the other always gives 1
Find all triples of real numbers such that multiplying any two in a
triple and adding the third always gives $1$.
When will this be the case? How can we find all such triples?
So far, I've let the numbers be $a$, $b$ and $c$.
Therefore, $ab+c$ etc. must equal $1$, but how can I restrict the possibilities to find all possible triples?
I seem to think this has something to do with 1s and 0s, e.g. $(0,0,1)$ or $(0,1,1)$.
Many thanks guys!!
AI: You have three equations and three variables to solve for. The three equations are $$ab + c=1$$ $$ac +b=1$$ $$bc + a=1$$
Using the first equation and solving for $c$, we get $$c = 1-ab$$
This now reduces to two equations $$a(1-ab)+b=a+b-a^2b=1$$ $$b(1-ab)+a=a+b-ab^2=1$$
Subtracting the second equation from the first, I get $$-a^2b+ab^2=0$$ This can be factored as $$ab(b-a)=0$$
Which means that there are three possible cases $a=0, b=0,$ and $a=b$.
If $a=0$, it must also be true that $b=c=1$ in order to satisfy the original three equations. Similarly, if $b=0$, it must be true that $a=c=1$. If $a = b \not = 0$, there is the cubic $$2a-a^3=1$$ that must be solved. There are three solutions $$a= \frac{-1 \pm \sqrt{5}}{2}, 1$$
If $a=b=1$ then $c=1-a^2=0$; for the other two roots we have $a^2+a-1=0$, so $a^2=1-a$ and $c=1-a^2=a$ as well. This means that all solutions to the problem are $(0, 1, 1), (1, 0, 1), (1, 1, 0), \left(\frac{-1 \pm \sqrt{5}}{2}, \frac{-1 \pm \sqrt{5}}{2}, \frac{-1 \pm \sqrt{5}}{2}\right)$
|
H: In the ring $\mathbb Z_n$, we have $kx=k.x$.
Let $\mathbb Z_n$ be the ring of integers modulo $n$. For $x\in \mathbb Z_n$ and $k\in\mathbb Z_n$, define $kx=x+x+\dots+x$ ($k$ times), and let $k.x$ denote the product of $k$ and $x$ in $\mathbb Z_n$. Is it true that $kx=k.x$? Although very trivial, it is still a relevant thing to note.
AI: Write $k = k_0 + nb$ and $x = x_0 + nc$ where all numbers are integers. Then
\begin{align}
kx &= (k_0 + nb)(x_0+nc)\\
&= k_0 x_0 + n\left( bx_0 + ck_0 + nbc\right)
\end{align}
where all calculations are in $\mathbb{Z}$. Thus, modulo $n$, you have
$[kx] =[k][x] = \underbrace{[x]+[x]+\ldots + [x]}_{k\ \text{times}}$. So the answer is yes.
|
H: $\operatorname{tg}2x=-1$ find $x$
$\operatorname{tg}2x=-1$
$x∈[\pi/2,\pi]$
I tried expressing $\operatorname{tg}2x=\sin2x/\cos2x$, but is there any other, more elegant method?
AI: Compute the reference angle (first quadrant) by ignoring the sign of the tangent first. Let $y=2x$.
$\tan y = 1 \implies y =\frac{\pi} 4$
Since the tangent is negative the angle can be either in the second quadrant ($\pi - y$) or fourth quadrant ($2\pi - y$).
So we get $2x = \frac{3\pi}4$ or $2x = \frac{7\pi}4$.
giving $x = \frac{3\pi}8$ or $x = \frac{7\pi}8$.
and only the latter value lies in the required range.
So $x = \frac{7\pi}8$.
|
H: A trigonometric definite integral with a parameter $\int_0^\pi\frac{dx}{1+\alpha^2\sin^2(x)}$
Give the expression of the integral
$$\int_0^\pi\frac{dx}{1+\alpha^2\sin^2(x)}$$
where $ \alpha \in (0,+\infty)$.
I tried the substitution $ t=\tan(x)$ but both bounds become zero.
I used $t=\tan(\frac x2) $ but it became
complicate, i think there is an easier way.
Thank you in advance.
AI: Your first substitution of $t=\tan{x}$ is correct, but as @metamorphy said, you must adjust the bounds, because $\tan x$ is not one-to-one on $(0,\pi)$; this works because, by the symmetry of $\sin^2{x}$, the integral from $0$ to $\pi$ is twice the integral from $0$ to $\frac{\pi}{2}$.
$\frac{dt}{t^2+1}=dx$ and $\sin^2{x}=\frac{t^2}{t^2+1}$:
$I=2\int_0^{\infty} \frac{\frac{dt}{t^2+1}}{1+\frac{a^2t^2}{t^2+1}}=2\int_0^{\infty} \frac{dt}{\left(a^2+1\right)t^2+1}=\frac{2}{\sqrt{a^2+1}}\arctan{\left(t\sqrt{a^2+1}\right)} \big\rvert_0^{\infty}=\frac{2}{\sqrt{a^2+1}} \cdot \frac{\pi}{2}=\boxed{\frac{\pi}{\sqrt{a^2+1}}}$
Hope this helps.
|
H: Integer side length right triangles with area = perimeter
How many right triangles with integer side lengths (up to congruence) are there with the property that the area of the triangle is the same as the perimeter?
I found that $5, 12, 13$ has both an area and perimeter of 30 but I'm not sure if there are any more of them.
AI: We are looking for $a,b\in\mathbb N$ such that $ab = 2a+2b+2\sqrt{a^2+b^2}$. After some manipulation, we have $$4a^2+4b^2=a^2b^2-4ab(a+b)+4a^2+8ab+4b^2$$$$a^2b^2-4ab(a+b-2)=0$$$$a^2b^2=4ab(a+b-2)$$$$ab=4(a+b-2)$$$$ab-4a-4b+16=8$$$$(a-4)(b-4)=8$$
So, there are only two possibilities, as there are only two pairs of positive integers $(a-4,b-4)$ with product $8$ (up to order; negative factor pairs would give non-positive side lengths): $(1,8)$ and $(2,4)$, which correspond to $(5,12,13)$ and $(6,8,10)$, respectively.
|
H: What is the total number of positive integers <300 whose digit sum is a multiple of 5?
As stated in the title
For one-digit numbers there is $1$.
For two-digit numbers there are $27$ $(=4+(4+5)+(4+5+5))$, counting digit sums $5n$, $n=1,2,3$ (since $5n<18$).
But I'm struggling to find the three-digit ones (digit sum $=5n$, $n=1,\dots,4$, since $5n\leq20$) without writing out all the possible combinations.
The possible answers are $18,45,60,61$
Can someone please explain which of the above answers is the most appropriate?
AI: If the first two digits are $0,0$ then the last digit must be $0$ or $5$. There are two such numbers.
If the first two digits are $0,1$ then the last digit must be $4$ or $9$. There are two such numbers.
If the first two digits are $2$ and $7$ then the last digit must be $1$ or $6$. There are two such numbers.
And so on.
The first two digits range from $00$ to $29$; that's $30$ options, and there are two options for $c$, so there are $2\cdot30 = 60$ such numbers. But $000$ is not acceptable, so there are $59$.
So, $59$. Final answer.
....
But we must prove that if the first two digits are $a$ and $b$, there are exactly $2$ possibilities for the third digit.
=====
And... if you add up $a$ and $b$ and take the remainder on dividing by $5$ (call it $r$), there are exactly $5$ possible values of $r$. For $a+b+c$ to be a multiple of $5$, the remainder of $a+b+c$ must be $0$, so the remainder of $c$ must be the opposite of $r$. That is, if $r=1$ then the remainder of $c$ (call it $s$) must be $4$; if $r=2$ then $s = 3$; if $r=3$ then $s=2$; if $r=4$ then $s=1$; and if $r=0$ then $s = 0$.
For each of the $5$ possible values of $r$ there is exactly one opposite remainder $s$, and $c$ can be $s+0$ or $s+5$.
....
Putting that argument in terms of
Modular arithmetic: If $a+b \equiv r \pmod 5$ then $c \equiv - r\pmod 5$. As $\{0,1,2,3,4\}$ and $\{5,6,7,8,9\}$ are two complete residue systems there are exactly one representative for $-r$ in each set so there are two option for digit $c$.
.....
Remainder theorem:
Let $a+b = 5K + r$ and $a+b+c = 5M$. Then $c= 5M - (5K+r)=5(M-K)-r$, with $0\le c \le 9$ and $0 \le r \le 4$. If $r=0$ then $0\le c= 5(M-K)\le 9$, so $c=0$ or $5$. If $0 < r < 5$ then $0\le c = 5(M-K)-r \le 9$, so $c = 5-r$ or $c = 10-r$.
|
H: Prove that $f_n \to f$ in the metric of $C(R)$, iff $f_n \to f$ uniformly on every compact subset of $\mathbb{R}$
Let $f_n$ be a sequence in $C(R)$. Then $f_n$ converges to $f$ in the metric of $C(R)$, iff $f_n$ converges uniformly to $f$ on every compact subset of $\mathbb{R}$.
Here, the metric on $C(R)$ is $$d(f,g):=\sum_1^{\infty}\frac{1}{2^n}\frac{d_n(f,g)}{1+d_n(f,g)}, \text{where } d_n(f,g)=\max_{|t|\le n}|f(t)-g(t)|$$
While the left $\to$ right implication is trivial, I was not able to prove the other direction.
AI: For the other direction, assume that $f_k$ converges uniformly to $g$ on every compact subset of $\Bbb R$.
Given $\epsilon > 0$ we have to show that $d(f_k, g) < \epsilon$ for all sufficiently large $k$. The idea is to split the sum into a finite part and a remainder. The first part becomes small because $d_n(f_k, g) \to 0$ for every $n$, and the second part becomes small because each term is less than $1/2^n$. Concretely:
$$
d(f_k, g) = \sum_{n=1}^{N}\frac{1}{2^n}\frac{d_n(f_k,g)}{1+d_n(f_k,g)} + \sum_{n=N+1}^{\infty}\frac{1}{2^n}\frac{d_n(f_k,g)}{1+d_n(f_k,g)} \\
\le \sum_{n=1}^{N} d_n(f_k,g) + \sum_{n=N+1}^{\infty} \frac{1}{2^n}
\le N d_N(f_k, g) + \frac{1}{2^N} \, .
$$
Now choose $N$ such that $\frac{1}{2^N} < \frac 12 \epsilon$, and then $K$ sufficiently large that $ N d_N(f_k, g) < \frac 12 \epsilon$ for $k \ge K$.
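To see the two-part estimate in action numerically, here is a minimal sketch, assuming the example sequence $f_k(t)=t/k$ (which converges to $0$ uniformly on every compact set, but not on all of $\mathbb R$); the series is truncated at $N$ terms, which changes the value by at most $2^{-N}$:

```python
import numpy as np

def d_n(f, g, n, num=10001):
    # sup-norm of f - g on [-n, n], approximated on a grid
    t = np.linspace(-n, n, num)
    return np.max(np.abs(f(t) - g(t)))

def d(f, g, N=50):
    # Truncated metric; the neglected tail is at most 2**(-N).
    total = 0.0
    for n in range(1, N + 1):
        dn = d_n(f, g, n)
        total += 2.0 ** (-n) * dn / (1 + dn)
    return total

g = lambda t: 0.0 * t
for k in [1, 10, 100, 1000]:
    f_k = lambda t, k=k: t / k   # bind k in the default argument
    print(k, d(f_k, g))          # d(f_k, g) decreases toward 0
```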
|
H: Express in terms of Euler integrals
Express in terms of Euler integrals:
$$
\int_{0}^{+\infty}\frac{x^{m-1}}{(1+x)^n} dx
$$
AI: Let $t=\frac{1}{1+x}$, so that $x=\frac{1-t}{t}$ and $dx=-\frac{dt}{t^2}$ (the integral converges for $0<m<n$). Then
\begin{align}
I&=\int_0^1 t^{n-m-1}(1-t)^{m-1} \,dt \\
&=B(n-m,m) \\
&=\frac{\Gamma(n-m) \Gamma(m)}{\Gamma(n)}
\end{align}
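A quick numerical check of this identity (a sketch with the sample values $m=2.5$, $n=6$, chosen so that $0<m<n$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

m, n = 2.5, 6.0  # any values with 0 < m < n
integral, _ = quad(lambda x: x**(m - 1) / (1 + x)**n, 0, np.inf)
print(integral, beta(n - m, m))  # the two numbers agree
```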
|
H: Show inequality $a\sqrt{b - 1} + b\sqrt{a - 1} \le ab$
Given numbers $a$ and $b$; $a, b \ge 1.$
I'm trying to prove
$$a\sqrt{b-1} +b\sqrt{a - 1} \le ab.$$
Also conditions for turning it to equality.
I tried to use AM-GM to the $(a - 1)(b - 1)$, which less than ab, but got nothing.
Applying AM-GM to $a\sqrt{b-1}$ and $b\sqrt{a - 1}$ also didn't give me some result. How can i do it?
AI: We have $\sqrt{x-1}\leq x/2$ for $x\ge1$: squaring, this is equivalent to $x^2-4x+4=(x-2)^2\ge 0$. (Geometrically, the LHS is concave and $x/2$ is its tangent line at $x=2$.) So we have
$$
a\sqrt{b-1}+b\sqrt{a-1}\leq a(b/2)+b(a/2) =ab,
$$
with equality if and only if $a=b=2$.
|
H: Matrices how to prove $A^{-1} = A$
Apologies mix up from earlier the wrong values where placed in $x_2$ and $x_3$.
Question 1
Prove that the following is true for the matrix $A$: $A^{-1} = A^{T} = A$.
$A$=
$$
1/7 \begin{pmatrix}
2 & 3 & 6 \\
3 & -6 & 2 \\
6 & 2 & -3 \\
\end{pmatrix}
$$
$A^T$=
$$
1/7 \begin{pmatrix}
2 & 3 & 6 \\
3 & -6 & 2 \\
6 & 2 & -3 \\
\end{pmatrix}
$$
The determinant is $343$
The sign rule $(+\,-\,+)$ has already been applied to the matrix.
$A^{-1}$=
$$
1/343 \begin{pmatrix}
14 & 21 & 42 \\
-14 & -21 & -42 \\
42 & 14 & 21 \\
\end{pmatrix}
$$
This is as far as I can go; the identity rule is not producing $1$s on the diagonal. How can I solve it from here?
$A^{-1}$
$x_1$ = $2/49$
$A$
$x_1$ = $2/7$
AI: To check that $A^{-1}=A$, you don't need to "calculate" $A^{-1}$. If $A^{-1}=A$, then $A^2=A^{-1}A=I$; and vice versa, if $A^2=I$, then you know that $A^{-1}=A$. Here you can calculate directly that $A^2=I$.
Now, in light of the above, your calculation of $A^{-1}$ is wrong. You don't say what computations you made, so I cannot comment on that.
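In any case, the check $A^2=I$ is immediate to verify numerically (a minimal sketch):

```python
import numpy as np

A = np.array([[2, 3, 6],
              [3, -6, 2],
              [6, 2, -3]]) / 7

print(np.allclose(A, A.T))            # True: A is symmetric, so A^T = A
print(np.allclose(A @ A, np.eye(3)))  # True: A^2 = I, hence A^{-1} = A
```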
|
H: How to compute channel capacity?
I am working on the following exercise:
Let $\mathcal{C} = (\mathcal{X}, P, \mathcal{Y})$ be the following channel:
$$\mathcal{X} = \{0,1,2,3\}$$
$$\mathcal{Y} = \{0,1,2,3,4,5,6,7\}$$
$$P = \begin{bmatrix}
0 &1/8 &1/8 &0 &1/4 &0 &1/2 &0 \\
0 &1/8 &0 &1/2 &1/4 &1/8 &0 &0 \\
0 &1/8 &1/4 &0 &0 &1/8 &0 &1/2 \\
1/2 &1/8 &1/8 &0 &0 &1/4 &0 &0 \\ \end{bmatrix}$$
Compute the channel capacity.
I know that the channel capacity $Cap(\mathcal{C)}$ is defined as
$Cap(\mathcal{C}) := \max_{p(\cdot)} I(X;Y),$ where $X$ and $Y$ are the input and output RVs. However, I do not see how to compute $\max_{p(\cdot)} I(X;Y)$ in this case: since the outputs of this channel overlap, I cannot use $H(X \mid Y) = 0$ in
$$I(X;Y) = H(X)- H(X \mid Y),$$
which until now always worked in the previous examples. Could you help me?
AI: What you have is a weakly symmetric channel, i.e. (1) all rows are permutations of each other, and (2) all column sums are equal.
For weakly symmetric channels, capacity is achieved by a uniform distribution on the input alphabet and equals $C=\log|\mathcal Y|-H(\mathbf r)$, where $\mathbf r$ is any row of $P$. (See Theorem 7.2.1 in Cover and Thomas.) Here $H(\mathbf r)=\tfrac12\cdot1+\tfrac14\cdot2+2\cdot\tfrac18\cdot3=1.75$ bits, so $C=\log_2 8-1.75=1.25$ bits.
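A minimal numerical sketch of that computation:

```python
import numpy as np

P = np.array([
    [0, 1/8, 1/8, 0, 1/4, 0, 1/2, 0],
    [0, 1/8, 0, 1/2, 1/4, 1/8, 0, 0],
    [0, 1/8, 1/4, 0, 0, 1/8, 0, 1/2],
    [1/2, 1/8, 1/8, 0, 0, 1/4, 0, 0],
])

def entropy(p):
    p = p[p > 0]                  # ignore zero entries
    return -(p * np.log2(p)).sum()

# Weakly symmetric channel: C = log2|Y| - H(row of P)
print(np.log2(P.shape[1]) - entropy(P[0]))  # 1.25 bits
```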
|
H: Ring of locally finite matrices
Let $I$ be a countable set. A matrix $A=(a_{ij})_{i,j \in I}$ is locally finite if for every $i \in I$, the families $(a_{ij})_{j\in I}$ and $(a_{ji})_{j\in I}$ have finite support. Denote by $M_{I}(\mathbb{Z})$ the ring of locally finite matrices with integral entries indexed by $I \times I$. My question is: how is this ring defined if $|I|= \infty$? I guess addition is just the usual matrix addition, but what about the multiplication?
AI: As usual: $(A.B)_{ij}=\displaystyle\sum_{k\in I} a_{ik}b_{kj}$. Since the row $(a_{ik})_{k\in I}$ has finite support for each $i\in I$, $a_{ik}=0$ except for finitely many $k$, so the sum is finite.
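Here is a minimal computational sketch of this product, representing a matrix as a dict of finitely-supported rows (toy entries and hypothetical indexing, just to illustrate that each inner sum is finite):

```python
from collections import defaultdict

def matmul(A, B):
    # A[i] and B[k] are dicts {index: entry} with finite support,
    # so each accumulated sum below has finitely many nonzero terms.
    C = defaultdict(dict)
    for i, row in A.items():
        for k, a_ik in row.items():
            for j, b_kj in B.get(k, {}).items():
                C[i][j] = C[i].get(j, 0) + a_ik * b_kj
    return dict(C)

A = {0: {0: 1, 1: 2}, 1: {1: 3}}
B = {0: {0: 4}, 1: {0: 5, 2: 6}}
print(matmul(A, B))  # {0: {0: 14, 2: 12}, 1: {0: 15, 2: 18}}
```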
|
H: Continuity of functions containing greatest integer function
I need help with this problem:
For all $x \ge \frac{1}{2}$ let $f(x)=[x]+(x-[x])^{[x]}$.
Prove that $f(x)$ is continuous and increasing on $[1,+\infty[$.
I've been given this hint but still I don't really understand it:
We got :
\begin{equation}
f(x)=\begin{cases}
1 & \text{, if $x \in [\frac{1}{2},1[ $}\\
n + (x - n)^n & \text{, if $x \in [n,n+1[, n \in \mathbb{N}^*$}
\end{cases}
\end{equation}
Even with this hint in hand I still don't know how to start it.
AI: Let $x=i+d$ (integer and fractional part). The function is
$$i+d^i.$$
Between two integers, this is a continuous function (a polynomial).
For $x\to i^-$, $f(x)\to i-1+1=i$, and for $x\to i^+$, $f(x)\to i+0=i$. Thus the function is continuous for all $x>0$.
Now for $i\ge1$, $d^i$ is increasing in $d$, so $f$ is increasing on each interval $[i,i+1)$; combined with continuity, $f(x)$ is increasing on $[1,+\infty)$ (and constant, equal to $1$, on $[\tfrac12,1)$).
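A quick numerical spot check of the continuity at integer points (a minimal sketch):

```python
import math

def f(x):
    i = math.floor(x)
    return i + (x - i) ** i

eps = 1e-9
for i in [1, 2, 3]:
    # left limit, value, right limit -- all approximately i
    print(i, f(i - eps), f(i), f(i + eps))
```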
|
H: $n^2$ coins on a board flipping game
This is something I needed a little help with, I've been stuck on this problem for a while now.
Suppose you have $n^2$ coins on an $n \times n$ grid, $\left( n \gt 0\right)$, each with its heads side up. In each move, you can pick one of the $n$ rows or columns and flip over all of the coins in that row or column. No other move is permitted. You have an unlimited number of moves. You win by reaching a configuration where there is exactly one tail-side-up coin. Is this possible for all $n$?
I really have no idea how to go about it. I tried for the $2\times 2$ grid and I can say that there is no way to achieve one tail side up for the $2\times 2 $ grid.
AI: You cannot on any board except $n=1$.
First note that the order you do flips does not matter because the effects commute. Doing a flip twice is equivalent to not doing it at all, so all you care about is which rows and columns you have flipped once.
A simple proof is that you cannot succeed for even $n$. You start with an even number of heads and each move flips an even number of coins, so the number of heads must stay even.
On a board with odd $n$, all the coins are equivalent by symmetry, so we might as well try to make the top-left coin the one that finishes tails. Now look at the $2 \times 2$ subboard in the upper left: the only moves that change it are flips of the top two rows and the left two columns, and each such move flips exactly two of its four coins. So the number of tails in the subboard stays even, and we can never reach a configuration whose only tail lies in that subboard.
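The $2\times2$ obstruction can be checked exhaustively (a minimal sketch; since order and repetition don't matter, there are only $2^4$ flip patterns):

```python
from itertools import product

n = 2
reachable = set()
for flips in product([0, 1], repeat=2 * n):  # first n: rows, last n: columns
    tails = sum((flips[r] + flips[n + c]) % 2
                for r in range(n) for c in range(n))
    reachable.add(tails)
print(reachable)  # {0, 2, 4}: exactly one tail is unreachable
```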
|
H: Difference between Fourier series on $(0,l)$ and Fourier series on $[0,l]$
I'm currently using Strauss's partial differential equations book and there is something that confuses me. The Fourier series of $f(x)=x$ on $(0,l)$ is not the same on $[0,l]$.
For $(0,l)$ the Fourier series is
$$x=\frac{2l}{\pi} \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}\text{sin}\left(\frac{n\pi x}{l}\right)$$
For $[0,l]$ the Fourier series is
$$x=\frac{l}{2}-\frac{4l}{\pi^{2}}\sum_{n=1,3,5,...}\text{cos}\left(\frac{n\pi x}{l}\right)\frac{1}{n^{2}}$$
Why does $(0,l)$ get a sine series while $[0,l]$ gets a cosine series?
AI: Whether you include the endpoints of the interval or not is immaterial. There are Fourier sine series and Fourier cosine series on any interval. The cosine series is
the full Fourier series of the function extended to $[-l, l]$ by $f(-x) = f(x)$ and then made periodic.
The sine series is the full Fourier series of the function extended to $[-l,l]$ by $f(-x) = -f(x)$ and then made periodic.
BTW: your cosine series should be
$$ \frac{l}{2} - \frac{4l}{\pi^2} \sum_{n=1,3,5,\ldots} \frac{\cos\left(\frac{n\pi x}{l}\right)}{n^2} $$
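As a numerical sanity check (a minimal sketch with $l=1$ and the sample point $x=0.3$), both partial sums approach $x$:

```python
import numpy as np

l, x = 1.0, 0.3
sine = (2 * l / np.pi) * sum((-1) ** (n + 1) / n * np.sin(n * np.pi * x / l)
                             for n in range(1, 5000))
cosine = l / 2 - (4 * l / np.pi ** 2) * sum(np.cos(n * np.pi * x / l) / n ** 2
                                            for n in range(1, 5000, 2))
print(sine, cosine)  # both approximately 0.3
```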
|
H: How to find d(A,B)?
Let $A$ be the rectangular hyperbola $xy=1$ and $B$ the union of the axes, $xy=0$. Find $d(A,B)$.
Now if $A$ and $B$ are two non-empty subsets of a metric space $X$, then $d(A,B)=\inf\{d(a,b): a \in A \text{ and } b \in B\}$.
Honestly I didn't understand this question and have no idea how to proceed.
AI: For $x\ge1$, the point $(x,\frac{1}{x})$ is in $A$, and its distance to $B$ is $\frac{1}{x}$. So, what is the infimum of such distances?
|
H: Order of elements in an external direct product
Can anyone explain why the largest order of an element in $\mathbb{Z}_2 \oplus \mathbb{Z}_2 \oplus \mathbb{Z}_2 \oplus \mathbb{Z}_3 \oplus \mathbb{Z}_4 $ is 12?
Also, why does $\mathbb{Z}_2 \oplus \mathbb{Z}_2 \oplus \mathbb{Z}_2 \oplus \mathbb{Z}_3$ have no elements of order 12?
How do you find the possible orders of elements in these external direct products?
AI: The way this works is that an element $(a,b)\in G\times H$ has order $\rm{lcm}(|a|,|b|)$.
It's straightforward to see why this holds, based on the component-wise definition of the group operation on the product.
The two results you mentioned follow: the largest order in $\mathbb{Z}_2 \oplus \mathbb{Z}_2 \oplus \mathbb{Z}_2 \oplus \mathbb{Z}_3 \oplus \mathbb{Z}_4$ is $\operatorname{lcm}(2,2,2,3,4)=12$, while in $\mathbb{Z}_2 \oplus \mathbb{Z}_2 \oplus \mathbb{Z}_2 \oplus \mathbb{Z}_3$ every element order divides $\operatorname{lcm}(2,2,2,3)=6$, so there is no element of order $12$.
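Both facts can also be confirmed by brute force (a minimal sketch):

```python
from itertools import product
from math import gcd

def lcm(xs):
    out = 1
    for x in xs:
        out = out * x // gcd(out, x)
    return out

def max_order(mods):
    # order of a in Z_m is m // gcd(a, m); order of a tuple is the lcm
    return max(lcm([m // gcd(a, m) for a, m in zip(el, mods)])
               for el in product(*(range(m) for m in mods)))

print(max_order((2, 2, 2, 3, 4)))  # 12
print(max_order((2, 2, 2, 3)))     # 6, so no element of order 12
```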
|
H: Polynomial Division Under Certain Remainders
Let $P(x)$ be a polynomial such that when $P(x)$ is divided by $x-17$, the remainder is $14$, and when $P(x)$ is divided by $x-13$, the remainder is $6$. What is the remainder when $P(x)$ is divided by $(x-13)(x-17)$?
Here was my process, which I'm not sure is right:
We can write $P(x)$ in the form of $$P(x)=Q(x)(x-17)(x-13)+cx+d$$
Thus, by the remainder theorem, we have a system of equations:
\begin{align*}
14c+d &=6,\\
6c+d &=14.
\end{align*}
Solving gets $c=-1, d=20.$
Thus, our remainder is $\boxed{-x+20}.$
Did I make any errors in my process? Thanks in advance for helping. :)
AI: From $$P(x)=Q(x)(x-17)(x-13)+cx+d$$
Now, let $x=17$, then we have $$17c+d=14$$
If we let $x=13$, then we have
$$13c+d=6$$
Now solve for $c$ and $d$.
Subtracting the two equations, we have $4c=8 \iff c=2$. Proceed on to solve for $d$ to get the remainder.
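(Solving then gives $d = 14 - 17\cdot 2 = -20$, so the remainder is $2x-20$, not $-x+20$ as in the swapped system.) A quick sympy check, using an arbitrary illustrative quotient $Q(x)=x^2+1$:

```python
from sympy import symbols, rem, expand

x = symbols('x')
Q = x**2 + 1  # any quotient polynomial works for the check
P = expand(Q * (x - 17) * (x - 13) + 2 * x - 20)
print(P.subs(x, 17), P.subs(x, 13))            # 14 6
print(rem(P, expand((x - 17) * (x - 13)), x))  # 2*x - 20
```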
|
H: Find a first integral ODE
I am having trouble finding a first integral for the following system:
$$x'=x\left (1-x^2-\frac{y^2}{2}\right)$$
$$y'=y\left(1-\frac{x^2}{2}-y^2\right)$$
Could someone please help me?
AI: This is a planar gradient system. Let $f(x,y)=x(1-x^2-y^2/2)$ and $g(x,y)=y(1-x^2/2 -y^2)$. Notice that $\partial_yf=\partial_xg$.
The idea is to find a real-valued function $V(x,y)$ such that $\partial_xV=f$ and $\partial_yV=g$. Finding antiderivatives for $f$ and $g$ gives
\begin{aligned}
V(x,y)=\int f(x,y)\,dx &= \frac{x^2}{2}-\frac{x^4}{4}-\frac{x^2y^2}{4} + \phi(y)\\
V(x,y)=\int g(x,y)\,dy &= \frac{y^2}{2}-\frac{x^2y^2}{4} -\frac{y^4}{4} + \psi(x)
\end{aligned}
Equating these expressions gives $\phi(y)=\frac{y^2}{2}-\frac{y^4}{4}$ and $\psi(x)=\frac{x^2}{2}-\frac{x^4}{4}$. All in all,
$$
V(x,y)=\frac{x^2}{2}-\frac{x^4}{4} + \frac{y^2}{2}-\frac{y^4}{4} -\frac{x^2y^2}{4}
$$
|
H: An example of a local ring of $V$ at $p$ not ufd
Let $O_{V,p}$ be the local ring of an irreducible variety $V$ at a point $p$. I would like to know an example where $O_{V,p}$ is not a UFD.
AI: Take the zero locus in the plane of $y^2 = x^3$. In the local ring at the origin, $y^2$ has two essentially different factorizations: $y\cdot y = x\cdot x\cdot x$, with $x$ and $y$ irreducible and non-associate.
|
H: Odd prime $p$ implies positive divisors of $2p$ are $1,2,p,$ and $2p$
$1,2,p,$ and $2p$ are indeed divisors of $2p$. I want to show these are the only positive divisors. Is there a more elegant or concise way to prove this besides the proof I have below?
Suppose that positive $a \in \left([3,2p-1] \cap \mathbb{N}\right) \setminus\{p\}$ divides $2p$. So $ak$=$2p$ for $k \in \mathbb{Z}$, and clearly $2 \leq k \leq p$. Since $ak=2p$ is even, at least one of $a$ or $k$ must be even.
If $k$ is even, then $a\frac{k}{2}=aj=p$ for integer $1 \leq j \leq \frac{p}{2}<p$, so $j | p$, so $j=1$, so $a=p$ which is a contradiction.
Similarly, if $a$ is even then $k | p$, so $k=p$. But then $a=2$.
So there are no other positive divisors besides $1,2,p$, and $2p$.
The motivation for this is to show that if a group $G$ has order $2p$ for odd prime $p$, then nonabelian $G$ is isomorphic to $D_{2p}$. The proof begins with "the possible orders for nonidentity elements of $G$ are $2,p,$ and $2p$," which I am trying to prove with Lagrange's Theorem. If there is an alternative way to justify this statement using group theory, then I would appreciate seeing that as well.
AI: If $p$ is an odd prime number, then by the fundamental theorem of arithmetic, $2\times p$ is the unique prime factorization of $2p$. Once you express a positive integer $n$ via its unique prime factorization, say $p_1^{a_1}\dots p_k^{a_k}$, all the positive divisors are of the form $p_1^{b_1}\dots p_k^{b_k}$ where $0\leq b_i\leq a_i$ for each $i$. With this observation you should be able to answer your question.
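For instance, with the sample prime $p=11$ (a trivial sketch using sympy):

```python
from sympy import divisors

p = 11  # any odd prime
print(divisors(2 * p))  # [1, 2, 11, 22]
```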
|
H: $f(x)=x+2x^2\sin(1/x)$ increasing?
The function is as follows.
$$f(x)= \begin{cases} x+2x^2 \sin(\frac{1}{x}) &\text{, if $x\neq 0$}\\ 0 &\text{, otherwise} \end{cases}$$
i solved the problem as follows,
$f$ is not increasing in any neighborhood of $0$, using the following fact:
for $x_n={1\over2n\pi}$, $f'(x_n)=-1$; hence $f$ is not monotone increasing on any closed interval $[-\epsilon,\epsilon]$, so the above statement holds.
Is it correct? If it is correct, I have two further questions.
How to show that
$f(x)<0$ for any $x\in({-1\over2},0)$
$f(x)>0$ for any $x\in(0,{1\over2})$
I showed that for any $n\in \mathbb N$, $f(x_n)<f(y_n)$, where $x_n={1 \over 2n\pi}$ and $y_n={1 \over (2n+{1 \over 2})\pi}$.
How can I prove these?
AI: HINT:
Note that $-1\le \sin(1/x)\le 1$. So, we have
$$x-2x^2\le x+2x^2\sin(1/x)\le x+2x^2,$$
and $x-2x^2>0$ when $0<x<1/2$, while $x+2x^2<0$ when $-1/2<x<0$. Can you finish the argument?
|