H: $x^4-4=y^2+z^2$ prove that it has no integer solution
$x^4-4=y^2+z^2$ prove that it has no integer solution
I tried to check mod $4$, mod $3$, ...
It doesn't give anything.
I want to solve this problem by supposing that I have the smallest solution and then proving that a smaller one exists. But for that I would need something that every member of the equation is divisible by, which I couldn't find.
AI: Let $A=x^2+2,B=x^2-2$. Then the equation can be written as $$AB=y^2+z^2.$$ Clearly $\gcd(A,B)|(A-B)=4$. One has two cases to consider.
Case 1. $2|\gcd(A,B)$.
In this case, $x=2x'$ is even and $y,z$ must have the same parity. The case when $y,z$ are both odd can be ruled out, otherwise, taking congruence mod 4 would yield a contradiction. Writing $y=2y',z=2z'$, one has then $$(4x'^2+2)(4x'^2-2)=4(y'^2+z'^2)$$ $$\Rightarrow 4x'^4-1=y'^2+z'^2$$ which is impossible since a sum of two squares is not congruent to 3 mod 4.
Case 2. $\gcd(A,B)=1$.
In this case, $x$ must be odd. But since $\gcd(A,B)=1$ and $AB$ is a sum of two squares, any prime factor $p$ in $A$ must be either congruent to 1 mod 4 or congruent to 3 mod 4 with even multiplicity. It follows that $A$ must be congruent to 1 mod 4. But since $x$ is odd in this case, $A=x^2+2$ is congruent to 3 mod 4, which is a contradiction.
QED
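A quick sanity check, not part of the proof: a small brute-force search (a Python sketch over an arbitrary range) finds no solutions.
```python
# Brute-force search for integer solutions of x^4 - 4 = y^2 + z^2;
# by symmetry it suffices to take x, y, z >= 0.
found = []
for x in range(50):
    target = x**4 - 4
    if target < 0:
        continue
    for y in range(int(target**0.5) + 1):
        rem = target - y * y
        z = int(rem**0.5)
        if z * z == rem:
            found.append((x, y, z))
print(found)  # expected: [] (no solutions in this range)
```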
|
H: Evaluating the indefinite Harmonic number integral $\int \frac{1-t^n}{1-t} dt$
It is well-known that we can represent a Harmonic number as the following integral:
$$H_n = \int_0^1 \frac{1-t^n}{1-t} dt$$
The derivation of this integral doesn't need you to derive the indefinite integral first, so now I'm wondering what the indefinite integral is and how one can derive it. According to WolframAlpha the indefinite integral is:
$$\int \frac{1-t^n}{1-t} dt = \frac{t^{n+1}{}_2F_1(1,n+1;n+2;t)}{n+1} - \ln(1-t) + C$$
where ${}_2F_1(a,b;c;z)$ is a hypergeometric function. I understand why $-\ln(1-t)$ is at the end, that's the result of splitting up the integrand, but I don't understand how a hypergeometric function ends up there.
AI: Note that for $n\in\mathbb{N}$, we have
$$\frac{1-t^n}{1-t}=1+t+t^2+\cdots+t^{n-1}$$
It follows that
$$\int\:\frac{1-t^n}{1-t}\:dt=t+\frac{t^2}{2}+\frac{t^3}{3}+\cdots+\frac{t^n}{n}+C$$
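A quick verification sketch (assuming sympy is available) that this elementary antiderivative differentiates back to the integrand for a sample $n$:
```python
import sympy as sp

t = sp.symbols('t')
n = 5  # sample positive integer
F = sum(t**k / k for k in range(1, n + 1))  # t + t^2/2 + ... + t^n/n
print(sp.simplify(sp.diff(F, t) - (1 - t**n) / (1 - t)))  # expected: 0
```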
|
H: What is this kind of permutation called in math?
Assume we have a set $\{a_0,a_1,a_2,...,a_{n-1}\}$. Our permutation maps each element $a_i$ to $a_{ki\bmod n}$, where $n$ and $k$ are relatively prime.
Geometrically it looks as if we take a regular $n$-gon whose vertices are numbered from $0$ to $n - 1$ clockwise, then start walking from vertex $a_0$ to the vertex $a_k$ that is shifted by $k$ positions, and then to another, ... until we complete a cycle.
For example for $n=5, k=3$ we have $\{a_0, a_1, a_2, a_3, a_4\} \to \{a_0, a_3, a_1, a_4, a_2\}$.
The question is: does this kind of permutation have a special name?
AI: Permutations of the form $x\mapsto kx+a$, $x,a\in\Bbb{Z}_n$, $k\in\Bbb{Z}_n^*$, are sometimes called affine permutations modulo $n$, or affine transformations of $\Bbb{Z}_n$.
More typically an affine transformation concerns a vector space $V$ over some field $k$, meaning transformations of the form $x\mapsto T(x)+a$, with $T$ a linear transformation and $a$ a fixed vector. Those can be described using $(n+1)\times(n+1)$-matrices, $n=\dim_kV$.
Similarly, the affine transformation $T_{k,a}:x\mapsto kx+a$ can be described with the matrix
$$
M_{k,a}=\pmatrix{ k&a\cr0&1\cr}
$$
with matrix entries viewed as elements of the ring $\Bbb{Z}_n$.
The composition of such transformations then corresponds with the usual matrix product.
Anyway, all the affine transformations modulo $n$ form a group, we can call it $\operatorname{Aff}(n)$ for lack of better notation. The transformations you described, with $a=0$, form a subgroup isomorphic to $\Bbb{Z}_n^*$. The transformations with $k=1$ form another subgroup. In affine language they could be called "translations", in the case of permutations "cyclic shifts" is more common. The group of all affine transformations modulo $n$ is a semi-direct product of these two subgroups.
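A small illustrative sketch (the function names here are my own, not a standard API): affine maps $x\mapsto kx+a \pmod n$ compose exactly like the $2\times2$ matrices above.
```python
n = 5

def affine(k, a):
    return lambda x: (k * x + a) % n

def compose_params(k1, a1, k2, a2):
    # matrix product [[k1, a1], [0, 1]] @ [[k2, a2], [0, 1]], reduced mod n
    return (k1 * k2) % n, (k1 * a2 + a1) % n

f, g = affine(3, 0), affine(2, 4)
k, a = compose_params(3, 0, 2, 4)
print([f(g(x)) for x in range(n)] == [affine(k, a)(x) for x in range(n)])  # True
```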
|
H: Solutions of $e^x-1-k\cdot \arctan{x}=0$
Consider $h(x)=e^x-1-k\cdot\arctan{x}$; find the condition on $k$ (real) under which $h(x)=0$ has two solutions.
I let $f(x)=e^x-1$ and $g(x)=k\cdot\arctan{x}$ and supposed $f(x)-g(x)=0$ has roots $0$ and $y$. I found that as $y$ tends to $0$, $k$ tends to $1$. How to proceed further?
AI: $x=0$ is always one of the solutions. Then the function
$$f(x)=\frac{e^x-1}{\arctan x}$$ can be shown to be strictly increasing and to have a horizontal asymptote (as $x\to-\infty$)
$$y=\frac2\pi.$$
You can conclude.
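A numeric illustration (my own sketch, not part of the answer): tabulating $f$ shows it increasing and approaching $2/\pi$ on the left; the nonzero roots of $h$ are exactly the solutions of $f(x)=k$.
```python
import math

f = lambda x: (math.exp(x) - 1) / math.atan(x)
for x in (-50, -5, -1, -0.1, 0.1, 1, 5):
    print(x, f(x))            # strictly increasing values
print("2/pi =", 2 / math.pi)  # left asymptote, ~0.6366
```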
|
H: Find all continuous functions with $ \frac{1}{2} \int_{0}^{x}(f(t))^{2} d t=\frac{1}{x}\left(\int_{0}^{x} f(t) d t\right)^{2} $
Find all continuous functions $f:(0, \infty) \rightarrow(0, \infty)$ with $f(1)=1$ and
$$
\frac{1}{2} \int_{0}^{x}(f(t))^{2} d t=\frac{1}{x}\left(\int_{0}^{x} f(t) d t\right)^{2}
$$
My approach:-
Let $ F(x)=\int_{0}^{x} f(t) dt$ and $G(x)=\int_{0}^{x}(f(t))^{2} dt$.
Since $f:(0, \infty) \rightarrow(0, \infty)$ we have $F(x)>0 (\forall x>0)$
Also, $\frac{1}{2} G(x)=\frac{1}{x}\{F(x)\}^{2}$ from the given condition; on differentiation, we have
$$
\frac{1}{2} G^{\prime}(x)=\frac{1}{x} \cdot 2 F(x) \cdot F^{\prime}(x)-\frac{1}{x^{2}}(F(x))^{2}
$$
Next, I am confused about what to do.
Any suggestion or solution would be appreciated.
AI: Move the $x$ on over to the other side and differentiate:
$$\frac{x}{2}\int_0^x [f(t)]^2\:dt = \left(\int_0^x f(t)\:dt\right)^2$$
$$ \implies \frac{x}{2}[f(x)]^2 + \frac{1}{2}\int_0^x [f(t)]^2\:dt = 2f(x)\int_0^x f(t)\:dt$$
Then multiply both sides by $x$ and substitute in the first equation for the middle term:
$$\left(\int_0^x f(t)\:dt\right)^2 - 2xf(x)\int_0^x f(t)\:dt + \frac{x^2}{2}[f(x)]^2 = 0$$
Quadratic equation gives us
$$\int_0^x f(t)\:dt = xf(x) \pm \sqrt{x^2[f(x)]^2 - \frac{x^2}{2}[f(x)]^2} = \left(1 \pm \frac{1}{\sqrt{2}}\right)xf(x)$$
since $x,f\in(0,\infty)$. One more derivative gives us a workable differential equation
$$f = \left(1 \pm \frac{1}{\sqrt{2}}\right)\Bigr(f + xf'\Bigr) \implies (1\pm \sqrt{2})xf' + f = 0$$
which has the solutions
$$f(x) = x^{-\frac{1}{1\pm\sqrt{2}}} = x^{1\mp\sqrt{2}}$$
both of which satisfy $f(1) = 1$. Both functions are locally integrable around $0$ so we must keep them both as solutions.
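A verification sketch (assuming sympy): both exponents $1\pm\sqrt{2}$ satisfy the original integral equation.
```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
for a in (1 + sp.sqrt(2), 1 - sp.sqrt(2)):
    F = sp.integrate(t**a, t).subs(t, x)      # int_0^x f(t) dt (lower limit contributes 0)
    G = sp.integrate(t**(2*a), t).subs(t, x)  # int_0^x f(t)^2 dt
    print(sp.simplify(G / 2 - F**2 / x))      # expected: 0 for both exponents
```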
|
H: How can I construct a nilpotent matrix of order 100 and index 98?
I know how to construct a nilpotent matrix of order $n$ with index of nilpotency $n$, but how does one construct a nilpotent matrix of order $n$ with index of nilpotency $(n-2)$? Is there any general rule for this?
AI: The other answer gives you a very general hint on how to tackle the problem. It can be narrowed to your problem.
Let's say we want to construct a matrix of order $m$ and index $n$ ($m\ge n$). The simplest nilpotent matrix whose order equals its index is the one having $1$s on the superdiagonal (the subdiagonal works too, but is not the usual form), for example with order (and index) $4$:
$$J = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 &0 &1&0\\ 0&0&0&1\\0&0&0&0\end{bmatrix}$$
Geometrically, if we take the canonical basis $\{e_1,e_2,e_3,e_4\}$, then $J$ sends $e_4$ to $e_3$, $e_3$ to $e_2$, $e_2$ to $e_1$ and $e_1$ to $0$. Thus $e_4$ is sent to $0$ only at the $4$-th application of $J$, and it is the last one to vanish. This kind of matrix is known as a Jordan block.
Following the hint given in the other answer, the block matrix $\begin{bmatrix} J &\mathbf{0}\\\mathbf{0}^T&0 \end{bmatrix}$, where $\mathbf{0}$ is a zero vector of compatible size ($4$ elements in this case), has the same index as $J$, namely $4$, but has order $5$.
Based on this, we are able to construct a matrix with index $4$ and arbitrary order $\ge4$. Can you generalize the idea ?
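A minimal sketch (assuming numpy) of the construction for order $100$ and index $98$: one $98\times98$ Jordan block with eigenvalue $0$, padded by a $2\times2$ zero block.
```python
import numpy as np

J = np.diag(np.ones(97), k=1)  # 98x98 nilpotent Jordan block (1s on the superdiagonal)
A = np.zeros((100, 100))
A[:98, :98] = J

P = np.linalg.matrix_power
print(np.any(P(A, 97)), np.any(P(A, 98)))  # True False: the index is 98
```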
|
H: Can the vertices of a $20$-gon be labeled with $1, 2, ..., 20$ in such a way that the sum of any $4$ consecutive labels is less than $43$?
Can the vertices of a regular $20$-gon be labeled with numbers $1, 2, ..., 20$ in such a way that each label is used exactly once and for every four consecutive vertices the sum of their labels is less than $43$?
First I tried to find some example. Then I got nothing , so decided to try something else.
I wrote that the sum $1+2+3+\cdots+20=210<43\times5$.
So it seems to have a solution.
But I think something is disturbing this problem to have a solution.
I tried to group all the numbers into 4 parts. If I colour the vertices of the 20-gon with 4 colours in the pattern
1, 2, 3, 4, 1, 2, 3, 4, ..., 4,
then any 4 consecutive vertices have 4 different colours.
So either there is a way to divide all 20 numbers into 4 suitable colour groups, or there is not, and then we have to prove
that it's impossible.
AI: No. Suppose the labels are $p_1,p_2,\dots,p_{20}$ in order (and consider indices modulo $20$). If it were possible, then $p_j+p_{j+1}+p_{j+2}+p_{j+3}\le42$ for all $1\le j\le 20$; summing these $20$ inequalities yields $4(p_1+p_2+\cdots+p_{20}) \le 840$. But $4(p_1+p_2+\cdots+p_{20}) = 4(1+2+\cdots+20) = 840$ as well, which would force $p_j+p_{j+1}+p_{j+2}+p_{j+3}=42$ exactly for all $1\le j\le 20$. But it's impossible for $p_1+p_2+p_3+p_4=42=p_2+p_3+p_4+p_5$ since $p_1$ and $p_5$ are distinct.
(One moral: when dealing with integers, always convert inequalities like $s<43$ to $s\le42$.)
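A small sketch of the counting step: each label lands in exactly $4$ of the $20$ cyclic windows, so the window sums always total $4\cdot210=840$, forcing an average of $42$.
```python
import random

labels = list(range(1, 21))
random.shuffle(labels)  # any labeling whatsoever
windows = [sum(labels[(j + i) % 20] for i in range(4)) for j in range(20)]
print(sum(windows), max(windows))  # always 840; the proof shows the max must exceed 42
```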
|
H: Example of a set with empty boundary in $\mathbb{Q}$
I was dealing with a problem: if a subset of a metric space has empty boundary, then it is open as well as closed in the space. The proof is easy. But I am wondering about a nontrivial example of such a set (one with empty boundary).
Since $\Bbb{R}$ is connected, there is no such nontrivial example in $\Bbb{R}$. But if we take a disconnected subspace of $\Bbb{R}$, then we easily get an example. Can anybody give an example of a subset with empty boundary in the disconnected subspace $\Bbb{Q}$ of $\Bbb{R}$?
AI: For example the set $\mathbb{Q}\cap (-\infty, \sqrt{2})$. It is both open and closed in $\mathbb{Q}$.
|
H: What is the solution of this summation?
$$S(x) = \frac{x^4}{3(0)!} + \frac{x^5}{4(1)!}+\frac{x^6}{5(2)!}+\cdots$$
If the first term were $$x^3$$ and the terms were $$x^{3+i}$$ then differentiating it would have given $$x^2 e^x$$ and it would then have been possible to integrate. But how to solve this one?
AI: I'm assuming you've made a typo and you actually have
$$S(x) = \frac{x^4}{3(0)!} + \frac{x^5}{4(1)!}+\frac{x^{{\color{red}6}}}{5(2)!}+ \cdots.$$
Consider $P(x) = S(x)/x$. We have
\begin{align}
P(x) &= \frac{x^3}{3(0)!} + \frac{x^4}{4(1)!}+ \dfrac{x^5}{5(2)!} + \cdots\\
&=\int_0^x\left(\dfrac{t^2}{0!} + \dfrac{t^3}{1!} + \dfrac{t^4}{2!} + \cdots\right){\rm d}t\\
&= \int_0^x t^2e^t{\rm d}t\\
&= e^x(x^2 - 2x + 2) - 2.
\end{align}
Thus,
$$\boxed{S(x) = e^x(x^3 - 2x^2 + 2x) - 2x}.$$
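A numerical check sketch (assuming sympy): the boxed closed form agrees with the partial sums of the series at a sample point.
```python
import sympy as sp

x = sp.symbols('x')
closed = sp.exp(x) * (x**3 - 2*x**2 + 2*x) - 2*x
partial = sum(x**(k + 4) / ((k + 3) * sp.factorial(k)) for k in range(40))
print(sp.N(closed.subs(x, 1) - partial.subs(x, 1)))  # ~0
```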
|
H: Finding the value of $\sin^{-1}\frac{12}{13}+\cos^{-1}\frac{4}{5}+\tan^{-1}\frac{63}{16}$
Find the value of $\sin^{-1}\frac{12}{13}+\cos^{-1}\frac{4}{5}+\tan^{-1}\frac{63}{16}$.
My attempt: $$\sin^{-1}\frac{12}{13}+\cos^{-1}\frac{4}{5}+\tan^{-1}\frac{63}{16}$$
$$=\tan^{-1}\frac{12}{5}+\tan^{-1}\frac{3}{4}+\tan^{-1}\frac{63}{16}$$
$$=\tan^{-1}(\frac{\frac{12}{5}+\frac{3}{4}}{1-\frac{12}{5}\cdot\frac{3}{4}})+\tan^{-1}\frac{63}{16}$$
$$=\tan^{-1}\frac{63}{-16}+\tan^{-1}\frac{63}{16}$$
$$=-\tan^{-1}\frac{63}{16}+\tan^{-1}\frac{63}{16}$$
$$=0$$
But the answer is given as $\pi$. What is my mistake?
AI: Actually
$$\tan^{-1}\frac{12}{5}+\tan^{-1}\frac{3}{4}=\pi+\tan^{-1}\left(\frac{\frac{12}{5}+\frac{3}{4}}{1-\frac{12}{5}\cdot\frac{3}{4}}\right).$$
We can notice that
$\frac{\pi}{2}<\tan^{-1}\frac{12}{5}+\tan^{-1}\frac{3}{4}<\pi$.
I am going to prove that
if $\frac{\pi}{2}<\alpha+\beta<\frac{3\pi}{2}$ then $\alpha+\beta=\pi+\tan^{-1}\left(\frac{\tan\alpha+\tan\beta}{1-\tan\alpha\cdot\tan\beta}\right).$
Proof:
$\tan(\alpha+\beta-\pi)=\frac{\tan\alpha+\tan(\beta-\pi)}{1-\tan\alpha\cdot\tan(\beta-\pi)}=\frac{\tan\alpha+\tan\beta}{1-\tan\alpha\cdot\tan\beta}$
As $|\alpha+\beta-\pi|<\frac{\pi}{2}$ and the function tangent is invertible in $\left]-\frac{\pi}{2},\frac{\pi}{2}\right[$, it follows that
$\alpha+\beta-\pi=\tan^{-1}\left(\frac{\tan\alpha+\tan\beta}{1-\tan\alpha\cdot\tan\beta}\right)$, therefore:
$\alpha+\beta=\pi+\tan^{-1}\left(\frac{\tan\alpha+\tan\beta}{1-\tan\alpha\cdot\tan\beta}\right)$.
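A quick numeric confirmation of the corrected value:
```python
import math

s = math.asin(12/13) + math.acos(4/5) + math.atan(63/16)
print(s, math.pi)  # both ~3.141592653589793
```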
|
H: Definition of subsequence used in defining accumulation points
According to the definition given on this page, a number $a$ is an accumulation point of a sequence $(a_n)$ if there is a subsequence $(a_{n_k})$ that converges to $a$ as $k\to \infty$.
What does the word subsequence mean in this definition?
In the page linked above, Theorem 1 says that if a sequence converges, then it has only one accumulation point, namely, the value that the original sequence converges to.
For example, example 2 on the page linked above claims that the only accumulation point of $$(a_n) : a_n = \frac{n+1}{n}, \quad n \in \mathbb{N}$$
is $1$ because $\lim_{n \to \infty} a_n = 1. $
Now, Wikipedia says that a subsequence is formed by deleting terms from the parent sequence. So, what if we picked the subsequence consisting only of the first term with $n=1$? Then $a_1 =2$ and $2$ appears to be an accumulation point.
Must the subsequence be infinite in size, and if so, how can we modify the Wikipedia definition of a subsequence to require that an infinite number of terms remain after deletion?
AI: A subsequence of a sequence $(a_n)_{n\in\Bbb N}$ is a sequence $(a_{n_k})_{k\in\Bbb N}$ such that $(n_k)_{k\in\Bbb N}$ is a strictly increasing sequence of natural numbers. So, for instance, $(a_{2n})_{n\in\Bbb N}$ and $(a_{n^2})_{n\in\Bbb N}$ are subsequences of $(a_n)_{n\in\Bbb N}$. In the first case, I took $n_k=2k$ and, in the second case, I took $n_k=k^2$. (In particular, a subsequence is always infinite.)
|
H: Checking uniform continuity of a function
The task is to check the uniform continuity of the following function on the set $\{(x,y): x^2+y^2 \geq 2\}$:
$$f(x,y)=(x^2+y^2)\cdot \sin\left(\frac{1}{x^2+y^2}\right)$$
Can you help me with this one?
I have only dealt with checking a single point before; here a whole set is involved, and I have no idea what to do with that.
AI: Considering the new variable $z = \frac{1}{x^2+y^2}$ (note $0 < z \leqslant \tfrac12$ on the given set), we have a composition of a uniformly continuous function with $f(z)=\frac{\sin z}{z}$, which is uniformly continuous on $0 \leqslant z \leqslant 1$.
|
H: Complement probability
Two events A and B have the following probabilities: P[A] = 0.4, P[B] = 0.5, and P[A ∩ B] = 0.3
Calculate the following:
(a) P[A ∪ B] =0.6
(b) P[A ∩ B'] = 0.1
(c) P[A' ∪ B'] = 0.7
I get this which is incorrect:
P[A' ∪ B'] = P(A') + P(B') - P(A ∩ B) = 0.6
What is the correct formula?
AI: a) $P(A \cup B)=P(A)+P(B)-P(A \cap B)=0.4+0.5-0.3=0.6$
b) $P(A \cap \overline{B})=P(A)-P(A \cap B)=0.4-0.3=0.1$
c) Using one of De Morgan's laws,
$P(\overline{A} \cup \overline{B})=P(\overline{A\cap B})=1-0.3=0.7$
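A tiny arithmetic sketch via the four atoms of the probability table:
```python
pA, pB, pAB = 0.4, 0.5, 0.3
p = {('A', 'B'): pAB, ('A', "B'"): pA - pAB, ("A'", 'B'): pB - pAB}
p[("A'", "B'")] = 1 - sum(p.values())
print(p[('A', "B'")], 1 - p[('A', 'B')])  # 0.1 and 0.7
```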
|
H: Hausdorff and locally compact
Theorem A space $X$ is locally compact and Hausdorff if and only if it is
homeomorphic to an open subset of a compact Hausdorff space.
Can anyone give me a hint to prove this result? I want the strategy of the proof of this particular theorem.
Thanks in advance.
AI: Hint: if $X$ is compact, then you can conclude. If not, use the Alexandroff compactification of $X$.
|
H: Differential Equation: Cauchy-Euler Boundary Value Problem
I'm a bit confused with how my text finds the constants:
$$
x^{2} y^{\prime \prime}-3 x y^{\prime}+3 y=24 x^{5}, \quad y(1)=0, \quad y(2)=0
$$
Auxiliary / Characteristic Equation: $m(m-1)-3 m+3=(m-1)(m-3)=0$. General solution of the associated homogeneous equation is $y=c_{1} x+c_{2} x^{3}$. And this is where I get a bit confused.
Applying $ y(1) = 0 $ to this solution implies $ c_1 + c_2 = 0$ or $c_1 = - c_2 $. By choosing $c_2 = -1 $ we get $c_1 = 1$ and $y_1 = x - x^3 $. On the other hand, $ y(2) =0$ applied to the general solution shows $2 c_{1}+8 c_{2}=0$ or $c_1 = -4 c_2$. The choice $c_2 = -1$ now gives $c_1 = 4$ and so $y_{2}(x)=4 x-x^{3}$
It goes on to the Wronskian ($W\left(y_{1}(x), y_{2}(x)\right)$) and into the Green's function for the boundary-value problem and to the particular integral and solution.
My confusion is how the values for $ c_1 $ and $ c_2$ are defined. It seems like arbitrary integers are assigned corresponding to $y_1$ and $y_2$. How do I make sense of that?
EDIT: Rest of the problem:
$$
W\left(y_{1}(x), y_{2}(x)\right)=\left|\begin{array}{ll}x-x^{3} & 4 x-x^{3} \\ 1-3 x^{2} & 4-3 x^{2}\end{array}\right|=6 x^{3}
$$
Hence the Green’s function for the boundary-value problem is
$$
G(x, t)=\left\{\begin{array}{ll}\frac{\left(t-t^{3}\right)\left(4 x-x^{3}\right)}{6 t^{3}}, & 1 \leq t \leq x \\ \frac{\left(x-x^{3}\right)\left(4 t-t^{3}\right)}{6 t^{3}}, & x \leq t \leq 2\end{array}\right.
$$
In order to identify the correct forcing function f, we put into standard form:
$$
y^{\prime \prime}-\frac{3}{x} y^{\prime}+\frac{3}{x^{2}} y=24 x^{3}
$$
From this equation we see that $f(t)=24 t^{3}$ and so $y_{p}(x)$ is
$$
\begin{aligned} y_{p}(x) &=24 \int_{1}^{2} G(x, t) t^{3} d t \\ &=4\left(4 x-x^{3}\right) \int_{1}^{x}\left(t-t^{3}\right) d t+4\left(x-x^{3}\right) \int_{x}^{2}\left(4 t-t^{3}\right) d t \end{aligned}
$$
Simplification leads to
$$
y_{p}(x)=3 x^{5}-15 x^{3}+12 x
$$
AI: Obviously, the linear system
$$
c_1+c_2=0\\2c_1+8c_2=0
$$
only has the trivial solution $c_1=c_2=0$. You should have left a parameter after the first step, $y(x)=c(x-x^3)$ so that then $0=y(2)=-6c$ directly gives $c=0$.
But in your task you first have to find a particular solution before determining the constants of the complementary or homogeneous part. The method of undetermined coefficients gives $y_p(x)=ax^5$ with some constant $a$. Then find the coefficients so that the boundary conditions are satisfied for $$y(x)=ax^5+c_1x+c_2x^3.$$
In the solution with variation-of-constants, to construct the Green function you need basis solutions $y_1$ satisfying the left boundary condition and $y_2$ satisfying the right boundary condition. Then the product $y_1(\min(t,x))y_2(\max(t,x))$ satisfies the homogeneous DE for $t\ne x$ and considering the jump in the derivative, has a right side $W(x)\delta(t-x)$ in the normalized equation. As it is a product, any factor in the basis solution cancels out by dividing with the Wronski determinant, so indeed the choice of the second initial condition is arbitrary.
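A verification sketch (assuming sympy): the particular solution quoted in the question satisfies the ODE and both boundary conditions.
```python
import sympy as sp

x = sp.symbols('x')
y = 3*x**5 - 15*x**3 + 12*x
print(sp.simplify(x**2 * y.diff(x, 2) - 3*x*y.diff(x) + 3*y - 24*x**5))  # 0
print(y.subs(x, 1), y.subs(x, 2))  # 0 0
```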
|
H: Solution to a differential equation using equations that have no analytic solution
So I have the following question here.
Suppose that $y_1$ solves $2y''+y'+3x^2y=0$ and $y_2$ solves $2y''+y'+3x^2y=e^x$. Which of the following is a solution of $2y''+y'+3x^2y=-2e^x$?
(A) $3y_1-2y_2$
(B) $y_1+2y_2$
(C) $2y_1-y_2$
(D) $y_1+2y_2-2e^x$
(E) None of the above.
The answer is supposed to be $A$. But I am not really sure how that happened.
I know that for $2y''+y'+3x^2y=-2e^x$ the solution is always the homogeneous part and the particular part added together.
Furthermore, I know that the homogeneous part is given as $y_1$.
I then know that for $2y''+y'+3x^2y=e^x$ the solution for that one is composed of the homogeneous part and the particular part and that I can also write the ODE as $-4y''-2y'-6x^2y=-2e^x$. So this implies that the homogeneous portion is just $-2y_1$.
I can't get further than that though. Is my thought process right so far? If not, what more can I do and how can I proceed from here? I can't even solve the first two equations since they have no analytic solution.
This is from an old exam I was looking at and not an assignment, so feel free to show work.
AI: You are on the right track. Just notice that $ay_1$ is a solution to the homogeneous equation for any constant $a$. So, when you say that "the solution to the homogeneous is just $-2y_1$", you could also say that $3y_1$ is a solution. Then substitute $-2y_2$ in the LHS, and use the information you have about $y_2$: what do you get?
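A linearity sketch (assuming sympy): with $L[y]=2y''+y'+3x^2y$, linearity gives $L[3y_1-2y_2]=3L[y_1]-2L[y_2]=3\cdot0-2e^x=-2e^x$.
```python
import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.Function('y1')(x), sp.Function('y2')(x)
L = lambda y: 2*y.diff(x, 2) + y.diff(x) + 3*x**2*y
print(sp.expand(L(3*y1 - 2*y2) - (3*L(y1) - 2*L(y2))))  # 0: L is linear
```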
|
H: If $f(x) = x^4 - x^2 + 1$, find the values of $x$ such that $f(f(f(x))) \le x^8$
If $$f(x) = x^4 - x^2 + 1$$ find the values of $x$ such that $f(f(f(x))) \le x^8$
I noticed that $f(\pm 1) = 1 \implies \underbrace{f(f(f(... f(\pm 1)..)))}_{\text{n times}} = 1$. Thus, $f(f(f(x))) = x^8$ at $x = \pm 1$. Fortunately, these were the only two solutions to this problem. However, this is hacky at best and incomplete at worst.
Can somebody provide a rigorous proof for this problem?
AI: $f(x)=x^2\left(x^2+\frac{1}{x^2}-1\right)$ for $x\neq0$.
Now notice that
$x^2+\frac{1}{x^2}-1\ge1$ for all real $x\neq0$ (apply $AM \ge GM$ to $x^2$ and $\frac{1}{x^2}$).
That is, $f(x)\ge x^2$ (this also holds at $x=0$, where $f(0)=1$), which implies that $f(f(f(x))) \ge x^8$.
But in the question it is given that $f(f(f(x))) \le x^8$.
Therefore $f(f(f(x))) = x^8$, which happens exactly when $x^2+\frac{1}{x^2}-1=1$; hence $x = \pm 1$ are the only solutions.
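A quick numeric spot-check of the key inequality $f(x)\ge x^2$, with equality only at $x=\pm1$:
```python
f = lambda x: x**4 - x**2 + 1
print(all(f(x/10) >= (x/10)**2 for x in range(-30, 31)))  # True
print(f(1) - 1, f(-1) - 1)                                # 0 0 (equality at x = +-1)
```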
|
H: Binomial coefficient expansion question
I'm trying to follow this expansion (linked below) for one of my classes, but nothing I have tried is proving successful.
Any hints or help would be very appreciated.
Thanks :)
Binomial coefficient expansion
AI: There is a mistake, here is the correct expansion
$${n\choose r}=\frac{n!}{r!(n-r)!}=\frac{n}{n-r}\cdot\frac{(n-1)!}{r!(n-1-r)!}=\frac{n}{n-r}{n-1\choose r}$$
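A quick check of the corrected identity for sample values, using Python's math.comb:
```python
from math import comb

n, r = 10, 4
print(comb(n, r), n * comb(n - 1, r) // (n - r))  # 210 210
```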
|
H: Eigenvalues and vector norm
$A:\mathbb R^2\to\mathbb R^2$ is a 2 by 2 matrix whose eigen values are 2/3 and 9/5. Prove that there exists a non zero vector v such that |Av|=|v|.
It's not given that $A$ is symmetric, so I cannot conclude that $A$ is positive definite. How do I proceed? Please help.
AI: Hint : Let $S$ be the unit sphere $\{\|x\| = 1\}$. Define $T : S \to \mathbb R$ by $T(x) = \|Ax\|$.
$S$ is a connected subset of $\mathbb R^2$ (because you're using the Euclidean norm, the unit sphere is connected).
$T$ is a continuous map (composition of continuous maps).
Thus $T(S)$ is a connected subset of $\mathbb R$, hence an interval.
That interval contains $\frac 23$ and $\frac 95$, so it also contains $1$.
(Complete any unknown details)
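An illustration with my own example matrix (not from the question), assuming numpy: a non-symmetric $A$ with eigenvalues $2/3$ and $9/5$, and a unit $v$ with $|Av|\approx1$ found by scanning the unit circle, which is exactly the intermediate value argument.
```python
import numpy as np

A = np.array([[2/3, 1.0], [0.0, 9/5]])  # triangular: eigenvalues 2/3 and 9/5
ts = np.linspace(0, np.pi, 100001)
vs = np.vstack([np.cos(ts), np.sin(ts)])  # unit vectors around the circle
norms = np.linalg.norm(A @ vs, axis=0)    # T(v) = |Av| along the circle
i = np.argmin(np.abs(norms - 1))
print(norms[i])  # ~1.0
```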
|
H: Closed set that is not complete
Can anyone help me out with an example of a closed set that is not complete? I have read on other web pages that the set of irrational numbers with the Euclidean metric is such an example, but that does not make any sense to me, since the set of irrational numbers with the Euclidean metric is not closed to begin with.
AI: Yes, the set $\Bbb I$ of irrational numbers is an example. It is a closed subset of itself and it is not complete (with respect to the usual distance).
On the other hand, every closed subset of a complete metric space is always complete.
|
H: $\sigma(\xi)$ is independent of a fixed $\sigma$-algebra
I want to prove that if $\lim_{n\to\infty}\xi_n = \xi$ pointwise and each $\xi_n$ is independent from a fixed sigma-algebra $F$, then $\xi$ is independent from $F$.
I understand that $\sigma(\xi)$ will be independent from $F$ too.
How can it be proved: if $\sigma(\xi)$ is independent from $F$ then $\xi$ is independent of $F$?
AI: $\sigma (\xi)=\{\xi^{-1} (A): A \in \mathcal B\}$ where $\mathcal B$ is the Borel sigma algebra of $\mathbb R$. The hypothesis gives $P(\xi ^{-1}(A) \cap E)=P(\xi ^{-1}(A) )P(E)$ for any Borel set $A$ and any $E \in F$. This is the definition of $\xi$ being independent of $F$.
|
H: Correspondence between time and the level of water while filling a hemisphere with water at a constant rate
I am trying some questions in a quantitative aptitude section and I couldn't work out the reasoning for this one.
A hemispherical bowl is being filled with water at a constant volumetric rate. The level of water in the bowl increases
A. in direct proportion to time,
B. in inverse proportion to time,
C. faster than direct proportion to time,
D. slower than direct proportion to time.
I thought that as time increases the rate at which the water level rises decreases, since higher up the bowl has a larger surface area. So, I chose B.
But this is not the case.
Answer is ->
D
So, please help. Also, it would be really helpful if someone can give a purely mathematical treatment of the reasoning used.
AI: Here is an idea for a possible answer. Suppose your hemispherical bowl is
$$
x^2 + y^2 + z^2 \leq 1, \quad z \leq 0.
$$
The bowl is filled at constant volumetric rate $K > 0 $ so the volume of water in the bowl at time $t$ is $V(t) = Kt$. Let $z(t)$ be the level of the water at time $t$, with $z(t) = -1$ the initial level when the bowl is empty. The volume of the bowl when it is filled at $z(t)$ is
$$
V(t) = Kt = \int_{z=-1}^{z(t)} \underbrace{\pi (1-z^2)}_{\text{area of the disk $x^2 + y^2 \leq 1 - z^2$ in the plane ($z$ fixed)}} dz
$$
So differentiating in $t$ we get
$$
K = \pi(1-z^2(t))z'(t) \qquad \text{i.e.} \qquad z'(t) = \frac{K}{\pi(1-z^2(t))}.
$$
Now you have to integrate this ODE with initial condition $z(0) = -1$ and compare $z(t)$ to $K$. Unfortunately I am not too good with ODEs so I am letting you look for how to solve this!
EDIT: As TonyK says in the comments, you can already see from the differential equation that the rate of rise $z'(t)$ decreases as the level goes up (for $-1 \le z \le 0$), which gives the answer to the initial question.
Now concerning intuition, the bowl gets larger and larger in radius as the water fills it, so adding a small volume $dV$ to the bowl when it is already partially filled will make the level rise by much less than if the bowl is almost empty. Therefore the speed at which the level rises decreases in time. But filling an Erlenmeyer flask you would get the opposite.
|
H: Determine a basis in $R^4$ containing the vectors $u$, $v$ and $ w$.
Let $u = (2, 3, 4, 4)^T$, $v = (0, 1, 2, 6)^T$
, $w = (0, 0, 1, 1)^T$.
Determine a basis in $\mathbb{R^4}$
containing the vectors $u$, $v$ and $ w$.
I thought the basis would contain only $u,v,w$. However in the answer sheet a fourth vector is added. My question is: how can I know how many vectors a basis will contain? Because another question asks: "$V$ consists of those vectors in $\mathbb{R^4}$
in which the sum of the upper two coordinates is equal to the sum of the lower two coordinates; determine a basis", and the answer there is a basis containing 3 vectors.
AI: Because here what is being asked of you is a basis of $\Bbb R^4$. And, since $\dim\Bbb R^4=4$, every basis of $\Bbb R^4$ has $4$ elements.
|
H: Showing that $f$ is always cohomologous to $f_m$, for some $m$.
I am working through problem 10.16 of Morandi's Field and Galois Theory, which is a guided computation of the second cohomology of a cyclic group of order $n < \infty$.
Let $G =\langle\sigma\rangle$ be cyclic of order $n$, let $G$ act on an Abelian group $M$, and let $f \in Z^2(G,M)$ be a $2$-cocycle. Let $m \in M^G = \{m \in M : \sigma m = m\}$ and define
$$
f_m(\sigma^i,\sigma^j) = \begin{cases}
0 &\text{if} \ \ i + j < n \\
m &\text{if} \ \ i+j \geq n
\end{cases}
$$
for $i,j \in \{0,\dots, n-1\}$. I want to show that $f$ is cohomologous to $f_m$, where $m = \sum_{i=0}^{n-1} f(\sigma^i,\sigma)$. It isn't hard to show that $m \in M^G$, but I have no idea how to find a cochain $h : G \to M$ such that
$$
\delta_1(h)(\sigma^i, \sigma^j) = \sigma^ih(\sigma^j) - h(\sigma^{i+j}) + h(\sigma^i) = (f - f_m)(\sigma^i,\sigma^j).
$$
I have seen some computations of the cohomology of cyclic groups that use a lot of homological algebra, but I am very new to the subject so I haven't managed to use them to solve this problem. I understand that $h$ must have a piece-wise definition since $f_m$ does, but otherwise I feel like I am just taking shots in the dark looking for a good candidate for $h$.
AI: So, I've toyed around with the problem (and checked up on $H^2$ groups), and I've come up with the following argument -- I can't help you see why, but it works up to a sign (are you sure you don't want $f$ cohomologous to $f_{-m}$?).
Let $h(\sigma^i)=\sum_{k=0}^{i-1}{f(\sigma^k,\sigma)}$ for $0 \leq i < n$.
For $0 \leq i,k \leq n$, $\sigma^i(f(\sigma^k,\sigma))=f(\sigma^{i+k},\sigma)+f(\sigma^i,\sigma^{k+1})-f(\sigma^i,\sigma^k)$.
It follows that if $0 \leq i,j < n$, $h(\sigma^i)+\sigma^ih(\sigma^j)=f(\sigma^i,\sigma^j)-f(\sigma^i,e_G)+\sum_{k=0}^{i+j-1}{f(\sigma^k,\sigma)}$.
But it is easy to see from the cocycle equation that $f(e_G,e_G)=0_M$ and then that $f(g,e_G)=0_M$. Therefore $\delta_1(h) = f+f_m$.
|
H: Evaluate the integral using Euler integrals
I have the following integral:
$$\int_{0}^\infty \frac{\sqrt{x}}{7+x^7} \ dx$$
I want to evaluate this using the Euler integral. What I have tried:
I tried to make a substitution, because I want to evaluate it via gamma integrals. But I can not find the substitution. Can somebody help me with the substitution?
My attempt:
I made the substitution $$t = \frac{1}{7}x^7, \ \ \ x = (7t)^{1/7}, \ \ \ dx = (7t)^{-6/7} dt, \ \ \ \Rightarrow x^{1/2} = (7t)^{1/14}$$ I fill in and receive:
$$\int_{0}^\infty \frac{\sqrt{x}}{7+x^7} \ dx = \frac{1}{7} \int_{0}^\infty \frac{\sqrt{x}}{1+\frac{1}{7}x^7} \ dx = \frac{7^{(-11/14)}}{7}\int_{0}^\infty \frac{t^{(1/14) - (6/7)}}{1+t} \ dt$$
After that, I continued: $$\frac{7^{(-11/14)}}{7}\int_{0}^\infty \frac{t^{(-11/14)}}{1+t} \ dt = \frac{7^{(-11/14)}}{7} B(\frac{3}{14}, 1-\frac{3}{14}) = \frac{7^{(-11/14)}}{7} \frac{\Gamma(\frac{3}{14})\cdot \Gamma(1-\frac{3}{14})}{\Gamma(1)} = \frac{7^{(-11/14)}}{7}\frac{\pi}{\sin(\frac{3\pi}{14})}$$
But the answer has to be $\frac{1}{7^{25/14}}\frac{\pi}{\sin(\frac{3\pi}{14})}$ Where did I make the mistake?
AI: By the change of variable
$$
t=\frac{x^7}7,\quad x=(7t)^{1/7},\quad dx=(7t)^{-6/7}dt,
$$ one is led to the Euler beta integral
$$
B(x,y) = \int_0^\infty\frac{t^{x-1}}{(1+t)^{x+y}}\,dt=\frac{\Gamma(x)\,\Gamma(y)}{\Gamma(x+y)}, \quad \operatorname{Re}(x)>0,\ \operatorname{Re}(y)>0.
$$
Hope you can take it from here. (Also note that $\frac{7^{-11/14}}{7}=7^{-25/14}$, so your final value already agrees with the stated answer.)
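A numeric check sketch (assuming scipy) that the closed form matches the integral:
```python
import math
from scipy.integrate import quad

val, _ = quad(lambda x: math.sqrt(x) / (7 + x**7), 0, math.inf)
print(val, 7**(-25/14) * math.pi / math.sin(3 * math.pi / 14))  # both ~0.156
```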
|
H: Show that T is a regular distribution
I am given a distribution $$T:D(\mathbb{R}) \rightarrow \mathbb{C}$$ and need to show that it is in fact a regular distribution.
Let $T(\varphi)=\varphi(-1)+\varphi'(1)$.
How can I show that this is in fact a regular distribution ? By definition I would need to find a locally integrable function $u$, such that $$\int_{\mathbb{R}}{u(x)\varphi(x)dx = T(\varphi)} \; \text{ for all } \varphi \in D(\mathbb{R}) $$
I would appreciate a hint as I am stuck.
AI: I don't think such a function $u$ exists. The support of $T$ is contained in $\{-1,1\}$ and there is no non-zero locally integrable function with finite support.
|
H: What test should I use for this problem? (assessing the significance of a change in a categorical variable between two different sized populations)?
I have 2 high schools, School A and School B.
For the first school, I have 5 classes of students; for the second, I have 3 classes (so 8 classes in total).
Within each class, I have different categorical information about each student, for example whether they're male, whether they study French, etc.
The number of students in each class is different.
So the data might look like this (for example):
SCHOOL A
Class 1: 50 students, 20 males, and 5 students who study French
Class 2: 300 students, 50 males, and 8 students who study French
...
Class 5: 25 students, 17 males, and 3 students who study French
SCHOOL B
Class 1: 140 students, 80 males, 10 students who study French
Class 2: 2500 students, 600 males, 110 students who study French
Class 3: 200 students, 110 males, 9 students who study French
What test do I use to assess whether there is a significant difference in the number of males or students who study French between School A and School B?
I'm confused because the different sample sizes presumably mean we should be looking at proportions, but if I look ONLY at proportions, am I still factoring in the magnitudes of the original values? (E.g. far more students are male than study French, so 6/100 students studying French vs. 3/100 will look small in terms of proportion changes.) Would it be a t-test on the proportions?
AI: Since every variable is categorical (school, gender, topic studied), you can run a chi-squared test for independence. You want to compare proportions while factoring in the sample sizes, this is exactly what chi-squared tests do.
Simply compute what the expected numbers are in case of mutual independence of all variables, and compute the statistic $$\chi^2=\sum \frac{(O-E)^2}{E}$$
where $O$ stands for Observed numbers and $E$ stands for Expected. The degree of freedom of the system is the product "number of schools $-1$" times "number of topics $-1$" times "number of genders $-1$". Given the way the question is asked, the classes are irrelevant.
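A usage sketch with hypothetical aggregate counts (the question elides classes 3-4 of School A, so these totals are made up for illustration only), assuming scipy: a chi-squared test of school vs. gender.
```python
from scipy.stats import chi2_contingency

#            males  non-males   <- hypothetical per-school totals
observed = [[ 400,     600],    # School A
            [ 790,    2050]]    # School B
chi2, p, dof, expected = chi2_contingency(observed)
print(chi2, p, dof)
```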
|
H: Alternating series estimation test proof
The first part of the proof of the error estimate theorem in integral calculus is confusing me. It states that $$\biggr\vert \sum_{n=0}^{\infty}(-1)^nb_n-\sum_{n=0}^N(-1)^nb_n\biggr\vert=\biggr\vert \sum_{n=N+1}^{\infty}(-1)^nb_n)\biggr\vert$$
I don't understand why the lower bounds reduce to $N+1$. I've tried drawing a graph where the x-axis represents the bounds of summation and the y-axis represents the sum. My domain starts at $x=0$, then to the right extends to $x=N$, and then it continues for $x>N$. I drew two identical decreasing curves $s$, existing on $[0, \infty)$, and $s_N$, existing on $[0, N]$. When I tried $s-s_N$ I geometrically got a lower bound of $n=N$ for my summation rather than $n=N+1$
AI: Note that$$\sum_{n=0}^\infty(-1)^nb_n=b_0-b_1+b_2-b_3+\cdots+(-1)^{N-1}b_{N-1}+(-1)^Nb_N+(-1)^{N+1}b_{N+1}+\cdots,$$whereas$$\sum_{n=0}^N(-1)^nb_n=b_0-b_1+b_2-b_3+\cdots+(-1)^{N-1}b_{N-1}+(-1)^Nb_N.$$So, if you subtract that second sum from the first one, every term of the form $(-1)^kb_k$ with $k\leqslant N$ will disappear and what remains is$$(-1)^{N+1}b_{N+1}+(-1)^{N+2}b_{N+2}+\cdots=\sum_{n=N+1}^\infty(-1)^nb_n.$$
|
H: Is every closed operator a bijection?
Let $H$ be a $\mathbb R$-Hilbert space and $A$ be a closed linear operator on $H$. Can we conclude that $A$ is an isomorphism between $\mathcal D(A)$ and $H$?
My intuition is that this is clearly wrong, but this seems to be what's been claimed in this book after the proof of Lemma IV.5.3. While they particularly consider the Stokes operator, they claim that the isomorphismness follows by the fact that $A$ is closed.
AI: Any bounded operator whose domain is the whole of $H$ is closed. So the zero operator is a counter-example.
|
H: Let $f$ and $g$ be the functions defined by f$(t)=2t^2$ and $g(t)=t^2+5t$
I have solved the following:
$f′(t)=4x$
$g′(t)=2x+5$
To solve the next step:
Let $p(t)=2t^2(t^2+5t)$ and observe that $p(t)=f(t)⋅g(t)$. Rewrite the formula for p by distributing the $2t^2$ term. Then, compute p′(t) using the sum and constant multiple rules.
So, I've done the following, using the sum rule:
$2t^2(t^2+5t)$
=$2t^4+10t^3$
=$\frac{d}{dx}\left(2x^4\right)+\frac{d}{dx}\left(10x^3\right)$
=$8x^3+30x^2$
Would this be the right process? Or, would the sum and constant multiple rules be used together?
AI: While your process is right, you need to clear up which variable you are using. Considering the function is in terms of $t$, you would have to differentiate in terms of $t$. Therefore, the derivatives are $f'(t)=4t$ and $g'(t)=2t+5$. Then, distributing the $2t^2$ in the $p(t)$ function, you have correctly found the derivative with the rules:
$$\frac{d}{dx}[f(x)+g(x)]=\frac{d}{dx}f(x)+\frac{d}{dx}g(x)$$
$$\frac{d}{dx}c\times{f(x)}=c\times\frac{d}{dx}f(x), c\in\mathbb{R}$$
$$\frac{d}{dx}x^n=nx^{n-1}$$
So in your third line, you take out the coefficient with the second rule, then differentiate with the third rule. You do this separately to both terms before using the first rule to add up the derivatives. Note that these rules only work if $f(x)$ and $g(x)$ are differentiable on $[a,b]$ (which they are in your case), and the variable here is $t$ instead of $x$. Also, at the top, you need to write $\frac{d}{dt}[2t^4+10t^3]$ instead of $2t^4+10t^3$. Your final derivative should therefore be:
$$\frac{d}{dt}[2t^4+10t^3]$$
$$=\frac{d}{dt}2t^4+\frac{d}{dt}10t^3$$
$$=2\frac{d}{dt}t^4+10\frac{d}{dt}t^3$$
$$=2(4t^3)+10(3t^2)$$
$$=8t^3+30t^2$$
Hope this helps!
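A quick symbolic check (assuming sympy) of the derivative computed above:
```python
import sympy as sp

t = sp.symbols('t')
print(sp.expand(sp.diff(2*t**2 * (t**2 + 5*t), t)))  # 8*t**3 + 30*t**2
```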
|
H: A basic question on Probability in quantitative aptitude
I am trying some aptitude questions and this question asks for use of probability . I am not able to find the right answer.
A cupboard is filled with a large number of balls of 6 different
colours. You already have one ball of each colour. If you are
blindfolded, how many balls do you need to draw to be sure of having
3 colour-matched pairs of balls?
A. 3
B. 4
C. 5
D. 6
I think, since there is a large number of balls of each colour, each draw has probability $\frac{1}{6}$ of giving any particular colour, and hence $12$ draws must be needed.
But I am wrong.
Answer is
A .
Can anyone tell what mistake I am making?
AI: I'd approach it in the following way: the probability that the first sampled ball is of a 'new' colour is obviously 1, whatever the colour, and after that first sample you have one match. The probability of sampling a ball of a different colour is $\frac{5}{6}$, so the mean number of trials needed to get one of those colours is $\frac{6}{5}$. Now you have 4 'unsampled' colours left. The probability of getting any of them is obviously $\frac{4}{6}$, so the mean number of samples is $\frac{6}{4}$. In total:
$$
ET = 1 + \frac{6}{5} + \frac{6}{4} = 3\frac{7}{10}
$$
I believe this is where the stated solution (3) comes from.
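A simulation sketch of the expectation computed above:
```python
import random

def draws_until_three_colours():
    seen, n = set(), 0
    while len(seen) < 3:  # each new colour pairs with a ball you already own
        seen.add(random.randrange(6))
        n += 1
    return n

N = 100000
print(sum(draws_until_three_colours() for _ in range(N)) / N)  # ~3.7
```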
|
H: A zero divisor which is not an element of a minimal prime ideal
I am looking for a ring $R$ in which there is a zero divisor that is not an element of any minimal prime ideal.
In the rings I have checked, I couldn't find such an element...
AI: Let $K$ be a field, and $R=K[X,Y]/(X^2, XY)$. This ring has a single minimal prime ideal, generated by the congruence class of $X$.
However, in $R$, (the class of) $X+Y$ is annihilated by $X$, yet $X+Y\notin (X)$.
|
H: Solve the equation $\operatorname{arcsinh}(x)=\operatorname{arcsech}(x)$ analytically
I am trying to obtain an analytical solution of the equation.
$$\operatorname{arcsinh}(x) = \operatorname{arcsech}(x)$$
Equating the logarithmic definitions leads to the rather unwieldy equation
$$x^4+x^3\sqrt{x^2+1} +x^2 -1 -\sqrt{1-x^2}=0.$$
Needless to say, I am struggling to obtain an expression for $x$! Can anyone offer a solution?
AI: $$\log(x+\sqrt{x^2+1})=\log\left(\frac1x+\sqrt{\frac1{x^2}-1}\right)$$
is equivalent to
$$x^2-1=\sqrt{1-x^2}-x\sqrt{x^2+1}.$$
Then with squaring,
$$x^4-2x^2+1=1-x^2-2x\sqrt{1-x^4}+x^2(x^2+1)$$
simplifies to
$$x=0\lor x=\sqrt{1-x^4}.$$
The last equation reduces to the biquadratic $x^4+x^2-1=0$, and $x=0$ is excluded (it is not in the domain of $\operatorname{arcsech}$). Hence
$$x=\dfrac1{\sqrt\phi}.$$
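Numeric confirmation (math.asinh is in the standard library; arcsech is written out from its logarithmic form):
```python
import math

phi = (1 + math.sqrt(5)) / 2
x = 1 / math.sqrt(phi)
arcsech = lambda x: math.log(1/x + math.sqrt(1/x**2 - 1))
print(math.asinh(x), arcsech(x))  # both ~0.7218
```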
|
H: Question on vector cross product.
Show that $\big((\mathbf{u} \times (\mathbf{u}\times \mathbf{v})) \times \mathbf{v}\big) \times (\mathbf{u} \times \mathbf{v})=0$.
From WolframAlpha, it gives zero but there are no details. How can one prove this?
AI: Using the identity
$$\textbf{A}\times (\textbf{B}\times \textbf{C})= (\textbf{A}\cdot\textbf{C})\textbf{B}-(\textbf{A}\cdot\textbf{B})\textbf{C}$$
we get,
$\begin{align}
((\textbf{u}\times(\textbf{u}\times \textbf{v}))\times \textbf{v})\times(\textbf{u}\times \textbf{v})
&=(((\textbf{u}\cdot\textbf{v})\textbf{u}-(\textbf{u}\cdot\textbf{u})\textbf{v})\times\textbf{v})\times(\textbf{u}\times \textbf{v})\\
&=((\textbf{u}\cdot\textbf{v})(\textbf{u}\times\textbf{v})-(\textbf{u}\cdot\textbf{u})(\textbf{v}\times\textbf{v}))\times(\textbf{u}\times \textbf{v})\\
&=(\textbf{u}\cdot\textbf{v})\,(\textbf{u}\times\textbf{v})\times(\textbf{u}\times \textbf{v})=0
\end{align}$
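A numeric spot-check with random vectors (assuming numpy):
```python
import numpy as np

u, v = np.random.rand(3), np.random.rand(3)
w = np.cross(np.cross(np.cross(u, np.cross(u, v)), v), np.cross(u, v))
print(np.allclose(w, 0))  # True
```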
|
H: What can we say about a group $G$ if for all $a,b,c,d$ in the group, $ab=cd\implies ba=dc$?
Let $G$ be a group. Suppose that, for elements $a, b, c, d$ of $G$, we have $ab = cd \implies ba = dc$. Can we derive anything from this, or are there any conditions that result in such a property? I.e. what does it imply, and what might imply it?
For example, this seems to be true of the dihedral group. And obviously it is true for any abelian group.
Note: The phrase "commute via equality" is just something I used to describe this and is probably not a standard description.
AI: This is equivalent to the group being abelian; pick $d = e$ (identity).
Then for arbitrary elements $a, b \in G$, denote the product $ab = c$.
Then by the assumption $ba = c$ too, so that $ab = ba$.
|
H: Does this formula defines a definable subset?
I am studying definable subsets in Introduction to O-minimal geometry, M. Coste, and I have recently come across so-called first-order formulas: if $\phi$ is a first order formula, the set $ \{ x \in R^n : \phi(x) \}$ is definable in $R^n$, where $R$ is a real closed field. I have the following doubt. We know that if $A$ is definable, then $x \in A$ is a first order formula. But what happens in the situation when $(x,y)$ is a tuple, $A_y$ is a definable subset for every $y$, and we have the subset:
$$ \{ (x,y) \in R^2 : x \in A_y \}.$$
Is this subset definable? Or, analogously, is $\phi (x,y) \equiv x \in A_y$ a first-order formula? I know the union over $y_0 \in R$ of $\{ (x,y) \in R \times \{ y_0 \} : x \in A_{y_0} \}$ need not be definable, since an arbitrary union of definable sets need not be definable, but I do not know how to even begin to prove my question... Thanks in advance.
AI: In general, the set $X :=\{ (x,y) \in \mathbb{R}^2 : x \in A_y \}$ won't be definable unless the $A_y$ are defined uniformly.
e.g consider the following (non-uniform) family :
$$A_y = \left\{ \begin{array}{cc} \{ 1 \} & \textrm{if } \lfloor y \rfloor \textrm{ is even} \\ \{ -1 \} & \textrm{if } \lfloor y \rfloor \textrm{ is odd}. \end{array} \right.$$
Clearly, every $A_y$ is definable, but $X$ isn't definable for if it were, then the set
$\{ y \in \mathbb{R} \ \big| \lfloor y \rfloor \textrm{ is even}\}$ would be definable, a contradiction to O-minimality since this latter set can't be written as a finite union of intervals.
Uniform family: the family $(A_y)$ is uniformly definable if there exists a formula $\varphi(x, y)$ such that for all $y$, $A_y = \{x \in \mathbb{R} \ \big| \ \models \varphi(x, y) \}$, i.e. $x\in A_y \equiv \varphi(x, y)$.
|
H: A confusion about complex measure
The 3.13 proposition from the book "Real Analysis" by Folland:
Let $\nu$ be a complex measure on $(X,\mathcal{M})$.
a.$\left| \nu \left( E \right) \right|\le \left| \nu \right|\left( E \right) $ for all $E\in\mathcal{M}$.
b. $\nu \ll \left| \nu \right|$, and $d\nu/d\lvert \nu\rvert$ has absolute value $1$ $\lvert \nu \rvert$-a.e.
c.$L^1(\nu)=L^1(\lvert \nu \rvert)$, and if $f\in L^1(\nu)$, then $\lvert \int f d\nu\rvert \leq \int \lvert f\rvert d\lvert \nu\rvert$.
The following is the proof of this book:
Suppose $d\nu=f\,d\mu$ as in the definition of $\lvert\nu\rvert$. Then
$$
\left| \nu \left( E \right) \right|=\left| \int_E{f}d\mu \right|\le \int_E{\left| f \right|d\mu}=\left| \nu \right|\left( E \right)
$$
This proves (a) and shows that $\nu \ll \lvert \nu\rvert$. If $g=d\nu/d\lvert \nu\rvert$, then we have$$
fd\mu =d\nu =gd\left| \nu \right|=g\left| f \right|d\mu
$$
so $g\lvert f\rvert =f$ $\mu$-a.e., and hence $\lvert \nu\rvert$-a.e. But clearly $\lvert f\rvert>0$ $\lvert \nu\rvert$-a.e., whence $\lvert g\rvert=1$ $\lvert \nu\rvert$-a.e.
Part (c) is left to the reader.
My question is: why is it clear that $\lvert f\rvert>0$ $\lvert \nu\rvert$-a.e.?
AI: Let $A=\{x: |f(x)|=0\}=\{x: f(x)=0\}$. For every measurable set $B \subseteq A$ we have $\nu (B)=\int_B fd\mu=0$. This implies that $|\nu|(A)=0$.
|
H: How do I prove that eigenspaces and root subspaces are invariant for A?
So the eigenspace is $Ker(A-λI)$ where $λ$ is an eigenvalue of A and the root subspace is $Ker(A-λI)^r$ where $r$ is the exponent of $(x-λ)$ in the minimal polynomial for $A$. My professor stated that both of these are invariant for $A$, but didn't provide a proof.
Them being invariant means that if $u \in Ker(A-λI)$ then $Au \in Ker(A-λI)$. The same applies for $Ker(A-λI)^r$. So far I've gotten this: $$(A-λI)v=0,$$ $$A(A-λI)v=0,$$ but to move on I'd have to prove that $A$ commutes with $(A-λI)$. Is that really true?
AI: Of course:
$$
A(A-\lambda I)=A^2-\lambda A
$$
and
$$
(A-\lambda I)A=A^2-\lambda A
$$
so the two products agree; the same computation shows that $A$ also commutes with $(A-\lambda I)^r$.
|
H: when $n=12$ and $k=7$, can we generate a sub-group?
$\def\llg{\langle} \def\rrg{\rangle}$
There is a theorem that says if $|\llg a\rrg|=n$, then for each positive divisor $k$ of $n$, the group $\llg a\rrg$ has exactly one subgroup of order $k$—namely, $\llg a^{n/k}\rrg$.
Assume $\llg a\rrg$ is a cyclic group of order $12$, i.e. $|\llg a\rrg|=12$.
Take $k=7$,
$$
\begin{align}
\llg a^7\rrg
&=
\{a^7,a^{14},a^{21},a^{28},a^{35},a^{42},a^{49},a^{56},a^{63},a^{70},a^{77},a^{84}\}\\
&=\{a^7,a^2,a^9,a^4,a^{11},a^6,a,a^8,a^3,a^{10},a^5,e\}\\
&=\llg a\rrg
\end{align}
$$
But $7$ does not divide $12$.
AI: $7$ does not divide $12$, which is why $\langle a \rangle$ has no subgroup of order $7$.
You have shown that $\langle a^7\rangle=\langle a \rangle$, which clearly shows that the order of the group $\langle a^7\rangle$ is not $7$, but is in fact $12$, so this result is entirely consistent with the theorem.
You also made an error. Taking $k=7$ and $n=12$, the expression $$\langle a^{\frac nk}\rangle$$ does not become the expression $$\langle a^7\rangle,$$ but rather the expression
$$\left\langle a^{\frac{12}{7}}\right\rangle$$
which, in the context of your question, is ill defined, since you only define $a^x$ for integer values of $x$ (and, because $7$ is not a divisor of $12$, $\frac{12}{7}$ is not an integer).
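A quick check in additive notation, identifying $a^k$ with $k \bmod 12$:
```python
print(sorted({(7 * i) % 12 for i in range(1, 13)}))  # all 12 residues: <a^7> = <a>
```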
|
H: Exercise 3.4.7 from Tao Analysis I (Set of all partial functions)
This is an exercise you can find here, but I recall the context:
Let $X, Y$ be sets. Define a partial function from $X$ to $Y$ to be
any function $f: X' \rightarrow Y'$ with $X' \subseteq X$ and $Y'\subseteq Y$.
Show that the collection of all partial functions from
$X$ to $Y$ is itself a set.
Tao's hint is to use the following four results from set theory exposed in his textbook:
Lemma 3.4.9. Let $X$ be a set. Then there exists a set $\{Y \, : \, Y \text{ is a subset of } X\}$. It is denoted $2^X$.
Axiom 3.10. Power set axiom: let $X$ and $Y$ be sets. Then there exists a set, denoted $Y^X$, which consists of all the functions from $X$ to $Y$.
Axiom 3.6. Replacement axiom.
Axiom 3.11. Union axiom: let $A$ be a set, whose all elements are themselves sets. Then there exists a set $\bigcup A$ whose elements are those objects which are elements of elements of $A$, i.e., $x \in \bigcup A$ iff $x \in S$ for some $S \in A$.
A consequence: if one has some set $I$, and for each element $\alpha \in I$ we have one set $A_\alpha$, then we can form the union set $\bigcup_{\alpha \in I} A_\alpha$ by defining: $\bigcup_{\alpha \in I} A_\alpha := \bigcup \{ A_\alpha \, | \, \alpha \in I\}$.
There are some very complete solutions out there, e.g. here. My sketch of proof is much shorter, thus I think that there are many errors in it. Here it is:
Let $X' \subseteq X$ and $Y' \subseteq Y$. If both $X'$ and $Y'$ are fixed, then per the power set axiom (3.10), there exists a set $Y'^{X'}$ which consists of all the functions from $X'$ to $Y'$.
By lemma 3.4.9, there exist a set $2^X$ which consists of all the subsets of $X$, and a set $2^Y$ which consists of all the subsets of $Y$.
Now we fix an element $X'$ of $2^X$. Let $Y'$ be an element of the set $2^Y$, $f$ a function, and $P$ the property "$P(Y', f)$: $f$ is a function from $X'$ to $Y'$". Per the replacement axiom, there exists a set $\{f \, | \, P(Y', f) \text{ is true for some } Y' \in 2^Y\} = \{f \, | \, f: X' \rightarrow Y' \text{ for some } Y' \in 2^Y\}$. This set is related to a fixed subset $X' \subseteq X$, so let's denote it $S_{X'} = \{f \, | \, f: X' \rightarrow Y' \text{ for some } Y' \in 2^Y\}$.
Now we apply the union set (3.11), especially in its second formulation. If we denote $I = 2^X$, then for each element $X' \in I$ we do have one set $S_{X'}$, which is defined above. Thus, there exists a set $\bigcup_{X' \in 2^X} S_{X'} := \bigcup \{S_{X'} \, | \, X' \in 2^X\}$. And, for every function $f$, we have $f \in \bigcup \{S_{X'} \, | \, X' \in 2^X\}$ iff there exists $X' \in 2^X$ such that $f \in S_{X'}$, i.e. if there exists $X' \subset X$ and $Y' \subset Y$ such that $f: X' \rightarrow Y'$.
Consequently, we have proved that there exists a set which consists of the collection of all partial functions from $X$ to $Y$.
What makes this proof incomplete and/or incorrect?
Thanks!
AI: The key observation is that this set is equal to
$$\bigcup \{Y'^{X'}: (X', Y') \in 2^X \times 2^Y\}$$
so you can use union axiom if you have shown that
$$\{Y'^{X'}: (X',Y')\in 2^X \times 2^Y\}$$
is a set. That this is actually a set follows by replacement applied to the set $2^X \times 2^Y$, in which we use the power set axiom and the pairing axiom.
|
H: Coin toss probability - With two variables
My question is -
If I toss a fair coin $3$ times,
$X$ - The number of heads in the first two tosses.
$Y$ - The number of heads in the last two tosses.
$Z$ is $Z = X + Y$. What is $V(Z)$?
So I'm wondering whether I should just take the probability of getting two heads in the first two tosses, which I computed as $1/8$, and $1/8$ for two heads in the last two tosses,
and then calculate $V(Z) = E(Z) - E(Z^2)$.
AI: $Z=T_1+2T_2+T_3$, where $T_i= 1$ if the $i$th toss is heads and $0$ otherwise.
Assuming $V$ refers to the variance, the rest is straightforward: you should get $1.5$ quite easily.
In the case that $V$ is not the variance but really the function you quoted, $E(Z)=2$ and $E(Z^2)=E(T_1^2+4T_2^2+T_3^2+4T_1T_2+2T_1T_3+4T_2T_3)=5.5$, so $V(Z)=E(Z)-E(Z^2)=-3.5$. (Note the variance is $E(Z^2)-E(Z)^2=5.5-4=1.5$.)
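A simulation sketch of the variance:
```python
import random

def z():
    t = [random.randint(0, 1) for _ in range(3)]  # three fair tosses
    return (t[0] + t[1]) + (t[1] + t[2])          # Z = X + Y

vals = [z() for _ in range(200000)]
m = sum(vals) / len(vals)
print(sum((v - m)**2 for v in vals) / len(vals))  # ~1.5
```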
|
H: Probability of winning a lot
So there's this game that I'm analysing, in which out of $45$ numbered balls ( numbered from $1$ to $45$ ), I choose $8$ balls.
$6$ of the $45$ balls are drawn at the end of the round by the organiser, and whoever has the $6$ winning numbers among his $8$ chosen balls wins the whole round.
My question is: what is the probability of drawing those $6$ balls (the $6$ winning numbers), given that I drew $8$ balls? So in total we'll have $6$ correct matching balls and $2$ wrong. Order doesn't matter here.
My approach was that we have $6\times5\times4\times3\times2$ ways of drawing those $6$ balls, and $39 C 2$ ways to draw the $2$ wrong balls, over the all possible ways of drawing the $8$ balls out of the $45$ balls.
so my answer is $\frac{6\times5\times4\times3\times2 (39 C 2)}{45 C 8}$
Am I correct doing so ?
AI: Your attempt is close to correct. The problem is that you are looking at all orders of choosing the six "correct" balls. Instead, there is only one way to choose all six of the six "correct" balls:
$$\dfrac{({_6C_6})({_{39}C_2})}{ {_{45}C_8} }$$
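The numeric value of the corrected probability:
```python
from math import comb

print(comb(6, 6) * comb(39, 2) / comb(45, 8))  # ~3.44e-06
```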
|
H: Proving that $f_n(\alpha_{n+1}) > 0$
We have $ f_{n}(x) = 2x - 2 + \frac{\ln(x^2+1)}{n}$ and $ f_{n}(\alpha_{n}) = 0 $, where $ 0 < \alpha_{n} < 1$.
We are asked to prove that $f_{n}(\alpha_{n+1}) > 0$,
and to prove that $ (\alpha_{n}) $ is a geometric sequence.
AI: $f_n$ is differentiable on $[0,1]$ and
$$
f_n'(x) = 2 + \frac{2x}{n(x^2+1)} > 0
$$
so $f_n$ is (strictly) increasing on $[0,1]$ and since $f_n(0) = -2 < 0$ and
$f_n(1) = \frac{\ln 2}{n} > 0$ there exists a unique $0 < \alpha_n < 1$ such that
$f_n(\alpha_n) = 0$. (In fact, $f_n$ is a strictly increasing bijection from $[0,1]$ to
$[-2,\frac{\ln 2}{n}]$ and thus its inverse $g_n$ is a strictly increasing bijection from $[-2,\frac{\ln 2}{n}]$ to $[0,1]$.)
Now for all $x$ in $]0,1]$ you have $\ln(x^2 + 1) > 0$, so
$$
f_{n+1}(x) < f_n(x)
$$
so in particular
$$
0 = f_{n+1}(\alpha_{n+1}) < f_n(\alpha_{n+1}).
$$
|
H: Isomorphism between a principal bundle and a pullback bundle.
I have seen in many texts on the classification of principal bundles that, given two homotopy equivalent spaces $X$ and $Y$, the equivalence being a map $f: Y \rightarrow X$, and given a group $G$, if $k_{G}(X), k_{G}(Y)$ denote the sets of isomorphism classes of principal $G$-bundles over $X$ and $Y$ respectively, then there is a bijection between them. This means that, given a principal $G$-bundle equivalence class over $X$, say $[(P, X, G, \pi)]$, there is exactly one class corresponding to it, say $[(E, Y, G, \rho)]$, and for $(P, X, G, \pi)$ we would have the corresponding $(f^*P, Y, G, \sigma)$.
My question is whether the principal G-bundles $(P, X, G, \pi)$ and $(f^*P, Y, G, \sigma)$ are isomorphic as the principal G-bundles.
Any help is appreciated.
AI: If $X$ is homotopy equivalent to a paracompact space, then $k_G(X) = [X, BG]$ where $BG$ is the classifying space of the topological group $G$ and the brackets denote the collection of homotopy classes of maps $X \to BG$. If $f : Y \to X$ is a continuous map, then there is an induced map $[X, BG] \to [Y, BG]$ given by $\alpha \mapsto \alpha\circ f$. If $Y$ is homotopy equivalent to a paracompact space, we can view this as a map $k_G(X) \to k_G(Y)$ which, by construction of the identification $k_G(X) = [X, BG]$, is precisely $f^*$, i.e. the map $k_G(X) \to k_G(Y)$ is given by $[P] \mapsto [f^*P]$. Now, if $f$ is a homotopy equivalence, then $f^*$ is a bijection (if $g$ is a homotopy inverse of $f$, then $g^*$ is an inverse of $f^*$).
In conclusion, if $X$ and $Y$ are homotopy equivalent to paracompact spaces, and $f : Y \to X$ is a homotopy equivalence, then $f^* : k_G(X) \to k_G(Y)$ given by $[P] \mapsto [f^*P]$ is a bijection.
|
H: How AM-GM is applied here
I don't understand how AM-GM is applied in the last part of the picture. This is on the $16^{th}$ page of the book, in chapter $1$ about AM-GM.
AI: We need to prove that:
$$\frac{a+c}{b+c}+\frac{a+c}{a+d}\geq\frac{4(a+c)}{a+b+c+d}$$ or
$$(a+b+c+d)\left(\frac{1}{b+c}+\frac{1}{a+d}\right)\geq4,$$ which is true by AM-GM:
$$(a+b+c+d)\left(\frac{1}{b+c}+\frac{1}{a+d}\right)\geq2\sqrt{(b+c)(a+d)}\cdot\frac{2}{\sqrt{(b+c)(a+d)}}=4.$$
The second inequality we can prove by the same way.
I think a proof by C-S and after this by AM-GM is much better.
|
H: A ratio of red beads to black is $r:b=4:3$. Why doesn't this translate to $4r=3b$ instead of $3r=4b$?
Suppose that for every 4 red beads ($r$), there are 3 black beads ($b$)
The ratio of red beads to black beads is $r:b=4:3$
But then why is this ratio not equal to $4r=3b$ and instead $3r=4b$?
I have that 4 red beads ($4r$) is the same as 3 black beads ($3b$) after all....
AI: I can think of two ways to think about this:
1. First think about writing the relation as a fraction:
For every $4$ red beads, there are $3$ black beads. Then we have $$\frac{r}{b} = \frac{4}{3}$$ You can then cross multiply to get $3r = 4b$.
2. Lowest Common Multiple
"I have that $4$ red beads ($4r$) is the same as $3$ black beads ($3b$) after all..."
This suggests you don't really get the relationship intuitively, so let me try to explain by thinking about lowest common multiples
Consider the problem where red beads are sold in packs of $4$ and black beads are sold in packs of $3$.
How many packs of each colour do you need to buy to have the same number of red and black beads?
This involves finding the lowest common multiple of $4$ and $3$ which is $12$.
You can therefore buy $3$ packs of red beads and $4$ packs of black beads to give you the same amount.
The ratio is exactly the same. You have $4$ red beads for every $3$ black beads.
So if you have $4$ red beads, there are $3$ black beads. If you have $8$ red beads, there are $6$ black beads. If you have $12$ red beads, there are $9$ black beads. If you have $16$ red beads, there are $12$ black beads.
So you can immediately see that $3 \times 4$ red beads = $4 \times 3$ black beads.
Generalising, it's just $3 \times$ number of red beads = $4 \times$ number of black beads, so $3r = 4b$
Hope that helps.
|
H: for what values of $a,b$, $\int_{-1}^{1}((x^{2}+3 x+1)-(a x+b))^{2} \sqrt{1-x^{2}} d x$ is minimal?
Let $V=\mathbb{R}_{\leq 3}[X]$. I need to find $a,b \in \mathbb{R}$ such that the expression below is minimal.
$$\int_{-1}^{1}\left(\left(x^{2}+3 x+1\right)-(a x+b)\right)^{2} \sqrt{1-x^{2}} d x$$
I got a hint to show that $$\langle f(x), g(x)\rangle=\int_{-1}^{1} f(x) g(x) \sqrt{1-x^{2}} d x$$ is an inner product, and so I did, but I am not sure how to continue from here.
AI: Idea:
You can work in $V' = \Bbb R_{\le 2}[x]$ for the purpose of this question. In the following, we fix the inner product as given in the hint.
Take the (ordered) basis $B = (1, x, x^2)$ of $V'$.
Using Gram Schmidt (GS), obtain an orthonormal (ordered) basis $B' = (p_0(x), p_1(x), p_2(x))$ of $V'$.
Since GS preserves the span of the first $k$ vectors, we see that $\{p_0(x), p_1(x)\}$ is a basis of $\Bbb R_{\le 1}[x]$.
Express $x^2 + 3x + 1$ as
$$\alpha p_0(x) + \beta p_1(x) + \gamma p_2(x)$$
for appropriate choice of $\alpha, \beta, \gamma \in \Bbb R$.
(You can find these by taking the inner product of $x^2 + 3x +1$ with the appropriate basis vectors. You don't even need to find $\gamma$.)
The desired polynomial $ax + b$ will be given as
$$\alpha p_0(x) + \beta p_1(x).$$
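A computation sketch (assuming sympy): since $1$ and $x$ are already orthogonal under this even weight, the projection coefficients decouple.
```python
import sympy as sp

x = sp.symbols('x')
ip = lambda f, g: sp.integrate(f * g * sp.sqrt(1 - x**2), (x, -1, 1))
p = x**2 + 3*x + 1
a = ip(p, x) / ip(x, x)  # coefficient of x
b = ip(p, 1) / ip(1, 1)  # constant term
print(a, b)              # 3 and 5/4
```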
|
H: If a polynomial is irreducible and nonconstant over a finite field, it has a multiple root iff it is in the variable $x^p$
I have a very basic field theory question. I must be mixing up a theorem here, but I am unsure which.
My goal here is to determine if there exists an inseparable, irreducible polynomial in a finite field.
Milne's book Fields and Galois Theory states that a nonconstant, irreducible polynomial $f \in F \left[ x \right]$ has a multiple root if and only if $F$ has nonzero characteristic $p$ and $f$ may be written exclusively in the variable $x^p$.
More clearly, this second condition means there is some $g \in F \left[ x \right]$ so $f \left( x \right) = g \left( x^p \right)$.
This leads me to believe that $ f \left( x \right) = x^4 + x^2 +1$ is irreducible over the finite field of order 2, $F_2$.
Clearly it is nonconstant. $f \left( 0 \right) = f \left( 1 \right) = 1$, so it is irreducible over $F_2$. Lastly, if we define $g \left( x \right) = x^2 + x + 1 \in F_2 \left[ x \right]$, then $f\left( x \right) = g \left( x^2 \right)$. As the characteristic of $F_2$ is $2$, this seems to suggest the theorem holds, and so $f$ is inseparable over $F_2$.
However, later on Milne defines a perfect field as one over which all irreducible polynomial are separable. He says that a field with characteristic $p$ is perfect is every element of the field is a $p$-th power.
Now in $F_2$:
$0 = 0^2$ and $1^2 = 1$, so it seems like every element is a $2$-nd power, so $F_2$ should be perfect, and hence every irreducible polynomial is separable.
These two results seem to clash. Maybe the theorems in the book are poorly worded, or maybe I just keep misreading them.
Could someone please clarify what I have got wrong here? Also any comment about my overall goal, about whether there exist inseparable, irreducible polynomials over a finite field, would be appreciated too.
AI: Indeed, $x^4+x^2+1$ is not irreducible over $\Bbb F_2$: having no roots in $\Bbb F_2$ only rules out linear factors, and this quartic splits into two equal quadratic factors, since
$$
x^4+x^2+1=(x^2 + x + 1)^2.
$$
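As a quick sanity check (my addition), sympy can factor directly over $\Bbb F_2$:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.factor(x**4 + x**2 + 1, modulus=2))   # (x**2 + x + 1)**2
```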
|
H: What is the sum of the $n^2$ terms obtained this way?
We multiply each entry of an $n × n $ matrix A by the cofactor belonging to it. What is the sum of the
$n^2$ terms obtained this way?
I don't understand how a cofactor can belong to an entry. Does it mean one cofactor is the same for each entry of the corresponding row or column (according to the expansion theorem)?
AI: By definition the determinant of a matrix is given by the sum of the entries of any row or column multiplied by their cofactors. So the result you want is just $n\cdot\det{(A)}$ by calculating the determinant over every row or every column and adding the results.
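Here is a quick numerical check of this identity (a sketch I added; `cofactor` is a hypothetical helper built from minors):

```python
import numpy as np

def cofactor(A, i, j):
    # delete row i and column j, take the determinant, attach the sign
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

n = 4
A = np.random.rand(n, n)
total = sum(A[i, j] * cofactor(A, i, j) for i in range(n) for j in range(n))
print(np.isclose(total, n * np.linalg.det(A)))   # True
```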
|
H: Compound Poisson distribution and infinitely divisible probability generating function
Let $(X_j)$ be a sequence of r.v. with common distributions $(f_j)$, $N$ be a r.v. having a Poisson distribution with mean $\lambda t$. Let $S_N = X_1 + \cdots + X_N$. Then, $S_N$ has the compound Poisson distribution with a generating function $e^{-\lambda t + \lambda tf(s)}$, where $f(s) = \sum f_j s^j$ (a generating function of $(f_j)$). Let this generating function be $h_t(s) = e^{-\lambda t + \lambda tf(s)}$.
(Feller Vol.1, P.289-290) A probability generating function $h$ is called infinitely divisible if for each positive integer $n$ the $n$th root $\sqrt[n]{h}$ is again a probability generating function.
It follows that if $h_{t+r}(s) = h_t(s) h_r(s)$ for all positive $t$ and $r$, then $\sqrt[n]{h_t(s)} = h_{t/n}(s)$, and the right side is a probability generating function. Therefore, $h_t$ is infinitely divisible.
In addition, the author says that a family of probability generating functions satisfying $h_{t+r}(s) = h_t(s) h_r(s)$ must follow the compound Poisson form (i.e. $h_t(s) = e^{-\lambda t + \lambda tf(s)}$).
Then, I have the following theorem, which basically shows that the converse is also true. I am struggling with the proof of this theorem.
(2.2) is $h_t(s) = e^{-\lambda t + \lambda tf(s)}$.
I don't particularly understand the statement highlighted with the purple line. Maybe I can express $\sqrt[n]{h(s)}$ as $- \sum_{k=0}^\infty {1/n \choose k} (1-h(s))^k$. This implies that $\sqrt[n]{h(0)} =- \sum_{k=0}^\infty {1/n \choose k} (1-h_0)^k$. But, I don't understand why $\sqrt[n]{h}$ vanishes. I am also not sure of how the author derives this convergence $\sqrt[n]{h(s)} \to 1$.
I would greatly appreciate if you elaborate the proof. Let me know if you need more context.
AI: I will write $p_{n,k}$ for the coefficient of $s^k$ in $\sqrt[n]{h(s)}$, that is
$$\sqrt[n]{h(s)} = \sum_{k=0}^\infty p_{n,k} s^k.$$
Suppose that $h_0 = 0$. Then, since $h_0 = \left(\sqrt[n]{h(0)}\right)^n = p_{n,0}^n$, we would have $p_{n,0} = 0$. This is what Feller means by "the absolute term in the power series for $\sqrt[n]{h}$ would vanish". Then
$$\sqrt[n]{h(s)} = p_{n,1}s + \sum_{k=2}^\infty p_{n,k}s^k,$$
and this would imply that
$$h(s) = \left( p_{n,1}s + \sum_{k=2}^\infty p_{n,k}s^k\right)^n = p_{n,1}^n s^n + \cdots.$$
In other words, $h_0 = h_1 = \ldots = h_{n-1} = 0$. Since $n$ is arbitrary, we would then have $h_k = 0$ for every $k \geq 0$ which is obviously impossible as $h$ is a PGF. Thus $h_0 >0$.
From this, since $\sqrt[n]{h}$ is nondecreasing, we can deduce that for every $s \in [0,1]$
$$\sqrt[n]{h_0} \leq \sqrt[n]{h(s)} \leq \sqrt[n]{h(1)} = 1.$$
Letting $n \to \infty$, this yields $\sqrt[n]{h(s)} \to 1$.
|
H: A question on Triangle Inequality in $\mathbb{R}^n$
I'm reading a textbook on Topology. We know that $(\rho,\mathbb{R}^n)$ is a metric space, where
$$\rho(x,y)=\sqrt{\sum_{i=1}^n(x_i-y_i)^2}$$for any $x=(x_1,x_2,\ldots,x_n),y=(y_1,y_2,\ldots,y_n)\in\mathbb{R}^n$. When proving that $\rho(x,z)\le \rho(x,y)+\rho(y,z)$, the author uses Schwarz Inequality.
I can understand the method, but I wonder if we can do it directly. We know that three non-collinear points can determine a plane. If those three points $x,y,z$ are on a single line, then of course we can apply the Triangle Inequality on $\mathbb{R}$; if they are not, then they are on a same plane, still we can apply the Triangle Inequality. Isn't it just a question on $\mathbb{R}^2$ essentially?
Maybe I'm missing something, but I can't find it myself. Is my reasoning correct? Thank you!
AI: Good point! I can’t explain why the author would go through a full algebraic proof rather than using what you noticed to simplify the problem considerably. However, I can give some advice about formalizing your idea into a full proof of the triangle inequality in $\mathbb R^n$.
Your observation that
Any three points $\vec x, \vec y, \vec z\in\mathbb R^n$ either lie on the same line or the same plane.
is equivalent to the statement
For any $\vec x, \vec y, \vec z\in\mathbb R^n$, there exists an isometry (distance-preserving transformation) mapping all but the first three coordinates of $\vec x, \vec y,$ and $\vec z$ to zero.
This statement shouldn’t be difficult to prove using a bit of linear algebra to show that a certain system of equations has at least one solution. Once you’ve done that, the triangle inequality in $\mathbb R^n$ follows, because any triplet of points $\vec x, \vec y, \vec z$ can be reduced by isometry to points of the form $(a,b,c,0,...,0)$, at which point the triangle inequality on $\mathbb R$ or $\mathbb R^2$ can be applied.
|
H: difference of operators defined through 2 different functional calculi.
Let's say we have a fixed function $f$ and two normal/self-adjoint operators $\mathcal{L}, \tilde{\mathcal{L}}$ with discrete spectrum. Are there conditions (on $f$ or on the operators, such as Lipschitz continuity of $f$) that ensure:
\begin{equation}
\label{ineq: pertOfLaplCont}
\| f(\mathcal{L}) - f(\tilde{\mathcal{L}}) \|
\leq C \cdot \| \mathcal{L} - \tilde{\mathcal{L}} \|
\end{equation}
Here $f(\mathcal{L}) $ shall describe the functional calculus defined through $\mathcal{L} $ and so on.
Thanks for answers!
AI: This is an interesting question, which has been studied quite a lot by the Russian operator theory school (Birman, Solomyak, Peller, ...).
A function $f\colon I\to \mathbb{C}$ with the property that there exists $C>0$ such that $\|f(S)-f(T)\|\leq C\|S-T\|$ for all normal operators with spectrum contained in $I$ is called operator Lipschitz. It is well-known that the set of operator Lipschitz function (say, on $\mathbb{R}$) is strictly smaller than the set of Lipschitz functions. For example, the absolute value fails to be operator Lipschitz. In fact, every operator Lipschitz function on $\mathbb{R}$ is differentiable.
A sufficient condition for a function $f$ on $\mathbb{R}$ to be operator Lipschitz is that it belongs to the Besov class $B^1_{\infty,1}(\mathbb{R})$, a slightly more elementary one is that the derivative of $f$ is the Fourier transform of a complex Borel measure on $\mathbb{R}$.
In contrast, operator Lipschitz functions on the complex plane (i.e. those for which all normal operators are allowed as arguments) are quite boring - they are all of the form $f(z)=az+b$.
If you want to learn more, I can recommend the article Aleksandrov, Peller. Operator Lipschitz functions, arXiv:1611.01593. There you can find proofs for the results I stated.
|
H: Existence of complementary subspace
Let $E$ be a real vector space. If $E$ has finite dimension, then for any subspace $F\subset E$ there is always some subspace $G\subset E$ such that
$$E = F \oplus G$$
In infinite dimension, I know that the axiom of choice allows to construct such a $G$ for any subspace $F\subset E$. Is it possible to do without the axiom of choice when $F$ (but not $E$) is of finite dimension?
I know it is when $E$ is Hilbert. In that case, any finite-dimensional subspace $F\subset E$ is closed, therefore $F\oplus F^\perp = E$. I am wondering if there are ways to do something like this when $F$ is "nice" (e.g., finite-dimensional) in spaces more general than Hilbert spaces.
AI: No. You can't do it.
It is consistent that for any field $F$ there is a vector space $V$ such that no proper subspace of $V$ has a direct complement. In particular for $\Bbb R$. This is based on the work of Läuchli in
Läuchli, H., Auswahlaxiom in der Algebra, Comment. Math. Helv. 37, 1-18 (1962). ZBL0108.01002.
In which he showed (amongst other things) that it is possible to have a vector space (over a countable field) which is not finitely generated, but every proper subspace is finitely generated. In my masters thesis I "refreshed" the argument to a broader context:
Given any field $F$, it is consistent for any given infinite cardinal $\lambda$, that $\sf DC_{<\lambda}$ holds and there is a vector space over $F$ such that every proper subspace is generated by a set of size $<\lambda$, while the space itself is not generated by any well-orderable set.
Moreover, we can do this without changing the extensional definition of $F$, so in the case of the real numbers when moving from the one universe of set theory to the one witnessing the failure, we can do it in a way that no real numbers are added.
Taking any $\lambda>\aleph_0$ ensures, if so, that $\sf DC$ holds, and therefore countable choice as well. In my Ph.D. thesis I developed a framework for iterating these sort of failures, and in November 2019 I wrote a paper showing that Läuchli's result can be iterated in a very strong way to obtain the result mentioned at the start. The framework is still under work, and I hope to prove the necessary theorems for accommodating the preservation of $\sf DC_{<\lambda}$ soon enough, and obtain the most general result.
Even if the space is a Banach space, there might not be a direct complement. For example, it is consistent with $\sf ZF$ that $\ell^\infty/c_0$ does not have any linear functionals except $0$, continuous or otherwise. In that case, if $v$ is any non-zero vector and $\operatorname{span}(\{v\})$ had a direct complement, the projection would naturally define a linear functional.
The models witnessing this fact are models where analysis "can be developed" which means $\sf ZF+DC$ holds there. The above is a consequence of statements such as "Every set of reals is Lebesgue measurable" or "Every set of reals has the Baire property", both have been shown to be consistent without the axiom of choice (with $\sf ZF+DC$, of course), although the former requires us to assume mild large cardinal axioms are consistent as well (the latter does not).
|
H: How to show that $\lim_n \beta_n|f(\alpha_n x)|=0 \, \, a.e.$
Let $f\in L^1(\mathbb{R})$, $\{\beta_n\}$ a positive sequence and $\{\alpha_n\}$ such that $\sum_n \beta_n/|\alpha_n|<\infty. $ Prove that
$$\lim_{n\to \infty} \beta_n|f(\alpha_n x)|=0 \, \, a.e.$$
AI: Hint: This follows from the a priori stronger fact that $$\int\sum_n \beta_n|f(\alpha_n x)|\,dx<\infty.$$ Indeed, by Tonelli's theorem and the substitution $u=\alpha_n x$, $$\int\sum_n \beta_n|f(\alpha_n x)|\,dx=\sum_n \beta_n\int|f(\alpha_n x)|\,dx=\sum_n \frac{\beta_n}{|\alpha_n|}\,\|f\|_1<\infty,$$ so $\sum_n \beta_n|f(\alpha_n x)|$ is finite for a.e. $x$, which forces its terms to tend to $0$ a.e.
|
H: Counterexample for 2-22 of Spivak's Calculus on Manifolds?
If $f: \mathbb{R}^2 \rightarrow \mathbb{R}$ and $D_2f = 0$, show that $f$ is independent of the second variable.
I was thinking of ways to show this, when I came across what I think might be a counterexample.
Possible counterexample: Consider the function $f: \mathbb{R}^2 \rightarrow \mathbb{R}$ defined by $$f(x,y) =
\begin{cases}
x & \text{ if $y \geq 0$} \\
x^2 & \text{ if $y < 0$.} \\
\end{cases}$$
Then $D_2f = 0$, but $f(x,1) = x, f(x,-1) = x^2 \Rightarrow f(x,1) \neq f(x,-1)$, showing that $f$ is not independent of the second variable. Am I missing something here? It seems like the above theorem should work, i.e. that $f$ is independent of the second variable, but the counterexample seems convincing enough that I'm afraid I might have overlooked something.
An idea that just came to me is that $\lim_{y\rightarrow 0^-} \frac{f(x_0,y)-f(x_0,0)}{y-0}=\pm\infty$ (for $x_0$ with $x_0^2\neq x_0$), which does not equal $\lim_{y\rightarrow 0^+} \frac{f(x_0,y)-f(x_0,0)}{y-0} = 0$. Does that sound right?
AI: hint: the above statement can be proved using the mean value theorem with respect to $y$. As I said in the comments, your counterexample is not valid, since the exercise assumes that $D_2f$ exists everywhere. Your function fails to be differentiable with respect to $y$ at every point $(x_0, 0)$ with $x_0^2 \neq x_0$.
|
H: Location of new point in new rectangle
I am stuck on a problem that may be trivial, but I am stumped. Suppose that there is a rectangle with corners $(5,0), (20,0), (5,5)$ and $(20,5)$. Inside the rectangle, there is a point $(6,1)$. After modifying the rectangle, we get a new rectangle with corners $(10,1),(16,1),(10,4),(16,4)$. I was wondering where the point corresponding to $(6,1)$ would be located. I was also wondering whether this process of finding such new points can be generalized.
Thank you
AI: One natural transformation that maps the first rectangle to the second one is:
$$
x' = (x-5)\cdot \frac{16-10}{20-5}+10,
\quad
y' = (y-0)\cdot \frac{4-1}{5-0}+1
$$
The first expression maps the interval $[5,20]$ to the interval $[10,16]$.
The second expression maps the interval $[0,5]$ to the interval $[1,4]$.
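The process generalizes to any pair of axis-aligned rectangles; here is a small sketch (my own code, with a hypothetical name `remap`) implementing exactly these two formulas:

```python
def remap(p, src, dst):
    # src and dst are ((xmin, ymin), (xmax, ymax)); apply the affine maps above
    (sx0, sy0), (sx1, sy1) = src
    (dx0, dy0), (dx1, dy1) = dst
    x, y = p
    return ((x - sx0) * (dx1 - dx0) / (sx1 - sx0) + dx0,
            (y - sy0) * (dy1 - dy0) / (sy1 - sy0) + dy0)

print(remap((6, 1), ((5, 0), (20, 5)), ((10, 1), (16, 4))))   # (10.4, 1.6)
```

So the point $(6,1)$ lands at $(10.4,\ 1.6)$.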
|
H: For about real eigenvector.
Let $A$ be a real $n×n$ matrix. If $A$ has a complex eigenvalue, then is there any possibility to have a real eigenvector corresponding to this complex eigenvalue?
And my second question is if there is no real eigenvector for this complex eigenvalue, then is it not violating the definition of eigenvector to be always in $\mathbb{R^n}$?
AI: Let $A$ be a real matrix, and let $v$ be its real eigenvector corresponding to a complex eigenvalue $\lambda$. Then, $Av = \lambda v$. However, $Av$ is a product of a real matrix and a real vector, thus a real vector, whereas $\lambda v$ is a complex vector. Therefore, either $\lambda$ is real or $v$ is complex.
There are two ways to look at this case. One could say that, since we are working with real vector spaces, complex eigenvectors are not eigenvectors at all, as well as complex eigenvalues. This way, some matrices may have no eigenvalues at all (consider the $2\times 2$ matrix of plane rotation by $90^\circ$).
Another option is to view the matrix as acting on a complex vector space (that is, the matrix can in general be complex, it is just this particular matrix that happens to have real entries). Then, the eigenvalues and eigenvectors can be complex (and in fact any matrix has one, thanks to the fundamental theorem of algebra).
|
H: Let R = $\{ (n+4,n) \mid n \in \Bbb Z^+\}$, Find $R^2$
I found that this relation is not transitive; does this mean that $R^2$ does not exist?
Any help is appreciated thanks!
AI: The relation $R$ on the set $\mathbb{Z}^+$ is given by
$$
R \colon= \left\{ \, (n+4, n) \colon n \in \mathbb{Z}^+ \, \right\}.
$$
This $R$ can also be written as
$$
R = \big\{ \, (5, 1), (6, 2), (7, 3), (8, 4), \ldots \big\}
$$
Thus the relation $R^2 \colon= R\circ R$ is given by
$$
R^2 \colon= \left\{ \, (a, b) \in \mathbb{Z}^+ \times \mathbb{Z}^+ \colon \exists \, c \in \mathbb{Z}^+ \mbox{ for which } (a, c) \in R, (c, b) \in R \, \right\}.
$$
That is,
$$
\begin{align}
R^2 &= \left\{ \, (a, b) \in \mathbb{Z}^+\times \mathbb{Z}^+ \colon \exists \, c \in \mathbb{Z}^+ \mbox{ for which } c+4 = a, b+4 = c \, \right\} \\
&= \left\{ \, (a, b) \in \mathbb{Z}^+ \times \mathbb{Z}^+ \colon \exists \, c \in \mathbb{Z}^+ \mbox{ for which } a-4 = c = b+4 \, \right\} \\
&= \left\{ \, (a, b) \in \mathbb{Z}^+ \times \mathbb{Z}^+ \colon a-4 = b+4 \, \right\} \\
&= \left\{ \, (a, b) \in \mathbb{Z}^+ \times \mathbb{Z}^+ \colon a = b+8 \, \right\} \\
&= \left\{ \, (b+8, b) \colon b\in \mathbb{Z}^+ \, \right\}.
\end{align}
$$
Using the same logic, we can show that, for every positive integer $k$, we have
$$
R^k = \left\{ \, (b+4k, b) \colon b \in \mathbb{Z}^+ \, \right\}.
$$
And, then
$$
R^* = \bigcup_{k=1}^\infty R^k.
$$
Hope this helps.
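You can also verify $R^2$ by brute force on a finite window of $\mathbb{Z}^+$ (a sketch I added):

```python
R = {(n + 4, n) for n in range(1, 50)}
R2 = {(a, d) for (a, b) in R for (c, d) in R if b == c}   # relational composition
print(sorted(R2)[:4])   # [(9, 1), (10, 2), (11, 3), (12, 4)], i.e. pairs (b+8, b)
```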
|
H: Find the values of $a$ and $b$ if $\lim_{x\to -\infty}$ $\sqrt{x^2-x+1} + ax - b = 0$?
I took out x from the square root and reached the following expression,
$$\lim_{x\to -\infty} x\sqrt{1-\frac{1}{x}+\frac{1}{x^2}} + ax - b = 0$$
then I separated the part of the expression which contains x from b and tried evaluating its limit, giving me the following expression,
$$\lim_{x\to -\infty} \frac{\sqrt{1-\frac{1}{x}+\frac{1}{x^2}}+a}{\frac{1}{x}} $$
Now, the problem is that I am unable to proceed from here. What I have read and learned till now tells me that since the denominator of this expression tends to $0$, the numerator must also tend to $0$ for the limit to exist as a finite number, but I don't understand why it must be $0$. Can't it be something else? Can someone please explain the approach to solving such questions and the reason why it works? And it would be great if you could tell me the values of $a$ and $b$ in this case.
Note: Assuming that the numerator must also tend to $0$ for the limit to be finite, and then applying L'Hopital's rule, I got the values of $a$ and $b$ as $-1$ and $-1/2$ respectively. The only thing is that I don't have the answer to this question with me, so it would be great if you could tell me whether what I am getting is right or wrong.
AI: What I have read and learnt till now tells me that since the denominator of this expression tends to 0 the numerator must also tend to 0 for the limit to exist as a finite number, but I don't understand why only 0?
Let me address your question on how or why it works that way.
If the numerator were to tend to $\pm\infty$, then its absolute value increases without bound while the denominator gets smaller and smaller, so the absolute value of the quotient blows up. Hence, in this case, the limit diverges to $\pm\infty$.
If the numerator were to converge to some non-zero finite number $c$, then the quotient behaves like $\frac{c}{0}$. That's essentially infinite as well: in rigorous terms, the limit again diverges to $\pm\infty$, depending on the sign of $c$.
Hence, for the limit to exist, the numerator indeed needs to converge to zero, so at least it can keep pace with how the denominator converges to $0$. With this, the limit has a chance of converging to a finite number. In this case, the limit takes the indeterminate form $\frac{0}{0}$, which calls for L'Hospital's rule.
Update
To address OP's follow-up comment, let's get back to the question. We want to find out for which values of $a$ and $b$ does the limit equation
$$\lim_{x \to -\infty} \sqrt{x^2 - x + 1} + ax - b = 0$$
hold. In this answer, I'll try to follow OP's apparent strategy in solving the equation.
Indeed, this limit equation holds if and only if
$$\lim_{x \to -\infty} \sqrt{x^2 - x + 1} + ax = b$$
This is the beginning of OP's attempt to solve the equation. The question reduces to finding the values of $a$ and $b$ for which the limit
$$\lim_{x \to -\infty} \sqrt{x^2 - x + 1} + ax$$
exists. What OP tried to do is factor out $-x$ to get the equivalent limit
$$\lim_{x \to -\infty} -x \left(\sqrt{1 - \frac{1}{x} + \frac{1}{x^2}} - a\right)$$
Substituting $-x$ into $x$ so that $x \to \infty$, we have
$$\lim_{x \to \infty} x \left(\sqrt{1 + \frac{1}{x} + \frac{1}{x^2}} - a\right)$$
Here, multiplying by $x$ is equivalent to dividing by $1/x$, so we have
$$\lim_{x \to \infty} \frac{\sqrt{1 + \frac{1}{x} + \frac{1}{x^2}} - a}{\frac{1}{x}}$$
Clearly, the denominator converges to $0$. So for this limit to exist, the numerator must converge to $0$. Hence, we must have
$$\lim_{x \to \infty} \sqrt{1 + \frac{1}{x} + \frac{1}{x^2}} - a = 0$$
The limit on the left-hand side evaluates easily to $1 - a$, so we have
$$1 - a = 0 \Rightarrow a = 1$$
Now that we got $a$, we can plug it back into the limit equation to solve for $b$ and we can proceed from here.
Update
Let's finish this. Plugging in $a = 1$ into the equation, we have
$$b = \lim_{x \to \infty} \frac{\sqrt{1 + \frac{1}{x} + \frac{1}{x^2}} - a}{\frac{1}{x}} = \lim_{x \to \infty} \frac{\sqrt{1 + \frac{1}{x} + \frac{1}{x^2}} - 1}{\frac{1}{x}}$$
Take $u = 1/x$ so that $u \to 0$ and we have
$$b = \lim_{u \to 0} \frac{\sqrt{1 + u + u^2} - 1}{u}$$
From here, we get the indeterminate form $0/0$ and L'Hospital's rule kicks in.
$$b = \lim_{u \to 0} \frac{2u + 1}{2\sqrt{1 + u + u^2}} = \frac{1}{2}$$
Hence, we have $a = 1$ and $b = 1/2$.
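A quick numerical sanity check of $a=1$, $b=1/2$ (my addition):

```python
import math

a, b = 1, 0.5
for x in (-1e2, -1e4, -1e6):
    print(x, math.sqrt(x * x - x + 1) + a * x - b)   # tends to 0 as x -> -infinity
```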
|
H: Double integration over a Region $D$ enclosed by circle and lines
If $D=\{(x,y):x^2+y^2\geq 1, 0\leq x\leq 1,0\leq y\leq 1\}$, then find $$\iint_{D}\frac{1}{(x^2+y^2)^2}\,dA$$
What I tried: I have drawn a diagram of the region $D$.
Using polar coordinates $x=r\cos\theta,y=r\sin\theta$
And $x^2+y^2=r^2$ and $dA=rdrd\theta$
So the integral becomes $$I=\iint_{D}\frac{r}{r^4}\,dr\,d\theta=\iint_{D}\frac{1}{r^3}\,dr\,d\theta$$
I do not understand how to set the limits for $r$ and $\theta$. Help me please. Thanks
AI: Following up on my comment:
Notice that when $\theta\in[0,\pi/4]$, $r$ varies between the circle ($r=1$) and the line $x=1$, which in polar coordinates is described by $r=1/\cos\theta$.
On the same manner, when $\theta\in[\pi/4,\pi/2],$ $r$ varies between $r=1$ and the line $y=1$, which in polar coordinates is given by $r=1/\sin\theta$. Therefore you are looking for the solution of the following integral:
$$\int_0^{\pi/4}\int_{1}^{1/\cos\theta}r^{-3}drd\theta+\int_{\pi/4}^{\pi/2}\int_{1}^{1/\sin\theta}r^{-3}drd\theta$$
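Carrying out the integration, each inner integral evaluates to $\frac12\sin^2\theta$ (respectively $\frac12\cos^2\theta$), and by symmetry the two pieces are equal, giving the value $\frac{\pi}{8}-\frac14\approx 0.1427$. A rough numerical check (my addition; the integrand's cutoff at the circle makes `dblquad` only approximately accurate):

```python
import numpy as np
from scipy import integrate

val, _ = integrate.dblquad(
    lambda y, x: (x * x + y * y) ** -2 if x * x + y * y >= 1 else 0.0,
    0, 1, 0, 1)
print(val, np.pi / 8 - 0.25)   # both approximately 0.1427
```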
|
H: Proof that greatest entry of a unit vector is $\leq 1$ and $\geq \frac{1}{\sqrt{n}}$
I have a real orthogonal matrix, so the column vectors form an orthonormal system; in particular, each column vector has length one.
I now want to show that for an arbitrary column vector $v_k \in \mathbb{R^n}$ the absolute value of the greatest entry $|v_{k_i}|$
i) is less or equal than one
ii) is greater or equal than $\frac{1}{\sqrt{n}}$. So
$$\dfrac{1}{\sqrt{n}} \leq \;\max\{|v_{k_1}|,|v_{k_2}|,...,|v_{k_n}|\} = |v_{k_i}|\; \leq 1.$$
i) This is intuitive, but I struggle to come up with a proof. Can I assume that for an orthogonal matrix the columns are always orthonormal with respect to the standard inner product, or are other inner products possible, too? If not, I would prove it using an indirect proof; otherwise I don't know how.
ii) I have no idea how to approach this, any hints are welcome.
Thank you
AI: Every column has norm $1$. So, the sum of the squares of its entries is equal to $1$ and therefore no entry can have absolute value greater than $1$.
But if all of them had absolute value smaller than $\frac1{\sqrt n}$, then the sum of the squares of the entries would be smaller than$$\overbrace{\frac1n+\frac1n+\cdots+\frac1n}^{n\text{ times}}=1.$$
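A quick empirical check with a random orthogonal matrix (my addition):

```python
import numpy as np

n = 5
Q, _ = np.linalg.qr(np.random.randn(n, n))   # Q has orthonormal columns
m = np.abs(Q).max(axis=0)                    # largest |entry| of each column
print((m <= 1).all(), (m >= 1 / np.sqrt(n) - 1e-12).all())   # True True
```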
|
H: Differentiability using differentiation
We use this method for piecewise functions to determine differentiability at the point where function changes its definition.
For example-
$f(x)$ =
\begin{cases}
x+1, & x<1 \\
2x^2, & x\geqslant 1
\end{cases}
First we check continuity at $x=1$. Then we calculate the limiting value of $f'(x)$ at $x=1$ from both sides. If this value exists, then we say that the function is differentiable; otherwise not.
The idea of the method is quite obvious.
But my instructor said this method is not quite reliable. He said that if the one-sided limits of the derivative fail to exist because of oscillation, then we should use the standard (difference-quotient) method. But I don't understand why. Can someone point out other flaws in this method as well?
AI: Consider the function
$$f(x)=\begin{cases}0&x\le0\\x^2\sin{\left(\frac1x\right)}&x\gt0\end{cases}$$
Then we have
$$f'(x)=\begin{cases}0&x\le0\\2x\sin{\left(\frac1x\right)}-\cos{\left(\frac1x\right)}&x\gt0\end{cases}$$
Note that $\lim_{x\to0^-}f'(x)=0$ exists but $\lim_{x\to0^+}f'(x)$ doesn't exist, even though $f'(0)=0$ exists by the usual limit definition of the derivative.
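Numerically (a sketch I added), the difference quotient of $f$ at $0$ settles down to $0$ even though $f'$ keeps oscillating nearby:

```python
import math

f = lambda x: 0.0 if x <= 0 else x * x * math.sin(1 / x)
for h in (1e-1, 1e-3, 1e-5):
    print((f(h) - f(0)) / h)                            # -> 0, so f'(0) = 0
fp = lambda x: 2 * x * math.sin(1 / x) - math.cos(1 / x)
print([round(fp(x), 3) for x in (1e-2, 2e-2, 3e-2)])    # f' oscillates near 0
```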
|
H: Deduce that $ e < \left({1+ \frac 1n}\right)^{n+\frac12} < e^{1+\frac {1}{2n (n+1)}} $ from the result obtained.
Prove that $$ 2x < \log{\frac {1+x}{1-x}} < 2x \left[1+\frac {x^2}{3 (1-x^2)}\right] $$ where $0 < x <1.$ Hence, deduce that $$ e < \left({1+ \frac 1n}\right)^{n+\frac12} < e^{1+\frac {1}{2n (n+1)}} $$
I have solved the first part by taking derivative but I'm not able to deduce the second part from the result obtained.
AI: It is simple. Just substitute $x=\frac{1}{2n+1}$, so that $\frac{1+x}{1-x}=\frac{n+1}{n}$ and $n+\frac12=\frac{1}{2x}$.
Multiplying the double inequality through by $n+\frac12$ gives
$$1 < \left(n+\tfrac12\right)\log\frac{n+1}{n} < 1+\frac{x^2}{3(1-x^2)} = 1+\frac{1}{12n(n+1)} \le 1+\frac{1}{2n(n+1)},$$
and then taking exponentials on both sides yields the claim.
The substitution is legitimate because $0 < \frac{1}{2n+1} < 1$, which holds since $1 < 1+2n$ for every positive integer $n$.
|
H: Is local minimum/maximum necessarily global when it's the only stationary point of a continuous & differentiable function?
Couldn't find this theorem even though it feels very intuitive to me.
If $f:\mathbb{R}^n \to \mathbb{R}$ is continuous and has only one stationary point, which is a local minimum/maximum, doesn't that necessarily make it global?
If not - can you please give an example?
If yes - where is it proven?
AI: For $n=1$: you need $f \in C^1$ (the function is not only continuous but continuously differentiable). A counterexample: $f(x) = e^x - |x + 1|$, whose only stationary point is a local minimum at $x=0$ that is not global, since $f(x)\to-\infty$ as $x\to-\infty$.
For $n>1$ things are more complicated: even a $C^\infty$ function can have a unique critical point that is a local minimum but not a global one. A classical counterexample is $f(x,y)=x^2+y^2(1-x)^3$, whose only critical point is a local minimum at the origin, while $f(2,y)=4-y^2$ is unbounded below.
|
H: Choose balls out of boxes
Part (a) How many ways are there to put $4$ balls into $3$ boxes, given that the balls can all be distinguished and so can the boxes? (For instance, perhaps each ball is a different color, and each box is a different color as well.)
Part(b) How many ways are there to put $4$ balls into $3$ boxes, given that the balls can all be distinguished but the boxes are not distinguished? (Thus, for example, putting all the balls in the first box is counted as the same outcome as putting all the balls in the second box.)
Part (c) How many ways are there to put $4$ balls into $3$ boxes, given that the balls are not distinguished but the boxes are?
I know that there are 4 possible arrangements for Part (a): $4+0+0$, $2+2+0$, $2+1+1$, $3+1+0$. I counted the number of combinations of each of these and got 15, but that is incorrect. I'm not sure what I did wrong, and I don't know how to do Part (b) or Part (c). Any help is appreciated.
AI: $\blacksquare$ For part (a), [I've explained at the end why you were getting a wrong answer]
for each of the balls $O_1, O_2, O_3, O_4$ (distinguishable, hence we can assign names like this) you have $3$ options $Box_1, Box_2, Box_3$ for them to go in, and the placement of $O_1$ (or $O_2$, or $O_3$, or $O_4$) does not depend or affect the placement of any of the other balls, that is to say, these are independent choices, so we can multiply their options as $3\times 3\times 3\times 3=81$
$\blacksquare$ For part (b),
you have $4$ distinguishable balls $A,B,C,D$ and the problem is equivalent to finding the number of partitions of the set $Balls=\{A,B,C,D\}$ into at most $3$ disjoint subsets. You have the following partitions of the number $4$ as sums $$1+1+2, \ 2+2, \ 3+1,\ 4$$ and for each of these partitions, you have to find the number of ways in which the set $Balls$ can be broken up. As you have mentioned, there is ${4\choose 4}=1$ $\textbf{[for the partition $4=4$]}$ only one way of putting all the balls together. There are $\frac1{2!}{4\choose 2}{4-2 \choose 2}=3$ $\textbf{[for the partition $4=2+2$]}$ ways of pairing up the balls in two boxes (which are indistinguishable), ${4\choose 3}{4-3 \choose 1}=4$ $\textbf{[for the partition $4=3+1$]}$ ways of putting $3$ balls together in one box and $1$ ball in another box and ${4\choose 2}{4-2\choose 1}{4-2-1\choose 1}\frac1{2!}=6$ $\textbf{[for the partition $4=2+1+1$]}$ ways to put $2$ balls together in one box, and the remaining two separately in different boxes.
The way the counting has been done here is, say for the $4=2+1+1$ partition, you need to choose $2$ balls from $Balls$ to account for the first $2$, which can be done in ${4\choose 2}$ ways, now you have $4-2$ balls remaining, and you must choose $1$ from there to account for the first $1$ in the partition above, which you can do in ${4-2\choose 1}$ ways, and similarly the last ball can be chosen from the remaining ${4-2-1}$ balls in ${4-2-1\choose 1}$ way. Of course, writing the last term in these products is redundant, because once we have chosen all the remaining groupings for the partitions, there is only one ball remaining for us to choose which can obviously be done in $1$ way only. Finally we divide by $2!$ to account for the overcounting that we have due to summands of the same magnitude, i.e. $\{AB, C, D\}, \ \{AB, D, C\}$ are the same.
So, you add up all these numbers to get $6+4+3+1=14$ for part (b). Counting the partitions like this gets tough as you might figure, so check out Stirling numbers of the second kind which count just the answer to this problem when you feel comfortable.
$\blacksquare$ For part (c),
Since the balls are not distinguishable but the boxes are, let's say we place $x$, $y$, $z$ balls in the $1^{st},\ 2^{nd}, \ 3^{rd}$ boxes respectively. We have then, the following restrictions on $x,y,z$ which have to be integers, by the way: $$x,y,z \ge 0; \\ x+y+z=4$$
i.e. we have to find the number of non-negative integral solutions to the above equation $x+y+z=4$. Counting that number is same as counting the number of ways in which you can arrange $4$ balls and $2$ '+' signs in a line, because each such arrangement will lead to each unique solution triplet $(x,y,z)$ $$\text{ like } \blacklozenge + \blacklozenge \blacklozenge \blacklozenge + \text{ denotes the solution } 1+3+0 \\ \text{ and } ++\blacklozenge \blacklozenge \blacklozenge \blacklozenge \text{ denotes the solution } 0+0+4$$ So we have to count the number of permutations of $6$ symbols where $4$ are alike of one kind and $2$ are alike of another kind, and the number of ways to do that is $\dfrac{6!}{4!2!}=15$.
Finally, to answer why you were getting a wrong answer for part (a), you actually considered the balls as indistinguishable (do you see why?), and the boxes as distinguishable [which was the case for part (c)] and hence $15$ is the answer for part (c).
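All three counts can be confirmed by brute-force enumeration (a sketch I added, with a hypothetical helper `blocks`):

```python
from itertools import product

placements = list(product(range(3), repeat=4))   # a box chosen for each of the 4 balls

# (a) balls and boxes both distinguishable
print(len(placements))                                                   # 81

# (b) boxes indistinguishable: identify placements with the same set of blocks
def blocks(p):
    return frozenset(frozenset(i for i, b in enumerate(p) if b == k)
                     for k in range(3)) - {frozenset()}
print(len({blocks(p) for p in placements}))                              # 14

# (c) balls indistinguishable: only the occupancy counts matter
print(len({tuple(p.count(k) for k in range(3)) for p in placements}))    # 15
```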
|
H: Simple tensors and projective modules
For a (unital) ring $R$, let $M$ and $N$ be a projective right, and left, $R$-module respectively.
Does it hold that $0 \neq m \otimes n \in M \otimes_R N$ for all non-zero $m \in M$, $n\in N$?
If so, then why?
AI: It does not work even for projective, or free, modules, and the example is already given in comments to the linked more general question by John Palmieri. Let me recap it:
Take $R$ to be a non-reduced commutative ring, e.g. $R=k[X]/(X^2)$, and let $x \in R$ be a nonzero element with $x^2=0.$ Then $R\otimes_R R \simeq R$ by the "multiplication map" $a\otimes b \mapsto ab$, and so the element $x \otimes x$ corresponds under this isomorphism to $x^2=0$.
In fact, the case $R\otimes R$ is in some sense the essential case to consider: Suppose that $M\oplus M' \simeq R^{\oplus I}, N\oplus N'\simeq R^{\oplus J}$. Then there is an injective map
$$M \otimes_R N \hookrightarrow_{\oplus} R^{\oplus I}\otimes_R R^{\oplus J} \simeq (R\otimes_R R)^{\oplus (I\times J)} \simeq R^{\oplus (I\times J)}.$$
So the original elementary tensor $m\otimes n$ induces bunch of elements $(m_{i}n_{j})_{i \in I, j\in J}$ of $R$, and $m\otimes n$ is nonzero iff at least one of the products $m_{i}n_{j}$ is nonzero.
Given that at least one of the $m_i$'s and at least one of the $n_j$'s
are nonzero, it follows that $m\otimes n \neq 0$ provided that $R$ has no nonzero zero-divisors.
|
H: 7 digit number combinations where the first 3 digits must be equal to another 3.
Seven-digit telephone numbers are not allowed to begin with $0$ or $1$.
I can only remember a seven-digit telephone number if the first three digits (the "prefix") are equal to either the next three digits or the last three digits. For example, I can remember $389$-$3892$ and $274$-$9274$.
How many seven-digit telephone numbers can I remember?
I split this up into 2 cases, 1 where the first 3 is the same as the next 3 and another where the first three is the same as the last 3 and I got 8000 for each case and multiplied by 2 which is 16000. However, there are cases where all the digits are the same, like 8888888, but I don't know how many there are and what to subtract from 16000.
AI: First I would check how many "first three digits" there are, which are not allowed to begin with $0$ or $1$. So there are $8 \times 10 \times 10 = 800$ possible "first three digits"
Pick one possibility, call it $ABC$, and split it into cases as you said.
Case "ABC-ABCx": well there are 10 choices for the last digit, totalling $800 \times 10 = 8000$ possibilities
Similarly for case "ABC-xABC". Hence so far we have
$$ Case1 + Case2 = 16,000 $$
But as you point out we overcounted, since "ABC-ABCx" = "ABC-xABC" exactly when $A=B=C=x$, namely for phone numbers of the form "AAA-AAAA", of which there are $8$ possibilities. So the final answer is
$$ Total = 16,000 - 8 $$
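A brute-force check of $16{,}000-8=15{,}992$ (my addition):

```python
count = 0
for n in range(2_000_000, 10_000_000):   # 7-digit numbers not starting with 0 or 1
    s = str(n)
    if s[:3] == s[3:6] or s[:3] == s[4:7]:
        count += 1
print(count)   # 15992
```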
|
H: Eigenvalues and operator norm
$A: \mathbb R^2 \to \mathbb R^2$ is $2 \times 2$ matrix with eigenvalues $\frac{2}{3}$ and $\frac{9}{5}$. Prove that there exists
a non-zero vector $v$ with $\|Av\|> 2\|v\|$, and
a non-zero vector $v$ with $\|Av\|<\frac{1}{2} \|v\|$.
By defining a continuous function from the unit circle $S$ in the plane, (which is a compact set) to the real line, I conclude that the image contains the closed interval $[\frac{2}{3}, \frac{9}{5}]$. But how do I conclude that the image doesn’t contain $\frac{1}{2}$ and $2$? Help solicited.
AI: Choose $A=\begin{bmatrix} {2 \over 3} & 1000 \\ 0 & {9 \over 5}\end{bmatrix}$.
Note that $\|A (0,1)^T\| > 1000$.
Let $v=\left(\sqrt{1-{1 \over 1000^2}},\ -{1 \over 1000}\right)^T$, a unit vector, and note that
$Av = \left({2\over 3} \sqrt{1-{1 \over 1000^2}}-1,\ -{9 \over 5000}\right)^T$
and $\|Av\| \le \sqrt{{1\over 3^2} + {1 \over 10^2}} < {11 \over 30} < { 1\over 2}$.
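A quick numerical check of both claims for this matrix (my addition):

```python
import numpy as np

A = np.array([[2 / 3, 1000.0], [0.0, 9 / 5]])
print(np.linalg.eigvals(A))                               # [0.666..., 1.8]
v1 = np.array([0.0, 1.0])
v2 = np.array([np.sqrt(1 - 1 / 1000**2), -1 / 1000])
print(np.linalg.norm(A @ v1) > 2 * np.linalg.norm(v1))    # True
print(np.linalg.norm(A @ v2) < 0.5 * np.linalg.norm(v2))  # True
```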
|
H: Is it true that this entry can always be chosen in such a way that the matrix obtained has 0 determinant?
All the entries of an $n × n$ matrix are fixed with the exception of one entry. Is it true that this entry
can always be chosen in such a way that the matrix obtained has 0 determinant?
I thought in order to $det$ to be 0 there must two proportional rows(columns) or 0 row(column). So with only one entry it is not always possible. Is it proof?
AI: Consider the Laplace expansion of the determinant along the row or column containing the given entry. If the cofactor of that entry is not $0$, there will be a unique value of the entry that makes the determinant $0$. If the cofactor is $0$, the value of this entry does not affect the determinant.
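For instance (a sympy sketch I added): with the $(3,3)$ entry free, the cofactor is nonzero and there is a unique choice; with two equal rows, the determinant is $0$ regardless of the entry:

```python
import sympy as sp

t = sp.symbols('t')
M = sp.Matrix([[1, 2, 3], [4, 5, 6], [7, 8, t]])
print(sp.solve(M.det(), t))    # [9]: the cofactor of t is -3, so a unique value works
N = sp.Matrix([[1, 2, 3], [1, 2, 3], [7, 8, t]])
print(sp.simplify(N.det()))    # 0: the cofactor of t is 0, so every value works
```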
|
H: For any finitely-generated abelian group, show M(G,n) exists
Let $G$ be a finitely-generated abelian group. Prove there is a CW complex $M(G,n)$ whose reduced homology $\tilde H_k(M(G,n))$ equals $G$ if $k=n$ and is zero otherwise.
Here's what I have so far:
By the fundamental theorem for finitely generated abelian groups, $G \cong \mathbb{Z}^r \times \mathbb{Z}/p_1^{n_1}\times \dots \times \mathbb{Z}/p_m^{n_m}$ for primes $p_i$.
$S^n$ is an $M(\mathbb{Z},n)$.
I'm thinking the CW complex I'm looking for will be a wedge sum of $r$ copies of $S^n$ together with other spaces.
How can I continue from here?
AI: You're right that using wedge sums is a good idea. In fact, it should be clear that using wedge sums and using $S^n$, you only need to prove that $M(\mathbb Z/p^k\mathbb Z, n)$ exists for all $k,n$.
Hint: use the fact that $\mathbb Z/p^k\mathbb Z$ is the homology of $\mathbb Z\overset{p^k}\to \mathbb Z$, together with cellular homology. (Concretely, one can take $S^n$ with a single $(n+1)$-cell attached by a map of degree $p^k$.)
|
H: Sum of roots of trigonometric equation
This is the hardest problem on Georgian (country) high school math exam.
Find all values of the parameter $a$ for which the sum of all the roots of the equation:
$$\sin\left(\sqrt {ax-x^2}\right)=0 $$
equal to $100$.
Note that you can't use calculus and we assume only real roots!
AI: We need $a>0$ (otherwise the sum of the roots is nonpositive), and the equation holds exactly when
$$ax-x^2=k^2\pi^2$$
for some integer $k\ge0$. By Vieta, for each admissible $k$ the two roots sum to $a$; here $k$ is admissible precisely when the discriminant is nonnegative, i.e.
$$a^2\ge 4k^2\pi^2,\qquad\text{that is,}\qquad k\le\frac{a}{2\pi}.$$
Writing $K=\lfloor a/(2\pi)\rfloor$, the sum of all the roots is therefore $(K+1)a$, so we need
$$(K+1)a=100,\qquad 2\pi K\le a<2\pi(K+1).$$
Substituting $a=\frac{100}{K+1}$ into the bracketing inequalities gives
$$2\pi K(K+1)\le 100<2\pi(K+1)^2,$$
and checking $K=0,1,2,3,\dots$ shows that only $K=3$ satisfies both, giving
$$a=\frac{100}{3+1}=25.$$
Note that as $a$ is rational and $\pi$ transcendental, no $k\ge1$ can produce a double root (which would contribute $a/2$ rather than $a$).
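A direct check (my addition, with a hypothetical helper `root_sum`) that $a=25$ is the unique candidate that works:

```python
import math

def root_sum(a):
    total, k = 0.0, 0
    while (k * math.pi) ** 2 <= a * a / 4:   # k admissible iff a^2 - 4k^2*pi^2 >= 0
        total += a                           # roots of ax - x^2 = (k*pi)^2 sum to a
        k += 1
    return total

for a in (100, 50, 100 / 3, 25, 20):
    print(a, root_sum(a))                    # only a = 25 gives 100
```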
|
H: Two homomorphisms $f\colon A\to A',\ g\colon B\to B'$ induce a homomorphism $f \otimes g: A\otimes B \to A'\otimes B'$
Yesterday I was working on an exercise-sheet, given the following definition
For two Abelian groups $A$ and $B$ we define their tensor product
$A\otimes B$ as the quotient of the free Abelian group on the set of
formal generators $\{a \otimes b \mid a \in A; b \in B\}$ by the
subgroup generated by elements of the form $$a_1 \otimes b + a_2
\otimes b − (a_1 + a_2) \otimes b$$ and $$a\otimes b_1 +a\otimes b_2
−a\otimes(b_1 +b_2).$$ By abuse of notation we write $a\otimes b$ for
the corresponding element in the quotient $A \otimes B.$
Today I wanted to verify the following:
Two homomorphisms $f\colon A\to A'$ and $g\colon B\to B'$ induce a homomorphism $$f\otimes g\colon A\otimes B \to A'\otimes B'\ \ \text{with}\ \ (f\otimes g)(a\otimes b) = f(a)\otimes g(b)$$
I know that I am supposed to verify $f\otimes g$ is well defined and $f$ and $g$ indeed induce $f\otimes g$. But I wasn't too confident in how to do this exactly. I was wondering, whether my initial idea about picturing the diagram as below is how to approach problems like this in general.
[diagram]
Where $\{a^n\}$ denotes the set of formal generators of $A$ and $i_*$ are the inclusion maps. That would somehow reduce the problem to the universal property of the quotient space
[diagram]
Would my approach be correct? Did I "think about it in the right way"? Or is there another "canonical" way to this problem?
This exercise might be trivial to most people, but I do not have too much experience in these kind of problems and very little practise with commutative diagrams or diagram chasing in general. I would really love to learn the general approach to problems like this.
AI: It is preferable to forget how the tensor product is constructed, and to remember its universal property: any $\mathbb{Z}$-bilinear map $\varphi:A\times B\to C$ (where $C$ is an abelian group) induces a unique group morphism $u:A\otimes B\to C$ such that $u(a\otimes b)=\varphi(a,b)$ for all $(a,b)\in A\times B$.
Now, I let you check that $\varphi: (a,b)\in A\times B\mapsto f(a)\otimes g(b)\in A'\otimes B'$ is $\mathbb{Z}$-bilinear...
|
H: Proof of an IF - THEN statement
Let $S \subseteq \mathbb{R}$. IF $\exists m \in \mathbb{R}$ such that $\forall n \in S, m \geq n$, THEN $\exists m \in \mathbb{R}$ such that $\forall n \in S, m > n$
Now I know that we get to pick $m$, since it's an existence statement; we can pick it as $m = n + 1$, or I can pick $m = \max(S)$, and both of these methods guarantee that $m \geq n$. But if we use $m = \max(S)$ then it doesn't satisfy the property $m > n$ required in the THEN statement. So I don't know whether to say this statement is true or false. I think it's false, since I can use that max method to make the THEN part fail.
AI: The statement is true. Take $m^*\in\Bbb R$ such that, for each $n\in S$, $m^*\geqslant n$. Now, let $m=m^*+1$. Then, for each $n\in S$, $m>m^*\geqslant n$.
|
H: Which is the standard notation for an infinite summation (or any summation-like operator) without indexes?
An infinite indexed summation is written as $\sum\limits_{i=1}^\infty i$.
A summation of items in a set, finite or not, is $\sum\limits_{c\in C} c$.
How should I represent an infinite sum of the same thing over and over?
Is just $\sum c$ clear enough? Or...
$\sum\limits^\infty c$
$\sum\limits^\infty_{\_} c$
ps.
A more appealing example is perhaps the big X from \varprod which isn't available here.
One of the use cases is to represent a "stream" of Cartesian products that can be zipped to another, finite, one and discard the excess. Actually, the tuples will be zipped, but the result is the same anyway.
ps.2 -
It is something to be written several times in an algebraic expression. Like a language to define a system, so the less verbose the better.
AI: For instance, you can write:
$$ \sum_{k=0}^{\infty} c $$
(since $c$ does not depend on $k$, it is clear that you mean an infinite sum of the same element).
|
H: Prove that there exist an angle $\alpha$ and $r \in \Bbb R$ such that $a\cos x + b\sin x = r\cos(x+\alpha)$
Let's say that we have an expression $a\cos x + b\sin x$ where $a \in \Bbb R$ and $b \in \Bbb R$.
I was learning about finding the minimum and maximum values of an expression of this form for some given value of $a$ and $b$ by expressing it in terms of a single trigonometric function. My textbook did it by assuming that $a = m\sin\phi$ and $b = m\cos\phi$, where $m \in \Bbb R$ and $\phi$ is some angle.
But I couldn't wrap my head around the fact that any two real numbers can be expressed as the product of another real number and a trigonometric function for some angle.
So, I decided to take another approach which is highly similar to this one.
It is solely based on the assumption that the expression can be expressed in the form of $r\cos\theta$, where $r \in \Bbb R$ and $\theta$ is some angle. Once this assumption is proved, here's how I will continue it :
$$a\cos x + b\sin x = r\cos\theta$$
Let's say that $\theta = \alpha + x$. So :
$$a\cos x + b\sin x = r\cos(\alpha + x) = (r\cos\alpha)\cos x + (-r\sin\alpha)\sin x$$
This gives us the values of $a$ and $b$ as $r\cos\alpha$ and $-r\sin\alpha$ respectively.
So, it would work perfectly if I can prove the assumption mentioned above.
Unfortunately, I haven't been able to prove it yet.
I was successful in proving its converse, though, i.e. a given expression, say $p\cos\gamma$, where $p \in \Bbb R$ and $\gamma$ is some angle, can be expressed in the form $c\cos\delta + d\sin\delta$ where $c \in \Bbb R$, $d \in \Bbb R$ and $\delta$ is some angle.
This is highly similar to what I've stated above (what I'd do once the assumption is proved).
First, we assume that $\gamma = \beta + \delta$, where $\beta$ and $\delta$ are two angles that fit in the equation.
$$\therefore p\cos\gamma = p\cos(\beta + \delta) = p(\cos\beta\cos\delta - \sin\beta\sin\delta) = (p\cos\beta)\cos\delta + (-p\sin\beta)\sin\delta$$
Substituting $p\cos\beta$ by $c$ and $-p\sin\beta$ by $d$, we can arrive at $c\cos\delta + d\sin\delta$.
I don't know if this will be helpful in proving the initial assumption that an expression $a\cos x + b\sin x$ can be expressed as $r\cos\theta$ for some angle $\theta$ and for some real value of $r$.
I'd really appreciate help in proving this.
Thanks!
PS : I'm not familiar with Euler's formula
AI: The function $p(t) = (\cos t, \sin t)$ maps out the unit circle on the plane.
In fact, for any point $(a,b)$ on the unit circle, there is a unique $t$ (modulo $2 \pi$) such that $p(t) = (a,b)$.
If you pick any point in the plane other than the origin, say $(x,y)$ then
with $R=\sqrt{x^2+y^2}$ the point ${1 \over R} (x,y)$ lies on the unit circle and so there is some $t$ such that ${1 \over R} (x,y) = p(t)$ and so
we can write $(x,y) = R p(t)$, or $x = R \cos t, y = R \sin t$.
So, you are given $a \cos x + b \sin x$, then there is some $\phi$ such that $a= \sqrt{a^2+b^2} \cos \phi, b= \sqrt{a^2+b^2} \sin \phi$ and we can write
$a \cos x + b \sin x= \sqrt{a^2+b^2}(\cos \phi \cos x + \sin \phi \sin x)$ and using the usual trigonometric identities we see that
$a \cos x + b \sin x= \sqrt{a^2+b^2} \cos(x-\phi)$.
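A quick numerical confirmation (my addition; `atan2(b, a)` picks the angle $\phi$ with $\cos\phi = a/R$ and $\sin\phi = b/R$):

```python
import math

a, b = 3.0, -2.0
R, phi = math.hypot(a, b), math.atan2(b, a)
for x in (0.0, 0.7, 2.5):
    print(a * math.cos(x) + b * math.sin(x), R * math.cos(x - phi))   # pairs agree
```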
|
H: Did I find the right function $f(x) = mx+n?$
I have the following task:
Let $(x_1, y_1)$ and $(x_2, y_2)$ be two points in the plane. We want to determine a
straight line given by the function $f$, i.e. $f(x) = mx + n$, such that $f(x_k) = y_k$
($k = 1,2$).
Find $m$ and $n$.
I solved it like this:
$$ m = \frac{y_2-y_1}{x_2-x_1} $$
and
$$ n = y_1-{mx_1} = y_1- \frac{y_2-y_1}{x_2-x_1}\cdot x_1 $$
Did I calculate it correctly?
AI: We get
$$m=\frac{y_2-y_1}{x_2-x_1}$$ so
$$n=y_1-\left(\frac{y_2-y_1}{x_2-x_1}\right)x_1$$
so $$n=\frac{x_2y_1-x_1y_2}{x_2-x_1}$$
|
H: Evaluate $\int_0^1 \{\ln{\left(\frac{1}{x}\right)}\} \mathop{dx}$
$$\int_0^1 \left\{\ln{\left(\frac{1}{x}\right)}\right\} \mathop{dx}$$
where $\{x\}$ is the fractional part of $x$. I was wondering if this integral converges and has a closed form, but I don't know how to calculate it. I tried $u=\frac{1}{x}$ to get $$\int_1^{\infty} \frac{\{\ln{u}\}}{u^2} \; du$$
and then perhaps convert the numerator into a series somehow...?
AI: Using the change of variable $y = \log(1/x)$, i.e. $x = e^{-y}$, your integral becames
$$
I = \int_0^\infty e^{-y} \{y\}\, dy
= \sum_{n=0}^\infty \int_n^{n+1} e^{-y}(y-n)\, dy
= \sum_{n=0}^\infty e^{-n} (1 - 2/e) = \frac{e-2}{e-1}.
$$
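A numerical check of $\frac{e-2}{e-1}\approx 0.41802$, integrating piece by piece over $[n, n+1]$ (a sketch I added):

```python
import math
from scipy import integrate

total = sum(integrate.quad(lambda y, n=n: math.exp(-y) * (y - n), n, n + 1)[0]
            for n in range(40))
print(total, (math.e - 2) / (math.e - 1))   # both approximately 0.418023
```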
|
H: Solve $y'(x)=\begin{pmatrix}1 & 1 \\ 4 & 1\end{pmatrix}y(x)$
Find the general solution for $y'(x)=\begin{pmatrix}1 & 1 \\ 4 & 1\end{pmatrix}y(x)$, for $y:\mathbb{R}\to \mathbb{R}^2$.
I've tried to solve this component-wise; that is, I've tried to solve $y_1'-y_1=y_2$ and $4y_1+y_2=y_2'$ by plugging the first equation into the second and then solving for $y_2$, but this, along with the other approaches I've tried, doesn't seem to work.
AI: $$
\begin{cases}
y'_1=y_1+y_2 \\
y'_2=4y_1+y_2
\end{cases}
$$
Multiply the first DE by $2$:
$$
\begin{cases}
2y'_1=2y_1+2y_2 \\
y'_2=4y_1+y_2
\end{cases}
$$
Subtract the first DE from the second:
$$ \implies y'_2-2y'_1=2y_1-y_2$$
$$ y'_2+y_2=2(y_1+y'_1)$$
Multiply by $e^t$ both sides:
$$ (y_2e^t)'=2(y_1e^t)'$$
Integrate.
$$ y_2=2y_1+c_1e^{-t}$$
Plug this into the first DE and solve:
$$ y'_1=y_1+y_2 $$
$$ y'_1=3y_1+c_1e^{-t}$$
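For a cross-check (my addition), sympy's `dsolve` solves the system directly; the constants may be arranged differently from the hand computation:

```python
import sympy as sp

t = sp.symbols('t')
y1, y2 = sp.Function('y1'), sp.Function('y2')
print(sp.dsolve([sp.Eq(y1(t).diff(t), y1(t) + y2(t)),
                 sp.Eq(y2(t).diff(t), 4 * y1(t) + y2(t))]))
# solutions are combinations of e^{3t} and e^{-t} (eigenvalues 3 and -1)
```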
|
H: Find generating function for $F_{2n}$
Given that $F(x)=\sum_{n=0}^\infty F_nx^n= \frac{x}{1-x-x^2}$, where $F_n$ is the $n^{th}$ Fibonacci number and $F(x)$ is the generating function associated to the sequence, find the generating function associated to $F_{2n}$
I know that $F_{2n}=F^2_n+2F_nF_{n-1}$ but this doesn't seem to help much. How can I do this?
AI: We want to compute
$$\sum_{n\geqslant 0} \ F_{2n} x^n$$
Recall the fact that
$$F(x)=\sum_{n\geqslant 0} \ F_{n} x^n=F_0+F_1x+F_2x^2+\cdots=\frac{x}{1-x-x^2}$$
Now,
$$\frac{1}{2}(F(x)+F(-x))=F_0+F_2x^2+F_4x^4+\cdots=\sum_{n\geqslant 0} F_{2n}x^{2n}$$
Therefore, we have
$$\sum_{n\geqslant 0} \ F_{2n} x^n=F_0+F_2x+F_4x^2+\cdots=\frac{1}{2}\left(F(\sqrt{x})+F(-\sqrt{x})\right)=\frac{x}{1-3x+x^2}$$
The problem is solved.
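Expanding the result as a series recovers the even-index Fibonacci numbers (a check I added):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.series(x / (1 - 3 * x + x**2), x, 0, 6))
# x + 3*x**2 + 8*x**3 + 21*x**4 + 55*x**5 + ...  i.e. F_2, F_4, F_6, F_8, F_10
```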
|
H: The existence of group isomorphism between Euclidean space.
Is there any group isomorphism for addition between $\mathbb{R}^n$ and $\mathbb{R}^m$ where $n\neq m$? I can prove that there is no vector space isomorphism or diffeomorphism, but I still do not know whether, considering only the abelian group structure under addition, a group isomorphism between them exists.
AI: Hint: For vector spaces over $\mathbb{Q}$, you can see the underlying group homomorphism as a $\mathbb{Q}$-linear map. This reduces the problem to seeing if dimensions of the vector spaces over $\mathbb{Q}$ are equal as this is equivalent to the underlying groups being isomorphic. In turn, it is known (facts section) that the dimension of an infinite dimensional vector space over $\mathbb{Q}$ is the cardinality of the vector set.
|
H: Change (or increment) raised to some power
After some shallow research, I've found no results of anyone asking the same question as me. Please feel free to refer me to wherever this has been discussed previously.
When expressing a change with the letter delta: $\Delta x$ for example and raise it, let's say, to the third power, should it be written as
$(\Delta x)^3$, $\Delta^3 x$, or $\Delta x^3$?
I'm guessing the last one is equivalent to $\Delta(x^3)$.
Thanks in advance.
AI: Write it as $(\Delta x)^3$.
The others would probably be interpreted like so:
$\Delta^3 x$ looks like a higher-order finite difference. This is the finite analogue of $\dfrac{\rm d^3}{{\rm d}x^3}x$. The MathWorld link shows an example with $\Delta^3$ applied to the sequence of cubes $1, 8, 27, \ldots$.
$\Delta x^3$ looks like the change in $x^3$, not the cube of the change in $x$.
Incidentally, note how in $\frac{\rm d^3}{{\rm d}x^3}x$, there are no parentheses. The denominator is really $({\rm d}x)^3$, but for historical reasons we don't write it that way.
|
H: Sum of standard deviations
Suppose I have two series:
$$A = \{a_1,...,a_n\}$$
$$B = \{b_1,...,b_n\}$$
And I define the series C as:
$$C = \{a_1,...,a_n,b_1,...,b_n\}$$
I am wondering if the standard deviation of $C$, $\sigma_C$, could be greater than the sum of the standard deviations of $A$ and $B$, that is, if it is possible that:
$$\sigma_C>\sigma_A+\sigma_B$$
I know that the relationship between the variances is:
$$\sigma_C^2=\frac12(\sigma_A^2+\sigma_B^2)+\left(\frac{\mu_A-\mu_B}2\right)^2$$
But I don't know how to extract conclusions from there. Does anybody know if this is possible? Many thanks in advance!
AI: Kind of a trivial example, but say: $A = \{1,1,1,1,1\}$ and $B = \{5,5,5,5,5\}$. Then $\sigma_A = \sigma_B = 0$. Letting $C = A \cup B$, then $\sigma_C = 2$, so evidently:
$$
2= \sigma_C > \sigma_A + \sigma_B = 0 + 0.
$$
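In numpy (my addition; `std` computes the population standard deviation, matching the formulas above):

```python
import numpy as np

A = np.ones(5)
B = 5 * np.ones(5)
C = np.concatenate([A, B])
print(A.std(), B.std(), C.std())   # 0.0 0.0 2.0
```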
|
H: Given three IID random variables $X_1$, $X_2$, and $X_3$, what is the probability that $X_1 < X_2 > X_3$?
Does the answer to this question depend on the distribution of the IID random variables? If so, what are the answers if we assume their distributions are $\bf{N}(\mu, \sigma^2)$ or $\textbf{Unif}(a, b)$?
AI: By symmetry:
$$P(X_1>\max\{X_2,X_3\})=P(X_2>\max\{X_1,X_3\})=P(X_3>\max\{X_1,X_2\})$$
If moreover the distribution is continuous then also:$$P(X_1>\max\{X_2,X_3\})+P(X_2>\max\{X_1,X_3\})+P(X_3>\max\{X_1,X_2\})=1$$so that: $$P(X_2>\max\{X_1,X_3\})=\frac13$$
This does not work if there are elements $x\in\mathbb R$ with $P(X_1=x)>0$.
If e.g. the $X_i$ are degenerate (almost surely constant), then: $$P(X_2>\max\{X_1,X_3\})=0$$
|
H: Integrate $\int_0^{\frac{\pi}{2}} \frac{dx}{{\left(\sqrt{\sin{x}}+\sqrt{\cos{x}}\right)}^4} $
I found a challenge problem and am confused$$\int_0^{\frac{\pi}{2}} \frac{dx}{{\left(\sqrt{\sin{x}}+\sqrt{\cos{x}}\right)}^4} $$
$u=\frac{\pi}{2}-x$ is no good, and squaring or taking the 4th power of the denominator does not help either. Suggestions?
AI: \begin{align}
\int_0^{\frac{\pi}{2}} \frac{1}{{\left(\sqrt{\sin{x}}+\sqrt{\cos{x}}\right)}^4}{\rm d}x &= \int_0^{\pi/2}\dfrac{\sec^2 x}{(\sqrt{\tan x} + 1)^4}{\rm d}x.
\end{align}
Denote the upper integral by $I$.
Put $u = \tan x$ to get
$$I = \int_0^\infty \dfrac{1}{(1 + \sqrt u)^4}{\rm d}u.$$
Put $u = t^2$ to get
\begin{align}
I &= \int_0^\infty\dfrac{2t}{(1 + t)^4}{\rm d}t.
\end{align}
Putting $v = t+1$, we get
\begin{align}
I &= \int_1^\infty \dfrac{2(v - 1)}{v^4}{\rm d}v\\
&= 2\int_1^\infty \left(\dfrac{1}{v^3} - \dfrac{1}{v^4}\right){\rm d}v\\
&= 2\left(\dfrac{1}{2} - \dfrac{1}{3}\right)\\
&= \boxed{\dfrac{1}{3}}.
\end{align}
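A numerical check of $1/3$ (my addition):

```python
import math
from scipy import integrate

val, _ = integrate.quad(
    lambda x: (math.sqrt(math.sin(x)) + math.sqrt(math.cos(x))) ** -4,
    0, math.pi / 2)
print(val)   # approximately 0.333333
```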
|
H: Prove that $\frac{1}{a^2}+\frac{1}{(a+1)^2}+\frac{1}{(a+2)^2}+\dotsm\infty=\frac{1}{a}+\frac{1}{2a(a+1)}+\frac{2!}{3a(a+1)(a+2)}+\dotsm\infty$
Question:- Prove that $$\frac{1}{a^2}+\frac{1}{(a+1)^2}+\frac{1}{(a+2)^2}+\dotsm\infty=\frac{1}{a}+\frac{1}{2a(a+1)}+\frac{2!}{3a(a+1)(a+2)}+\dotsm\infty$$
Nothing is mentioned in the question about the nature of $a$.
I wrote it in summation form, but I got stuck and was unable to proceed further.
$$\sum_{k=0}^{\infty}\frac{1}{(a+k)^2}=\sum_{n=0}^{\infty}\frac{n!}{(n+1)\prod_{k=0}^{n}(a+k)}$$
Then I took all the terms to the LHS in the hope that terms might cancel each other out to give zero, but that also doesn't help, since with each term the degree of both the numerator and the denominator increases.
Can anybody help me prove the result?
AI: Suppose that $\Re(a)>0$. Then we have
\begin{align*}
\sum\limits_{k = 0}^\infty {\frac{1}{{(a + k)^2 }}} & = \sum\limits_{k = 0}^\infty {\int_0^{ + \infty } {e^{ - (a + k)t} tdt} } = \int_0^{ + \infty } {e^{ - at} t\sum\limits_{k = 0}^\infty {e^{ - kt} } dt} = \int_0^{ + \infty } {e^{ - at} \frac{t}{{1 - e^{ - t} }}dt}
\\ & \mathop = \limits^{t = - \log s} \int_0^1 {s^{a - 1} \frac{{ - \log s}}{{1 - s}}ds} = \int_0^1 {s^{a - 1} \frac{{ - \log (1 + (s - 1))}}{{1 - s}}ds}
\\ & = \int_0^1 {s^{a - 1} \sum\limits_{k = 1}^\infty {( - 1)^{k - 1} \frac{{(s - 1)^{k - 1} }}{k}} ds} = \sum\limits_{k = 1}^\infty {\frac{1}{k}\int_0^1 {s^{a - 1} (1 - s)^{k - 1} ds} }
\\ & = \sum\limits_{k = 1}^\infty {\frac{1}{k}\frac{{\Gamma (k)\Gamma (a)}}{{\Gamma (a + k)}}} = \sum\limits_{k = 1}^\infty {\frac{{(k - 1)!}}{{ka(a + 1) \cdots (a + k - 1)}}} .
\end{align*}
Remark: The original sum is actually convergent provided that $a\neq 0,-1,-2,\ldots$ and its sum is the trigamma function
$$
\psi _1 (a) = \frac{{d^2 }}{{da^2 }}\log \Gamma (a).
$$
The series form after the transformations is convergent only when $\Re(a)>0$. It is called a factorial series expansion.
|
H: Does continuous and strictly increasing implies convex function?
Let $f:[0,\infty)\to [0,\infty)$ be a continous and strictly monotone increasing function and $f(0)=0$. Then prove or disprove that $f$ is a convex function.
My initial guess is that $f$ is a convex function, and I want to prove it.
I am unable to proceed! I'm not getting any idea of how to use continuity. Any hint?
AI: Take
$$
f(x) = \log(1+x)
$$
Then $f(0)=0$, $f'(x) = 1/(1+x)>0$, but $f''<0$, so $f$ is concave but satisfies all the hypotheses.
|
H: Are there examples of continuous, non-differentiable functions whose "rational derivative" exists?
Define the operator $\Delta_n$ according to the equation
$$\Delta_nf(x)=f\left(x+\frac1n\right)-f(x)$$
Observe that for differentiable $f:\Bbb{R}\to\Bbb{R}$
$$\frac{df}{dx}=\lim_{n\to\infty}n\Delta_nf$$
(Note: the limit can be evaluated from either side by changing the sign of $n$.)
This matters solely because it is easier to prove that the sequence $(n\Delta_nf)_{n\in\Bbb{N}}$ converges to some limit $L$ than it is to prove that $\lim_{h\to0}(f(x+h)-f(x))/h=L$ over the reals - so much so, that it's tempting to use this as the definition of the derivative.
So why isn't this the definition of the derivative?
The most significant reason that I can think of is that while the existence of the derivative implies the above equation the converse does not hold. It is possible to have a function such that the above sequence converges when the derivative does not exist. For example, take:
$$g(x)=\begin{cases}e^x & x\in\Bbb{Q}\\0 & \text{otherwise}\end{cases}$$
The sequence $n\Delta_ng(x)$ converges to $g(x)$ for all $x$, but $g$ is not continuous - hence, not differentiable - at any point of its domain.
This problem is easily resolved by adding the qualification "if $f$ is continuous at $x$," since this is a relatively simple condition to check in many cases. So the new definition of the derivative is as follows:
For a function $f:E\subseteq\Bbb{R}\to\Bbb{R}$, continuous at a point $x\in E$, the derivative of $f$ at $x$ exists and is equal to $\lim_{n\to\infty} n\Delta_nf(x)$ iff the sequence $(n\Delta_nf(x))_{n\in\Bbb{N}}$ is convergent.
This sounds correct, but it still leaves the possibility of pathological counterexamples. Continuous nowhere-differentiable functions come to mind, but for every example I can think of, the above sequence does not converge.
Are there any examples of a continuous, non-differentiable function s.t. $\lim_{n\to\infty} n\Delta_nf$ still converges?
AI: Let $f(x) = x\sin(\pi/x)$ if $x \neq 0$ and set $f(0) = 0$.
Then $f$ is continuous but not differentiable at the origin. But
$$
\Delta_n f(0) = \frac{\sin(\pi n)}{n} = 0,
$$
so the rational derivative exists and is zero.
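Numerically (my addition), $n\Delta_n f(0)$ is zero up to floating-point noise in $\sin(\pi n)$, while difference quotients along other sequences oscillate:

```python
import math

f = lambda x: 0.0 if x == 0 else x * math.sin(math.pi / x)
for n in (10, 100, 1000):
    print(n * (f(1 / n) - f(0)))        # approximately 0 (exactly sin(pi*n))
print([f(h) / h for h in (2 / 5, 2 / 9, 2 / 13)])   # = 1.0 along h = 2/(4k+1)
```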
|
H: How do I find $f(1)$ and $f'(1)$ if $2x+3y=5$ is the tangent of $f(x)$ at $x=1$?
Find $f(1)$ and $f'(1)$ if $2x+3y=5$ is the tangent of $f(x)$ at $x=1$.
Is this correct:
$$2x+3y=5$$
$$3y=5-2x$$
$$y=5/3-(2/3)x$$
From here I get that $f'(1) = -2/3$, but I am not sure how to continue for $f(1)$. If I just substitute $x=1$ into $y=5/3-(2/3)x$, I get $y=5/3-2/3=3/3=1$, but I am not sure if this is correct.
AI: You are correct:
If $f$ is differentiable at $x_0$, the tangent line of $f$ at $x_0$ is by definition the line $y = f'(x_0) (x-x_0) + f(x_0) $. So if the line $y = \alpha x + \beta$ is the tangent line of $f$ at $x = x_0$ then
$$
f'(x_0) = \alpha
$$
and
$$
f(x_0) = \alpha x_0 + \beta.
$$
|
H: What is the meaning of this symbol $\ll_d$?
I apologize for the simple question but I'm reading a paper "On the Convex Hull Of The Integer Points In A Disc" and I'm confused by some notation. They say
$$\# \textrm{ vertices of }P \ll_{\;d} (\operatorname{vol} P)^{\frac{d-1}{d+1}}$$
And I'm unfamiliar with the meaning of $\ll_{\;d}$. Here, $P \subset R^d$ is a convex polytope with integral vertices and nonempty interior.
AI: It means that the left side is bounded by a constant $C_{d} > 0$ times the right side, and the constant $C_{d}$ depends only on the number $d$ and nothing else.
Sometimes this is written instead with "big O" notation $x= O_{d}(y)$ (or, equivalently, $x\leq O_{d}(y)$).
Added later by request: A source for this notation is the wikipedia article:
https://en.wikipedia.org/wiki/Big_O_notation#History_(Bachmann%E2%80%93Landau,_Hardy,_and_Vinogradov_notations)
The link talks about the meaning of $\ll$ as the same thing as big O. This is also called "Vinogradov's notation". It's common practice to put subscripts if the implicit constant in the big O depends on some parameters.
It's probably good to caution that sometimes people interpret $\ll$ as "much much less than" which might make one think of "little o" notation. So maybe it's always good to read carefully. But the article in the question explicitly says Vinogradov's notation.
Here's another good article which talks about these issues:
https://faculty.math.illinois.edu/~hildebr/595ama/ama-ch2.pdf
Sections 2.1.5, 2.1.6, and 2.1.7 are relevant to everything I've said.
|
H: Calculating maximum area of trapezoid
With a $40\mathrm{m}$ long fence, a trapezoidal region is to be enclosed against an existing wall (the wall forms one base). What is the largest area that can be created? How can I calculate this?
I've tried making all $3$ fenced sides equal to $\frac{40}3$ and splitting the region into $3$ triangles. I also tried the following, using only $3$ sides from the $40\mathrm{m}$:
Let the top side opposite the wall be $x\,\mathrm{m}$. The other two sides are therefore $\frac{40 - x}2$. The area, $A$, of the trapezoid is therefore $x\cdot\frac{40 - x}2 = 20x - \frac{x^2}2$. After that, derivatives etc. But I'm not sure. (Isosceles trapezoid.)
AI: Knowing that the region to be enclosed must be a trapezoid, the middle segment must be parallel to the existing wall. Since the formula for the area of a trapezoid is $$A = \frac{h}{2}(b_1 + b_2),$$ the area is unchanged if the bases and the height are held constant and the two bases are merely "shifted" relative to each other. So for a trapezoid of given fixed area, the one that minimizes the perimeter is the one whose lateral sides, which we will call $l_1$ and $l_2$, are equal--that is to say, the trapezoid is isosceles. Reversing this reasoning, it follows that for a fixed length of fencing, we only ever need to consider dividing the fencing into three lengths, say $x, y, z$, such that $x = z$, since any other choice with $x \ne z$ yields a trapezoid with inferior area. Since $x + y + z = 40$, this gives us a second constraint, so there is one free variable to consider, say $y$, that uniquely determines how the fence is cut.
However, for a given partitioning of the fencing, there are numerous ways to place it against the wall to enclose a trapezoidal area. At one extreme, we can simply lay it flat against the wall and enclose no area. At the other, we can put the ends of the fence as close to each other as possible along the wall, which, depending on whether $2x < y$, might enclose no area, or a triangular area. This suggests letting the angle $\theta$ be the internal angle between the wall and one of the lateral sides. We have the base $b_2 = y$, the two lateral sides $x = (40 - y)/2$, and we need to solve for the height $h$ and wall-side base $b_1$ for a given angle $\theta$.
Trigonometry gives us $$\sin \theta = \frac{h}{x}, \\ b_1 = b_2 + 2 x \cos \theta.$$ So the area as a function of $y$ and $\theta$ is $$A(y, \theta) = \frac{40-y}{2} \sin \theta \left(y + \frac{40-y}{2} \cos \theta\right).$$
Now I leave the rest as an exercise to compute the values of $y, \theta$ such that $A$ is maximized.
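If you want to check your work, here is a numerical sketch using scipy.optimize (the starting point and bounds are my choices); it lands on $y = 40/3$, $\theta = 60^\circ$, i.e. half a regular hexagon, with area $400\sqrt3/3 \approx 230.94$:

```python
import numpy as np
from scipy.optimize import minimize

def neg_area(v):
    y, theta = v
    x = (40 - y) / 2                                  # each lateral side
    return -(x * np.sin(theta) * (y + x * np.cos(theta)))

res = minimize(neg_area, x0=[10.0, 1.0], bounds=[(0, 40), (0, np.pi)])
print(res.x, -res.fun)   # ~[13.33, 1.047]: y = 40/3, theta = 60 deg, area ~ 230.94
```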
|
H: If $\sum_{i=k}^n {n \choose i} p^{i}(1-p)^{n-i} \approx 0.05$, how can we find $k$?
Let $n$ be any natural number, let $k\in\{0, \dots, n\}$, and let $p \in [0, 1]$.
If $\sum_{i=k}^n {n \choose i} p^{i}(1-p)^{n-i} \approx 0.05$, how can we find $k$ (in terms of $n$ and $p$)?
AI: In some cases you can use a normal approximation with continuity correction and say $$\Phi\left(\frac{k-0.5 - np}{\sqrt{np(1-p)}}\right) \approx 1-0.05$$
Since $\Phi(1.644854) \approx 0.95$ you could then say in these cases $$k \approx np+0.5+ 1.644854\sqrt{np(1-p)}$$
As an example, suppose $n=40$ and $p=\frac14$. Then this suggests $k \approx 15$
As a check $\sum\limits_{i=15}^{40} {40 \choose i} \left(\frac14\right)^i\left(\frac34\right)^{40-i} \approx 0.054$ so this is not a bad approximation; $k=14$ would give about $0.103$ while $k=16$ would give about $0.026$
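This comparison is easy to reproduce (a sketch, assuming scipy is available; `binom.sf(k-1, n, p)` returns the upper tail $P(X\ge k)$):

```python
from scipy.stats import binom

n, p = 40, 0.25
k_approx = n * p + 0.5 + 1.644854 * (n * p * (1 - p)) ** 0.5
print(k_approx)                        # ~15.0, so take k = 15
for k in (14, 15, 16):
    print(k, binom.sf(k - 1, n, p))    # P(X >= k): ~0.103, ~0.054, ~0.026
```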
|
H: Eigenvalues of $p(A)$
Let $\lambda$ be an eigenvalue of a matrix $A$. I am trying to show that every eigenvalue of $p(A)$ is of the form $p(\lambda)$ for some eigenvalue $\lambda$ of $A$, where $p(x)$ is any polynomial.
I have been able to show that $p(\lambda)$ is an eigenvalue of $p(A)$. But how to show that these will be the only eigenvalues?
For instance, if eigenvalues of $A$ are $1$ and $-1$.
Then I know that 1 is an eigenvalue of $A^2$ but how to show that there is no other eigenvalue?
AI: I assume $A$ is a complex $n \times n$ matrix.
Then $A$ is similar to an upper triangular matrix $D$ whose diagonal coefficients are (in order) $\lambda_1, \dots, \lambda_n$, the eigenvalues of $A$ (with repetition), so there exists an invertible (complex) matrix $P$ such that $A = P^{-1} D P$.
Then, writing $p = \sum_k \alpha_k X^k$ we have
$$
p(A) = \sum_k \alpha_k (P^{-1} D P)^k = \sum_k \alpha_k P^{-1} D^k P = P^{-1} \left(\sum_k \alpha_k D^k \right) P = P^{-1} p(D) P.
$$
Now it suffices to see that $p(D)$ is upper triangular and that its diagonal coefficients are $p(\lambda_1), \dots, p(\lambda_n)$. Indeed, since $D$ is upper triangular, $D^k$ is also upper triangular and its diagonal coefficients are $\lambda_1^k, \dots, \lambda_n^k$, and the result follows by linearity.
Since $p(A)$ is similar to an upper triangular matrix whose diagonal coefficients are $p(\lambda_1), \dots, p(\lambda_n)$, the spectrum of $p(A)$ is $\{p(\lambda_1), \dots, p(\lambda_n)\}$.
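A quick numerical illustration of your example (a sketch, assuming numpy): take $A$ with eigenvalues $1, -1$ and $p(x) = x^2$.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])            # eigenvalues +1 and -1
print(np.linalg.eigvals(A))           # -> [ 1. -1.]
print(np.linalg.eigvals(A @ A))       # -> [1. 1.], i.e. spectrum {p(1), p(-1)} = {1}
```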
Hope this helps!
|
H: Integrate $\int_{-\infty}^{\infty} \frac{e^{2020x}-e^{x}}{x\left(e^{2020x}+1\right)\left(e^x+1\right)} \mathop{dx}$
A challenge problem asks to evaluate $$\int_{-\infty}^{\infty} \frac{e^{2020x}-e^{x}}{x\left(e^{2020x}+1\right)\left(e^x+1\right)} \mathop{dx}$$
I thought $u=-x$ would help, but I just get $I$ back, so the integrand is even. I also tried a partial fraction decomposition to get $$\int_{-\infty}^{\infty} -\frac{1}{x\left(e^{2020x}+1\right)} + \frac{1}{x\left(e^{x}+1\right)} \mathop{dx}$$
Now what? Help please, thanks.
AI: Instead of trying to use partial fraction decomposition, how about trying to introduce a parameter, $a$, inside the integral? Sometimes when there are integrals with strange numbers, like $2020$ in this case, the integral can be generalized.
$$I(a)=\int_{-\infty}^{\infty} \frac{e^{ax}-e^x}{x\left(e^{ax}+1\right) \left(e^x+1\right)} \; \mathrm{d}x$$
Now, factor out the terms independent of $a$, and differentiate both sides with respect to $a$:
\begin{align*}
I'(a)&=\int_{-\infty}^{\infty} \frac{1}{x \left(e^x+1\right)} \cdot \frac{x e^{ax}\left(e^{ax}+1\right)-xe^{ax}\left(e^{ax}-e^x\right)}{{\left(e^{ax}+1\right)}^2} \; \mathrm{d}x \\
&=\int_{-\infty}^{\infty} \frac{e^{ax}}{{\left(e^{ax}+1\right)}^2} \; \mathrm{d}x \\
&=-\frac{1}{a\left(e^{ax}+1\right)} \bigg \rvert_{-\infty}^{\infty} \\
&=\frac{1}{a}\\
\end{align*}
Integrate both sides with respect to $a$:
$$I(a)=\ln{a}+C$$
Notice that $I(1)=0$.
$$0=\ln{1}+C \implies C=0$$
Therefore, the integral you posted evaluates to:
$$I(2020)=\boxed{\ln{2020}}$$
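As a numerical sanity check (a sketch, assuming scipy is available): the integrand is even, and the partial-fraction form from the question can be written with the numerically stable `expit` (since $\frac{1}{e^{cx}+1}=\operatorname{expit}(-cx)$), avoiding overflow from $e^{2020x}$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit

a = 2020
g = lambda x: (expit(-x) - expit(-a * x)) / x   # even; removable singularity at 0
val, _ = quad(g, 1e-12, 50, limit=200)
print(2 * val, np.log(a))   # both ~ 7.6109
```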
|
H: Prove that the negation of the continuum hypothesis implies the existence of a subset of R such that...
Prove that the negation of the continuum hypothesis implies that there exists $A⊂R$ such that $ℵ_0<|A|<|R|$.
The negation of the hypothesis implies the existence of a set $B$ such that $ℵ_0<|B|<|R|$, but how can I create a subset of $R$ from it?
AI: Big HINT: By definition $|B|<|\Bbb R|$ means that there is an injection $f:B\to\Bbb R$ (but no bijection).
|
H: Looking for a symmetric matrix
Do you know a method to find a particular $2 \times 2$ symmetric matrix $M$ with rational coefficients knowing that $\lambda = \sqrt 2$ is an eigenvalue of $M$?
Many thanks.
AI: Let's go for a $2 \times 2$ matrix. If $A$ has rational coefficients, its minimal polynomial $\mu$ will have rational coefficients, and since $\sqrt{2}$ is an eigenvalue of $A$, so must $-\sqrt{2}$ be (the minimal polynomial of $\sqrt2$ over $\Bbb Q$ is $x^2-2$, which divides $\mu$, and $\deg \mu \le 2$). That means $\mu = x^2-2$, from which we get $\mathrm{det}(A)=-2$ and $\mathrm{tr}(A)=0$. Moreover, $A$ is symmetric, so of the form
$$\begin{pmatrix} a & b \\b & c\end{pmatrix}$$ and our conditions imply $c=-a$ and $-a^2-b^2=-2$ which gives us a possible solution
$$\begin{pmatrix} 1 & 1 \\1 & -1\end{pmatrix}$$
Now if $n$ is larger than $2$ simply make it a block diagonal matrix with your favorite symmetric, rational matrix of size $(n-2) \times (n-2)$
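A quick check of the $2\times2$ solution (a sketch, assuming numpy):

```python
import numpy as np

M = np.array([[1.0, 1.0],
              [1.0, -1.0]])
print(np.linalg.eigvalsh(M))   # -> [-1.41421356  1.41421356], i.e. +/- sqrt(2)
```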
|
H: Prove that $\int_{0}^{\frac{π}{2}}(\log(\tan x))^{2n}dx=\left (\frac {π} {2} \right )^{2n+1} \left ( \frac {d^{2n} \sec(z)}{d z^{2n}} \right ) _{z=0}$
Question: Prove that for $n\in \Bbb{Z}^{+}$
$$\int_{0}^{\frac{π}{2}}(\log (\tan x))^{2n}dx=\left (\frac {π} {2} \right )^{2n+1} \left ( \frac {d^{2n} \sec(z)}{d z^{2n}} \right ) _{z=0}$$
I used the Fourier expansion of the $\log(\tan x)$ function:
$$\log(\tan x)=-2\sum_{k=0}^{\infty}\frac{\cos(2(2k+1)x)}{2k+1}$$ for $ x\in(0,\frac{π}{2})$,
which makes the summation hard to evaluate. Integration by parts doesn't work either, since the integrals become too unwieldy to yield a reduction formula. I couldn't figure out any other method to proceed further.
AI: Famously (using these),$$\int_0^{\pi/2}\tan^{2s-1}xdx=\frac12\operatorname{B}(s,\,1-s)=\frac12\Gamma(s)\Gamma(1-s)=\frac{\pi}{2}\csc(\pi s),$$so$$\int_0^{\pi/2}\tan^{2z/\pi}xdx=\frac{\pi}{2}\csc\left(\frac{\pi(2z/\pi+1)}{2}\right)=\frac{\pi}{2}\sec z.$$Finally, apply $\left(\frac{\pi}{2}\frac{d}{dz}\right)^{2n}=\left(\frac{\pi}{2}\right)^{2n}\frac{d^{2n}}{dz^{2n}}$ before setting $z=0$.
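A sanity check of the $n=1$ case (a sketch, assuming scipy): since $\left(\frac{d^{2}\sec z}{dz^{2}}\right)_{z=0}=1$, the formula predicts $\left(\frac{π}{2}\right)^{3}=\frac{π^{3}}{8}\approx3.8758$.

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.log(np.tan(x)) ** 2, 0, np.pi / 2, limit=200)
print(val, np.pi ** 3 / 8)   # both ~ 3.8758
```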
|
H: Getting a formula for the height of a vertical line at different positions in a coordinate system
I want to find the height of a vertical line at different positions. I need a formula for calculating it. Please help me.
AI: $$H=mx+\frac{1}{2}h$$
where $m$ is the slope of the line.
|
H: Pushforward of a Vector Field by a Diffeomorphism
A question concerning when the pushforward of a vector field is well-defined. If $ F:N \rightarrow M $ is a smooth map between manifolds, the pushforward of a tangent vector $ X_p \in T_pN $ is given by $ F_{*,p}:T_pN \rightarrow T_{F(p)}M $, where $ \big(F_{*,p}(X_p)\big)f := X_p(f \circ F) $ for $ f \in C^\infty_{F(p)}(M) $. However, (I think?) I understand that the pushforward of a smooth section $ X:N \rightarrow TN $ of the bundle $ (TN,N,\pi) $ doesn't generally exist, as the pointwise definition of the bundle map $ F_*:TN \rightarrow TM $, with $ (F_*X)_{F(p)} = F_{*,p}(X_p) $, is ambiguous if $ F $ isn't one-to-one.
That being said, the references I've checked all uniformly state that a necessary and sufficient condition for this pushforward to exist is that $ F $ is a diffeomorphism, but this seems like overkill to me. If $ F $ is smooth and a bijection, isn't a smooth homeomorphism a sufficient condition? Furthermore, does $ F $ even need to be onto, as long as we don't care about points in $ M $ outside of the image of $ F $? Does $ F $ only need to be smooth and one-to-one, and that's it? I guess I don't see why we need $ F^{-1} $ to be smooth to pull this off. I'd normally assume I'm being too picky, but most proofs concerning such vector fields seem to begin with something like "Let $ F $ be a diffeomorphism...", so I assume I'm missing something critical.
AI: Suppose that $F$ were a smooth bijection $N\to U\subset M$, where $U$ is some open subset of $M$. Then for a (smooth) vector field $X\colon N\to TN$, we can define a map $F_*X\colon U\to TU$ by
$$
F_*X\colon q\mapsto(F_*X)_q = dF_{F^{-1}(q)}(X_{F^{-1}(q)}).
$$
This is well-defined as a set map by our assumption that $F$ is bijective onto its image. From the explicit formula $F_*X = dF\circ X\circ F^{-1}$, where $dF\colon TN \to TU$ is the bundle map induced by $F$, we see that $F_*X$ is continuous iff $F^{-1}$ is continuous (i.e., $F$ is a homeomorphism onto its image), and that $F_*X$ is smooth iff we suppose further that $F^{-1}$ is smooth (i.e., $F$ is a diffeomorphism onto its image). So we see that the regularity of $F_*X$ is equivalent to the regularity of $F^{-1}$.
|
H: Divisibility with factorials
Find all positive integers $n$, less than 17, for which $n!+(n+1)!+(n+2)!$ is an integral multiple of 49.
I tried to factor the expression, but I am not having any luck.
AI: $\textbf{Hint:}$
$n!+(n+1)!+(n+2)!=n! \times (n+2)^2$
After that I think you can quite easily find the answers which are:
$5,12,14,15,16$
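A brute-force confirmation over the stated range (minimal sketch):

```python
from math import factorial

print([n for n in range(1, 17)
       if (factorial(n) + factorial(n + 1) + factorial(n + 2)) % 49 == 0])
# -> [5, 12, 14, 15, 16]
```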
|
H: Diagonalizable matrix is similar to a diagonal matrix with its eigenvalues as the diagonal entries
My book defines a diagonalizable matrix as follows:
A matrix $A$ is diagonalizable if it is similar to a diagonal matrix, say $D$. So there exists an invertible matrix $P$ such that $A = PDP^{-1}$.
Now let the eigenvalues of a diagonalizable matrix $A$ be $\lambda_1, \lambda_2,\dots,\lambda_n$.
How do I show that $A$ is similar to a diagonal matrix with $\lambda_1, \lambda_2,\dots,\lambda_n$ as its diagonal entries?
AI: If $A$ and $D$ are similar, then they have the same characteristic polynomials. But the characteristic polynomial of $A$ is $(\lambda_1-x)(\lambda_2-x)\ldots(\lambda_n-x)$ and, if$$D=\begin{bmatrix}\mu_1&0&0&\ldots&0\\0&\mu_2&0&\ldots&0\\0&0&\mu_3&\ldots&0\\\vdots&\vdots&\vdots&\ddots&\vdots\\0&0&0&\ldots&\mu_n\end{bmatrix},$$then the characteristic polynomial of $D$ is $(\mu_1-x)(\mu_2-x)\ldots(\mu_n-x)$. Since $(\lambda_1-x)(\lambda_2-x)\ldots(\lambda_n-x)=(\mu_1-x)(\mu_2-x)\ldots(\mu_n-x)$, the multisets $\{\mu_1,\ldots,\mu_n\}$ and $\{\lambda_1,\ldots,\lambda_n\}$ coincide. After reordering the $\mu_i$ (which replaces $D$ by a similar diagonal matrix, since permuting the diagonal entries is a similarity via a permutation matrix), we conclude that $A$ is similar to the diagonal matrix with $\lambda_1, \lambda_2,\dots,\lambda_n$ as its diagonal entries.
|