H: Conditions that a function is analytic in the complex plane of its independent variable?
I do not have a mathematics undergraduate background, so I am very sorry if this question is too naive.
Consider a simple example: $f(x)=\vert x \vert^3$ and $g(x)=x^3$ where $x\in \mathbb{C}$. Why is $f(x)$ not analytic in the complex $x$ plane, while $g(x)$ is an analytic function in the entire complex plane of $x$? Or, what are the conditions under which a function is analytic in the complex plane of its independent variable?
Please explain in as much detail as possible, but please do not use too much jargon. Thank you very much.
AI: Start with the definition of analytic. A function $f$ of a complex variable $z$ is analytic at $z \in \mathbb C$ if it is differentiable at $z$, which means
$$\begin{align} \frac{f(z+h) - f(z)}{h} \tag 1 \end{align}$$
has a unique limit as $\lvert h \rvert \to 0$, denoted $f'(z)$. The limit has to exist regardless of how and in which direction $h$ approaches zero.
This is a strong requirement and it forces $f(z)$ to satisfy the Cauchy–Riemann equations.
These are obtained as follows: write $z=x+iy$ and $f(z) = u(x,y)+iv(x,y)$ and consider the complex derivative when $h = \delta x$ and $h = i\delta y$ for real $\delta x,\delta y$. If $f$ is required to be analytic at $z$ then both must be the same, so we obtain,
$$\frac{\partial f}{\partial x} = f'(z) = -i \frac{\partial f}{\partial y}.$$
Now write this in terms of $u,v$ to obtain the Cauchy-Riemann equations,
$$ \begin{align}
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \quad
\frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}. \tag 2
\end{align}$$
When applied to $\lvert z \rvert^3$ these break down. We have $u(x,y) = (x^2+y^2)^{3/2} $ and $v(x,y) = 0$. It is not difficult to see then that $(2)$ will only be satisfied by exception, when $x = y = 0$. Thus $\lvert z \rvert^3$ cannot be analytic except at the single point $z = 0$.
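For anyone who wants to see the direction-dependence concretely, here is a quick numerical sketch in Python (the test point $z=1$ and the step size are arbitrary choices of mine):

```python
f = lambda z: abs(z) ** 3   # f(z) = |z|^3

z = 1.0 + 0.0j
for h in (1e-6, 1e-6j):     # approach along the real axis, then the imaginary axis
    print(h, (f(z + h) - f(z)) / h)
# The two difference quotients disagree (about 3 versus about 0),
# so the complex derivative does not exist at z = 1, as the
# Cauchy-Riemann argument predicts.
```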
I hope this is useful.
|
H: Smooth function with non-zero derivative having bounded number of zeros in a compact interval?
Let $f: \mathbb{R} \to \mathbb{R}$ be a smooth function. Suppose $f^{(n)}(a) \neq 0$ for some $n \geq 2$.
I was wondering is it possible to prove that there exists $\epsilon > 0$ such that
$f$ only has finitely many zeros in the interval $[a-\epsilon, a+ \epsilon]$?
If $n=1$ this is clear.
The function given in the answer in Smooth function with all derivatives zero is an example of a non-constant smooth function which has infinitely many zeros
in $[0-\epsilon, 0 + \epsilon]$ for all $\epsilon > 0$. But this example satisfies $f^{(n)}(a) = 0$ for all $n$. I was wondering if we can make the number of zeros to be finite with some kind of non-zero derivative assumption. Thank you.
AI: Yes, this follows in the usual way. There is some minimal $k$ so that $f^{(k)}(a)\ne 0$; then, by the continuity of that derivative, there is some $\epsilon>0$ so that $|f^{(k+1)}(x)(x-a)|<\frac12|f^{(k)}(a)|$ for $x\in(a-\epsilon,a+\epsilon)$. From that it follows that $f$ is close enough to its lowest-order non-trivial Taylor term (a monomial) to conclude that there are no roots in that interval for $k=0$, or a $k$-fold root at $x=a$ for $k>0$ and no other roots.
It is perhaps easier to just divide out the linear factors, $f(x)=(x-a)^kg(x)$ with $g(a)=\frac1{k!}f^{(k)}(a)$, so that roots of $f$ other than $a$ are also roots of $g$. Then argue by the continuity of $g$ and $g(a)\ne 0$ that $a$ is an isolated root (or that there are no roots at all close to $a$).
|
H: Does the integral of a function exist at a sharp point in the function?
This is a pretty basic and easy question to answer, but I am not certain about its answer (I am still studying at highschool).
Let's say we have a function with some sharp point and we want to find its integral, such as $f(x)=|1-x|$ (the answer will definitely involve defining the integral as a piecewise function). Since there's a sharp point at $x=1$, would that value be included in the domain of the integral of the function?
The answer in my book to this exercise (and others which are almost the same) does include those values in the domain of the integral, but this doesn't make much sense, does it? On what grounds would that be the correct solution? I thought sharp points cannot possibly be differentiated nor integrated over. I hope I have expressed my problem clearly. Thanks in advance.
AI: A fundamental difference between differentiation and integration is that differentiation is performed at specific points, while integration is done over a region. So while you are correct that a function cannot be differentiated at "sharp points" (more formally, points where the function is not differentiable), there is nothing to stop you from integrating over sharp points (so long as your function is integrable on whatever domain you're integrating over).
Though harder to actually compute, integration is, in a sense, nicer than differentiation. A differentiable function is necessarily continuous, but this is not the case for integrable functions - in fact, any continuous function is integrable. So even if you do have "sharp points", so long as your function is continuous (and even if it's not in pretty much every case you'll ever encounter in high school), you can integrate over any points at which your function isn't differentiable.
Practically, if your domain of integration includes the point $x=1$, then that just means you'll need to split up your integral. For example:
$$\int_0^2|1-x|\,dx=\int_0^1(1-x)\,dx+\int_1^2(x-1)\,dx.$$
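A quick numerical sanity check of that split (a sketch using scipy; the exact value of both sides is $1$):

```python
from scipy.integrate import quad

whole, _ = quad(lambda x: abs(1 - x), 0, 2)   # integrate straight across the corner
left, _ = quad(lambda x: 1 - x, 0, 1)         # piece on [0, 1]
right, _ = quad(lambda x: x - 1, 1, 2)        # piece on [1, 2]
print(whole, left + right)                    # both print 1.0: the kink at x = 1 causes no trouble
```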
|
H: Addition-related property of the determinant of a $2 \times 2$ block matrix
For $2 \times 2$ matrices,
$$\det \begin{bmatrix}
a&b \\
c+e&d
\end{bmatrix} =\det\begin{bmatrix}
a&b \\
c&d
\end{bmatrix}+ \det \begin{bmatrix}
a&b \\
e&0
\end{bmatrix}$$
If $A$ is $m \times m$ matrix and $D$ is $n\times n$ matrix, is the following correct?
$$\det \begin{bmatrix}
A&B \\
C+E&D
\end{bmatrix} =\det\begin{bmatrix}
A&B \\
C&D
\end{bmatrix}+ \det \begin{bmatrix}
A&B \\
E&0
\end{bmatrix}$$
If it is correct, please show me a proof.
AI: In general, no. Suppose that $A$, $B$, $C$, and $D$ are $2\times2$ matrices, with $A=B=D=\left[\begin{smallmatrix}1&0\\0&1\end{smallmatrix}\right]$. Take $C=\left[\begin{smallmatrix}c_1&c_2\\c_3&c_4\end{smallmatrix}\right]$ and $E=\left[\begin{smallmatrix}e_1&e_2\\e_3&e_4\end{smallmatrix}\right]$. Then\begin{multline}\det\begin{bmatrix}A&B\\C+E&D\end{bmatrix}=\\=1 - c_1 - c_2 c_3 - c_4 + c_1 c_4 - e_1 + c_4 e_1 - c_3 e_2 - c_2 e_3 - e_2 e_3 - e_4 + c_1 e_4 + e_1 e_4.\end{multline}But$$\det\begin{bmatrix}A&B\\C&D\end{bmatrix}=1 - c_1 - c_2 c_3 - c_4 + c_1 c_4$$and$$\det\begin{bmatrix}A&B\\E&0\end{bmatrix}=e_1 e_4 - e_2 e_3.$$
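A quick random check of this counterexample (a sketch with numpy; the block size and entries are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2
A = B = D = np.eye(n)
C = rng.standard_normal((n, n))
E = rng.standard_normal((n, n))

def det_block(tl, tr, bl, br):
    # determinant of the 2x2 block matrix [[tl, tr], [bl, br]]
    return np.linalg.det(np.block([[tl, tr], [bl, br]]))

lhs = det_block(A, B, C + E, D)
rhs = det_block(A, B, C, D) + det_block(A, B, E, np.zeros((n, n)))
print(lhs, rhs)   # generically different, so the proposed identity fails
```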
|
H: On the use of Weierstrass' M-test for uniform convergence of series including unbounded terms
Let $A$ be a subset of $\mathbb{R}$ and for each integer $k\in\mathbb{N}$ consider a sequence of functions $\{f_k(x)\}_{k=1}^\infty$ defined on the set $A$. Suppose that there is an integer $n^*$ such that $\sup_{x\in A}|f_k(x)|\leq M_k$ for every $k>n^*$ and that $\sum_{k=n^*+1}^\infty M_k<\infty$. Hence by the Weierstrass M-test the series $\sum_{k=n^*+1}^\infty f_k(x)$ converges uniformly (and absolutely) on the set $A$.
Now if all (or some) of the functions $\{f_k(x)\}_{k=1}^{n^*}$ are unbounded on the set $A$, can we still say that the whole series $\sum_{k=1}^\infty f_k(x)$ converges uniformly (and absolutely) on the set $A$? I saw many comments on this site (answering questions related to the use of the Weierstrass M-test) saying that when we remove the first finitely many unbounded terms from the series and apply the Weierstrass M-test, if the rest of the series is uniformly convergent (by the Weierstrass M-test), then the whole series is uniformly convergent, too. If so, how can we arrive at this result? (Because we might run into the case $\infty-\infty$ when removing unbounded terms.)
(By the way, in this case, the Cauchy Criterion would be sufficient to conclude that the whole series is uniformly convergent on the set $A$. Is this correct?)
AI: Suppose $ \sum\limits_{k=n^{*}+1}^{\infty} f_k(x)$ converges uniformly to $G(x)$. Let $F(x)= \sum\limits_{k=1}^{n^{*}}f_k(x)$. Consider $| \sum\limits_{k=1}^{N} f_k(x)-(F(x)+G(x))|$ where $N >n^{*}$. This is the same as $| \sum\limits_{k=n^{*}+1}^{N} f_k(x)-G(x)|$ because the first $n^{*}$ terms simply cancel out. Now just apply the definition of uniform convergence of $ \sum\limits_{k=n^{*}+1}^{\infty} f_k(x)$ to $ G(x)$ to complete the proof.
|
H: About differential equations
I do not understand the concept of a differential. For example, [the equation in question was shown in an image]:
What have both sides been differentiated by to get this result? I understand that $\frac{d(z^2)}{dz}$ gives $2z$, but where has the extra $dz$ come from? Also, what has the side with $\theta$ been differentiated by to get this result? It cannot be $dz$. If it is $d\theta$, how can you differentiate with respect to one thing on one side of an equation and another thing on the other side? Can someone explain this, and how did $dz$ and $d\theta$ come into the equations?
AI: From the equation you have shown it looks like $z$ depends on $\theta$, so they have differentiated both sides with respect to $\theta$ and then multiplied both sides by $d\theta$.
To differentiate the left hand side you need to use the chain rule since you are differentiating with respect to $\theta$, not $z$.
|
H: What is the answer to this question?
A cylinder with closed ends has a total surface area $S$; the radius of the base is $a$ and the height is $ka$. Find an expression for $a$ in terms of $S$ and $k$.
The expression I get cancels down to $√a^2=a$ which seems obvious.
AI: HINT: The total surface area $S$ of a closed cylinder with radius $a$ & height $ka$ is given as
$$S=2\pi (a)(ka)+2\pi a^2$$
|
H: Confidence interval for $\theta$ when $X_i$'s are i.i.d $N(\theta,\theta)$
Let $X_i$ be i.i.d. r.v. with $N(\theta,\theta)$
I calculated $$E[\bar{X_n}] = \theta$$ $$Var[\bar{X_n}] = \theta/n$$
And I want to construct a confidence interval $I_{\theta}$ that is centered around $\bar{X_n}$ such that $P(\theta \in I_{\theta}) = 0.9$ for all $n$, where $q_{\alpha/2}=q_{0.05}=1.6448$.
I am using the general formula: $$\bar{X} \pm Z \frac{s}{\sqrt{n}} = \bar{X_n} \pm 1.6448 \frac{\sqrt{\operatorname{Var}[\bar{X_n}]}}{\sqrt{n}} = \bar{X} \pm 1.6448 \frac{\sqrt{\theta}/\sqrt{n}}{\sqrt{n}} = \bar{X} \pm 1.6448 \frac{\sqrt{\theta}}{n}$$
But it seems I am getting it wrong here. Any input would be appreciated. It seems as there is a simple solution to this.
AI: The statistic to be used is, as usual,
$\bar{X}_n\sim N(\theta;\frac{\theta}{n})$
The pivotal quantity is the following
$$ \bbox[5px,border:2px solid red]
{
\frac{\bar{X}_n-\theta}{\sqrt{\theta}}\sqrt{n}\sim \Phi
\qquad (1)
}
$$
So you can find your Confidence interval solving this double inequality with respect to $\theta$
$$-1.64 \leq \frac{\bar{X}_n-\theta}{\sqrt{\theta}}\sqrt{n} \leq 1.64$$
this leads to the exact solution:
$$ \bbox[5px,border:2px solid black]
{
\bar{X}_n+\frac{1.64^2}{2n} \pm \sqrt{(\frac{1.64^2}{2n})^2+ \bar{X}_n \frac{1.64^2}{n} }
\qquad (2)
}
$$
If you want to find an approximate solution in a very easy way, you can substitute $\sqrt{\theta}$ with its estimate, $\sqrt{\bar{X}_n}$, and you immediately find the following CI from (1)
$$ \bbox[5px,border:2px solid black]
{
\bar{X}_n \pm 1.64 \sqrt{ \frac{\bar{X}_n}{n}}
\qquad (3)
}
$$
So my conclusion is the following:
If you are interested in finding a CI for $\theta$ centered around $\bar{X}_n$, the only option is to calculate an approximate interval.
Calculations from (1) to (2)
$-z<\frac{ \bar{X}-\theta}{\sqrt{\theta}}\sqrt{n}<z$
$( \bar{X}-\theta)^2<\frac{z^2\theta}{n}$
$\bar{X}^2-2\bar{X}\theta+\theta^2-\frac{z^2\theta}{n}<0$
$\theta^2 -\theta(2\bar{X}+\frac{z^2}{n})+\bar{X}^2<0 $
$\theta=\frac{2\bar{X}+\frac{z^2}{n}\pm\sqrt{4\bar{X}^2+\frac{z^4}{n^2}+\frac{4\bar{X}z^2 } {n} -4\bar{X}^2 }}{2} $
$\theta=\bar{X}+\frac{z^2}{2n}\pm\sqrt{(\frac{z^2}{2n})^2+\frac{\bar{X}z^2 } {n} } $
1) The notation $\theta=a\pm b$ is a standard notation used in confidence intervals. See here for an example. Anyway, it is just a matter of notation.
2) On the contrary, this one is, in effect, a problem. At first sight I would have solved the exercise with my formula (2), which is correct:
$$ \bbox[5px,border:2px solid black]
{
\bar{X}_n+\frac{z^2}{2n} - \sqrt{(\frac{z^2}{2n})^2+ \bar{X}_n \frac{z^2}{n} }< \theta<\bar{X}_n+\frac{z^2}{2n} + \sqrt{(\frac{z^2}{2n})^2+ \bar{X}_n \frac{z^2}{n} }
\qquad (2)
}
$$
Note that, for your happiness, I avoided to use the previous notation and I substituted 1.64 with $z$, indicating a general percentile of the Gaussian. (let me avoid to indicate $z_{\frac{\alpha}{2}}$ and $z_{(1-\frac{\alpha}{2})}$ to simplify the notation)
As far as formula (3) is concerned, I tried to find a CI that was symmetric around $\bar{X}_n$, as requested in the exercise, but your observation is absolutely correct: it is a standard way to proceed in other situations, but with a $N(\theta;\theta)$ we must pay more attention. Anyway, before answering, I did some simulations, and the two formulas (2) and (3) give more or less the same numerical result when $n \rightarrow \infty$.
In conclusion: my formula (2) is absolutely correct; (3) works, I do not know why but it works... :)
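To back this up, here is a small Monte Carlo sketch of the coverage of (2) and (3) (the values of $\theta$, $n$ and the number of replications are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps, z = 4.0, 50, 20000, 1.6448
hits_exact = hits_approx = 0
for _ in range(reps):
    xbar = rng.normal(theta, np.sqrt(theta), n).mean()
    half = np.sqrt((z**2 / (2 * n))**2 + xbar * z**2 / n)      # half-width of interval (2)
    hits_exact += abs(theta - (xbar + z**2 / (2 * n))) <= half
    hits_approx += abs(theta - xbar) <= z * np.sqrt(xbar / n)  # interval (3)
print(hits_exact / reps, hits_approx / reps)                   # both close to 0.90
```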
|
H: Bearings Question involving cosine rule
Ship A is 120 nautical miles from lighthouse L on a bearing of 072°T, while ship B is 180 nautical miles from L on a bearing of 136°T. Calculate the distance between the two ships to the nearest nautical mile.
I've been stuck on this question for a while. I have tried using the cosine rule, where I construct a triangle in which the angle is 136°, side $a$ is 120 nautical miles, and side $b$ is 180 nautical miles. But I'm nowhere near the answer even after doing that. I get the answer 279 for the missing side after calculating
$\sqrt{180^2+120^2-2*180*120*\cos(136)}$
What am I doing wrong?
AI: In general: to find the length of a side of a triangle by the cosine rule, the following equation applies:
$a^2 = b^2 + c^2 - 2bc\cos(A)$, where $a$, $b$ and $c$ are the sides of your triangle and $A$ is the angle opposite your side $a$.
So in order to evaluate $a$ you need to take the square root of the right side of your equation.
Can you attach a figure showing your problem?
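In the meantime, a quick numerical sketch of what I suspect is intended (it assumes the angle at $L$ between the two ships is the difference of the bearings, $136°-72°=64°$):

```python
import math

angle_at_L = math.radians(136 - 72)   # 64 degrees between the two bearings
AB = math.sqrt(120**2 + 180**2 - 2 * 120 * 180 * math.cos(angle_at_L))
print(round(AB))                      # about 167 nautical miles, not 279
```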
|
H: Equivalence in strong convergence of operators
I am trying to prove the following: if $(T_n)\subset L(X)$ is bounded and $T_nx$ converges to $Tx$ for every $x$ in a dense subset of $X$, a Banach space, then $T_n$ converges strongly to $T$.
Let us suppose that $D$ is the dense subset. I was able to see that if $x\in \operatorname{cl} D$ then $(T_nx)$ is a Cauchy sequence, so we know that it converges, but I can't see that it converges to $Tx$. Also I tried this: $$\|T_nx-Tx\|=\|T_nx-T_nd_n+T_nd_n-Tx\|\leq \|T_n\|\|x-d_n\|+\|T_nd_n-Tx\|,$$ where $d_n \rightarrow x$, but I can't seem to prove what I want. Any advice is appreciated.
AI: Case 1: $\sup_n\|T_n\|=\infty.$ Then your assertion need not be true. For example Let $X=c_0, D=c_{00}$ and $T_n(x)=(x_1,2x_2,3x_3,\ldots,nx_n,0,0,\ldots)$ for $x \in X.$ Then $\|T_n\|=n \to \infty$ which means $T_n$ cannot converge strongly in $X.$ However $T_n x \to 0$ for every $x \in D.$
Case 2: Suppose $M:=\sup_n\|T_n\|<\infty.$
Since $T \in L(D,X),$ so by Lemma below it has a unique norm preserving extension $T \in L(X).$ Then $\|T\|\leq M.$
Let $x \in X.$ Since $D$ is dense in $X,$ there exists $(x_k)$ in D such that $x_k \to x.$ Let $\epsilon >0,$ then there exists $k_0 \in \mathbb{N}$ such that $$\|x_{k_0}-x\|<\frac{\epsilon}{3M}.$$ By given condition $T_n x_{k_0} \to T x_{k_0}$. Therefore there exists $n_0\in \mathbb{N}$ such that $$\|T_nx_{k_0}-Tx_{k_0}\|<\frac{\epsilon}{3}$$ for all $n \geq n_0.$ Combining
$$\begin{align*}\|T_n x-Tx\|&\leq \|T_n x-T_n x_{k_0}\|+\|T_nx_{k_0}-Tx_{k_0}\|+\|Tx_{k_0}-Tx\|\\&<M \frac{\epsilon}{3M}+\frac{\epsilon}{3}+M\frac{\epsilon}{3M}\\&=\epsilon\end{align*}$$ for all $n \geq n_0.$
Lemma: Let $X:$ normed space, $Y:$ Banach space, $D \subseteq X:$ dense and $T_0 \in L(D,Y).$ Then there exists a unique $T \in B(X,Y)$ such that $T\mid_D=T_0$ and $\|T\|=\|T_0\|.$
|
H: Bijection between $\mathbb{Z}/ m\mathbb{Z}$ and the set of elements coprime to all divisors of $m$
Let $m \in \mathbb{N}$. I want to show that there's a bijection between the sets
$$A = \{(q,a) \mid q\in \mathbb{N} \text{ divides }m, \text{ and }a\in \mathbb{Z}/q\mathbb{Z} \text{ is such that }\gcd{(a,q)}= 1\}, \text{ and}$$
$$B = \mathbb{Z}/ m\mathbb{Z}.$$
For example, when $m = 6$,
$A = \{(1,1), (2,1), (3,1), (3,2), (6,1), (6,5)\}$.
I'm having difficulty finding an appropriate map between the two sets.
AI: Let's rewrite your definition of $A$ as
$$
A=\{(q,a):q\mid m,\quad 1\le a\le q,\quad\gcd(a,q)=1\}
$$
where all variables are in $\mathbb{N}$.
Take $a$ such that $0\le a<m$ and consider $q=m/\!\gcd(a,m)$. Then $\gcd(a,q)=1$, so $(q,a)\in A$.
Conversely, if $(q,a)\in A$, you have $\gcd(a,m)=m/q$.
So the set $A$ essentially lists the integers in $[0,m)$ and the required map is $f(q,a)=am/q$.
Let's make an example with $m=6$. Then we have
$$
A=\{(1,0),(2,1),(3,1),(3,2),(6,1),(6,5)\}
$$
and we get $f(1,0)=0$, $f(2,1)=3$, $f(3,1)=2$, $f(3,2)=4$, $f(6,1)=1$, $f(6,5)=5$.
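A quick enumeration to double-check that $f(q,a)=am/q$ really is a bijection (a sketch; the modulus $m$ and the choice of representatives $0\le a<q$ are mine):

```python
from math import gcd

m = 12
A = [(q, a) for q in range(1, m + 1) if m % q == 0
            for a in range(q) if gcd(a, q) == 1]
images = sorted(a * m // q for q, a in A)
print(images == list(range(m)))   # True: f maps A bijectively onto {0, ..., m-1}
```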
|
H: Sum of one open and one closed set in $\mathbb{R}^n$ is open or closed or none? NBHM 2012 PhD question.
$A$ and $B$ are subsets of $\mathbb{R}^n$, where $A$ is open and $B$ is closed. Define $A+B$ as $$A+B= \{a+b: a\in A, b\in B \}.$$
Is $A+B$ open, closed, or neither?
I tried to consider $n=1$ and work with $\mathbb{R}$ only. By taking various examples I feel that $A+B$ should be open. However, I am unable to come to a conclusion. Any help/hint is highly appreciated.
AI: The set is open but not necessarily closed:
1) $A+B$ open.
For every $b\in B$ the translate $A+b$ of $A$ is open (translation by $b$ is a homeomorphism!), and then $A+B=\bigcup_{b\in B}(A+b)$, which is a union of open subsets and hence open.
2) $A+B$ need not be closed.
Consider $A=(0,1) \subset \mathbb{R}$ and $B=\{b\}$ a single point: then $A+B=(b,b+1)$, which is not closed.
|
H: Prove that $\int\sum k\chi_{f^{-1}}<\infty$
For a Lebesgue measurable sets and functions problem, I need to prove this statement:
Being $A\subset\mathbb{R}$ a measurable set with $m(A)<\infty$, and $f:A\rightarrow[0,\infty)$ a Lebesgue measurable function:
$$\sum_{n=0}^\infty(m(\{x\in A:f(x)\geq n\}))<\infty\Longleftrightarrow \int_{\mathbb{R}}\sum_{k=0}^\infty (k\cdot\chi_{f^{-1}([k,k+1))}) dm<\infty$$
The notation I'm using is: $m$ for the Lebesgue measure and $\chi_B$ the characteristic function of $B$, to be said, $\chi_B(x)=1\Leftrightarrow x\in B$ and $\chi_B(x)=0\Leftrightarrow x\notin B$.
I've started by calling $S=\sum_{k=0}^\infty k\cdot\chi_{f^{-1}([k,k+1))}$, so that $S$ is in fact a simple function, and that means (by the characterisation of simple functions)
$$\int_{\mathbb{R}}S=\sum_{k=0}^\infty k\cdot m\big(f^{-1}([k,k+1))\big)$$
I clearly see that $\sum_{k=0}^\infty m\big(f^{-1}([k,k+1))\big)=m(f^{-1}([0,\infty)))=m(A)<\infty$, because the problem says $m(A)<\infty$. My problem is the factor $k$ inside the sum. I don't know what to do with that $k$ (I guess I should use $\sum_{n=0}^\infty m(\{x\in A:f(x)\geq n\})<\infty$, but I don't know how) to prove that $$\sum_{k=0}^\infty k\cdot m\big(f^{-1}([k,k+1))\big)<\infty.$$
I will thank any answer or comment.
AI: Hint: Let $a_k=m\{x: f(x) \in [k,k+1)\}$ and $s_n=a_n+a_{n+1}+...=m\{x: f (x) \geq n\}$. Then $\sum_k k a_k =a_1+2a_2+3a_3+...=s_1+s_2+...=\sum_n m(\{x: f(x) \geq n\})$. To finish the proof you have to know that the sum and integral on the RHS can be interchanged. But that is true by non-negativity.
|
H: Conceptual meaning of a differential
When we find the derivative of $z^2$ with respect to $z$, it means the slope of the graph, which comes out to be $2z$:
$$ \frac{dz^2}{dz}=2z $$ If we take $dz$
to the other side it becomes $dz^2=2z\,dz$, which is known as the differential of $z^2$. I am not sure what this means; does it mean that if we change $z^2$ with $dz$ then $z$ changes by $2z\,dz$?
If so, then for example I have an equation $z^2+\cos\theta+26$. Can I then differentiate both sides and get $$2z\,dz-\sin\theta \,d\theta=0\,?$$ What does this even mean?
AI: It means that if you change $z$ by $dz$, then $2z\,dz$ is the corresponding change in $z^2$.
For the second part, there should be an "$=0$", otherwise it's not an equation. If there is, then what you have done is right. It means seeing how the two changes $dz$ and $d\theta$ depend on each other, i.e. what the change $d\theta$ in $\theta$ is when $z$ is changed by $dz$.
This dependence is implicitly contained in the equation $z^2 + \cos\theta+26=0$.
|
H: Prove that $\frac{1}{\sqrt[3]2}=\sqrt{\frac 5{\sqrt[3]4}-1}-\sqrt{(3-\sqrt[3]2)(\sqrt[3]2-1)}$
Playing around with denesting radicals, I arrived at the following formula which appears to be correct.
$$\frac 1{\sqrt[3]2}=\sqrt{\frac 5{\sqrt[3]4}-1}-\sqrt{(3-\sqrt[3]2)(\sqrt[3]2-1)}$$
If one were to prove this strictly from the given equation, say, as a contest math problem, how would one do it? I have literally no idea how to do this, and I only derive these nested radical equations backwards (e.g. substituting radical values for $a$, $b$ and $c$ in an expression like $(a+b-c)^2$ and hoping for an elegant result after some more or less tedious algebra).
Is there an official method by which to prove this, or is it a bit foggy? I have heard Galois theory is probably important here but that's all I know about it, pretty much, and the rest is vaguely known to me. I would love to see if there is some kind of process to solve/prove such problems, as it might shed light on how Ramanujan came across his several radical denestations and related general identities.
How it was discovered.
I noticed that $$1-\frac 1{\sqrt[3]2}+\frac 1{\sqrt[3]4}=\frac 12\Big\{1+\sqrt{(3-\sqrt[3]2)(\sqrt[3]2-1)}\Big\}$$ and $$1-\frac 1{2\sqrt[3]2}+\frac 1{\sqrt[3]4}=\frac 12\Bigg(1+\sqrt{\frac 5{\sqrt[3]4}-1}\Bigg)$$ and I put two and two together.
Of course, nobody just notices these things (except maybe Ramanujan). I was simply doing what I described earlier about deriving these backwards and merely experimenting and playing around with numbers for the fun of it. But I really want to know why these outputs do come out so nicely, and the essence of it all.
Any thoughts?
Thank you in advance.
AI: Well, let's do this step by step:
Write:
$$\sqrt[3]{4}=\sqrt[3]{2^2}=2^\frac{2}{3}\tag1$$
Write:
$$\frac{5}{2^\frac{2}{3}}=\frac{5}{2^\frac{2}{3}}\cdot\frac{\sqrt[3]{2}}{\sqrt[3]{2}}=\frac{5\sqrt[3]{2}}{2}\tag2$$
Write:
$$\left(3-\sqrt[3]{2}\right)\left(\sqrt[3]{2}-1\right)=-3+3\sqrt[3]{2}-\left(-\sqrt[3]{2}\right)-\sqrt[3]{2}\sqrt[3]{2}=-3+3\sqrt[3]{2}+\sqrt[3]{2}-2^\frac{2}{3}=$$
$$-3+4\sqrt[3]{2}-2^\frac{2}{3}=1+2\sqrt[3]{2}-2^\frac{2}{3}-4+2\sqrt[3]{2}=$$
$$1+2\sqrt[3]{2}-\left(\sqrt[3]{2}\right)^2-2\left(\sqrt[3]{2}\right)^3+\left(\sqrt[3]{2}\right)^4=\left(1+\sqrt[3]{2}-2^\frac{2}{3}\right)^2\tag3$$
Write:
$$\frac{5\sqrt[3]{2}}{2}-1=\frac{5\sqrt[3]{2}}{2}-\frac{2}{2}=\frac{5\sqrt[3]{2}-2}{2}\tag4$$
Write:
$$5\sqrt[3]{2}-2=2+4\sqrt[3]{2}-4+\sqrt[3]{2}=\frac{4+8\sqrt[3]{2}-8+2\sqrt[3]{2}}{2}=$$
$$\frac{4+8\sqrt[3]{2}-4\left(\sqrt[3]{2}\right)^3+\left(\sqrt[3]{2}\right)^4}{2}=\frac{\left(2+2\sqrt[3]{2}-2^\frac{2}{3}\right)^2}{2}\tag5$$
I think you can finish.
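For reassurance, here is a quick floating-point check of the original identity (a numerical sketch, not a proof):

```python
lhs = 2 ** (-1 / 3)
rhs = (5 / 2 ** (2 / 3) - 1) ** 0.5 - ((3 - 2 ** (1 / 3)) * (2 ** (1 / 3) - 1)) ** 0.5
print(lhs, rhs)   # both are about 0.793700..., agreeing to machine precision
```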
|
H: Finding the adjoint of linear map with respect to an inner product
Consider the map $A:\mathbb{R}^2\to\mathbb{R}^2$ given by $A\left(x_{1}, x_{2}\right)=\left(2 x_{1}, x_{1}-x_{2}\right)$. I want to find the adjoint with respect to the inner product on $\mathbb{R}^2$.
My attempt:
$$\langle A x, y\rangle=\left\langle\left(2 x_{1}, x_{1}-x_{2}\right),\left(y_{1}, y_{2}\right)\right\rangle=2 x_{1} \bar{y}_{1}+\left(x_{1}-x_{2}\right) \bar{y}_{2}$$
And since we are in $\mathbb{R}^2$ and $\langle A x, y\rangle=\left\langle x, A^{*} y\right\rangle$ we can write:
$$\begin{array}{l}
2 x_{1} y_{1}+\left(x_{1}-x_{2}\right) y_{2}=\left\langle\left(x_{1}, x_{2}\right), A^{*}\left(y_{1}, y_{2}\right)\right\rangle \\
\Rightarrow A^{*}\left(y_{1}, y_{2}\right)=\left(2 y_{1}+y_{2},-y_{2}\right)
\end{array}$$
Can someone confirm if I am correct?
AI: For a linear map $A:\mathbb{R}^n\to\mathbb{R}^n$, the adjoint is just the transpose.
So if
$$A=\begin{bmatrix}2&0\\1&-1\end{bmatrix}$$
you get
$$A^\star = \begin{bmatrix}2&1\\0&-1\end{bmatrix}\,.$$
Your answer is correct.
|
H: Proof for Division Algorithm, from the book *Contemporary Abstract Algebra* by Joseph A. Gallian
This is the same question as Q2 of Need help with understanding the proof for Division Algorithm, from the book *Contemporary Abstract Algebra* by Joseph A. Gallian
"Q2: I don't understand how 0∉S implies a≠0. We could still have a−bk>0 even if a=0 by choosing k<0."
The moderator would not allow me to post my request there.
I did not understand the answer given and would appreciate if someone could expound the answer in more detail.
This answer was:
"2 If a=0, then we can take k=0 to see that 0=a−b⋅0∈S."
Below are some details on my thought process of trying to understand this answer:
I still can't see the logic on how this answer answers the question. Are we using the argument of modus tollens? I'm reading the answer logically as:
If a=0, then there exists a k such that 0=0−b⋅0∈S.
So not [there exists a k (=0) such that 0=0−b⋅0∈S] which I can't for the life of me determine the negation. Maybe? (For all k, 0=a−b⋅k∉S)? which is true because we assumed 0∉S and thus we can then conclude a≠0.
It seems Brian (in the original post answer) is saying we use the contrapositive of "If a=0, then we can take k=0 to see that 0=a−b⋅0∈S" to arrive at the solution, but I can't see how to do so with logic.
AI: We want to prove that $0 \notin S \implies a \ne 0$.
Hence, it is equivalent to prove the contrapositive: $a=0 \implies 0 \in S$.
That is we want to prove that $a=0 \implies 0 \in S=\{ a-bk|a -bk \ge 0, k \in \mathbb{Z}\}$.
That is we want to show that we can pick a value $k\in \mathbb{Z}$, such that $a-bk=0-bk=-bk = 0$.
A possible choice is to let $k=0$ and this would have shown that $0 \in S$.
|
H: Is my proof that nonnegative polynomials on $[0,1]$ form a convex set correct?
I want to prove that the set $K = \{c \in \mathbb{R}^n\mid c_{1} + c_{2}t +\dotsb+ c_{n}t^{n-1} \ge 0 \;\forall t \in [0,1]\}$ is a cone, i.e. that for $x \in K$ and $\theta \ge 0$, $\theta x\in K$.
Is the follow attempt a correct proof?
Let's consider $x \in K$.
We have $\sum\limits_{i=1}^n{x_{i} t^{i-1}} \ge 0, \space t ∈ [0,1]$
For $\theta \ge 0$, we then have
$\sum\limits_{i=1}^n{\theta x_{i} t^{i-1}} = \theta \sum\limits_{i=1}^n{x_{i} t^{i-1}}$
since both $\theta$ and the polynomial $\sum\limits_{i=1}^n{x_{i} t^{i-1}}$ are greater than or equal to $0$, we must have
$\sum\limits_{i=1}^n{\theta x_{i} t^{i-1}} = \theta \sum\limits_{i=1}^n{x_{i} t^{i-1}} \ge 0$
Thus $x \in K$, $\space$ $\theta \ge 0$ $\rightarrow$ $\theta x \in K$.
Hence $K$ is a cone. $\square$
AI: Yes, your proof is correct. It really is that simple!
Added remark: Correctness aside, I recommend that you study Xander's exemplary answer concerning ways to improve presentation.
|
H: Unbiased estimator for median (lognormal distribution)
Assume that
$Y \sim N(\mu,\sigma^2)$
$X = e^Y$
Then X is lognormal distributed with parameters $\sigma$ and $\mu$.
I know that
$E(X) = E(e^Y) = \eta e^{\frac{\sigma^2}{2}} $
The median of the lognormal distribution is $\eta = e^\mu$, and $\eta^* = e^{\widehat{\mu}}$ is not an unbiased estimator.
$X_1,...,X_n$ are independent and lognormally distributed. $Y_i = \ln(X_i)$ for $i = 1,...,n$. We assume that we know the value of $\sigma$.
$\widehat{\mu} = \bar Y = \frac{1}{n}\sum_{i=1}^n Y_i = \frac{1}{n}\sum_{i=1}^n ln(X_i) $
Now I have to show that $\widehat{\eta} = e^{\widehat{\mu}-\frac{\sigma^2}{2n}} $ is an unbiased estimator for the median, and I don't know how to do this.
Any help would be greatly appreciated!
AI: First find that:$$\mathbb{E}X^{\alpha}=e^{\alpha\mu+\frac{1}{2}\alpha^{2}\sigma^{2}}$$
For that have a look at this answer.
Then:
$$e^{\hat{\mu}}=e^{\frac{1}{n}\sum_{i=1}^{n}\ln X_{i}}=\prod_{i=1}^{n}X_{i}^{\frac{1}{n}}$$
so that: $$\mathbb{E}e^{\hat{\mu}}=\prod_{i=1}^{n}\mathbb{E}X_{i}^{\frac{1}{n}}=\left(\mathbb{E}X_{1}^{\frac{1}{n}}\right)^{n}=\left(e^{\frac{1}{n}\mu+\frac{1}{2n^{2}}\sigma^{2}}\right)^{n}=e^{\mu+\frac{1}{2n}\sigma^{2}}$$
Then consequently:$$\mathbb{E}\hat{\eta}=\mathbb{E}e^{\hat{\mu}-\frac{1}{2n}\sigma^{2}}=e^{-\frac{1}{2n}\sigma^{2}}\mathbb{E}e^{\hat{\mu}}=e^{-\frac{1}{2n}\sigma^{2}}e^{\mu+\frac{1}{2n}\sigma^{2}}=e^{\mu}=\eta$$
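A quick Monte Carlo illustration of the bias correction (a sketch; the values of $\mu$, $\sigma$, $n$ and the replication count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, reps = 1.0, 0.8, 10, 200_000
Y = rng.normal(mu, sigma, size=(reps, n))
mu_hat = Y.mean(axis=1)
eta_hat = np.exp(mu_hat - sigma**2 / (2 * n))          # the corrected estimator
print(np.exp(mu_hat).mean(), eta_hat.mean(), np.exp(mu))
# e^{mu_hat} overshoots eta = e^mu on average, while eta_hat lands close to it
```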
|
H: Prove $(\frac{a}{b-c})^2+(\frac{b}{c-a})^2+(\frac{c}{a-b})^2 \geq 2$
If $a, b, c$ are distinct real numbers, prove that
$(\frac{a}{b-c})^2+(\frac{b}{c-a})^2+(\frac{c}{a-b})^2 \geq 2$
I thought of using AM-GM but that is surely not getting me anywhere (maybe some step before I can do AM-GM?).
I thought of applying AM-GM but that clearly doesn't give anything. Then I thought that, assuming WLOG $a>b>c$, the first term would be greater than $1$, so if only I could show that the sum of the other two terms is $\geq1$ I'd be done, but of course I couldn't progress further from there.
Thanks.
AI: $$\sum_{cyc}\frac{a^2}{(b-c)^2}=\left(\sum_{cyc}\frac{a}{b-c}\right)^2-2\sum_{cyc}\frac{ab}{(b-c)(c-a)}=$$
$$=\left(\sum_{cyc}\frac{a}{b-c}\right)^2-2\sum_{cyc}\frac{ab(a-b)}{\prod\limits_{cyc}(a-b)}=\left(\sum_{cyc}\frac{a}{b-c}\right)^2+2\geq2.$$
|
H: Can every convergent series uniformly converge in some way?
I found myself thinking about this idea for a long time; I really do not understand the intuition behind the difference between uniform convergence and pointwise convergence. But when I thought about the definition deeply, I figured the following:
If I have pointwise convergence of $f_n$, then I know that for every $x\in A$ and for every $\epsilon > 0$ there is $N$ such that for every $n>N$ $$|f_n-f|<\epsilon$$
but if I take $N=\max[{N_i}]$ I get exactly the definition of uniform convergence.
Is this correct? What is the motivation to define just pointwise convergence, or uniform convergence?
AI: The problem is that when you wrote $N=\max[{N_i}]$, you silently omitted which set the index $i$ belongs to. In the case of real maps, you have in fact $i \in \mathbb R$, an infinite set. In that case, $N=\max[{N_i}]$ may not be defined.
Some examples
Take for $f_n(x) = 1/n$ a map not depending on $x$. In that case, everything is fine.
Now take $f_n(x) = x^n$ defined on the interval $[0,1]$... I let you define the $N_i$ of your question and figure out that $$N=max[{N_i}]$$ is not defined, or said in another way has to be infinite.
|
H: Is the value of $\mathbb E_{\forall i=1,2,\cdots,n,x_i\sim N(0,1)}\,\Vert\vec x\Vert_2=\sqrt{n}$?
I simulated the result with computer and my guess seems to be correct. Assuming that $\mathbb E_{\forall i=1,2,\cdots,n,x_i\sim N(0,1)}\,\Vert\vec x\Vert_2=\sqrt{\mathbb E_{\forall i=1,2,\cdots,n,x_i\sim N(0,1)}\,\sum_{i=1}^nx_i^2}$, we can easily deduce that the value is $\sqrt n$. However, I am not sure whether the assumption is correct, e.g. a well-known case that $(\mathbb E\,\mathrm x)^2\neq\mathbb E[\mathrm x^2]$ (the difference of the two being the variance of $\mathrm x$).
AI: This equality cannot hold. The Cauchy–Schwarz inequality gives $E\|X\| \leq \sqrt {E \sum X_i^{2}}=\sqrt n$, and equality can hold only when $\|X\|$ is constant. In this case it is not constant, so we have strict inequality.
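A quick simulation of the gap (a sketch; it also compares against $\sqrt2\,\Gamma((n+1)/2)/\Gamma(n/2)$, the mean of the chi distribution that $\Vert\vec x\Vert_2$ follows):

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(3)
n, reps = 5, 200_000
norms = np.linalg.norm(rng.standard_normal((reps, n)), axis=1)
chi_mean = np.sqrt(2) * np.exp(gammaln((n + 1) / 2) - gammaln(n / 2))  # exact E||x||
print(norms.mean(), chi_mean, np.sqrt(n))   # E||x|| is strictly below sqrt(n)
```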
|
H: How to prove $\sum_{i=1}^n i\cdot i! = (n+1)!-1$ with mathematical induction?
I'm trying to prove
$$
\sum_{i=1}^n i\cdot i! = (n+1)!-1
$$
with mathematical induction. The first step I did after proving it for $n=1$ was:
$$
\sum_{i=1}^{n+1} i\cdot i! = (n+2)!-1= (n+2)\cdot(n+1)!-1
$$
but I can't do anything more.
AI: The equality is trivial for $n=1.$ Assume it is true for $n,$ then
$$\begin{align*}\sum_{i=1}^{n+1} i\cdot i!&=\sum_{i=1}^n i\cdot i!+(n+1)\cdot(n+1)!\\&=(n+1)!-1+(n+1)(n+1)! \qquad \text{(induction hypothesis)}\\
&=(n+2)(n+1)!-1\\&=(n+2)!-1.\end{align*}$$
|
H: A question related to discrete and subspace topology
I am unable to see how to prove this question.
The question is: Let $A$ be a subgroup of the real line under addition. Show that either $A$ is dense in the real line, or else the subspace topology on $A$ is the discrete topology.
I tried by assuming $A$ is not dense in the real line, but I am unable to prove that the subspace topology on $A$ is discrete.
Any help will be really appreciated.
AI: Hint: Either $A$ has a least positive element or it doesn't. The former gives you a discrete set and the latter a dense set (provided $A \neq \{0\}$).
EDIT: Adding details for the case where $A$ has a least positive element.
Let $\epsilon \in A$ be this least positive element.
Claim. $|x - y| > \epsilon/2$ for all $x, y \in A$ with $x\neq y$.
Proof. Suppose not. Let $x, y \in A$ be distinct such that $|x - y| \le \epsilon/2$.
Set $d = x - y$. Note that $d \in A$ and $-d \in A$. Thus, we may assume that $d > 0$.
Thus, we have $0 < d \le \epsilon/2 < \epsilon$. This contradicts that $\epsilon$ is the smallest positive element of $A$.
Thus, the claim is proven and it follows that $A$ is discrete.
|
H: Equality of expectations if identically distributed
Let $X,Y,Z$ be independent random variables on some probability space $(\Omega, P)$ with values in $\mathbb R$ such that $X \sim Y$, i.e. $X$ and $Y$ have the same distribution. Let $f \colon \mathbb R^2 \to \mathbb R$ be a measurable function. Is it always true that
$$\mathbb E [ f(X,Z) ] = \mathbb E[f(Y,Z) ].$$
If not, are there any sufficient conditions on $f$ for this to be true? E.g. when $f(x,y) = xy$ this is true since $$\mathbb E[XZ] = \mathbb E[X] \mathbb E[Z] = \mathbb E[Y] \mathbb E[Z] = \mathbb E[YZ]$$
by independence. However, I was not able to prove the more general result. Any help is appreciated!
AI: If $f(X,Z)$ and $f(Y,Z)$ are integrable then they have the same expectation. The common value is $\iint f(x,z) dF_X(x)dF_Z(z)$. The equation also holds if $f$ is a non-negative measurable function. This follows from the fact that $(X,Z)$ and $(Y,Z)$ have the same two dimensional distribution.
|
H: $f(x)=(\sin(\tan^{-1}x)+\sin(\cot^{-1}x))^2-1, |x|>1$
Let $f(x)=(\sin(\tan^{-1}x)+\sin(\cot^{-1}x))^2-1, |x|>1$. If $\frac{dy}{dx}=\frac12\frac d{dx}(\sin^{-1}(f(x)))$ and $y(\sqrt3)=\frac{\pi}{6}$, then $y(-\sqrt3)=?$
$$f(x)=(\frac{x}{\sqrt{x^2+1}}+\frac{1}{\sqrt{x^2+1}})^2-1=\frac{2x}{1+x^2}$$
$$\frac{dy}{dx}=\frac12\frac d{dx}(\sin^{-1}(\sin(2\tan^{-1}x)))$$
$$y=\tan^{-1}x+c$$
Using, $y(\sqrt3)=\frac{\pi}{6}$, I get, $c=-\frac\pi6$. Thus, $y(-\sqrt3)=-\frac\pi2$. But the answer is given as $\frac{5\pi}6$.
AI: $\sin(\cot^{-1}x)=\sin\left(\dfrac\pi2-\tan^{-1}x\right)=\cos(\tan^{-1}x)$
$$\implies f(x)=\left(\sin(\tan^{-1}x)+\sin(\cot^{-1}x)\right)^2-1=\sin2\left(\tan^{-1}x\right)$$
Now $\sin^{-1}\left(\sin(2\tan^{-1}x )\right)=\begin{cases} \pi-2\tan^{-1}x &\mbox{if } 2\tan^{-1}x>\dfrac\pi2\iff x>\tan\dfrac\pi4 \\
-\pi-2\tan^{-1}x& \mbox{if } 2\tan^{-1}x<-\dfrac\pi2 \end{cases}$
|
H: Prove $P= 7\,{c}^{4}-2\,ab{c}^{2}-2\,ab \left( a+b \right) c+ \left( a+b \right) ^{2} \left( {a}^{2}+{b}^{2} \right) \geqq 0$
For $a,b,c$ are reals$.$ Prove$:$ $$P= 7\,{c}^{4}-2\,ab{c}^{2}-2\,ab \left( a+b \right) c+ \left( a+b \right) ^{2} \left( {a}^{2}+{b}^{2} \right) \geqq 0$$
I found this from Michael Rozenberg's solution. See here.
My proof:
$$P=\frac{1}{16} \, \left( a+b \right) ^{2} \left( a+b-4\,c \right) ^{2}+{\frac {5 \, \left( a+b \right) ^{4}}{14}}$$ $$+{\frac { \left( 3\,{a}^{2}+6\,ab+3\,{ b}^{2}-28\,{c}^{2} \right) ^{2}}{112}}+\frac{3}{8}\, \left( a+b \right) ^{2} \left( a-b \right) ^{2}+\frac{1}{8}\, \left( 2\,c+a+b \right) ^{2} \left( a-b \right) ^{2}$$
I’m looking for an alternative proof. Thanks!
AI: Let $a+b=2u$ and $ab=v^2$, where $v^2$ can be negative.
Thus, we need to prove that:
$$7c^4-2abc^2-2ab(a+b)c+(a+b)^2(a^2+b^2)\geq0$$ or
$$7c^4-2v^2c^2-4uv^2c+8u^2(2u^2-v^2)\geq0$$ or
$$7c^4+16u^4\geq2v^2(c^2+2uc+4u^2).$$
But $$c^2+2uc+4u^2=(c+u)^2+3u^2\geq0$$ and $v^2\leq u^2$, which is just $(a-b)^2\geq0.$
Thus, it's enough to prove that
$$7c^4+16u^4\geq2u^2(c^2+2uc+4u^2)$$ or
$$7c^4-2u^2c^2-4u^3c+8u^4\geq0,$$ which is true by AM-GM:
$$7c^4-2u^2c^2-4u^3c+8u^4\geq$$
$$\geq c^4+u^4-2c^2u^2+c^4+3u^4-4u^3c\geq$$
$$\geq4\sqrt[4]{c^4(u^4)^3}-4u^3c=4|u^3c|-4u^3c\geq0.$$
|
H: Checking the uniform convergence of sequence of functions
I have been trying some questions on uniform convergence.Got stuck in one of those questions which says that
For a positive real number $p$, let $(f_n)$ be a sequence of functions defined on $[0,1]$ by
$$f_n(x) =
\begin{cases}
n^{p+1}x, & \text{if } 0 \le x < \frac{1}{n}\\
\frac{1}{x^p}, & \text{if } \frac{1}{n} \le x \le 1
\end{cases}$$
I have found its pointwise limit, given by
$$f(x)=
\begin{cases}
0, & \text{if } x = 0\\
\frac{1}{x^p}, & \text{if } 0 < x \le 1
\end{cases}$$
I am stuck on proving whether it is uniformly convergent or not.
I take any $\epsilon > 0$. Now I need to know: does there exist a natural number $m$ such that $\lvert f_n(x)-f(x)\rvert < \epsilon$ for all $n \geq m$ and for all $x$ in $[0,1]$?
Explain, please!
AI: $|f_n(\frac 1 {n^{p+1}}) -\frac 1 {(\frac 1 {n^{p+1}})^{p}}|=|1-n^{p(p+1)}| \to \infty$. Hence $\sup_x |f_n(x)-f(x)|$ does not tend to $0$ and the convergence is not uniform
|
H: Let, $V$ be a vector subspace of $\Bbb{R}^n$. Prove that, $V$ is a closed set in $\Bbb{R}^n$ with respect to usual metric.
Let, $\{u_1,u_2,\ldots,u_k\}$ be a basis for $V$. Let, $\{v_n\}$ be sequence of vectors in $V$ such that $v_j\to v$ where $v\in\Bbb{R}^n$. My target is to prove $v\in V$.
Now for each $j\in\Bbb{N}$, $v_j=\lambda_{1j}u_1+\lambda_{2j}u_{2}+\cdots+\lambda_{kj}u_{k}$.
Since $\{v_j\}$ converges in $\Bbb{R}^n$, it is Cauchy in $\mathbb{R}^n$.
Now I want to say the sequences $\{\lambda_{1j}\},\{\lambda_{2j}\},\ldots,\{\lambda_{kj}\}$ are all cauchy (hence convergent) in $\Bbb{R}$. So, that taking $n\to\infty$, we will get $v=\lim_{j\to\infty}\lambda_{1j}u_1+\lambda_{2j}u_{2}+\cdots+\lambda_{kj}u_{k}=\lambda_1 u_1+\lambda_2 u_2+\cdots+\lambda_k u_k\in V$ where $\lambda_i=\lim_{j\to\infty}\lambda_{ij}$.
Now, $|v_j-v_m|\le |\lambda_{1j}-\lambda_{1m}||u_1|+|\lambda_{2j}-\lambda_{2m}||u_2|+\cdots+|\lambda_{kj}-\lambda_{km}||u_k|$$\le M(|\lambda_{1j}-\lambda_{1m}|+|\lambda_{2j}-\lambda_{2m}|+\cdots+|\lambda_{kj}-\lambda_{km}|)$ where $M=max\{|u_i|\}$
But from this, I cannot say each $\{\lambda_{ij}\}$ is Cauchy. Can anyone complete the solution? Thanks for the help in advance.
AI: You can assume $u_i$ are orthonormal, by the Gram-Schmidt procedure. Extend to a full orthonormal basis $u_1,\dots, u_n$ of $\mathbb R^n$. Write $v=\sum_{i=1}^n \lambda_i u_i. $Then for each $i> k$,
$$
|\lambda_i| =|(v_j-v)\cdot u_i|\le |v_j - v|\xrightarrow[j\to\infty]{}0.
$$
Therefore $v=\sum_{i=1}^k \lambda_i u_i$ i.e. $v$ belongs to $V$.
|
H: The cross product of three vectors
Is the cross product of three vectors associative? If not, then how do I determine $A\times B\times C$? Is it a vague statement? What I did was $A\times B$ then $(A\times B)\times C$.
AI: The cross product is not associative; you have to use brackets to disambiguate. Normally one would write $(A\times B)\times C$, but never $A\times B\times C$ unless it is absolutely clear from context (and even then it is frowned upon).
|
H: How to evaluate $\lim_{n\to+\infty}\frac{(n+1)\ln(n+1)}{n\ln(n)}$?
How to evaluate the limit $$\lim_{n\to+\infty}\frac{(n+1)\ln(n+1)}{n\ln(n)}\,?$$
I tried to search for it in the forum but I didn't find it. This limit is pretty famous and I know it is equal to $1$, but I am not sure I understand the technique there.
AI: Using l'Hôpital's rule, one can prove $$\lim_{n\to\infty}\frac{\ln(n+1)}{\ln(n)} = 1$$then $$\lim_{n\to\infty}\frac{(n+1)\ln(n+1)}{n\ln(n)}=1 \cdot 1=1$$
|
H: Arithmetic Sequences on Prime numbers
It is known that $a_1, a_2,$...$, a_{50}$ is an arithmetic sequence with common difference $d$, and $a_i (i=1, 2, ..., 50)$ are primes. If $a_1>50$, prove that $d>600,000,000,000,000,000.$
Honestly I have completely no idea how to solve this question. I don't even know where $6\times10^{17}$ comes from. Does anyone know how to solve this?
AI: Fix any prime $p<50$, and consider the sequence mod $p$. Notice that none of the terms (say $a$) can be $0$ mod $p$, because otherwise $p\mid a$, which is impossible since $a$ is a prime greater than $50>p$. But if $p\nmid d$, the sequence will eventually be $0$ mod $p$ (within the first 50 terms). Hence $p\mid d$ for all primes less than 50.
If you multiply all the primes out, you get around $6.15\times10^{17}$ which is large enough.
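A quick computation of that product (a sketch in Python):

```python
from math import prod
from sympy import primerange

print(prod(primerange(2, 50)))   # 614889782588491410, about 6.15e17 > 6e17
```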
|
H: integral of fractional part $\int_0^1\left\{\frac 1x\right\}dx$ convergent?
$$I=\int_0^1\left\{\frac 1x\right\}dx=\int_1^\infty\frac{\{u\}}{u^2}du=\sum_{k=1}^\infty\int_0^1\frac{\{v+k\}}{(v+k)^2}dv=\sum_{k=1}^\infty\int_0^1\frac{v}{(v+k)^2}dv=\sum_{k=1}^\infty\left[\ln\left(\frac{k+1}k\right)+\frac k{k+1}-1\right]$$
and I believe the integral should converge, but neither part of this series converges on its own, from what I've calculated:
$$\sum_{k=1}^\infty\ln\left(\frac{k+1}k\right)=\ln\left(\prod_{k=1}^\infty\frac{k+1}k\right)=\lim_{n\to\infty}\ln\left(\frac{(n+1)!}{n!}\right)=\lim_{n\to\infty}\ln(n+1)\to\infty$$
my reasoning is that if the first substitution is valid:
$$\int_1^\infty\frac{\{u\}}{u^2}du\le\int_1^\infty\frac{du}{u^2}=\left[\frac{1}{u}\right]_\infty^1=1$$
Thank you for all the comments and answers; using them I have written the sum as:
$$1+\lim_{n\to\infty}\left[\ln(n)-\text{H}_n\right]$$
which, as has been pointed out, is the known value $1-\gamma$.
AI: To answer your first question, the initial integral must converge. Write it as $$I(x)=\int_{1-x}^1\left\{\frac1t\right\}dt.$$
Since $0\le\left\{\frac1x\right\}<1$ for all $x$, we know that $I(x)$ is monotonically increasing and bounded (on $[0,1]$, and hence must converge).
As achille hui stated, the positive part of your summation diverges -- but so does the negative part! This by no means implies divergence.
In fact, the Euler–Mascheroni constant is defined as the limit of the difference between the harmonic sum and the logarithm. The integral, when evaluated, should be $1-\gamma.$
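A numerical check using the series form derived in the question (a sketch; it just sums the first two million terms):

```python
import numpy as np

k = np.arange(1, 2_000_000)
partial = np.sum(np.log((k + 1) / k) + k / (k + 1) - 1)   # the series from the question
print(partial, 1 - np.euler_gamma)                        # both are about 0.42278
```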
|
H: $\sin x = \cos y, \sin y = \cos z, \sin z = \cos x$
For real numbers $x,y,z$ solve the system of equations:
$$\begin{align} \sin x = \cos y,\\
\sin y = \cos z,\\
\sin z = \cos x\end{align}$$
Source: high school olympiads, from a collection of problems for systems of equations, no unusual tricks involved.
So far I found that if we square two equations and use the $\sin^2 x + \cos^2 x=1$ we get $\sin^2 y + \cos^2 z=1$ which yields $\sin^2 y = \sin^2 z$. Is this correct or am I missing something? I don't know how to continue
AI: I arrived at a different result: (squaring the first two and adding)
$$\sin^2(x)+\sin^2(y)=\cos^2(y)+\cos^2(z)$$
$$\implies 1-\cos^2(x)+\sin^2(y)=\cos^2(y)+\cos^2(z)$$
Now $\cos^2(x)=\sin^2(z)$ from third equation, giving
$$1+\sin^2(y)=\cos^2(y)+(\cos^2(z)+\sin^2(z)) \iff \sin^2(y)=\cos^2(y)$$
$$\implies y= \frac{\pi}{4}+\pi n \implies x=z=\frac{\pi}{4}+\pi n$$
for some integer $n$
|
H: Disprove that $V$ is a linear subspace of $R^3$
Disprove that $V$ is a linear subspace of $R^3$, where $V$ = {$(x, y, z)$ ∈ $R^3$ : $x + 2y = 0$ or $5x − z = 0$}.
So here $\dim = 2$ and a basis is $\{ (1, 2, 0), (5, 0, -1) \}$, if I understand it correctly.
So it is a subspace of $R^3$.
Or am I mistaken here?
AI: Firstly, that is not the basis -- the basis isn't simply putting the coefficients into the right slots. For instance, $(1,2,0)$ does not satisfy $x+2y=0$.
Secondly, if we write $u=(2,-1,0)$ and $v=(1,0,5)$, then clearly both vectors are in $V$. What is $u+v$? $(3,-1,5)$ of course, but this satisfies neither equation one nor equation two. Thus $V$ is not a linear subspace.
|
H: write the following complex numbers in the form a+bi
I need to write the following complex numbers in the form $a+bi$, where $a$ and $b$ are real numbers and $i=\sqrt{-1}$:
$(3-4i)+(1+ \sqrt{-1})$
$(2 + 2i)(2 - 3i)$
AI: $$x = (3-4i) + (1+ \sqrt{-1}) = 3+1-4i + i = 4-3i\,,$$
assuming the convention $\sqrt{-1}=i$.
$$y=(2+2i)(2-3i) = 4+4i-6i-6i^2=10-2i\,.$$
|
H: Doubt regarding groups formation under matrix multiplication
When considering the set of matrices:
$$Sp(n) = \{ S \in \text{GL} (2n, \mathbb{R}) \hspace{2mm} \text{s.t.} \hspace{2mm} S^T \Omega S= \Omega\} \tag{1}$$
where
$$\tag{2} \Omega = \begin{pmatrix} 0 & \mathbb{I}_{n} \\ - \mathbb{I}_{n} & 0\end{pmatrix}$$
with $\mathbb{I}_{n}$ being the $n \times n$ identity matrix and $0$ denotes the zero matrix of appropriate dimension.
Show that $Sp(n)$ forms a group under matrix multiplication (assuming associativity).
When trying to prove closure I took two matrices into account, labeled $S_1$ and $S_2$.
$$S_1 \in GL(2n, \mathbb{R}) \hspace{5mm} ; \hspace{5mm} S_2 \in GL(2n, \mathbb{R})$$
and therefore $S_1 \cdot S_2 \in GL(2n, \mathbb{R})$
But then I know I must show that these matrices respect the equation above, $S^T \Omega S= \Omega$. How do I do this?
I thought that I needed a (general) representation of an $S$ matrix with $\det(S) \neq 0$, but how do I write down a general matrix (using letters, not numbers) that clearly has $\det \neq 0$?
Must I also show that $\Omega$ belongs to $GL(2n, \mathbb{R})$? Or is that assumed?
AI: It seems your question is: show that $Sp(n)$ forms a group under matrix multiplication.
However, when you want to prove the "closure" you do not have to take $S_1$ and $S_2$ in $GL(2n)$, but in $Sp(n)$.
How to do it: take $S_1 \in Sp(n)$ and $S_2 \in Sp(n)$. Is it true that $S_1 S_2 \in Sp(n)$? Let's try:
$$
(S_1 S_2)^T
\Omega (S_1 S_2) = S_2^T S_1^T \Omega S_1 S_2 = S_2^T (S_1^T \Omega S_1) S_2
= S_2^T \Omega S_2 = \Omega
$$
This proves that $S_1 S_2 \in Sp(n)$.
|
H: Understanding double asymptotics
Suppose I have a double sequence $x_{i,t}$ of numbers where $i=1,\dots,n$ and $t=1,\dots,T$ and a function $f(n,T)$ of these numbers, e.g. $f(n,T)=\sum_{t=1}^{T} \sum_{i=1}^{n} x_{i,t} / nT$.
The paper am reading is interested in the quantity $f(n,T)$ as $n,T\to \infty$ with $n=O(T)$.
Question: What is the precise meaning of $n=O(T)$ in this context?
Am confused because neither $n$ or $T$ is a function of the other and usually we have expressions like $x_n=O(y_n)$ with a common index $n$.
AI: Casually speaking, $n=O(T)$ means that the growth of $n$ is "at most" the growth of $T$ in an asymptotic sense. If both $n$ and $T$ are functions (or sequences, in your case perhaps) of some $k$, then you can imagine from their graphs that as $k$ grows, the tail behaviour of $n$ is smaller than or comparable to that of $T$. For example, if $n=\sqrt k$ and $T=k$, then it is valid to claim that $n=O(T)$.
Big O notation is a neat way to quantify the comparison of functions in terms of their growth rate.
|
H: Find the GCD of $S = \{ n^{13} - n \mid n \in \mathbb{Z} \}.$
Determine the greatest common divisor of the elements of the set $S = \{ n^{13} - n \mid n \in \mathbb{Z} \}.$
I put in a few values of $n$ and I know that $10$ is a common factor, but I'm not sure if that's the greatest common factor.
AI: By Fermat's Little Theorem, we know that $13\mid n^{13}-n$, $7\mid n^7-n$, $5\mid n^5-n$, $3\mid n^3-n$ and $2\mid n^2-n$; since $p-1$ divides $12$ for each $p\in\{2,3,5,7,13\}$, each of these primes in fact divides $n^{13}-n$. I claim the product $2\cdot3\cdot5\cdot7\cdot13=2730$ is the largest common factor.
Note that $2^{13}-2=8190=3\cdot2730$ and $(3^{13}-3)-(2^{13}-2)=2730\cdot581$ where $\gcd(3,581)=1$, so $2730$ is the answer.
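A brute-force confirmation over a range of $n$ (a sketch in Python):

```python
from math import gcd
from functools import reduce

g = reduce(gcd, (n**13 - n for n in range(2, 200)))
print(g)   # 2730 = 2 * 3 * 5 * 7 * 13
```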
|
H: Sum up to $n$ terms where the $r$-th term is $r(r+1)2^r$
Find the sum up to $n$ terms where the $r$-th term is $r(r+1)2^r$.
I tried to make a telescoping series but failed. It seems like I have to add and subtract something from $r(r+1)$ so that the power of $2$ also changes. Is there a systematic approach?
AI: Hint:
$$S=\sum_{r=1}^n r(r+1)x^r$$
$$(1-x)S=2\sum_{r=1}^nrx^r-n(n+1)x^{n+1}\text{ as } r(r+1)-r(r-1)=2r$$
Again, if $$T=\sum_{r=1}^nrx^r$$
$$(1-x)T=?$$
|
H: Proving the inequality that $\dfrac{x^2 + x^{-2}}{x-x^{-1}} \geq 2 \sqrt{2}$ for $x > 1$
Question: Show that $$\dfrac{x^2 + x^{-2}}{x-x^{-1}} \geq 2 \sqrt{2}$$ for $x > 1$.
My attempts: After spending some time trying to prove it by $AM-GM$ and with algebraic manipulation, I tried to use trigonometric substitutions like letting $x = \tan\theta$ and $x = \sin\theta$ although I was still unsuccessful. I know that this can be proven with calculus, however I am looking to prove this without the aid of calculus. Any help would be appreciated!
AI: If $x =1$, we have $x-x^{-1} =0$ so I assume $x>1$.
Put $ t = x- \frac1x$. Then $t>0$ and $x^2 + \frac{1}{x^2}=t^2 +2$
Now $$\frac{x^2 + x^{-2}}{x - x^{-1}} = \frac{t^2+2}{t}=t+\frac2t\geq2\sqrt2$$
by AM-GM. The equality holds when $t^2=2$; since $x>1$ this forces $t=\sqrt2$, i.e. $x =\sqrt{2 + \sqrt{3}}$.
|
H: Eigenvector of a complete graph Laplacian
Can somebody help me prove why $v=\begin{bmatrix} 1 & \dots & 1\end{bmatrix}^T$ is an eigenvector of the Laplacian matrix of every complete graph?
Thanks!
AI: Recall that the definition of a graph Laplacian is $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix.
Now $Lv$ is a vector whose $i^\text{th}$ component is the sum of entries in the $i^\text{th}$ row in $L$. But the sum of entries in the $i^\text{th}$ row of $D$ is $\deg(v_i)$ by definition, as is the sum of entries of the $i^\text{th}$ row of $A$, because this counts all vertices that $v_i$ is adjacent to, (with multiplicity of edges if the graph is not simple).
So these cancel out, leaving $0$, and hence $Lv = 0v$.
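A quick numerical check for a complete graph, as in the question (a sketch with numpy; $K_5$ is an arbitrary choice):

```python
import numpy as np

n = 5
A = np.ones((n, n)) - np.eye(n)    # adjacency matrix of the complete graph K_n
D = np.diag(A.sum(axis=1))         # degree matrix: every vertex has degree n - 1
L = D - A
print(L @ np.ones(n))              # the zero vector, so (1, ..., 1) is an eigenvector with eigenvalue 0
```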
|
H: self-adjoint operator and symmetric operator
We recently learned about self-adjoint operators, with the formal definition $ ⟨Tv, w⟩ = ⟨v, Tw⟩$ for every $v, w$ in $V.$
On the other hand, we learned that a self-adjoint operator can be represented as a symmetric operator (or matrix).
Can you explain the geometric interpretation of a symmetric operator (matrix) and what it means?
Also, we learned that a symmetric operator always has real eigenvalues; I understood the part about the eigenvalues being real, but why do such eigenvalues always exist?
Also, can you help me understand why any two columns of a symmetric matrix are orthogonal (for every $C_1, C_2$ in a symmetric $A$, $\langle C_1, C_2\rangle = 0$)? I understood the algebraic proof, but I would be happy to see some geometric intuition.
And finally, what is the connection between the eigenvalues and eigenvectors of a symmetric $A$ and the linear operator that $A$ represents? (We learned that it is somehow related to the directions in which the operator scales/squeezes the plane.)
Thank you
AI: Geometrically, it's probably best to think about self-adjoint operators in terms of their eigenspaces. An operator on a finite-dimensional inner product space is self-adjoint if and only if its eigenvalues are real and its eigenspaces are orthogonal and sum (directly) to the whole space.
The real eigenvalues means, roughly, there can't be any kind of rotation happening in any plane. All of the orthogonal spaces must stretch, shrink, and/or reflect.
Here's some examples, and geometric reasoning to support why/why not they are self adjoint:
Rotations in a plane
As stated before, there can't really be rotations while remaining self-adjoint, as these produce complex eigenvalues (of modulus $1$, in fact).
Projections onto a line/plane/subspace by least distance
Yep! These are self-adjoint. In essence, we are decomposing the space into the space we are projecting onto (the range), and its orthogonal complement (the kernel). We are leaving the vectors in the range alone (i.e. multiplying them by $1$), and shrinking the vectors in kernel to nothing (i.e. multiplying them by $0$).
Reflections, by least distance
Also self-adjoint. Rather than shrinking the complement to nothing, instead we are reflecting and multiplying the vectors by $-1$. This still makes them self-adjoint, but it will mean that the map is not positive-(semi)definite.
Projections onto one subspace, along a complementary subspace
This is a more general type of projection, which won't generally be self-adjoint, as the complementary subspace need not be orthogonal to the original subspace.
Hope that helps!
EDIT: Regarding orthogonal eigenspaces, suppose that $T : V \to V$ is self-adjoint, and $v_1, v_2$ are eigenvectors for distinct eigenvalues $\lambda_1, \lambda_2$. We simply need to show $\langle v_1, v_2 \rangle = 0$.
To prove this, consider
\begin{align*}
\lambda_1 \langle v_1, v_2 \rangle &= \langle \lambda_1 v_1, v_2 \rangle \\
&= \langle Tv_1, v_2 \rangle \\
&= \langle v_1, Tv_2 \rangle \\
&= \langle v_1, \lambda_2 v_2 \rangle \\
&= \overline{\lambda_2} \langle v_1, v_2 \rangle \\
&= \lambda_2 \langle v_1, v_2 \rangle,
\end{align*}
where the last line uses the fact that $\lambda_2$ is real. Thus, we have
$$(\lambda_1 - \lambda_2)\langle v_1, v_2 \rangle = 0 \implies \langle v_1, v_2 \rangle = 0$$
since $\lambda_1 - \lambda_2 \neq 0$.
|
H: $A^{-1}XB = I$ Solve for X matrix equation
$A^{-1}XB = I$, where $A$ and $B$ are given square matrices.
If I want to solve this matrix equation for $X$, do I need to change it to a form like $X = A\times B\times I$?
AI: If $B$ is invertible, i.e. $B^{-1}$ exists, then, multiplying by $A$ from the left and by $B^{-1}$ from the right, you have:
\begin{equation}
A^{-1}XB = I\iff X=AA^{-1}XBB^{-1} = AIB^{-1}=AB^{-1},
\end{equation}
using that $AI=A$, respectively $IB^{-1}=B^{-1}$.
|
H: Density of $g(Y)=\frac{1}{2}\mathbb{E}[X|Y]$
Let $(X,Y)$ be a random variable with density $f(x,y)=cx(y-x)e^{-y}$ for $0 \leq x \leq y <\infty$. Find:
1) the value of $c$.
$\rightarrow c=1$
2) the density of $X|Y=y$.
$\rightarrow f_{X|Y}(x,y):=\frac{f_{XY}(x,y)}{f_Y(y)}=\frac{fx(y-x)}{y^3}$
3) the density of the random variable $g(Y)=\frac{1}{2}\mathbb{E}[X|Y]$.
$\rightarrow g(Y)=\frac{1}{4}y$.
Is it correct? In particular the third point? Thanks in advance.
AI: There is a missing factor of 6 in your solution to part 2); I can only assume that this is a typo, and the $f$ was intended to be a 6. Your solution to the third point is indeed correct.
|
H: Variance of $X$, an uniformly random sum from a finite set $S$.
This is from my class. Can I have an explanation of what is going on in the last equality (i.e. $\sum_{i=1}^{k}\operatorname{Var}\left(\varepsilon_{i}\right) s_{i}^{2}=\frac{1}{4} \sum_{i=1}^{k} s_{i}^{2}$)?
Let $S=\left\{s_{1}, s_{2}, \ldots, s_{k}\right\} \subseteq[n]$ be a largest set with distinct sums (i.e. the sums of elements in all $T \subset S$ are all different).
Let $X$ be a uniformly random sum from $S$.
$\Rightarrow X=\sum_{i=1}^{k} \varepsilon_{i} s_{i},$ where each $\varepsilon_{i}$ is independent, uniform on {0,1}.
Let $\mu:=\mathbb{E}[X]=\sum_{i=1}^{k} \mathbb{E}\left[\varepsilon_{i} s_{i}\right]=\frac{1}{2} \sum_{i=1}^{k} s_{i}$. (Actual value is unimportant)
Variables $\varepsilon_{i}$ are independent
$\Rightarrow \operatorname{Var}(X)=\operatorname{Var}\left(\sum_{i=1}^{k} \varepsilon_{i} s_{i}\right)=\sum_{i=1}^{k} \operatorname{Var}\left(\varepsilon_{i}\right) s_{i}^{2}=\frac{1}{4} \sum_{i=1}^{k} s_{i}^{2} \leq \frac{1}{4} n^{2} k$
AI: It follows from the definition. Since $\varepsilon_i$ is a uniform r.v. on $\{0, 1\}$, we have
$$
\mathbb E[\varepsilon_i] = \sum_{x \in \{0, 1\}} x \mathbb P(\varepsilon_i = x) = 0\cdot \frac 1 2 + 1 \cdot \frac 1 2 = \frac 1 2,
$$
and we similarly obtain $\mathbb E[\varepsilon_i^2] = \frac 1 2$, so
$$
\mathrm{Var}(\varepsilon_i) = \mathbb E[\varepsilon_i^2] - \mathbb E[\varepsilon_i]^2 = \frac 1 2 - \frac 1 4 = \frac 1 4.
$$
This constant is then taken outside of the sum.
|
H: Where is the error? Application of FTC
Please, I need help to see the error in this argument: $$\int_0^tf(a)g(t-a)da=\int_0^tf(t-a)g(a)da\implies$$
$$\dfrac{d}{dt}\int_0^tf(a)g(t-a)da=\dfrac{d}{dt}\int_0^tf(t-a)g(a)da\implies$$
$$f(t)g(0)=f(0)g(t)$$
If $f(0)=g(0)=k\neq 0$, then
$$f(t)=g(t)\forall t\in\Bbb R.$$
It is obviously incorrect, but I could not find the error.
Many thanks!
AI: You can't apply FTC here -- you have to use the Leibniz integral rule. This is because not only are the bounds changing, but the inside of the integral itself is changing as well. Contrast this with $$\frac{d}{dx}\int_0^x xt\;dt=\frac{d}{dx}\left(\frac{x^3}{2}\right)=\frac{3x^2}{2},$$ not $x\cdot x = x^2$ as a naive application of FTC would entail.
|
H: Let $f(x) = x + 2x^2 + 3x^3 + 4x^4 + 5x^5 + 6x^6$, and $S = [f(6)]^5 + [f(10)]^3 + [f(15)]^2$. Find the remainder when $S$ is divided by 30.
Let $f(x) = x + 2x^2 + 3x^3 + 4x^4 + 5x^5 + 6x^6$, and let $S = [f(6)]^5 + [f(10)]^3 + [f(15)]^2$. Compute the remainder when $S$ is divided by 30.
I don't really know how to start this, any help is appreaciated.
AI: Hint:
$$6^2=36 \equiv 6 \mod 30$$
$$10^2=100 \equiv 10 \mod 30$$
$$15^2=225 \equiv 15 \mod 30$$
but
$$ 6+2\cdot 6+3\cdot 6+4\cdot 6+5\cdot 6+6\cdot 6=21\cdot 6$$
thus
$$f(6)\equiv 6 \mod 30$$
You can continue. If you don't find $21$ as the remainder, it means you made a mistake somewhere.
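A direct computation confirming that remainder (a sketch in Python):

```python
def f(x):
    return sum(k * x**k for k in range(1, 7))   # x + 2x^2 + ... + 6x^6

S = f(6)**5 + f(10)**3 + f(15)**2
print(S % 30)   # 21
```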
|
H: On finding domain of a trigonometric function
Given $f(\theta)= 11\cos^2\theta - 9\sin^2\theta + 15 \sin\theta\cos\theta$.
Find the range of the function given above and express it in the form $a \cos(2\theta+ \alpha) + b$, where $a$, $b$, $\alpha$ are real numbers.
I tried using the completing-the-square method but it was no good.
Then I tried adding and subtracting terms from the function but got nowhere.
Please help.
AI: $$F(x)=11\cos^2 x- 9 \sin^2 x+15 \sin x \cos x=1+10 \cos 2x +(15/2) \sin 2x$$ Since the amplitude of $10\cos 2x+(15/2)\sin 2x$ is $\sqrt{10^2+(15/2)^2}=\sqrt{\tfrac{625}{4}}=\tfrac{25}{2}$, we get $$\implies F(x)=1+\frac{25}{2} \sin (2x+\alpha)$$
$$\implies F_{min}=1-\frac{25}{2}, F_{max}=1+\frac{25}{2}$$
Hence the range of $F(x)$ is $[-23/2,27/2]$
|
H: If you can reach a point in $R^4$ does that automatically mean that your set of vectors must be Linearly Independent in $R^4$?
I am working on part c): given that we can reach the point $(1,1,1,1)$, does this mean the vectors are linearly independent in $R^4$?
We can formulate this as a 4x6 matrix and as such the rank must be less than or equal to 4 but how do we know it is 4 without doing any calculations?
How do I determine the number of solutions in this scenario then? Because the system is not linearly independent in $R^6$ it means we can find some linear combination is zero.
I do not follow the reasoning given here:
This is not homework; it is from the MIT course given by Strang.
AI: Writing out that $4 \times 6$ matrix gives $$\begin{pmatrix} 0&0&0&1&1&2\\ 0&0&1&1&2&0\\ 0&1&1&2&0&0\\ 1&1&2&0&0&0\end{pmatrix}.$$ You can see that this is similar to a matrix in echelon form (just put the rows in reverse order) with four pivot columns, i.e. four linearly independent vectors, so the rank is $4$.
Now, we’ve observed that $(2,-1,0,1,0,0)$ is a solution. In other words, $$2r_1 + (-1)r_2 + 0r_3 + 1r_4 + 0r_5 + 0r_6 = (1,1,1,1) \tag{1}.$$
On the other hand, as you observed, since there are $6$ vectors, they must be linearly dependent in $\mathbb{R}^4$, so there is $\alpha_1, …, \alpha_6$ not all zero such that $\alpha_1 r_1 + … + \alpha_6 r_6 = 0.$ Then note that, for any $t \in \mathbb{R}$, $t\alpha_1 r_1 + … + t\alpha_6 r_6 = t(\alpha_1 r_1 + … + \alpha_6 r_6) = t \cdot 0 = 0$ as well. Thus, $$(1,1,1,1) = \text{Equation 1} = \text{(Equation 1)} + 0 = \text{(Equation 1)} + (t\alpha_1 r_1 + … + t\alpha_6 r_6) = (2+t\alpha_1)r_1 + (-1+t\alpha_2)r_2 + (t\alpha_3) r_3+ (1+t\alpha_4)r_4 + (t\alpha_5)r_5 + (t\alpha_6)r_6.$$
Hence we have infinitely many solutions.
More generally, we have the following result: If $A$ is an $m \times n$ matrix with $n > m$, then by the rank-nullity theorem, the null space of $A$ is non-zero, so in particular the null space has infinitely many vectors. So, if the system $Ax = b$ is consistent (i.e. has a solution), say it has a solution $x^*$, then for any vector $k$ of the null space, we have $A(x^*+k) = Ax^* + Ak = Ax^* + 0 = b$, so that $x^* + k$ is also a solution to the system. Since there are infinitely many vectors $k$, we have infinitely many solutions. To summarize: Every consistent system with more unknowns than equations has infinitely many solutions.
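As a numerical sanity check of the argument above (my own illustration; the variable names are arbitrary), one can verify the rank and exhibit several solutions:

```python
import numpy as np

# Columns are the six vectors r_1, ..., r_6 from the answer.
A = np.array([[0, 0, 0, 1, 1, 2],
              [0, 0, 1, 1, 2, 0],
              [0, 1, 1, 2, 0, 0],
              [1, 1, 2, 0, 0, 0]], dtype=float)
b = np.ones(4)

print(np.linalg.matrix_rank(A))              # 4: the columns span all of R^4

x_star = np.array([2, -1, 0, 1, 0, 0], dtype=float)   # the particular solution above
print(np.allclose(A @ x_star, b))            # True

# Any null-space vector k gives further solutions x_star + t*k for every t.
_, _, Vt = np.linalg.svd(A)
k = Vt[-1]                                   # a right singular vector in the null space
print(np.allclose(A @ k, 0))                 # True
print(np.allclose(A @ (x_star + 5 * k), b))  # True: infinitely many solutions
```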
|
H: product of matrices, and its norm
I have a product of matrices $\prod\limits_{i=1}^{n} a_i$, If $b$ is an eigenvalue of $a_i$ for any $i$, then $|b|<1$.
(1) Under what norm or condition, $\|\prod\limits_{i=1}^{n} a_i\|<r<1$
(2) If I had infinite such matrices, $\|\prod\limits_{i=1}^{\infty} a_i\|<r<1$ true for some norm?
(3) What if all the matrices are diagonalizable? In that case, will I get (1) and (2) as desired in some norm?
Thanks for any help.
AI: Perhaps the vector-induced (operator) norm can work, with one caveat. If every eigenvalue $b$ of $a_i$ satisfies $|b|<1$, then the eigenvalue of $a_i$ of largest modulus, call it $\tilde{a}_i$, also satisfies $|\tilde{a}_i|<1$. However, for a general matrix the induced $2$-norm is the largest singular value, which can exceed $1$ even when all eigenvalues lie inside the unit disk; the identification $\lVert a_i\rVert_2=|\tilde{a}_i|$ is valid when the $a_i$ are normal (e.g. symmetric or Hermitian). Assume that from here on.
Taking a look specifically at the statement in the wiki page:
"Any induced operator norm is a submultiplicative matrix norm: $\lVert AB \rVert \le \lVert A \rVert \lVert B \rVert$".
Thus, in the induced norm, we should be able to say that
\begin{equation}
\lVert \prod_{i=1}^{n} a_i \rVert \le \prod_{i=1}^{n} \lVert a_i \rVert \le |\tilde{a}|^{n}<1,
\end{equation}
where $\tilde{a}$ is the eigenvalue of largest modulus among all the $\tilde{a}_i$'s.
With that caveat, I think this applies to all three points. For a general (non-normal) matrix with spectral radius below $1$ one can still construct some operator norm in which that single matrix has norm less than $1$, but a single norm working for all the $a_i$ at once is not guaranteed.
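To illustrate why the normality assumption matters (my own numerical example, not from the question):

```python
import numpy as np

# Non-normal: eigenvalues well inside the unit disk, yet the 2-norm is large.
a = np.array([[0.5, 10.0],
              [0.0,  0.5]])
print(np.abs(np.linalg.eigvals(a)))   # [0.5 0.5]
print(np.linalg.norm(a, 2))           # about 10.0 -- far above 1

# Normal (here symmetric): the 2-norm equals the largest eigenvalue modulus.
s = np.array([[0.5, 0.2],
              [0.2, 0.5]])
print(np.linalg.norm(s, 2), max(abs(np.linalg.eigvals(s))))   # both about 0.7
```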
|
H: Verification of Discrete metric space
When we define a metric on $\mathbb{N}$ by $d(m,n)=|\frac{1}{n}-\frac{1}{m}|$. I need to show this metric gives discrete topology on $\mathbb{N}$. For this i need to show singletons are open or in other words for any $n\in \mathbb{N}$ there exists $r>0$ such that $B_{d}(n,r)=\{n\}$. My question is how to find such $r$ for any given $n\in \mathbb{N}$. For n=1, i am able to find $r=\frac{1}{4}$ but i am not able to generalise it. Please suggest me a hint.
AI: As you asked for a hint, this idea should work for you.
You know that $\frac{1}{x}$ is strictly decreasing on $(0,\infty)$. Also, $\{\frac{1}{n}\}_{n\in \mathbb N}$ is a strictly decreasing sequence.
Observe that for a fixed $n$ you have $d(n,n+1)\le d(n,m)$ for all $m\ne n$ with $m\in \mathbb N$. So choose $r<d(n,n+1)$ and you are done.
|
H: Does the converse of if $x$ is a transitive set then $\bigcup (x^{+})=x$ hold?
I am studying Set Theory from the book Set Theory: A First Course by Daniel W. Cunningham. It proves that if $x$ is a transitive set, then $\bigcup x^+ = x$, where $x^+ = x\cup\{x\}$. I really wonder if the converse is true. I tried proving it but got stuck. I couldn't find counterexamples either.
Suppose $a \in x$. Then $a\in y$ for some $y\in x^{+}$. Then $y=x$ or $y\in x$. If $y = x$ then we're done. Suppose $y\ne x$ then $y\in x$. What happens then?
Hints would be appreciated.
AI: We have $\bigcup x^+ = (\bigcup x)\cup x,$ so $\bigcup x^+ = x$ means that $\bigcup x \subseteq x,$ which we can see implies (actually is equivalent to) transitivity: If $y\in x$ and $z\in y,$ then $z\in \bigcup x,$ so $z\in x.$
|
H: Optimality proof for the coin-change problem of 1, 2, 5 and 10
I have four types of coins: 1, 2, 5 and 10. When I am given a number $k \in \mathbb{N}^{+}$, I have to return the least number of coins to reach that number. Using a greedy algorithm I can simply return all the possible 10 coins, and from the remaining, all possible 5 coins, and so on.
I need to prove that this greedy algorithm always returns an optimal solution.
After some research, I realized this problem is called the coin-change problem, and those coin systems that always return optimal solutions are called "canonical coin systems". Characterization of canonical coin systems has been made partially using theorems over specific subsets (1, 2, 3), but these theorems seem pretty hard to prove. Is there any simpler proof that I can use for this specific case of 1, 2, 5 and 10 without using those theorems?
For instance, the coinset 1, 5 and 10 can be easily proved to be canonical because every element is a factor of the larger elements. Can I use something similar in this case?
AI: If you already know the proof as to why $S = \{1, 5, 10\}$ is canonical, then you can easily prove that $S' = \{1, 2, 5, 10\}$ is canonical. This is clearly true since, if $x$ never equals $2, 3,$ or $4$, then we never exercise our option to take the coin $2$, and we obtain an optimal solution due to the optimality of $S$. Conversely, if $x$ equals $2, 3,$ or $4$ at some point, then we can only do better by taking fewer coins since all of these cases reduce directly to either $0$ or $1$.
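For completeness, a minimal sketch of the greedy algorithm under discussion (my own code; the function name is arbitrary):

```python
def greedy_change(k, coins=(10, 5, 2, 1)):
    """Return the coins the greedy algorithm picks for amount k."""
    used = []
    for c in coins:                 # coins in decreasing order
        count, k = divmod(k, c)     # take as many of this coin as possible
        used.extend([c] * count)
    return used

print(greedy_change(28))   # [10, 10, 5, 2, 1] -- five coins, which is optimal here
```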
|
H: Calculating the characteristic polynomial of a 3x3 matrix
I had to calculate the eigenvalues of the following matrix.
$$H=h\begin{pmatrix}A+\frac{1}{2}(B+C) & 0 & \frac{1}{2}(B-C) \\ 0 & B+C & 0 \\ \frac{1}{2}(B-C) & 0 & A+\frac{1}{2}(B+C)\end{pmatrix}$$
for that, I calculated the characteristic polynomial
$$ \text{char}(\lambda)=\det(H-\lambda Id_3) $$
which I did as one usually does with the Laplace Expansion. The master solution is
$$(A+B-\lambda)(A+C-\lambda)(B+C-\lambda)=0$$
Now that's a nice polynomial. I'm wondering if I'm missing something here. My approach of calculating the determinant seemed way more complex. Did they just rewrite the polynomial nicely, or am I missing something here which would give me the solution more easily?
AI: The eigenvalue of $B+C$ follows because subtracting $(B+C)I$ yields a matrix with a row of zeroes (and hence determinant zero). To find the remaining two eigenvalues, we ignore the middle column and row (since this just adds a factor of $(B+C-\lambda)$ to our polynomial), and then observe that subtracting $(A+B)I$ or $(A+C)I$ again yields a matrix with zero determinant.
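To spell out that last step (a worked version of my own, with the overall factor $h$ absorbed into $A,B,C$): the remaining $2\times 2$ determinant factors by difference of squares,
$$\det\begin{pmatrix}A+\tfrac{1}{2}(B+C)-\lambda & \tfrac{1}{2}(B-C)\\ \tfrac{1}{2}(B-C) & A+\tfrac{1}{2}(B+C)-\lambda\end{pmatrix}=\Big(A+\tfrac{1}{2}(B+C)-\lambda\Big)^2-\Big(\tfrac{1}{2}(B-C)\Big)^2=(A+B-\lambda)(A+C-\lambda).$$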
|
H: How to find a solution for a matrix with 1 equation and 3 unknown variables?
The task is to find all solutions for $A_1 x = 0$ with $x\in \mathbb R^3$
$$A_1 = \begin{pmatrix}
6 & 3 & -9 \\
2 & 1 & -3 \\
-4 & -2 & 6 \\
\end{pmatrix}
$$
The given solution is as follows:
$$L_1 = \{ \lambda \begin{pmatrix}
1 \\
1 \\
1 \\
\end{pmatrix} + \mu \begin{pmatrix} 0 \\ 3 \\ 1 \\ \end{pmatrix} | \lambda, \mu \in \mathbb R \}$$
As far as I understand, the matrix has 3 unknowns and only 1 equation, resulting in:
$$A_1 = \begin{pmatrix}
2 & 1 & -3 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
\end{pmatrix}
$$
My approach was to find a way to display the soltions by rearranging $x_1, x_2$ and $x_3$ in dependancy to each other, which results in:
$$L_1 = \{x = \begin{pmatrix} \frac{3x_3-x_2}{2} \\ 3x_3-2x_1 \\ \frac{2x_1 + x_2}{3} \end{pmatrix} \space | \space x \in \mathbb R^3\}$$
I tried searching for a similar problem but usually there are at least two equations for three unknowns resulting in one free to choose and the others depending on the one chosen.
AI: I don't understand how you can represent the solutions of the linear equation
$$2x_1+x_2-3x_3=0$$
as
$$L_1 = \left\{x = \begin{pmatrix} \frac{3x_3-x_2}{2} \\ 3x_3-2x_1 \\ \frac{2x_1 + x_2}{3} \end{pmatrix} \space | \space x \in \mathbb R^3\right\}.$$
You may write for example,
$$L_1 = \left\{\begin{pmatrix} \frac{3x_3-x_2}{2} \\ x_2 \\ x_3 \end{pmatrix} \space | x_2, x_3 \in \mathbb{R}\right\}
=\{ x_2 \mathbf{u} + x_3 \mathbf{v} | x_2, x_3 \in \mathbb{R} \}$$
where
$$ \mathbf{u}=(-1/2,1,0)^t\quad\mathbf{v}=(3/2,0,1)^t.$$
Similarly
$$L_1 = \left\{\begin{pmatrix} x_1 \\ 3x_3-2x_1 \\ x_3\end{pmatrix} \space | \space x_1, x_3 \in \mathbb{R}\right\}
=\{ x_1 \mathbf{u} + x_3 \mathbf{v} | x_1, x_3 \in \mathbb{R} \}$$
where
$$ \mathbf{u}=(1,-2,0)^t\quad\mathbf{v}=(0,3,1)^t.$$
The set of all solutions of the given equation is a vector space of dimension $2$ in $\mathbb{R}^3$. We find two linearly independent vectors $\mathbf{u}$, and $\mathbf{v}$ which satisfy the equation and the set of solutions can be written as
$$L_1 = \{ \lambda \mathbf{u} + \mu \mathbf{v} | \lambda, \mu \in \mathbb{R} \}.$$
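As a check against the given solution (my addition): the two vectors $(1,1,1)^t$ and $(0,3,1)^t$ from the book both satisfy $2x_1+x_2-3x_3=0$ (indeed $2+1-3=0$ and $0+3-3=0$) and are linearly independent, so they are simply a different basis of the same two-dimensional solution space.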
|
H: Definition of topological space & open sets
I am just getting into topology, and I have a doubt regarding open sets.
Let $(X, \mathcal{T})$ be a topological space. Given an open set of $X$, $A$, and subset of $X$, $B$ such that
$$A\cap B \in \mathcal{T}$$
$$A\cup B \in \mathcal{T}$$
Can I conclude that $B$ is also an open set? That is, if I have an arbitrary set of $X$ whose intersection and union with an open set are themselves open sets, does this imply the arbitrary set is also open?
AI: No. It does not imply that the set is open. For instance:
Let $X= \left\lbrace a, b, c\right\rbrace$ and consider the topological space $(X, \tau)$ where $\tau=\left\lbrace\varnothing, X, \left\lbrace a \right\rbrace\right\rbrace$.
Let $A=\left\lbrace a \right\rbrace$, let $B=\left\lbrace b, c \right\rbrace$.
Then,
$A \cup B= X \ \in \ \tau $,
$A \cap B= \varnothing \ \in \ \tau$.
However $B \ \notin \ \tau$.
|
H: Find out whether linearity for the functions $f$ and $g$ persists
Given $$f: \mathbb{C}^3 \rightarrow \mathbb{C}^2, \begin{bmatrix}
a\\
b\\
c
\end{bmatrix} \mapsto \begin{bmatrix}
ia+b\\
c
\end{bmatrix}, \,\,\,\,\,\,\,g:
\mathbb{C}^3 \rightarrow \mathbb{C}^2, \begin{bmatrix}
a\\
b\\
c
\end{bmatrix} \mapsto \begin{bmatrix}
ia+b\\
c+1
\end{bmatrix}$$
I would like to find out whether each function is linear. If I understood correctly, it needs to be shown that they are homogeneous and additive. So let
$$\vec{v_1}=\begin{bmatrix}
a_1\\
b_1\\
c_1
\end{bmatrix} \,\,\,\,,
\vec{v_2}=\begin{bmatrix}
a_2\\
b_2\\
c_2
\end{bmatrix} \,\,\,\, \text{ where each is from } \,\, \mathbb{C^3}$$
Because the functions need to be homogeneous and additive, we need to compute
$$f(\vec{v_1}+\vec{v_2}) = f(\begin{bmatrix}
a_1+a_2\\
b_1+b_2\\
c_1+c_2
\end{bmatrix}) =
\begin{bmatrix}
i(a_1+a_2)+(b_1+b_2)\\
(c_1+c_2)
\end{bmatrix}=
\begin{bmatrix}
(ia_1+b_1)+(ia_2+b_2)\\
(c_1+c_2)
\end{bmatrix}$$
But from here I don't know how to continue and what to do ? :C
AI: You want to show $f(v_1 + v_2) = f(v_1) + f(v_2)$. If we compute the values $f(v_1)$ and $f(v_2)$ we find that
$$
\begin{align*}
f(v_1) + f(v_2)
&= \begin{bmatrix}
ia_1 + b_1 \\
c_1
\end{bmatrix} +
\begin{bmatrix}
ia_2+b_2\\
c_2
\end{bmatrix} \\
&= \begin{bmatrix}
(ia_1+b_1)+(ia_2+b_2)\\
c_1+c_2
\end{bmatrix} \\
&= f(v_1 + v_2).
\end{align*}
$$
Thus, we see that $f$ is linear (if we quickly compute $f(av) = af(v)$).
Another way to see this quickly is to note that all the components of $f(v)$ are linear combinations of the entries of $v$.
This is not the case for $g$ due to the addition of 1 in the second component, hence $g$ is not linear.
Alternatively, one can plug in $0$ to get
$$g(0) = \begin{bmatrix}
0\\
1
\end{bmatrix} \ne 0.$$
|
H: Determine a total cost of producing x units
marginal cost is $C'(x) = 5 + \frac{10}{\sqrt{x}}$, it is known that producing 100 units costs 950$, how much would it be to produce 400 units?
from that I can calculate total cost function which is $C(x) = 5x + 20\sqrt{x}$
Is it enough to just plug in 400? Or should I plug in 100 and get 700\$ which means 250$ is fixed cost?
So should I plug in 400 and add 250 to the result?
Can anyone help me please?
AI: When you integrate the marginal cost function, your total cost function should actually be $C(x)=5x+20\sqrt{x}+C$. You're given that at 100 units, the cost is $950:
$$C(100)=950=700+C \implies C=250$$
Now, plug in 400 units into $C(x)=5x+20\sqrt{x}+250$ to obtain the cost for producing 400 units.
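For reference, carrying out that last substitution (my addition): $$C(400)=5(400)+20\sqrt{400}+250=2000+400+250=2650,$$ so producing 400 units costs \$2650.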
|
H: When is $\sqrt{k}\operatorname{Var}\left(\sum_{i=1}^{k}X_{i}\right) \leq \sum_{i=1}^{k}\operatorname{Var}(X_{i}) $ true?
Assume we have $k$ dependent random variables $X_{1}, \dots, X_{k}$ with $\operatorname{Var}(X_{i}) < \infty$.
In which case
$$\sqrt{k}\operatorname{Var}\left(\sum_{i=1}^{k}X_{i}\right) \leq \sum_{i=1}^{k}\operatorname{Var}(X_{i})\,?
$$
It seems that negative covariance is not enough.
AI: Starting with the identity
$$\operatorname{Var} \left( \sum_{i=1}^k X_i \right) = \sum_{i=1}^k \operatorname{Var}(X_i) + \sum_{i > j} 2\operatorname{Cov}(X_i, X_j)$$
we can see why having negative covariance isn't sufficient; we need the covariances to be negative enough to offset the $\sqrt k$ multiplier. Specifically, what we need is
\begin{align*}
& \sqrt k\sum_{i=1}^k \operatorname{Var}(X_i) + \sqrt k \sum_{i > j} 2\operatorname{Cov}(X_i, X_j) \leq \sum_{i=1}^k \operatorname{Var}(X_i) \\
\iff & \sum_{i > j} \operatorname{Cov}(X_i, X_j) \leq \frac{k^{-1/2} - 1}{2 } \sum_{i=1}^k \operatorname{Var}(X_i)
\end{align*}
and we note that the coefficient on the right is always negative when $k \geq 2$, implying that having negative total covariances is necessary but not sufficient.
|
H: How to prove that some specific group is not isomorphic to any member of any of the 5 families of groups?
For example, let's say we have a group Q8 (Quaternion group).
How to prove that this group is not isomorphic to any member of any of the families of groups (Cyclic, Abelian, Dihedral, Symmetric, Alternating)?
What is the general approach for solving this problem for any given group?
AI: It is clear that $Q_8$ is not abelian, hence not cyclic, because for example $ij=-ji$. So we can use, if we want, that we only have two nonabelian groups of order $8$, namely $Q_8$ and $D_4$. So we only need to distinguish these two groups. But this is easy. Every subgroup in $Q_8$ is normal, which is not true for $D_4$.
References:
How can you show there are only 2 nonabelian groups of order 8?
Show that every subgroup of $Q_8$ is normal.
|
H: Integrability of $\frac{x_1}{|x|^{n}}$ over the unit ball
Is $\frac{x_1}{|x|^{n}}$ integrable over the unit ball of $B_1(\mathbb{R}^n)$? That is, is
$$\int_{B_1(\mathbb{R}^n)} \frac{|x_1|}{|x|^{n}}<\infty?$$
I know that $|x|^{-a}$ is integrable over the ball if $a<n$, but what if $a<n$ in just one dimension? My heart says it probably isn’t, but I’m not sure.
AI: Just remark that $|x_1|≤|x|$, so
$$
\int_{B_1(\mathbb{R}^n)} \frac{|x_1|}{|x|^n}\,\mathrm{d}x ≤ \int_{B_1(\mathbb{R}^n)} \frac{1}{|x|^{n-1}}\,\mathrm{d}x < \infty
$$
In the more general case when $a≠n$, first remark that the only singularity is at $x=0$ so it is sufficient to prove that the integral is finite on $[-1,1]^n$. On this set, we can factorize the integral using Fubini theorem and writing $x = (x_1,\tilde{x})$ and doing the change of variable $x_1 = |\tilde{x}|\,r$ and then $s=r^2$ one gets
$$
\begin{align*}
\int_{[-1,1]^n} \frac{|x_1|}{|x|^a}\,\mathrm{d}x &= \int_{[-1,1]^{n-1}}\int_{-1}^1 \frac{|x_1|}{\left(\sqrt{|x_1|^2+|\tilde{x}|^2}\right)^a}\,\mathrm{d}x_1\,\mathrm{d}\tilde{x}
\\
&= \int_{[-1,1]^{n-1}}\int_{-1/|\tilde{x}|}^{1/|\tilde{x}|} \frac{|\tilde{x}|\,|r|}{\left(\sqrt{|r|^2+1}\right)^a|\tilde{x}|^a}|\tilde{x}|\,\mathrm{d}r\,\mathrm{d}\tilde{x}
\\
&= \int_{[-1,1]^{n-1}} \frac{1}{|\tilde{x}|^{a-2}}\int_{0}^{1/|\tilde{x}|} \frac{2\,r}{\left(\sqrt{1+r^2}\right)^a}\,\mathrm{d}r\,\mathrm{d}\tilde{x}
\\
&= \int_{[-1,1]^{n-1}} \frac{1}{|\tilde{x}|^{a-2}}\int_{0}^{|\tilde{x}|^{-2}} \frac{\mathrm{d}s}{\left(1+s\right)^{a/2}}\,\mathrm{d}\tilde{x}
\\
&= \frac{2}{a-2}\,\int_{[-1,1]^{n-1}} \frac{1}{|\tilde{x}|^{a-2}}\left(1-(1+|\tilde{x}|^{-2})^{1-a/2}\right)\mathrm{d}\tilde{x}
\\
&= \frac{2}{a-2}\,\int_{[-1,1]^{n-1}} \frac{1}{|\tilde{x}|^{a-2}}-\frac{1}{(1+|\tilde{x}|^2)^{a/2-1}}\mathrm{d}\tilde{x}
\end{align*}
$$
where $a≠ 2$ (if $a=2$ you get a $\ln$ when you compute the integral in $s$). This is integrable if and only if $a-2 <n-1$, so
$$\boxed{\int_{B_1(\mathbb{R}^n)} \frac{|x_1|}{|x|^a}\,\mathrm{d}x < ∞ \ \Leftrightarrow\ a<n+1}$$
|
H: Existence of $f$ such that $f(x,|x|^2)f(y,|y|^2)=0$ whenever $x \cdot y=0$
Does there exists a non trivial continuous function (other than $f=0$) with the following :
$f:R^4 \to [0, \infty)$
Let a $x,y \in R^3$ and their respective Euclidean norm squared $|x|^2$ and $|y|^2$ and their dot product $x \cdot y$
$f(x,|x|^2)f(y,|y|^2)=0$ whenever $x \cdot y=0$
$f(x,|x|^2)f(-x,|x|^2)=0$
AI: $f(x_1,x_2,x_3,x_4)=(x_1+|x_1|)(x_2+|x_2|)(x_3+|x_3|)$
or any function which is zero outside of the nonnegative orthant $\mathbb{R}_+^3$.
If $x,y$ are perpendicular then at least one of them has at least one nonpositive coordinate and then $f$ vanishes. One of $x,-x$ also has at least one nonpositive coordinate.
|
H: I have to find the conditional pmf's $f_{X|Y}(x|y)$ and $f_{Y|X}(y|x)$
$$f(x, y) = \frac{1}{x+y-1}+\frac{1}{x+y+1}-\frac{2}{x+y}$$
I have to find the conditional pmf's $f_{X|Y}(x|y)$ and $f_{Y|X}(y|x)$
I know we can use the following formula, but I do not know how to apply it:
$f_{X|Y}(x|y)$ = $P(X=x|Y=y)$ = $\frac{P(X=x,Y=y)}{P(Y=y)}$ and $f_{Y|X}(y|x)$ = $\frac{P(X=x,Y=y)}{P(X=x)}$.
After this, I have to show that X, Y are independent.
Any help would be grateful Thanks in advance.
I already started with some preliminaries in another question. It was better to post this as a new question.
I have to find the marginal pmf's $f_X$, $f_Y$ and $f_{X+Y}$ of $X+Y$.
AI: We already know that: $$f_{X,Y}\left(x,y\right)=\frac{1}{x+y-1}+\frac{1}{x+y+1}-\frac{2}{x+y}\tag1$$
In your former question you found out that: $$f_{Y}\left(y\right)=\frac{1}{y^{2}+y}\tag2$$
So you can get an expression for: $$f_{X\mid Y}(x\mid y)=\frac{P(X=x,Y=y)}{P(Y=y)}=\frac{f_{X,Y}(x,y)}{f_Y(y)}\tag3$$by substituting $(1)$ and $(2)$ in $(3)$.
Observe that in $(3)$ the $y$ is fixed and the $x$ is variable. Another notation for it (making that fact more clear) is: $$f_{Y=y}(x)$$
Similar story for $f_{Y\mid X}(y\mid x)$.
|
H: Multivariate Analysis: formula with unknown origin
A formula to study the stability of a multivariant variable was given to me.
The formula is introduced below:
$$ \sum_{i=1}^k (p_{i,2}-p_{i,1})\log \bigl(\frac{p_{i,2}}{p_{i,1}}\bigl) $$
Where:
$p_{i,j}$ is the relative frequency of the observed value $i$ in the sample $j$.
$j$ refers to the beginning of the relevant observation period ($j=1$) and the end of the relevant observation period ($j=2$) respectively.
$k$ is the number of facility grades/pools or segments.
The final target of this analysis is to decide whether the result of the formula shown above is reasonable in terms of variation. For this, I have to establish a threshold. The problem is that I do not know the origin of the formula. I assume that it is derived by assuming some kind of statistical distribution for the sample.
Thank you in advance, and my apologies for any mathematical incongruences that you may find. Please do not hesitate to reach out to me if any doubts arise.
AI: Your formula is the symmetrized Kullback-Leibler divergence, also known as the Jeffreys divergence. In the first link you'll find the assertion "In the Banking and Finance industries, this quantity is referred to as Population Stability Index, and is used to assess distributional shifts in model features through time." See this Cross Validated post for a discussion.
(IMO there is very little justification for the thresholds $0.10$ and $0.25$ commonly cited when using the PSI. These thresholds seem to be folklore passed from one practitioner to the next, with no empirical/theoretical evidence to illustrate how a threshold relates to stability, and no regard for how the PSI behaves when you change $k$, the number of segments.)
|
H: Can we represent an improper integral as $\int_{-\infty}^{\infty} f(x)\,dx = \lim_{a \to \infty} \int_{-a}^a f(x)\,dx$?
I was reading on improper integrals, and came across :
$$\int_{-\infty}^{\infty} f(x)\,dx = \lim_{A \to -\infty}
\int_A^Cf(x)\,dx + \lim_{B \to \infty} \int_C^B f(x)\,dx$$
My question is a rather silly one:
Is it any different if I write the above as :
$$\int_{-\infty}^{\infty} f(x)\,dx = \lim_{a \to \infty} \int_{-a}^a f(x)\,dx$$
If it's correct, is there any specific reason why it is written that way? Or is looking at it in that way any more beneficial than my way?
If it's not, why is it incorrect to write it this way? (Any counter-example also appreciated)
AI: Your question is not silly; it's important to be clear on the definitions of these things.
It is indeed different if you write the above like that. Consider for instance an arbitrary odd function $f : \mathbb R \to \mathbb R$ (a function is called odd if $f(-x) = -f(x)$). Then, using your definition,
$$
\int_{-\infty}^{\infty} f(x) \, dx = \lim_{a \to \infty} \int_{-a}^{a} f(x) \, dx = \lim_{a \to \infty} \int_{-a}^0 f(x) \, dx + \int_0^a f(x) \, dx = 0,
$$
which implies that the integral from $-\infty$ to $\infty$ of $f$ is $0$, no matter how pathological it is. Clearly we don't want to consider arbitrary odd $f$ to integrate to $0$ over the entire real line. Take as an example $f(x) = x$. The concern becomes especially obvious if we consider
$$
\lim_{a \to \infty} \int_{-a}^{2a} x \, dx = \lim_{a \to \infty} \frac 3 2 a^2 = \infty \neq 0,
$$
so the "rate" at which the upper and lower bounds move changes the answer.
The method of splitting up the integral into two improper integrals is thus used as a convention; it doesn't have this problem. In fact, you can prove that if splitting up the integral yields a convergent result, then so will your method. In essence, the method of splitting up the integral prevents "too many" things from converging.
What you are suggesting does in fact have a name; it is the Cauchy principal value of the improper integral. This is useful in some special cases but certainly should not be used all the time for the reasons given above.
|
H: Expected value of $1$'s in a matrix product defined over $\mathbb{Z}_2$
Let $\mathbf{A}$, $\mathbf{B}$ be random boolean matrices of $n \times n$ size, such that the matrix entry is $1$ with probability $p$ and $0$ otherwise. All entries are independent. How many $1$'s on average will be in the product of matrices, if an addition and multiplication operations are defined over $\mathbb{Z}_2$?
Attempt
Briefly, in every matrix cell, I have got an expected value of $(p^2n)\space\text{mod 2}$. In total, $n^2(p^2n\space \text{mod 2})$.
AI: Forgetting about $Z_2$ for a second: if we consider the product over $Z$, it is correct that the expected value of each spot is $p^2n$, because each spot is the sum of $n$ independent products of two independent Bernoulli random variables with probability $p$.
However, we can do more than just state the expected value of each of these entries; we can calculate the full distribution. The product of the two independent Bernoulli random variables is also Bernoulli, with probability $p^2$, so the sum of $n$ (independent) of these is Binomial. So, for each entry in the product matrix, the entry in $Z_2$ is 1 iff this Binomial random variable is odd.
There is a handy trick for the sum of the odd terms of a binomial distribution: Sum of odd terms of a binomial expansion: $\sum\limits_{k \text{ odd}} {n\choose k} a^k b^{n-k}$. Using that result, we get that the probability that any one entry in the product matrix (over $Z_2$) is 1 is
$\frac{1}{2}(1-(1-2p^2)^n)$
Because expectation is linear, the expected number of total 1's in the product matrix is
$\frac{n^2}{2}(1-(1-2p^2)^n)$
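A quick Monte Carlo check of this formula (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 8, 0.3, 2000

total_ones = 0
for _ in range(trials):
    A = (rng.random((n, n)) < p).astype(int)
    B = (rng.random((n, n)) < p).astype(int)
    total_ones += ((A @ B) % 2).sum()      # product over Z_2, then count the 1's

print(total_ones / trials)                       # empirical average
print(n**2 / 2 * (1 - (1 - 2 * p**2)**n))        # formula: about 25.5 here
```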
|
H: generating set of $\mathbb{Z}$
I have some troubles with identification of the generating set in the next group:
If I want to create a group $\mathbb{Z}$ from commutative monoid $\mathbb{N}$ I should take $\mathbb{N}^2$ and factorize it by $(n_1,m_1) = (n_2,m_2)$ if $n_1+m_2 = n_2+m_1$. After that, the operation $-$ is obvious.
I am trying to figure out what the generating element is in this new group. I know that $\mathbb{Z}$ is isomorphic to the free group on one generator $\langle a\rangle$. What plays the role of this $a$ in the group obtained from the factor set? It is not $(1,1)$, because $(k,k) = (0,0)$ is the identity element.
Can somebody help me, please?
AI: The pair $(a,b)$ represents the integer $a-b$. So the integer $1$ is represented by the pair $(n+1,n)$ for any natural number $n$.
|
H: Product $PN$ of normal subgroups is abelian
I am trying to show that every non-abelian group $G$ of order $6$ has a non-normal subgroup of order $2$ using Sylow theory.
First, Sylow's Theorem says the number of Sylow $2$-subgroups $n_2$ is either $1$ or $3$. Assume that $n_2=1$. Then $G$ has a normal subgroup $P$ of order $2$. By index considerations, any subgroup $N$ of order $3$ will be normal. We know $G=PN$, and does this somehow derive a contradiction? I'd like to contradict the non-abelianness of $G$ to deduce that $n_2=3$, and hence $G$ has $3$ non-normal Sylow $2$-subgroups.
AI: Let the non-trivial element of the normal subgroup of order $2$ be $g$. Then all conjugates of $g$ must be $g$ (it cannot be identity as identity is only conjugate to itself). Hence $g$ commutes with everything. We know an element of order $3$ exists (by Cauchy), so call this $h$. Then $g$ and $h$ generate the group and commute, and it is now clear that $G$ must be $C_6$, which is abelian.
|
H: If$ f(x)=x^{10000}-x^{5000}+x^{1000}+x^{100}+x^{50}+x^{10}+1$, what is the number of rational roots of $f(x)=0$?
The question is:
If$$ f(x)=x^{10000}-x^{5000}+x^{1000}+x^{100}+x^{50}+x^{10}+1$$ what is the number of rational roots of $f(x)=0$?
I used Descartes' rule of signs.
Since the number of sign changes is two, the number of positive real roots is at most $2$.
Also, since the function is even, the number of negative real roots is at most $2$ as well.
But it gives me information about real roots not rational.
AI: By the rational root theorem, the rational roots of this polynomial must be integers, because the polynomial is monic. Moreover, the integer roots must divide $1$ and so must be $1$ or $-1$. Neither is actually a root: since every exponent is even, $f(1)=f(-1)=1-1+1+1+1+1+1=5\neq 0$.
|
H: Quadratic with missing Linear Coefficient
Let $x^2-mx+24$ be a quadratic with roots $x_1$ and $x_2$. If $x_1$ and $x_2$ are integers, how many different values of $m$ are possible?
I'm assuming we can use Vieta's Formula.
We can say $x_1+x_2=m,$ and $x_1\cdot x_2=24.$
$16$ values satisfy both of these conditions, so I think our solution would be $\boxed{16}.$ Did I go wrong somewhere in my process, or am I correct? Thank you in advance.
AI: You are on the right track, but consider that the pairs $(3,8)$ and $(8,3)$, for example, give the same sum. Now you should be able to solve the problem.
|
H: Probability of being "close" to a lattice point
Let $X = \{1, \cdots, n\}$, and let $T$ be the set of $t$-tuples over $X$.
Now choose a random point $x$ from $[1, n]^t$ (note that $x$ is a tuple of real numbers, not necessarily a lattice point), and define $\epsilon_1, \cdots, \epsilon_{|T|}$ to be the distances (say Euclidean) between $x$ and each tuple in $T$. Finally, let $\epsilon$ be the smallest such $\epsilon_i$.
What is the probability that this smallest distance $\epsilon$ is less than a given parameter, say $\lambda$?
AI: The probability that the distance from some lattice point is $<\lambda$ is the hypervolume covered by the $t$-dimensional balls of radius $\lambda$ centred at each lattice point (cutting a ball off where it sticks out past a face of the cube), divided by the volume $(n-1)^t$ of the cube $[1,n]^t$. The formulae for these volumes are well known, and assuming $\lambda$ is less than $1/2$ so as to keep the balls from overlapping, this gives you your desired probability.
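For reference (my addition), the volume of a $t$-dimensional ball of radius $\lambda$ is
$$V_t(\lambda)=\frac{\pi^{t/2}}{\Gamma\!\left(\frac{t}{2}+1\right)}\,\lambda^{t},$$
so, ignoring boundary corrections, the probability is approximately $n^t\,V_t(\lambda)/(n-1)^t$ for $\lambda<1/2$; this is in fact an upper bound, since balls centred on the boundary are partially cut off.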
|
H: Inverse of the $y=x^x$ in implicit form?
I want to find the inverse of the function $y=x^x$ in implicit form and not by using Lambert W function. Can you tell me how to find it?
Thanks.
AI: The Lambert $W$ function was precisely introduced to solve the equation
$$y=xe^x$$ and those that can reduce to that form, such as
$$\log y=\log x\,e^{\log x}.$$
That essentially means that there is no other way.
|
H: Convergence of integral $\int_1^2 \frac{\sqrt{x}} {\ln(x)} \,dx $
I want to determine whether or not the integral $$\int_1^2 \frac{\sqrt{x}} {\ln(x)} \,dx $$ converges.
I have tried things like $$\int_1^2 \frac{\sqrt{x}} {\ln(x)} \,dx \leq \int_1^2 \frac{2} {\ln(x)} dx ,$$ but I find myself unable to evaluate the latter integral.
Next I try: $$\int_1^2 \frac{\sqrt{x}} {\ln(x)} dx \geq \int_1^2 \frac{\sqrt{x}} x dx .$$ In this case the latter integral is finite but that does not tell me anything about the convergence or divergence of the original integral.
What comparison can I make to determine the convergence of this integral?
AI: For a direct comparison, since $\ln x < x-1$ for $x > 1$, we have
$$\frac{\sqrt{x}}{\ln x} > \frac{\sqrt{x}}{x -1} > \frac{1}{x -1} $$
Note that $\displaystyle\int_1^2 \frac{dx}{x-1}= -\lim_{c \to 1+} \ln(c-1)=+\infty.$
|
H: Uniform continuity implies continuity in topological vector spaces
Let $E$ and $F$ be topological vector spaces and $A \subset E$. I want to prove that: if $f: A \longrightarrow F$ is a uniformly continuous function, then $f$ is continuous.
I know that, by the definition of uniform continuity, for all $V \in \mathcal{F}(0_F)$ there exists $U \in \mathcal{F}(0_E)$ such that $x_1,x_2 \in A$ and $x_1-x_2 \in U$ implies
$$f(x_1)-f(x_2) \in V$$
where $0_E$ and $0_F$ denote the origin of $E$ and $F$ respectively. Moreover $ \mathcal{F}(0_E)$ and $ \mathcal{F}(0_F)$ are the filters of neighborhoods of the origins.
But, I am not able to tie these facts with continuity, since the definition of continuity is via $f$ inverse images.
AI: Fix $x\in A$ and let $V$ be an open set around $f(x)$. Then $-f(x)+V\in\mathcal{F}(0_F)$, so there exists $U\in\mathcal{F}(0_E)$ such that $x_1,x_2\in A$ with $x_1-x_2\in U$ implies $f(x_1)-f(x_2)\in -f(x)+V$. Consequently, $(x+U)\cap A$ is an open set in $A$ containing $x$ such that $f((x+U)\cap A)\subset V$, so $f$ is continuous at $x$ for all $x\in A$.
Edit: For clarity, the notion of continuity I used here is that a function between topological spaces $f:X\to Y$ is continuous if and only if for every $x\in X$ and every open set $V$ in $Y$ containing $f(x)$, there exists an open set $U$ in $X$ containing $x$ such that $f(U)\subset V$. It is a standard exercise in topology that this is equivalent to other definitions of continuous.
|
H: Automorphism of vector space $V$ such that $\varphi(S_1)=S_2$
Let $V$ be a finite dimensional $K$-vector space. Let $S_1,S_2\subset V$ be subspaces such that $\dim S_1=\dim S_2=n$. Show there is an automorphism $\varphi:V\to V$ such that $\varphi(S_1)=S_2$.
Let $\{a_1,\ldots,a_n\}$ be a basis for $S_1$, and let $\{b_1,\ldots,b_n\}$ be a basis for $S_2$. Extend these to bases for $V$, namely $\{a_1,\ldots,a_k\}$ and $\{b_1,\ldots,b_k\}$ where $k=\dim V$. Now define $\varphi:V\to V$ by mapping $a_i\mapsto b_i$ for each $1\leq i\leq k$ (making sure that the basis elements of $S_1$ map to the basis elements of $S_2$). This defines a linear map, and since the $b_j$ span all of $V$, $\dim\text{im }\varphi=\dim V$. Therefore $\varphi$ is surjective, hence $\varphi$ is an automorphism on $V$. Finally, it is clear that $\varphi(S_1)=S_2$ by construction. Is this correct?
AI: That looks good to me.
Just a small comment. You say Therefore $\varphi$ is surjective, hence $\varphi$ is an automorphism on $V$. This is true.
However, you're referring to an argument which is usually only valid for finite dimensional vector spaces. This is fine as it is the case in your example. But it's not necessary. A linear map that maps a basis onto a basis is an automorphism whatever the dimension of the vector space considered is.
|
H: Is a collection of pairwise disjoint closed intervals countable?
Attempt: If we argue the same way as we did for the case of open intervals: Since the closed intervals are disjoint, we can identify each closed interval with a rational number in that interval and since the rationals are countable, their subsets are countable as well. Hence, the collection of disjoint closed intervals must be countable as well.
I just read somewhere that the disjoint collection of closed intervals in $\Bbb R$ may not be countable. But, Is there an error in the above argument?
AI: As in my comment:
If singletons do not count then the answer is yes since the collection of interiors of those closed intervals will be a collection of pairwise disjoint open intervals and must therefore be countable (note that $[a,b]=[c,d]$ if and only if $(a,b)=(c,d)$).
|
H: Contradiction problem from $P(x)=a_nx^n+a_{n-1}x^{n-1}+ \dots+ a_0$
Prove that there is no polynomial $$P(x)=a_nx^n+a_{n-1}x^{n-1}+ \dots+ a_0$$ with integer coefficients and of degree at least $1$ with the property that $P(0), P(1), P(2), \dots$ are all prime numbers.
How should one approach this? Contradiction seems plausible: if we assume the contrary, then $P(0), P(1), P(2), \dots$ all equal some primes. Also, from $P(0) = q$, where $q$ is some prime, we get that $a_0=q.$ From here on I don't quite know how to continue... Any hints would be appreciated.
AI: The key point here is that $a-b|P(a)-P(b)$ (if you haven’t seen this already you should try proving it, it’s a nice exercise).
From here, if $P(n) = p$, then $p$ divides $P(n+kp)$ for all positive $k$. But since all of these are prime, we get that $P(n+kp)=p$, and so $P$ takes the same value infinitely many times, and hence is constant, contradicting the degree condition.
|
H: Compute $\lim_{n \rightarrow \infty} \lim_{R \rightarrow \infty} \int_0^R \sin{(x/n)} \sin{(e^x)}dx$.
Another practice preliminary question for you all. This time, a double limit of an integral.
Problem Compute $\lim_{n \rightarrow \infty} \lim_{R \rightarrow \infty} \int_0^R \sin{(x/n)} \sin{(e^x)}dx$. Hint: Integrate by parts.
My issue is the order of the limits. I can't get a nice closed form solution that does not blow up to infinity in the first limit. I've performed integration by parts in order to try to find something that is more easily approximated or to see if the integral "repeats itself", so to say. What I've tried doesn't seem to go anywhere.
My Attempt
Define $f_n(x) = \sin(x/n) \sin (e^x)$. For any fixed $x \in \mathbb{R}$ we have that $f_n(x) \rightarrow 0$. Additionally, $|f_n(x)| \leq 1$ for all $n$ and $x$. So we have that $f_n$ is bounded, measurable, and converges pointwise to $0$ on $\mathbb{R}$. At this point, I'd love to conclude that the integral is zero from the Bounded Convergence Theorem and that $\mathbb{R} = \cup_{m \in \mathbb{N}}[m-1,m]$. On every interval as such we have $\lim_{n \rightarrow \infty} \int_{[m-1,m]}f_n = 0$ by the BCT. However, the conclusion seems like taking the limits in reverse order. Is it the case that $\sum_{m \in \mathbb{N}} \lim_{n \rightarrow \infty} \int_{[m-1,m]}f_n = \lim_{n \rightarrow \infty} \sum_{m \in \mathbb{N}} \int_{[m-1,m]}f_n $?
Otherwise I think a solution might be found from the integral over the ascending union $\cup_{n \rightarrow \infty} [0,n]$. I'm certain this problem will require use of the Lebesgue Dominated Convergence Theorem, but I am missing the integrable function that bounds $f_n$.
Thanks in advance for any hints or nudges in the right direction.
AI: $$
\lim_{R\to\infty}\int_0^R \sin{(x/n)} \sin{(e^x)}dx = \lim_{R\to\infty}\int_0^R \underbrace{\sin{(x/n)} e^{-x}}_{u} \cdot \underbrace{e^x \sin{(e^x)}dx}_{dv}
$$
$$
=\lim_{R\to\infty}\left. - e^{-x}\sin(x/n) \cos(e^x)\right|_{0}^{R} + \int _0^R \cos(e^x)\cdot(-e^{-x}\sin(x/n)+\frac{1}{n}e^{-x}\cos(x/n))\,dx
$$
$$
=\lim_{R\to\infty}\int _0^R \cos(e^x)\cdot(-e^{-x}\sin(x/n)+\frac{1}{n}e^{-x}\cos(x/n))\,dx
$$Now use $|\sin(\theta)|<|\theta|$ and take absolute values:
$$
\lim_{R\to\infty} \int _0^R \left|\cos(e^x)\cdot\left(-e^{-x}\sin(x/n)+\frac{1}{n}e^{-x}\cos(x/n)\right)\right|\,dx
$$
$$
\leq \lim_{R\to\infty} \int _0^R e^{-x} \left|\sin(x/n)+\frac{1}{n}\cos(x/n)\right|\,dx
$$
$$
\leq \lim_{R\to\infty} \int _0^R e^{-x} \left(\left|\sin(x/n)\right|+\left|\frac{1}{n}\cos(x/n)\right|\right)\,dx$$
$$
\leq \lim_{R\to\infty} \frac{1}{n}\int _0^R e^{-x}(x+\left|\cos(x/n)\right|)\,dx
$$
$$
\leq \lim_{R\to\infty} \frac{1}{n}\int _0^R e^{-x}(x+1)\,dx = \frac{2}{n}
$$
So for each $n$ the inner limit exists and is at most $\frac{2}{n}$ in absolute value, and since $\frac{2}{n}\to 0$, the double limit equals $0$.
|
H: Find three vectors in ${R^3}$ such that the angle between all of them is pi/3?
Is there a simple way to do this? I have found that $\frac{a\cdot b}{|a||b|}$ must be equal to $\frac{1}{2}$, but from there I am stuck on how to proceed. Any help?
P.s. This is from the MIT 2016 Linear Algebra course and is not homework.
AI: Diagonals of cube faces form a tetrahedron. For instance, take the face diagonals $(1,1,0)$, $(1,0,1)$ and $(0,1,1)$: each pair has dot product $1$ and each vector has length $\sqrt{2}$, so the cosine of every pairwise angle is $\frac{1}{2}$, i.e. the angle is $\pi/3$.
|
H: Evaluate $f_n(\alpha,\beta)=\int_0^{\infty}\mathrm{e}^{-x^n}\sin(\alpha x)\cos(\beta x)\,dx$
I was just looking at the following function, but couldn't see how to go about integrating it, or how it depends upon $n$, $\alpha$ and $\beta$. Can anyone please help?
$$f_n(\alpha,\beta)=\int_0^{\infty}\mathrm{e}^{-x^n}\sin(\alpha x)\cos(\beta x)\,dx$$
AI: Well, we have the following integral:
$$\mathcal{I}_\text{n}\left(\alpha,\beta\right):=\int_0^\infty\exp\left(-x^\text{n}\right)\sin\left(\alpha x\right)\cos\left(\beta x\right)\space\text{d}x\tag1$$
Using the definition of the Exponential function:
$$\exp(x)=\sum_{\text{k}\ge0}\frac{x^\text{k}}{\text{k}!}\tag2$$
So, we can write:
$$\mathcal{I}_\text{n}\left(\alpha,\beta\right)=\sum_{\text{k}\ge0}\frac{\left(-1\right)^\text{k}}{\text{k}!}\int_0^\infty x^\text{kn}\sin\left(\alpha x\right)\cos\left(\beta x\right)\space\text{d}x\tag3$$
Now, we also know that:
$$\sin\left(\alpha x\right)\cos\left(\beta x\right)=\frac{\sin\left(\left(\alpha-\beta\right)x\right)+\sin\left(\left(\alpha+\beta\right)x\right)}{2}\tag4$$
So:
$$\mathcal{I}_\text{n}\left(\alpha,\beta\right)=\sum_{\text{k}\ge0}\frac{\left(-1\right)^\text{k}}{2\left(\text{k}!\right)}\left\{\underbrace{\int_0^\infty x^\text{kn}\sin\left(\left(\alpha-\beta\right)x\right)\space\text{d}x}_{\text{I}_1}+\underbrace{\int_0^\infty x^\text{kn}\sin\left(\left(\alpha+\beta\right)x\right)\space\text{d}x}_{\text{I}_2}\right\}\tag5$$
Now, we can use the 'evaluating integrals over the positive real axis' property of the Laplace transform in order to write:
$$\text{I}_1=\int_0^\infty\mathcal{L}_x\left[\sin\left(\left(\alpha-\beta\right)x\right)\right]_{\left(\text{s}\right)}\cdot\mathcal{L}_x^{-1}\left[x^\text{kn}\right]_{\left(\text{s}\right)}\space\text{ds}\tag6$$
$$\text{I}_2=\int_0^\infty\mathcal{L}_x\left[\sin\left(\left(\alpha+\beta\right)x\right)\right]_{\left(\text{s}\right)}\cdot\mathcal{L}_x^{-1}\left[x^\text{kn}\right]_{\left(\text{s}\right)}\space\text{ds}\tag7$$
And using the table of selected Laplace transforms, we have:
$$\mathcal{L}_x\left[\sin\left(\left(\alpha-\beta\right)x\right)\right]_{\left(\text{s}\right)}=\frac{\alpha-\beta}{\left(\alpha-\beta\right)^2+\text{s}^2}\tag8$$
$$\mathcal{L}_x\left[\sin\left(\left(\alpha+\beta\right)x\right)\right]_{\left(\text{s}\right)}=\frac{\alpha+\beta}{\left(\alpha+\beta\right)^2+\text{s}^2}\tag9$$
$$\mathcal{L}_x^{-1}\left[x^\text{kn}\right]_{\left(\text{s}\right)}=\frac{1}{\text{s}^{1+\text{kn}}}\cdot\frac{1}{\Gamma\left(-\text{kn}\right)}\tag{10}$$
In order to finish you can use this.
|
H: Transitive actions of $\mathbb{Z}_6$ on itself
Two actions of $\mathbb{Z}_6$ on itself that we naturally might consider are $\overline{m} \cdot \overline{n} = \overline{m}+\overline{n}$, and $\overline{m} \cdot \overline{n} = \overline{n}-\overline{m}$. In fact these actions are isomorphic. On the other hand, we can define an action such as $\overline{m} \cdot \overline{n} = 2\overline{m} + \overline{n}$, which is not isomorphic to those.
So my question is whether all the transitive actions of $\mathbb{Z}_6$ acting on itself are isomorphic to the above $\overline{m} \cdot \overline{n} = \overline{m}+\overline{n}$.
AI: Suppose that $\mathbb{Z}_6$ acts on itself in some way. Then, in order for the action to be transitive, the map $\bar{m} \mapsto \bar{m} \cdot \bar{0}$ (which is easily seen to be an equivariant map, if the codomain has the given action and the domain has the "natural" action given by addition) must be surjective. Since it is a surjective map from a finite set to itself, it must also be injective, and hence an isomorphism of $\mathbb{Z}_6$-sets.
|
H: Uniform convergence of a sequence of functions which is integral of another sequence
I was going through some questions on pointwise and uniform convergence. Got stuck in one of those which says:
Let $g_n(x) = \sin^2(x+\frac{1}{n})$ be defined on $[0,\infty).$
and $f_n(x) = \int_0^xg_n(t)\,dt.$
I am supposed to discuss the uniform convergence of $(f_n).$
The terms look too complicated to attack directly from the definition. Should I first show that $(g_n)$ is uniformly convergent? How am I supposed to do even that?
Help, please.
AI: You have
$$\begin{aligned}g_n(x)&=\sin^2\left(x + \frac{1}{n}\right) = \frac{1}{2}\left(1- \cos\left(2(x + \frac{1}{n})\right)\right)\\
&=\frac{1}{2}\left(1 - \cos 2x \cos\frac{1}{n} + \sin 2x \sin \frac{1}{n}\right).
\end{aligned}$$
Therefore
$$f_n(x)= \frac{1}{2}\left(x - \frac{1}{2}\cos\frac{1}{n}\sin 2x-\frac{1}{2}\sin\frac{1}{n}\left(\cos 2x -1\right)\right).$$
From there, you can prove that $\{f_n\}$ converges uniformly to
$$f(x) = \frac{x}{2} - \frac{1}{4} \sin 2x$$
as
$$\begin{aligned}\left\vert f_n(x) - f(x) \right\vert &= \frac{1}{4}\left\vert \left(1 - \cos\frac{1}{n} \right)\sin 2x + \sin\frac{1}{n}\left(\cos 2x -1\right)\right\vert\\
&\le \frac{1}{4}\left(\left\vert \left(1 - \cos\frac{1}{n} \right)\sin 2x\right\vert + \left\vert\sin\frac{1}{n}\left(\cos 2x -1\right)\right\vert\right)\\
&\le \frac{1}{4}\left(\left\vert 1 - \cos\frac{1}{n} \right\vert + 2\left\vert\sin\frac{1}{n}\right\vert\right)\\
\end{aligned}$$
and the RHS of above inequality converges to zero independently of $x$.
|
H: Simplifying $a = \dfrac{\sqrt{x}}{x+3} $
Solving equations involving terms of the form $ \dfrac{3x}{6x^2} $ is easy. You can cancel the $x$ in the numerator and end up with: $ \dfrac{3}{6x} $.
However, I am presented with an equation of the following form:
$$a = \dfrac{\sqrt{x}}{x+3} $$
Where, $a$ is a constant.
Trying to cancel out the variable $x$ in either the numerator or denominator doesn't work.
How do I go about and solve such an equation?
AI: $ax+3a=\sqrt{x} \implies a^2x^2+(6a^2-1)x+9a^2=0$
$$\begin{align*}\implies x&=\frac{1-6a^2\pm\sqrt{(6a^2-1)^2-36a^4}}{2a^2}\quad(a\neq 0)\\
&=\frac{1-6a^2\pm\sqrt{1-12a^2}}{2a^2}\quad(a\neq 0)\end{align*}$$
Note that real solutions require $1-12a^2\ge 0$; also, since $\sqrt{x}\ge 0$ and $x+3>0$, the original equation can only have solutions for $a\ge 0$. So solutions exist exactly for $0\le a\le \frac{1}{2\sqrt{3}}$ (with $a=0$ giving $x=0$).
|
H: How can I solve the differential equation $r'=r(1-r)$
I separated the variables and decomposed the fraction to get $r(1-r)=ce^t$, but I don't know where to go from here.
AI: $$\dfrac {dr}{r(1-r)}=dt$$
$$ \left({\dfrac 1 r- \dfrac 1{(r-1)}}\right)dr=dt$$
Integrate:
$$\ln r -\ln (r-1)=t+c$$
$$\dfrac r {r-1}=ke^t$$
$$ r(1- ke^t)=-ke^t$$
$$ r(t)=\dfrac {ke^t}{ke^t-1}$$
Finally:
$$ r(t)=\dfrac {e^t}{e^t+C}$$
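As a quick check (my addition), the answer can be verified by differentiation:
$$r'(t)=\frac{e^t(e^t+C)-e^t\cdot e^t}{(e^t+C)^2}=\frac{Ce^t}{(e^t+C)^2},\qquad r(1-r)=\frac{e^t}{e^t+C}\cdot\frac{C}{e^t+C}=\frac{Ce^t}{(e^t+C)^2}.$$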
|
H: What is the error of a sine function defined using a unit polygon instead of a unit circle?
I have made up a geometric definition for a function $\mathrm{polysin}(n, θ)$:
Construct a regular polygon of $n$ sides. Place the first vertex at
$(1,0)$ and place the rest going counterclockwise.
Let a line through the origin intersect this "unit polygon," making an angle of θ with the positive half of the x-axis.
The y-coordinate of that point of intersection is equal to $\mathrm{polysin}(n, θ)$.
(You can play around with the function in this GeoGebra sketch.)
I would like to know a couple of things about this function, namely:
How can I define $\mathrm{polysin}(n, θ)$ algebraically?
How much error is there between $\mathrm{polysin}(8, θ)$ and $\sin(θ)$? Between $\mathrm{polysin}(n, θ)$ and $\sin(θ)$?
I am at a quite basic level of math education - I know how to do algebra and trigonometry, but I haven't taken any Calculus yet. I would really appreciate some insight into how one should approach math problems like this, and the broad strokes of what is involved in solving this problem. (As well as the answer, of course!)
AI: I'll get you started. The first vertex, $v_1$, is at $(\cos(2\pi/n),\sin(2\pi/n))$. The line between $v_1$ and $v_0=(1,0)$ is $y= \frac{\sin(2\pi/n)}{1-\cos(2\pi/n)}(1-x)$. The ray making angle $\theta$ with the positive $x$-axis is $y=x\tan\theta$; intersecting it with that line and taking the $y$-coordinate gives, for $0\leq \theta \leq 2\pi/n$, $$\text{polysin}(n,\theta)=\frac{\sin\theta}{\cos\theta+\tan(\pi/n)\sin\theta}=\frac{\cos(\pi/n)\sin\theta}{\cos(\theta-\pi/n)}.$$ You can generalize this to any two consecutive vertices: each edge lies on a line at distance $\cos(\pi/n)$ from the origin, so in polar form the polygon is $r(\theta)=\cos(\pi/n)/\cos\big((\theta\bmod \tfrac{2\pi}{n})-\tfrac{\pi}{n}\big)$ and $\text{polysin}(n,\theta)=r(\theta)\sin\theta$. If you're familiar with calculus, you can find the maximum error by taking the difference $\sin(x)-\text{polysin}(n,x)$, differentiating, and setting that equal to $0$. Neat idea, hope that helps!
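If it helps, here is a small numerical comparison of $\mathrm{polysin}(n,\theta)$ with $\sin\theta$, using the polar form above (my own sketch):

```python
import numpy as np

def polysin(n, t):
    phi = np.mod(t, 2 * np.pi / n)                    # angle measured within the current edge
    r = np.cos(np.pi / n) / np.cos(phi - np.pi / n)   # distance from the origin to the polygon
    return r * np.sin(t)

t = np.linspace(0, 2 * np.pi, 200001)
for n in (4, 8, 16, 64):
    print(n, np.max(np.abs(polysin(n, t) - np.sin(t))))
# The maximum error is at most 1 - cos(pi/n), so it shrinks roughly like 1/n^2.
```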
|
H: How can I prove this statement about mean and variance?
How can I prove that:
$$E(a) = a\, \text{ and }\, V(a) = 0?$$
AI: Suppose $X$ is a discrete random variable with constant value $a$. Then its distribution looks like$$a\space a \space ...a\\p_1\space p_2 \space... p_n$$
Then
$$EX = \sum_{i=1}^{n} a \cdot p_i = a \cdot \sum_{i=1}^{n} p_i = a$$
The variance follows in the same way: $\operatorname{Var}(X)=E[X^2]-(EX)^2=a^2-a^2=0.$
|
H: Does removing the "heaviest" edge of all cycles in an (unweighted) graph result in a minimum spanning tree?
Background:
A graph is connected if there is a path between all pairs of vertices.
A graph has a cycle if there exists two vertices with an edge between them and a path between them that doesn’t use that edge.
A graph is a tree if it is connected and does not contain a cycle.
If you remove one edge from a cycle, it’s no longer a cycle.
Definition:
The heaviest edge of a cycle is the edge that corresponds to the largest vertex in the cycle and its largest neighbor. To compare two vertices, assume each vertex corresponds to a unique integer.
Question:
Given a connected graph, if we remove the heaviest edges of all cycles, is the result a spanning tree of that graph? Or can the resulting graph be disconnected?
Example:
Vertices: {0,1,2,3}
Edges: {01,02,03,13,23}
There are 3 cycles: 0130 0230 01320
The heavy edges (for each of the 3 cycles, respectively) are: 13 23 23.
Removing the heavy two edges results in the spanning tree with edges: {01 02 03}
AI: It's always a spanning tree.
You probably already noticed this, but for completeness: the resulting graph is acyclic, because every cycle in the original graph has been destroyed. So we need to show that the result is still connected.
Another characterization of connectivity will be useful here: a graph $(V,E)$ is connected if and only if for every nonempty $S \subsetneq V$, there is a crossing edge: an edge between a vertex in $S$ and a vertex in its complement $V \setminus S$. So let's check this for the graph after deletions.
For a given set $S$, because our starting graph was connected, there are some crossing edges. Let $e$ be the lightest of these edges. I claim that the edge $e$ is never deleted, and so there is also a crossing edge in the graph we get at the end.
For $e$ to be deleted, we'd first have to find a cycle containing it. That cycle contains at least one vertex in $S$ and at least one vertex not in $S$. Following that cycle starting from $S$, at some point we leave $S$ - but then we have to come back to $S$ by a different edge. This can happen multiple times, but even if it only happens once, we see that the cycle contains at least two crossing edges: $e$, and some other edge $e'$ (and maybe others).
Since $e$ is the lightest crossing edge, it is in particular lighter than $e'$. So it is not the heaviest edge on this cycle, and will not be deleted when we consider this cycle. The same argument holds for every cycle containing $e$, so the edge $e$ will never be deleted.
In fact, the tree $T$ we get at the end is a minimum spanning tree.
To see this, take any other spanning tree $T'$. Let $e$ be an edge of $T$ not in $T'$. Adding $e$ to $T'$ creates a cycle, and deleting any edge of that cycle would create another spanning tree. Let's add $e$ and delete the heaviest edge of that cycle.
That heaviest edge is definitely not $e$, because $e$ is not the heaviest edge of any cycle. So we added $e$ to $T'$, then deleted an edge heavier than $e$. This means that we've reduced the total weight of $T'$: therefore, $T'$ is not a minimum spanning tree. Since some minimum spanning tree must exist, it can only be $T$.
|
H: Changing reduced partial sum into a multiplicative function
I have a partial sum in the form of $$\sum_{\substack{n \leq x\\k|n}} f(n)$$ for a fixed $k \in \mathbb{Z}$ where $f(n)$ is a multiplicative function. Is there a way to reduce this partial sum into another sum such that I can exploit the multiplicative property of the main term of the resulting sum?
AI: As was mentioned in the comments, writing $n=pk$ we can write
$$\sum_{{n\le x}\atop{k|n}}f(n)=\sum_{pk\le x}f(pk)=\sum_{p\le x/k}f(pk),$$
where $p$ runs over the positive integers, as the requirement $k\mid pk$ is always met.
|
H: Is there a way to determine a function that could model the transformation of one function to another?
Let's say I have a function centered at the origin, say $f(x)= x^2$, at an initial time. After some time has passed, the initial function $f(x)$ has transformed into a different function, say $g(x)=6x^7$. Is it mathematically possible to obtain a third function, say $h(x)$, that models the transition from $f(x)$ to $g(x)$? If so, could this be extended to multivariable functions?
What does this look like generally? What is the method to obtain the function $h(x)$ that models the transition from any $f(x)$ to any $g(x)$ with a time interval between $f(x)$ and $g(x)$ that allows $f(x)$ to transform into $g(x)$?
Thanks!
AI: Try:
$$f(x;t) = (5 t + 1) x^{5 t + 2}$$
and try $t=0$ and $t=1$.
Or you could define $h(q)= 6 q^{7/2}$, and then $h(f(x)) = 6 x^7$.
But such approaches are not unique. There are many functions that will give you the particular functions you state.
|
H: If original set of vectors have zero mean, will the orthogonally projections of the vectors onto another vector have zero mean?
Consider vectors $x_1, \cdots, x_n \in \mathbb{R}^m$. Define the vector $\mu \in \mathbb{R}^m$ to be the mean of the vectors:
$$
\mu = \frac{1}{n}\sum_{i=1}^n x_i
$$
Assume that $\mu = 0$, the zero vector.
Now consider some other vector $u$. Define the orthogonal projection of $x_i$ onto $u$ as $v_i$, a scalar, i.e.,
$$
v_i = u^Tx_i
$$
And define $\phi \in \mathbb{R}$ as the mean of $v_i$. Is $\phi = \frac{1}{n}\sum_{i=1}^n v_i = 0$?
The way I approached this problem seems trivial, and I'm not sure if I did this correctly. Essentially, I started with
$$
\mu = \frac{1}{n}\sum_{i=1}^n x_i = 0 \\
u^T\frac{1}{n}\sum_{i=1}^n x_i = u^T0 \\
= \frac{1}{n}\sum_{i=1}^n u^Tx_i = \phi = 0 \\
\therefore \phi = 0
$$
This seems like a trivial proof if I did this correctly, but I'm having a hard time visualizing why this is true, i.e., why is the mean of the orthogonal projections of the vectors onto $u$ zero?
This result also appears to be independent of $u$.
AI: Orthogonal projections are linear maps.
And linear maps: (1) map the zero vector to the zero vector and (2) map the mean of vectors to the mean of the mapped vectors.
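A tiny numerical illustration of that linearity argument (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))   # 100 vectors in R^5, one per row
X -= X.mean(axis=0)             # centre them: the mean vector mu is now 0

u = rng.normal(size=5)          # any direction u
v = X @ u                       # v_i = u^T x_i
print(v.mean())                 # ~ 0 (up to floating-point error), for every choice of u
```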
|
H: Necessary and sufficient condition for $f_n$($x$) = $b_{n}x$+$c_{n}x^2$ to uniformly converge to zero
Have been trying some questions on uniform and point-wise convergence of sequence of functions. Got stuck in this. I have to prove the following:-
Let ($b$$_n$) and ($c$$_n$) be sequences of real numbers then $\sum_{n=1}^\infty$ $\lvert b_n\rvert$ $\lt$ $\infty$ and $\sum_{n=1}^\infty$ $\lvert c_n\rvert$ $\lt$ $\infty$ is not a necessary and sufficient condition for the sequence of polynomials f$_n$($x$) = $b$$_n$$x$+$c$$_n$$x^2$ to converge uniformly to $0$ on the real line.
I am trying to let $b$$_n$ = $\frac{1}{n}$ and $c$$_n$ = $0$
Now f$_n$($x$) becomes $\frac{1}{n}$$x$ which converges to $0$ point-wise. How can I prove it does so uniformly?
And also is this correct?
AI: Take
$$b_n=c_n=\frac{1}{n^2}$$
$$\sum |b_n| \;\; and \;\; \sum |c_n|$$
are convergent.
for all real $x$,
$$\lim_{n\to+\infty}f_n(x)=0$$
the sequence of functions $(f_n)$ converges in a pointwise way to zero.
But, as a polynomial function
$$\sup \{|f_n(x)|\;, \; x\in \Bbb R\}=+\infty$$
since
$$\lim_{x\to +\infty} \frac{1}{n^2}|x+x^2|=+\infty$$
thus, the convergence is not uniform.
|
H: Algebra/number theory solution check, number of 0's at end of integer
As part of a larger problem, I wish to calculate the value of $\frac{1993^2+1993}{2} \pmod {2000}$. The top reduces to $42$. However, $\gcd(2,2000)>1$, so the solution is not $21$, and carrying out the division would require changing the modulus.
Is there a method to divide in this case (or in a general case entirely applicable to this case), such that doing this division will not change the modulus from 2000, i.e. the solution $x$ is $\frac{1993^2+1993}{2} \equiv x \pmod {2000}$.
I understand that I can just do the arithmetic, but I wonder if it is possible to keep the modulus at $2000$ after the division.
AI: $1993^2 + 1993 \equiv (-7)^2 +(-7) \equiv 42\pmod {2000}$.
So $\frac {1993^2+1993}2 \equiv \frac {42}2 \pmod {\frac {2000}{\gcd(2,2000)}}$
So $x \equiv 21\pmod{1000}$ so $x \equiv 21, 1021 \pmod {2000}$
$2000 = 125*16$ and if we consider the chinese remainder theorem we get that $x \equiv 21 \pmod{125}$
But to solve $x \pmod{16}$ we solve $1993^2 + 1993\pmod{32}\equiv 9^2 +9\equiv 26\pmod {32}$.
So $\frac{1993^2 + 1993}2 \equiv \frac{32k +26}2\equiv 16k + 13 \equiv 13 \equiv -3\pmod {16}$.
And as $21 \not \equiv -3\pmod{16}$ then $x\equiv 21 \pmod {2000}$ can't be the solution.
So it must be $x \equiv 1021 \pmod{2000}$.
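As a sanity check (my addition): $\frac{1993^2+1993}{2}=1993\cdot 997=1987021=993\cdot 2000+1021$, so indeed $x\equiv 1021\pmod{2000}$.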
|
H: A map is continuous on the inverse image of the set $(-\infty,r]$. Is this inverse image a closed set?
Let $U$ be a topological space and a map $g:U\to \mathbb{R}$. For a given $r\in\mathbb{R}$, define $E:= \{x\in U: g(x)\leq r\}$. If $g$ is continuous at every point of $E$, then Is it true that $E$ is closed set in U? If yes, prove it. If not, give counter example.
My guess is yes.. Here is my attempt. Let $x_n\in E$ be a sequence such that $\lim_{n\to\infty}x_n = x$. I am trying to prove $x\in E$. By continuity of $g$, we have $g(x_n)$ also converges. Moreover, $g(x_n)\leq r$ for all $n$. Therefore, $\lim_{n\to\infty}g(x_n)\leq r$.
Here I am stuck. If I prove that $\lim_{n\to\infty}g(x_n)=g(x)$, I am done. Can somebody help me here.
AI: NO. E.g. $U=\Bbb R$ and $E=(0,\infty).$ Let $r=1.$ Let $g(x)=0$ for $x\in E.$ Let $g(x)=2$ for $x\le 0.$
If you also want $g$ to be discontinuous at every point of $U$ \ $E$ then modify the example above to $g(x)=2$ when $0\ge x\in \Bbb Q$ and $g(x)=3$ when $0>x\not\in \Bbb Q.$
Given a non-empty space $E$ we can always find a space $U$ such that $E$ is an open, non-closed subspace of $U.$ For example let $U=E\cup \{p\}$ with $p\not\in E.$ Let $T_E$ be the topology on $E$ and let the topology on $U$ be $\{U\}\cup T_E.$
If $E$ is an open, non-closed subspace of $U,$ let $r=1.$ Let $g(x)=0$ for $x\in E.$ Let $g(x)=2$ for $x\in U$ \ $E.$
|
H: Multiobjective optimization
I need some clarification on multi-objective optimization. I would like to know: if a problem has three objectives with completely different variables, should such a problem be solved as three independent single-objective optimization problems, or could it be solved using multi-objective optimization? I would appreciate your feedback. Thank you.
eg. Min
f1=a1*x1 +a2*x2;
f2= a3*x3 +a4*x4;
f3=a5*x5 + a6*x6
AI: This problem should be solved as three separate optimization problems. Since each of the objectives sees "separate" variables, there is no need to use the machinery of multi-objective optimization.
Multi-objective minimization seeks to minimize several functions simultaneously. You could technically call your example a "multi-objective" problem with decision vector $x=[x_1,x_2,x_3,x_4,x_5,x_6]$. However, since your problem can be separated into three independent minimization problems (each of which has no effect on the solutions of the others), it does not really match the spirit of the field.
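As a concrete sketch (the coefficient values and the box bounds $0 \le x_i \le 1$ below are made-up assumptions, since the question states no constraints and an unconstrained linear objective is unbounded below), the three sub-problems can be solved independently, e.g. with `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Placeholder coefficients a1..a6 (assumptions; the question does not give values)
a = [1.0, -2.0, 3.0, 0.5, -1.0, 4.0]

# Three independent LPs: minimize a_{2i+1}*x + a_{2i+2}*y subject to 0 <= x, y <= 1
solutions = []
for i in range(3):
    c = np.array(a[2 * i: 2 * i + 2])
    res = linprog(c, bounds=[(0, 1), (0, 1)])
    solutions.append(res.x)
    print(f"f{i + 1}: minimizer = {res.x}, value = {res.fun}")

# Stacking the three minimizers solves the full 6-variable problem,
# precisely because the objectives share no variables.
x_full = np.concatenate(solutions)
print(x_full)
```

Concatenating the per-objective minimizers is only valid here because the objectives are fully separable; with shared variables or coupling constraints, proper multi-objective methods (e.g. computing a Pareto front) would be needed.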
|
H: If a function $f$ is $L$-periodic then $f'$ has $2$ zeros in $[0,L)$?
Let $f: \mathbb{R} \longrightarrow \mathbb{R} $ be a differentiable and odd function. If $f$ is periodic with (minimal) period $L>0$, does $f'$ have $2$ zeros in $[0,L)$?
For example, this occurs if we consider $f(x)=\sin(x)$, for all $x \in \mathbb{R} $, since in this case $L=2\pi$.
Is this true in general?
AI: Yes; follows immediately from Rolle's Theorem.
To elaborate: $f(0)=0$ since $f$ is odd, and $f(L)=f(0)=0$ by periodicity. Moreover, $f$ has a root at some $a$ with $0<a<L$: indeed $f(L/2)=f(L/2-L)=f(-L/2)=-f(L/2)$, so $f(L/2)=0$. Applying Rolle's Theorem on $[0,a]$ and on $[a,L]$, $f'$ has a root in $(0,a)$ and another in $(a,L)$, so $f'$ has at least two zeros in $[0,L)$.
|
H: Is $\forall x ((A = \{a | P(a)\} \wedge x \in A ) \rightarrow P(x))$ an axiom of some system?
In section 1.3 of Velleman's 'How to Prove It', the author states the following:
"In general, the statement $y \in \{x \mid P(x)\}$ means the same thing as $P(y)$,..."
I couldn't find a proof of this, and wondered whether $\forall x ((A = \{a \mid P(a)\} \wedge x \in A) \rightarrow P(x))$ is an axiom of some set theory, or closely derived from one.
AI: This is basically just by definition. Set theory doesn't actually have set-builder notation directly; "$A=\{a: P(a)\}$" is just shorthand for "$\forall a(a\in A\leftrightarrow P(a))$." And $$\forall a(a\in A\leftrightarrow P(a))\wedge x\in A\rightarrow P(x)$$ is provable just from the basic laws of logic:
From "$\forall a(a\in A\leftrightarrow P(a))$" we get "$x\in A\leftrightarrow P(x)$" via universal instantiation.
From that in turn we get $x\in A\rightarrow P(x)$, just by unpacking "$\leftrightarrow$" (and applying one of the $\wedge$-elimination rules if you want to be picky).
Finally, we can get $P(x)$ from $x\in A\rightarrow P(x)$ and $x\in A$ via modus ponens.
Note that everything except the universal instantiation is basically just propositional logic, namely the fact that $$[(p\leftrightarrow q)\wedge p]\rightarrow q$$ is a propositional tautology.
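For the propositional core, the tautology can also be checked mechanically by brute force over truth values (a small illustrative sketch in Python, not part of Velleman's text):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# Check that [(p <-> q) and p] -> q holds for every truth assignment.
tautology = all(
    implies((p == q) and p, q)
    for p, q in product([True, False], repeat=2)
)
print(tautology)  # True
```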
|
H: How to calculate average growth when it's negative?
We have annual reports for a company's revenue and can calculate annual growth as
$yg = {y_{i+1} \over y_i}$.
And then we can calculate the average monthly growth as $mg = ({y_{i+1} \over y_i})^{1 \over 12}$.
So for reports 2000-12 $1m and 2001-12 $2m the average monthly growth would be 1.06.
But how do I calculate monthly growth when the revenue becomes negative?
For reports 2000-12 revenue = $1m and 2001-12 revenue = $-1m?
P.S.
I need it for simple prediction. For example, given 2000-12 $1m and 2001-12 $2m, the revenue in 2002-02 could be predicted as $2 \times 1.06^2 = 2.25$.
AI: An example for negative growth rate: $y_0=100, y_1=80$
The growth rate from $t=0$ to $t=1$ is $g_{01}=\frac{80}{100}-1=0.8-1=-0.2$
So you can use the formula for growth rate no matter whether the growth rate is positive or negative:
$$g_{t,t+1}=\frac{y_{t+1}}{y_t}-1$$
Btw, the growth factor $1+g_{01}$ is still positive: $1-0.2=0.8$.
To apply the formula for the growth rate you need a meaningful zero point; that is, the values must be ratio scaled. If $y_t$ can be negative as well, then a growth rate cannot be determined.
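For the prediction use case in the question, here is a minimal Python sketch (assuming the simple geometric-growth model from the question; the negative-revenue case is flagged rather than forced through the formula):

```python
def monthly_growth_factor(y_start, y_end, months=12):
    # Geometric (average) monthly growth factor between two annual reports.
    if y_start <= 0 or y_end <= 0:
        # Growth rates need ratio-scaled, positive values; with a sign change
        # the geometric model is not meaningful.
        raise ValueError("geometric growth factor undefined for non-positive revenue")
    return (y_end / y_start) ** (1 / months)

# 2000-12: $1m, 2001-12: $2m  ->  predict 2002-02 (two months after 2001-12)
mg = monthly_growth_factor(1.0, 2.0)
print(round(mg, 2))            # ~1.06
print(round(2.0 * mg**2, 2))   # ~2.24, close to the rounded $2.25m in the question

# 2000-12: $1m, 2001-12: -$1m  ->  no meaningful geometric growth rate
try:
    monthly_growth_factor(1.0, -1.0)
except ValueError as err:
    print(err)
```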
|
H: Find the sequence $a_n$ so that $\sum_{n=1}^{\infty} a_n\sin(nx) = f(x)$ where $f(x)$ is a piecewise function.
Trying to solve a problem I reached a point where I know that $$\sum_{n=1}^{\infty} a_n\sin(nx) = f(x) \text{, where }f(x) = \begin{cases}
x & 0 \leq x \leq \frac\pi2 \\[5pt]
\pi - x & \frac\pi2 < x \leq \pi
\end{cases}$$
To solve the problem I need to find a sequence $a_n$.
I thought that if I were to find the Fourier series for $f(x)$ that might be it, but I found that the Fourier series is $f(x) = \frac{\pi}{4}-\frac{2}{\pi}\sum_{n=1}^{\infty}\frac{\cos((4n-2)x)}{(2n-1)^2}$. The two are not alike at all, and I don't know any other way of finding $a_n$. How should I approach this problem? Is it even possible?
AI: Hint: extend $f$ to $[-\pi,0]$ by defining $f(-x)=-f(x)$. This results in an odd function on $[-\pi, \pi]$, which has only $\sin$ terms in its Fourier series (this is called the Fourier sine series).
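If it helps to make this concrete, here is a hedged numerical sketch (no closed form for $a_n$ is asserted; the coefficients are simply computed with `scipy.integrate.quad` from $a_n = \frac{2}{\pi}\int_0^\pi f(x)\sin(nx)\,dx$ and the partial sum is compared with $f$):

```python
import numpy as np
from scipy.integrate import quad

def f(x):
    # the piecewise function from the question, on [0, pi]
    return x if x <= np.pi / 2 else np.pi - x

def a(n):
    # Fourier sine coefficient of the odd extension
    val, _ = quad(lambda x: f(x) * np.sin(n * x), 0, np.pi)
    return 2.0 / np.pi * val

for n in range(1, 6):
    print(n, round(a(n), 6))   # coefficients for even n come out ~0

# Compare a partial sum of the sine series with f at a test point
x0 = 1.0
partial = sum(a(n) * np.sin(n * x0) for n in range(1, 50))
print(partial, f(x0))          # the two values should be close
```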
|
H: How do I find the sum of a power series $\underset{n=3}{\overset{\infty}{\sum}}\frac{x^n}{(n+1)!n\,3^{n-2}}$?
I have found the interval of convergence to be $ x \in (-\infty, \infty)$, and this is how far I had gotten before getting stuck:
$$
\begin{aligned}
\sum_{n=3}^{\infty} \frac{x^{n}}{(n+1) ! n 3^{n-2}} &=\sum_{k=0}^{\infty} \frac{x^{k+3}}{(k+4) !(k+3) 3^{k+1}} \\
&=\sum_{k=0}^{\infty} \frac{x^{3} x^{k}}{(k+4) !(k+3) 3 \cdot 3^{k}} \\
&=\frac{x^{3}}{3} \sum_{k=0}^{\infty} \frac{1}{(k+4) !(k+3)}\left(\frac{x}{3}\right)^{k} \\
\text{substituting }u=& \frac{x}{3} \\
&=\frac{x^{3}}{3} \sum_{k=0}^{\infty} \frac{1}{(k+4) !(k+3)} u^{k}
\end{aligned}$$
I do not know how to proceed from here.
AI: $$f(x)= \sum_{n=3}^{\infty} \frac{x^{n}}{(n+1) ! n 3^{n-2}}$$
$$f'(x)= \sum_{n=3}^{\infty} \frac{nx^{n-1}}{(n+1)!\, n\, 3^{n-2}}= \frac{3^3}{x^2}\sum_{n=3}^{\infty} \frac{x^{n+1}}{(n+1)!\, 3^{n+1}} = \frac{3^3}{x^2}\sum_{m=4}^{\infty} \frac{(x/3)^{m}}{m!} = \frac{27}{x^2} \left( e^{x/3}-1- \frac{x}{3}-\frac{(x/3)^2}{2!}-\frac{(x/3)^3}{3!}\right)$$
(Note that the series starts at $n=3$, i.e. at $m=n+1=4$, so the exponential has to be corrected by its first four terms $m=0,\dots,3$.)
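A quick numerical cross-check of this closed form (a minimal sketch in plain Python; the truncation at 60 terms is an arbitrary choice):

```python
from math import exp, factorial

def fprime_series(x, terms=60):
    # term-by-term derivative of the series: sum_{n>=3} x^(n-1) / ((n+1)! * 3^(n-2))
    return sum(x**(n - 1) / (factorial(n + 1) * 3**(n - 2))
               for n in range(3, 3 + terms))

def fprime_closed(x):
    # 27/x^2 * (e^{x/3} - 1 - x/3 - (x/3)^2/2! - (x/3)^3/3!)
    u = x / 3
    return 27 / x**2 * (exp(u) - 1 - u - u**2 / 2 - u**3 / 6)

for x in (0.5, 1.0, 2.0):
    print(x, fprime_series(x), fprime_closed(x))  # the two columns should agree
```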
|