How do you parameterize a circle? I need some help understanding how to parameterize a circle.
Suppose the line integral problem requires you to parameterize the circle $x^2+y^2=1$. My question is, if I parameterize it, would it just be $r(t)=(\cos t)\,\mathbf{i}+(\sin t)\,\mathbf{j}$? And how would that change if, say, my radius became $4$ instead of $1$? Thanks in advance!
|
Your parametrization is correct. Once you have a parameterization of the unit circle, it's pretty easy to parameterize any circle (or ellipse, for that matter): What's a circle of radius $4$? Well, it's four times bigger than a circle of radius $1$, so just scale the unit parameterization: $r(t)=(4\cos t)\,\mathbf{i}+(4\sin t)\,\mathbf{j}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1055132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Solving $2\cos^2 x-2\sin^2 x-2\cos x=0$ $$f(x) = 2\cos^2 x-2\sin^2 x-2\cos x$$
I need the values of $x$ which make $f(x) = 0$.
Tried $a^2-b^2 = (a+b)(a-b)$ with no luck
Really just need a hint that could bring me in the right direction
Thanks
EDIT: Solution, thanks to everyone's help! :D
$$f(x) = 2\cos^2 x-2\sin^2 x-2\cos x$$
$$0 = 2\cos^2 x-2 + 2\cos^2 x-2\cos x$$
$$0 = 4\cos^2 x-2\cos x - 2$$
$$\cos x = \frac{2\pm \sqrt {(-2)^2-4(4)(-2)}}{8}$$
$$\cos x = {2 \pm 6\over 8}$$
$$\cos x = {-1\over2}, 1$$
$$x = 2n\pi \pm{2\pi\over3}\quad\text{or}\quad x = 2n\pi$$
Thank you!
|
Hint:
Use $\sin^2x=1-\cos^2x$.
Update:
$$\cos x=-\frac{1}{2},1$$
$$\cos x=-\dfrac12=\cos\left(\frac{2\pi}3\right)\implies x=2k\pi\pm\dfrac{2\pi}3$$
$$\cos x=1\implies x=2k\pi$$
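For a quick double-check of the algebra, here is a small SymPy sketch (my addition, not part of the original answer) that reproduces the two cosine values via the substitution $\sin^2 x = 1-\cos^2 x$:

```python
import sympy as sp

x, c = sp.symbols('x c')

# substituting sin^2 x = 1 - cos^2 x gives a quadratic in c = cos x
quadratic = 4*c**2 - 2*c - 2
print(sp.solve(quadratic, c))              # [-1/2, 1]

# spot-check one solution family in the original equation
f = 2*sp.cos(x)**2 - 2*sp.sin(x)**2 - 2*sp.cos(x)
print(sp.simplify(f.subs(x, 2*sp.pi/3)))   # 0
```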
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1055318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Why are integers subset of reals? In most programming languages, integer and real (or float, rational, whatever) types are usually disjoint; 2 is not the same as 2.0 (although most languages do an automatic conversion when necessary). In addition to technical reasons, this separation makes sense -- you use them for quite different purposes.
Why did they choose to say $\mathbb{Z} \subset \mathbb{R}$ in math? In other words, why are 2 and 2.0 considered the same?
When you are working in $\mathbb{R}$, does it make any difference whether some elements, eg. 2.0, also belong to $\mathbb{Z}$ or not?
|
There are very many reasons, but first of all, if the real numbers did not contain the integers then it would be very very very difficult to do elementary arithmetic.
Computers don't have any way to represent "real numbers"; they can represent only integers and rationals.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1055501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47",
"answer_count": 11,
"answer_id": 7
}
|
y' = y^3 - y stable points So I have this differential equation:
$$y' = y^3 - y$$
And I need to find out which of the points (0, 1, -1) are stable.
So here we go:
$$y = \pm \frac{1}{\sqrt{e^{2x+c}+1}}$$ (solution to differential equation without y=0)
Assume that
$y = y^*(x, y_0^*)$ is a solution which satisfies $y(x_0) = y_0$ then:
$$y_0^* = \pm \frac{1}{\sqrt{e^{2x_0+c}+1}}$$
$$c = \frac{1-y_0^*{^2}}{y_0^*{^2}}e^{-2x_0}$$
$$y^*(x, y_0^*) = \pm\frac{y_0^*{^2}}{(1-y_0^*{^2})e^{2(x-x_0)}+y_0^*{^2}}$$
Firstly, let us examine the point $0$ using Lyapunov's stability definition:
$$\forall \varepsilon \gt 0 : \exists \delta(\varepsilon) \gt 0 : |y(x_0) - y^*(x, y_0^*)| \lt \varepsilon, |y_0 - y_0^*| \lt \delta$$
$$|0-y^*(x, y_0^*)| = \frac{y_0^*{^2}}{|(1-y_0^*{^2})e^{2(x-x_0)}+y_0^*{^2}|} \le \frac{y_0^*{^2}}{|1-y_0^*{^2}|}e^{-2(x-x_0)} \le \frac{y_0^*{^2}}{|1-y_0^*{^2}|} \lt \varepsilon, 1-y_0^*{^2} \gt 0$$
And $$\pm\lim_{x\to \infty}\frac{y_0^*{^2}}{(1-y_0^*{^2})e^{2(x-x_0)}+y_0^*{^2}} = 0$$
so it means that the point $0$ is asymptotically stable.
Now, let us examine the points $1$ and $-1$:
$$|\pm 1-y^*(x, y_0^*)| = \left\lbrace\frac{|1-y_0^*{^2}|e^{2(x-x_0)}}{|(1-y_0^*{^2})e^{2(x-x_0)}+y_0^*{^2}|} \le 1 \lt \varepsilon; \frac{|1-y_0^*{^2}|e^{2(x-x_0)}+2y_0^*{^2}}{|(1-y_0^*{^2})e^{2(x-x_0)}+y_0^*{^2}|} \le 1+\frac{2y_0^*{^2}}{|1-y_0^*{^2}|}e^{-2(x-x_0)} \le 1+\frac{2y_0^*{^2}}{|1-y_0^*{^2}|} \lt \varepsilon \right.$$
That means that in general
$$1+\frac{y_0^*{^2}}{|1-y_0^*{^2}|} \lt \varepsilon, 1-y_0^*{^2} \gt 0$$
Does that mean that $1$ and $-1$ are stable points (but not asymptotically stable)? Or where am I going wrong?
And what about my assumption $$1-y_0^*{^2} \gt 0$$
|
Let me show you a way I was taught
$$
y = y_0 + \delta y
$$
where $y_0 \in \lbrace 0,\pm1\rbrace$
Linearise the original equation given by
$$
y' = F(y)
$$
as
$$
\delta y' = \left.\frac{\partial F(y)}{\partial y}\right\rvert_{y_{0}}\delta y = \left(3y_0^2-1\right)\delta y
$$
The solutions for $\delta y$, i.e. the perturbations, are
$$
\delta y = \delta y_0 \mathrm{e}^{\left(3y_0^2-1\right)t}
$$
it is clear that for $y_0 =\pm 1$ the solution is
$$
\delta y = \delta y_0 \mathrm{e}^{2t}\implies \text{unstable for $t>0$, i.e. $|\delta y|\rightarrow \infty$ as $t\rightarrow \infty$}
$$
and for $y_0 = 0$
$$
\delta y = \delta y_0 \mathrm{e}^{-t}\implies \text{stable for $t>0$}
$$
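A quick symbolic version of this linearisation test (a sketch I am adding, not the answerer's code) confirms the signs of $F'(y_0)$:

```python
import sympy as sp

y = sp.symbols('y')
F = y**3 - y
dF = sp.diff(F, y)            # 3*y**2 - 1

for y0 in [0, 1, -1]:
    rate = dF.subs(y, y0)     # growth rate of the perturbation at y0
    print(y0, rate, "stable" if rate < 0 else "unstable")
# 0 -> -1 (stable), 1 -> 2 (unstable), -1 -> 2 (unstable)
```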
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1055590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Simplifying $\frac{x^6-1}{x-1}$ I have this:
$$\frac{x^6-1}{x-1}$$
I know it can be simplified to $1 + x + x^2 + x^3 + x^4 + x^5$
Edit : I was wondering how to do this if I didn't know that it was the same as that.
|
I'll show a "tricky" method.
$\displaystyle \frac{x^6 - 1}{x-1}$
$= \displaystyle \frac{x^6 -x + x - 1}{x-1} = \frac{x^6 - x}{x-1} + 1 = \frac{x^6 - x^5 + x^5 - x}{x-1} + 1 = x^5 + 1 + \frac{x^5 - x}{x-1} = \frac{x^5 - x^4 + x^4 - x}{x-1} + x^5 + 1 = x^5 + x^4 + 1 + \frac{x^4 - x}{x-1}$
Do you see the pattern?
This is simply to show how you can manipulate expressions; it's a trick.
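If you just want to confirm the end result, a one-line SymPy check (my addition) does it:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.cancel((x**6 - 1)/(x - 1)))
# x**5 + x**4 + x**3 + x**2 + x + 1
```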
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1055671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 12,
"answer_id": 5
}
|
Let R be an integral domain. Show that if the only ideals in R are {0} and R itself, R must be a field. I know that if $(x)=\{0\}$ then $x=0$, since $0=r\cdot 0$ for every $r$ belonging to $R$.
Most likely I'm wrong, but I need help with the second part, where the ideal is $R$ itself.
|
Assume $R$ is a ring with unit.
Let $0\neq x\in R$. Then $0\neq (x)$ is an ideal of $R$, hence $(x)=R$, so there exists $a$ in $R$ such that $ax=1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1055748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Double substitution when integrating. I need to integrate
$$f(x) = \cos(\sin x)$$
my first thought was substituting so that $u = \cos(x)$, but that doesn't seem to do the trick. Is there any way to do a double substitution on this?
Any ways on how to proceed would be appreciated.
|
Although the indefinite integral $($or anti-derivative$)$ cannot be expressed in terms of elementary functions, its definite counterpart does have a closed form in terms of the special Bessel function:
$$\int_0^\tfrac\pi2\cos(\sin x)~dx~=~\int_0^\tfrac\pi2\cos(\cos x)~dx~=~\frac\pi2J_0(1).$$
Also, $$\int_0^\tfrac\pi2\sin(\sin x)~dx~=~\int_0^\tfrac\pi2\sin(\cos x)~dx~=~\frac\pi2H_0(1),$$
where H represents the special Struve function.
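A numerical sanity check with SciPy (an assumed snippet, not part of the answer) confirms both closed forms:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, struve

lhs, _ = quad(lambda x: np.cos(np.sin(x)), 0, np.pi/2)
print(lhs, np.pi/2 * j0(1))           # both ~1.2020

lhs, _ = quad(lambda x: np.sin(np.sin(x)), 0, np.pi/2)
print(lhs, np.pi/2 * struve(0, 1))    # both ~0.8933
```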
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1055847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Miklos Schweitzer 2014 Problem 8: polynomial inequality Look at problem 8 :
Let $n\geq 1$ be a fixed integer. Calculate the distance:
$$\inf_{p,f}\max_{x\in[0,1]}|f(x)-p(x)|$$ where $p$ runs over
polynomials with degree less than $n$ with real coefficients and $f$
runs over functions $$ f(x)=\sum_{k=n}^{+\infty}c_k\, x^k$$ defined on
the closed interval $[0,1]$, where $c_k\geq 0$ and
$\sum_{k=n}^{+\infty}c_k = 1.$
This is what I have so far.
Clearly for $n=1$, we have $1/2$.
I am conjecturing that for $n>1$ we have $(n-1)^{(n-1)} / n^n$ or something similar to that (just take $x^{n-1}$ and $x^n$, then use AM-GM). It's just weird that the pattern does not fit, so it's probably wrong. Any ideas?
|
Your inequality does not hold since $x^{n-1}$ is not the best approximation polynomial of $x^n$ with respect to the uniform norm. By Chebyshev's theorem we have that if $p(x)$ is the best approximation polynomial for $f(x)$, then $f(x)=p(x)$ holds for $\partial p+1$ points in $[0,1]$.
For instance, if $f(x)=x^n$ and $p(x)$ is the Lagrange interpolation polynomial with respect to the points $x=\frac{k}{n}$ for $k=1,2,\ldots,n$, since $f^{(n)}(x)=n!$ we have:
$$\|f(x)-p(x)\|_{\infty} = \left\|\prod_{k=1}^{n}\left(x-\frac{k}{n}\right)\right\|_{\infty}=\frac{n!}{n^n}$$
that is below your bound for any $n\geq 4$.
We can improve this bound by choosing our interpolation points more carefully: by selecting Chebyshev nodes, for instance: $x_k=\cos^2\frac{\pi(2k-1)}{4n}$ for $k=1,\ldots,n$.
In order to find the best approximation polynomial of $x^n$, have a look at the following answer of Noam Elkies on MO: https://mathoverflow.net/questions/70440/uniform-approximation-of-xn-by-a-degree-d-polynomial-estimating-the-error .
Since $\|T_n(2x-1)\|_\infty=1$, with the best choice for the interpolation nodes we have that the uniform error in approximating $x^n$ is always greater than $\color{red}{\frac{2}{4^n}}$.
Since for every function in our class we have $\frac{f^{(n)}(\xi)}{n!}\geq 1$ for any $\xi\in[0,1]$, $f(x)=x^n$ is the easiest function to approximate, and:
$$\inf_{p,f}\|f-p\|_{\infty}=\color{red}{\frac{2}{4^n}}.$$
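A numeric illustration (my own sketch, built from the Chebyshev error curve described in Elkies' answer): $x^n - T_n(2x-1)/2^{2n-1}$ is a polynomial of degree below $n$, and the resulting uniform error is exactly $2/4^n$.

```python
import numpy as np

xs = np.linspace(0, 1, 200001)
for n in range(1, 8):
    # T_n(2x-1)/2^(2n-1) is the claimed optimal error curve on [0,1]
    err_curve = np.cos(n*np.arccos(2*xs - 1)) / 2**(2*n - 1)
    # check that p*(x) := x^n - err_curve really has degree < n
    coeffs = np.polynomial.polynomial.polyfit(xs, xs**n - err_curve, n - 1)
    resid = np.max(np.abs(np.polynomial.polynomial.polyval(xs, coeffs)
                          - (xs**n - err_curve)))
    print(n, np.max(np.abs(err_curve)), 2/4**n, resid)   # resid ~ 0
```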
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1056041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Riemann integrals real analysis How would I do this question?
$$
\int_{-1}^{2}(4x - x^2)dx
$$
I am using a similar method that I did for a different question but I think my problem is I don't know how to tackle the $x^2$.
I am getting an answer of $15$, but $15$ is not the answer.
|
When using the Riemann definition, we first have to define the partition and the interpolation points. In the region $[-1,0]$ we use $x_i=-\frac{i}{N}$, and every interval has length $\frac{1}{N}$. In the region $[0,2]$ we use $x_i=2\frac{i}{N}$ and every interval has length $\frac{2}{N}$. It follows that the integral is approximated by Riemann sums:
\begin{align}
\int_{-1}^2x^2dx&=\lim_{N\to\infty}\left[\sum_{i=1}^N\frac{1}{N}\left(-\frac{i}{N}\right)^2\right]+\lim_{N\to\infty}\left[\sum_{i=1}^N\frac{2}{N}\left(2\frac{i}{N}\right)^2\right]\\
&=\lim_{N\to\infty}\left[\frac{1}{N^3}\sum_{i=1}^Ni^2\right]+8\lim_{N\to\infty}\left[\frac{1}{N^3}\sum_{i=1}^Ni^2\right]\\
&=\lim_{N\to\infty}\left[\frac{1}{N^3}\frac{1}{6}N(N+1)(2N+1)\right]+8\lim_{N\to\infty}\left[\frac{1}{N^3}\frac{1}{6}N(N+1)(2N+1)\right]\\
&=\lim_{N\to\infty}\frac{N(N+1)(2N+1)}{6N^3}+\lim_{N\to\infty}\frac{4N(N+1)(2N+1)}{3N^3}\\
&=\frac{1}{3}+\frac{8}{3}=3
\end{align}
Here we used that $1^2+2^2+\cdots+N^2=\frac{1}{6}N(N+1)(2N+1)$ and that the limit of the ratio of two polynomials is determined by the ratio of the leading coefficients.
Do you understand which intervals are convenient to choose and how to do the summation?
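Here is a tiny numeric illustration of those Riemann sums (my addition, not part of the answer):

```python
for N in [10, 100, 1000, 10000]:
    left  = sum((1/N)*(-i/N)**2  for i in range(1, N+1))   # over [-1, 0]
    right = sum((2/N)*(2*i/N)**2 for i in range(1, N+1))   # over [0, 2]
    print(N, left + right)    # approaches 1/3 + 8/3 = 3
```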
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1056116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Simplifying the sum $\sum\limits_{i=1}^n\sum\limits_{j=1}^n x_i\cdot x_j$ How can I simplify the expression $\sum\limits_{i=1}^n\sum\limits_{j=1}^n x_i\cdot x_j$?
$x$ is a vector of numbers of length $n$, and I am trying to prove that the result of the expression above is positive for any $x$ vector. Is it equal to $\sum\limits_{i=1}^n x_i\cdot \sum\limits_{j=1}^n x_j$? If it is then my problem is solved, because $\left(\sum\limits_{i=1}^n x_i\right)^2$ is non-negative (positive or zero).
|
Yes,
$$\sum_{i=1}^n\sum_{j=1}^nx_i x_j=\left(\sum_{i=1}^ nx_i\right)^2\;.$$
To see this, let $a=\sum_{i=1}^ nx_i$; then
$$\sum_{i=1}^n\sum_{j=1}^nx_i x_j=\sum_{i=1}^n\left(x_i\sum_{j=1}^nx_j\right)=\sum_{i=1}^na x_i=a^2\;.$$
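A quick numeric check (assumed snippet, not from the answer):

```python
import numpy as np

x = np.random.randn(7)
print(np.sum(np.outer(x, x)))   # sum_{i,j} x_i * x_j
print(np.sum(x)**2)             # (sum_i x_i)^2 -- same value
```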
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1056337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Finding a basis for Kernel and Image of a linear transformation using Gaussian elimination. This is a very fast method for computing the kernel/nullspace and image/column space of a matrix. I learned this from my linear algebra teacher but I haven't seen it mentioned online apart from this reference in wikipedia.
The method is as follows: If $A$ is a m × n matrix, we construct $\displaystyle\left[\frac{I}{A}\right]$, where $I$ is the n × n identity matrix.
We then do elementary column operations until our $A$ is in column echelon form and we get the augmented matrix $\displaystyle\left[\frac{C}{B}\right]$. A basis of the kernel of $A$ consists in the columns of $C$ such that the corresponding column of $B$ is a zero column. A basis for the image of $A$ consist of all the non-zero columns of $B$.
An example:
$A =
\begin{pmatrix}
4 & 1 & 3 \\
2 & -1 & 3 \\
2 & 1 & 1 \\
1 & 1 & 0
\end{pmatrix}$
$\displaystyle\left[\frac{I}{A}\right] = \begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\hline 4 & 1 & 3 \\
2 & -1 & 3 \\
2 & 1 & 1 \\
1 & 1 & 0
\end{pmatrix}$ $\to$ $\begin{pmatrix}
1 & 0 & -1 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\hline 4 & 1 & -1 \\
2 & -1 & 1 \\
2 & 1 & -1 \\
1 & 1 & -1
\end{pmatrix}$$\to$ $\begin{pmatrix}
1 & 0 & -1 \\
0 & 1 & 1 \\
0 & 0 & 1 \\
\hline 4 & 1 & 0 \\
2 & -1 & 0 \\
2 & 1 & 0 \\
1 & 1 & 0
\end{pmatrix}$
So a basis for $ Ker(A) = \begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix}$
And a basis for $ Im(A) = \begin{pmatrix} 4 \\ 2 \\ 2 \\ 1 \end{pmatrix},\begin{pmatrix} 1 \\ -1 \\ 1 \\ 1 \end{pmatrix}$.
My problem is that I don't understand fully why this method works. From the wikipedia article I learned this derives from Gaussian Elimination, which sort of makes sense to me as it is similar to the method for finding the inverse of a matrix using Gauss-Jordan Elimination. So my question is: Why does this work?
|
The image of $A$ is precisely the column space of $A$, that is, the vector space spanned by the columns of $A$. Performing column operations does not change the column space of $A$ (see below), so the nonzero columns in the echelon form are a basis for the column space of $A$ (and, hence, for the image of $A$).
A vector $(b_1,\dots,b_n)$ is in the nullspace of $A$ precisely when $b_1C_1+\cdots+b_nC_n=0$, where $C_1,\dots,C_n$ are the columns of $A$. The column operations you are doing are forming linear combinations of the columns of $A$, and the top part of your augmented matrix is keeping track of the coefficients of those linear combinations. So when you get a zero column, the entries above it are the coefficients of a vanishing linear combination of the columns of $A$; they form an element of the nullspace.
"Performing column operations does not change the column space of $A$." There are three kinds of elementary column operation. You just have to check that for each of these three types, performing it does not change the column space.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1056478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Find the value of the sum $$\arctan\left(\dfrac{1}{2}\right)+\arctan\left(\dfrac{1}{3}\right)$$
We were also given a hint of using the trigonometric identity of $\tan(x + y)$
Hint
$$\tan\left(x+y\right)\:=\:\dfrac{\tan x\:+\tan y}{1-\left(\tan x\right)\left(\tan y\right)}$$
|
Let $$\arctan\left(\dfrac{1}{2}\right)+\arctan\left(\dfrac{1}{3}\right)=u $$
Take tangents on both sides using hint given.
$$ \dfrac{1/2 +1/3}{1- {\dfrac{1} {6}}} = \tan(u), $$
$$ \tan(u) =1$$
$$ u = \pi/4 $$
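(A one-line numeric check, added for convenience:)

```python
import math
print(math.atan(1/2) + math.atan(1/3), math.pi/4)   # both 0.7853981...
```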
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1056578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Prove that the sequence $\cos(n\pi/3)$ does not converge EDIT: Using a rigorous formal proof, I need to prove that this sequence does not converge. I of course understand intuitively why it doesn't converge...
$n=1$ to infinity of course.
So, I have a bit of trouble understanding the definition. Here's what I have so far:
I know it doesn't converge to $L$ if there exists an $\varepsilon > 0$ such that for ALL natural numbers $N$, $|a_n - L| \ge \varepsilon$ for some $n\ge N$.
So I can take $\varepsilon = 1/2$, but then I'm not sure what to do. I know I need to prove this by way of contradiction using 2 cases, one where $L \ge 0$ and the other where $L < 0$.
|
If the sequence converges, then all its subsequences have the same limit. However, the subsequence $\big( \cos \big( \frac{(6k+3) \pi}{3} \big) \big)_{k \in \mathbb{N}}$ converges to $-1$ and the subsequence $\big( \cos \big( \frac{(6k) \pi}{3} \big) \big)_{k \in \mathbb{N}}$ converges to $1$. Hence, the limit does not exist.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1056678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
If matrix $\sum_0^\infty C^k$ is convergent, how can I prove that $A(\sum_0^\infty C^k)B$ is convergent? For an $n \times n$ matrix $C$ and If $\sum_0^\infty C^k$ is convergent, how can I prove that for two matrices $A$ and $B$, $A(\sum_0^\infty C^k)B$ is convergent?
It seems quite obvious that you just multiply the convergent matrix sum with two constant matrices.
So, it has got to be convergent.
But I don't know how to prove it..
|
One approach is to note that the map
$$
X \mapsto AXB
$$
is continuous. As Marc van Leeuwen notes below,
$$
A(\sum_{k=0}^N C^k)B = \sum_{k=0}^N AC^kB
$$
so that the limit must be $\sum_{k=0}^\infty AC^kB$, which converges (by our statement of continuity).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1056800",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Evaluate the integral by using Gauss divergence theorem. Evaluate $\iint_S \mathbf{F}\cdot d\mathbf{S}$ where $\mathbf{F}=(xz,yz,x^2+y^2)$
by using the Gauss divergence theorem.
Where $S$ is the closed surface obtained from the surfaces $x^2+y^2\leq 4,z=2,x^2+y^2\leq 16,z=0$ on the top and the bottom and $z=4-\sqrt{x^2+y^2}$ on the side.
My calculation shows that the answer is $\frac{40\pi}{3}$, but the answer should be $\frac{88\pi}{3}$. What am I doing wrong?
I set $$\iint_S \mathbf{F}\cdot d\mathbf{S}=\iiint_{V}\operatorname{div}\mathbf{F}\,dV=2\iiint_V z\,dV$$
In cylindrical coordinates the limits are $$0\leq z\leq 4-r,2\leq r\leq 4,0\leq \theta\leq 2\pi$$
What is wrong with these limits?
|
Your limits are the problem: for $0\le r\le 2$ the solid extends up to $z=2$, not $z=4-r$, so your region is missing the inner cylinder $0\le r\le 2$, $0\le z\le 2$. Integrating over $z$ first avoids the split. Anyway, this is what I get:
$$2 \int\int\int z dV = 2\int_0^2 \int_0^{4-z} \int_0^{2\pi} z r\ d\theta dr dz = 4\pi \int_0^2 \int_0^{4-z} zr\ dr dz$$
$$ = 2\pi \int_0^2 z(4-z)^2 dz = 2\pi \int_0^2 \bigg(z^3 - 8z^2 + 16z\bigg)dz$$
$$ = 2\pi \bigg( \frac{2^4}{4} - \frac{8(2^3)}{3} + \frac{16(2^2)}{2}\bigg) = \frac{88\pi}{3}$$
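The same computation in SymPy (an assumed check, not part of the answer):

```python
import sympy as sp

r, z, theta = sp.symbols('r z theta')
# div F = 2z; the cylindrical volume element contributes a factor r
V = sp.integrate(2*z*r, (theta, 0, 2*sp.pi), (r, 0, 4 - z), (z, 0, 2))
print(V)    # 88*pi/3
```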
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1056940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to evaluate $\lim\limits_{n\to \infty}\prod\limits_{r=2}^{n}\cos\left(\frac{\pi}{2^{r}}\right)$ How do I evaluate this limit ?
$$\lim_{n\to \infty}\cos\left(\frac{\pi}{2^{2}}\right)\cos\left(\frac{\pi}{2^{3}}\right)\cdots\cos\left(\frac{\pi}{2^{n}}\right)$$
I assumed it is using this formaula $\displaystyle \cos(A)=\sqrt{\frac{1+\cos(2A)}{2}}$ But I am stuck
|
Hint: $\cos x = \dfrac{\sin (2x)}{2\sin x}$
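Following the hint, the partial products telescope to $\frac{\sin(\pi/2)}{2^{n-1}\sin(\pi/2^n)}\to\frac{2}{\pi}$; a short numeric check (my addition):

```python
import math

p = 1.0
for r in range(2, 40):
    p *= math.cos(math.pi / 2**r)
print(p, 2/math.pi)    # both ~0.636619...
```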
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1057003",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 1
}
|
A finite sum over $\pm 1$ vectors $$\mbox{What is a nice way to show}\quad
\sum_{\vphantom{\Large A}u\ \in\ \left\{-1,+1\right\}^{N}}
\left\vert\,\sum_{i\ =\ 1}^{N}u_{i}\,\right\vert
= N{N \choose N/2}
\quad\mbox{when}\ N\ \mbox{is even ?.}
$$
Could there be a short inductive proof ?.
|
If there are $r$ -1s in $u$, then $|\sum u_i| = |N-2r|$
Let $M = \frac{N}{2} -1$. Thus we are looking at
$$ \sum_{r=0}^{N} \binom{N}{r} |N - 2r| = 2\sum_{r=0}^{M} \binom{N}{r} (N-2r) $$
(using $\binom{N}{r} = \binom{N}{N-r}$)
Now if $P(x) = \sum_{k=0}^{N} a_k x^k$, then the partial sum $a_0 + a_1 + \dots + a_i$ is given by the coefficient of $x^i$ in the series expansion of $\dfrac{P(x)}{1-x}$
Let $$Q(x) = \sum_{r=0}^{N} q_r x^r = \sum_{r=0}^{N} \binom{N}{r}(N-2r)x^r = N\sum_{r=0}^{N} \binom{N}{r}x^r - \sum_{r=0}^{N} \binom{N}{r}2rx^r =$$
$$= N(1+x)^N - 2xN(1+x)^{N-1} = N(1+x)^{N-1}(1-x)$$
(We used: $(1+x)^N = \sum_{r=0}^{N} \binom{N}{r} x^r$ and then differentiate and multiply by $x$)
We are interested in the partial sums of the coefficients of $Q(x)$: $2(q_0 + q_1 + \dots + q_M)$
Thus we need to look at the coefficient of $x^M$ in $\dfrac{Q(x)}{1-x} = N(1+x)^{N-1}$, which is $N\binom{N-1}{M}$.
Thus your answer is $$ 2N \binom{N-1}{M}$$ which easily simplifies to
$$ N \binom{N}{N/2}$$
(remember, $M = \frac{N}{2} -1$)
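A brute-force cross-check of the identity for small even $N$ (assumed snippet, not part of the answer):

```python
from itertools import product
from math import comb

for N in [2, 4, 6, 8, 10]:
    total = sum(abs(sum(u)) for u in product([-1, 1], repeat=N))
    print(N, total, N * comb(N, N//2))   # the last two columns agree
```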
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1057107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
}
|
Construct quadrangle with given angles and perpendicular diagonals The following came up when I worked on the answer for a different question (though it was ultimately not used in this form):
Proposition.
Given positive angles $\alpha,\beta,\gamma,\delta$ with $\alpha+\beta+\gamma+\delta=360^\circ$, $\beta<180^\circ$, $180^\circ< \alpha+\beta<270^\circ$, $180^\circ< \beta+\gamma<270^\circ$, there exists a convex quadrangle $ABCD$ with $\angle A=\alpha$, $\angle B=\beta$, $\angle C=\gamma$, $\angle D=\delta$, having $AC\perp BD$.
Proof: Ignoring the orthogonality condition, there are many possible quadrangles with the given angles that can be continuously transformed into each other. For such a quadrangle let $P$ denote the intersection of $AC$ and $BD$. In the degenerate case $A=B$, we get $\angle CPD=\alpha+\beta-180^\circ<90^\circ$; in the degenerate case $B=C$, we get $\angle DPA=\beta+\gamma-180^\circ<90^\circ$.
Then the Intermediate Value Theorem guarantees the existence of a case where $\angle CPD=90^\circ$. $_\square$
My question is: Can someone provide a proof not relying on continuity arguments? That is, something more classic Greek compass-and-straightedge-y constructive?
Edit: I had to update and add $\beta<180^\circ$ to the condition in the proposition - the old version would have allowed $\beta\ge180^\circ$ and so no convex quadrilateral at all. If we allow nonconvex quadrangles and diagonals to interesect in the exterior, this additional condition should be unnecessary.
|
Consider a hypothetical solution $\square ABCD$ with diagonals meeting at $X$, and with angle measures and segment lengths as shown:
Then
$$\tan \alpha = \tan A = \tan(\alpha_1 + \alpha_2) = \frac{\tan\alpha_1+\tan\alpha_2}{1-\tan\alpha_1\tan\alpha_2} = \frac{\frac{d}{a}+\frac{b}{a}}{1-\frac{d}{a}\frac{b}{a}} = \frac{a(b+d)}{a^2-bd} \tag{1}$$
$$\tan\beta = \frac{b(a+c)}{b^2-ac} \tag{2}$$
$$\tan\gamma = \frac{c(b+d)}{c^2-bd} \tag{3}$$
$$\tan\delta = \frac{d(a+c)}{d^2-ac} \tag{4}$$
To show that this hypothetical solution is valid, we need only solve equations (1) through (4) for $b$, $c$, $d$ in terms of $\alpha$, $\beta$, $\gamma$, $\delta$, and $a$ (which we can take to be $1$). This is do-able, and the algebra gets no more complicated than quadratics (so that the solution is constructible), but the expressions are a bit messy. I'll post more after I do some clean-up.
Edit. After considerable manipulation, (I think) the above equations reduce to these:
$$\begin{align}
(a^2-b^2)\sin\alpha\sin\beta \cos(\gamma+\beta) + a b \left( \sin\alpha \cos(\gamma+2\beta) + \cos\alpha\sin\gamma \right) &= 0 \\[4pt]
(a^2-d^2)\sin\alpha\sin\delta \cos(\gamma+\delta)\;+ a d \left( \sin\alpha \cos(\gamma+2\delta) + \cos\alpha\sin\gamma \right) &= 0 \\[4pt]
2 (a^2+c^2) \sin\alpha\sin\gamma \cos(\gamma+\beta) \cos(\gamma+\delta)
\qquad- a c ( k + \sin^2(\alpha-\gamma) ) &= 0
\end{align}$$
where
$$k := -1 + \sin^2\alpha + \sin^2\beta + \sin^2\gamma + \sin^2\delta + \cos(\alpha-\gamma) \cos(\beta-\delta)$$
Consequently, we have
$$\begin{align}
\frac{b}{a} &= -\frac{ \sin\alpha \cos(\gamma+2\beta) + \cos\alpha \sin\gamma \pm \sqrt{k}}{2\sin\alpha \sin\beta \cos(\gamma+\beta)} \\[6pt]
\frac{d}{a} &= -\frac{ \sin\alpha \cos(\gamma+2\delta) + \cos\alpha \sin\gamma \pm \sqrt{k}}{2\sin\alpha \sin\delta \cos(\gamma+\delta)} \\[6pt]
\frac{c}{a} &= \frac{ k + \sin^2(\alpha-\gamma) \pm 2 \sin(\alpha-\gamma) \sqrt{k}}{4 \sin\alpha \sin\gamma \cos(\gamma+\beta)\cos(\gamma+\delta)}= \frac{\left(\;\sin(\alpha-\gamma) \pm \sqrt{k} \;\right)^2 }{4 \sin\alpha \sin\gamma \cos(\gamma+\beta)\cos(\gamma+\delta)}
\end{align}$$
where, for now at least, resolution of the "$\pm$"s is left as an exercise to the reader.
Note that the relation $\alpha+ \beta+\gamma+\delta = 360^\circ$ causes each of these to have myriad representations. I don't claim to have given the best possible ones; in fact, I suspect there are representations that make the relations far more clear.
As mentioned, the various quantities are constructible, since the lengths are at most as complicated as a square root. Formulation of a construction strategy will have to wait.
Edit 2. If we normalize our lengths, say, with a constant sum,
$$a + b + c + d = s$$
then we can express each length independently. With $m := \pm\sqrt{k}$, we have
$$\frac{a}{s} = \frac{\left(\; m + \sin(\alpha-\gamma) \;\right)\left(\;m + \sin(\beta+\delta) + 2 \sin\beta\sin\delta\;\right)}{2m\left(\;2\sin(\beta+\delta)+\cos(\beta-\delta)-\cos(\alpha-\gamma) \;\right)}$$
while expressions for $b$, $c$, $d$ arise by cyclically permuting the angles, $\alpha\to\beta\to\gamma\to\delta\to\alpha$. A different normalization (for instance, $a^2+b^2+c^2+d^2=s^2$ seems a classic choice) would lead to different —potentially better— representations, but my attempts at symbol-wrangling haven't resulted in anything particularly nice.
By the way, to verify that the earlier ratios hold, it helps to know that
$$m^2 - \sin^2(\alpha-\gamma) = 4\sin\alpha\sin\gamma\cos(\gamma+\beta)\cos(\gamma+\delta)$$
Therefore multiplying $a$ by the $c/a$ ratio above turns out to be an overly-complicated way to flip a single sign: $m + \sin(\alpha-\gamma) \;\to\; m - \sin(\alpha-\gamma)$, which matches the considerably-easier process of exchanging $\alpha$ and $\gamma$ (and exchanging $\beta$ and $\delta$, which actually does nothing) in the formula for $a$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1057310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 0
}
|
c(n,k) equals subdivisions To compare n files, the total comparison count is:
$$
{{n}\choose{k}} = C^k_n = \dfrac{n!}{k! ( n - k )!}
$$
with k = 2. Input space is composed by all pairs of files to compare. I want to split input space by the number of processor available to parallelize comparison. Dividing the input space doesn't divide the comparison's count equally.
If the file count is 100, the total comparison count is 4,950; with 4 processors, processors 1, 2 and 3 should each do 1,237 comparisons and processor 4 should do 1,239 comparisons.
Dividing 100 by 4 gives this comparison count for each processor:

*p1: files 1..25 (file 1 compared to the next 99 files, file 2 to the next 98, and so on) -> 2,175 comparisons
*p2: files 26..50 -> 1,550 comparisons
*p3: files 51..75 -> 925 comparisons
*p4: files 76..100 -> 300 comparisons

The correct ranges, empirically determined, are [0..13[ (1,295 comparisons), [13..29[ (1,240 comparisons), [29..50[ (1,239 comparisons) and [50..100[ (1,176 comparisons).
I need the reverse function of $ C^k_n $ to determine n from the comparisons count.
|
There's an amusing solution in the case of $4$ processors.
Consider the space of all $(i,j)$ file indexes you're going to compare. This is a triangle, the half of a square with side $n$.
You can split this triangle into four identical ones by means of the other diagonal and the medians.
This generalizes to all powers of $2$.
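If the goal is simply balanced workloads rather than a geometric construction, one can also cut the rows of the comparison triangle greedily. Here is a sketch of that approach (entirely my own, using its own 0-indexed convention, so the cut points differ slightly from the empirically determined ranges in the question):

```python
from math import comb

def split_rows(n, procs):
    """Assign row ranges so each processor gets ~comb(n,2)/procs pairs.
    Row i (0-indexed) is compared against files i+1..n-1: n-1-i pairs."""
    target = comb(n, 2) / procs
    ranges, start, acc = [], 0, 0
    for i in range(n):
        acc += n - 1 - i
        if len(ranges) < procs - 1 and acc >= target * (len(ranges) + 1):
            ranges.append((start, i + 1))
            start = i + 1
    ranges.append((start, n))
    return ranges

for lo, hi in split_rows(100, 4):
    work = sum(100 - 1 - i for i in range(lo, hi))
    print(f"rows [{lo}..{hi}): {work} comparisons")
# rows [0..14): 1295, [14..30): 1240, [30..50): 1190, [50..100): 1225
```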
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1057368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Problem of Partial Differential Equations
For this question, I get stuck when I apply the second initial condition. My answer so far is $\theta= Ae^{-k\lambda^2 t}\cos(\lambda x)$, where $A$ is a constant. Would anyone mind telling me how to solve it?
|
Notice that the general solution to the heat equation with Neumann boundary conditions is given by
$$ \theta(x,t) = \sum_{n=0}^{\infty} A_n\cos\bigg(\frac{nx}{2}\bigg)\exp\bigg(-k\bigg(\frac{n}{2}\bigg)^{2}t\bigg) $$
We are given
$$ \theta(x,0) = 2\pi x - x^{2} $$
But our series solution at $ t = 0$ is
$$ \theta(x,0) = \sum_{n=0}^{\infty} A_n\cos\bigg(\frac{nx}{2}\bigg) $$
These must be equivalent, hence we have
$$ \begin{align}
\theta(x,0) &= \sum_{n=0}^{\infty} A_n\cos\bigg(\frac{nx}{2}\bigg) \\
&= 2\pi x - x^{2} \\
\end{align} $$
Now, for orthogonality, we multiply both sides by
$$ \cos\bigg(\frac{mx}{2}\bigg) $$
And integrate from $0 \rightarrow 2\pi$ (you will need to integrate by parts). Hence you will need to solve
$$ \int_{0}^{2\pi} (2\pi x -x^{2})\cos\bigg(\frac{mx}{2}\bigg) dx = \sum_{n=0}^{\infty} A_n \int_{0}^{2\pi}\cos\bigg(\frac{nx}{2}\bigg)\cos\bigg(\frac{mx}{2}\bigg) dx $$
Where the RHS gives
$$ A_0 \cdot 2\pi $$
for $ n=m=0 $
$$ \implies A_0 = \frac{1}{2\pi} \int_{0}^{2\pi} (2\pi x -x^{2}) dx $$
and
$$ A_n \cdot \pi $$
for $ n=m \ne 0 $
$$ \implies A_n = \frac{1}{\pi} \int_{0}^{2\pi} (2\pi x -x^{2})\cos\bigg(\frac{mx}{2}\bigg) dx $$
Solving for $A_0, A_n$ gives
$$ \theta(x,t) = \frac{2\pi^{2}}{3} + \sum_{n=1}^{\infty} \frac{8}{n^{2}}\bigg[(-1)^{n+1}-1\bigg]\cos\bigg(\frac{nx}{2}\bigg)\exp\bigg(-k\bigg(\frac{n}{2}\bigg)^{2}t\bigg) $$
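The coefficient integrals can be checked with SymPy (an assumed snippet, not part of the answer):

```python
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', integer=True, positive=True)
f = 2*sp.pi*x - x**2

A0 = sp.integrate(f, (x, 0, 2*sp.pi)) / (2*sp.pi)
print(sp.simplify(A0))    # 2*pi**2/3

An = sp.integrate(f*sp.cos(n*x/2), (x, 0, 2*sp.pi)) / sp.pi
print(sp.simplify(An))    # -8*((-1)**n + 1)/n**2, i.e. 8[(-1)^(n+1) - 1]/n^2
```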
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1057484",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Determine the number of zeros for $4z^3-12z^2+2z+10$ in $\frac{1}{2}<|z-1|<2$. I'm faced with the problem in the title
Determine the number of zeros for $4z^3-12z^2+2z+10$ in the annulus $\frac{1}{2}<|z-1|<2$.
Clearly this requires a nifty application of Rouché's Theorem. The reason this isn't so easy for me is that the annulus isn't centered at the origin. In those cases in which it is centered at the origin, it's a simple plugging in of numbers. Could anybody suggest a next step to take?
|
Well I just figured out what to do while writing this, so I'm going to write the trick out here for anybody that might run across the same problem I had.
Just write the polynomial $4z^3-12z^2+2z+10$ as a Taylor Polynomial centered at $z=1$. You don't actually need to use Taylor's theorem for this - using algebra could do it also.
Doing that, you get $4(z-1)^3-10(z-1)+4$. With a simple substitution $s=z-1$, we get $4s^3-10s+4$, and from there it's trivial.
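A numeric confirmation (my addition) that exactly two of the three zeros land in the annulus:

```python
import numpy as np

roots = np.roots([4, -12, 2, 10])          # zeros of 4z^3 - 12z^2 + 2z + 10
dist = np.sort(np.abs(roots - 1))
print(dist)                                 # ~[0.43, 1.32, 1.76]
print(np.sum((0.5 < dist) & (dist < 2)))    # 2 zeros in 1/2 < |z-1| < 2
```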
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1057698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Another way to prove that if n$^{th}$-degree polynomial $r(z)$ is zero at $n+1$ points in the plane, $r(z)\equiv 0$? The original problem is as follows
Let $p$ and $q$ be polynomials of degree $n$. If $p(z)=q(z)$ at $n+1$ distinct points of the plane, then $p(z)=q(z)$ for all $z\in \mathbb{C}$.
I attempted to show this by taking the polynomial $r(z)=p(z)-q(z)$. Since $p(z)$ and $q(z)$ are both polynomials of degree $n$, $r(z)$ is either identically zero ($r(z)\equiv 0$) or a polynomial of degree at most $n$. However, if $p(z)=q(z)$ at $n+1$ distinct points, then $r(z)=0$ at $n+1$ points, which can only happen if $r(z)\equiv 0$ by the fundamental theorem of algebra. Hence, $p(z)=q(z)$. $\blacksquare$.
Is this the correct way to prove this? Is there another way? I'm asking this because I immediately jumped to this proof, and usually I'm wrong with these problems that are at the end of the problem set for a subsection.
|
When people say “Fundamental Theorem of Algebra”, they usually mean the result that every complex polynomial has a complex root. This is far simpler than that, since it’s true over any field at all, not even of characteristic zero.
For a proof, let $\rho_1,\dots,\rho_n,\rho_{n+1}$ be the roots. Then by Euclidean division, you show that $X-\rho_1$ divides your original $r(X)$, in such a way that the quotient polynomial still vanishes at the other $\rho_i$’s. Then you proceed by induction to show that $\prod_i(X-\rho_i)$ divides your original. But the product of all the $X-\rho_i$ is of degree $n+1$. So that original $r(X)$ has to have been zero to start with.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1057794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Colouring graph's edges. Let $G$ be a graph in which each vertex except one has degree $d$. Show that if $G$ can be edge-coloured in $d$ colours then
(1) $G$ has an odd number of vertices,
(2) $G$ has a vertex of degree zero
Please help me with it.
|
Hint:

*Let $u$ be the vertex of non-$d$ degree. We have that $\deg(u) < d$, so there is a color $c$ not used by that vertex.
*Remove all the edges of color $c$; then either
 *$\deg(u)$ is still smaller and we have reduced the problem, or
 *all the degrees are equal, but the sum of the degrees we removed is an even number.
I hope this helps $\ddot\smile$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1057889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to prove $x^2=-1$ has a solution in $\mathbb{Q}_p$ iff $p=1\mod 4$ Let $p$ be prime and let $\mathbb{Q}_p$ denote the field of $p$-adic numbers.
Is there an elementary way to prove $x^2=-1$ has a solution in $\mathbb{Q}_p$ iff $p=1\mod 4$?
I need this result, but I cannot find a reference. Can some recommend a good book or a set of (easily available) lecture notes to help me out?
|
Are you familiar with number theory? By Hensel's lemma (the relevant root is simple, since $p$ is odd), this is equivalent to saying that $x^2 \equiv -1$ is solvable in $\mathbb{F}_p = \mathbb{Z}/p\mathbb{Z}$ iff $p \equiv 1$ (mod $4$). But this is a basic fact of number theory, since
\begin{equation*}
\Big( \frac{-1}{p} \Big) = (-1)^{\frac{p-1}{2}} = 1 \text{ iff } p \equiv 1 (\text{ mod }4)
\end{equation*}
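Euler's criterion makes this easy to see computationally; a small check over odd primes (my addition, not part of the answer):

```python
def minus_one_is_square(p):
    # Euler's criterion: -1 is a QR mod p iff (-1)^((p-1)/2) = 1 mod p
    return pow(p - 1, (p - 1)//2, p) == 1

for p in [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41]:
    print(p, p % 4, minus_one_is_square(p))
# True exactly for the primes with p % 4 == 1
```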
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1057975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How to Find the Matrix of a Linear Map If a map $f$ from the set of polynomials of degree 3 to the real numbers is given by $f(u) = u'(-2)$, how do I find the matrix that represents $f$ with respect to the bases $[1, t, t^2, t^3]$ and $[1]$?
I have worked out $f(1) = 0$, $f(t) = 1$, $f(t^2) = -4$ and $f(t^3) = 12$ and now I'm at a loss.
Any help appreciated.
|
Actually you are done already:
$$f \equiv \pmatrix{0&1&-4&12}$$
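(A two-line check of those values, added for convenience:)

```python
import sympy as sp

t = sp.symbols('t')
print([sp.diff(b, t).subs(t, -2) for b in (1, t, t**2, t**3)])
# [0, 1, -4, 12]
```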
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1058075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Derangement formula; proof by induction Prove by induction that
$
d_{n}=nd_{n-1}+(-1)^{n}
$
where $d_{n}$ is number of $n$-element derangements.
|
Hints
The number of derangements is given by
$$d_n = n! \sum_{k=0}^n \frac{(-1)^k}{k!}.$$
To prove your relationship by induction, show that
*it is correct for the base case $n=1$, i.e. that $d_1 = 1 \cdot d_0 + (-1)^1$, by plugging in.
*it is correct in the inductive step; i.e., assuming that this is true for all $k=1, \ldots, n-1$, prove that this is true for $k=n$.
Can you finish this?
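Before writing the induction, the recurrence can be sanity-checked against the closed form numerically (an assumed snippet, not part of the answer):

```python
from math import factorial

def d(n):   # closed form: n! * sum_{k=0}^{n} (-1)^k / k!
    return sum((-1)**k * (factorial(n) // factorial(k)) for k in range(n + 1))

for n in range(1, 12):
    assert d(n) == n * d(n - 1) + (-1)**n
print([d(n) for n in range(8)])   # [1, 0, 1, 2, 9, 44, 265, 1854]
```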
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1058157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
asymptotics of sum I want to find the asymptotics of the sum below:
$$\sum\limits_{k=1}^{[\sqrt{n}]}\frac{1}{k}(1 - \frac{1}{n})^k$$
Assume I know the asymptotics of this sum (I could be wrong):
$$\sum\limits_{k=1}^{n}\frac{1}{k}(1 - \frac{1}{n^2})^k \sim c\ln{n}$$
So I use the Stolz–Cesàro theorem and want to show that
$$\sum\limits_{k=1}^{[\sqrt{n}]}\frac{1}{k}(1 - \frac{1}{n})^k \sim c\ln{n}$$
where
$$x_n = \sum\limits_{k=1}^{[\sqrt{n}]}\frac{1}{k}(1 - \frac{1}{n})^k$$
$$x_n - x_{n-1} = \frac{1}{\sqrt{n}}(1 - \frac{1}{n})^{\sqrt{n}}$$
$$y_n - y_{n-1} = \ln(n) - \ln(n-1)$$
but
$$
\lim_{n \to \infty}
\frac{\frac{1}{\sqrt{n}}(1 - \frac{1}{n})^{\sqrt{n}}}
{\ln(n) - \ln(n-1)} = \infty
$$
What am I doing wrong?
|
Another method you could try is rewriting your sum as
$$
\sum_{k=1}^{\lfloor\sqrt{n}\rfloor} \frac{1}{k} - \sum_{k=1}^{\lfloor\sqrt{n}\rfloor} \frac{1}{k} \left[1-\left(1-\frac{1}{n}\right)^k\right],
$$
then using Bernoulli's inequality to show that the second sum is $O(n^{-1/2})$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1058259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
There is a function which is continuous but not differentiable I have a function which is a convergent series:
$$f(x) = \sin(x) + \frac{1}{10}\sin(10x) + \frac{1}{100}\sin(100x) + \cdots + \frac{1}{10^n}\sin(10^nx) + \cdots$$
This series converges for every $x$, since its $n$th term is bounded in absolute value by $10^{-n}$ (compare with a geometric series). However, the function is not differentiable, and I don't understand why.
$$\frac{d}{dx}f(x) = \cos(x)+\frac{10}{10}\cos(10x) + \frac{100}{100}\cos(100x) + \cdots + \frac{10^n}{10^n}\cos(10^nx) + \cdots$$
Is this a special case to A Continuous Nowhere-Differentiable Function ?
|
Notice that if you plug $x=k\pi$, where $k$ is any integer, into the term-by-term derivative of that function, the terms do not tend to $0$ (for $n\ge 1$ each term equals $1$), so the differentiated series diverges to $\infty$. Term-by-term differentiation therefore fails there, which signals (though does not by itself prove) that the function is not differentiable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1058367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Why Rational Numbers do not include pairs $(a,b)$ with $b=0$? Let $X=\mathbb Z\times \mathbb Z$
If we have the relation $R$ on $X$ defined by $(a,b)R(c,d)$ if and only if $ad=bc$. Then, what is the problem if $b=0$?
Obviously, I'm not looking for the answer that we cannot divide by 0, but rather something more fundamental. I thought that perhaps it violates the reflexivity of the equivalence relation.
Can I have a hint?
|
As anorton said, this fails to be an equivalence relation when you include $(0,0)$: transitivity breaks down, since $(a,b)\,R\,(0,0)$ and $(0,0)\,R\,(c,d)$ hold for all pairs, which would force everything to be related.
You don't need to throw away all ordered pairs with $b=0$, though; you could just remove $(0,0)$, then get a valid equivalence relation on $\mathbb Z\times\mathbb Z\setminus\{(0,0)\}$. You could think of these as the "extended rationals," $\mathbb Q\cup\{\infty\}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1058488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Do the columns of an invertible $n \times n$ matrix form a basis for $\mathbb R^n$? The columns of an invertible $n \times n$ matrix form a basis for $\mathbb R^n$.
I follow the definition from the textbook, and I guess that because the matrix is invertible, its column vectors are linearly independent, and thus the columns span $\mathbb R^n$.
However, I am still confused. Could someone tell me why or if I've missed something?
Thanks.
|
If you have an $n \times n$ invertible matrix, then the columns (of which there are $n$) must be independent. How do we know that these vectors -- the columns -- span $\mathbb{R}^n$? We know this because $\mathbb{R}^n$ is an $n$-dimensional space, so any independent set of $n$ or more vectors will span the space. Hence, the columns are both independent and span $\mathbb{R}^n$, so they form a basis for $\mathbb{R}^n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1058555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
What are the double union ($\Cup$) and double intersection ($\Cap$) Operators? Finale of THIS.
Unicode says that $\Cup$ and $\Cap$ are double union and intersection, respectively. I was wondering if there was an actual operation that went with these symbols. If not, would these definitions make sense for these operators? As follows:
$$A\Cup B:=\left\{(x,x):x\in(A\cup B)\right\}$$
and
$$A\Cap B:=\left\{(x,x):x\in(A\cap B)\right\}$$
Question
Do these operators exist within Set Theory? If not, do they exist anywhere in the realm of mathematics? Is my idea for these two operators logical and useful?
|
I think a more useful definition would be $A\Cup B:=\{a\cup b\mid a\in A, b\in B\}$, respectively $A\Cap B:=\{a\cap b\mid a\in A, b\in B\}$.
I've never seen the symbol before, but I could think of situations where that might be useful. I'm sure there are situations where the diagonal of $(A\cup B)^2$ (which you're using) is used, but I have no idea, when you'd want to introduce a special symbol for that, and why you'd want to pick one, that looks more like a union than a diagonal, which is usually denoted by $\Delta$...
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1058656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Unit sphere weakly dense in unit ball I'm studying for an exam and came across a problem: I want to prove that the unit sphere in a Hilbert space $\mathcal{H}$ is weakly dense in the unit ball.
I already had to prove that the unit ball is weakly closed, so the weak closure of the unit sphere is contained in the unit ball. What remains to be seen is that the unit ball is contained in the weak closure of the sphere.
I suspect that orthonormal bases will need to be used here; the problem also had us prove that every orthonormal sequence in $\mathcal{H}$ converges weakly to $0$ which I did.
There's another similar question here: Prove: The weak closure of the unit sphere is the unit ball. but that deals with normed spaces as opposed to Hilbert ones and is beyond the scope of my course from the looks of it.
|
A basic neighbourhood of a point $p$ in the weak topology can be written in the form
$$U(y_1, \ldots, y_n; p) = \{x: \left|\langle y_j, x - p \rangle\right| < 1, j = 1 \ldots n\}$$
where $\{y_1, \ldots, y_n\}$ is any finite set of vectors in $\mathcal H$.
If $\|p \| < 1$, find $v \ne 0$ with $\langle y_j, v\rangle = 0$ for $j = 1\ldots n$ (such a $v$ exists because $\mathcal H$ is infinite-dimensional, the only interesting case)
and add an appropriate multiple of $v$ to $p$ to get a unit vector in $U(y_1, \ldots, y_n; p)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1058727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Arithmetic Progressions in slowly oscillating sequences An infinite sequence ($a_0$, $a_1$, ...) is such that the absolute value of the difference between any 2 consecutive terms is equal to $1$. Is there a length-8 subsequence such that the terms are equally spaced on the original sequence and the terms form an arithmetic sequence from left to right?
Clarifications:
1. The Common difference can be negative or 0
Example:
the sequence 1, 2, 3, 2, 3, 4, 3, 4, 5, 4, 5, 6, 5, 6, 7, 6, 7, 8, 7, 8, 9, 8, 9, 10
works because the 3rd term is 3, 6th term is 4, 9th term is 5, ..., 24th term is 10.
and 3rd, 6th, ..., 24th terms are equally spaced. They also form an arithmetic sequence.
I am thinking about the Szekeres theorem, but I don't know if that would work.
EDIT: I was able to show it for $n=$. I am actually more interested in indefinitely long ones. But hey, it could be that one can construct something that an indefinitely long one will never happen.
|
Far from a full answer, but I have some (hopefully) new information. Length-$4$ equally spaced AP subsequences can be found in all finite $(a_n)_{n=1}^N$ with $N>10$ and $\forall n(|a_{n+1}-a_n|=1)$. This can very easily be brute forced, as there exist only $2^N$ distinct length-$N$ sequences which obey the absolute value condition. That comes out to only $2048$ distinct sequences which require checking.
Here is an example of a length $10$ sequence which does not contain any length $4$, equally spaced, AP subsequences. However, appending either $a_{10}+1$ or $a_{10}-1$ to it will negate this.
| /\
|/ \/\ /
| \/
|
So I thought that there is probably some finite length $N$ after which every such sequence will contain length $8$, equally spaced, AP subsequences. Turns out that even for length $5$, $N$ would have to be greater than $32$. There are $2^{32}$ distinct sequences, and filtering out those sequences where length $5$ APs have already been found, over $3$ million sequences are left. This was when I got a memory error.
Perhaps some of you out there with better hardware and/or programming prowess (or just more time) could brute force the solution, if there is indeed such a finite $N$. Of course, a positive answer for $k$ will beg the same question for $k+1$ and eventually you will run out of processing power or memory, which is why this is a rather inelegant method of doing it.
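For concreteness, here is a compact reconstruction of the kind of brute-force search described above (my own sketch, not the answerer's code); it checks every $\pm1$-step sequence of length $11$ (starting at $0$, which is no loss since APs are translation invariant) for a $4$-term equally spaced AP subsequence:

```python
from itertools import product

def has_ap(seq, length):
    n = len(seq)
    for gap in range(1, n):                 # spacing between chosen indices
        for start in range(max(0, n - (length - 1)*gap)):
            vals = [seq[start + j*gap] for j in range(length)]
            diffs = {vals[j+1] - vals[j] for j in range(length - 1)}
            if len(diffs) == 1:             # one common difference -> an AP
                return True
    return False

N = 11   # first length where, per the claim above, every sequence qualifies
ok = all(has_ap([sum(steps[:i]) for i in range(N)], 4)
         for steps in product([-1, 1], repeat=N - 1))
print(ok)   # expected True, matching the N > 10 claim
```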
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1058858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 2,
"answer_id": 0
}
|
expression of the 4-tensor $f \otimes g$ in given basis Let $f$ and $g$ be bilinear functions on $\mathbb{R}^n$ with matrices $a = \{a_{ij}\}$ and $B = \{b_{ij}\}$, respectively. How would I go about finding the expression of the $4$-tensor $f \otimes g$ in the basis$$x_{i_1} \otimes x_{i_2} \otimes x_{i_3} \otimes x_{i_4},\text{ }1 \le i_1, i_2, i_3, i_4 \le n?$$ Any help would be greatly appreciated!
|
Let $\{e_1, \dots, e_n\}$ be the usual basis for $\mathbb{R}^n$, dual to $x_1, \dots, x_n$. We have $$f\otimes g = \sum_{1 \le i_1, i_2, i_3, i_4 \le n} f \otimes g(e_{i_1}, e_{i_2}, e_{i_3}, e_{i_4})x_{i_1} \otimes x_{i_2} \otimes x_{i_3} \otimes x_{i_4}.$$$($Proof: Let$$H = \sum_{1 \le i_1, i_2, i_3, i_4 \le n} f \otimes g(e_{i_1}, e_{i_2}, e_{i_3}, e_{i_4})x_{i_1} \otimes x_{i_2} \otimes x_{i_3} \otimes x_{i_4}.$$Then $H(e_{j_1}, e_{j_2}, e_{j_3}, e_{j_4}) = f \otimes g(e_{j_1}, e_{j_2}, e_{j_3}, e_{j_4})$. Equality at general $v_1$, $v_2$, $v_3$, $v_4$ then follows by multilinearity of both $H$ and $f \otimes g$.$)$
But$$f \otimes g(e_{i_1}, e_{i_2}, e_{i_3}, e_{i_4}) = f(e_{i_1}, e_{i_2})g(e_{i_3}, e_{i_4}) = a_{i_1 i_2}b_{i_3 i_4}$$so$$f \otimes g = \sum_{1 \le i_1, i_2, i_3, i_4 \le n} a_{i_1 i_2} b_{i_3 i_4} x_{i_1} \otimes x_{i_2} \otimes x_{i_3} \otimes x_{i_4}.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1058989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Lattice Paths problem I was assigned to determine the number of "lattice paths" that are in a 11 x 11 square.
Recalling that I can only go upwards and rightwards, here is my approach:
Note: The red square is the restricted area I cannot go through.
I computed the result in WolframAlpha (screenshot of the corrected input omitted).
I just know that we go up to $\binom{11}{3}$, since we have $n = 11$ and then up to $m = 3$ moves of the other kind. I really do not know the technical approach to this; I only know by intuition that it goes up to $\binom{11}{3}$. Can someone explain to me why?
Also, I want to know if my approach to the problem is correct.
Thank you!
|
Your solution is okay.
$$2\times\left[\binom{11}{0}\binom{11}{11}+\binom{11}{1}\binom{11}{10}+\binom{11}{2}\binom{11}{9}+\binom{11}{3}\binom{11}{8}\right]$$
If you go 'right-under' then it is for
certain that you will arrive $\left(11,0\right)$, $\left(10,1\right)$,
$\left(9,2\right)$ or $\left(8,3\right)$.
It cannot happen that you arrive on your path on more than one of
these points.
The factor $2$ covers the fact that you can also go 'left-up' along analogous paths.
Edit:
Your approach is correct.
Going from $(0,0)$ to e.g. $(9,2)$ with steps to the right or upwards comes to choosing $2$ (numbered) steps out of $9+2=11$ to be the steps that go upward. There are $\binom{11}{2}$ ways to do that. Of course it also comes to choosing $9$ out of $11$ to be the steps to the right, giving $\binom{11}{9}$ possibilities and (fortunately :)) $\binom{11}{2}=\binom{11}{9}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1059091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
evaluating a contour integral where c is $4x^2+y^2=2$ Consider the integral
$$\oint_C \frac{\cot(\pi z)}{(z-i)^2} dz,$$ where $C$ is the contour of $4x^2+y^2=2$.
The answer seems to be $$2 \pi i\left(\frac{\pi}{\sinh^2 \pi} - \frac{1}{\pi}\right)$$ but I do not know how to proceed.
I would be grateful if an answer contains some worked out steps on how to proceed with this problem.
|
Hints.
(A) $\cot(\pi z)$ is a meromorphic function having residue equal to $\frac{1}{\pi}$ for every $z\in\mathbb{Z}$;
(B) $\frac{1}{(z-i)^2}$ is a meromorphic function with a double pole in $z=i$;
(C) If $\gamma$ is a simple closed curve in the complex plane and $f(z)$ is a meromorphic function with no singularities on $\gamma$, then:
$$\oint_{\gamma}f(z)\,dz = 2\pi i\cdot \sum_{\zeta\in Z}\operatorname{Res}\left(f(z),z=\zeta\right)$$
where $Z$ is the set of singularities of $f(z)$ inside $\gamma$.
Inside the given contour there are just two singularities of $f(z)=\frac{\cot(\pi z)}{(z-i)^2}$, at $z=0$ and $z=i$.
Since $\operatorname{Res}(f(z),z=0)=-\frac{1}{\pi}$, we just need to compute $\operatorname{Res}(f(z),z=i)$.
$z=i$ is a double pole for $f(z)$, hence:
$$\operatorname{Res}(f(z),z=i)=\lim_{z\to i}\frac{d}{dz}(z-i)^2 f(z)=-\lim_{z\to i}\frac{\pi}{\sin^2(\pi z)}=\frac{\pi}{\sinh^2 \pi}.$$
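The residue computation can be double-checked by direct numeric integration over the ellipse (an assumed snippet, not part of the answer):

```python
import numpy as np

M = 200000
t = np.linspace(0, 2*np.pi, M, endpoint=False)
z  = np.sqrt(0.5)*np.cos(t) + 1j*np.sqrt(2)*np.sin(t)    # 4x^2 + y^2 = 2
dz = -np.sqrt(0.5)*np.sin(t) + 1j*np.sqrt(2)*np.cos(t)   # dz/dt
f  = np.cos(np.pi*z)/np.sin(np.pi*z) / (z - 1j)**2       # cot(pi z)/(z-i)^2

integral = np.sum(f*dz) * (2*np.pi/M)     # rectangle rule on a closed curve
expected = 2j*np.pi*(np.pi/np.sinh(np.pi)**2 - 1/np.pi)
print(integral, expected)                  # agree to many digits
```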
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1059169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Great contributions to mathematics by older mathematicians It is often said that mathematicians hit their prime in their twenties, and some even say that no great mathematics is created after that age, or that older mathematicians have their best days behind them.
I don't think this is true. Gauss discovered his Theorema Egregium, a central result in differential geometry, in his fifties. Andrew Wiles proved Fermat's Last Theorem in his thirties.
Post many examples of great mathematics created over the age of
30, the older the better. Bonus points for mathematicians whose
best work happened over the age of 30. I will define great mathematics to be something of great significance, such as proving a
deep theorem, developing far-reaching general theory, or anything else
of great impact to mathematics.
Addendum: Please also include a brief explanation of why the mathematical result posted is significant.
(Many say that 30 isn't that old, but I'm casting the net wide to get more examples. Age-30 examples would also help to debunk the "peak at twenties" myth. I do ask for examples to be as old as possible, so the lower bound isn't that important.)
I know that mathematicians can produce a lot of work throughout their lives - Euler is a great example - but here I want to focus on mathematics of great significance or impact.
Help me break this misconception!
|
Louis de Branges proved the Bieberbach Conjecture, the 1916 coefficient conjecture for univalent functions that had resisted proof for almost seventy years, at age 53.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1059235",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34",
"answer_count": 11,
"answer_id": 7
}
|
proving gradient of a function is always perpendicular to the contour lines Can someone give an explanation of how such a proof would go, given a function
example: $y = f(x)$
|
I have an intuitive (not formal, just to get a good image) answer for you: What is a gradient? It is the direction of fastest increase for the function $g$. What is a contour line? It is a line on which $g$ has the same value everywhere. But what direction must the gradient have with respect to the contour line to prevent the function $g$ from changing its value? - Perpendicular.
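To make this concrete: if $r(t)$ is a curve lying inside a contour line, then $g(r(t))$ is constant, so the chain rule gives $\nabla g\cdot r'(t)=0$. A small symbolic example (my addition, using $g(x,y)=x^2+y^2$ as an assumed illustration):

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.cos(t), sp.sin(t)        # a contour line of g(x,y) = x^2 + y^2
grad = sp.Matrix([2*x, 2*y])       # gradient of g evaluated on the curve
tangent = sp.Matrix([sp.diff(x, t), sp.diff(y, t)])
print(sp.simplify(grad.dot(tangent)))   # 0: gradient is _|_ to the tangent
```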
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1059293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Integrate a periodic absolute value function \begin{equation}
\int_{0}^t \left|\cos(t)\right|dt = \sin\left(t-\pi\left\lfloor{\frac{t}{\pi}+\frac{1}{2}}\right\rfloor\right)+2\left\lfloor{\frac{t}{\pi}+\frac{1}{2}}\right\rfloor
\end{equation}
I got the above integral from https://www.physicsforums.com/threads/closed-form-integral-of-abs-cos-x.761872/. It seems to hold and the way I approached it was to see that the integrand was periodic and $\int_{\frac{\pi}{2}}^\frac{3\pi}{2} -\cos(t)dt=\int_{\frac{3\pi}{2}}^{\frac{5\pi}{2}} \cos(t)dt=\ldots=2$.
I need to evaluate a similar integral.
\begin{equation}
\int_{0}^t \sin\left(\frac{1}{2}(s-t)\right)\left|\sin\left(\frac{1}{2}s\right)\right|ds
\end{equation}
Here too the integrand is periodic but I am unable to get the closed form. Can someone help me out?
|
Some quick experimentation gave me that
$$
\int_0^t\sin\left(\frac{s-t}{2}\right)\left|\sin\left(\frac{s}{2}\right)\right|\,\mathrm{d}s=
\mathrm{sign}\left(-\sin \left(\frac{t}{2}\right)\right) \left(2 \pi \cos \left(\frac{t}{2}\right) \left\lceil \frac{\left\lfloor \frac{t}{2 \pi }\right\rfloor }{2}\right\rceil +\sin \left(\frac{t}{2}\right)-\frac{1}{2} t \cos \left(\frac{t}{2}\right)\right)$$
This is by no means the only closed form, and it probably isn't nice enough for what you are looking for, so please accept my apologies.
Hope this helps.
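(Added: a spot-check of that closed form against numerical quadrature, my own snippet; the points $t = 2k\pi$, where the sign factor vanishes, are avoided since the formula is ambiguous there.)

```python
import numpy as np
from scipy.integrate import quad

def closed_form(t):
    return np.sign(-np.sin(t/2)) * (
        2*np.pi*np.cos(t/2)*np.ceil(np.floor(t/(2*np.pi))/2)
        + np.sin(t/2) - (t/2)*np.cos(t/2))

for t in [0.7, np.pi, 5.0, 9.3, 14.1]:
    lhs, _ = quad(lambda s: np.sin((s - t)/2)*np.abs(np.sin(s/2)), 0, t)
    print(t, lhs, closed_form(t))   # the two columns should agree
```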
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1059400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
How many ways are there to arrange k out of n elements in a circle with repetition? If you have a set of $n$ elements, in how many ways $Q(n,k)$ can you take some of them and arrange them on a $k$-gon, when repetition of one element is allowed but rotations of one arrangement are not counted twice?
If I am not mistaken, the number for $n=k=3$ should be 11, all of them being listed in the following image (arranged on a triangle which of course is equivalent):
My question now is to give and explain a formula for computing $Q(n,k)$ for general $n$ and $k$ and to optionally note down an algorithm for generating all possible permutations.
|
OK, if I understand correctly, what you want is basically the number of ways to color the vertices of a $k$-gon having $n$ colors available, but taking two colorings to be the same if you can rotate one of them so that they look the same.
To do this we need a little bit of group theory. The intuition is the following: first take the set $C$ of all the colorings of a $k$-gon with $n$ colors available (taking all of them to be different). Then take the group $\mathbb Z_k$ and define the action of $\mathbb Z_k$ on $C$ by rotation; that is, if $c\in C$ is a coloring and $m\in \mathbb Z_k$, define $m*c$ to be the coloring $c$ rotated so that vertex $1$ is now vertex $m+1$.
What we want to count is precisely the number of orbits under this action of $\mathbb Z_k$ on $C$. To do this we shall use Burnside's lemma.
This tells us the number of orbits is $\frac{1}{k}\sum\limits_{m\in \mathbb Z_k}|X^m|$, where $X^m$ is the set of colorings that are unchanged after applying the rotation $m$. (In other words, the colorings $c$ such that $m*c=c$.)
As an example notice that the colorings that use only one color belong to $X^m$ for any $m$, since rotating them doesn't change them.
We now proceed to calculate $|X^m|$ for fixed values of $m$. What exactly is $m*c$? In the coloring $m*c$, vertex $m+1$ has the color vertex $1$ had in the coloring $c$, vertex $m+2$ has the color vertex $2$ had, and so on. The vertices are partitioned into $\gcd(m,k)$ sets of uniform size in the following sense. Take a vertex and draw the edge going from that vertex to the same vertex plus $m$. Do this again with the new vertex, and stop once you come back to the starting vertex. You will have passed through all vertices if and only if $m$ and $k$ are relatively prime. More generally, the vertices are separated into $\gcd(k,m)$ "connected components".
It is easy to see that if you want $c=m*c$ then all vertices that belong to the same "connected component" need to have the same color. Why? Well, clearly vertex $v$ needs to have the same color as vertex $v+m$, but vertex $v+m$ needs to have the same color as vertex $v+2m$, and so on until we go back to vertex $v$. So if we want $c=m*c$ we can select one color for each of the $\gcd(m,k)$ components.
Thus we obtain $|X^m|=n^{\gcd(m,k)}$
Using this result in combination with Burnsides lemma we get the number of colorings is
$\frac{1}{k}\sum\limits_{m\in \mathbb Z_k}n^{\gcd(m,k)}$.
But we can simplify this a bit more. To do this we make two observations. First off, notice $\gcd(m,k)$ is always a divisor of $k$. Having that out of the way we ask ourselves: given $d$ a divisor of $k$, how many numbers $m\in\mathbb Z_k$ satisfy $\gcd(m,k)=d$? Notice $d$ does, and any $m$ satisfying $\gcd(m,k)=d$ is going to be a multiple of $d$; how many of those are there? $\frac{k}{d}$, namely $d,2d,3d,\dots,\frac{k}{d}d$. However the only ones we want are those whose coefficient is relatively prime to $\frac{k}{d}$, so that the $\gcd$ doesn't become larger. Therefore there are $\varphi(\frac{k}{d})$ elements of $\mathbb Z_k$ satisfying $\gcd(m,k)=d$.
Using this we simplify the result to
$$\frac{1}{k}\sum\limits_{d|k}\varphi(k/d)n^d$$
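As a sanity check on this formula, here is a short Python sketch (the helper names are just for illustration) comparing it against a brute-force count of rotation classes; in particular it confirms $Q(3,3)=11$:

from math import gcd
from itertools import product

def phi(n):  # Euler's totient, naive version
    return sum(1 for i in range(1, n + 1) if gcd(i, n) == 1)

def Q_formula(n, k):
    return sum(phi(k // d) * n ** d for d in range(1, k + 1) if k % d == 0) // k

def Q_brute(n, k):
    # one canonical representative (lexicographically least rotation) per class
    reps = set()
    for c in product(range(n), repeat=k):
        reps.add(min(c[i:] + c[:i] for i in range(k)))
    return len(reps)

print(Q_formula(3, 3), Q_brute(3, 3))  # prints: 11 11
assert all(Q_formula(n, k) == Q_brute(n, k) for n in range(1, 4) for k in range(1, 6))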
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1059500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Eigenvalue problem $y'' + \lambda y = 0,$ $y'(0) = 0$, $y(1) = 0$
Find the eigenvalues of
$$y'' + \lambda y = 0, \; y'(0) = 0, y(1) = 0$$
For $\lambda >0$,
$$y(x) = c_1 \cos(\sqrt{\lambda} x) + c_2 \sin(\sqrt{\lambda}x)$$
We get that $y'(0) = 0 \implies c_2 = 0$, but when I try to solve for $\lambda$ when doing $y(1) = 0$, I run into trouble. Anyone know how to do this sort of problem?
|
Apply your boundary condition at $x=1$:
$$
y(1)=c_1\cos(\sqrt{\lambda})=0\implies \sqrt{\lambda}={(2n-1)\pi\over 2},\ n=1,2,\dots
$$
(recall that the zeros of the cosine function occur at odd multiples of $\pi/2$), but then squaring both sides,
$$
\lambda=\lambda_n=\left({(2n-1)\pi\over 2}\right)^2,\ n=1,2,\dots
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1059571",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Difficult general integral definite 0 to 1 $$\int_{0}^{1} \log^2(x)\cdot x^{k+1} dx$$
I tried integration by parts but it leads to an extremely complicated computation, which didn't lead me anywhere.
Then
I tried differentiating the beta function. That was partly successful. But the problem was when I substituted $k+2$ in and then the digammas and trigamma acted out.
Any help?
Thanks!
Series, and ANY type EXCEPT COMPLEX ANALYSIS is welcome.
|
The two answers are excellent, but there is a third giving something interesting, i.e. a relation to the Gamma function:
$$\int_0^1 \log(x)^2 x^{k+1}\ dx=$$
Substituting $u=-\log(x)$:
$$\int_0^{\infty} u^2 \exp(-(k+1)u)\exp(-u)\ du=$$
Substituting $v=(k+2)u$:
$$(k+2)^{-3}\int_0^{\infty} v^2 \exp(-v)\ dv= (k+2)^{-3} \Gamma(3)=2(k+2)^{-3} $$
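A quick numerical check of the result (a Python sketch assuming SciPy; it is not needed for the argument above):

import numpy as np
from scipy.integrate import quad

for k in (0.0, 1.0, 2.5):
    val, _ = quad(lambda x: np.log(x) ** 2 * x ** (k + 1), 0, 1)
    print(k, val, 2 / (k + 2) ** 3)  # the two columns agree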
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1059674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
}
|
Tensor products- balanced maps versus bilinear When defining tensor products $M\otimes_R N$ over a commutative ring $R$ one can use a universal property with respect to bilinear maps $M\times N\rightarrow P$.
On the other hand, in the general case, for noncommutative rings one has to use balanced maps $M \times N \rightarrow Z$ instead of bilinear. Of course, in the first case $P$ is an $R$-module while in the second case $Z$ is just an abelian group.
Remember $f$ is bilinear if $f(mr, n)=rf(m, n)=f(m, nr)$, while $f$ is balanced if and only if $f(mr,n)=f(m, rn)$.
I have the following two natural questions:
*
*Why do the two definitions coincide?
*Is there an example of a balanced map $M\times N \rightarrow P$ which is not bilinear? I cannot construct one by myself. Here I assume R commutative so I can speak about bilinear maps.
|
The right bilinear maps would yield a tensor product of right modules if we construct the corresponding universal object, where $mr\otimes n=m\otimes nr$. The tensor product of two right modules is again a right module. The corresponding construction can also be done for left modules, which is a left module, and we can also take the tensor product of a right module and a left module, which is where the balanced maps come in. In general the tensor product of a right and a left module is only an abelian group as you mentioned, though it can be made into a module if there are module structures on the side not being "boxed in" by the tensor product. For this construction the coefficient ring can also be commutative, as it is possible for a module to have different actions by a commutative ring on the left and the right, which happens for example when we consider the skew group ring over a commutative ring that the group acts on.
If $R$ is any noncommutative ring, the multiplication map $(r,s)\mapsto rs$ is a balanced (also called middle linear) map that is not bilinear on the left or the right.
Now let $R$ be commutative of characteristic not equal to 2. Let $S$ be the free left $R[x]$ module with basis $\{1,y,y^2,\ldots\}$ and define a right action of $R[x]$ by $yx=-xy$. $S$ has the structure of a ring, essentially a polynomial ring in variables that do not commute. The multiplication map $S\times S\to S$ is then an $R[x]$-balanced map that is not bilinear.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1059782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Understanding proof of Peano's existence theorem I'm studying the proof of Peano's existence theorem on this paper.
At page 5 it is said that the problem
$$\begin{cases}
y(t) = y_0 & \forall t ∈ [t_0, t_0 + c/k] \\
y'(t) = f(t − c/k, y(t − c/k)) & \forall t ∈ (t_0 + c/k, t_0 + c]
\end{cases}
$$
has a unique solution.
Can you explain me why? Thank you.
|
The proof proceeds stepwise. At the outset, the solution is given for $t\in[t_0,t_0+c/k]$. Note carefully that then the right hand side of the second equation $y'(t)=f(t-c/k,y(t-c/k))$ is known for $t\in[t_0+c/k,t_0+2c/k]$, so you can integrate:
$$y(t)=y(t_0+c/k)+\int_{t_0+c/k}^{t} f(\tau-c/k,y(\tau-c/k))\,d\tau,\qquad t\in[t_0+c/k,t_0+2c/k].$$
Now repeat the same idea, gaining a unique solution for $t$ in the next interval $[t_0+2c/k,t_0+3c/k]$, and so on.
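To make the stepwise construction concrete, here is a small Python sketch of this method of steps; the choice $f(s,v)=v$ and all parameter values below are hypothetical, purely for illustration. On each interval of length $c/k$ the right-hand side only involves the already-computed previous segment, so each step is a plain integration.

import numpy as np

def method_of_steps(f, t0, y0, c, k, pts=200):
    # y(t) = y0 on [t0, t0 + tau]; y'(t) = f(t - tau, y(t - tau)) afterwards, tau = c/k
    tau = c / k
    ts = [np.linspace(t0, t0 + tau, pts)]
    ys = [np.full(pts, float(y0))]
    for j in range(1, k):
        t = np.linspace(t0 + j * tau, t0 + (j + 1) * tau, pts)
        y_delayed = np.interp(t - tau, np.concatenate(ts), np.concatenate(ys))
        rhs = f(t - tau, y_delayed)  # uses known data only: t - tau lies in earlier segments
        incr = np.concatenate(([0.0], np.cumsum(0.5 * (rhs[1:] + rhs[:-1]) * np.diff(t))))
        ts.append(t)
        ys.append(ys[-1][-1] + incr)  # trapezoid rule, continuous at the junction
    return np.concatenate(ts), np.concatenate(ys)

t, y = method_of_steps(lambda s, v: v, 0.0, 1.0, 1.0, 100)
print(y[-1])  # near e = 2.718..., the value at t = 1 of the limit problem y' = y, y(0) = 1

As $k$ grows, these delayed solutions approach the solution of the undelayed problem $y'=f(t,y)$, which mirrors the family of approximate solutions the proof works with.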
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1059859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Open vs Closed Sets for studying topology Topologies can be defined either in terms of the closed sets or the open sets. Yet most proofs, examples, problems, etc. in standard texts concern the open sets.
I would think closed sets are easier and more intuitive for most people. So, is there a particular reason it is better to work primarily with the open sets?
|
Perhaps the reason is that many proofs and definitions rely on open neighbourhoods. I think that an open set that contains a point is more intuitive than a closed set that doesn't.
Take, for example, this definition of limit:
$\lim_{x\to a}f(x)=L$ if for every $\epsilon>0$ there exists some $\delta>0$ such that $|a-x|\ge\delta$ or $a=x$ whenever $|f(x)-L|\ge\epsilon$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1059987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Find the field of intersection Let $\mathbb{F}_{p^{m}}$ and $\mathbb{F}_{p^{n}}$ be subfields of $\overline{Z}_p$ with $p^{m}$ and $p^{n}$ elements respectively. Find the field $\mathbb{F}_{p^{m}} \cap \mathbb{F}_{p^{n}}$.
Could you give me some hints how I could show that??
|
Hints:
For all prime $\;p\;$ and $\;n\in\Bbb N\;$ :
$$\begin{align}&\bullet\;\; \Bbb F_{p^n}=\left\{\;\alpha\in\overline{\Bbb F_p}\;:\;\;\alpha^{p^n}-\alpha=0\;\right\}\\{}\\
&\bullet\;\; \Bbb F_{p^n}\le\Bbb F_{p^m}\iff n\mid m\;\;\end{align}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1060072",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Help with a recurrence with even and odd terms I have the following recurrence that I've been pounding on:
$$
a(0)=1\\
a(1)=1\\
a(2)=2\\
a(2n)=a(n)+a(n+1)+n\ \ \ (\forall n>1)\\
a(2n+1)=a(n)+a(n-1)+1\ \ \ (\forall n\ge 1)
$$
I don't have much background in solving these things, so I've been googling around looking for different techniques. Unfortunately, I haven't been able to correctly apply them yet. So far I've tried:
*
*Characteristic equations
*Generating functions
*Plugging it into Wolfram Alpha
*Telescoping
*Observation
I'm sure that one or more of these is the way to go, but my biggest problem right now is figuring out how to deal with the idea that there are different equations for odd and even values of $n$. I'm not necessarily looking for a solution (though it would be gratefully accepted), but any nudges in the right direction would be much appreciated.
To follow up, it turned out that the speed problem described in my comment below was related to the Decimal implementation in Python 2.7. Switching to Python 3.3 (and Java) made everything many orders of magnitude better.
|
This isn't a full answer, but might make a solution more tractable.
Define $S(n) = a(n+2) - a(n)$. (FYI I wasn't able to find $S(n)$ sequence in OEIS either)
Then, we get that
$S(2n) = a(2n+2) - a(2n) = a(n+2) + a(n+1) + (n+1) - a(n+1) - a(n) - n = a(n+2) - a(n) + 1 = S(n) + 1$
and
$S(2n+1) = a(2n+3) - a(2n+1) = a(n+1) + a(n) + 1 - a(n) - a(n-1) - 1 = a(n+1) - a(n-1) = S(n-1)$
So the recurrence for $S(2n) = S(n) + 1$ and $S(2n+1) = S(n-1)$. This still has a parity difference in the recursion, but is a bit simpler.
I spent a little while trying to work out a Josephus problem type solution for $S(n)$, with only partial results (i.e. if $n = 2^k + 3(2^j-1)$ with $0 \leq j < k$ and $2 \leq k$, then $S(n) = k - j + 4$). At the very least that shows that the differences between $a(n+2)$ and $a(n)$ vary between small constant values and up values around $\log n$.
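For experimenting with the sequence, here is a short memoized Python sketch of the recurrence; it also spot-checks the simplified recursion for $S$:

from functools import lru_cache

@lru_cache(maxsize=None)
def a(n):
    if n < 2:
        return 1
    if n == 2:
        return 2
    m, r = divmod(n, 2)
    return a(m) + a(m + 1) + m if r == 0 else a(m) + a(m - 1) + 1

def S(n):
    return a(n + 2) - a(n)

for n in range(2, 500):
    assert S(2 * n) == S(n) + 1
    assert S(2 * n + 1) == S(n - 1)
print([a(n) for n in range(12)])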
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1060154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
}
|
Basic set theory proof about cardinality of cartesian product of two finite sets I'm really lost on how to do this proof:
If $S$ and $T$ are finite sets, show that $|S\times T| = |S|\times |T|$.
(where $|S|$ denotes the number of elements in the set)
I understand why it is true, I just can't seem to figure out how to prove it.
Any hints/help will be greatly appreciated!
|
OK here is my definition of multiplication:
$$m\cdot 0=0$$
$$m\cdot (n+1)=m\cdot n +m$$
(you need some such definition to prove something so basic.)
Now let $|T|=m$ and $|S|=n$.
If $n=0$ then $S=\emptyset $ and so $T\times S=\emptyset$ and we are done by the first case.
If $n=k+1$ let $s \in S$ be any element and let $R=S -\{s\}$ then $|R|=k$ and by induction we have $|T\times R|=m\cdot k$.
Now $$T\times S=T\times R \cup T\times \{s\}$$
Now $|T\times \{s\}|=m$ is easy to prove. Further, $T\times R$ and $T\times \{s\}$ are disjoint, so the result follows from the second case and an assumed lemma about the cardinality of disjoint unions being the sum of the cardinalities of the sets.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1060219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
}
|
Prove that if $f$ is integrable on $[0,1]$, then $\lim_{n→∞}\int_{0}^{1} x^{n}f(x)dx = 0$.
Prove that if $f$ is integrable on $[0,1]$, then $\lim_{n→∞}\int_{0}^{1} x^{n}f(x)dx = 0$.
Attempt: this exercise has two parts. I did part a) already. From part a) we know that $g_{n} \geq 0$ is a sequence on integrable functions which satisfies $\lim_{n→∞}\int_{a}^{b} g_{n}dx = 0$.
Then suppose $f$ is integrable, thus, it is bounded so let $m$ be the infimum and $M$ be the supremum of $f$ on the interval $[a,b]$.
Then by the Mean Value Theorem there is a $c \in [m, M]$ such that $\int_{a}^{b} f(x)g(x)dx = c \int_{a}^{b} g(x)dx$.
Then let $g_{n} = x^{n} \geq 0$ $\forall x \in [0,1]$ Then $\int_{a}^{b} f(x)g_{n}(x)dx = c \int_{a}^{b} g_{n}(x)dx = c \int_{a}^{b} x^{n}dx$.
Can someone please help me continue?Any feedback/help would be appreciated. Thank you.
|
If you know that $f$ is bounded, so that $-M \le f \le M$, then you have $\int_0^1 x^n f(x)\,dx \le M \int_0^1 x^n\,dx$. But you can compute $\int_0^1 x^n\,dx$ directly and show that it converges to 0. Likewise, $\int_0^1 x^n f(x)\,dx \ge -M \int_0^1 x^n\,dx$. Use the squeeze theorem.
Alternatively, use the fact that $$\left| \int_0^1 x^n f(x)\,dx \right| \le \int_0^1 x^n |f(x)|\,dx \le M \int_0^1 x^n\,dx.$$
(This assumes that "integrable" means "Riemann integrable". If it means "Lebesgue integrable", then integrable functions need not be bounded, and this proof doesn't work. But if you're working with Lebesgue integrable functions then you probably know the dominated convergence theorem and should just use that.)
(Acknowledgement of priority: I just noticed that Jon's comment contains exactly the same hint.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1060329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Evaluating $\frac{d}{dx} \int_{1}^{3x} \left(5\sin (t)-7e^{4t}+1\right)\,\mathrm dt$
$$\dfrac{d}{dx} \int_{1}^{3x} \left(5\sin (t)-7e^{4t}+1\right)\,\mathrm dt$$
The answer I come up with is: $5\sin(3x)(3)-7e^{4(3x)}(3)$, however this was not on the answer choice. What is the correct way to do this? Thanks.
|
Let $$\begin{align}
f(x)&=\dfrac{d}{dx} \int_{1}^{3x} \left(5\sin (t)-7e^{4t}+1\right)\,\mathrm dt\\
&=\dfrac{d}{dx} \int_{0}^{3x}\left( 5\sin (t)-7e^{4t}+1\right)\,\mathrm dt-\dfrac{d}{dx} \int_0^{1}\left(5\sin (t)-7e^{4t}+1\right)\,\mathrm dt\\
&=\dfrac{d}{dx} \int_{0}^{3x}\left( 5\sin (t)-7e^{4t}+1\right)\,\mathrm dt+0\\
&=\left( 5\sin (3x)-7e^{4\cdot3x}+1\right)\cdot3\\
&=15\sin (3x)-21e^{12x}+3\\
\end{align}$$
$$\dfrac{d}{dx} \int_{1}^{3x} \left(5\sin (t)-7e^{4t}+1\right)\,\mathrm dt=15\sin (3x)-21e^{12x}+3$$
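The computation is easy to confirm with a computer algebra system; a short SymPy sketch:

import sympy as sp

x, t = sp.symbols('x t')
F = sp.integrate(5 * sp.sin(t) - 7 * sp.exp(4 * t) + 1, (t, 1, 3 * x))
print(sp.expand(sp.diff(F, x)))  # 15*sin(3*x) - 21*exp(12*x) + 3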
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1060435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Calculating how many elements are in the product of Cartesian multiplication Let $A = \{1,3\}, B = \{1, 2\}, C = \{1, 2,3\}$.
How many elements are there in the set
$\{(x,y,z) \in A \times B \times C | x + y = z \} $ ?
Two things I'm not familiar with here,
First, how do I do Cartesian multiplication between 3 sets?
And I'm having trouble figuring out what the condition $x+y=z$ has to do with the number of elements.
Can you please show me how to solve this?
thanks :)
|
The Cartesian product of the three sets $A$, $B$, and $C$ is just the set of triples $(a,b,c)$ such that $a∈ A$, $b∈B$, and $c∈C$. We wish to find any and all $(a,b,c)$ that satisfy the property $a+b=c$, where $a∈ A$, $b∈B$, and $c∈C$.
So $1+1=2$, and $1∈A$, $1∈B$, and $2∈C$, so we found one element, namely $(1,1,2)$.
$1+2 = 3$, and $1∈A$, $2∈B$, and $3∈C$, so we found another element, namely $(1,2,3)$.
Notice that $(1,1,3)$ is in $A×B×C$, but $1+1\neq3$, so $(1,1,3)$ is not an element.
Since our sets are small, we can repeat this process and then count how many elements you found that satisfy that property.
Edit: looks like that's all of them.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1060647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Show that $(E\cup Z_1)\setminus Z_2$ has the form $E\cup Z$ I'm trying to do an exercise as follows:
Let $(X, {\mathbf X}, \mu)$ be a measure space and let ${\mathbf Z}=\{E\in {\mathbf X}:\mu(E)=0\}$. Let $\mathbf X'$ be the family of all subsets of $X$ of the form $(E\cup Z_1)\setminus Z_2, E\in \mathbf X$, where $Z_1$ and $Z_2$ are arbitrary subsets of sets belonging to $\mathbf Z$. Show that a set is in $\mathbf X'$ if and only if it has the form $E\cup Z$ where $E\in \mathbf X$ and $Z$ is a subset of a set in $\mathbf Z$.
My proposed answer was if $Q=E\cup Z$ where $E\in \mathbf X$ and $Z\subset P\in\mathbf Z$ then $Q=(E\cup Z)\setminus(P\setminus Z)$ since $Z$ and $P\setminus Z$ are both subsets of $P\in \mathbf Z$. This seems to be wrong since it assumes $P\cap E=\emptyset$. Also, I cannot seem to work out how to go the other way around. I tried defining the set $R=\{x\in X:f(x)>0\}$. By the definition of sigma algebra, $R\in \mathbf X$. Then I tried taking intersections and complements with $R$. However I just keep getting messy expressions which never resolve to the required $E\cup Z$. Is this method with $R$ a good idea or did I miss something obvious?
[This is part of exercise 3.L. of The Elements of Integration and Lebesgue Measure by R. G. Bartle.]
|
Clearly $E \cup Z = (E \cup Z) \setminus \emptyset \in \mathbf X'$.
On the other hand, suppose that $E \in \mathbf X$, $N_1,N_2 \in \mathbf Z$, $Z_1 \subset N_1$, and $Z_2 \subset N_2$.
Now work out that $$(E \cup Z_1) \setminus Z_2 = (E \setminus N_2) \cup \bigg[(E \cap (N_2 \setminus Z_2)) \cup (Z_1 \setminus Z_2)\bigg]$$
which belongs to $\mathbf X'$, since the set in brackets is a subset of $N_1 \cup N_2$.
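The set identity in the last display can be spot-checked mechanically; here is a small Python sketch with random finite sets (for the identity itself, any $Z_1$ and any $Z_2 \subset N_2$ will do):

import random

random.seed(0)
U = range(30)
for _ in range(1000):
    E = {x for x in U if random.random() < 0.5}
    N2 = {x for x in U if random.random() < 0.4}
    Z1 = {x for x in U if random.random() < 0.2}
    Z2 = {x for x in N2 if random.random() < 0.5}  # Z2 is a subset of N2
    lhs = (E | Z1) - Z2
    rhs = (E - N2) | ((E & (N2 - Z2)) | (Z1 - Z2))
    assert lhs == rhs
print("identity holds on all random samples")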
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1060731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Probability of guessing the colors of a deck of cards correctly 10 years ago, when I was about 15, I sat down with a deck of shuffled cards and tried to guess if the next card in the deck would be red or black. In sequence I guessed 36 cards correctly as red or black; at that point my older brother came in and told me what I was doing was stupid, and I stopped being able to guess correctly. I would love to know: what is the probability of guessing 36 cards correctly in sequence? Thank you
|
If you're aiming to guess the first 36 cards right with as high a probability as possible, then the best you can do is to choose some sequence of guesses with 18 reds and 18 blacks, and all those sequences are equally likely.
Each of them has probability
$\frac{(26\times 25\times 24\times\dots\times9)^2}{52\times51\times50\times\dots\times17}$
$=\frac{26!\times26!\times16!}{52!\times8!\times8!}$
$\approx 2.595\times 10^{-11}$.
So, the chances are about 1 in 40 billion.
(Assuming the pack of cards is shuffled)
It's easier than guessing 36 coin-tosses correctly, which would have probability $2^{-36}$ which is about 1 in 70 billion.
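For the record, the arithmetic can be checked exactly with a short Python sketch:

from math import factorial
from fractions import Fraction

p = Fraction(factorial(26) ** 2 * factorial(16), factorial(52) * factorial(8) ** 2)
print(float(p))                       # about 2.595e-11
print(float(1 / p))                   # about 3.9e10, i.e. roughly 1 in 40 billion
print(float(Fraction(1, 2 ** 36)))    # coin-toss comparison, about 1.455e-11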
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1060782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
How find $a,b$ if $\int_{0}^{1}\frac{x^{n-1}}{1+x}dx=\frac{a}{n}+\frac{b}{n^2}+o(\frac{1}{n^2}),n\to \infty$ let
$$\int_{0}^{1}\dfrac{x^{n-1}}{1+x}dx=\dfrac{a}{n}+\dfrac{b}{n^2}+o(\dfrac{1}{n^2}),n\to \infty$$
Find the $a,b$
$$\dfrac{x^{n-1}}{1+x}=x^{n-1}(1-x+x^2-x^3+\cdots)=x^{n-1}-x^n+\cdots$$
so
$$\int_{0}^{1}\dfrac{x^{n-1}}{1+x}=\dfrac{1}{n}-\dfrac{1}{n+1}+\dfrac{1}{n+2}-\cdots$$
and note
$$\dfrac{1}{n+1}=\dfrac{1}{n}\left(\frac{1}{1+\dfrac{1}{n}}\right)=\dfrac{1}{n}-\dfrac{1}{n^2}+\dfrac{1}{n^3}+o(1/n^3)$$
and similarly
$$\dfrac{1}{n+2}=\dfrac{1}{n}-\dfrac{2}{n^2}+o(1/n^2)$$
$$\dfrac{1}{n+3}=\dfrac{1}{n}-\dfrac{3}{n^2}+o(1/n^2)$$
then at which term does the expansion end?
|
$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack}
\newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,}
\newcommand{\dd}{{\rm d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\dsc}[1]{\displaystyle{\color{red}{#1}}}
\newcommand{\expo}[1]{\,{\rm e}^{#1}\,}
\newcommand{\fermi}{\,{\rm f}}
\newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{{\rm i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\Li}[1]{\,{\rm Li}_{#1}}
\newcommand{\pars}[1]{\left(\, #1 \,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\pp}{{\cal P}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,}
\newcommand{\sech}{\,{\rm sech}}
\newcommand{\sgn}{\,{\rm sgn}}
\newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$
\begin{align}
\int_{0}^{1}{x^{n - 1} \over 1 + x}\,\dd x&
=\sum_{k\ =\ 0}^{\infty}\pars{-1}^{k}\int_{0}^{1}x^{n - 1 + k}\,\dd x
=\sum_{k\ =\ 0}^{\infty}{\pars{-1}^{k} \over n + k}
=\sum_{k\ =\ 0}^{\infty}\pars{{1 \over 2k + n} - {1 \over 2k + 1 + n}}
\\[5mm]&={1 \over 4}\sum_{k\ =\ 0}^{\infty}
{1 \over \bracks{k + \pars{n + 1}/2}\pars{k + n/2}}
=\half\bracks{\Psi\pars{n + 1 \over 2} - \Psi\pars{n \over 2}}
\end{align}
where $\ds{\Psi}$ is the Digamma Function.
Then,
\begin{align}
\color{#66f}{\large a}&=\lim_{n\ \to\ \infty}
n\braces{\half\bracks{\Psi\pars{n + 1 \over 2} - \Psi\pars{n \over 2}}}
=\color{#66f}{\large\half}
\\[5mm]
\color{#66f}{\large b}&=\lim_{n\ \to\ \infty}
n^{2}\braces{
\half\bracks{\Psi\pars{n + 1 \over 2} - \Psi\pars{n \over 2}} - {1 \over 2n}}
=\color{#66f}{\large{1 \over 4}}
\end{align}
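A numerical check of $a=\frac12$ and $b=\frac14$ (a Python sketch assuming SciPy's digamma):

from scipy.special import digamma

f = lambda n: 0.5 * (digamma((n + 1) / 2) - digamma(n / 2))
for n in (10, 100, 1000, 10000):
    print(n, n * f(n), n * n * (f(n) - 0.5 / n))  # tends to 0.5 and to 0.25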
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1060893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
$[1-x_n]^{n}\to 1$ implies that $n x_n\to 0$ as $n\to\infty$? As the title says $[1-x_n]^{n}\to 1$ implies that $n x_n\to 0$ as $n\to\infty$?
We know that $\left(1+\frac{x}{n} \right)^n \to e^x$ as $n\to\infty$. This implies (not sure why) that $\left(1+\frac{x}{n} +o(1/n) \right)^n \to e^x$.
So the result obtains since we must have that $x_n=\frac{x}{n}+o(1/n)$ where $x=0$?
|
For $n$ sufficiently large, we have $(1-x_n)^n >0$ (since it converges to $1$).
We can then say that $n \log(1-x_n)$ tends to zero. Necessarily $\log(1-x_n)$ tends to zero.
So $x_n$ tends to zero, and since $\log(1+x) \underset{0}{\sim} x$, we get $\log(1-x_n)\sim -x_n$.
Then $nx_n \to 0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1060947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
How many invertible matrices are in $M_{2}(\mathbb{Z}_{11})$? I tried to solve this question but without success.
How many invertible matrices are in $M_{2}(\mathbb{Z}_{11})$?
Thanks
|
The answer is $ (11^2 - 1)(11^2 - 11) $
The first term is how many non-zero vectors there are in $ (Z_{11})^2 $, the other one is how many there are that are linearly independent of the first one chosen.
You can easily generalize that fact: the number of invertible $ n\times n$ matrices over a finite field $\mathbb{F}_p$ is
$$(p^n - 1)(p^n - p) \dots (p^n - p^{n-1})$$
Each of these tells us how many vectors there are that are linearly independent with the previous ones
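A brute-force check of the count for $p=11$, $n=2$ (a quick Python sketch):

from itertools import product

p = 11
count = sum((a * d - b * c) % p != 0 for a, b, c, d in product(range(p), repeat=4))
print(count, (p ** 2 - 1) * (p ** 2 - p))  # both 13200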
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1061053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Evaluate $\lim_{x→0}\left(\frac{1+\tan x}{1+\sin x}\right)^{1/x^2} $ I have the following limit to evaluate:
$$ \displaystyle \lim_{x→0}\left(\frac{1+\tan x}{1+\sin x}\right)^{1/x^2} $$
What's the trick here?
|
$$\lim_{x\to 0}\left(\frac{1+\tan x}{1+\sin x}\right)^{\frac{1}{x^2}}$$ $$=\exp\left(\lim_{x\to 0}{\frac{1}{x^2}}\left(\frac{1+\tan x}{1+\sin x}-1\right)\right)$$
$$=\exp\left(\lim_{x\to 0}{\frac{1}{x^2}}\left(\frac{\tan x-\sin x}{1+\sin x}\right)\right)$$
$$=\exp\left(\lim_{x\to 0}{\frac{1}{x^2}}\left(\frac{\frac{2\tan\frac{x}{2}}{1-\tan^2\frac{x}{2}}-\frac{2\tan\frac{x}{2}}{1+\tan^2\frac{x}{2}}}{1+\frac{2\tan\frac{x}{2}}{1+\tan^2\frac{x}{2}}}\right)\right)$$
$$=\exp\left(\lim_{x\to 0}{\frac{1}{x^2}}\left(\frac{2\tan\frac{x}{2}\left(1+\tan^2\frac{x}{2}-1+\tan^2\frac{x}{2}\right)}{\left(1-\tan^2\frac{x}{2}\right) \left(1+\tan^2\frac{x}{2}+2\tan\frac{x}{2}\right)}\right)\right)$$
$$=\exp\left(\lim_{x\to 0}{\frac{1}{x^2}}\left(\frac{4\tan^3\frac{x}{2}}{\left(1-\tan^2\frac{x}{2}\right) \left(1+\tan^2\frac{x}{2}+2\tan\frac{x}{2}\right)}\right)\right)$$
$$=\exp\left(\lim_{x\to 0}\left(\tan\frac{x}{2}\right)\lim_{x\to 0}\left(\frac{\left(\frac{\tan\frac{x}{2}}{\frac{x}{2}}\right)^2}{\left(1-\tan^2\frac{x}{2}\right) \left(1+\tan^2\frac{x}{2}+2\tan\frac{x}{2}\right)}\right)\right)$$
$$=e^{\left(\left(0\right)\left(\frac{1}{1\times1}\right)\right)}=e^{0}=1$$
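A tiny numerical sanity check (Python sketch) that the limit is indeed $1$:

import numpy as np

for x in (0.1, 0.01, 0.001):
    print(x, ((1 + np.tan(x)) / (1 + np.sin(x))) ** (1 / x ** 2))

The printed values behave like $e^{x/2}$ and approach $1$ as $x\to0$.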
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1061142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 5
}
|
Closed-form of sums from Fourier series of $\sqrt{1-k^2 \sin^2 x}$ Consider the even $\pi$-periodic function $f(x,k)=\sqrt{1-k^2 \sin^2 x}$ with Fourier cosine series $$f(x,k)=\frac{1}{2}a_0+\sum_{n=1}^\infty a_n \cos2nx,\quad a_n=\frac{2}{\pi}\int_0^{\pi} \sqrt{1-k^2 \sin^2 x}\cos 2nx \,dx.$$ This was considered in this earlier question, and in comments an observation was made: The Fourier coefficients all appear to be of the form $a_n= A_n(k) K(k)+B_n(k) E(k)$ where $K,E$ are the complete elliptic integrals of the first and second kind and $A_n(k),B_n(k)$ are rational functions of $k$.
This is made plausible by the fact that $f(x,k)$ is the first $x$-derivative of the incomplete elliptic integral of the second kind $E(x,k)=\int_0^x f(x',k)\,dx'$. Moreover, this conjecture for $a_k$ can be confirmed by examining the Fourier series of $E(x,k)$; this appears in a 2010 paper by D. Cvijovic, with full text available on ResearchGate.
Something we may conclude from this observation is that, since the Fourier coefficients are linear combinations of complete elliptic integrals, $f(x,k)$ itself must be of the form $A(x,k)K(k)+B(x,k)E(k)$ where $A(x,k), B(x,k)$ are even $\pi$-periodic functions whose Fourier coefficients are rational functions of $k$. Such a Fourier expansion of does appear in the paper noted above, but the functions themselves are not found in closed-form. Hence my question:
Can $A(x,k)$, $B(x,k)$ be obtained in closed-form in terms of known special functions?
|
We have:
$$ \frac{\pi}{2}\,a_n=\int_{0}^{\pi}\sqrt{1-k^2\sin^2\theta}\cos(2n\theta)\,d\theta =\frac{k^2}{4n}\int_{0}^{\pi}\frac{\sin(2\theta)\sin(2n\theta)}{\sqrt{1-k^2\sin^2\theta}}\,d\theta.\tag{1}$$
If we set:
$$ b_m = \int_{0}^{\pi}\frac{\cos(2m\theta)}{\sqrt{1-k^2\sin^2\theta}}\,d\theta,\qquad c_m = \int_{0}^{\pi}\cos(2m\theta)\sqrt{1-k^2\sin^2\theta}\,d\theta $$
we have:
$$ b_m = \int_{0}^{\pi}\frac{\cos(2\theta)\cos((2m-2)\theta)}{\sqrt{1-k^2\sin^2\theta}}\,d\theta-\int_{0}^{\pi}\frac{\sin(2\theta)\sin((2m-2)\theta)}{\sqrt{1-k^2\sin^2\theta}}\,d\theta$$
and expressing $\cos(2\theta)$ as $1-2\sin^2\theta$ we get:
$$ b_m = \frac{2}{k^2}\int_{0}^{\pi}\frac{(k^2/2-k^2\sin^2\theta)\cos((2m-2)\theta)}{\sqrt{1-k^2\sin^2\theta}}\,d\theta-\frac{4(m-1)}{k^2}c_{m-1}$$
so:
$$ b_m = \frac{2}{k^2} c_{m-1} + \frac{k^2-2}{k^2}b_{m-1}-\frac{4m-4}{k^2}c_{m-1} = \frac{k^2-2}{k^2}b_{m-1}-\frac{4m-6}{k^2}c_{m-1}. $$
Moreover, integration by parts gives:
$$ c_m = \frac{k^2}{8m}\left(b_{m-1}-b_{m+1}\right), \tag{2}$$
hence:
$$ b_m = \frac{k^2-2}{k^2}b_{m-1}-\frac{2m-3}{4m-4}(b_{m-2}-b_m), $$
$$ \frac{2m-1}{4m-4} b_m = \frac{k^2-2}{k^2}b_{m-1}-\frac{2m-3}{4m-4}b_{m-2}, $$
or:
$$ b_m = \frac{4m-4}{2m-1}\cdot\frac{k^2-2}{k^2}b_{m-1}-\frac{2m-3}{2m-1}b_{m-2}.\tag{3} $$
Since $b_0 = 2 K(k) $ and $c_0 = 2 E(k)$ we have
$$ b_1 = \frac{1}{k^2}\left((2k^2-4)K(k)+4E(k)\right).$$
In order to have explicit forms for $A(\theta,k)$ and $B(\theta,k)$, it is sufficient to find a closed-form expression for the recursion given by $(3)$, since, by $(1)$ and $(2)$:
$$\frac{\pi}{2}a_n = c_n = \frac{k^2}{8n}\left(b_{n-1}-b_{n+1}\right).$$
By setting:
$$ B(x)=\sum_{n\geq 0} b_n x^n $$
recursion $(3)$ can be converted into a first-order ODE for $B(x)$, giving that $B(x)$ behaves like the reciprocal of the square root of a cubic polynomial, and the radius of convergence of $B(x)$ is $\geq 2.485$. $A(\theta,k)$ and $B(\theta,k)$ can be recovered from $\Re\left(B(e^{2i\theta})\right)$.
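For what it's worth, the starting values and recursion $(3)$ can be verified numerically; a Python sketch (note that SciPy's ellipk and ellipe take the parameter $m=k^2$):

import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk, ellipe

k = 0.6
b = [2 * ellipk(k ** 2),
     ((2 * k ** 2 - 4) * ellipk(k ** 2) + 4 * ellipe(k ** 2)) / k ** 2]
for m in range(2, 7):
    b.append((4 * m - 4) / (2 * m - 1) * (k ** 2 - 2) / k ** 2 * b[m - 1]
             - (2 * m - 3) / (2 * m - 1) * b[m - 2])

for m in range(7):
    direct, _ = quad(lambda t: np.cos(2 * m * t) / np.sqrt(1 - k ** 2 * np.sin(t) ** 2), 0, np.pi)
    print(m, b[m], direct)  # the two columns agree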
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1061251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 1,
"answer_id": 0
}
|
Finding the pdf of $(X+Y)^2/(X^2+Y^2)$ where $X$ and $Y$ are independent and normal
$X$ and $Y$ are iid standard normal random variables. What is the pdf of $(X+Y)^2/(X^2+Y^2)$?
I am guessing you would transform into polar coordinates and go from there, but I am getting lost. Do we need two variable transformations here?
|
$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack}
\newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,}
\newcommand{\dd}{{\rm d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\dsc}[1]{\displaystyle{\color{red}{#1}}}
\newcommand{\expo}[1]{\,{\rm e}^{#1}\,}
\newcommand{\fermi}{\,{\rm f}}
\newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{{\rm i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\Li}[1]{\,{\rm Li}_{#1}}
\newcommand{\pars}[1]{\left(\, #1 \,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\pp}{{\cal P}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,}
\newcommand{\sech}{\,{\rm sech}}
\newcommand{\sgn}{\,{\rm sgn}}
\newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$
In order to keep it simple enough, I'll assume normal distributions like
$\ds{\expo{-\xi^{2}/2} \over \root{2\pi}}$. Let
$\ds{z \equiv {\pars{x + y}^{2} \over x^{2} + y^{2}}}$:
\begin{align}
\color{#66f}{\large\pp\pars{z}}&=\int_{-\infty}^{\infty}{\expo{-x^{2}/2} \over \root{2\pi}}
\int_{-\infty}^{\infty}{\expo{-y^{2}/2} \over \root{2\pi}}\,
\delta\pars{z - {\bracks{x + y}^{2} \over x^{2} + y^{2}}}\,\dd x\,\dd y
\\[5mm]&={1 \over 2\pi}\ \overbrace{\int_{0}^{\infty}\dd r\,r\expo{-r^{2}/2}}
^{\ds{=}\ \dsc{1}}\
\int_{0}^{2\pi}\delta\pars{z - 1 - \sin\pars{2\theta}}\,\dd\theta
\end{align}
It's clear that $\ds{\pp\pars{z} = 0}$ whenever $\ds{z < 0}$ or $\ds{z > 2}$. Hereafter, I'll assume that $\ds{z \in \pars{0,2}}$:
\begin{align}
\left.\color{#66f}{\large\pp\pars{z}}\right\vert_{z\ \in\ \pars{0,2}}&
={1 \over \pi}\int_{0}^{\pi}\delta\pars{z - 1 - \sin\pars{2\theta}}\,\dd\theta
={1 \over \pi}\int_{-\pi/2}^{\pi/2}
\delta\pars{z - 1 + \sin\pars{2\theta}}\,\dd\theta
\\[5mm]&={1 \over \pi}\int_{0}^{\pi/2}\bracks{%
\delta\pars{z - 1 + \sin\pars{2\theta}} + \delta\pars{z - 1 - \sin\pars{2\theta}}}
\,\dd\theta
\\[5mm]&={1 \over \pi}\int_{-\pi/4}^{\pi/4}\bracks{%
\delta\pars{z - 1 + \cos\pars{2\theta}} + \delta\pars{z - 1 - \cos\pars{2\theta}}}
\,\dd\theta
\\[5mm]&={2 \over \pi}\int_{0}^{\pi/4}\bracks{%
\delta\pars{z - 1 + \cos\pars{2\theta}} + \delta\pars{z - 1 - \cos\pars{2\theta}}}
\,\dd\theta
\\[5mm]&={1 \over \pi}\int_{0}^{\pi/2}\bracks{%
\delta\pars{z - 1 + \cos\pars{\theta}} + \delta\pars{z - 1 - \cos\pars{\theta}}}
\,\dd\theta
\\[5mm]&={1 \over \pi}\int_{0}^{\pi/2}
\delta\pars{\verts{z - 1} - \cos\pars{\theta}}\,\dd\theta
\\[5mm]&={1 \over \pi}\int_{0}^{\pi/2}
{\delta\pars{\theta - \arccos\pars{\verts{z - 1}}} \over \verts{\sin\pars{\theta}}}\,\dd\theta
\\[5mm]&={1 \over \pi}\,{1 \over \root{1 - \verts{z - 1}^{2}}}
={1 \over \pi}\,{1 \over \root{z\pars{2 - z}}}\,,\qquad z \in \pars{0,2}
\end{align}
$$
\color{#66f}{\large\pp\pars{z}}
=\color{#66f}{\large\left\{\begin{array}{lcl}
{1 \over \pi}\,{1 \over \root{z\pars{2 - z}}} & \color{#000}{\mbox{if}} & z \in \pars{0,2}
\\
0 && \mbox{otherwise}
\end{array}\right.}
$$
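A Monte Carlo cross-check of this arcsine-type density (a Python sketch; the CDF below, $F(t)=\frac{2}{\pi}\arcsin\sqrt{t/2}$, comes from integrating the density):

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10 ** 6)
y = rng.standard_normal(10 ** 6)
z = (x + y) ** 2 / (x ** 2 + y ** 2)

F = lambda t: 2 / np.pi * np.arcsin(np.sqrt(t / 2))  # CDF of the density above
for t in (0.25, 0.5, 1.0, 1.5, 1.9):
    print(t, (z <= t).mean(), F(t))  # empirical vs. exact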
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1061344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Evaluate the Cauchy Principal Value of $\int_{-\infty}^{\infty} \frac{\sin x}{x(x^2-2x+2)}dx$ Evaluate the Cauchy Principal Value of $\int_{-\infty}^\infty \frac{\sin x}{x(x^2-2x+2)}dx$
So far, I have deduced that there are poles at $z=0$ and $z=1+i$ if using the upper half plane. I am considering the contour integral $\int_C \frac{e^{iz}}{z(z^2-2z+2)}dz$. I don't know how to input graphs here, but the contour would be indented at the origin, with a bigger semicircle of radius $R$ surrounding that. So, I have four contour segments.
$\int_{C_R}+\int_{-R}^{-r}+\int_{-C_r}+\int_r^R=2\pi i\operatorname{Res}[f(z)e^{iz}, 1+i]+\pi i\operatorname{Res}[f(z)e^{iz},0]$ I think. OK, so here is where I get stuck. I'm not sure how to calculate the residue here; it's not a higher-order pole, so I'm not using second derivatives or Laurent series. Which method do I use here?
|
Write $\sin{x} = (e^{i x}-e^{-i x})/(2 i)$. Then consider the integral
$$PV \oint_{C_{\pm}} dx \frac{e^{\pm i z}}{z (z^2-2 z+2)} $$
where $C_{\pm}$ is a semicircular contour of radius $R$ in the upper/lower half plane with a semicircular detour into the upper/lower half plane of radius $\epsilon$. For $C_{+}$, we have
$$PV \oint_{C_{+}} dz \frac{e^{i z}}{z (z^2-2 z+2)} = \int_{-R}^{-\epsilon} dx \frac{e^{i x}}{x (x^2-2 x+2)}+ i \epsilon \int_{\pi}^0 d\phi \, e^{i \phi} \frac{e^{i \epsilon e^{i \phi}}}{\epsilon e^{i \phi} (\epsilon^2 e^{i 2 \phi} - 2 \epsilon e^{i \phi}+2)} \\+ \int_{\epsilon}^R dx \frac{e^{i x}}{x (x^2-2 x+2)}+ i R \int_0^{\pi} d\theta \, e^{i \theta} \frac{e^{i R e^{i \theta}}}{R e^{i \theta} (R^2 e^{i 2 \theta} - 2 R e^{i \theta}+2)} $$
For $C_-$, we have
$$PV \oint_{C_{-}} dz \frac{e^{-i z}}{z (z^2-2 z+2)} = \int_{-R}^{-\epsilon} dx \frac{e^{-i x}}{x (x^2-2 x+2)}+ i \epsilon \int_{-\pi}^0 d\phi \, e^{i \phi} \frac{e^{-i \epsilon e^{i \phi}}}{\epsilon e^{i \phi} (\epsilon^2 e^{i 2 \phi} - 2 \epsilon e^{i \phi}+2)} \\+ \int_{\epsilon}^R dx \frac{e^{-i x}}{x (x^2-2 x+2)}- i R \int_0^{\pi} d\theta \, e^{-i \theta} \frac{e^{-i R e^{-i \theta}}}{R e^{-i \theta} (R^2 e^{-i 2 \theta} - 2 R e^{-i \theta}+2)} $$
In both cases, we take the limits as $R \to \infty$ and $\epsilon \to 0$. Note that, in both cases, the respective fourth integrals have a magnitude bounded by
$$\frac{2}{R^2} \int_0^{\pi/2} d\theta \, e^{-R \sin{\theta}} \le \frac{2}{R^2} \int_0^{\pi/2} d\theta \, e^{-2 R \theta/\pi}\le \frac{\pi}{R^3}$$
The respective second integrals of $C_{\pm}$, on the other hand, become equal to $\mp i \frac{\pi}{2} $. Thus,
$$PV \oint_{C_{\pm}} dz \frac{e^{\pm i z}}{z (z^2-2 z+2)} = PV \int_{-\infty}^{\infty} dx \frac{e^{\pm i x}}{x (x^2-2 x+2)} \mp i \frac{\pi}{2}$$
On the other hand, the respective contour integrals are each equal to $\pm i 2 \pi$ times the sum of the residues of the poles inside their contours. (For $C_-$, there is a negative sign because the contour was traversed in a clockwise direction.) The poles of the denominator are at $z_{\pm}=1 \pm i$. Thus,
$$PV \int_{-\infty}^{\infty} dx \frac{e^{\pm i x}}{x (x^2-2 x+2)} \mp i \frac{\pi}{2} = \pm i 2 \pi \frac{e^{\pm i (1 \pm i)}}{(1 \pm i) (2) (\pm i)} $$
Taking the difference between the two results and dividing by $2 i$, we get that
$$\int_{-\infty}^{\infty} dx \frac{\sin{x}}{x (x^2-2 x+2)} = \frac{\pi}{2} \left (1+\frac{\sin{1}-\cos{1}}{e} \right ) $$
Note that we may drop the $PV$ because the difference between the integrals removes the pole at the origin.
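A numerical check of the final value (a Python sketch; writing $\sin x/x$ via numpy's sinc removes the removable singularity at the origin, and the principal value is approximated by a large symmetric interval):

import numpy as np
from scipy.integrate import quad

f = lambda x: np.sinc(x / np.pi) / (x ** 2 - 2 * x + 2)  # equals sin(x)/(x(x^2-2x+2))
val, _ = quad(f, -300, 300, limit=3000)
print(val, np.pi / 2 * (1 + (np.sin(1) - np.cos(1)) / np.e))  # both about 1.7447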
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1061378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
Prove by induction that an expression is divisible by 11
Prove, by induction that $2^{3n-1}+5\cdot3^n$ is divisible by $11$ for any even number $n\in\Bbb N$.
I am rather confused by this question. This is my attempt so far:
For $n = 2$
$2^5 + 5\cdot 9 = 77$
$77/11 = 7$
We assume that there is a value $n = k$ such that $2^{3k-1} + 5\cdot 3^k$ is divisible by $11$.
We show that it is also divisible by $11$ when $n = k + 2$
$2^{3k+5} + 5\cdot 3^{k+2}$
$32\cdot 2^{3k} + 5\cdot 9 \cdot3^k$
$32\cdot 2^{3k} + 45\cdot 3^k$
$64\cdot 2^{3k-1} + 45\cdot 3^k$ (Making both terms match the form they had when $n = k$)
$(2^{3k-1} + 5\cdot 3^k) + (63\cdot 2^{3k-1} + 40\cdot 3^k)$
The first group of terms $(2^{3k-1} + 5\cdot 3^k)$ is divisible by $11$ because we have made an assumption that the term is divisible by $11$ when $n=k$. However, the second group is not divisible by $11$. Where did I go wrong?
|
Keep going!
$64\cdot 2^{3k-1} + 45\cdot 3^k = 9(2^{3k-1} + 5\cdot3^k) + 55\cdot2^{3k-1}$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1061477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 9,
"answer_id": 1
}
|
Why is the grade of the wedge product of two arbitrary blades the sum of the two blades' grades independently? I'm reading Geometric Algebra For Computer Science, An Object Oriented Approach to Geometry and it says that this is true of any two arbitrary blades.
$\ grade( \textbf{ A} \wedge \textbf{B})=grade( \textbf{ A} )+grade( \textbf{B})$
However, it seems like this is wrong, since
$\ 0=grade( (e_1 \wedge e_2) \wedge (e_2 \wedge e_3)) \\=grade(e_1 \wedge e_2) +grade (e_2 \wedge e_3)\\ =grade(e_1)+ grade( e_2) +grade (e_2) +grade( e_3)=4$
Is this formula incorrect or am I using it incorrectly, and how?
|
I don't think there's any issue here. $(e_1 \wedge e_2) \wedge (e_2 \wedge e_3)$ is the zero 4-vector. It should not be confused with the zero scalar, although in geometric algebra, we can and often do use the same symbol (0) to denote any zero $k$-vector, or whole linear combinations of these zero $k$-vectors.
I would say the grade of the zero 4-vector is still 4.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1061556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Why do we focus so much in math on functions (as a subclass of relations)? Why is it that math so focuses on the subclass of relations known as functions? I.e. why is it so useful for us in nearly all branches of mathematics to focus on relations which are left-total and single-valued? Left- (or even right-) totality seems intuitive, since if an element doesn't appear in the domain, we might throw it out. But why single-valuedness?
I'm looking for something like a "moral explanation" of why they would be the most useful subclass of relations.
My apologies if this is a previous question; I looked and didn't find much.
|
A function models a deterministic computation: if you put in $x$, you always get out the same result, $f(x)$, hence the left-uniqueness.
The asymmetry of the definition (left uniqueness rather than right uniqueness) is because the left side models the input and the right side models the output, and the input is logically prior to the output. If you know the input, you can determine the output, but you can't (in general) do the reverse. The function $f:x\mapsto x^2$ means that if you put in 17 you get out 289. But it makes no sense at all to ask what you get out before specifying what you put in.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1061735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Conjugation map on a complex vector space Let $V$ be a complex vector space and $J:V\to V$ a map with the following properties:
(i) $J(v+w)=J(v)+J(w)$
(ii) $J(cv)=\overline{c}J(v)$
(iii) $J(J(v))=v$
for all $v,w\in V$ and $c\in\mathbb{C}$. Put $W=\{v\in V\,|\, J(v)=v\}$; this is a real vector space with respect to the operations in $V$ (I have shown this). From here, I am asked to show:
$\bullet$ For each $v\in V$, there exist unique vectors $w,u\in W$ such that $v=w+iu$.
The uniqueness is very straight-forward once you have the decomposition. I've been trying the usual kinds of tricks, like writing $v=\bigl(v-J(v)\bigr)+J(v)$, but this clearly doesn't work. And I understand morally what's happening: we're decomposing $V$ into its formal real part and formal imaginary part. I just don't see how to get the decomposition.
Thanks
|
Given $z \in \mathbb{C}$, we have $\operatorname{Re}(z) = \frac{1}{2}(z + \bar{z})$ and $\operatorname{Im}(z) = \frac{1}{2i}(z - \bar{z})$. Let's try the analogous thing, where in place of $\mathbb{C}$ we have $V$, and instead of conjugation, we have $J$.
Let $w = \frac{1}{2}(v + J(v))$ and $u = \frac{1}{2i}(v - J(v))$; note, we're allowed to multiply by $\frac{1}{2i}$ in the expression for $u$ because $V$ is a complex vector space. Now note that
$$J(w) = J\left(\frac{1}{2}(v + J(v))\right) = \frac{1}{2}(J(v)+J(J(v))) = \frac{1}{2}(J(v) + v) = w$$
and
$$J(u) = J\left(\frac{1}{2i}(v - J(v))\right) = \frac{-1}{2i}(J(v)-J(J(v))) =\frac{-1}{2i}(J(v) - v) = u.$$
So $w, u \in W$ and
$$w + iu = \frac{1}{2}(v + J(v)) + i\left(\frac{1}{2i}(v - J(v))\right) = \frac{1}{2}(v + J(v)) + \frac{1}{2}(v - J(v)) = v.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1061823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
When to use Binomial Distribution vs. Poisson Distribution?
A bike has probability of breaking down $p$, on any given day.
In this case, to determine the number of times that a bike breaks down in a year, I have been told that it would be best modelled with a Poisson distribution, with $\lambda = 365\,p$.
I am wondering why it would be incorrect to use a binomial distribution, with $n=365$. After all, isn't Poisson really an approximation of a sum of Bernoulli random variables?
Thanks!
|
For Binomial, we assume the bike breaks down at most once in a given day (one Bernoulli trial). For Poisson, the bike might get fixed and break yet again in the same day (as said by @JMoravitz). Still, if the chosen time interval (a day is an arbitrary choice) is narrowed down so much that the likelihood of breaking twice becomes negligible, Binomial is the model for the distribution. In that case, however, the number of Bernoulli trials becomes very large, in which case the Binomial converges to the Poisson distribution (Poisson limit theorem). And then, "it is best modeled as a Poisson distribution because the calculations are much simpler and the approximation is sufficiently close for large n" (@Graham Kemp)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1061916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 3
}
|
What is a counting number? The definition of natural number is given as The counting numbers {1, 2, 3, ...}, are called natural numbers. They include all the counting numbers i.e. from 1 to infinity. at the link http://en.wikipedia.org/wiki/List_of_types_of_numbers.
What does counting number mean, are there any numbers which are not countable?
|
The counting numbers are $\mathbb{Z}^{+} = \big\{1, 2, 3, \ldots\big\}$, also known as the natural numbers.
Even though the counting numbers go on forever, they can be counted.
Given a set $X$,
$X$ is denumerable (or enumerable) if there is a bijection $\mathbb{Z}^{+} → X$.
A set is countable if either it is finite or it is denumerable.
Thus the set $\mathbb{Z}^{+} = \big\{1, 2, 3, \ldots\big\}$ is countable, since there is a bijection $\mathbb{Z}^{+} → \mathbb{Z}^{+}$ (for instance the identity map).
This gives the listing $1, 2, 3, \ldots$
The set $\mathbb{R}$ of real numbers is uncountable (not countable) since there is no bijection $\mathbb{R} → \mathbb{Z}^{+}$.
This can be shown, for example, with Cantor's diagonal argument: any attempted listing of the real numbers necessarily misses some real number.
Thus, for a set to be countable, there has to be a one-to-one and surjective correspondence between the natural numbers (or a finite initial segment of them) and the given set of numbers.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1063006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Is it true that the normalizer of $H=A \times A$ in the symmetric group cannot be doubly transitive? Is it true that the normalizer of $H=A \times A$ in the symmetric group cannot be doubly transitive, where $H$ is non-regular, $A \subseteq H$ and $A$ is a regular permutation group?
|
Let the group $A$ act regularly on a set $X$. Then the centralizer of $A$ in the symmetric group $S_X$ is a regular group $A'$ isomorphic to $A$. (The centralizer of the (image of the) left regular representation of $A$ is the right regular representation.) Note that $A \cap A' = Z(A)$, so we only get $A \times A$ acting in the manner described if $Z(A)=1$. So let's assume that $Z(A)=1$.
Let $G=N_{S_X}(H)$ with $H=A \times A$ and suppose that $G$ is $2$-transitive. Now the two direct factors of $H$ are fixed or interchanged by elements of $G$, so the normalizer $N$ of the two factors has index $2$ in $G$. Then, for any $x \in X$,
$N_x \unlhd G_x$ with $|G_x:N_x| \le 2$, so $N_x$ either acts transitively, or it has two orbits of equal size on $X \setminus \{x\}$.
Now since $A$ acts regularly on $X$, for each $y \in X$ there is a unique $a \in A$ with $a(x)=y$, and so we can identify $A$ with $X$, with $x \in X$ corresponding to the identity element $1 \in A$. If we do that, then the action of $N_x$ on $X$ corresponds to the conjugation action of $N_x$ on $A$.
So, if $N_x$ is transitive on $X \setminus \{x\}$, then all elements of $A \setminus \{1\}$ are conjugate under $N_x$, so they all have the same order, which must be a prime, so $A$ is a $p$-group, contradicting $Z(A) = 1$.
Otherwise $N_x$ has two orbits of the same size on $X \setminus \{x\}$, and so (since $A$ cannot be a $p$-group), $A$ must have equal numbers of elements of orders $p$ and $q$ for distinct primes $p$ and $q$. So $|A|=p^aq^b$ for some $a,b > 0$. Also, since the elements of orders $p$ and $q$ in $A$ are all conjugate under $N$, the Sylow $p$- and $q$-subgroups of $A$ must be elementary abelian, so all conjugacy classes of $p$-elements have order a nonzero power of $q$ while those of order $q$ have order a nonzero power of $p$. Hence $(p^aq^b-1)/2$ is divisible by $p$ and $q$, which is clearly impossible.
So $G$ cannot be $2$-transitive.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1063092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Reading the coefficients with Maple If I have
$$
a+(b-c \cdot a) \cdot x = 5+7x
$$
then I know that
$$
a = 5
$$
and
$$
b-c \cdot a = b-5c = 7
$$
Can I get this result with Maple?
If I just write solve(a+(b-c*a)*x = 5+7*x) it will solve it instead of just 'reading' the coefficient.
|
You could do as follows
L := a+(b-c*a)*x;
R := 5+7*x;
solve(coeff(L,x,0)=coeff(R,x,0),a);
This will output $a=5$. You are solving the equation that the coefficients of the 0 degree terms on each side are equal. To solve for $a$ and say $b$ simultaneously you could try
solve({coeff(L,x,0)=coeff(R,x,0),coeff(L,x,1)=coeff(R,x,1)},{a,b});
I hope this helps!
NOTE: The command $\verb!coeff(expression,var,degree)!$ regards $\verb!expression!$ as a polynomial in $\verb!var!$ and returns the coefficient of the term of degree specified in $\verb!degree!$.
Of course, defining $\verb!L!$ and $\verb!R!$ separately can also be replaced by storing the equation as $\verb!eq1!$ and then applying the commands $\verb!lhs(eq1)!$ and $\verb!rhs(eq1)!$ instead. (Note that $\verb!lhs!$ and $\verb!rhs!$ are protected built-in names in Maple, which is why the two sides are stored under different names above.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1063210",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Evaluate $\int_0^{\pi/2}x\cot{(x)}\ln^4\cot\frac{x}{2}\,\mathrm dx$ How to evaluate the following integral ?:
$$
\int_{0}^{\pi/2}x\cot\left(\, x\,\right)\ln^{4}\left[\,\cot\left(\,{x \over 2}\,\right)\,\right]\,{\rm d}x
$$
It seems that it evaluates to
$$
{\pi \over 16}\left[\,
5\pi^{4}\ln\left(\, 2\,\right) - 6\pi^{2}\zeta\left(\, 3\,\right)
-{93 \over 4}\,\zeta\left(\, 5\,\right)
\,\right]
$$
Exactly?
|
Let $$J = \int_0^1 {\frac{{\arctan x{{\ln }^4}x}}{x}dx} \qquad K = \int_0^1 {\frac{x{\arctan x{{\ln }^4}x}}{{1 + {x^2}}}dx}$$
Then by M.N.C.E.'s comment, $$\tag{1}I = \int_0^{\pi /2} {x\cot x{{\ln }^4}\left( {\cot \frac{x}{2}} \right)dx} = 2J - 4K$$
Here is a symmetry of the integrand that we can exploit:
$$\begin{aligned}
K &= \int_0^1 {\frac{{x\arctan x{{\ln }^4}x}}{{1 + {x^2}}}dx}
\\&= \int_0^1 {\frac{{\arctan x{{\ln }^4}x}}{x}dx} - \int_0^1 {\frac{{\arctan x{{\ln }^4}x}}{{x(1 + {x^2})}}dx}
\\&= J - \int_1^\infty {\frac{{x\left( {\frac{\pi }{2} - \arctan x} \right){{\ln }^4}x}}{{1 + {x^2}}}dx}
\\& = J - \int_1^\infty {\frac{1}{x}\left( {\frac{\pi }{2} - \arctan x} \right){{\ln }^4}xdx} + \int_1^\infty {\frac{1}{{x(1 + {x^2})}}\left( {\frac{\pi }{2} - \arctan x} \right){{\ln }^4}xdx}
\\& = J - J + \frac{\pi }{2}\int_1^\infty {\frac{{{{\ln }^4}x}}{{x(1 + {x^2})}}dx} - \int_1^\infty {\frac{{\arctan x}}{{x(1 + {x^2})}}{{\ln }^4}xdx}
\\&= \frac{\pi }{2}\int_0^1 {\frac{{x{{\ln }^4}x}}{{1 + {x^2}}}dx} - \int_0^\infty {\frac{{\arctan x}}{{x(1 + {x^2})}}{{\ln }^4}xdx} + \int_0^1 {\frac{{\arctan x}}{{x(1 + {x^2})}}{{\ln }^4}xdx}
\\& = \frac{\pi }{2}\int_0^1 {\frac{{x{{\ln }^4}x}}{{1 + {x^2}}}dx} - \int_0^\infty {\frac{{\arctan x}}{{x(1 + {x^2})}}{{\ln }^4}xdx} + \int_0^1 {\frac{{{{\ln }^4}x\arctan x}}{x}dx} - \int_0^1 {\frac{{x{{\ln }^4}x\arctan x}}{{1 + {x^2}}}dx}
\\& = \frac{\pi }{4}\int_0^1 {\frac{{x{{\ln }^4}x}}{{1 + {x^2}}}dx} - \frac{1}{2}\int_0^\infty {\frac{{\arctan x}}{{x(1 + {x^2})}}{{\ln }^4}xdx} + \frac{J}{2} \end{aligned}$$
The fact that the exponent $4$ is even is paramount here.
Plugging into $(1)$, the $J$ miraculously cancelled:
$$I = 2\underbrace{\int_0^\infty {\frac{{\arctan x}}{{x(1 + {x^2})}}{{\ln }^4}xdx}}_{L} - \pi \underbrace{\int_0^1 {\frac{{x{{\ln }^4}x}}{{1 + {x^2}}}dx}}_{45\zeta(5)/64} $$
The crux of the problem is, indeed, evaluating the remaining integral.
Note that $$ L = \int_0^{\frac{\pi }{2}} {x\cot x{{\ln }^4}(\tan x)dx} $$ and we have the following formula:
For $-2<p<1$, $$\int_0^{\frac{\pi }{2}} {x{{\tan }^p}xdx} = \frac{\pi
}{4}\csc \frac{{p\pi }}{2}\left[ {\psi ( - \frac{p}{2} + 1) - 2\psi (
- p + 1) - \gamma } \right]$$
Hence the value of $L$ follows from it by differentiating four times and setting $p=-1$:
$$L = -\frac{3 \pi ^3 \zeta (3)}{16}-\frac{3 \pi \zeta (5)}{8}+\frac{5}{32} \pi ^5 \ln 2$$
Finally, we obtain $$I = 2L - \pi \frac{45\zeta(5)}{64} = \color{blue}{-\frac{3 \pi ^3 \zeta (3)}{8}-\frac{93 \pi \zeta (5)}{64}+\frac{5}{16} \pi ^5 \ln 2}$$
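A numerical confirmation of the closed form (a Python sketch assuming SciPy):

import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

f = lambda x: x / np.tan(x) * np.log(1 / np.tan(x / 2)) ** 4
val, _ = quad(f, 0, np.pi / 2, limit=500)
closed = (-3 * np.pi ** 3 * zeta(3) / 8 - 93 * np.pi * zeta(5) / 64
          + 5 * np.pi ** 5 * np.log(2) / 16)
print(val, closed)  # the two values agree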
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1063275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
I am working on proving or disproving $\cos^5(x)-\sin^5(x)=\cos(5x)$ True or false? $$\cos^5(x)-\sin^5(x)=\cos(5x)$$ for all real x.
I have no idea how to prove or disprove this. I tried to expand $\cos(5x)$ using double angle formula but I wasn't sure how to go from that to $$\cos^5(x)-\sin^5(x)$$
|
$x=\dfrac{\pi}{4} \implies \cos^5 x-\sin^5 x=0 \neq\cos \dfrac{5\pi}{4}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1063386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
How does Wolfram Alpha come up with the substitution $x = 2\sin u$? Integration/Analysis I have to integrate
$$
\int_0^2 \sqrt{4-x^2} \, dx
$$
I looked at the Wolfram Alpha step by step solution, and first thing it does is it substitutes
$x = 2\sin(u)\text{ and } \,dx = 2\cos(u)\,du$
How does it know to substitute $2\sin(u)$ for $x$?
|
This is a common substitution technique known as Trig substitution, you substitute $x$ with a trigonometric function.
Typically, when something is in the form $\sqrt{a-x^2}$, you substitute $x=\sqrt a\sin u$, since then $a-x^2 = a(1-\sin^2 u) = a\cos^2 u$ and the square root simplifies.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1063477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
}
|
Number of winning terns in a deck of cards and 3 other related questions There is a deck made of $81$ different cards. On each card there are $4$ seeds and each seed can have $3$ different colors, hence generating the $ 3\cdot3\cdot3\cdot3 = 81 $ cards in the deck.
A tern (triple of cards) is a winning one if, for every seed, the corresponding colors on the three cards are either all the same or all different.
-1 How many winning terns are there in the deck?
-2 Show that $4$ cards can contain at most one winning tern.
-3 Draw $4$ cards from the deck. What is the probability that there is a winning tern?
-4 We want to delete $4$ cards from the deck to get rid of as many terns as possible. How should we choose the $4$ cards?
I've no official solution for this problem and I don't know where to start.
Rather than a complete solution I would appreciate some hints to give me the input to start thinking of a solution.
|
For question one, ask a simpler question: look at just the first seed on each card; in how many ways can you color them?
Two possibilities are: Card 1: Red, Card 2: Red, Card 3: Red
and Card 1: Red, Card 2: Blue, Card 3: Yellow.
Now for four seeds, you make the choice of colors four times.
You should check that you haven't chosen the same card three times - how many ways could you have done that?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1063599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Is "being harmonic conjugate" a symmetric relation? The question is:
Prove or disprove the following: If $u,v:\mathbb{R}^2 \to \mathbb{R}$
are functions and $v$ is a harmonic
conjugate of $u$, then $u$ is a harmonic conjugate of $v$ (in other words, show whether or
not the relation of being a harmonic conjugate is symmetric)
I'm pretty sure I'm correct in saying it isn't a symmetric relationship... But I'm wondering if someone can think of a direct counter-example to prove me right. Or is there an algebraic way to prove this that's better?
Any help is appreciated, thanks!
|
Here's the simplest example I can think of. $f(z)=z$ is clearly holomorphic, so $v=\operatorname{Im}(z)$ is a harmonic conjugate of $u=\operatorname{Re}(z)$. But $g(z)=\operatorname{Im}(z)+i\operatorname{Re}(z)$ is not holomorphic, because when you look at the second Cauchy-Riemann equation you get $1 \neq -1$; so $\operatorname{Re}(z)$ is not a harmonic conjugate of $\operatorname{Im}(z)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1063705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Limit of the geometric sequence $\{r^n\}$, with $|r| < 1$, is $0$?
Prove that the $\lim_{n\to \infty} r^n = 0$ for $|r|\lt 1$.
I can't think of a sequence to compare this to that'll work. L'Hopital's rule doesn't apply. I know there's some simple way of doing this, but it just isn't coming to me. :(
|
If $r=0$ it's trivial, so we can skip this case. Now assume $r\neq0$.
$$\lim_{n\to\infty}r^n=0 \Longleftrightarrow(\forall \epsilon \in \mathbb{R}^+)(\exists \delta\in\mathbb{R})(\forall n>\delta)(|r^n| < \epsilon)$$
Now note that both sides of the inequality below are positive, so we can take logarithms; since the base $|r|$ is less than $1$, taking $\log_{|r|}$ reverses the inequality:
$$|r^n|<\epsilon \overset{|r|<1}{\Longleftrightarrow} n > \log_{|r|}\epsilon $$
Because $\log_{|r|}\epsilon$ is a constant, $\delta = \log_{|r|}\epsilon$ satisfies the definition.
$\mathscr{Q.E.D.}$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1063774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 7,
"answer_id": 4
}
|
For a $C^1$ function, the difference $|{g'(c)} - {{g(d)-g(c)} \over {d-c}} |$ is small when $|d-c|$ is small
Suppose $g\in C^1 [a,b]$. Prove that for all $\epsilon > 0$, there is $\delta > 0$ such that $|{g'(c)} - {{g(d)-g(c)} \over {d-c}} |{< \epsilon }$ for all points $c,d \in [a,b]$ with $0 <|d-c|< \delta$
First, I don't know what $C^1 [a,b]$ means.
Some ideas:
By Mean value theorem, $g'(c) = {{g(b)-g(a)} \over {b-a}} $ since $c\in[a,b]$.
To show $|{g'(c)} - {{g(d)-g(c)} \over {d-c}} |{< \epsilon }$ whenever $0 <|d-c|< \delta$ for $ c,d \in [a,b]$. I guess I have to use the definition of limit of continuous function. But, I don't know how to connect all of these ideas.
|
First, $C^1[a,b]$ means that $g$ is differentiable on $[a,b]$ and that $g'$ is continuous there. This problem is designed to test your understanding of uniform continuity and/or compactness; it seems likely that you have just learned (in the context of whatever book or class this is from) the compactness argument that a continuous function on a compact interval is uniformly continuous.
Once you've parsed this, the proof is short: by the mean value theorem, $\frac{g(d)-g(c)}{d-c}=g'(\xi)$ for some $\xi$ strictly between $c$ and $d$, so the quantity to estimate is $|g'(c)-g'(\xi)|$ with $|c-\xi|<\delta$, and bounding that is exactly the definition of uniform continuity of $g'$ on $[a,b]$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1063845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
"Novel" proofs of "old" calculus theorems Every once in a while some mathematicians publish (mostly on the American Mathematical Monthly) a new proof of an old (nowadays considered "basic") result in analysis (calculus).
This article is an example.
I would like to collect a "big list" of such novel proofs of old results. Note, however, that I am only looking for proofs that represent an improvement (in some sense) over standard alternatives which can be found on most textbooks.
|
Andrew Bruckner's survey paper Current trends in differentiation theory includes a lot of examples of new simple proofs of results previously having difficult proofs, but most of the examples are probably past the level you want. Probably more appropriate would be the use of full covers in real analysis.
(ADDED NEXT DAY) When I got home last night I realized that the Bruckner paper I was thinking about isn't the paper I cited above, but rather the paper below. I couldn't find a copy on the internet, but most university libraries (at least in the U.S.) should carry the journal. Nonetheless, the use of full covers in real analysis, which I've already mentioned, is about as close a fit to what you're looking for as I suspect you'll get.
Andrew M. Bruckner, Some new simple proofs of old difficult theorems, Real Analysis Exchange 9 #1 (1983-1984), 63-78.
[Go here for the zbMATH review (Zbl 569.26007) of the paper.]
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1063938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 1,
"answer_id": 0
}
|
If a function is convergent and periodic, then it is the constant function. I have to prove that if a function f is convergent : $$ \lim\limits_{x\to +\infty} f(x) \in \mathbb{R}$$ and f is a periodic function :
$$\exists T \in \mathbb{R}^{*+}, \forall x \in \mathbb{R}, f(x + T) = f(x)$$
Then f is a constant function.
Actually, this is obvious, but I can't figure out how to prove it formally ..
(I previously thought of this: the link between the monotonicity of a function and its limit, but it doesn't work because I assume the periodicity hypothesis, and then I use it to prove a contradiction...)
Thanks !
|
Argue by contradiction. Suppose $f$ is not constant. Then there exist $a,b \in \mathbb{R}$ with $f(a) \neq f(b)$. Now suppose $c = \lim_{x \to \infty} f(x)$. Then for all $\epsilon > 0$ there exists an $N > 0$ such that $|c - f(x)| < \epsilon$ for all $x > N$. Since $f$ is periodic, there exists $x > N$ with $f(x)=f(a)$ (take $x = a + kT$ for $k$ large enough). For this value of $x$, we find that $|c - f(a)| < \epsilon$. Since this holds for all positive $\epsilon$, we must have $c=f(a)$. In the same way, one can show that $c=f(b)$, contradicting the assumption that $f(a) \neq f(b)$.
(Of course, this proof by contradiction can easily be turned into a direct proof: given arbitrary $a,b \in \mathbb{R}$, one shows that $f(a)=f(b)$, so $f$ is constant.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1064012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Probability generating function for urn problem without replacement, not using hypergeometric distribution UPDATE: Thanks to those who replied saying I have to calculate the probabilities explicitly. Could someone clarify if this is the form I should end up with:
$G_X(x) = P(X=0) + P(X=1)\,x + P(X=2)\,x^2 + P(X=3)\,x^3$
Then I find the first and second derivative in order to calculate the expected value and variance?
Thanks!
ORIGINAL POST: We have a probability question which has stumped all of us for a while and we really cannot figure out what to do. The question is:
An urn contains 4 red and 3 green balls. Balls will be drawn from the urn in sequence until the first red ball is drawn (ie. without replacement). Let X denote the number of green balls drawn in this sequence.
(i) Find $G_X$(x), the probability generating function of X.
(ii) Use $G_X$(x) to find E(X), the expected value of X.
(iii) Use $G_X$(x) and E(X) to find $σ^2$(X), the variance of X.
It appears to me from looking in various places online that this would be a hypergeometric-type setting, as the drawing is done without replacement. However, we have not covered that type of distribution in our course and it seems the lecturer wishes for us to use a different method. We have only covered binomial, geometric and Poisson. I have tried to figure out an alternative way of finding the probability generating function and hence the expected value and variance (just using the derivatives), but I have not been successful. Would anyone be able to assist?
Thanks! :)
Helen
|
You don't need to use the formula for a hypergeometric distribution. Simply observe that the most number of balls you can draw before obtaining the first red ball is $3$, so the support of $X$ is $X \in \{0, 1, 2, 3\}$. This is small enough to very easily compute explicitly $\Pr[X = k]$ for $k = 0, 1, 2, 3$.
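Concretely (my own illustrative check, not part of the original answer), the four probabilities can be computed with exact fractions, and the PGF-derived moments follow from $E(X)=G_X'(1)$ and $\sigma^2(X)=G_X''(1)+G_X'(1)-G_X'(1)^2$:

    from fractions import Fraction as F

    # P(X = k): draw k greens, then a red, without replacement (4 red, 3 green)
    p = [F(4, 7),
         F(3, 7) * F(4, 6),
         F(3, 7) * F(2, 6) * F(4, 5),
         F(3, 7) * F(2, 6) * F(1, 5)]
    assert sum(p) == 1
    EX = sum(k * p[k] for k in range(4))              # G'(1)
    G2 = sum(k * (k - 1) * p[k] for k in range(4))    # G''(1)
    print(EX, G2 + EX - EX**2)                        # 3/5 and 16/25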
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1064115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
How to sum $\sum_{n=1}^{\infty} \frac{1}{n^2 + a^2}$? Does anyone know the general strategy for summing a series of the form:
$$\sum_{n=1}^{\infty} \frac{1}{n^2 + a^2},$$
where $a$ is a positive integer?
Any hints or ideas would be great!
|
Use the fractional (Mittag-Leffler) expansion of $\cot z$; from it you can get:
$$\frac{1}{e^t-1} -\frac{1}{t} +\frac{1}{2} =\sum_{n=1}^{\infty}\frac{2t}{t^2 +4n^2\pi^2} $$
Setting $t=2\pi a$ and simplifying then gives the closed form
$$\sum_{n=1}^{\infty}\frac{1}{n^2+a^2}=\frac{\pi a\coth(\pi a)-1}{2a^2}.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1064217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
}
|
Why can a matrix whose kth power is I be diagonalized? Say $A$ is an $n \times n$ matrix over the complex numbers such that $A^k = I$. How do we show $A$ can be diagonalized?
Also, if $\alpha$ is an element of a field of characteristic $p$, how do we show that the matrix $A=\begin{pmatrix}1 & \alpha\\ 0 & 1\end{pmatrix}$ satisfies $A^p = I$ and cannot be diagonalized if $\alpha$ is nonzero.
Please be detailed. I really have no idea how to start on this one.
|
If you have the machinery of Jordan forms and/or minimal polynomials you can settle the questions with those. When working over $\Bbb{C}$ user152558's answer points at a useful direction, and Pedro's answer shows that this won't work over the reals.
Lacking such technology I proffer the following argument.
Remember that an $n\times n$ matrix (over a field $K$) can be diagonalized if and only if its eigenvectors span all of $K^n$ or, equivalently, if all the vectors in $K^n$ are linear combinations of eigenvectors.
Over $K=\Bbb{C}$ you can then do the following. Let $x\in K^n$ be arbitrary. Let $\omega=e^{2\pi i/k}$ be a primitive $k$th root of unity. The vectors
$$
z_j=\frac1k\left(x+\omega^{-j}Ax+\omega^{-2j}A^2x+\cdots+\omega^{-(k-1)j}A^{k-1}x\right),\quad j=0,1,\ldots,k-1,
$$
are then easily seen to be eigenvectors of $A$. Namely (You show this)
$$
Az_j=\omega^jz_j.$$
Furthermore, because for all $j, 0<j<k$, we have $\sum_{t=0}^{k-1}\omega^{-jt}=0$ (You show this, too. Apply the formula for a geometric sum) we see that
$$
x=z_0+z_1+\cdots z_{k-1}.
$$
Therefore all the vectors of $K^n$ are linear combinations of eigenvectors, and we are done.
Note: this argument works whenever the field $K$ has a primitive $k$th root of unity and when it is possible to divide by $k$, i.e. the characteristic of $K$ is not a factor of $k$ (actually the latter property follows from the first, but let's skip that).
Your other question follows from the same characterization of diagonalizability. Using the characteristic polynomial you see that $1$ is the sole eigenvalue. But when $\alpha\neq0$, the corresponding eigenspace is 1-dimensional. Thus not all vectors of $K^2$ are linear combinations of eigenvectors and we are done.
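A quick numerical illustration of the eigenvector construction over $\Bbb{C}$ (my own sketch; the matrix, the vector, and $k$ are arbitrary choices for the demo):

    import numpy as np

    k = 4
    A = np.array([[0., -1.], [1., 0.]])    # rotation by 90 degrees, so A^4 = I
    w = np.exp(2j * np.pi / k)             # primitive k-th root of unity
    x = np.array([1.0, 2.0])
    # z_j = (1/k) * sum_t w^(-j t) A^t x; eigenvector for w^j whenever nonzero
    Z = [sum(w**(-j * t) * np.linalg.matrix_power(A, t) @ x for t in range(k)) / k
         for j in range(k)]
    assert np.allclose(sum(Z), x)                  # x = z_0 + ... + z_{k-1}
    for j, z in enumerate(Z):
        assert np.allclose(A @ z, w**j * z)        # A z_j = w^j z_j
    print("decomposition verified")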
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1064277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
if $m, n \in \mathbb{N}$, $m < n$, then $S_m$ isomorphic to subgroup of $S_n$ How do show that if $m, n \in \mathbb{N}$ and $m < n$, then $S_m$ is isomorphic to a subgroup of $S_n$, without using any "overpowered" results?
|
We first prove a lemma.
Lemma. Let $X$ be a set and let $G = \{f: X \to X\text{ }|\text{ }f\text{ is a bijection}\}$. This is a group under composition. Let $A \subseteq X$ be a subset, and let $H_A = \{f \in S_X\text{ }|\text{ }f(a) = a\text{ for all }a \in A\}$. Then $H_A$ is a subgroup of $S_X$.
Proof. We first show the group under composition part. Let $f, g \in G$. If $f \circ g(a) = f \circ g(b)$, then $g(a) = g(b)$ and so $a = b$, establishing injectivity of $f \circ g$. For $y \in X$, choose $z$ such that $g(z) = f^{-1}(y)$. Then $f \circ g(z) = y$ so $f\circ g$ is surjective. Thus $G$ is closed under composition. If $f \in G$, $f^{-1}$ is also a bijection $X \to X$ so $f^{-1} \in G$. And of course the identity is a bijection. Since function composition is associative, $G$ is a group.
To show that $H_A$ is a subgroup of $S_X$, we use the subgroup criterion. $H_A$ is nonempty since it contains the identity. For $f, g \in H_A$, note that $g^{-1}(a) = a$ for all $a \in A$ so $f \circ g^{-1} = a$ for all $a \in A$. Since composition of bijections gives a bijection (as in the above) it follows $f \circ g^{-1} \in H_A$, so $H_A$ is a subgroup. $\square$
Let $A = \{m+1, m+2, \dots, n\}$. By our lemma, $H_A$ is a subgroup of $S_n$. For $\sigma \in H_A$, let $\varphi(\sigma)$ be the corresponding permutation in $S_m$ $($only the first $m$ numbers are moved amongst themselves, so $\sigma$ is essentially a permutation of $m$ letters$)$. $\varphi$ is clearly a bijection. Since function composition preserves fixed points, $\varphi$ is also a homomorphism $($again, only the first $m$ letters are permuted amongst themselves by $\sigma)$. Thus $S_m \cong H_A \le S_n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1064342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Compact sets closed in Hausdorff spaces without choice? An elementary proof that compact sets are closed in Hausdorff spaces involves making arbitrary choices based on the Hausdorff property. Is there a way to avoid invoking choice?
|
Yes, there is.
Let $\langle X,\tau\rangle$ be a Hausdorff space, and let $K$ be a compact subset of $X$. Suppose that $K$ is not closed. Then we can pick $p\in(\operatorname{cl}K)\setminus K$. (Note that this does not require $\mathsf{AC}$: it’s a single choice.) Let
$$\mathscr{U}=\{U\in\tau:p\notin\operatorname{cl}U\}\;.$$
Claim: $K\subseteq\bigcup\mathscr{U}$.
If not, pick $x\in K\setminus\bigcup\mathscr{U}$. $X$ is Hausdorff, so there are disjoint $U,V\in\tau$ such that $x\in U$ and $p\in V$. But then $U\in\mathscr{U}$, contradicting the choice of $x$. Note that I picked only three things here, $x$, $U$, and $V$; this does not require $\mathsf{AC}$.
$K$ is compact, so there is a finite $\{U_1,\ldots,U_n\}\subseteq\mathscr{U}$ such that $K\subseteq\bigcup_{k=1}^nU_k$. Then
$$p\notin\bigcup_{k=1}^n\operatorname{cl}U_k=\operatorname{cl}\bigcup_{k=1}^nU_k\supseteq\operatorname{cl}K\;,$$
contradicting the choice of $p$. Therefore $K$ is closed.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1064394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Probability that Amy, Bill and Poor pete win money 6.042 MIT I was doing the following problem:
Problem 2. [12 points] Amy, Bill, and Poor Pete play a game:
*
*Each player puts \$2 on the table.
*Each player secretly writes a number between 1 and 4.
*They roll a fair, four-sided die with faces numbered 1, 2, 3, and 4.
*The money on the table is divided among the players that guessed correctly. If no one guessed correctly, then everyone gets their
money back and Poor Pete is paid \$0.25 in “service fees”.
Suppose that, Amy and Bill cheat by picking a pair of distinct
numbers uniformly at random.
How do you calculate the following Probability:
Pete guesses right AND either Amy or Bill guesses right?
I think it should be a simple question but I can't seem to get the answer $\frac{1}{8}$
This is what I have tried:
Let P denote the event that Pete wins, A and B that Amy and Bill will respectively.
So we want $Pr[P=wins \cap (A=wins \cup B=wins)]$ which is equal to:
$Pr[P=wins]Pr[(A=wins \cup B=wins)]$
So I tried calculating each one by writing down the tree corresponding to each Pr and then multiplying the probability.
Pr[P = wins] = Pr[P chooses the same as the number in the real die] = Pr[1 came up]Pr[Pete chooses 1]+ Pr[2 came up]Pr[Pete chooses 2] + Pr[3 came up]Pr[Pete chooses 3] + Pr[4 came up]Pr[Pete chooses 4] = $\frac{4}{16} = \frac{1}{4}$
The other probability $Pr[(A=wins \cup B=wins)]$ I was simply going to compute via inclusion exclusion. i.e.
$$Pr[(A=wins \cup B=wins)] = Pr[A=wins] + Pr[B = wins] - Pr[(A=wins \cap B=wins)]$$
is this approach correct? Can't seem to make it work and give me $\frac{1}{8}$.
I also had to compute:
Pete guesses right AND both Amy and Bill guess wrong
Pete guesses wrong AND either Amy or Bill guesses right
Pete guesses wrong AND both Amy and Bill guess wrong
and was wondering if this general approach was correct.
|
You've understood Pete's behavior well, as well as the independence of Pete's behavior and that of Amy & Bill's. But you haven't explored Amy & Bill's behavior.
The key is to understand what a pair of distinct numbers at random means.
Amy and Bill, as a pair, select one of:
$\begin{array}{cc}
\text{Amy} & \text{Bill} \\
1 & 2 \\
1 & 3 \\
1 & 4 \\
2 & 1 \\
2 & 3 \\
2 & 4 \\
3 & 1 \\
3 & 2 \\
3 & 4 \\
4 & 1 \\
4 & 2 \\
4 & 3
\end{array}$
And they, as a pair, select each with equal probability. There are twelve, so the selection given in each row has probability $1/12$.
Then $P[\text{A wins}] = 1/4$ (sum over the four possibilities as you did for Pete: $P[A = 1] = 3/12 = 1/4$, etc) and $P[\text{B wins}] = 1/4$, but $P[\text{A wins and B wins}] = 0$, because Amy and Bill never write the same number. Inclusion-exclusion then gives $P[\text{A wins or B wins}] = 1/4 + 1/4 - 0 = 1/2$, and since Pete's guess is independent of Amy and Bill's pair, conditioning on the die value yields $P[\text{Pete wins and (A or B wins)}] = \frac14\cdot\frac12 = \frac18$, as expected.
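Enumerating the whole (die, Pete, Amy/Bill-pair) sample space confirms this; here is a small script of my own for the check:

    from itertools import permutations
    from fractions import Fraction as F

    pairs = list(permutations(range(1, 5), 2))   # the 12 ordered distinct pairs
    total = wins = 0
    for die in range(1, 5):
        for pete in range(1, 5):
            for amy, bill in pairs:
                total += 1
                if pete == die and (amy == die or bill == die):
                    wins += 1
    print(F(wins, total))   # 1/8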
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1064475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Diagonalization and find matrix that corresponds to the given condition Diagonalize the matrix
$$
A=
\begin{pmatrix}
1 & 2\\
0 & 3
\end{pmatrix}
$$
and find a matrix $B$ with $B^3=A$.
I derived $A \sim \text{diag}(1,3)$ but I have trouble finding any $B$. I tried to solve it by writing $B= \begin{pmatrix} 1 & x\\ 0 & 3\end{pmatrix}$, but is it okay to solve the problem in this way?
|
The eigenvalues are clearly $1,3$, so $A$ is diagonalizable (distinct eigenvalues).
Hence there exists an invertible $P$ such that
$$A=P\left(\begin{array}{cc}1& 0\\ 0& 3\end{array}\right)P^{-1}.$$
Now we need $B$ such that $B^3=A$. Take
$$B=P\left(\begin{array}{cc}1& 0\\ 0& 3\end{array}\right)^{\frac 1 3}P^{-1}=P\left(\begin{array}{cc}1^{\frac 1 3}& 0\\ 0& 3^{\frac 1 3}\end{array}\right)P^{-1};$$
then $B^3=P\,\mathrm{diag}(1,3)\,P^{-1}=A$, as required.
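As a numerical sanity check of this recipe (my own sketch):

    import numpy as np

    A = np.array([[1., 2.], [0., 3.]])
    vals, P = np.linalg.eig(A)                        # A = P diag(vals) P^{-1}
    B = P @ np.diag(vals ** (1 / 3)) @ np.linalg.inv(P)
    print(np.round(B @ B @ B, 10))                    # recovers A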
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1064546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
$x\in \{\{\{x\}\}\}$ or not? I wonder if we can we say $x\in \{\{\{x\}\}\}$?
In one viewpoint the only element of $\{\{\{x\}\}\}$ is $\{\{x\}\}$. In the other viewpoint $x$ is in $\{\{\{x\}\}\}$, for example all people in Madrid are in Spain.
|
The answer to your question is no. Note that the set $ \{ \{ \{ x \} \} \} $ has exactly one element, namely $ \{ \{ x \} \} $, not $x$. To say $ x \in \{ \{ \{ x \} \} \} $ means that $x$ is an element of $ \{ \{ \{ x \} \} \} $, which would force $ x = \{ \{ x \} \} $; this is false if $x$ is, say, a number. Membership is not transitive, which is where the Madrid/Spain intuition misleads.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1064663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 8,
"answer_id": 6
}
|
Estimation of a sum independent of $n$ Suppose $f$ is differentiable on $[0,1]$, $f(0)=f(1)$, $\int_0^1 f(x)dx=0$, $f'(x)\neq 1$. Furthermore, let $g(x)=f(x)-x$, $n\geq 2$ is an integer.
Show that $$\left|\sum_{i=1}^{n-1}g\left(\frac{i}{n}\right)\right|<\frac12.$$
I do not know how to prove it; I can only prove $$-\frac{n}{2}<\sum_{i=0}^{n-1} g\left(\frac in\right)<-\frac n2+1$$
|
Let $f(x)\equiv 0$, so that $g(x)=-x$; this satisfies all the conditions.
But
$$\sum_{i=1}^{n-1}g\left(\dfrac{i}{n}\right)=-\sum_{i=1}^{n-1}\dfrac{i}{n}=-\dfrac{n-1}{2}\to -\infty,\quad n\to \infty,$$
so the inequality as stated cannot hold. However, the weaker bound you proved is correct; here is an argument.
Since $f(0)=f(1)$, Rolle's theorem gives $f'(c)=0$ for some $c\in(0,1)$.
Since $f'(x)\neq 1$ everywhere and derivatives have the intermediate value property (Darboux), it follows that $f'(x)<1$ for all $x$.
let $$g(x)=f(x)-x\Longrightarrow g'(x)=f'(x)-1<0$$
so
$$\dfrac{1}{n}-\dfrac{1}{2}=\int_{0}^{1}g(x)dx+\dfrac{1}{n}>\dfrac{1}{n}\left(\sum_{k=1}^{n}g\left(\dfrac{k}{n}\right)+1\right)=\dfrac{1}{n}\sum_{k=0}^{n-1}g\left(\dfrac{k}{n}\right)>\int_{0}^{1}g(x)dx=-\dfrac{1}{2}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1064745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Rotate a line by a given angle about a point Given the coefficients of a line $A$ , $B$ and $C$.
$$ Ax + By + C = 0$$
I wish to rotate the line by an angle, say $\theta$, about a point $(x_0, y_0)$ in the clockwise direction. How can I achieve this so that I obtain the new coefficients?
|
Take a point on the line, say $$P=\binom{x}{\frac{-C-Ax}{B}},$$ and multiply its coordinates by the rotation matrix $$R_\theta=\begin{pmatrix}\cos\theta &-\sin \theta \\ \sin\theta & \cos\theta\end{pmatrix}$$ (this is the counterclockwise rotation about the origin; use $-\theta$ for clockwise, and to rotate about $(x_0,y_0)$ translate that point to the origin first, rotate, then translate back). Rotating about the origin, you immediately get $$P^\prime=\binom{x\cos\theta+\frac{C+Ax}{B}\sin\theta}{x\sin\theta-\frac{C+Ax}{B}\cos\theta},$$ which traces out the rotated line (extract $y$ from it and proceed).
EDIT: Writing $P_x, P_y$ for the coordinates of $P^\prime$, notice that $$P_x=x\left(\cos\theta+\frac A B \sin\theta\right)+\frac{C}{B}\sin\theta\Rightarrow x=\frac{P_x-\frac{C}{B}\sin\theta}{\cos\theta+\frac{A}{B}\sin\theta} \\ P_y=x\left(\sin\theta-\frac A B\cos\theta\right)-\frac C B\cos\theta \\ P_y=\frac{(P_x-\frac{C}{B}\sin\theta)(\sin\theta-\frac A B \cos\theta)}{\cos\theta+\frac{A}{B}\sin\theta}-\frac{C}{B}\cos\theta,$$ and from here you can read off $P_y$ in terms of $P_x$, i.e. the equation of the rotated line.
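An equivalent and often tidier route, sketched in my own code below (the function name and helper point are illustrative): the normal vector $(A,B)$ rotates like any other vector, and $C$ is then fixed by requiring the rotated image of one point of the line to satisfy the new equation. Pass $-\theta$ for a clockwise rotation.

    import math

    def rotate_line(A, B, C, theta, x0, y0):
        """Rotate A*x + B*y + C = 0 by theta (counterclockwise) about (x0, y0)."""
        c, s = math.cos(theta), math.sin(theta)
        A2 = A * c - B * s          # the normal (A, B) rotates with the line
        B2 = A * s + B * c
        # rotate one point of the original line about (x0, y0)
        px, py = (0.0, -C / B) if B != 0 else (-C / A, 0.0)
        qx = x0 + (px - x0) * c - (py - y0) * s
        qy = y0 + (px - x0) * s + (py - y0) * c
        C2 = -(A2 * qx + B2 * qy)   # force the rotated point onto the new line
        return A2, B2, C2

    print(rotate_line(1.0, -1.0, 0.0, math.pi / 2, 0.0, 0.0))  # y=x -> y=-x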
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1064832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
If $(f\circ g)(x)=x$, does $(g\circ f)(x)=x$? Given $$(f\circ g)(x)=x$$ (for functions from $\mathbb{R}$ to $\mathbb{R}$ and every $x \in \mathbb{R}$),
does it follow that also $$(g\circ f)(x)=x?$$
I feel like it's not true but I can't find a counterexample :(
I tried numerous ways for several hours but I can't refute it, though I'm almost sure this will only be true if $g$ is onto; I just don't know why :P
|
Let $f(x)=\begin{cases}\tan(x),&\text{if }x \in (-\pi/2,\pi/2)\\0,&\text{otherwise}\end{cases}$
and $g(x)=\tan^{-1}(x)$ with image $(-\pi/2,\pi/2)$.
Then $f(g(x))=x$ but $g(f(2\pi))=0\ne 2\pi$.
The claim is indeed true if both $f$ and $g$ are continuous. $g(\mathbb{R})$ is an interval, and $f|_{g(\mathbb{R})}$ has inverse* $g$ and hence is injective, so it must be strictly monotone. If $g(\mathbb{R})$ had a finite endpoint, e.g. $g(\mathbb{R})=(-\infty,t)$, then $\lim_{x\to t^-} f(x)=\pm\infty$, since otherwise $f$ could not have image $\mathbb{R}$; this contradicts the continuity of $f$ at $t$. Hence $g$ is surjective, and so $f$ and $g$ must be inverses of each other.
*To show that $\forall x\in g(\mathbb{R}), g(f(x))=x$:
Let $y\in \mathbb{R}$ s.t. $g(y)=x$.
$g(f(x))=g(f(g(y)))=g(y)=x$.
I think you can come up with a counterexample if we have only $f$ continuous.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1064935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 1
}
|
Dimension of a vector space below I have to prove that the dimension of the vector space of real numbers over $\mathbb{Q}$ (the rational numbers) is infinite. How can I prove it? I have no idea.
|
Hint: Show that all $\log(p)$ for $p$ prime are linearly independent over $\mathbb{Q}$.
Apply the exponential function to the equation $\lambda_1\log(p_1)+\cdots +\lambda_n\log(p_n)=0$, and conclude that all $\lambda_i$ are zero. It follows that $\dim_{\mathbb{Q}}\mathbb{R}\ge n$ for all $n\ge 1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1065041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
bounds of Riemann $\zeta(s)$ function on the critical line? I vaguely remembered that
$$0\leq|\zeta(1/2+i t)|\leq C t^{\epsilon},\qquad t\gg 1,\ \epsilon>0.$$
Is this bound correct?
Thanks-
mike
|
In: http://www.math.tifr.res.in/~publ/ln/tifr01.pdf pp.97-99 , it is proven that:
$$|\zeta(s)| < A(d)\,t^{1-d}, \text{ for } \sigma=\mathrm{Re}(s) \geq d,\ 0 < d < 1;\ t=\mathrm{Im}(s) \geq 1 ,$$
with $A(d) = (1/(1-e) + 1 + 3/e)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1065157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Probability of selecting a red ball first An urn contains 3 red and 7 black balls. Players A
and B withdraw balls from the urn consecutively
until a red ball is selected. Find the probability that
A selects the red ball. (A draws the first ball, then
B, and so on. There is no replacement of the balls
drawn.)
How do I calculate this probability?
I tried using the total probability rule without success.
I used the $P(A)=\frac{3}{10}+P(A_2\mid B_1)$ and so on, where $B_i$=Player B doesn't get a red ball.
The answer should be $0.0888$
|
Here's what I was thinking.
$$\color{RED}R+BB\color{RED}R+BBBB\color{RED}R+BBBBBB\color{RED}R$$
$$P(A)=\frac{3}{10}+\frac{7}{10}\frac{6}{9}\frac{3}{8}+\frac{7}{10}\frac{6}{9}\frac{5}{8}\frac{4}{7}\frac{3}{6}+\frac{7}{10}\frac{6}{9}\frac{5}{8}\frac{4}{7}\frac{3}{6}\frac{2}{5}\frac{3}{4}=\frac{7}{12}$$
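The alternating structure also yields a short recursion, which makes for an easy exact check. A small script of my own (the function name is just for illustration):

    from fractions import Fraction as F

    def p_first_red(red, black, a_turn=True):
        """P(player A draws the first red), without replacement."""
        if black == 0:                 # only red left: current player wins
            return F(1) if a_turn else F(0)
        total = red + black
        hit = F(red, total) if a_turn else F(0)
        return hit + F(black, total) * p_first_red(red, black - 1, not a_turn)

    print(p_first_red(3, 7))   # 7/12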
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1065249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Points in general position I'm really confused by the definition of general position at wikipedia.
I understand that a set of points/vectors in $\mathbb R^d$ is in general position iff no $(d+1)$ of the points lie in a common hyperplane (an affine subspace of dimension $d-1$).
However I found that this definition is equivalent to affine independence (according to wiki). Does general linear position mean something else?
Could you please explain that? It is extremely confusing since a lot of people omit "linear" and so on.
Anyway could you also please give some hints on the way of proving general position? The hyperplane definition is hard to use.
|
$\def\R{\mathbb{R}}$General position is not equivalent to affine independence. For instance, in $\R^2$, you can have arbitrarily many points in general position, so long as no three of them are collinear. But you can have at most 3 affinely independent points.
Any random set of points in $\R^d$ is almost certain (with probability 1) to be in general position. The points have to be carefully arranged in order not to be in general position—for instance, by having three on one line, or four in one plane in $\mathbb{R}^3$ and above.
Now, if you have $d+1$ points in general position in $\R^d$, then they affinely span $\R^d$, so they are affinely independent; and conversely, any set of affinely independent points must be in general position. But you can have sets in general position with far more than $d+1$ points, and they won't be affinely independent.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1065337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Prove that $G$ is abelian iff $\varphi(g) = g^2$ is a homomorphism I'm working on the following problem:
Let $G$ be a group. Prove that $G$ is abelian if and only if
$\varphi(g) = g^2$ is a homomorphism.
My solution: First assume that $G$ is an abelian group and let $g, h \in G$. Observe that $\varphi(gh) = (gh)^2 = (gh)(gh) = g^2h^2 = \varphi(g)\varphi(h)$. Thus, $\varphi$ is a homomorphism.
I'm having trouble completing the proof in the reverse direction. Assume that $\varphi$ is a homomorphism. We then know that $\varphi(gh) = \varphi(g)\varphi(h)$ and $\varphi(hg) = \varphi(h)\varphi(g)$. However, I don't see a way to use this to show that $gh = hg$.
Could anyone lend a helping hand?
|
Assume $\varphi(g) = g^2$ is a homomorphism and $G$ is non-abelian; we will show a contradiction.
Since $G$ is non-abelian, we can choose $g, h \in G : gh \neq hg$.
Then since $\varphi$ is a homomorphism,
$$ \varphi(gh) = \varphi(g) \varphi(h) = gghh $$
So $$ ghgh = gghh $$
Left multiply by $g^{-1}$ and right multiply by $h^{-1}$:
$$ gh = hg $$
which contradicts our choice of $(g, h)$. Thus it is impossible to choose $g$ and $h$ that don't commute, thus $G$ must be Abelian.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1065396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
}
|
Prove that $T,S$ are simultaneously diagonalizable iff $TS=ST$. Definition: We say that $S,T$ are simultaneously diagonalizable if there is a basis $B$ consisting of eigenvectors of both $T$ and $S$
Show that $S,T$ are simultaneously diagonalizable iff $ST=TS$.
I tried both directions, but couldn't get much further.
I'd be glad for help.
Thanks.
|
hint: If $S = P^{-1} D_S P$ and $T = P^{-1} D_T P$ with $D_S, D_T$ diagonal,
then what are the values of $ST$ and $TS$? (Diagonal matrices commute, which settles one direction; for the converse, consider how $S$ acts on each eigenspace of $T$.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1065505",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Equivalence of geometric and algebraic definitions of conic sections I have not been able to find a proof that the following definitions are equivalent anywhere, thought maybe someone could give me an idea:
*
*A parabola is defined geometrically as the intersection of a cone and a plane passing under the vertex of a cone that does not form a closed loop and is defined algebraically as the locus of points equidistant from a focus and a directrix.
*An ellipse is defined geometrically as the intersection of a cone and a plane that passes under the vertex and forms a closed loop and is defined algebraically as the locus of points the sum of whose distances from two foci is a constant.
*A hyperbola is defined geometrically as the intersection of a double cone and a plane that does not pass under the vertex and is defined algebraically as the locus of points that have a constant difference between the distances to two foci.
Picture of the geometric definitions:
|
All of these can be proved by using Dandelin spheres. And Dandelin spheres can also be used to prove that the intersection between a plane and a cylinder is an ellipse.
Both spheres in this picture touch but do not cross the cone, and both touch but do not cross the cutting plane. The points at which the spheres touch the plane are claimed to be the two foci. The distance from $P_1$ to $P$ must be equal to that from $F_1$ to $P$ because both of the lines intersecting at $P$ are tangent to the same sphere at $P_1$ and $F_1$ respectively. Similarly the distance from $P_2$ to $P$ equals that from $F_2$ to $P$. It remains only to see that the distance from $P_1$ to $P_2$ remains constant as $P$ moves along the curve.
For the hyperbola, the two spheres are in opposite nappes of the cone. For the parabola, there is only one Dandelin sphere, and the directrix is the intersection of the cutting plane with the plane in which the sphere intersects the cone.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1065655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Find the number of pathways from A to B if you can only travel to the right and down. I would like to solve the following using Pascal's Triangle. Since there are shapes within shapes, I am unsure as to where I should place the values.
EDIT 1:
Where do I go from here? How do I get the value for the next vertex?
EDIT 2:
Okay, how does this look? From this, I would have 22 pathways.
|
This problem can be solved by labeling every vertex with the number of ways you can get to $B$ by traveling only right and down. Then the label at vertex $A$ is the desired number.
As to how you can do this labeling, work backwards from $B$. Starting at the vertex directly above $B$, there is only one way to get to $B$ (namely going down). So label this vertex with $1$. Similarly the vertex to the left of $B$ should have label $1$.
Now for the vertex two spaces above $B$, you can only move down. And if you move one space down, there is only one way to get to $B$; thus the vertex two spaces above $B$ should have label $1$.
For the vertex to the left and upwards of $B$, there are two possible paths: one to the right, and one down. If you take the path to the right, you end up at a label with one path to $B$, so there is one path going to the right. If you take the path down, there is a label with only one path, so only one path to down. So the label at this vertex is the sum of the labels to the right and downward of the vertex.
It's easy to reason that this last rule holds in general for all vertices. This should give you an inductive procedure to label backwards, all the way to $A$.
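To illustrate the backward-labeling procedure in code (my own sketch; it uses a plain rectangular grid, not the exact figure here, which has interior shapes that change the counts):

    # backward labeling: each vertex gets (#paths to B) = label below + label right
    m, n = 4, 4                          # vertices per side of a plain grid
    label = [[0] * n for _ in range(m)]
    label[m - 1][n - 1] = 1              # B sits at the bottom-right corner
    for i in range(m - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            if (i, j) == (m - 1, n - 1):
                continue
            down = label[i + 1][j] if i + 1 < m else 0
            right = label[i][j + 1] if j + 1 < n else 0
            label[i][j] = down + right
    print(label[0][0])                   # C(6, 3) = 20 for this 4x4 grid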
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1065811",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Probability: Random Sample Problem I need some help on the following problem:
Let $X_1$ and $X_2$ be random sample from the pdf
\begin{equation}
f(x) = \begin{cases}
4x^3,&0<x<1\\
0, & \text{otherwise}
\end{cases}
\end{equation}
Obtain $P(X_1X_2\geq 1/4)$.
So here is what I did:
$P(X_1X_2\geq 1/4)=1-P(X_1X_2< 1/4)$
Next, I need to obtain the distribution of $X_1X_2$ in order to evaluate the above probability, but how? I really appreciate if you could give me a solution on this.
|
By independence, the joint density of $(X_1,X_2)$ is $f(x,y)=16x^3y^3$ on $0<x,y<1$ (where I've used $x$ for $X_1$ and $y$ for $X_2$). You don't need the distribution of the product itself: just integrate this joint density over the region where $xy<1/4$.
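Carrying that out (a sketch I'm adding for concreteness; split the region at $x=1/4$, since for $x\le 1/4$ every $y\in(0,1)$ satisfies $xy<1/4$):
$$P\left(X_1X_2<\tfrac14\right)=\int_0^{1/4}\!\!\int_0^1 16x^3y^3\,dy\,dx+\int_{1/4}^{1}\!\!\int_0^{1/(4x)} 16x^3y^3\,dy\,dx=\frac{1}{256}+\frac{\ln 4}{64},$$
so $P\left(X_1X_2\ge\tfrac14\right)=1-\frac{1}{256}-\frac{\ln 4}{64}\approx 0.9744$.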
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1065903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Evaluate $\int_0^4 \frac{\ln x}{\sqrt{4x-x^2}} \,\mathrm dx$
Evaluate
$$\displaystyle\int_0^4 \frac{\ln x}{\sqrt{4x-x^2}} \,\mathrm dx$$
How do I evaluate this integral? I know that the result is $0$, but I don't know how to obtain this. Wolfram|Alpha yields a non-elementary antiderivative for the indefinite integral, so I don't think I can directly integrate and then plug in the upper/lower limits.
|
First let $t = x-2$ this way $4x-x^2 = 4 - (x-2)^2 = 4-t^2$. Substitute,
$$ \int_{-2}^2 \frac{\log(t+2)}{\sqrt{4-t^2}} ~ dt $$
Now let, $\theta = \sin^{-1}\tfrac{t}{2}$ so that $2\sin \theta = t$ and hence, after substitute,
$$ \int_{-\pi/2}^{\pi/2} \frac{\log [2(1+\sin \theta)]}{2\cos \theta} 2\cos \theta ~ d\theta = \pi \log 2 + \int_{-\pi/2}^{\pi/2} \log(1+\sin \theta)~d\theta $$
To solve this integral, replace $\theta$ by $-\theta$,
$$ I = \int_{-\pi/2}^{\pi/2} \log(1+\sin \theta) ~d\theta= \int_{-\pi/2}^{\pi/2} \log(1-\sin \theta)~d\theta$$
Now,
$$ I + I = \int_{-\pi/2}^{\pi/2} \log(1-\sin^2 \theta) ~ d\theta = 4\int_{0}^{\pi/2} \log (\cos \theta) ~ d\theta$$
The last integral is a well-known integral that computes to $-\frac{\pi}{2}\log 2$.
Your final answer is $\pi \log 2 + I = \pi \log 2 - \pi\log 2 = 0$.
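A numerical sanity check (my own snippet; mpmath's default tanh-sinh quadrature copes with the integrable endpoint singularities at $0$ and $4$):

    from mpmath import mp, quad, log, sqrt

    mp.dps = 30
    print(quad(lambda x: log(x) / sqrt(4 * x - x**2), [0, 4]))  # ~ 0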
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1066006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 4,
"answer_id": 2
}
|
Show the intersection of a nonidentity normal subgroup and the center of P is not trivial P is a $p$-group and M is a nontrivial normal subgroup of P. Show that the intersection of M and the center of P is nontrivial.
By the class equation, I proved that $Z(P)$ is not $1$. Then, how do I prove that the intersection of M and the center of P is not $1$?
Thank you very much for your time...
|
The Class Equation for normal subgroups reads (for details, see here):
$$|M|=|M \cap Z(P)|+\sum_{m \in \{Orbits \space rep's\}}\frac{|P|}{|C_P(m)|} \tag 1$$
where:
*
*$C_P(m)$ is the centralizer of $m$ in $P$;
*"$Orbits$" (capital "O") are the conjugacy orbits in $M$ of size greater than $1$.
Now, by hypothesis, $|M|$ is some power of $p$; moreover, $m \in \{Orbits \space rep's\} \Rightarrow C_P(m)\lneq P \Rightarrow$ the $|P|/|C_P(m)|$ terms in the sum in $(1)$ are also (nontrivial) powers of $p$; but then $|M \cap Z(P)|$ must be divisible by $p$, whence $|M \cap Z(P)|\ne 1$ and finally $M \cap Z(P) \ne \{e\}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1066120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Hypothetical contradiction to Bolzano-Weierstrass we've learned about the Bolzano-Weierstrass theorem that states that if a sequence is bounded, then it has a subsequence that converges to a finite limit.
Let's define $a_n$ as the digits of $\pi$, i.e. $a_1 = 3$, $a_2 = 1$, $a_3 = 4$, and so on indefinitely. Certainly this sequence is bounded (every term lies between $0$ and $9$), but I can't think of any subsequence that will converge to anything.
Can you help solve my confusion?
|
Sure: take the subsequence of every occurrence of $1$. If there aren't infinitely many $1$s, then $2$s; if not $2$s, then $3$s, etc. As $\pi$ is irrational, its decimal expansion does not terminate, so it contains infinitely many nonzero digits, and by pigeonhole at least one nonzero digit is repeated infinitely many times; the corresponding constant subsequence converges to that digit.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1066220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|