Finding the local extreme values of $f(x) = -x^2 + 2x + 9$ over $[-2,\infty)$. I'm tutoring a student, and we were trying to solve the following question:
Find the local extreme values of $f(x) = -x^2 + 2x + 9$ over $[-2,\infty)$.
According to the textbook, the local extreme values are essentially the peaks and the valleys in the graph of the function $f$, so basically where $f'(x) = 0$. This is relatively easy to compute: $$f'(x) = -2x + 2,$$ whose only critical point is $x = 1$. Likewise, $f''(x) = -2 < 0$, which means $f$ is concave down everywhere, and thus a maximum occurs at $x = 1$. The maximum is $f(1) = -1 + 2 + 9 = 10$.
Of course, the endpoint $x = -2$ yields $$f(-2) = -(-2)^2 +2(-2) + 9 = 1,$$ but since the graph is concave down everywhere, $\displaystyle \lim_{x\to\infty}f = -\infty$ implies there really is no minimum per se... right?
The online computer program tells us that $(-2,1)$ is a local minimum, and $(1,10)$ is a local maximum. But in accordance with the definition from the textbook, why is $(-2,1)$ where a local minimum of the graph occurs? It's neither a peak nor a valley in the graph. What exactly does local mean when the interval is infinite? It doesn't quite make logical sense, unless the definition is not as rigorous as it ought to be.
| Sometimes just plotting the function cuts through needless distractions:
Clearly at the point $x=-2$, the function is lower than any other values in the specified domain, and hence is a local minimum.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2982073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
The limit $\lim_{x \to 0-} \frac{e^{-x^2}}{\sqrt{\pi}} \int_0^\infty e^{-t^2/4} \frac{e^{2x} \cos t-1}{e^{4x}-2e^{2x} \cos t+1 } dt$ A while back I derived the following expression valid for $x>0$:
$$\sum_{n=1}^\infty e^{-(n+x)^2}= \frac{e^{-x^2}}{\sqrt{\pi}} \int_0^\infty e^{-t^2/4} \frac{e^{2x} \cos t-1}{e^{4x}-2e^{2x} \cos t+1 } dt$$
While the integral doesn't converge for $x=0$, it has a right limit:
$$\lim_{x \to 0+} \frac{e^{-x^2}}{\sqrt{\pi}} \int_0^\infty e^{-t^2/4} \frac{e^{2x} \cos t-1}{e^{4x}-2e^{2x} \cos t+1 } dt=\sum_{n=1}^\infty e^{-n^2}=\frac{1}{2} \left(\vartheta _3\left(0,\frac{1}{e}\right)-1\right)$$
But, despite the fact that the integral converges for $x<0$, it converges to a different limit from the left side, as can be seen from the numerical plot by Mathematica:
Surprisingly enough, by numerical integration with Mathematica, we seem to have:
$$\lim_{x \to 0-} \frac{e^{-x^2}}{\sqrt{\pi}} \int_0^\infty e^{-t^2/4} \frac{e^{2x} \cos t-1}{e^{4x}-2e^{2x} \cos t+1 } dt \approx -\frac{1}{2} \left(\vartheta _3\left(0,\frac{1}{e}\right)+1\right)$$
In other words, the limits are related as $L^- = -1-L^+$. Is this correct? And why?
Here's the derivation of the first equality https://math.stackexchange.com/a/2751575/269624.
| We have
\begin{align}
L^-
&=\lim_{x \to 0^-} \frac{e^{-x^2}}{\sqrt{\pi}} \int_0^\infty e^{-t^2/4} \frac{e^{2x} \cos t-1}{e^{4x}-2e^{2x} \cos t+1 } dt\\
&=\lim_{x \to 0^+} \frac{e^{-x^2}}{\sqrt{\pi}} \int_0^\infty e^{-t^2/4} \frac{e^{-2x} \cos t-1}{e^{-4x}-2e^{-2x} \cos t+1 } dt\\
&=\lim_{x \to 0^+} \frac{e^{-x^2}}{\sqrt{\pi}} \int_0^\infty e^{-t^2/4} \frac{e^{2x} \cos t-e^{4x}}{1-2e^{2x} \cos t+e^{4x} } dt
\end{align}
hence
\begin{align}
L^++L^-
&=\lim_{x \to 0^+} \frac{e^{-x^2}}{\sqrt{\pi}} \int_0^\infty e^{-t^2/4} \frac{-1+2e^{2x} \cos t-e^{4x}}{1-2e^{2x} \cos t+e^{4x} }dt\\
&=-\lim_{x \to 0^+} \frac{e^{-x^2}}{\sqrt{\pi}} \int_0^\infty e^{-t^2/4} dt\\
&=-1
\end{align}
by the well-known integral $\int_0^\infty e^{-t^2/4}\,dt=\sqrt\pi$.
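The algebra above actually proves the stronger pointwise relation $I(x)+I(-x)=-e^{-x^2}$ for every $x>0$, where $I(x)$ denotes the full expression: after the reflection $x\to-x$, the two integrands sum to $-e^{-t^2/4}$ term by term. A quick numerical sketch with SciPy (assuming `scipy` is available; the name `I` is just for illustration):

```python
import numpy as np
from scipy.integrate import quad

def I(x):
    """e^{-x^2}/sqrt(pi) times the integral from the question."""
    integrand = lambda t: (np.exp(-t**2 / 4)
                           * (np.exp(2*x) * np.cos(t) - 1)
                           / (np.exp(4*x) - 2*np.exp(2*x)*np.cos(t) + 1))
    val, _ = quad(integrand, 0, np.inf)
    return np.exp(-x**2) / np.sqrt(np.pi) * val

# the two integrands sum to -e^{-t^2/4} after the reflection x -> -x, so
# I(x) + I(-x) = -e^{-x^2} exactly, at every x > 0 (not only in the limit)
for x in (0.05, 0.1, 0.5):
    assert abs(I(x) + I(-x) + np.exp(-x**2)) < 1e-6
```

Letting $x\to0^+$ recovers $L^++L^-=-1$.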
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2982182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
Why is it sometimes possible to solve a single equation in multiple variables? Assume you have an equation:
$\ 5x^2+4x+6=ax^2+bx+c $
Now theoretically, this is one equation in four variables, and it should not be solvable, but it is very apparent that $a,b,c$ equal $5,4,6$, in that order. Something similar happens with complex numbers:
$\ 5 + 4i = a + bi $
Here the solution for $a,b$ is $5,4$; again it is very apparent.
My theory is that the terms attached to $a$, $b$, and $c$ behave like incompatible number types, so you can construct a system of equations from one single equation. The complex-number equation is a good example: you can easily separate the complex and the non-complex parts, as they virtually cannot influence each other.
But in the first equation, it is all on the real line; there is no complex part, and the $x^2$ term can influence the $x$ term.
You can even do this:
$\ c = 5x^2 + 4x + 6 - ax^2 - bx $
Which would suggest c is dependent on the value of both a and b, and you get a similar result defining a or b, potentially pointing to an infinite number of solutions.
So why do these have only one solution? Assuming my theory is somewhat correct, what do mathematicians call these "incompatible numbers" properly?
| Your question touches on a very important concept in linear algebra, that of linear independence. A set of vectors $v_1,v_2,\dots,v_n$ is linearly independent if the only solution to $a_1v_1+a_2v_2+\dots+a_nv_n=0$ is $a_1=a_2=\dots=a_n=0$. For example, $x^2$, $x$, and $1$ are linearly independent because if $f(x)=ax^2+bx+c$ and $f(x)=0$ for every $x$ (i.e. $f$ is the $0$ function), then $a=b=c=0$. Now, a very important fact in linear algebra is that if we have some vector $v=a_1v_1+a_2v_2+\dots+a_nv_n$ with the $v_i$ linearly independent, there is only one possible choice of $a_1,a_2,\dots,a_n$. That means that if $ax^2+bx+c=dx^2+ex+f$, then $a=d$, $b=e$, and $c=f$. In addition, note that $1$ and $i$ are linearly independent over the reals because the only real numbers $a,b$ with $a(1)+b(i)=0$ are $a=b=0$, so this property holds again. If you are interested in this topic, I recommend you research linear independence and linear algebra.
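In concrete terms, linear independence of $x^2,x,1$ is what makes coefficient matching a well-posed problem: sampling both sides at three distinct points gives an invertible Vandermonde system with the single solution $(a,b,c)=(5,4,6)$. A small NumPy illustration (the sample points are an arbitrary choice):

```python
import numpy as np

# sample p(x) = 5x^2 + 4x + 6 at three distinct points
xs = np.array([0.0, 1.0, 2.0])
values = 5*xs**2 + 4*xs + 6

# columns of the Vandermonde matrix are x^2, x, 1 -- linearly independent,
# so the matrix is invertible and the system has exactly one solution
V = np.vander(xs, 3)
a, b, c = np.linalg.solve(V, values)
```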
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2982341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Spider Problem Counting Socks and Shoes Problem
A spider has one sock and one shoe for each of its eight legs. In how many different orders can the spider put on its socks and shoes, assuming that, on each leg, the sock must be put on before the shoe?
(A) $8!$ (B) $2^8$ (C) $(8!)^2$ (D) $\frac{16!}{2^8}$ (E) $16!$
I am having trouble visualizing how the answer was gotten from this solution.
Solution 2
Each dressing sequence can be uniquely described by a sequence containing two $1$s, two $2$s, ..., and two $8$s -- the first occurrence of number $x$ means that the spider puts the sock onto leg $x$, the second occurrence of $x$ means he puts the shoe onto leg $x$. If the numbers were all unique, the answer would be $16!$. However, since 8 terms appear twice, the answer is $\frac{16!}{(2!)^8} = \boxed{\frac {16!}{2^8}}$
| It might be better to put subscripts on the numbers: $L_1$ means the action of putting the sock, and $L_2$ the shoe, on leg $L$. Then we have 16 distinct symbols $1_1,1_2,2_1,2_2,3_1,3_2,\dots,8_1,8_2$ and there are $16!$ ways to permute them without restrictions.
With the sock-before-shoe restriction, for each pair of $L_1$ and $L_2$, $L_1$ must come before $L_2$. For each sequence where $L_1$ comes before $L_2$ (allowed) there is a corresponding sequence where $L_1$ comes after $L_2$ (disallowed), so we divide by 2 for each leg, yielding the correct answer of $\frac{16!}{2^8}$.
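For small numbers of legs the count $\frac{(2n)!}{2^n}$ can be verified by brute force; a sketch (the symbol encoding is ad hoc):

```python
from itertools import permutations
from math import factorial

def count_dressings(legs):
    """Count orderings of 2*legs actions with each sock before its shoe."""
    actions = [(leg, step) for leg in range(legs) for step in ("sock", "shoe")]
    valid = 0
    for order in permutations(actions):
        if all(order.index((leg, "sock")) < order.index((leg, "shoe"))
               for leg in range(legs)):
            valid += 1
    return valid

# matches (2n)!/2^n for n = 1, 2, 3 legs
for n in (1, 2, 3):
    assert count_dressings(n) == factorial(2 * n) // 2**n
```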
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2982436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Proof by induction of summation inequality: $1 + 1/2+ 1/3+ 1/4+1/5+\cdots+ 1/2^n \leq n + 1$ I have been working on this problem for literally hours, and I can't come up with anything. Please help. I feel like I am about to go insane.
For all $n \in \mathbb{N}$, we have $$1 + \frac{1}{2}+ \frac{1}{3}+ \frac{1}{4}+\frac{1}{5} +\cdots+ \frac{1}{2^n} \leq n + 1$$
I know that I am supposed to use a proof by induction. Here is progress so far:
1) Let $P(n)$ be the statement $$\sum_{i=1}^{2^n} \frac{1}{i} \leq n + 1$$
2) Base case: $n = 1$
$$\sum_{i=1}^{2^n} \frac{1}{i} = \frac{1}{1}+ \frac{1}{2} = \frac{3}{2}, \frac{3}{2} ≤ 2 $$
So P(1) is true.
3) Inductive hypothesis:
Suppose that P(k) is true for an arbitrary integer k $\geq$ 1
4) Inductive step:
We want to prove that P(k + 1) is true or, $$\sum_{i=1}^{2^{k+1}} \frac{1}{i} ≤ k + 2$$
By inductive hypothesis,
$$\sum_{i=1}^{2^{k+1}} \frac{1}{i} = \sum_{i=1}^{2^k} \frac{1}{i} + \sum_{i=2^k+1}^{2^{k+1}}\frac{1}{i} ≤ k + 1 + \sum_{i=2^k+1}^{2^{k+1}}\frac{1}{i}$$
I know that I'm supposed to split the expression into two summations, but now I am completely stuck and don't know what to do from here. I got one hint that the fact $\frac{a}{b + c} < \frac{a}{b}$ is relevant, but I don't know how to get there from here.
| For $n \ge 2$, we have $$\frac{1}{n} \le \int_{n-1}^{n} \frac{1}{x}\,dx$$
So, $$\sum_{i=1}^{2^{k}}{\frac{1}{i}} \leq 1+\int_{1}^{2^k} \frac{1}{x}\,dx=1+k \log2 \le 1+k$$ Another proof, continuing from your progress: $$\sum_{i=2^k+1}^{2^{k+1}}\frac{1}{i}=\frac{1}{1+2^k}+\frac{1}{2+2^k}+\dots+\frac{1}{2^k+2^k}\le\frac{2^k}{1+2^k}\le1$$
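Both the integral bound and the inequality being proved are easy to sanity-check numerically; a quick sketch:

```python
import math

for n in range(1, 15):
    H = sum(1.0 / i for i in range(1, 2**n + 1))
    # the integral bound 1 + k log 2, and the weaker bound k + 1 being proved
    assert H <= 1 + n * math.log(2) + 1e-12
    assert H <= n + 1
```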
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2982549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Wedge of aspherical spaces I'd need a reference for the following fact: the one-point union of (nice) aspherical spaces is aspherical. I.e., from $\pi_kX=0$ and $\pi_kY=0$ follows $\pi_k(X\vee Y)=0$.
EDIT: Let's assume that the spaces are nice, e.g. manifolds.
For the wedge of circles this is true because the universal covering space is contractible.
| To answer my own question (in the setting of CW-complexes and thus also for smooth manifolds):
According to Ganea (link to Ganea's paper), the homotopy fiber of $X\vee Y\to X\times Y$ is homotopy-equivalent to $\Omega X*\Omega Y$ if $X,Y$ are CW-complexes.
If $X,Y$ are aspherical, then their loop spaces are homotopy-equivalent to discrete spaces, hence the join of the loop spaces is homotopy-equivalent to a wedge of circles. In particular, $\Omega X*\Omega Y$ is aspherical. Since $X\times Y$ is aspherical, also $X\vee Y$ must be aspherical.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2982742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Expected time before Farmer Brown is abducted? Farmer Brown is standing in the middle of his perfectly circular field feeling
very content. It is midnight and there is no moon and unknown to the farmer,
Martian zoologists are landing randomly at points on the circumference of his
field. They land at one minute intervals, starting at midnight. As soon as
there are martians at points A,B,C such that triangle ABC contains the center
of the field, Farmer Brown will be teleported to the waiting space-ship and
transported off to spend the rest of his life as an exhibit in a Martian zoo.
What is the expected time until he is abducted?
My approach:
Let's say the farmer gets abducted after $k$ martians land. This implies that the first $k-1$ martians all lie in the same semicircle, and that the $k$th martian lands so that it, together with the two extreme martians among the initial $k-1$, forms a triangle containing the center of the circle.
The probability that the first $k-1$ martians don't contain the center of the circle is $\frac{k-1}{2^{k-2}}$ (see here). Also, the probability that the $k$th martian makes the center lie in a triangle formed by the $k$th martian and the two ends of the initial $k-1$ martians should be $1/4$, since this is equivalent to the situation of only 3 martians.
So the expected value of k, i.e. no. of martians after which the farmer gets abducted, should be
The answer I get from this is $3.5$, whereas the actual answer is $5$. In the solution at that link, I don't understand where the right-hand side of the probability equation comes from. Where is my approach wrong?
| So you know that the probability that $k-1$ martians do not hold the farmer is
$$G(k-1)=\frac{k-1}{2^{k-2}}$$
Then the probability that $k$ martians do not hold the farmer is
$$G(k)=\frac{k}{2^{k-1}}$$
and the probability that the $k$-th martian catches the farmer is
$$P(k)=G(k-1)-G(k)=\frac{k-2}{2^{k-1}}$$
Finally
$$E[K]=\sum_{k=3}^{\infty}k\frac{k-2}{2^{k-1}}=5$$
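The value $E[K]=5$ can be confirmed by simulation: the farmer is abducted at the first moment the landing points no longer all fit in a semicircle, i.e. when the largest circular gap between the sorted angles drops below $\pi$. A Monte Carlo sketch (sample size and seed are arbitrary):

```python
import math
import random

def abduction_time(rng):
    """Number of martians landed when the center first lies in their convex hull."""
    angles = []
    while True:
        angles.append(rng.uniform(0.0, 2.0 * math.pi))
        k = len(angles)
        if k >= 3:
            s = sorted(angles)
            gaps = [s[i + 1] - s[i] for i in range(k - 1)]
            gaps.append(2.0 * math.pi - s[-1] + s[0])  # wrap-around gap
            if max(gaps) < math.pi:  # no semicircle contains all the points
                return k

rng = random.Random(12345)
trials = 20000
estimate = sum(abduction_time(rng) for _ in range(trials)) / trials
```

With 20000 trials the estimate lands within a few hundredths of $5$.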
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2982878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
geometric hash code or is there a unique affine transformation mapping two 2D points onto (0,0) and (1,1)? How can I compute it? I have a 2D transformation $T$ composed by a scale $\lambda$, a rotation by angle $\theta$ and a translation vector $\begin{bmatrix}t_x\\t_y\end{bmatrix}$:
$$
T=\begin{bmatrix}
\lambda\cos(\theta) & -\lambda\sin(\theta) & t_x \\
\lambda\sin(\theta) & \lambda\cos(\theta) & t_y \\
0 & 0 & 1 \\
\end{bmatrix}
$$
$T$ operates on 2D point $P$ expressed with homogeneous coordinates:
$$
P=\begin{bmatrix}
x\\
y\\
1
\end{bmatrix}
$$
$T$ maps 2D $(x,y)$ point to 2D $(u,v)$ point according to:
$$
\begin{bmatrix}u\\v\\1\end{bmatrix}=TP
$$
Now I have
$$
A=\begin{bmatrix}x_A\\y_A\\1\end{bmatrix},B=\begin{bmatrix}x_B\\y_B\\1\end{bmatrix}
$$
I would like to map $A$ to $(0,0)$ and $B$ to $(1,1)$:
$$\begin{bmatrix}0\\0\\1\end{bmatrix}=TA$$
$$\begin{bmatrix}1\\1\\1\end{bmatrix}=TB$$
Is $T$ unique?
How can I compute $T$?
Background information:
I am trying to compute an hash code as described in
LANG, Dustin, et al. Astrometry.net: Blind astrometric calibration of arbitrary astronomical images. The astronomical journal, 2010, 139.5: 1782.
See here Figure 1 from the above paper:
| Partial solution
$ \begin{cases}
\begin{align*}
0 &= x_{A}\lambda \cos(\theta) - y_{A}\lambda \sin(\theta) + t_{x} \\
0 &= x_{A}\lambda \sin(\theta) + y_{A}\lambda \cos(\theta) + t_{y} \\
0 &= x_{B}\lambda \cos(\theta) - y_{B}\lambda \sin(\theta) + t_{x} - 1\\
0 &= x_{B}\lambda \sin(\theta) + y_{B}\lambda \cos(\theta) + t_{y} - 1\\
\end{align*}
\end{cases} $
Jacobian:
$J=\begin{bmatrix}
x_{A}\cos(\theta) - y_{A} \sin(\theta) &
-x_{A}\lambda \sin(\theta) - y_{A}\lambda \cos(\theta) &
1 &
0 \\
x_{A}\sin(\theta) + y_{A}\cos(\theta) &
x_{A}\lambda \cos(\theta) - y_{A}\lambda \sin(\theta) &
0 &
1 \\
x_{B}\cos(\theta) - y_{B} \sin(\theta) &
-x_{B}\lambda \sin(\theta) - y_{B}\lambda \cos(\theta) &
1 &
0 \\
x_{B}\sin(\theta) + y_{B}\cos(\theta) &
x_{B}\lambda \cos(\theta) - y_{B}\lambda \sin(\theta) &
0 &
1 \\
\end{bmatrix}$
The rows of $J$ are independent unless
* $\lambda = 0$, or
* $x_{A} = x_{B}$ and $y_{A} = y_{B}$.
Outside these degenerate cases, $J$ is invertible.
However, that doesn't guarantee that there is only one solution:
Unique solution to system of nonlinear equations (non-singular Jacobian)
On the other hand, if $\lambda = 0$ is a solution, then the solution is not unique. I can linearize the system around the solution. Since the null space of the Jacobian is non-trivial, the solution is not unique.
That means if a translation can translate points A and B to $(0, 0)$ and $(1, 1)$, then the solution (in terms of $\lambda, \theta, t_{x}, t_{y}$) is not unique. But in that case, $T$ is still unique.
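As for computing $T$ directly: assuming $A \ne B$, identify the plane with $\mathbb C$. A rotation–scaling is multiplication by $\alpha = \lambda e^{i\theta}$, so the map is $z \mapsto \alpha z + \beta$, and the two conditions $\alpha z_A + \beta = 0$ and $\alpha z_B + \beta = 1+i$ determine $\alpha$ and $\beta$ (hence $T$) uniquely. A sketch (the sample points are arbitrary):

```python
import numpy as np

def similarity_matrix(A, B):
    """Unique rotation+scale+translation sending A -> (0,0) and B -> (1,1)."""
    zA, zB = complex(*A), complex(*B)
    alpha = (1 + 1j) / (zB - zA)   # alpha = lambda * e^{i theta}
    beta = -alpha * zA             # translation, so that zA maps to 0
    a, b = alpha.real, alpha.imag  # multiplication by alpha = [[a,-b],[b,a]]
    return np.array([[a, -b, beta.real],
                     [b,  a, beta.imag],
                     [0.0, 0.0, 1.0]])

T = similarity_matrix((2.0, 1.0), (5.0, 3.0))
```

Here $\lambda = |\alpha|$ and $\theta = \arg\alpha$; the only degenerate case is $A = B$, where no such $T$ exists.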
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2982975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Evaluating $\lim_{x \to \infty}\frac{1}{x}\int_0^x|\sin(t)|dt$ I would appreciate some help with this problem:
Evaluate:
$$\lim_{x \to \infty}\frac{1}{x}\displaystyle\int_0^x|\sin(t)|dt$$
| Note that $\vert\sin(t)\vert$ is non-negative, periodic with period $\pi$, and that
$$\int_0^\pi\vert \sin(t)\vert dt=2.$$
Let $f(x)$ be the largest integer smaller than or equal to $x/\pi$. Then it holds that
$$\int_0^{f(x)\pi}\vert\sin(t)\vert dt\leq\int_0^x\vert\sin(t)\vert dt\leq\int_0^{[f(x)+1]\pi}\vert\sin(t)\vert dt.$$
This can be written as
$$2f(x)\leq\int_0^x\vert\sin(t)\vert dt\leq2[f(x)+1].$$
Dividing by $x$ and noting that $\lim_{x\to\infty}f(x)/x=1/\pi$ it follows that
$$\frac2\pi\leq\lim_{x\to+\infty}\frac1x\int_0^x\vert\sin(t)\vert dt\leq\frac2\pi,$$ and hence the limit equals $\dfrac2\pi$.
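Because $|\sin|$ integrates exactly to $2$ over each half-period, the average admits a closed form that makes the limit easy to check numerically; a sketch:

```python
import math

def average_abs_sin(x):
    """(1/x) * integral of |sin t| on [0, x], using area 2 per full half-period."""
    k = math.floor(x / math.pi)
    integral = 2 * k + (1 - math.cos(x - k * math.pi))
    return integral / x

# converges to 2/pi ~ 0.6366 as x grows
assert abs(average_abs_sin(1e6) - 2 / math.pi) < 1e-5
```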
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2983075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Octal palindromes with even number digits are all composite numbers? I want to know whether octal palindromes with even number digits (11 or 1221, but not 121) are all composite numbers, and a general proof if so or a counterexample if not.
| Let us consider any number written in base $8$, $a_n 8^n + \dots + a_0$. Since $8 \equiv -1 \pmod 9$, we have $8^k \equiv 1 \pmod 9$ for even $k$ and $8^k \equiv -1 \pmod 9$ for odd $k$, so modulo $9$ the number reduces to the alternating sum $a_0 - a_1 + a_2 - \dots$. For a palindrome with an even number of digits, each digit is paired with an equal digit in a position of opposite parity, so the alternating sum vanishes. Arguing as in the base-$10$ test for divisibility by $11$, every octal palindrome with an even number of digits is therefore a multiple of $9$; since the smallest such number is $11_8 = 9 = 3^2$, they are all composite.
In general, a palindrome in base $b$ with an even number of digits is a multiple of $b+1$.
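A brute-force check over the first even-length octal palindromes is consistent with this; a sketch:

```python
even_palindromes = []
for n in range(1, 200000):
    s = format(n, "o")  # octal digits, no prefix
    if s == s[::-1] and len(s) % 2 == 0:
        even_palindromes.append(n)

assert even_palindromes[0] == 0o11           # = 9, the smallest case
assert all(n % 9 == 0 for n in even_palindromes)
```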
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2983277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Integration by parts $3n$ times. I found the equation below in Repeated integration by parts of a definite integral
\begin{align}
\int_a^b f^{(n)}(x)g(x) dx = (-1)^{(n)}\int_a^b f(x)g^{(n)}(x) dx
\end{align}
which is true if $\left.f^{(k)}(x)g^{(n-k)}(x)\right|_a^b=0 \: \forall k \in [0:n-1]$
So I want to test on the integral below:
\begin{align}
\int_0^1 \frac{d^{3n}}{dx^{3n}} (x^3-x^4)^n e^x dx
\end{align}
here $f(x) = (x^3-x^4)^n$ and $g(x)=e^x$.
Is the following correct?
\begin{align}
&\int_a^b f^{(3n)}(x)g(x) dx = (-1)^{(3n)}\int_a^b f(x)g^{(3n)}(x) dx\\
&\int_0^1 \frac{d^{3n}}{dx^{3n}} (x^3-x^4)^n e^x dx =
(-1)^{(3n)} \int_0^1 (x^3-x^4)^n e^x dx
\end{align}
if not, how to proceed?
| The formula requires the boundary terms $\left.f^{(k)}(x)g^{(3n-k)}(x)\right|_0^1=0$ for all $k \in [0:3n-1]$, not just for $k < n$. Here $f(x)=(x^{3}-x^{4})^{n}=x^{3n}(1-x)^{n}$ vanishes to order $3n$ at $x=0$ but only to order $n$ at $x=1$, so $f^{(k)}(1)=0$ holds only for $k \le n-1$, and for $k = n,\dots,3n-1$ the boundary terms at $x=1$ survive. Indeed, for $n=1$ the left-hand side is $\int_0^1 (6-24x)e^x\,dx = 6e-30$, while the right-hand side is $-\int_0^1(x^3-x^4)e^x\,dx = 11e-30$; the difference $-5e$ is exactly the surviving boundary contribution. So the proposed equality does not hold as stated, and one must keep the boundary terms when reducing the integral.
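One way to test the applicability of the formula here is to track the boundary terms of the $m$-fold integration by parts symbolically; a sketch with SymPy (assuming it is installed):

```python
import sympy as sp

x = sp.symbols('x')
for n in (1, 2):
    f = (x**3 - x**4)**n
    g = sp.exp(x)
    m = 3 * n
    lhs = sp.integrate(sp.diff(f, x, m) * g, (x, 0, 1))
    rhs = (-1)**m * sp.integrate(f * sp.diff(g, x, m), (x, 0, 1))
    # boundary terms produced by m-fold integration by parts:
    # integral(f^(m) g) = sum_j (-1)^j [f^(m-1-j) g^(j)] + (-1)^m integral(f g^(m))
    bnd = sum((-1)**j * sp.diff(f, x, m - 1 - j) * sp.diff(g, x, j)
              for j in range(m))
    assert sp.simplify(lhs - rhs - (bnd.subs(x, 1) - bnd.subs(x, 0))) == 0
```

For these $f$ and $g$ the boundary sum does not vanish (it equals $-5e$ for $n=1$), because $f^{(k)}(1)\neq0$ once $k\geq n$.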
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2983534",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Let $R$ be an integral domain. If $x \in R$ is prime, then $x$ is irreducible. I am trying to understand the proof for the following theorem:
Let $R$ be an integral domain. If $x \in R$ is prime, then $x$ is irreducible.
Here is the proof:
I typed this up a while ago, and I don't understand the part where $x \mid bc$ implies $x=bc$. Is something wrong at this step?
Also, is the definition of a prime element that $p$ is prime if whenever $p\mid ab$, then either $p\mid a$ or $p\mid b$? Now I don't feel so sure. This could be the reason why I am not understanding the proof...
| You have it reversed. What you seek is the inference $(1)\Rightarrow(2)$ below.
Theorem $\,\ (1)\,\Rightarrow\,(2)\!\iff\! (3)\ $ below, $ $ for a nonunit $p\neq 0$
$(1)\ \ \ \color{#c00}{p\ \mid\ ab}\ \Rightarrow\ p\:|\:a\ \ {\rm or}\ \ p\:|\:b\quad$ [Definition of $\:p\:$ is prime]
$(2)\ \ \ \color{#c00}{p=ab}\ \Rightarrow\ p\:|\:a\ \ {\rm or}\ \ p\:|\:b\quad$ [Definition of $\:p\:$ is irreducible, in associate form]
$(3)\ \ \ p=ab\ \Rightarrow\ a\:|\:1\ \ {\rm or}\ \ b\:|\:1\quad$ [Definition of $\:p\:$ is irreducible, in $\rm\color{#0a0}{unit}$ form]
Proof $\ \ \ (1\Rightarrow 2)\,\ \ \ \color{#c00}{p = ab\, \Rightarrow\, p\mid ab}\,\stackrel{(1)}\Rightarrow\,p\mid a\:$ or $\:p\mid b.\ $ Hence prime $\Rightarrow$ irreducible.
$(2\!\!\iff\!\! 3)\ \ \ $ If $\:p = ab\:$ then $\:\dfrac{1}b = \dfrac{a}p\:$ so $\:p\:|\:a\iff b\:|\:1.\:$ Similarly $\:p\:|\:b\iff a\:|\:1.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2983631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
How to evaluate $\int_{-\infty}^{\infty}\frac{x\arctan\frac1x\ \log(1+x^2)}{1+x^2}dx$ While browsing similar questions on this site I came up with the following integral because I thought I could evaluate it.
$$I=\int_{-\infty}^{\infty}\frac{x\arctan x\ \log(1+x^2)}{1+x^2}dx$$
I've been able to simplify it a bit. We first notice that
$$I=\int_0^{\infty}\frac{\arctan x\ \log(1+x^2)}{1+x^2}2xdx$$
The substitution $u=x^2+1$ gives
$$I=\int_1^{\infty}\arctan\sqrt{u-1}\ \log u\ \frac{du}u$$
Then $w=\log u$ gives
$$I=\int_{0}^{\infty}\arctan\sqrt{e^w-1}\ dw$$
Which I do not know how to proceed with.
Another approach I tried was this. Starting with the original integral,
$x=\tan u$: $$I=2\int_0^{\pi/2}u\tan u\log\sec^2u\ du$$
Which I also do not know how to do. Please help me proceed or give me a value of the integral (and show how you got it).
If no closed form exists (AKA you have an answer in terms of a series or special function), I'm fine with that.
cheers!
Edit: In the comments it is discussed that the integral is not integrable over the positive reals, but the following related integral is:
$$J=\int_0^{\infty}\frac{\log(1+x^2)\arctan\frac1x}{1+x^2}xdx$$
So. How do we find the value for $J$?
| We may also try attacking the third integral in line $(3)$ in my first answer,
$$L = \int_0^1 \frac{\log^2(1+x)+\log^2(1-x)}{\sqrt{1-x^2}} \, dx$$
by exploiting the generating function of $\dfrac{H_{2k-1}}k$, with $H_k$ the $k^{\rm th}$ harmonic number. Using the Cauchy product, we find
$$-\log(1\pm x) = \sum_{n=1}^\infty \frac{(\mp x)^n}n \\ \implies \log^2(1\pm x) = \sum_{n=2}^\infty \sum_{m=1}^{n-1} \frac{(\mp x)^n}{m(n-m)} = \sum_{n=2}^\infty \frac{2 H_{n-1}}n (\mp x)^n \\ \implies \log^2(1+x) + \log^2(1-x) = \sum_{n=2}^\infty \frac{2H_{n-1}}{n} \left(1+(-1)^n\right) x^n = \sum_{n=1}^\infty \frac{2H_{2n-1}}{n} x^{2n}$$
Multiply by $\frac1{\sqrt{1-x^2}}$ and integrate to recover an Euler sum:
$$\begin{align*}
L &= \sum_{n=1}^\infty \frac{2H_{2n-1}}{n} \int_0^1 \frac{x^{2n}}{\sqrt{1-x^2}} \, dx \\[1ex]
&= \sum_{n=1}^\infty \frac{H_{2n-1}}{n} \int_0^1 x^{n-\frac12} (1-x)^{-\frac12} \, dx \\[1ex]
&= \sum_{n=1}^\infty \frac{H_{2n-1}}{n} \operatorname{B}\left(n+\frac12,\frac12\right) \\[1ex]
&= \pi \sum_{n=1}^\infty \frac{H_{2n-1}}{n\cdot4^n} \binom{2n}n
\end{align*}$$
where in the second line, we substitute $x\mapsto\sqrt x$. Now we can use
$$\sum_{n=1}^\infty \frac{H_n}{n\cdot4^n} \binom{2n}n = \frac{\pi^2}3$$
(see the proof following equation $(20)$) together with
$$H_{2n-1} = \frac12 \left(H_n + H_{n-\frac12}\right) + \log(2) - \frac1{2n}$$
to determine $\displaystyle L=\frac{\pi^3}3+\pi\log^2(2)$.
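The stated closed form can be checked numerically: with $x=\sin\theta$ the weight $1/\sqrt{1-x^2}$ disappears, leaving only a mild logarithmic-squared singularity at $\theta=\pi/2$. A SciPy sketch (the cutoff just below $\pi/2$ keeps $\log(1-\sin t)$ finite in floating point; the omitted tail is of order $10^{-4}$):

```python
import math
from scipy.integrate import quad

# substitute x = sin(theta) in L; stop just short of pi/2
f = lambda t: math.log(1 + math.sin(t))**2 + math.log(1 - math.sin(t))**2
val, err = quad(f, 0.0, math.pi / 2 - 1e-7, limit=200)

target = math.pi**3 / 3 + math.pi * math.log(2)**2
assert abs(val - target) < 1e-2
```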
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2983721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
Are $M, P, N $ collinear? Let $\alpha $ a circle of diameters $AB $ and $\beta $ a circle tangent to $AB $ in $ C$ and tangent to $\alpha$ in $T $.
Let $M\in \alpha $ and $N\in CB $ s.t. $MN\perp AB $ and $MN $ is tangent to $\beta $.
Show that $\angle AMC=\angle CMN $.
My idea: Let $BT\cap \beta=\{P\}$.
I need to prove that $MN\cap \beta =\{P\}$. In GeoGebra this seems to be right.
If this is true, then:
$\triangle BNP$ ~ $\triangle BTA $ $\Rightarrow BN\cdot AB=BP\cdot BT \Rightarrow BM ^2=BC^2$.
So, $\angle BMC=\angle BCM\Rightarrow \angle AMC=\angle CMN $.
| Let O and Q be the centers of $\alpha$ and $\beta$ respectively. Note that CNPQ is a square and therefore if PQ is extended to cut $\beta$ at X, then, PQX // AOCNB.
Since $\angle TXP = 0.5 \angle TQP = 0.5\angle TOB = \angle TAB$, TXA is a straight line because the two red shaded angles are on the corresponding angle positions of the two parallel lines.
Similarly, since $\angle STP = \angle TXP = \angle TAB = \angle STB$, therefore T, P, and B are collinear.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2983813",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Is the restriction of a map representing a cohomology class on its Poincare dual null-homotopic? Let $M$ be a 5-manifold (possibly non-orientable),
$g\in H^2(M,\mathbb{Z}_2)$ is represented by a map $\tilde{g}:M\to K(\mathbb{Z}_2,2)$. $\text{PD}(g)$ is the submanifold of $M$ representing the Poincare dual of $g$.
$\tilde{g}|_{\text{PD}(g)}$ is the restriction of $\tilde{g}$ on $\text{PD}(g)$, it is a map from $\text{PD}(g)$ to $K(\mathbb{Z}_2,2)$, also represents a cohomology class $f$ in $H^2(\text{PD}(g),\mathbb{Z}_2)$.
My question: Is $\tilde{g}|_{\text{PD}(g)}$ null-homotopic? In other words, is $f$ trivial?
*
*If it is true, please give a simple proof/argument.
*If it is false, please give, counterexamples.
Thank you!
| It is not true. Think of $M= \mathbb CP^2\times S^1$. That $H^2$ is generated by a single element follows from the product formula. The element $g$ is the mod-2 reduction of the canonical map $\mathbb CP^2\times S^1 \to \mathbb CP^2\hookrightarrow \mathbb CP^{\infty}$ [since $K(\mathbb Z, 2)= \mathbb CP^{\infty}$], as Mike observes in the comments. Its Poincaré dual is $\mathbb CP^1\times S^1$ in mod-2 homology, and the restriction $\mathbb CP^1\times S^1 \to \mathbb CP^1 \hookrightarrow \mathbb CP^{\infty}$ is not null-homotopic. [Thanks to Mike for correcting my sloppy mistake.]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2983927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
complex integral: $\oint dz/z$ I'm trying to determine $$\oint \frac{dz}{z}$$ on a closed path $\mathcal C$, where $\mathcal C$ is the circle $|z|=100$ traversed anticlockwise.
What I did was re-write the function as $$\oint \frac{dz}{z-0}$$ and by Cauchy integration formula, it straight away gives $2\pi i$.
Is this method correct? Share a better/correct method and explain my mistakes, if not.
| If $f(z)$ is an analytic function inside a simple closed curve $C$,
except for a finite number of isolated singular points $z_j$, $j=1,\dots,N$, located inside $C$, then
\begin{equation}
\oint f(z) dz= 2 \pi i \sum_{j=1}^N r_j
\end{equation}
where $r_j$ is the residue of $f(z)$ at $z = z_j$. This is known as Cauchy's residue theorem.
In your case $f(z)=1/z$ and $z_1=0$. If $C$ is a circle of radius $R$, i.e. $|z| = R$, the residue of $z^{-1}$ is unity and the integral equals $2 \pi i$. The residue theorem shows that the correct modification of Cauchy's theorem, when $f(z)$ contains one isolated singular point at $z_0 \in D$, is that the integral be proportional to the residue of $f(z)$ at $z_0$, where $0 < |z-z_0|< R$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2984008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Recurrence relations with algorithm Let $x \in \mathbb{R}$ and $n \in \mathbb{N}$. $c_0=1,c_1=\cos x$
for $k=1,2,...,n-1$:
$c_{k+1}=2c_1c_k-c_{k-1}$
How to prove that $c_k=\cos kx$?
I tried to show this equality with induction:
$k=1:c_1=\cos x$
$k \mapsto k+1: 2\cos(x)c_{k+1}-c_k$
Here I don't know how to continue. I want to use the trigonometric addition formulas, but it doesn't work from here.
| You need strong induction for this problem, where you assume that the $c_n$ formula is true for all $1\le n\le k$:
$$c_{k+1}=2c_1c_k-c_{k-1}=2\cos x\cos kx-\cos(k-1)x$$
We use a product-to-sum identity to get rid of the cosine product:
$$=\cos(kx-x)+\cos(kx+x)-\cos(k-1)x=\cos(k+1)x+\cos(k-1)x-\cos(k-1)x=\cos(k+1)x$$
This completes the inductive step.
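This is the standard Chebyshev-style recurrence for $\cos kx$, and it is easy to check numerically; a quick sketch:

```python
import math

def cos_multiples(x, n):
    """Generate c[0..n+1] with c[k+1] = 2 c[1] c[k] - c[k-1]; c[k] should be cos(k x)."""
    c = [1.0, math.cos(x)]
    for k in range(1, n):
        c.append(2.0 * c[1] * c[k] - c[k - 1])
    return c

x = 0.7
c = cos_multiples(x, 12)
assert all(abs(c[k] - math.cos(k * x)) < 1e-9 for k in range(len(c)))
```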
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2984164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
length of tangent to a curve passing through another point Let $A$ be a point on the curve
$\mathcal{C} : x^2+y^2-2x-4=0$
If the tangent line to $\mathcal{C}$ at $A$ passes through $P(4,3)$, then what is the length of AP?
Please, include a general method of approaching similar kind of questions.
| Bring the circle into standard form to find our center location and radius.
$$ ( h,k)=(1,0),\, R = \sqrt5 = CT $$
Distance of center to outside point squared
$$ PC^2 = (4-1)^2+ (3-0)^2= 18 $$
$$ PT^2= PC^2-5 = 18-5= 13 \rightarrow PT= \sqrt{13} $$
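The general method behind this: for a circle $x^2+y^2+Dx+Ey+F=0$ and an external point $P$, the tangent length is $\sqrt{PC^2-R^2}$, which equals the square root of the power of the point, i.e. of the circle's left-hand side evaluated at $P$. A sketch:

```python
import math

def tangent_length(D, E, F, px, py):
    """Tangent length from (px, py) to the circle x^2 + y^2 + Dx + Ey + F = 0."""
    power = px**2 + py**2 + D * px + E * py + F   # equals PC^2 - R^2
    return math.sqrt(power)

# x^2 + y^2 - 2x - 4 = 0 and P = (4, 3): power = 16 + 9 - 8 - 4 = 13
assert abs(tangent_length(-2.0, 0.0, -4.0, 4.0, 3.0) - math.sqrt(13)) < 1e-12
```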
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2984274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Limit at infinity $\lim_{x\to \infty} x^a a^x=$? $\displaystyle \lim_{x\to \infty} x^a a^x=?$; $0<a<1$
I try to use the property $a^{\log_a x}= x$ and rewrite the expression
$\displaystyle \lim_{x\to \infty} x^a a^x = \lim_{x\to \infty} \frac{a^{a\log_a x}}{a^{-x}} $ but I can't find the limit yet.
Any suggestion?
| Consider $x=ay$; then your expression becomes
$$
x^aa^x=(ay)^aa^{ay}=a^a(ya^y)^a
$$
so you just need to compute
$$
\lim_{y\to\infty}ya^y
$$
For $a\ge1$ the limit is $\infty$. For $0<a<1$ rewrite it as
$$
\lim_{y\to\infty}\frac{y}{a^{-y}}=\lim_{y\to\infty}\frac{1}{-a^{-y}\log a}=0
$$
with a simple application of l'Hôpital.
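A quick numerical check of the decay for $0<a<1$ (the values of $a$ and $x$ are chosen arbitrarily):

```python
a = 0.5
values = [x**a * a**x for x in (10, 100, 1000)]

# x^a grows only polynomially while a^x decays geometrically, so the product -> 0
assert values[0] > values[1] > values[2]
assert values[2] < 1e-250
```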
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2984346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Finding $\lim_{n\to \infty}\sqrt n \int_0^1 \frac{\,dx}{(1+x^2)^n}$ $$
\lim_{n\to\infty} n^{1/2}
\int_{0}^{1} \frac{1}{(1+x^2)^n}\mathrm{d}x=0
$$
Is my answer correct?
But I am not sure of the method by which I have done it.
| $\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
This can be evaluated by means of
Laplace Method:
\begin{align}
&\bbox[10px,#ffd]{\lim_{n \to \infty}\bracks{n^{1/2}\int_{0}^{1}{\dd x \over \pars{1 + x^{2}}^{n}}}}
\\[5mm] = &\
\lim_{n \to \infty}\bracks{n^{1/2}\int_{0}^{1}
\exp\pars{-n\ln\pars{1 + x^{2}}}\,\dd x}
\\[5mm] = &\
\lim_{n \to \infty}\bracks{n^{1/2}\int_{0}^{\infty}
\exp\pars{-nx^{2}}\,\dd x}
\\[5mm] = &\
\int_{0}^{\infty}\exp\pars{-x^{2}}\,\dd x =
\bbx{\root{\pi} \over 2} \approx 0.8862 \\ &
\end{align}
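The Laplace-method prediction $\sqrt\pi/2\approx0.8862$ can be confirmed numerically; a SciPy sketch:

```python
import math
from scipy.integrate import quad

def scaled_integral(n):
    """sqrt(n) * integral of (1 + x^2)^(-n) on [0, 1]."""
    val, _ = quad(lambda x: (1.0 + x * x)**(-n), 0.0, 1.0)
    return math.sqrt(n) * val

# approaches sqrt(pi)/2 as n grows (the correction term is O(1/n))
assert abs(scaled_integral(10000) - math.sqrt(math.pi) / 2) < 1e-3
```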
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2984468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 6,
"answer_id": 1
} |
Bounding the difference of rank-1 matrices Let $\|\cdot\|$ denote the Frobenius norm and $x,y \in \mathbb{R}^n$
I need a bound of the form
$$
\|x \cdot x^\top - y \cdot y^\top \| \leq C \|x-y\|_2 \quad (*),
$$
where $C>0$ does not depend on $x$ or $y$.
This seems to work when $\|x\|_2=\|y\|_2 = 1$. (Then I get $C=2$ by basic linear algebra).
For general $x,y$ this seems not to be possible, which can already be seen for $n=1$.
I wonder what is known regarding more advanced bounds on the LHS (like eigenvalues of sums of rank-1 matrices?) that come as close as possible to the RHS of (*).
Thanks very much for any help or suggestion on this.
| For the Frobenius norm, we have
$$
\|xy^T\|_F^2 = \sum_{i,j}(x_i y_j)^2 = \|x\|_2^2 \|y\|_2^2.
$$
This implies
$$
\|xx^T-yy^T\|_F= \|(x-y)x^T+y(x-y)^T\|_F \le
\|(x-y)x^T\|_F+\|y(x-y)^T\|_F
\le (\|x\|_2 + \|y\|_2)(\|x-y\|_2).
$$
The constant cannot be independent of $x,y$, because the mapping $x\mapsto xx^T$ is 'quadratic' in $x$.
Taking the difference of the non-zero eigenvalues of the rank-one matrices does not help, as this difference is $\|x\|_2^2 - \|y\|_2^2$.
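A quick randomized check of the proved inequality (the sampling setup is our own):

```python
import numpy as np

rng = np.random.default_rng(0)
violations = 0
for _ in range(1000):
    x = rng.normal(size=5)
    y = rng.normal(size=5)
    lhs = np.linalg.norm(np.outer(x, x) - np.outer(y, y))          # Frobenius
    rhs = (np.linalg.norm(x) + np.linalg.norm(y)) * np.linalg.norm(x - y)
    violations += lhs > rhs + 1e-9
print(violations)   # 0: the bound held on every sample
```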
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2984627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is a set in between two sets of equal measure measurable? The Lebesgue sigma algebra is complete with respect to Lebesgue measure, which means that if $A$ is a Lebesgue measurable set with Lebesgue measure $0$ and $B$ is a subset of $A$, then $B$ is Lebesgue measurable as well. But I'd like to know if something stronger is true.
Suppose that $A$ is a subset of $B$ which is a subset of $C$, where $A$ and $C$ are Lebesgue measurable sets and the Lebesgue measure of $A$ is equal to the Lebesgue measure of $C$. Then my question is, does $B$ have to be Lebesgue measurable as well?
| If $A$ and $C$ are each Lebesgue measurable, then so is $C\setminus A$ and so by finite additivity it has measure zero.
But then $B\setminus A$ also has measure zero and hence is measurable, so $$B=A\cup(B\setminus A)$$ is the union of two measurable sets, hence is measurable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2984760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
On existence of positive integer solution of $\binom{x+y}{2}=ax+by$ How can I prove this?
Prove that for any two positive integers $a,b$ there are two positive integers $x,y$ satisfying the following equation:
$$\binom{x+y}{2}=ax+by$$
My idea was that $\binom{x+y}{2}=\dfrac{x+2y-1}{2}+\dfrac{y(y-1)}{2}$ and choose $x,y$, such that $2a=x+2y-1, 2b=y(y-1)$, but using this idea, $x,y$ won’t be always positive.
| If $a=b$ then let $(x,y)=(a,a+1)$.
Otherwise, w.l.o.g. suppose $a>b$ and let $x+y=2t(a-b)$ for some positive integer $t$. Then
$$t(a-b)\Big(2t(a-b)-1\Big) =ax+by=(a-b)x+2bt(a-b)$$
$\text{Therefore } x=t\Big(2t(a-b)-1\Big)-2bt=t\Big(2t(a-b)-(2b+1)\Big)$.
$x$ will be a positive integer providing $t>\frac{2b+1}{2(a-b)}.$
$y=2t(a-b)-x$ will be a positive integer providing $2(a-b)>2t(a-b)-(2b+1)$ i.e. $\frac{2b+1}{2(a-b)}>t-1$.
$\frac{2b+1}{2(a-b)}$ is positive but not an integer and so, precisely as required, there is a positive integer $t$ such that
$$t>\frac{2b+1}{2(a-b)}>t-1.$$
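A brute-force script confirms that solutions always exist; the search bound $x+y \le 2(a+b)+1$ follows from the construction above, and the function name is ours:

```python
from math import comb

def solve(a, b):
    # Search positive x, y with C(x+y, 2) = a*x + b*y; the construction in
    # the answer guarantees a solution with x + y <= 2*(a + b) + 1.
    for s in range(2, 2 * (a + b) + 2):
        for x in range(1, s):
            if comb(s, 2) == a * x + b * (s - x):
                return x, s - x
    return None

print(solve(1, 1), solve(3, 7))
for a in range(1, 12):
    for b in range(1, 12):
        assert solve(a, b) is not None
```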
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2984918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 3
} |
$GL(2,R) / SL(2,R)$ isomorphic to R* I am needing to write a prove showing that $GL(2,\mathbb{R}) / SL(2,\mathbb{R}) $ is isomorphic to $\mathbb{R}^*$.
I know that $SL(2,\mathbb{R})$ is a normal subgroup of $GL(2,\mathbb{R})$ but I'm not sure how to use that or where I should start.
Any help would be appreciated. Thanks.
| The homomorphism $GL(2,\Bbb R)\to\Bbb R^*$, $A\mapsto \det A$ has $SL(2,\Bbb R)$ as kernel.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2985047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
If $A$ is a square matrix that satisfies $A^2-A+2I=0$, show that $A+I$ is invertible If $A$ is a square matrix that satisfies $A^2-A+2I=0$, show that $A+I$ is invertible. I understand how to find if $A$ is invertible but I don't know how to solve for the $A+I$ version.
| $(A+I)(A-2I)=A^2-2A+A-2I=A^2-A-2I=-4I$
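The identity can be checked on a concrete matrix; the companion matrix of $t^2-t+2$ (our choice of example) satisfies the given relation exactly:

```python
import numpy as np

# Companion matrix of t^2 - t + 2, so A^2 - A + 2I = 0 holds.
A = np.array([[0.0, -2.0], [1.0, 1.0]])
I = np.eye(2)
assert np.allclose(A @ A - A + 2 * I, 0)

# (A+I)(A-2I) = -4I  gives the explicit inverse (A - 2I)/(-4).
inv = (A - 2 * I) / -4.0
assert np.allclose((A + I) @ inv, I)
assert np.allclose(inv @ (A + I), I)
print(inv)
```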
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2985212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Gamma Distribution Moments Show that for X ~ Gamma($\alpha$, $\beta$), for positive constant $\nu$,
$E[X^\nu] = \dfrac{\beta^\nu*\Gamma(\nu + \alpha)}{\Gamma(\alpha)}$.
I have the following solution:
Solution
However, I don't understand how we get that $\dfrac{1}{\Gamma(\alpha)*\beta^\alpha}*\int_{0}^{\infty}x^{(\nu+\alpha)-1}e^{-x/\beta}dx = \dfrac{\Gamma(\nu+\alpha)*\beta^{\nu+\alpha}}{\Gamma(\alpha)*\beta^{\alpha}}$
Would appreciate any help on how this step was completed. Basically, I don't understand how: $\int_{0}^{\infty}x^{(\nu+\alpha)-1}e^{-x/\beta}dx = \Gamma(\nu+\alpha)*\beta^{\nu+\alpha}$.
I see that the left side is close to the definition of the Gamma function, but can't see how exactly to turn it into the right side.
| One way to understand the calculation is to recall that for a gamma distribution with shape $\alpha$ and scale $\beta$, $$f_X(x) = \frac{x^{\alpha-1} e^{-x/\beta}}{\beta^\alpha \Gamma(\alpha)}, \quad x > 0.$$ The denominator, being independent of $x$, suggests that $1/(\beta^\alpha \Gamma(\alpha))$ is the required multiplicative factor for the density such that $$\int_{x=0}^\infty f_X(x) \, dx = 1.$$ In other words, $$\beta^\alpha \Gamma(\alpha) = \int_{x=0}^\infty x^{\alpha - 1} e^{-x/\beta} \, dx.$$ This holds true for any $\alpha, \beta > 0$. Now, with this in mind, $$\operatorname{E}[X^\nu] = \int_{x=0}^\infty x^\nu f_X(x) \, dx = \int_{x=0}^\infty \frac{x^{\nu + \alpha - 1} e^{-x/\beta}}{\beta^\alpha \Gamma(\alpha)} \, dx = \frac{\beta^{\nu + \alpha} \Gamma(\nu + \alpha)}{\beta^\alpha \Gamma(\alpha)}\int_{x=0}^\infty \frac{x^{\nu + \alpha - 1} e^{-x/\beta}}{\beta^{\nu + \alpha}\Gamma(\nu + \alpha)} \, dx.$$ This is what the provided solution does, but the motivation for doing so should now be clear, because now the integrand represents a gamma density with shape parameter $\nu + \alpha$, and rate $\beta$. Therefore, its integral over its support is also $1$, provided $\nu + \alpha > 0$. It follows that $$\operatorname{E}[X^\nu] = \frac{\beta^\nu \Gamma(\nu + \alpha)}{\Gamma(\alpha)}, \quad \nu > -\alpha,$$ as claimed.
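The identity is easy to verify numerically with a midpoint-rule integral (the helper names, cutoff, and step count are our choices):

```python
import math

def moment_numeric(alpha, beta, nu, upper=100.0, steps=200_000):
    # Midpoint rule for E[X^nu] with X ~ Gamma(shape=alpha, scale=beta)
    h = upper / steps
    c = 1.0 / (math.gamma(alpha) * beta ** alpha)
    return h * sum(
        c * ((k + 0.5) * h) ** (nu + alpha - 1) * math.exp(-(k + 0.5) * h / beta)
        for k in range(steps)
    )

def moment_formula(alpha, beta, nu):
    return beta ** nu * math.gamma(nu + alpha) / math.gamma(alpha)

for alpha, beta, nu in [(2.0, 1.5, 0.5), (3.0, 2.0, 2.0)]:
    print(moment_numeric(alpha, beta, nu), moment_formula(alpha, beta, nu))
```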
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2985359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
A martingale bounded from below is $L^1$ bounded
Let $(X_n,\mathcal F_n)$ be a martingale bounded from below i.e. $X_n\geq M$ for some $M\in\mathbb R$. Then show that $\sup_n E|X_n|<\infty$.
It is easy to observe that $X_n$ converges almost surely to some $X\in L^1$ as $X_n-M$ is a non-negative martingale. I can conclude $X_n$ is $L^1-$bounded if I can show $X_n$ is uniformly integrable. But I don't know how to prove it.
| $X_n^{-} \leq M^{-}$ so $EX_n^{-}$ is bounded. Since $EX_n=EX_1$ for all $n$ (by the martingale property) we also know that $EX_n^{+}-EX_n^{-}=EX_n$ is constant, hence bounded. This makes $E|X_n|=EX_n^{+}+EX_n^{-}$ bounded.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2985482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Basic questions about pythagorean triples and "n-lets" I've had some difficulties finding answer to the two following questions:
1) Given one of the natural numbers $a,b$, where $b$ is even and $a^2+b^2=c^2$, is there only one such Pythagorean triple?
2) How about a sum of $n$ squares of natural numbers that is equal to a square of a natural number? Given one of the summed squares, is there only one such Pythagorean "n-let"?
I know that $a$ and $c$ are odd and I am aware of Euclid's formula, but I don't know how to use it. I have no idea how to tackle 2). I only know that, for example with $n=4$, there may or may not be two summed squares that are the lengths of the legs of a right triangle. Perhaps this is trivial, but I don't know what to do.
Edit: 3) How about a given $c$ instead of $a,b$ in 1)?
| To construct n-lets, we can begin with this function to find values of $(m,n)$ for Euclid's formula:
$$n=\sqrt{m^2-A}\text{ where m varies from }\lceil\sqrt{A}\rceil\text{ to }\frac{A+1}{2}$$
This will let us find a triple with a matching side A, if it exists, for any $m$ that yields a positive integer $n$.
Let's begin with $(3,4,5)$ and find a triple to match the hypotenuse. In this case,
$$\lceil\sqrt{A}\rceil=\frac{A+1}{2}=3\text{ and } \sqrt{3^2-5}=2\text{ so we have }(3,2)$$
$$A=3^2-2^2=5\qquad B=2*3*2=12\qquad C=3^2+2^2=13$$
$\text{ The n-let that follows is }\\3^2+4^2+12^2=13^2\\\text{ and continuing with this process, we can get}$
$$3^2+4^2+12^2+84^2+3612^2=3613^2$$
or
$$3^2+4^2+12^2+84^2+132^2=157^2$$
Here we have two triples that match the hypotenuse of $(13,84,85)$ and here is how we found them
$$m =\lceil\sqrt{85}\rceil=10\text{ to }\frac{85+1}{2}=43$$
In the loop from $10$ to $43$, we find $(11,6)$ and $(43,42)$ with integer $n$.
$$A=11^2-6^2=85\qquad B=2*11*6=132\qquad C=11^2+6^2=157$$
$$A=43^2-42^2=85\qquad B=2*43*42=3612\qquad C=43^2+42^2=3613$$
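The search loop described above is easy to script (the function name is ours):

```python
from math import isqrt

def triples_with_odd_side(A):
    # m runs from ceil(sqrt(A)) to (A+1)/2; each integer n = sqrt(m^2 - A)
    # yields the triple (A, 2mn, m^2 + n^2) via Euclid's formula.
    found = []
    for m in range(isqrt(A - 1) + 1, (A + 1) // 2 + 1):
        n = isqrt(m * m - A)
        if n > 0 and n * n == m * m - A:
            found.append((A, 2 * m * n, m * m + n * n))
    return found

print(triples_with_odd_side(5))    # [(5, 12, 13)]
print(triples_with_odd_side(85))   # [(85, 132, 157), (85, 3612, 3613)]
```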
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2985610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
$A + B = \{\,x + y \mid x ∈ A,\, y ∈ B\,\}$ is closed in $ [0,\infty)$ for any closed $A, B \subseteq [0,∞)$ If $A, B$ are closed subsets of $[0,∞)$, then
$$A + B = \{\,x + y \mid x ∈ A,\, y ∈ B\,\}$$
is closed in $ [0,\infty)$.
If $A, B$ are closed sets in $\mathbb R$, then I know counterexamples, but restricting to the non-negative reals I cannot find one; of course, that does not by itself mean the statement is true.
So I want to prove or disprove it: I know it is false in general, but in this particular situation I want to know whether it is TRUE or FALSE.
| Take a sequence $(x_n)\subseteq A+B$ convergent to $x\in [0,\infty)$. We want to show that $x\in A+B$.
It follows that $x_n=a_n+b_n$ for some sequences $(a_n)\subseteq A$ and $(b_n)\subseteq B$. Since $A,B$ are subsets of $[0,\infty)$ then $$0\leq a_n\leq x_n$$
$$0\leq b_n\leq x_n$$
In particular, since $x_n$ is bounded then so are $a_n$ and $b_n$ (this is exactly the place where the proof would fail for whole $\mathbb{R}$ case). Therefore they have convergent subsequences, say $a_{n_k}$ and $b_{n_k}$ (we can choose indexes in such a way that they coincide). It follows that
$$x=\lim_{n\to\infty} x_n=\lim_{k\to\infty} x_{n_k}= \lim_{k\to\infty}\big(a_{n_k}+b_{n_k}\big)=\lim_{k\to\infty}a_{n_k}+\lim_{k\to\infty}b_{n_k}\in A+B$$
the last "$\in$" because $A,B$ are closed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2985787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Little-o notation for $\ln(x)$ I want to Show the following:
$\ln(x) = o (e^{\sqrt{\ln x}})$ (Little-o Notation).
So I need to Show that: $\displaystyle{\lim_{x \rightarrow \infty} \frac{\ln(x)}{e^{\sqrt{\ln x}}}=0}.$
Can you help me, please?
| As gimusi noted, you want to prove $\lim_{y\to\infty}y^2 e^{-y}=0$. The function is continuous and non-negative for $y\ge 0$ with only one turning point, and famously integrates to the finite value $2$, which implies the limit.
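A few sample values (our choice of evaluation points) illustrate the decay of the original ratio:

```python
import math

def ratio(x):
    # ln(x) / e^(sqrt(ln x)), the quotient whose limit is claimed to be 0
    return math.log(x) / math.exp(math.sqrt(math.log(x)))

vals = [ratio(10.0 ** k) for k in (4, 8, 16, 32)]
print(vals)   # strictly decreasing toward 0
```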
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2985926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
The relationship between the differential and the directional derivative of a function I am currently studying differential manifolds (from John M. Lee 's book), and have a question concerning the difference between what is defined as the $\textbf{differential of a function F}$, and the $\textbf{directional derivative of a function F}$. Let $M \subset \mathbb{R}^{m}$, let $N\subset \mathbb{R}$, and suppose $F:M \rightarrow N$ is a smooth map. Then,
*
*The differential of $F$ at $p \in M$ is a map
$dF_{p}:T_{p}M \rightarrow T_{F(p)}N$
defined as, for some $v \in T_{p}M$, $dF_{p}(v)$ is a derivation in $T_{F(p)}N$ defined as, for all $f \in C^{\infty}(N)$,
$dF_{p}(v)(f)=v(f \circ F)$.
Now, because $M$ and $N$ are Euclidean themselves, if $v=(v_{1},...,v_{N})$, this can be expressed as
$dF_{p}(v)(f)=v_{1} \cdot \frac{\partial f}{\partial F}\ \frac{\partial F}{\partial x_{1}}+...+v_{m} \cdot \frac{\partial f}{\partial F}\ \frac{\partial F}{\partial x_{m}}$.
*The directional derivative of $F$ at $p$ in direction $v \in \mathbb{R}^{m}$ is given by
$D_{v}F(p)=v_{1} \cdot \frac{\partial F(p)}{\partial x_{1}}+...+v_{m}\cdot \frac{\partial F(p)}{\partial x_{m}}$.
Now, it seems that if $Id:\mathbb{R}\rightarrow \mathbb{R}$ is the identity function on $\mathbb{R}$, we have that for $v \in T_{p}M$,
$v(F)=dF_{p}(v)(Id)=D_{v}F(p)$.
Am I reading this correctly? I am trying to weed through the abstraction of differentials between manifolds and ground it into something more familiar, the directional derivative. Are directional derivatives in the theory of manifolds expressed as the the differential evaluated at the identity function, which are equivalent to simply evaluating $v(F)$ itself?
| The relationship between the differential and directional derivative is the same in differential manifolds as in Euclidean space. The derivative is a linear function. Linear functions take in vectors and output vectors. When the input vector is a unit vector, the output is called the directional derivative. That is because directions are defined by unit vectors.
Let's use an example.
Let $f:\mathbb R^2\to\mathbb R^2$ be defined by $f(x,y)=(x^2-y^2,xy)$
So $f=(f_1,f_2)$ where $f_1(x,y)=x^2-y^2$ and $f_2(x,y)=xy$
The derivative, or differential, is $df= \begin{pmatrix}
\frac{\partial f_1}{\partial x} & \frac{\partial f_1}{\partial y} \\
\frac{\partial f_2}{\partial x} & \frac{\partial f_2}{\partial y} \\ \end{pmatrix}
= \begin{pmatrix}
2x & -2y \\
y & x \\ \end{pmatrix}$
If we want the derivative of $f$ at say $(1,2)$ then we get $$df(1,2) = \begin{pmatrix}
2 & -4 \\
2 & 1 \\ \end{pmatrix}$$
Imagine being at point $(1,2)$ in the plane. There are $360^{\circ}$ of directions. We represent each direction as a unit vector in that direction. Now we can ask questions like: what is the derivative of $f$ in the $(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})$ direction at $(1,2)$?
We evaluate the derivative to get $$\begin{pmatrix}
2 & -4 \\
2 & 1 \\ \end{pmatrix} \begin{pmatrix}
\frac{1}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} \\ \end{pmatrix}=\begin{pmatrix}
\frac{-2}{\sqrt{2}} \\
\frac{3}{\sqrt{2}} \\ \end{pmatrix}$$
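A finite-difference check of this matrix-vector computation (the helper names and step size are our choices):

```python
import numpy as np

def f(p):
    x, y = p
    return np.array([x * x - y * y, x * y])

def directional(p, v, h=1e-6):
    # Central-difference approximation of the directional derivative D_v f(p)
    return (f(p + h * v) - f(p - h * v)) / (2 * h)

p = np.array([1.0, 2.0])
J = np.array([[2 * p[0], -2 * p[1]], [p[1], p[0]]])   # df at (1, 2)
v = np.array([1.0, 1.0]) / np.sqrt(2.0)
print(J @ v)   # should match the matrix-vector product above
assert np.allclose(directional(p, v), J @ v, atol=1e-5)
```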
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2986070",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Finding sum of a finite series. Consider the series
$$\frac {q_1} {p_1} + \frac {q_1 q_2} {p_1 p_2} + \cdots + \frac {q_1q_2 \cdots q_n} {p_1 p_2 \cdots p_n}$$ where $p_i + q_i = 1$ and $0 < p_i < 1$ and $0 < q_i < 1$ for all $i=1,2, \cdots , n$.
How can I find the sum of this series? Please help me in this regard.
Thank you very much.
| Hint:
\begin{align}
\frac{q_1}{p_1}+\frac{q_1 q_2}{p_1 p_2} &=
\frac{q_1}{p_1} \left( 1+\frac{q_2}{p_2} \right) \\
&= \frac{q_1}{p_1 p_2}
\end{align}
Updates for further thoughts:
The sum is not as trivial as we think at first glance. First of all, we write the sum into Horner's form
$$S_n=\frac{q_1}{p_1}
\left \{
1+\frac{q_2}{p_2}
\left[
1+\ldots+\frac{q_{n-1}}{p_{n-1}}
\left( 1+\frac{q_n}{p_n} \right)
\right]
\right \}$$
and the summary for the first few cases:
\begin{align}
S_1 &= \frac{q_1}{p_1} \\
S_2 &= \frac{q_1}{p_1 p_2} \\
S_3 &= \frac{q_1(q_2+p_2 p_3)}{p_1 p_2 p_3} \\
S_4 &= \frac{q_1(q_2 q_3+p_3 p_4)}{p_1 p_2 p_3 p_4} \\
S_5 &= \frac{q_1[q_2(q_3 q_4+p_4 p_5)+p_2 p_3 p_4 p_5]}
{p_1 p_2 p_3 p_4 p_5} \\
S_6 &= \frac{q_1[q_2 q_3(q_4 q_5+p_5 p_6)+p_3 p_4 p_5 p_6]}
{p_1 p_2 p_3 p_4 p_5 p_6} \\
S_7 &= \frac{q_1 \{q_2 [q_3 q_4(q_5 q_6+p_6 p_7)+p_4 p_5 p_6 p_7]+
p_2 p_3 p_4 p_5 p_6 p_7 \}}
{p_1 p_2 p_3 p_4 p_5 p_6 p_7} \\
\end{align}
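The tabulated closed forms can be spot-checked numerically (the helper name and random choice of $p_i$ are our own):

```python
import random

def series_sum(p):
    # q1/p1 + q1 q2/(p1 p2) + ... with q_i = 1 - p_i
    total, prod = 0.0, 1.0
    for pi in p:
        prod *= (1 - pi) / pi
        total += prod
    return total

random.seed(1)
p1, p2, p3 = (random.uniform(0.1, 0.9) for _ in range(3))
q1, q2, q3 = 1 - p1, 1 - p2, 1 - p3
print(abs(series_sum([p1, p2]) - q1 / (p1 * p2)))                            # ~0
print(abs(series_sum([p1, p2, p3]) - q1 * (q2 + p2 * p3) / (p1 * p2 * p3)))  # ~0
```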
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2986177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Can some group $G$ have an infinite number of cosets? My textbook states the number of cosets of $H$ in a finite group $G$ is $|G|/|H|$, however Is it possible for a group $G$ to have a subgroup $H$ such that $gH$ has infinite cardinality? I can't think of any examples.
Edit, a more concise way of putting it is,
is there any $H\le G$ s.t. $|G/H|$ is infinite?
| An interesting fact about the Prüfer $p$-group $\mathbb{Z}(p^\infty)$: if $H$ is a proper subgroup of $\mathbb{Z}(p^\infty)$, then
$$
\mathbb{Z}(p^\infty)/H\cong \mathbb{Z}(p^\infty)
$$
However, all proper subgroups of $\mathbb{Z}(p^\infty)$ are finite.
For other examples, consider $V$ a two-dimensional vector space over an infinite field and $U$ a one-dimensional subspace; then $U$ is infinite and $V/U$ is infinite as well.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2986374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
} |
Find an orthononormal basis of $U^{\bot}$ given $U$ Given $U =$ span$\lbrace u_1 = (6,2,-2,-2),u_2 = (-1,1,-1,-1)\rbrace$ find an orthonormal basis for $U$ and $U^\bot$
What I've done so far: $\langle u_1,u_2\rangle = 0 \implies u_1\bot u_2 \implies$ orthonormal basis given by $e_1 = \frac{u_1}{||u_1||},e_2 = \frac{u_2}{||u_2||}$.
However I don't know how to find the orthonormal basis for $U^\bot$ from here; I tried setting $v_1 = (a,b,c,d)$ and solving $\langle v,u_1\rangle = 0 \wedge \langle v,u_2\rangle = 0$ but only got $(a,b,c,d) = \vec{0}$ or $b = c + d$.
Thanks in advance :)
| As you've noted, $e_1,e_2$ is an orthonormal basis for $U$. Now, that basis can be expanded with some $e_3,e_4$ to an orthonormal basis of $\Bbb R^4$ (I am assuming that we are working in $\Bbb R^4$). Then $e_3$ and $e_4$ must be an orthonormal basis for $U^\perp$. Let's show this. First, $e_3$ and $e_4$ are linearly independent and orthonormal as they are part of an orthonormal basis for $\Bbb R^4$. Now, it is easy to see that $e_1\cdot(ae_3+be_4)=e_2 \cdot (ce_3+de_4)=0$ as $e_1,e_2,e_3,e_4$ are all orthogonal, so $\operatorname{span}\{e_3,e_4\}\subset U^\perp$. On the other hand, suppose $v\in U^\perp$. Then $v=a_1e_1+a_2e_2+a_3e_3+a_4e_4$ as they form a basis for $\Bbb R^4$. However, that means that $\langle e_1,v\rangle=a_1=0$ and $\langle e_2,v\rangle = a_2 = 0$, so $v=a_3e_3+a_4e_4\in\operatorname{span}\{e_3,e_4\}$. Therefore, $e_3$ and $e_4$ are a basis for $U^\perp$.
Now the question is simply how to extend $e_1$ and $e_2$ to an orthonormal basis. To do this, you can use the Gram-Schmidt procedure after extending $e_1$ and $e_2$ to a basis for $\Bbb R^4$.
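The whole computation, including the extension step, can be done with a complete QR factorization, a standard stand-in for Gram-Schmidt (the NumPy calls are our choice of tooling):

```python
import numpy as np

u1 = np.array([6.0, 2.0, -2.0, -2.0])
u2 = np.array([-1.0, 1.0, -1.0, -1.0])
assert np.isclose(u1 @ u2, 0)          # already orthogonal

e1, e2 = u1 / np.linalg.norm(u1), u2 / np.linalg.norm(u2)

# A complete QR of [u1 u2] extends {e1, e2} to an orthonormal basis of R^4;
# the last two columns then span U-perp.
Q, _ = np.linalg.qr(np.column_stack([u1, u2]), mode="complete")
e3, e4 = Q[:, 2], Q[:, 3]
for e in (e3, e4):
    assert np.isclose(e @ u1, 0) and np.isclose(e @ u2, 0)
print(e3, e4)
```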
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2986498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What is the remainder when $4^{10}+6^{10}$ is divided by $25$? Without using a calculator, how to decide? It must go with the last two digits of $4^{10}+6^{10}$; I can tell the last digit is $2$. How to tell the tens digit of the sum?
Thanks!
| The last digit of $4^x$ is $4,6,4,6,4,6\dots \implies 4^{10}$ has last digit $6$
The last digit of $6^x$ is always $6 \implies 6^{10}$ has last digit $6$
Therefore $4^{10} + 6^{10}$ has last digit $2$.
Any multiple of $25$ has last digit $0$ or $5 \implies 25$ cannot be a divisor!
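For what it's worth, a direct computation (Python integers are exact) also pins down the remainder itself:

```python
total = 4 ** 10 + 6 ** 10
print(total, total % 25)   # 61514752 2
```

So the remainder is $2$, consistent with $4^{10} \equiv 1$ and $6^{10} \equiv 1 \pmod{25}$.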
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2986592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Evaluating $\lim_{x\to3}\frac{\sqrt[3]{x+5}-2}{\sqrt[4]{x-2}-1}$ without L'Hopital I have the following limit question, where different indices of roots appear in the numerator and the denominator
$$\lim_{x\to3}\frac{\sqrt[3]{x+5}-2}{\sqrt[4]{x-2}-1}.$$
As we not allowed to use L'Hopital, I want to learn how we can proceed algebraically.
| It is rather unpopular to use the standard limit $$\lim_{x\to a} \frac{x^n-a^n} {x-a} =na^{n-1}$$ for evaluating limits of algebraic functions, but it is my preferred approach.
Dividing the numerator and denominator of the given expression by $x-3$ and putting $x+5=u$ in numerator and $v=x-2$ in denominator we can see that the desired limit is equal to $$\dfrac{{\displaystyle \lim_{u\to 8}\dfrac{u^{1/3}-8^{1/3}} {u-8}}} {{\displaystyle \lim_{v\to 1}\dfrac{v^{1/4}-1}{v-1}}}=\dfrac{\dfrac{1}{3}\cdot 8^{-2/3}}{\dfrac{1}{4}\cdot 1^{-3/4}} =\frac{1}{3}$$
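A numeric check from both sides of $x=3$ (the sample offsets are our choice) agrees with the value $\frac13$:

```python
def f(x):
    return ((x + 5) ** (1 / 3) - 2) / ((x - 2) ** 0.25 - 1)

for h in (1e-2, 1e-4, 1e-6):
    print(f(3 + h), f(3 - h))   # both sides approach 1/3
```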
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2986763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 4
} |
The interval [0,1] with additional modulo 1 The interval $[0, 1]$ is an abelian group with addition modulo $1$.
Let $H$ be a proper subgroup of $[0, 1]$, which is closed as subset of $[0,1]$.
Show $H$ is finite.
I assumed $H$ is infinite: since $H$ is closed, all limit points of $H$ are in $H$.
Then let $x$ be in $[0, 1]\setminus H=S$ ($S$ is open since $H$ is closed) so there exists an open set $U_x$ which contains $x$ in it and $U_x \subset S$.
Let $\{x_n \}_n$ be a sequence converging to $x$. Thus for all $\epsilon>0$ there exists a $N$ natural number so that for all $n>N$ $|x_n-x|<\epsilon$.
For some $\epsilon_1 >0$, $B(x,\epsilon_1)$ is contained in $U_x$; then if I prove that $x$ is a limit point of $H$, I can say $H$ is dense (because I can express a generic element of $[0, 1]$ as belonging to $H$ or as a limit point of $H$), then the closure of $H$ would equal $[0, 1]$, and this would contradict $H$ being a proper subgroup.
But I couldn't find a way to show that.
| In case you did not know: you are asking about the subgroups of the circle $S^1$ in the complex plane $\mathbb{C}$. The exponential $t\mapsto e^{2\pi i t}\colon [0, 1]_{/\mathbb{Z}}\rightarrow S^1$ defines a group isomorphism that is actually a homeomorphism of spaces.
The problem is widely explained in other questions, for example this one should contain all you need:
Subgroup of the unit circle under complex multiplication
Anyway, I see a problem in your proof: basically you are not using the fact that $H$ is a subgroup of $S^1$. To get that $x$ is a limit point of $H$ you have to use this; otherwise you would show that every infinite subset of $S^1$ is dense (clearly false). You could use an argument similar to the one suggested by Seirios in the question I linked: by taking the preimage along the projection, get a subgroup of $\mathbb{R}$ and consider the infimum of its positive elements.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2986965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
how to show that random variable is almost surely finite? Coin toss problem Consider the Coin toss problem, i.e. let $Z_{i} : \Omega \rightarrow \{0,1\}$ with
$$
Z_{i}\left(\omega\right) =
\begin{cases}
1 & \text{if }\omega = H\\
0 & \text{if } \omega = T
\end{cases}
$$
be the outcome of the $i$-th coin toss with $\Omega = \{H,T\}$. Assume all the $Z_{i}$ are independent and identical distributed with
$$
\mathbb{P}(Z_{i} = 1) = \mathbb{P}(Z_{i} = 0) = \frac{1}{2}.
$$
Now define
$$
R := \min \{k\geq 1 \mid Z_{k} = 1,Z_{k+1} = 0,Z_{k+2} = 1, Z_{k+3} = 0\}.
$$
$R$ is the number of tosses to get the pattern HTHT.
My question ist how to show that $R$ is almost surely finite. Any hints?
| Hints:
*
*Show that $$A_k := \{Z_{4k+1}=1, Z_{4k+2}=0, Z_{4k+3}=1, Z_{4k+4}=0\}$$
*Show that the events $A_k$, $k \geq 1$, are independent.
*It follows from Step 1 that $$\sum_{k \geq 1} \mathbb{P}(A_k) = \infty.$$ Apply the Borel Cantelli lemma (using Step 2) to conclude that $$\mathbb{P}(A_k \, \, \text{infinitely often})=1.$$
*Conclude that $\mathbb{P}(R<\infty)=1$.
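A simulation also makes the conclusion plausible: with a generous toss budget, the pattern always appears, and the average waiting time is modest. (The helper names, seed, and sample sizes are our choices.)

```python
import random

def first_htht(rng, max_tosses=100_000):
    # Toss until H,T,H,T (coded 1,0,1,0) appears; return the toss count
    last = []
    for t in range(1, max_tosses + 1):
        last = (last + [rng.randint(0, 1)])[-4:]
        if last == [1, 0, 1, 0]:
            return t
    return None

rng = random.Random(42)
waits = [first_htht(rng) for _ in range(2000)]
print(all(w is not None for w in waits), sum(waits) / len(waits))
```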
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2987309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to show these three-regular graphs on 10 vertices are non isomorphic?
The number of vertices and edges are same, with each vertex having the same degree and the degree sequence of the graph is also the same. I have even tried finding a bipartite graph in any one of them even that seems to fail.
Question- How to show the three graphs with degree sequence [3,3,3,3,3,3,3,3,3,3] are non isomorphic (see figure)?
| The first graph is the only one not containing $4$-cycles, so it's not isomorphic to the other two. The second and third graphs are not isomorphic because the third is planar and the second contains a subdivision of $K_{3,3}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2987449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
The function from the $\{\}$ to an any other set? In the $\mathcal{SET}$ - category of sets and maps between them - there is an initial object - the $\{\}$. It means that there is unique map from the $\{\}$ to an any other set (object of $\mathcal{SET}$).
Not sure I understand it completely. A map $\{\} \to S \in Obj(\mathcal{SET})$ must take each $e \in \{\}$ to some $s \in S$. However, there is no such $e$, by the definition of the empty set. Thus such a function does exist vacuously, but why is it unique? As long as a particular $S$ contains more than one element, aren't there many maps of the kind $\{\} \to S$? Then the only way to prove that all those maps are essentially the same is to compare them to each other. In order to run the comparison, one has to somehow "compute" those functions, i.e. feed some $e \in \{\}$ to each and ensure that the outputs $s_0, s_1, ..., s_n \in S$ are the same. Otherwise, how can you determine that the abovementioned functions are equal, bypassing their evaluation?
| Think of it this way: we say two functions $f$ and $g$ on the same domain are different if there exists an $x$ in that domain so that $f(x) \neq g(x)$. In the case where the domain is empty, there obviously cannot be such an $x$, so $f$ and $g$ are automatically not different.
More formally, a function is formally defined as a set of ordered pairs $(e, f(e))$, where $e$ is taken from the domain. If the domain is empty, no such pairs can exist - so any function with domain $\{\}$ must itself be the empty set. And two "copies" of the empty set are automatically equal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2987549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Properties of Lucas sequence I want to prove the following properties of Lucas sequence:
*
*$3\mid L_m \iff m\equiv 2\pmod 4$
*$L_k\equiv 3\pmod 4$, where $2\mid k$ and $3\nmid k$.
$$$$
For the first property do we use induction?
Does the second property follow from the first one?
| In both cases, it helps to consider the Lucas numbers modulo $m$. For example, modulo $3$, the Lucas numbers (zero-indexed) begin $$2, 1, 0, 1, 1, 2, 0, 2, 2, 1, 0, 1, 1, 2, 0, \dots$$ and you may be able to spot a periodic pattern here: the sequence $2,1,0,1,1,2,0,2$ repeats over and over.
If you prove this periodic pattern, then your first statement follows just by looking where the $0$ appears. Similarly, if you find and prove a periodic pattern for $L_k \bmod 4$, then the second statement will follow by looking where the $3$ appears.
The pattern can be proved by some kind of induction. Essentially, knowing $L_k$ and $L_{k+1}$ modulo $m$ tells you $L_{k+2}$ modulo $m$, so once the mod-$m$ sequence repeats its starting values of $2,1$ once, you know that it will repeat them forever.
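Both periodic patterns, and the two stated properties, can be verified by direct computation (the helper name and the window sizes are our choices):

```python
def lucas_mod(m, count):
    seq, a, b = [], 2, 1          # L_0 = 2, L_1 = 1 (zero-indexed)
    for _ in range(count):
        seq.append(a % m)
        a, b = b, a + b
    return seq

mod3 = lucas_mod(3, 64)
assert mod3 == mod3[:8] * 8                        # period 8 modulo 3
assert [k for k, r in enumerate(mod3) if r == 0] == \
       [k for k in range(64) if k % 4 == 2]        # property 1 (an iff)

mod4 = lucas_mod(4, 60)
assert mod4 == mod4[:6] * 10                       # period 6 modulo 4
assert all(mod4[k] == 3                            # property 2 (one direction)
           for k in range(60) if k % 2 == 0 and k % 3 != 0)
print(mod3[:8], mod4[:6])
```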
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2987649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Equivalence of two methods for generating random numbers that sum to 1 I want a vector $\vec{v}$ of $N$ non-negative random numbers that sum to 1.
Let $X(a)$ be the (continuous) uniform distribution over interval $[0, a]$.
Let $S(n) = \sum_{i = 1}^{n} v_{i}$ be the partial sum of the elements of $\vec{v}$
Method 1
*
*Generate: For each $k$, set $v_k$ to a random number from $X(1)$.
*Normalize: Divide $\vec{v}$ by sum of all elements of $\vec{v}$.
Method 2
Generate the elements of $\vec{v}$ one after another with the following steps.
*
*Generate 1st element: set $v_1$ to a random number from $X(1)$.
*Generate 2nd element: set $v_2$ to a random number from $X(1 - v_{1})$
...
*Generate the $k^{th}$ element: set $v_k$ to a random number from $X(1 - S(k-1))$.
...
*Calculate the last element: set $v_N$ to $1 - S(N - 1)$.
Question
Are the two methods equivalent?
Do the two methods generate $\vec{v}$ with the same $N$ dimensional probability density?
Thank you.
| These methods are certainly not equivalent. It is easy to see that in the first method, by symmetry, $\mathbb{E}[v_k]=\frac 1N$ for every $k$, while in the second method, $\mathbb{E}[v_k]=\frac 1{2^k}$ for $k<N$ (you can prove this with induction and linearity of expectation), so already the expectations differ.
In response to a comment on another answer, scrambling the elements of the array also does not make the methods equivalent.
With method 2, there is at least a $\frac 13$ probability that there exists an element with value greater than $\frac 23$: already the element that was chosen first exceeds $\frac 23$ with probability $\frac 13$.
With method 1, there is only a small probability that some element has value greater than $\frac 23$. To show this, let the vector before normalization be $u$, so that $v=\frac u{\sum u}$. Let us consider the case where element $n$ has value greater than $\frac 23$. Clearly this happens only if $\sum_{i=1}^{n-1}u_i \leq \frac {u_n}2 \leq \frac 12$. But if $\sum_{i=1}^{n-1}u_i \leq \frac 12$, then $u_i \leq \frac 12 \ \forall i \in [1\dots n-1]$, and clearly this only happens with probability at most $\frac 1{2^{n-1}}$. (Union bounding over the coordinates only changes this to $\frac n{2^{n-1}}$, which is $\leq \frac 13$ once $n\geq 5$.)
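A simulation makes the difference in tail behaviour visible (the sample sizes, seed, and dimension are our choices):

```python
import random

def method1(n, rng):
    # Generate n uniforms and normalize by their sum
    u = [rng.random() for _ in range(n)]
    s = sum(u)
    return [x / s for x in u]

def method2(n, rng):
    # Draw each element uniformly from what remains of [0, 1]
    v, remaining = [], 1.0
    for _ in range(n - 1):
        x = rng.uniform(0.0, remaining)
        v.append(x)
        remaining -= x
    return v + [remaining]

rng = random.Random(0)
n, trials = 5, 20_000
p1 = sum(max(method1(n, rng)) > 2 / 3 for _ in range(trials)) / trials
p2 = sum(max(method2(n, rng)) > 2 / 3 for _ in range(trials)) / trials
print(p1, p2)   # p2 is at least about 1/3; p1 is far smaller
```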
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2987792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Proof of Isoperimetric Inequality The authors of "Introduction to the Heisenberg Group and the Sub-Riemannian Isoperimetric Inequality" give several proofs of the classical isoperimetric inequality in the plane in the first chapter. There is one I cannot understand.
Let us assume $\Omega \subset \mathbb{C}$ has a $C^1$ Jordan curve as its boundary. Denote the area enclosed by $A$ and the perimeter as $L$. We know the area form in complex notation is $i/2 dz \wedge d\bar{z}$ so
$$4\pi A = \int_\Omega 2 \pi i dz \wedge d\bar{z} = \int_{\Omega} \int_{\partial \Omega} \frac{d\zeta dz \wedge d\bar{z}}{\zeta-z} =^{?} \int_{\partial \Omega} \int_{\partial \Omega}\frac{\bar{\zeta}-\bar{z}}{\zeta-z}dzd\zeta \le L^2$$
I am very puzzled by the last equality. It seems like Green's theorem is being used but I cannot work out in what way.
| It is Stokes' theorem after swapping the order of integration.
As differentials on the $z$-plane (i.e., fixed $\zeta$),
\begin{align*}
\mathrm{d}\left(\frac{\bar{\zeta}-\bar{z}}{\zeta-z}\mathrm{d}z\right)
&=\frac{\partial}{\partial\bar{z}}\left(\frac{\bar{\zeta}-\bar{z}}{\zeta-z}\right)\,\mathrm{d}\bar{z}\wedge\mathrm{d}z\\
&=-\frac1{\zeta-z}\,\mathrm{d}\bar{z}\wedge\mathrm{d}z\\
&=\frac1{\zeta-z}\,\mathrm{d}z\wedge\mathrm{d}\bar{z}
\end{align*}
So
$$
\begin{align*}
\int_\Omega\int_{\partial\Omega}\frac1{\zeta-z}\,\mathrm{d}\zeta\,\mathrm{d}z\wedge\mathrm{d}\bar{z}
&=\int_{\partial\Omega}\color{red}{\int_\Omega\frac1{\zeta-z}\,\mathrm{d}z\wedge\mathrm{d}\bar{z}}\,\mathrm{d}\zeta\\
&=\int_{\partial\Omega}\color{red}{\int_{\partial\Omega}\frac{\bar{\zeta}-\bar{z}}{\zeta-z}\mathrm{d}z}\,\mathrm{d}\zeta.
\end{align*}
$$
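(Not part of the original answer — just a numeric sanity check.) For the unit circle the chain of equalities can be tested directly: there $4\pi A = L^2 = 4\pi^2$ and the isoperimetric inequality is attained. A minimal pure-Python sketch; the grid size is an arbitrary choice:

```python
import cmath
import math

# Midpoint-rule approximation of the double boundary integral
#   I = oint oint (conj(zeta) - conj(z)) / (zeta - z) dz dzeta
# over the unit circle, where 4*pi*A = L^2 = 4*pi^2 and equality holds.
N = 400
h = 2 * math.pi / N
zs = [cmath.exp(1j * h * k) for k in range(N)]
zetas = [cmath.exp(1j * h * (k + 0.5)) for k in range(N)]  # offset grid avoids zeta == z

total = 0j
for z in zs:
    dz = 1j * z * h                # dz = i e^{is} ds
    for zeta in zetas:
        dzeta = 1j * zeta * h      # dzeta = i e^{it} dt
        total += (zeta.conjugate() - z.conjugate()) / (zeta - z) * dz * dzeta

print(total.real, 4 * math.pi ** 2)   # both approximately 39.478
```

For the circle each summand reduces algebraically to $h^2$ (since $\bar\zeta-\bar z=(z-\zeta)/(z\zeta)$ on $|z|=|\zeta|=1$), so the sum recovers $4\pi^2$ essentially to machine precision.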
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2987943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Evaluating the integral $\int_0^{\infty}\frac{dx}{\sqrt[4]{x}(1+x^2)}$ using Residue Theorem I need to evaluate the integral $$\int_0^{\infty}\frac{dx}{\sqrt[4]{x}(1+x^2)}$$
I've been given the hint to use the keyhole contour. So I would first choose the principal branch of $\sqrt[4]{\cdot}$, then I have the "keyhole" around $0$, giving me
$$2\pi i\left(\text{Res}_if+\text{Res}_{-i}f\right)= \\ \int_{\gamma_{R,\epsilon}}\frac{dz}{\sqrt[4]{z}(1+z^2)}=\int_{\gamma_R}f(z)dz-\int_{\gamma_\epsilon}f(z)dz+\int^R_{\epsilon} f(z)dz - \int^R_{\epsilon} f(-z)dz$$
($f$ is the integrand) Then taking $R \rightarrow \infty$ and $\epsilon \rightarrow 0$, should give me my result. But I happened to check out the integral in Wolfram Alpha and it gives the integral as $\frac{1}{2}\pi\sec\left(\frac{\pi}{8}\right)$, which is not what I get. Can I get some help? I'm sure I've gone wrong somewhere, and I'm pretty new at using these arguments, so help or insights will be nice.
| Keeping it simple we introduce
$$f(z) = \exp(-(1/4)\mathrm{Log}(z)) \frac{1}{1+z^2}$$
with the branch cut of the logarithm on the positive real axis and
argument from $0$ to $2\pi.$ The slot of the keyhole rests on the
positive real axis and the contour is traversed counter-clockwise. Let
the segment above the real axis be $\Gamma_1,$ the large circle
$\Gamma_2$, the segment below the positive real axis $\Gamma_3$ and
the small circle around the origin $\Gamma_4.$
We get for $\Gamma_1$ in the limit
$$J = \int_0^\infty \frac{1}{\sqrt[4]{x}} \frac{1}{1+x^2} \; dx,$$
i.e. the target integral. The contribution from the circular
components vanishes in the limit. We get below the cut on $\Gamma_3$
in the limit
$$\exp(-(1/4)2\pi i)
\int_\infty^0 \frac{1}{\sqrt[4]{x}} \frac{1}{1+x^2} \; dx
\\= - \exp(-(1/2)\pi i)
\int_0^\infty \frac{1}{\sqrt[4]{x}} \frac{1}{1+x^2} \; dx
= iJ.$$
We have for the first residue at the pole $z=i$
$$\left.\exp((-1/4)\mathrm{Log}(z))\frac{1}{z+i}\right|_{z=i}
= \exp((-1/4)\pi i/2 )\frac{1}{2i}$$
and for the second one
$$\left.\exp((-1/4)\mathrm{Log}(z))\frac{1}{z-i}\right|_{z=-i}
= -\exp((-1/4)3\pi i/2)\frac{1}{2i}.$$
Collecting everything we have
$$(1+i) J = 2\pi i \frac{1}{2i}
(\exp(-\pi i/8) - \exp(-3\pi i/8)).$$
This is
$$J = \pi (\exp(-\pi i/8) - \exp(-3\pi i/8))
\frac{1}{\sqrt{2}} \exp(-\pi i /4)
\\ = \frac{\sqrt{2}}{2} \pi (\exp(-3\pi i/8) - \exp(-5\pi i/8))
\\ = \frac{\sqrt{2}}{2} \pi \exp(-4\pi i/8)
(\exp(\pi i/8) - \exp(-\pi i/8))
\\ = \frac{\sqrt{2}}{2} \pi \exp(-\pi i/2)
\times 2i \sin(\pi/8).$$
The end result is
$$\bbox[5px,border:2px solid #00A000]{
\sqrt{2} \times \pi \times \sin(\pi/8).}$$
Remark. As per the contribution from the circles vanishing, we get
for the large circle $\Gamma_2$ $\lim_{R\to\infty} 2\pi R / R^{1/4} /
R^2 = 0$ and for the small one $\Gamma_4$ $\lim_{\epsilon\to 0} 2\pi
\epsilon / \epsilon^{1/4} / 1 = 0.$
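For anyone who wants an independent numerical check of the boxed value (not needed for the contour argument): substituting $x=u^4$ removes the integrable singularity at the origin, after which a plain midpoint rule suffices. The truncation point $U=50$ is an arbitrary choice.

```python
import math

# J = integral_0^inf x^(-1/4)/(1+x^2) dx; with x = u^4 this becomes
# J = 4 * integral_0^inf u^2/(1+u^8) du, which is smooth and decays fast.
n, U = 200_000, 50.0
h = U / n
J = 0.0
for k in range(n):                 # midpoint rule on [0, U]
    u = (k + 0.5) * h
    J += u ** 2 / (1 + u ** 8)
J *= 4 * h

closed = math.sqrt(2) * math.pi * math.sin(math.pi / 8)
print(J, closed)                   # both approximately 1.70022
```

This also matches Wolfram Alpha's form from the question, since $\sqrt{2}\sin(\pi/8)=\tfrac{1}{2}\sec(\pi/8)$.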
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2988050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Eigenvalues of matrix $A$ $n \times n$, knowing that $A^2+A-6I_n=O_n$ How can I find ALL eigenvalues of matrix $A$, $n \times n$, $n \ge 3$, knowing that
$$A^2+A-6I_n=O_n.$$
I applied the Cayley-Hamilton theorem, and obviously two of the eigenvalues are $-3$ and $2$.
However, how do I find the rest of them?
| I will use a small result related to minimal polynomial.
Result: The characteristic polynomial and the minimal polynomial have the same roots, possibly with different multiplicities.
Now, $x^2+x-6=(x+3)(x-2)$ annihilates $A$. So, the minimal polynomial of $A$ divides $(x+3)(x-2)$. So, we have three options for the minimal polynomial : $(x+3)(x-2), (x+3)$ and $(x-2)$. In all the cases we get $-3$ and $2$ are the only possible eigenvalues.
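To make this concrete, here is a hypothetical $3\times 3$ example (my own, not from the question): a triangular matrix satisfying $A^2+A-6I=0$, whose eigenvalues — the diagonal entries of a triangular matrix — are indeed only $2$ and $-3$, as the minimal-polynomial argument predicts.

```python
# Hypothetical example: a lower-triangular A with A^2 + A - 6I = 0.
A = [[2, 0, 0],
     [0, -3, 0],
     [5, 0, -3]]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A2 = matmul(A, A)
residual = [[A2[i][j] + A[i][j] - (6 if i == j else 0) for j in range(3)]
            for i in range(3)]
print(residual)                              # zero matrix
print(sorted({A[i][i] for i in range(3)}))   # eigenvalues: [-3, 2]
```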
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2988219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
For which polynomials $p(x)$ is $p(p(x))+p(x)$=$x^4+3x^2+3$, for all $x \in \mathbb{R}$ For which polynomials $p(x)$ is $p(p(x))+p(x)$=$x^4+3x^2+3$, for all $x \in \mathbb{R}$
Since the right-hand side has degree $4$, $p(x)$ has to have degree $2$.
So I assumed a solution of: $p(x)=ax^2+bx+c$ and then i put it in $p(p(x))+p(x)$ and got:
$a(ax^2+bx+c)^2+b(ax^2+bx+c)+c+ax^2+bx+c=x^4+3x^2+3$
To find the coefficients I tried $x=0$ and got
$ac^2+bc+2c=3$. But here I'm stuck, how do I go from here?
| Identifying the coefficient of $x^4$ one gets $a=1$ and then the coefficient of $x^3$ you get $b=0$ hence $c=1$ or $c=-3$. Then check that $x^2+1$ is indeed a solution while $x^2-3$ is not.
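Since both sides of the functional equation are degree-$4$ polynomials, agreement at five points already forces them to be identical, so the two candidates from the answer can be checked numerically (the sample points are arbitrary):

```python
def satisfies(p):
    # p(p(x)) + p(x) == x^4 + 3x^2 + 3 at 5 points forces equality of the
    # two degree-4 polynomials, hence equality as polynomials.
    return all(abs(p(p(x)) + p(x) - (x ** 4 + 3 * x ** 2 + 3)) < 1e-9
               for x in (-2.0, -0.5, 0.0, 1.0, 3.0))

print(satisfies(lambda x: x ** 2 + 1))   # True
print(satisfies(lambda x: x ** 2 - 3))   # False
```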
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2988298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Solving a Markov Chain Let the distribution on variables $(X_t)$ for $t \in N$ satisfy a Markov chain. Each variable can take the values $\{1, 2\}$. We are given the pmfs
$$p(X_1=i) = 0.5$$
for $i=1,2$ and
$$p(X_{t+1} = j\mid X_t = i) = p_{i,j}$$
where $p_{i,j}$ is the $(i, j)$-th element of the matrix
$$P=\begin{pmatrix}
0.3 & 0.7\\
0.6 & 0.4
\end{pmatrix}$$
Find: $P(X_3 = 2)$ and $p(X_2 = 1\mid X_3 = 2)$.
I'm stuck with how to start this problem. So any hints would be appreciated.
| Hints: By the law of total probability:
$$P(X_3=2)=\sum_{i,j\in \{1,2\}}P(X_3=2\mid X_1=i,X_2=j)P(X_1=i,X_2=j)$$ But by the Markov property, $X_3$ is conditionally independent of $X_1$ given $X_2$, hence the inner term simplifies to $$P(X_3=2)=\sum_{i,j\in \{1,2\}}P(X_3=2\mid X_2=j)P(X_1=i,X_2=j)$$ Now, $P(X_3=2\mid X_2=j)$ is easy to compute (right?) and $$P(X_1=i,X_2=j)=P(X_2=j\mid X_1=i)P(X_1=i)$$ which is again easy to compute (right?) for any $i,j$.
For the second part, just use Bayes rule:
$$P(X_2=1\mid X_3=2)=\frac{P(X_3=2\mid X_2=1)P(X_2=1)}{P(X_3=2)}$$ and of course use the first part to avoid repeating calculations.
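Carrying the hints through numerically (spoiler: this computes the final answers, so skip it if you want to finish the exercise yourself):

```python
# Propagate the initial distribution through the transition matrix P,
# then apply Bayes' rule as in the hints.
P = [[0.3, 0.7],
     [0.6, 0.4]]
p1 = [0.5, 0.5]                                   # law of X_1

p2 = [sum(p1[i] * P[i][j] for i in range(2)) for j in range(2)]  # law of X_2
p3 = [sum(p2[i] * P[i][j] for i in range(2)) for j in range(2)]  # law of X_3

prob_X3_is_2 = p3[1]                                  # 0.535
prob_X2_is_1_given_X3_is_2 = P[0][1] * p2[0] / p3[1]  # 0.7 * 0.45 / 0.535
print(prob_X3_is_2, prob_X2_is_1_given_X3_is_2)       # 0.535, about 0.5888
```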
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2988376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
How do I prove that $\cos\left(2x\right)=1-2\sin^2\left(x\right)$? While trying to solve the equation $\sin\left(x\right)=\cos\left(2x\right)$, a user on this forum suggested that I turn the equation into a quadratic form by converting $\cos(2x)$ using the identity $\cos\left(2x\right)=1-2\sin^2\left(x\right)$.
What is the logic behind this identity and how can I derive it?
| write $$\cos (2x)=\cos(x+x)$$
you know $$\cos(x+y)=\cos x\cdot \cos y-\sin x\cdot \sin y$$
So, $$\cos(2x)=\cos x\cdot \cos x-\sin x\cdot \sin x$$
Or, $$\cos(2x)=\cos^2x-\sin^2x$$
write $\cos^2x$ as $1-\sin^2x$
So, $$\cos(2x)=1-\sin^2x-\sin^2x$$
you get $$\cos(2x)=1-2\sin^2x$$
Similarly get $$\cos(2x)=\cos^2x-\sin^2x=1-2\sin^2x=2\cos^2x-1$$
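A purely illustrative numerical spot check of the equivalent forms:

```python
import math

# Verify cos(2x) = 1 - 2 sin^2(x) = 2 cos^2(x) - 1 on a grid of points.
ok = all(
    abs(math.cos(2 * x) - (1 - 2 * math.sin(x) ** 2)) < 1e-12
    and abs(math.cos(2 * x) - (2 * math.cos(x) ** 2 - 1)) < 1e-12
    for x in (k * 0.1 for k in range(-30, 31))
)
print(ok)  # True
```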
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2988504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 4
} |
Inverse of operator with kernel $K(x,y)=(1-xy)^{-1}$ on the unit interval Suppose $$f(x)=\int_0^1 \frac{g(y)}{1-xy}dy.$$ Is there a $G(x,y)$ such that
$$g(y)=\int G(x,y)f(x)dx?$$ (with some integration contour)
| So we have an operator $T$ bringing $g(x)$ into $(Tg)(x)=\int_{0}^{1}\frac{g(y)}{1-xy}\,dy$ and we want an explicit representation for $T^{-1}$, fine. We may notice that $[x^n](Tg)(x)=\int_{0}^{1}y^n g(y)\,dy$, so the reconstruction of $g$ from $Tg$ is equivalent to solving the moment problem. Under suitable regularity assumptions, it can be done through the Laplace or Mellin transform.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2988546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Simple proof on set difference
Prove if $A$ is a subset of $B$, then $A\backslash B= \emptyset$.
I used a proof by contrapostive.
The assumption (that $A\backslash B$ is not the empty set) means there exists an $x$ such that $x\in A\backslash B$, i.e. $x\in A$ and $x\not\in B$. This implies that $A$ is not a subset of $B$.
How is this proof? Thanks.
| Your proof is correct. You could have avoided using the contrapositive and just proved it directly:
Suppose that $A \subseteq B$. If there is some $x \in A \setminus B$, then $x \in A$ and $x \not\in B$, which contradicts the assumption that $A \subseteq B$; hence $A \setminus B$ has no elements, so is empty.
It doesn't really make much of a difference to the structure of the proof, though—the original statement is of the form $p \Rightarrow \neg q$, so its contrapositive is also of the form $q \Rightarrow \neg p$ (where $p$ means '$A \subseteq B$' and $q$ means '$A \setminus B$ is inhabited').
But as a general rule, it's nice not to use indirect proof techniques if you don't have to.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2988817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Orthonormal basis: Countable $\infty$ vs. Uncountable $\infty$ My doubt is the following, when you create an orthonormal basis for a space, the number of coefficients in each vector, and the number of vectors is equal to the dimension of the space (at least in finite dimensional spaces).
For example the standard basis for $\mathbb{R}^3$ is $(1,0,0),(0,1,0),(0,0,1)$.
What happens when we consider infinite dimensional Hilbert spaces?
For instance, let $\mathcal{L}_2(-\pi ,\pi) $ be the collections of functions $\chi =\{x(t):-\pi \leq t \leq \pi\}$ for which $\int_{-\pi}^{\pi}|x^2(t)|dt<\infty$. Define the vector addition and scalar multiplication coordinatewise such that we end up with a Hilbert space.
My text book says that there's a set of vectors $\{z_n:n=0,\pm1,...\}$ that is a complete orthonormal sequence in $\mathcal{L}_2(-\pi ,\pi)$.
My question is, I know that every single $x(t)$ has an infinite dimension, since we are considering a continuous function, but not countable infinite, while the number of vectors $z_n$ is clearly countable infinite.
That makes me think: isn't it necessary that the number of components in a vector of the "basis" and the number of vectors that form the latter coincide?
Hopefully you will shed some light on the problem.
| You can indeed specify a member of ${\mathcal L}_2(-\pi,\pi)$ with countably many real numbers. That doesn't say you have to: you can also choose to specify
the values $f(t)$ for all $t \in (-\pi,\pi)$ (modulo the non-uniqueness due to the fact that these are really
equivalence classes of functions rather than functions: you can change $f$ on a set of measure $0$ and it's the same
member of $\mathcal L_2$). But those uncountably many choices are not independent.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2988914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Consequences of $f \neq 0, f^2 = 0$ I have been given the following problem.
Given $f: V \to V$ linear. $f \neq 0$, $f^2 = 0$, demonstrate that $(f(u_1), f(u_2) ... f(u_m))$ is linearly independent, where $(u_1, u_2...u_m)$ is a basis of the supplementary subspace of the kernel ($U \oplus \ker(f) = V$)
My reasoning is the following:
$\bar{x} \in V \implies \bar{x} = \bar{x_1} + \bar{x_2}$, $x_1 \in \ker(f), x_2 \in U$
$f(\bar{x}) = f(\bar{x_2})$ since $x_1 \in \ker(f)$
$0 = f(\bar{x_2})$ since $f(x) \in \operatorname{Im}(f) \subseteq \ker(f)$
$0 = a_1 f(u_1) + a_2 f(u_2) ... + a_m f(u_m) $
I know if $f$ restricted to $U$ were injective, then $a_1=a_2=\dots=a_m=0$, but I don't know how to demonstrate this.
| Hint:
You don't really need the hypothesis $f^2=0$. Consider the restriction of $f$ to the supplementary subspace $U$. Which properties does it have?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2989038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Suppose $f:G\longrightarrow H$ is a group homomorphism with $H$, $Ker(f)$ finite. Is $G$ finite? Let $f:G\longrightarrow H$ be a group homomorphism with $G$ not necessarily a finite group, but $H$ is a finite group. By the first isomorphism theorem we have:
$\frac{G}{Ker(f)}\cong Im(f)$.
Suppose further that we know that $Ker(f)$ is finite. Is it now possible to conclude that $G$ is a finite group?
I am currently under the impression that Lagrange's theorem can't be used, since it assumes the very thing we are trying to prove. Perhaps I am missing something obvious. Any help would be vastly appreciated.
| The kernel is one of the cosets in the quotient group and all cosets are the same size. Since the image is finite, there are a finite number of cosets. A finite number of cosets, each of a finite size implies that there are a finite number of elements in total.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2989138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Proving $e$ is irrational using a Beukers-like integral We know the following, for some integers $a_n,b_n$ and $n$ where $n\geq0$:
\begin{align}
&I_n = \int_0^1 x^n(1-2x)^n e^x dx = a_ne+b_n \\
&|I_n|= \left\lvert \int_0^1 x^n(1-2x)^n e^x dx \right\rvert \leq \left( \frac{1}{8} \right)^n(e-1)
\end{align}
therefore, as $n\to\infty$ the integral $I_n$ tends to zero. So if we assume $e=p/q$, then $qI_n=pa_n+qb_n$ would be an integer that tends to zero, which is not possible unless the integer is $0$ itself. So, if we prove that $I_n$ is not zero, then this concludes the proof that $e$ is irrational.
But how to prove that $I_n \neq0$ for all integer $n\geq0\:?$
PS: I know that if we changed the polynomial inside $I_n$ it would be easier to show that $I_n \neq 0$ for all $n$, but I'm interested in this case in particular.
EDIT: I believe the estimation for $I_n$ is wrong. I estimated it by the following way:
\begin{align}
&x(1-2x)\leq\text{max}[x(1-2x)] = 1/8\\
&x^n(1-2x)^n\leq \left( 1/8 \right)^n\\
& x^n(1-2x)^n e^x \leq \left( 1/8 \right)^n e^x\\
& \int_0^1 x^n(1-2x)^n e^x dx\leq \left( 1/8 \right)^n \int_0^1 e^xdx\\
& \int_0^1 x^n(1-2x)^n e^x dx\leq \left( 1/8 \right)^n (e-1)
\end{align}
| The case is trivial when $n$ is even ($I_n$ always $>0$), so we just assume $n$ is odd. Write
$$I_n=\underbrace{\int_0^{1/2} x^n(1-2x)^n e^x~\mathrm dx}_{J_1}+\underbrace{\int_{1/2}^1 t^n(1-2t)^n e^t~\mathrm dt}_{J_2}.$$
By considering the area under the curve, $J_1>0$ while $J_2<0$.
Claim: $-J_2> J_1$ when $n$ is odd ($\Leftrightarrow I_n=J_1+J_2<0$).
Proof: Substitute $x=t-\frac12$ in $-J_2$,
\begin{align*}
-J_2=-\int_0^{1/2}\left(x+\frac12\right)^n(-2x)^n e^{x+1/2}~\mathrm dx&=\underbrace{(-1)^{n+1}}_{=1}e^{1/2}\int_0^{1/2}\left(2x+1\right)^n x^n e^x~\mathrm dx\\
(\text{Using the fact $e^{1/2}>1$})\quad&>\int_0^{1/2}\left(2x+1\right)^n x^n e^x~\mathrm dx\\
(2x+1>1-2x\text{ when } x>0)\quad&>\underbrace{\int_0^{1/2}\left(1-2x\right)^n x^n e^x~\mathrm dx}_{J_1}\\
\end{align*}
and we are done.
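A numerical illustration of the sign pattern just proved — $I_n<0$ for odd $n$ and $I_n>0$ for even $n$ (the quadrature parameters are arbitrary):

```python
import math

def I(n, m=20_000):
    # midpoint rule for integral_0^1 x^n (1-2x)^n e^x dx
    h = 1.0 / m
    total = 0.0
    for k in range(m):
        x = (k + 0.5) * h
        total += x ** n * (1 - 2 * x) ** n * math.exp(x)
    return total * h

for n in range(1, 6):
    print(n, I(n))   # sign alternates: negative for odd n, positive for even n
```

For $n=1$ the exact value is $I_1=5-2e\approx-0.4366$, which the quadrature reproduces.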
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2989409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Prove that $\mathbb{R}^2\setminus E$ is path-connected
Let $E$ be the set of all points in $\mathbb{R}^2$ having both coordinates rational. Prove that the space $\mathbb{R}^2\setminus E$ is path-connected.
Path-connected definition: A topological space $(X,\tau)$ is said to be path-connected if given $a,b\in X$, there exists a continuous function $f:[0,1]\to X$ such that $f(0)=a$ and $f(1)=b$.
I have read a similar thread on mathstackexchange but I am failing to build the function that proves that any two points of $\mathbb{R}^2\setminus E$ are path-connected.
If we consider $(x_1,y_1),(x_2,y_2)\in\mathbb{R}^2\setminus E$ so that $x_1,y_2$ are irrational as proposed in the answer of another question.
I can build two functions $f:(x_1,y_1)\to(x_1,y_2)\\(x_1,y_1)\to(x_1,y_1+c)$
so that $c\in\mathbb{R}$
$g:(x_1,y_2)\to(x_2,y_2)\\(x_1,y_1)\to(x_1+d,y_2)$ so that $d\in\mathbb{R}$
So $f \circ g:(x_1,y_1)\to(x_2,y_2)$.
However this is not a generalization for all the points in $\mathbb{R}^2\setminus E$ and I cannot relate the function to the interval $[0,1]$.
Question:
How should I solve the exercise?
Thanks in advance!
| Let $C=\{c_1,c_2,\dots\}\subset \mathbb R^2$ be any countable set. Then $\mathbb R^2\setminus C$ is path connected.
Proof: Suppose $p,q$ are distinct points in $\mathbb R^2\setminus C.$ Consider the set of rays emanating from $p$ that contain a point of $C;$ the set of such rays is countable. Same thing for $q.$ Thus if we let $L$ be the perpendicular bisector of $[p,q],$ the set of intersections of these rays with $L$ is countable. Hence there exists $r\in L$ such that both $[p,r],[r,q]$ are disjoint from $C.$ We have therefore found an "isosceles" path from $p$ to $q$ within $\mathbb R^2\setminus C.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2989879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Growth of Digamma function
For $1\le \sigma \le 2$ and $t\ge 2$, $s=\sigma+it$ prove that $\displaystyle \frac{\Gamma'(s)}{\Gamma(s)}=O(\log t)$.
From Stirling's formula we have, $\displaystyle \Gamma(s)\approx \sqrt{2\pi}\exp\{s\log s-s-\frac 12 \log s\}$.
Then, $\displaystyle \frac{\Gamma'(s)}{\Gamma(s)}\approx\log s-\frac{1}{2s}$. From here I'm unable to estimate! Any hint?
Where can I get rigorous proof ?
Edit: Wikipedia links below the question are NOT clear enough to me.
| Let's use the following formula, from Abramowitz and Stegun, valid everywhere in the strip of interest:
$$
\psi(z)=-\gamma+\sum_{n=1}^{\infty}\frac{z-1}{n(n+z-1)}.
$$
Letting $a=\sigma-1\in[0,1]$,
$$
\begin{aligned}
\psi(\sigma+it)+\gamma&=\sum_{n=1}^{\infty}\frac{a+it}{n(n+a+it)}=\sum_{n=1}^{\infty}\frac{\left(a+it\right)(n+a-it)}{n\left((n+a)^2+t^2\right)}=\sum_{n=1}^{\infty}\frac{a(n+a)+int+t^2}{n\left((n+a)^2+t^2\right)}\\
&=\sum_{n=1}^{\infty}\frac{a(n+a)}{n\left((n+a)^2+t^2\right)}+i\sum_{n=1}^{\infty}\frac{t}{(n+a)^2+t^2}+\sum_{n=1}^{\infty}\frac{t^2}{n\left((n+a)^2+t^2\right)}.
\end{aligned}
$$
The first term is no more than $2a\sum_{n=1}^{\infty}1/n^2=\pi^2/3$, independent of $t$. The second and third terms can be bounded by integrals; the third integral must be split into two sums first. Specifically,
$$
\sum_{n=1}^{\infty}\frac{t}{(n+a)^2+t^2}\le \sum_{n=1}^{\infty}\frac{1/t}{(n/t)^2+1}\le\int_{0}^{\infty}\frac{dx}{x^2+1}=\frac{\pi}{2},
$$
and
$$
\sum_{n=t+1}^{\infty}\frac{t^2}{n\left((n+a)^2+t^2\right)}\le \sum_{n=t+1}^{\infty}\frac{1/t}{(n/t)\left((n/t)^2+1\right)}\le\int_{1}^{\infty}\frac{dx}{x(x^2+1)}=\log\left(\frac{x}{\sqrt{x^2+1}}\right)\Bigg\vert_{1}^{\infty}=\frac{1}{2}\log 2.
$$
Note the restricted range of summation. Finally,
$$
\sum_{n=1}^{t}\frac{t^2}{n\left((n+a)^2+t^2\right)}\le\sum_{n=1}^{t}\frac{1}{n}=O(\log t),
$$
so we conclude that $\psi(\sigma+it)$ is $O(\log t)$, with bounded imaginary part.
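The bound can also be probed numerically (an illustration only, not part of the proof): the sketch below evaluates $\psi$ on the line $\sigma=3/2$ via the standard recurrence-plus-asymptotic-series method and watches $|\psi(\sigma+it)|/\log t$ stay bounded.

```python
import cmath
import math

def digamma(z):
    # shift Re(z) up with psi(z) = psi(z+1) - 1/z, then use the asymptotic
    # series psi(z) ~ log z - 1/(2z) - 1/(12 z^2) + 1/(120 z^4) - 1/(252 z^6)
    s = 0j
    while z.real < 20:
        s -= 1 / z
        z += 1
    w = 1 / z
    return s + cmath.log(z) - w / 2 - w ** 2 / 12 + w ** 4 / 120 - w ** 6 / 252

ratios = [abs(digamma(complex(1.5, t))) / math.log(t)
          for t in (2, 10, 100, 1000, 10 ** 6)]
print(ratios)   # bounded, and tending to 1 as t grows
```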
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2990150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Show that the space of increasing, bounded function is not totally bounded w.r.t. $\sup$-norm Here is the exercise that I got.
Verify that the class $\mathcal{G} = \left\{ g: \mathbb{R} \to [0,1], g \text{ is increasing}\right\}$ is not totally bounded for the supremum norm on $\mathbb{R}$.
I am trying to prove this by constructing a counterexample (for a given $\epsilon > 0$ and some functions $g_1,\cdots,g_n$), but I don't know how. The rough idea I have is to construct $f$ such that $\|f - g_i\|_\infty > \epsilon$ for all $g_i$'s. So maybe only at one point $x_i$, $f$ and $g_i$ are far away. For example, maybe at point $x_1$, $f$ is only close to $\max_i g_i$ and far away from $\min_i g_i$, but at another point $x_2$, $f$ is only close to $\min_i g_i$ but far away from $\max_i g_i$, but I don't know how to formalize this, partly because I don't know how far $\max_i g_i$ is from $\min_i g_i$.
The picture in my head is that, if $g_i$ is like a increasing straight line, then I can consider $f = \frac{1}{2}$, so $f$ and $g$ will be far away when $x$ is big or small. If $g$ is quite flat, I can take $f$ to be an increasing line.
I am not sure if I am thinking correctly and how to proceed. Could someone give me a hint?
| The below will work for functions into $[-\pi/2,\pi/2]$. You would just need to shift and stretch a little for your case. Recall that $\arctan(x)$ is strictly increasing and ranges in $(-\pi/2,\pi/2)$. Replacing $x$ by $\alpha x$ for $\alpha>1$ does a horizontal compression. So for large $\alpha$, $\arctan(\alpha x)$ will approach its asymptotes faster despite still starting off at $(0,0)$.
The sequence $f_n(x) = \arctan(nx)$, $n\geq 1$ is contained in $\mathcal{G}$ but can't be contained in finitely many balls of radius $\frac{\pi}{6}$, $B(g,\frac{\pi}{6})$. If they were, then all $f_n$'s would be within distance $\frac{\pi}{3}$ of the set $\{f_{n_1},\ldots,f_{n_k}\}$ for some finite indices $n_1 < \ldots < n_k$. But (feel free to check) that if you let $n >> n_k$, then the distance will approach $\pi/2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2990351",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Proof for $\lor$ Elim: rule in Soundness Theorem So far I have been told to assume the line is invalid and then arrive at a contradiction.
Suppose the first invalid step derives the sentence $C$ by an application of $\lor$ Elim to the sentences $A\lor B$ and $A$ and $B$ appearing earlier in the proof. Let $P_1,\ldots,P_n$ be a list of all the assumptions in force at $C$. If this is an invalid step, $C$ is not a tautological consequence of $P_1,\ldots,P_n$.
Since $C$ is the first invalid step in $p$, we know that $A\lor B$, $A$ and $B$ are all valid steps, that is, they are tautological consequences of the assumptions in force at those steps.
Since $\mathcal{F}_T$ allows us to cite sentences only in the main proof or subproofs whose assumptions are still in force, we know that the assumptions in force at steps $A\lor B$, $A$ and $B$ are also in force at $C$. Hence the assumptions for those steps are among $P_1,\ldots,P_n$.
But I'm not sure how to carry this on...
| There's no compelling reason to use proof by contradiction. The rule is valid in intuitionistic positive logic, so like anything else in intuitionistic positive logic it can be proved without the use of proof by contradiction.
Suppose that (A $\lor$ B) is true, (A$\rightarrow$C) is true, and (B$\rightarrow$C) is true also. By the truth table for (A $\lor$ B) one of two cases gets satisfied. A holds true consists of one case, and B holds true for the other case. Suppose that A holds true. Then, by modus ponens C will follow. Similar reasoning shows that C holds for the other case. Since that exhausts all cases, C follows from the set of premises.
Note the above does NOT use $\lor$ in the reasoning. $\lor$ consists of an objective level construct, and is not as comprehensive as meta-linguistic case exhaustive analysis, since case exhaustive analysis could have many more cases than two, while $\lor$ as an objective level connective, defined by the definition of a well-formed formula, consists of a binary connective.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2990466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Evaluate $\lim_{n\to \infty}(0.9999+\frac{1}{n})^n$ $\lim_{n\to \infty}(0.9999+\frac{1}{n})^n$
Using Binomial theorem:
$(0.9999+\frac{1}{n})^n={n \choose 0}*0.9999^n+{n \choose 1}*0.9999^{n-1}*\frac{1}{n}+{n \choose 2}*0.9999^{n-2}*(\frac{1}{n})^2+...+{n \choose n-1}*0.9999*(\frac{1}{n})^{n-1}+{n \choose n}*(\frac{1}{n})^n=0.9999^n+0.9999^{n-1}+\frac{n-1}{2n}*0.9999^{n-2}+...+n*0.9999*(\frac{1}{n})^{n-1}+(\frac{1}{n})^n$
The limit of each term presented above is 0. How should I prove that the limit of the "invisible" terms (I mean the terms in "+...+") is also 0?
| Hint Look at $n\gt 10000$
For $n\gt 10000$, we have $0.9999 + \frac{1}{n} \leq 0.9999+ \frac{1}{10001}\lt1$, so $0 \leq \lim_{n\rightarrow\infty}(0.9999 + \frac{1}{n})^n \leq \lim_{n\rightarrow\infty} (0.9999 + \frac{1}{10001})^n = 0$; since the terms are nonnegative, the squeeze gives the limit $0$.
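Direct evaluation illustrates how the decay only kicks in once $n$ passes $10^4$ (note that at $n=10^4$ the base is exactly $1$ in double precision):

```python
# For n > 10^4 the base 0.9999 + 1/n is at most 0.9999 + 1/10001 < 1,
# and the n-th power then collapses rapidly.
vals = [(0.9999 + 1 / n) ** n for n in (10 ** 4, 10 ** 5, 10 ** 6)]
print(vals)   # approximately [1.0, 1.2e-4, 1.0e-43]
```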
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2990642",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 2
} |
If $\lim_{n\rightarrow\infty} S_{n} = \infty$, prove that $\lim_{n\rightarrow\infty}\sqrt{S_{n}} = \infty$ I have no idea how to go about this and I have been stuck on it for 3 days.
The formula we have for infinite limits is:
For every $M>0$, there exists an $N \in \mathbb{N}$ such that for $n > N \implies S_{n} > M$.
The only thing that came to my mind is to replace $M$ with $K=M^{2}$, then take the square root of both sides and get $\sqrt{S_{n}}>M$. But I am not sure if its 'allowed'.
| $\fbox{For every $M>0$, there exists an $N \in \mathbb{N}$ such that for $n > N \implies S_{n} > M$.}$
Let $M$ be arbitrary and choose $N \in \mathbb{N}$ such that for $n>N$, $S_n > M^2$. Then for $n>N$, $\sqrt{S_n} > M$
If you want to be really verbose, you could say:
Let $M$ be arbitrary. By supposition, there exists an $N$ such that for $n>N$, $S_n > M^2$. Therefore, for $n>N$, $\sqrt{S_n} > M$, completing the proof.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2990745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Find a basis and dimension of V When finding a basis for a vector space defined by $V = \{p(x) \in P_3(\Bbb R) \mid p(3)=0\text{ and }p(2)=0\}$, I started by showing $p(x) = (x-2)(x-3)q(x)$ where $q(x)\in P_1(\Bbb{R})$.
So, $p(2) = 8a+4b+2c+d = 0 $
And, $p(3) = 27a+9b+3c+d = 0$
I am wondering: do I set these equal and solve for each independent variable $a,b,c,d$, and is the resulting answer my basis?
Any help would be appreciated!
| Go for $$\{(x-2)(x-3), x(x-2)(x-3)\}$$ for your basis.
These two polynomials will span your space and they are linearly independent.
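A quick membership-and-independence check (the evaluation points $\pm 1$ are arbitrary):

```python
def b1(x): return (x - 2) * (x - 3)
def b2(x): return x * (x - 2) * (x - 3)

# both polynomials lie in V: they vanish at x = 2 and x = 3
print(b1(2), b1(3), b2(2), b2(3))           # 0 0 0 0

# independence: the 2x2 evaluation matrix at x = 1, -1 is nonsingular
det = b1(1) * b2(-1) - b1(-1) * b2(1)
print(det)                                  # -48, nonzero => linearly independent
```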
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2990882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Find a general solution for $\int_{0}^{\infty} \sin\left(x^n\right)\:dx$ So, I was recently working on the Sine Fresnal integral and was curious whether we could generalise for any Real Number, i.e.
$$I = \int_{0}^{\infty} \sin\left(x^n\right)\:dx$$
I have formed a solution that I'm uncomfortable with and was hoping for qualified eyes to have a look over.
So, the approach I took was to employ Complex Numbers (I forget the name(s) of the theorem that allows this).
But
$$\sin\left(x^n\right) = \Im\left[-e^{-ix^n}\right]$$
And so,
$$ I = \int_{0}^{\infty} \sin\left(x^n\right)\:dx = \Im\left[\int_{0}^{\infty} -e^{-ix^n}\:dx \right]= -\Im\left[\int_{0}^{\infty} e^{-\left(i^{\frac{1}{n}}x\right)^{n}}\:dx \right]$$
Applying a change of variable $u = i^{\frac{1}{n}}x$ we arrive at:
\begin{align}
I &= -\Im\left[i^{-\frac{1}{n}}\int_{0}^{\infty} e^{-u^{n}}\:du \right] \\
&= -\Im\left[i^{-\frac{1}{n}}\frac{\Gamma\left(\frac{1}{n}\right)}{n} \right]\\
&= \sin\left(\frac{\pi}{2n}\right)\frac{\Gamma\left(\frac{1}{n}\right)}{n}
\end{align}
My area of concern is in the substitution. As $i^{-\frac{1}{n}} \in \mathbb{C}$, I believe the limits of the integral should have been from $0$ to $i^{-\frac{1}{n}}\infty$. Is that correct or not?
I'm also struggling with bounds on $n$ for convergence. Is this expression valid for all $n\in\mathbb{R}$
Any guidance would be greatly appreciated
| Here is an alternative approach that avoids complex numbers and series altogether. To get round these two obstacles I will use a property of the Laplace transform.
Let
$$I = \int_0^\infty \sin (x^n) \, dx, \qquad n > 1.$$
We begin by enforcing a substitution of $x \mapsto x^{1/n}$. This gives
$$I = \frac{1}{n} \int_0^\infty \frac{\sin x}{x^{1 - 1/n}} \, dx.$$
The following useful property (does this result have a name? It would be so much nicer if it did!) for the Laplace transform will be used:
$$\int_0^\infty f(x) g(x) \, dx = \int_0^\infty \mathcal{L} \{f(x)\} (t) \cdot \mathcal{L}^{-1} \{g(x)\} (t) \, dt.$$
Noting that
$$\mathcal{L} \{\sin x\}(t) = \frac{1}{1 + t^2},$$
and
$$\mathcal{L}^{-1} \left \{\frac{1}{x^{1-1/n}} \right \} (t)= \frac{1}{\Gamma (1 - \frac{1}{n})} \mathcal{L}^{-1} \left \{\frac{\Gamma (1 - \frac{1}{n})}{x^{1-1/n}} \right \} (t) = \frac{t^{-1/n}}{\Gamma (1 - \frac{1}{n})},$$
then
\begin{align}
I &= \frac{1}{n} \int_0^\infty \sin x \cdot \frac{1}{x^{1 - \frac{1}{n}}} \, dx\\
&= \frac{1}{n} \int_0^\infty \mathcal{L} \{\sin x\} (t) \cdot \mathcal{L}^{-1} \left \{\frac{1}{x^{1 - \frac{1}{n}}} \right \} (t) \, dt\\
&= \frac{1}{n\Gamma (1 - \frac{1}{n})} \int_0^\infty \frac{t^{-1/n}}{1 + t^2} \, dt.
\end{align}
Enforcing a substitution of $t \mapsto \sqrt{t}$ yields
\begin{align}
I &= \frac{1}{2 n \Gamma \left (1 - \frac{1}{n} \right )} \int_0^\infty \frac{t^{-\frac{1}{2} - \frac{1}{2n}}}{t + 1} \, dt\\
&= \frac{1}{2 n \Gamma \left (1 - \frac{1}{n} \right )} \operatorname{B} \left (\frac{1}{2} - \frac{1}{2n}, \frac{1}{2} + \frac{1}{2n} \right )\\
&= \frac{1}{2 n \Gamma \left (1 - \frac{1}{n} \right )} \Gamma \left (\frac{1}{2} - \frac{1}{2n} \right ) \Gamma \left (\frac{1}{2} + \frac{1}{2n} \right ). \tag1
\end{align}
Applying Euler's reflexion formula we have
$$\Gamma \left (\frac{1}{2} - \frac{1}{2n} \right ) \Gamma \left (\frac{1}{2} + \frac{1}{2n} \right ) = \frac{\pi}{\sin \left (\frac{\pi}{2n} + \frac{\pi}{2} \right )} = \frac{\pi}{\cos \left (\frac{\pi}{2n} \right )},$$
and
$$\Gamma \left (1 - \frac{1}{n} \right ) = \frac{\pi}{\sin \left (\frac{\pi}{n} \right ) \Gamma \left (\frac{1}{n} \right )}.$$
So (1) becomes
$$I = \frac{\sin (\frac{\pi}{n} ) \Gamma (\frac{1}{n})}{2n \cos (\frac{\pi}{2n} )},$$
or
$$I = \sin \left (\frac{\pi}{2n} \right ) \frac{\Gamma \left (\frac{1}{n} \right )}{n}, \qquad n > 1$$
where in the last line the double angle formula for sine has been used.
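The closed form can be sanity-checked numerically, say for $n=3$. The integral converges only conditionally, so the sketch below sums the sign-constant lobes of $\sin(x^3)$ — an alternating series — and averages the last two partial sums; the lobe and point counts are arbitrary choices.

```python
import math

def lobe(a, b, n, m=400):
    # midpoint rule for integral_a^b sin(x^n) dx
    h = (b - a) / m
    return sum(h * math.sin((a + (k + 0.5) * h) ** n) for k in range(m))

n, K = 3, 400
cuts = [(k * math.pi) ** (1 / n) for k in range(K + 1)]  # zeros of sin(x^n)
partial, sums = 0.0, []
for k in range(K):
    partial += lobe(cuts[k], cuts[k + 1], n)
    sums.append(partial)
numeric = (sums[-1] + sums[-2]) / 2       # average damps the alternating tail

closed = math.sin(math.pi / (2 * n)) * math.gamma(1 / n) / n
print(numeric, closed)                    # both approximately 0.4465
```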
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2991201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 3
} |
solving an inseparable differential equation involving exponentials Solve $2xe^y$ + $e^x$ + ($x^2$ + 1)$e^y$$\frac{dy}{dx}$ = 0 with $y$ = 0 when $x$ = 0.
So this is clearly an inseparable differential equation so I thought the standard way to approach this was with a substitution but I cannot think of anything that would help me solve this? I thought maybe $u = x^2 e^y$ or $y = ux^2$ but they didn't get me anywhere at all.
| Hint:
$\frac{\partial}{\partial y}(2xe^y+e^x) = \frac{\partial}{\partial x} (x^2+1)e^y$
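For completeness, the hint can be carried out as follows (my own continuation; the potential function and explicit solution below are not part of the original hint):

```latex
\[
\frac{\partial}{\partial y}\bigl(2xe^y+e^x\bigr)
  = 2xe^y
  = \frac{\partial}{\partial x}\bigl((x^2+1)e^y\bigr),
\]
so the equation is exact: it reads $\frac{d}{dx}F(x,y)=0$ for the potential
\[
F(x,y) = (x^2+1)e^y + e^x .
\]
Hence $(x^2+1)e^y + e^x = C$, and $y(0)=0$ gives $C=2$, i.e.
\[
y = \ln\!\frac{2-e^x}{x^2+1}.
\]
```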
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2991342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Three space property I want to show that Finite dimensionality is a three space property. Let $X$ be a normed linear space and let $Y$ be a closed subspace of $X$. If $Y$ and $X/Y$ are finite dimensional spaces, then I want to show $X$ to be finite dimensional.
Let $B$ be a finite basis of $Y$. Then it can be extended to a basis $B^{\prime}$ of $X$. But how to show $B^{\prime}$ to be finite? Any hint will be appreciated.
| Let $x_1,\ldots,x_n\in X$ be such that $(x_1+Y,\ldots,x_n+Y)$ is a basis of $X/Y$ and let $(y_1,\ldots,y_m)$ be a basis of $Y$. If $x\in X$, then there are scalars $\alpha_1,\ldots,\alpha_n$ such that$$x+Y=\alpha_1(x_1+Y)+\cdots+\alpha_n(x_n+Y).$$So, $x-\sum_{k=1}^n\alpha_kx_k\in Y$ and therefore there are scalars $\beta_1,\ldots,\beta_m$ such that$$x-\sum_{k=1}^n\alpha_kx_k=\sum_{l=1}^m\beta_ly_l.$$Therefore, $X$ is spanned by $\bigl\{x_k+y_l\,|\,(k,l)\in\{1,\ldots,n\}\times\{1,\ldots,m\}\bigr\}$. So, $X$ is finite-dimensional.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2991446",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
If $\sin x+\sin^2x+\sin^3x=1$, then find $\cos^6x-4\cos^4x+8\cos^2x$
If $\sin x+\sin^2x+\sin^3x=1$, then find $$\cos^6x-4\cos^4x+8\cos^2x$$
My Attempt
\begin{align}
\cos^2x&=\sin x+\sin^3x=\sin x\cdot\big(1+\sin^2x\big)\\
\text{ANS}&=\sin^3x\cdot\big(1+\sin^2x\big)^3-4\sin^2x\cdot\big(1+\sin^2x\big)^2+8\sin x\cdot\big(1+\sin^2x\big)\\
&=\sin x\cdot(1+\sin^2x)\bigg[\sin^2x\cdot(1+\sin^2x)^2-4\sin x\cdot(1+\sin^2x)+8\bigg]\\
&=
\end{align}
I don't think its getting anywhere with my attempt, so how do I solve it ?
Or is it possible to get the $x$ value that satisfies the given condition $\sin x+\sin^2x+\sin^3x=1$ ?
Note: The solution given in my reference is $4$.
| Let $t=\sin x$ and solve the cubic $$t^3+t^2+t=1$$
Wolfram Alpha gives the real solution as
$$t=(1/3) (-1 - 2/(17 + 3 \sqrt {33})^{1/3} + (17 + 3 \sqrt{33})^{1/3})$$
Plug the real solution of the above to get $$(1-t^2)^3 -4(1-t^2)^2+8(1-t^2) =4$$
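A quick numeric confirmation (my own sketch, locating the root by bisection instead of the closed form from Wolfram Alpha):

```python
def h(t):
    return t ** 3 + t ** 2 + t - 1

# h is strictly increasing on [0, 1], h(0) = -1 < 0 < 2 = h(1): bisect for the real root
lo, hi = 0.0, 1.0
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if h(mid) < 0 else (lo, mid)
t = (lo + hi) / 2          # sin x
c = 1 - t * t              # cos^2 x
value = c ** 3 - 4 * c ** 2 + 8 * c
print(t, value)            # value = 4 to machine precision
```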
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2991604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 4
} |
Integral Representation of $\frac{\mathrm{sign}(x)}{|x|^{s}}$ In this paper, on page 433 (4.12), the authors used an integral formula of the function $\frac{\mathrm{sign}(x)}{|x|^{s}}$, which is
$$|x|^{-s}\mathrm{sign}(x)=\frac{2}{\Gamma(\frac{s+1}{2})}\int_{0}^{\infty}dyy^{s}xe^{-x^{2}y^{2}}$$
for any real non-zero $x$.
This integral seems very unexpected. Can anybody tell me how one knows that the function can be expressed in the above integral? Is there any motivation behind it?
| It's a Laplace transform. Roughly, powers of $y$ transform to powers of $x^{-1}$ because $x,y$ have opposite dimension (as seen from the $x^2y^2$ term in the exponential). Changing variable $u=y^2$ would tidy it up, and then it would be essentially the definition of the Gamma function, which is the Laplace transform of $x^a$.
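One can spot-check the quoted representation numerically (a sketch of my own; the truncation point $y=40/|x|$ and the use of composite Simpson's rule are arbitrary choices):

```python
import math

def simpson(f, a, b, n):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

def rep(x, s):
    # (2 / Gamma((s+1)/2)) * integral_0^inf y^s x e^{-x^2 y^2} dy, truncated at y = 40/|x|
    integral = simpson(lambda y: y ** s * x * math.exp(-x * x * y * y),
                       0.0, 40 / abs(x), 100000)
    return 2 * integral / math.gamma((s + 1) / 2)

for x, s in [(2.0, 0.5), (-1.5, 0.5), (0.7, 1.3)]:
    print(x, s, rep(x, s), math.copysign(abs(x) ** -s, x))
```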
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2991695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Prove that if $p$ is prime s.t. $0<n\leq p$, then $p|[{p! \over {(p-n)!(n)!}}]$ Prove that if $p$ is prime s.t. $0<n\leq p$ , then $p|[{p! \over {(p-n)!(n)!}}]$.
I know that if $p|q$ , then $q=kp$, for some integer number $k$.
But I don’t know how to prove that $p$ divided like above.
Is it working to use proof by induction?
| Since $0<n<p$, neither $(p-n)!$ nor $n!$ contains $p$ as a factor, so $p$ divides neither of them. However $\binom{p}{n}=\frac{p!}{(p-n)!\,n!}$ is an integer, and its numerator $p!$ does contain $p$ as a factor. Since $p$ is prime and divides the numerator but not the denominator, $p$ divides $\binom{p}{n}$. (Note that the hypothesis should read $0<n<p$: for $n=p$ we get $\binom{p}{p}=1$, which is not divisible by $p$.)
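A brute-force check of the claim for small primes (my own addition, using `math.comb`):

```python
from math import comb

def prime_divides_all_binomials(p):
    # does p divide C(p, n) for every 0 < n < p ?
    return all(comb(p, n) % p == 0 for n in range(1, p))

results = {p: prime_divides_all_binomials(p) for p in [2, 3, 5, 7, 11, 13]}
print(results)
print(comb(7, 7) % 7)   # = 1: the endpoint n = p must be excluded
```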
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2991996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
How to compute the limit,$\lim_{x\rightarrow 0}\frac{3x^2-3x\sin x}{x^2+x\cos1/x}$ How to compute the limit,
$$\lim_{x\rightarrow 0}\frac{3x^2-3x\sin x}{x^2+x\cos\frac{1}{x}}$$
| As noticed the limit doesn't exist indeed we can consider the sequence as $x_n \to 0$ such that
$$\cos\frac{1}{x_n}=2x_n \implies \frac{3x^2-3x\sin x}{x^2+x\cos\frac{1}{x}}=\frac{3x_n^2-3x_n\sin x_n}{3x_n^2}=1-\frac{\sin x_n}{x_n} \to 1-1=0$$
and the sequence $x_n \to 0$ such that
$$\cos\frac{1}{x_n}=-x_n+x_n^3 \implies \frac{3x^2-3x\sin x}{x^2+x\cos\frac{1}{x}}=3\frac{x_n^2-x_n\sin x_n}{x_n^4}=3\frac{x_n-\sin x_n}{x_n^3}\to \frac12$$
indeed as $t \to 0$ we have that $\frac{t-\sin t}{t^3} \to \frac16$, which can be proved by l'Hôpital, Taylor, or by the method shown here: Are all limits solvable without L'Hôpital Rule or Series Expansion.
Regarding the issue, already discussed here in detail by Paramanand Singh, that it is not fully satisfactory to say the limit doesn't exist when the function is undefined at infinitely many points near $0$, refer also to the related
*
*What is $\lim_{x \to 0}\frac{\sin(\frac 1x)}{\sin (\frac 1 x)}$ ? Does it exist?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2992131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
How many $S\subseteq\mathcal{P}(A)$ contain each element of $A$ an even number of times? Let $A=\{1,2,...,n\}$. Let the powerset of $A$ be $\mathcal{P}(A)$.
We call $S\subseteq \mathcal{P}(A)$ a paired family of subsets if $\forall a\in A$, the number of elements of $S$ that contain $a$ is even.
For $|A|=n$, how many $S\subseteq \mathcal{P}(A)$ are paired?
| Answer is $2^{2^n-n}$.
I'll give two (morally the same) proofs.
Proof Using Linear Algebra
Identifying subsets with indicator functions, we can view every $\mathcal{S}\subseteq\mathcal{P}(A)$ as an element of the $\mathbb{F}_2$-vectorspace $(\mathbb{F}_2)^{\mathcal{P}(A)}$. For each $a\in A$, define an $\mathbb{F}_2$-linear map $p_a\colon (\mathbb{F}_2)^{\mathcal{P}(A)}\to\mathbb{F}_2$ by
$$S\in\mathcal{P}(A)\mapsto \begin{cases}1 & a\in S\\0 & a\notin S\end{cases}$$
so $p_a$ is counting whether $a$ appears in even or odd number of elements $S\in\mathcal{S}\subseteq\mathcal{P}(A)$ (exactly as in the condition of paired families).
So counting paired families becomes a problem of counting elements in the common kernel of all $p_a$'s.
The set $\{p_a\mid a\in A\}$ is linearly independent (easy check), hence there are $n$ linearly independent conditions being put on $(\mathbb{F}_2)^{\mathcal{P}(A)}$. So the common kernel has dimension $\lvert\mathcal{P}(A)\rvert-n=2^n-n$ over $\mathbb{F}_2$, so $2^{2^n-n}$ paired families.
Proof Using Simple Counting/Probability
Without loss of generality, let $A=\{1,2,3,\dots,n\}$.
Suppose we ask the question: What proportion of families will contain $1$ even number of times? We can argue the answer is one-half by the following observation: Pair off the elements of $\mathcal{PP}(A)$ as $\mathcal{S},\mathcal{S}\triangle\{\{1\}\}$, i.e. the pairs agree on all subsets of $A$ except the singleton $\{1\}$. Exactly one member from each pair will contain $1$ even number of times.
Continuing this argument for $2$, exactly one from each
$$\mathcal{S},\mathcal{S}\triangle\{\{2\}\}$$
will have an even number of $2$'s. Moreover, this is independent of the number of $1$'s, so a quarter of the families have both an even number of $1$'s and an even number of $2$'s.
Repeating this for $3,\dots,n$ therefore gives that the probability of each of the elements $1,2,\dots,n$ appearing an even number of times is $2^{-n}$, because the singletons $\{1\},\{2\},\dots,\{n\}$ can each appear (or not) independently.
So the total number of paired families is $2^{-n}\cdot\lvert\mathcal{PP}(A)\rvert=2^{2^n-n}$.
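Both counts can be confirmed by brute force for small $n$ (my own sketch, encoding subsets of $A$ as bitmasks):

```python
def count_paired_families(n):
    # subsets of A = {0, ..., n-1} encoded as bitmasks
    subsets = list(range(2 ** n))
    count = 0
    for fam in range(2 ** len(subsets)):          # each family S subset of P(A)
        chosen = [s for j, s in enumerate(subsets) if fam >> j & 1]
        if all(sum(s >> a & 1 for s in chosen) % 2 == 0 for a in range(n)):
            count += 1
    return count

for n in range(1, 4):
    print(n, count_paired_families(n), 2 ** (2 ** n - n))
```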
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2992265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Find a remainder when dividing some number $n\in \mathbb N$ with 30
A number $n$ leaves remainder $4$ when divided by $6$, and remainder $7$ when divided by $15$. What is the remainder when $n$ is divided by $30$?
That means $n=6k_1+4$, $n=15k_2+7$, and $n=30k_3+x$, so I need to find $x$. We can write $30=6\cdot 5$ or $30=2\cdot 15$; maybe this can be done using congruences, but I got stuck there. Writing $n\equiv x\pmod{2\cdot 15}$: since $n\equiv 7 \pmod{15}$ and, using Fermat's little theorem, $n\equiv 1 \pmod 2$, I get $n\equiv 7 \pmod{30}$; is this ok?
| Alternatively:
$$n\equiv 4 \pmod{6} \Rightarrow 5n\equiv 20 \pmod{30};\\
n\equiv 7 \pmod{15} \Rightarrow 2n\equiv 14 \pmod{30}.$$
Add the two:
$$7n \equiv 34\equiv 4 \pmod{30} \Rightarrow \\
91n\equiv n\equiv 52 \equiv 22 \pmod{30}.$$
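A brute-force check (my own addition) confirms that every solution of the two congruences is $\equiv 22 \pmod{30}$:

```python
# brute-force search over several periods of 30 = lcm(6, 15)
solutions = [n for n in range(300) if n % 6 == 4 and n % 15 == 7]
remainders = {n % 30 for n in solutions}
print(solutions[:5], remainders)
```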
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2992393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
} |
Prove that a connected graph with $n$ vertices and $n+2$ edges is planar. What I have tried is that a tree has $n-1$ number of edges. So if there are $n$ edges there would be a cycle. This cycle would contribute to one face of the graph. Similarly for the other 3 edges. I think this would add one face each to the graph. Is this reasoning correct?
| Let $G$ be a connected graph on $n$ vertices with $n+2$ edges. Perform the following reductions on $G$.
If a vertex $v$ of $G$ has degree $1$, then $G$ is planar iff $G-v$ is planar. Thus, we may remove all vertices of degree $1$ from $G$ successively, so that at the end, $G$ has no vertex of degree $1$.
If $v$ is a vertex of degree $2$ of $G$, then let $u$ and $w$ be the neighbors of $v$. We remove $v$ from $G$ and add an edge $\{u,w\}$ to $G$. Note that $u$ and $w$ may already be adjacent, so this procedure can result in a multigraph, but that is not a problem. Indeed, if $u=w$ (i.e., $v$ is joined to $u$ by two edges), then this procedure removes $v$ and creates a loop at $u=w$. This procedure does not change planarity of the graph $G$.
The reduction steps can be performed only finitely many times. At the end, you will get a connected, possibly non-simple, graph $H$ each of whose vertices has degree at least $3$. If $m$ is the number of vertices of $H$, then it is still the case that $H$ has $m+2$ edges. However, by the Handshake Lemma,
$$2(m+2)\geq 3m\,,$$
and we conclude that $m\leq 4$. It is easy to list all possible (not necessarily simple) connected graphs on $m$ vertices with $m\leq 4$ with $m+2$ edges, where the minimum degree is at least $3$, and check planarity for each of them.
The case $m=1$ has only $1$ nonisomorphic example. The case $m=2$ has $4$ nonisomorphic examples. The case $m=3$ has $5$ nonisomorphic examples. The case $m=4$ has only $1$ nonisomorphic example. (I hope I did not miscount.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2992490",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Correct notation for rounding off numbers e.g. $201.7 \times 0.028 = 5.6476 = 5.6$ What would be the correct notation to rounding off numbers:
Option 1
$$201.7 \times 0.028 = 5.6476 = 5.6$$
Option 2
$$201.7 \times 0.028 = 5.6476 \approx 5.6$$
PS: I am not sure I inserted the most appropriate tag for this question.
Thank you
| The second way
$$201.7 \times 0.028 = 5.6476 \approx 5.6$$
is preferable, unless we specify in some other way that we are rounding the result to the first decimal digit.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2992632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proving every metrizable space is normal space
A topological space $X,\tau$ is said to be normal space if for each pair of disjoint closed sets $A$ and $B$, there exists open sets $U$ and $V$ such that $A\subseteq U$,$B\subseteq V$ and $U\cap V=\emptyset$. Prove that every metrizable space is normal space.
If $X,\tau$ is a metrizable space then there exists a metric $d:X\times X\to[0,+\infty)$ that defines the open sets in $\tau$.
Consider $\epsilon=\frac{d(a,b)}{2}\forall a\in A$ and $b\in B$.
Then $A\subset \bigcup_{a\in A} \mathscr{B}(a,\epsilon)$ since A is closed. The affirmation is proven in the following way: some $a\in Fr(A)$ then $B(a,\epsilon)\cap Ext(A)\neq\emptyset$
In the same way $B\subset \bigcup_{b\in B} \mathscr{B}(b,\epsilon)$
If $U=\bigcup_{a\in A} \mathscr{B}(a,\epsilon)\\V=\bigcup_{b\in B} \mathscr{B}(b,\epsilon)$,
then $U\cap V=\emptyset$.
Therefore $(X,d)$ is a normal space.
Question:
Is my proof right? If not. Why?
Thanks in advance!
| Let $X$ be metrizable and let $d: X\times X \rightarrow \mathbb{R}$ be a metric which defines the topology of $X$. We want to show that $X$ is normal. Let $F_1$ and $F_2$ be disjoint closed subsets of $X$, and write $d(x,F)=\inf_{y\in F}d(x,y)$. Define
$$ U = \left\{ x\in X: d(x,F_1) < d(x,F_2) \right\}$$
and
$$ V = \left\{ x\in X: d(x,F_1) > d(x,F_2) \right\}.$$
Since the map $x\mapsto d(x,F_1)-d(x,F_2)$ is continuous, $U$ and $V$ are open, and they are clearly disjoint. Moreover, for a closed set $F$ we have $d(x,F)=0$ iff $x\in F$; hence $x\in F_1$ gives $d(x,F_1)=0<d(x,F_2)$ because $F_1\cap F_2=\emptyset$, so $F_1 \subset U$, and likewise $F_2 \subset V$. By the definition of a normal space, $X$ is normal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2992758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
On an expected value inequality. Given $X$ a random variable that takes values on all of $\mathbb{R}$ with associated probability density function $f$ is it true that for all $r > 0$
$$E \left[ \int_{X-r}^{X+r} f(x) dx \right] \ge E \left[ \int_{X-r}^{X+r} g(x) dx \right]$$
for any other probability density function $g$ ?
This seems intuitively true to me and I imagine if it were to be true that it has been proven but I can't find a similar result on the standard textbooks, even a reference is welcome.
| Unfortunately, your intuitive conjecture is INCORRECT.
Let $f(x)$ be the PDF of the random variable $X$ and $F(x)$ be its cumulative PDF, so that $F'(x)=f(x)$, or
$$F(x)=\int_{-\infty}^x f(t)dt$$
Similarly, let $g(x)$ be another PDF with cumulative PDF $G(x)$. Then the expected value of the integral
$$\int_{X-r}^{X+r} g(x)dx$$
is equal to
$$\int_{-\infty}^\infty \int_{x-r}^{x+r} f(x)g(t)dtdx=\int_{-\infty}^\infty (G(x+r)-G(x-r))f(x)dx$$
By using integration by parts, we have that
$$\int_{-\infty}^\infty (G(x+r)-G(x-r))f(x)dx=\int_{-\infty}^\infty (F(x+r)-F(x-r))g(x)dx$$
Consider this simple counterexample. Let $r=1$, and suppose that
$$f(x)=\frac{1}{\pi}\frac{1}{1+x^2}$$
Then, if your conjecture is true, for no function $g$ will the integral
$$\frac{1}{\pi}\int_{-\infty}^\infty (\arctan(x+1)-\arctan(x-1))g(x)dx$$
even surpass the value
$$\frac{1}{\pi^2}\int_{-\infty}^\infty \frac{\arctan(x+1)-\arctan(x-1)}{1+x^2}dx=\frac{2}{\pi}\arctan\frac{1}{2}\approx 0.2952$$
However, suppose that we let
$$g(x)=\frac{4}{\pi}\frac{1}{1+4x^2}$$
Then the value of our integral is equal to
$$\frac{4}{\pi^2}\int_{-\infty}^\infty \frac{\arctan(x+1)-\arctan(x-1)}{1+4x^2}dx\approx 0.3743$$
which disproves your conjecture.
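A numeric spot-check (my own sketch, via composite Simpson's rule on a truncated range; by the integration-by-parts identity above, both expected values can be written as $\int f(x)\,(H(x+1)-H(x-1))\,dx$ for the appropriate CDF $H$). Incidentally, the first expected value is $P(|X-Y|\le 1)$ for $X,Y$ i.i.d. standard Cauchy, whose difference is Cauchy with scale $2$, giving $\frac2\pi\arctan\frac12\approx 0.2952$; the comparison with $\approx 0.3743$ disproves the conjecture either way:

```python
import math

def simpson(f, a, b, n):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

f = lambda x: 1 / (math.pi * (1 + x * x))          # Cauchy(0, 1) density
F = lambda x: 0.5 + math.atan(x) / math.pi         # Cauchy(0, 1) CDF
G = lambda x: 0.5 + math.atan(2 * x) / math.pi     # Cauchy(0, 1/2) CDF

# E[ integral_{X-1}^{X+1} h(t) dt ] with X ~ f equals  integral f(x) (H(x+1) - H(x-1)) dx
lhs = simpson(lambda x: f(x) * (F(x + 1) - F(x - 1)), -300, 300, 200000)
rhs = simpson(lambda x: f(x) * (G(x + 1) - G(x - 1)), -300, 300, 200000)
print(lhs, rhs)    # lhs ~ 0.2952, rhs ~ 0.3743, so rhs > lhs
```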
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2992859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
PDE Laplace equation. Integral representation form and Green function Let $\Omega$ be a domain in $\mathbb{R}^{d}$ and assume that for any $y \in \Omega$ there is a function $h_{y} \in C^{2}(\overline{\Omega})$ such that
\begin{equation}
\label{eq8.1}
\begin{cases}
\Delta h_{y}(x) = 0 \text{ in } \Omega \\
h_{y}(x) = E(x,y) \text{ on } \partial \Omega
\end{cases}
\end{equation}
Where $E$ denotes the fundamental solution to $\Delta$ in dimension $d$. Under those assumptions we define the Green function $G(x,y) = E(x,y) - h_{y}(x)$
Let $\Omega$ be a bounded domain such that one can define the Green function $G$. Then for any $u \in C^{2}(\overline{\Omega})$ and any $y \in \Omega$, we have
$$ u(y) = - \int_{\Omega} G \Delta u dx - \int_{\partial \Omega} \partial_v G(x,y) u(x) d\sigma(x) $$
Where $v$ is the outer normal of $\partial \Omega$. I don't see how the above integral representation is supposed to directly follow from the definition of the Green function and the fact that if $E$ is a fundamental solution of $\Delta$ we have:
$$
u(y) = - \int_{\Omega} E \Delta u dx + \int_{\partial \Omega} E(x,y) \partial_vu(x) - \partial_vE(x,y) u(x) d\sigma(x)
$$
| It follows from Green's second identity, i.e. the divergence theorem applied to the vector field $h_y\nabla u - u\nabla h_y$: since $\Delta h_y=0$ in $\Omega$, it gives $$\int_\Omega h_y\Delta u \,dx=\int_{\partial\Omega}\big(h_y\partial_v u-u\,\partial_v h_y\big)\,d\sigma.$$ On $\partial\Omega$ we have $h_y=E(\cdot,y)$, so the term $h_y\partial_v u$ cancels the term $E\,\partial_v u$ in the representation for $u$, while $\partial_v h_y\, u$ combines with $\partial_v E\, u$ into $\partial_v G\, u$. Subtracting this identity from the representation formula and using $G=E-h_y$ recovers the stated integral representation for $u$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2993067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Are Christoffel symbols structure coefficients? For a chart $(U,x^1,...,x^n)$ in an $n$ dimensional manifold, the the christoffel symbols for a covariant derivative are defined by $\nabla_{\partial_j}\partial_i=\Gamma_{ij}^k\partial_k$. For a general algebra of dimension $n$ with multiplication $\beta$, the structure constants are define by $\beta(x_i,x_j)=c_{ij}^kx_k$. Since the set of smooth vector fields is a vector space of $\mathbb{R}$, and the covariant derivative is bilinear, is this saying that the Christoffel symbols are exactly the structure constants of the algebra of smooth vector fields with the covariant derivative defined as multiplication? And if the connection is the Levi Cevita connection, is this saying the structure constants when multiplication is given by $\nabla$ is related to the structure constants when multplication is give by the lie bracket by $\Gamma_{ij}^k-\Gamma_{ji}^k=\gamma_{ij}^k$?
| The answer to the first question is no, because the algebra of smooth vector fields is infinite-dimensional, while you have only $n^3$ Christoffel symbols (and they are only defined chart by chart). And regarding the second question: if the connection is the Levi-Civita connection then the differences you mention are zero, because the Levi-Civita connection is torsion-free and the coordinate vector fields $\partial_i$ commute, so $\Gamma_{ij}^k-\Gamma_{ji}^k=0=\gamma_{ij}^k$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2993200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Counting ways to divide $10$ kids into $2$ teams of $5$. Why divide by $2!\,$? Here's the question:
In order to play a game of basketball, $10$ kids at a playground divide themselves into two teams of $5$ each. How many different divisions are possible?
The solution given is:
$$\frac{10!}{5!5!2!}$$
My question is
Why is the answer not just
$$\frac{10!}{5!5!}$$
i.e, Why did they divide by $2!$?
| Think about it this way:
Let's line up all ten people, and let's say that the first five go on team 1, and the next five on team 2. Now, there are $10!$ different ways to line up the $10$ people. However, note that you get the same two teams if:
*
*you shuffle the first five people. Having, say, persons 2 and 4 change positions in the line-up will not change the teams: persons 2 and 4 will still be on team 1. Since there are $5!$ ways to shuffle the first five people, you need to divide the original $10!$ by $5!$
*you shuffle the persons in positions 6 through 10. Same story. So again, divide by $5!$
*You swap the first five people and the second five people. If the teams are indistinct, then the same five people will be on one team, and the same other five on another team. So: divide by $2$ or, what is the same thing: divide by the number of ways we can shuffle the groups, and since there are two groups, that can be done in $2!$ ways. Of course, if the teams are distinct (e.g. if the first team is 'the Flyers' and the second 'the Eagles'), then swapping the groups of people will make a difference, so in that case do not divide by $2!$
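The count can also be confirmed by brute force (my own sketch): enumerate all $\binom{10}{5}$ choices of the first team and deduplicate the unordered pairs of teams.

```python
from itertools import combinations
from math import factorial

people = range(10)
divisions = set()
for team in combinations(people, 5):
    other = tuple(sorted(set(people) - set(team)))
    divisions.add(frozenset([team, other]))       # unordered pair of teams

print(len(divisions), factorial(10) // (factorial(5) * factorial(5) * 2))
```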
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2993316",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How to simplify a multiplication of several summations? The formula is $$-\sin(i)\sum_{n=0}^\infty (\frac{w}{2i})^n\sum_{n=0}^\infty (\frac{w}{i})^n\sum_{n=0}^\infty \frac{(-1)^n}{(2n)!} (w)^{2n-1}.$$
I only want to get the coefficient of the $w^{-1}$ term, and the coefficients of other terms are negligible, so it looks like this
$$-\sin(i)\sum_{n=0}^\infty (\frac{w}{2i})^n\sum_{n=0}^\infty (\frac{w}{i})^n\sum_{n=0}^\infty \frac{(-1)^n}{(2n)!} (w)^{2n-1} = -\sin(i)w^{-1}+ ...$$
I want to have this kind of expression because I'm finding the residue of a function $f$ at the point $i$, so I only need to know the coefficient of the $w^{-1}$ term.
I tried to use the small $o$ notation, but I don't know if I use it correctly.
$$-\sin(i)\sum_{n=0}^\infty (\frac{w}{2i})^n\sum_{n=0}^\infty (\frac{w}{i})^n\sum_{n=0}^\infty \frac{(-1)^n}{(2n)!} (w)^{2n-1} = -\sin(i)(1+o(1))(1+o(1))(w^{-1}+o(1)),$$
where $\Phi(w) = o(\Psi(w))$ means $\lim_{w\to 0} \Phi(w)/\Psi(w) = 0$.
| By the definition of Laurent series multiplication, the only terms that will contribute to $w^{-1}$ in the product are the lowest-indexed terms (said another way, the first coefficient in the product series is the product of the first coefficients.)
Those are $\left(\frac{w}{2i}\right)^0$, $\left(\frac{w}{i}\right)^0$, and $\frac{(-1)^0}{0!}$. Their product is $1$, and with the additional term of $-\sin(i)$ on the outside, the coefficient for $w^{-1}$ must be $-\sin(i)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2993471",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Logical equivalence of ¬p→q Just wondering what other ways $\neg p \to q$ can be expressed.
I know that $p\to q$ is logically equivalent to $\neg p\lor q$, hence I think that $\neg p\to q$ has the same logical equivalence as $p\lor q$.
| You have pretty much given the answer yourself already. Logical equivalence of $p$ and $q$ is given if $p$ is true if and only if $q$ is true (and hence $p$ is false iff $q$ is false). Since $p$ and $q$ can only be true or false, you can use truth tables and check whether logical equivalence is given. From there you can derive permissible manipulations of propositions.
In this case, $(\lnot p) \rightarrow q \equiv \lnot(\lnot p) \lor q \equiv p \lor q$, as you stated correctly.
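The truth-table check can be mechanized (my own sketch):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# verify that (not p) -> q  is equivalent to  p or q  over all four truth assignments
table = [(p, q, implies(not p, q), p or q) for p, q in product([False, True], repeat=2)]
for row in table:
    print(row)
```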
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2993594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Error in calculation of number of ways to put $20$ identical balls in $4$ labelled boxes if each box contains at most $18$ balls The number of ways to put $20$ identical balls in $4$ labelled boxes in such a way that each box contains at most $18$ balls is,
$$(a)~~\binom{24}{4}-16~~(b)~~\binom{24}{4}-10~~(c)~~\binom{23}{3}-16~~(d)~~\binom{23}{3}-12~~(e)~~\binom{24}{4}-12$$
My solution:
Answer $=$ (stars and bars, all balls free to go in any box) $-$ $\binom41\cdot$ (stars and bars with $2$ balls and $3$ boxes): filling one box with $18$ balls leaves $2$ balls and $3$ boxes, and there are $\binom41$ ways of selecting the full box.
$$\begin{align}
&= (23 C 3) - (4 C 1) \cdot ( 4 C 2)\\
&= (23 C 3) - 4 \cdot 6\\
&= (23 C 3) - 24
\end{align}$$
Where did I go wrong?
| Okay, so using stars and bars we know that the number of ways without restrictions is simply $$\binom{n+r-1}{r-1}$$ And using $n=20,r=4$ we get $$\binom{23}{3}$$Now if we put $19$ balls in any one box then we have $\binom{4}1$ ways of selecting that box, and then we also have to select the box to hold the last ball, which can be done in $3$ ways each time, so for the case of $19$ balls in one box we get $12$ ways that should not be included.
Also for the case of $20$ balls in one box, we get $4$ ways of doing that, so you get in total $16$ cases to be subtracted from the original answer, which leaves us with option c) $$\binom{23} 3 -16$$
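A brute-force enumeration (my own addition) confirms option (c):

```python
from itertools import product
from math import comb

# each box holds 0..18 balls; count 4-tuples summing to 20
count = sum(1 for boxes in product(range(19), repeat=4) if sum(boxes) == 20)
print(count, comb(23, 3) - 16)
```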
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2993718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Solve for x negative exponential I have the following equation:
$y = 14857x^{-1.092}$
I know $y = 43$, how do I rewrite this to solve for $x$.
e.g. $43 = 14857x^{-1.092}$
| you have $$43 = 14857x^{-1.092}$$
write it as $$x^{-1.092}=\frac{1}{x^{1.092}}=\frac{43}{14857}$$
You get $$x^{1.092}=\frac{14857}{43}$$
Raise both sides to the power $\frac{1}{1.092}$: $$x=(\frac{14857}{43})^{\frac{1}{1.092}}\approx 211.154$$
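A quick check of the computation (my own addition):

```python
y = 43
x = (14857 / y) ** (1 / 1.092)
print(x)                        # about 211.15
print(14857 * x ** (-1.092))    # recovers 43
```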
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2993846",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$\max\{a_1,a_2,\dots,a_n\}$ converges for a convergent sequence $a_n$ I am tackling the following question and want to be sure that my reasoning is fine.
Let $a_n$ be a convergent sequence s.t $\displaystyle \lim_{n\to\infty}a_n=a$. Let $$b_n\triangleq\max\{a_1,a_2,\dots,a_n\}$$
Prove that $b_n$ converges. Also, is it necessarily the case $\displaystyle \lim_{n\to\infty}b_n=a$?
My try:
As $a_n$ converges, it is bounded. Let $M$ be an upper bound of $a_n$. We note that it is also an upper bound of $b_n$ and that $b_n$ is monotonically increasing, thus $b_n$ converges.
I looked at the sequence $a_n=\dfrac{1}{n}$. We have $\displaystyle \lim_{n\to\infty}a_n=0$ and $\forall n\in\mathbb{N}:\ b_n=1$, i.e $$\displaystyle \lim_{n\to\infty}b_n=1\ne\lim_{n\to\infty}a_n$$
It seems too simple and I believe that I am missing something.
Any comment regarding the solution will be appreciated. In the case it is wrong I will be thankful for some hints in the right direction. Thanks.
| What you did is fine and if you missed something is that an even simpler example than yours can be found. Just take$$a_n=\begin{cases}1&\text{ if }n=1\\0&\text{ otherwise.}\end{cases}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2994000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Type-I vs. type-II error in statistical hypotheses testing Let us consider standard statistical hypotheses testing:
$$\alpha=P\{\text{type}-I \text{ error}\}=P\{\text{Rejecting } H_0 \text{ when }H_0\text{ is true}\}$$
and
$$\beta=P\{\text{type}-II \text{ error}\}=P\{\text{Accepting } H_0 \text{ when }H_1\text{ is true}\}.$$
My question is as follows: could you give an example of $\alpha$ and $\beta$
when tossing 10 coins so that I can see that it does not hold this equation:
$$\alpha=1-\beta.$$
More specifically, compose an outcome so that inequality between $\alpha$ and $1-\beta$ is seen.
| Suppose you have two boxes of dice, one is a box of fair dice in which all faces are equally likely. The other has loaded dice for which the probability of getting a six is 1/3. The labels are missing so you will roll a sample of 50 dice from each box to try to identify which box has the loaded dice.
Let $H_0: \text{FAIR},$ so that $p_0(6) = 1/6$ and let
$H_a: \text{LOADED},$ so that $p_a(6) = 1/3.$
In the figure below, blue bars represent the null
distribution under which the number of 6's seen in $n = 50$ trials is $\mathsf{Binom}(n = 50,\, p = 1/6).$ And
let the brown bars represent the alternative
distribution under which the number of 6's seen is
$\mathsf{Binom}(n = 50,\, p = 1/3).$
You choose critical value $c = 10.5$ (dotted line).
Thus $$\alpha = P(S \ge 11 \,|\, p=1/6) = .2014,$$
and $$\beta = P(S \le 10 \,|\, p = 1/3) = .0284.$$
Thus the 'power' of the test is $$1 - \beta =
P(\text{Rej } H_0 | H_a \text{ True}) = P(S \ge 11 | p=1/3) \\
= 1-.0284 = .9716.$$
sum(dbinom(11:50, 50, 1/6))
[1] 0.2013702
sum(dbinom(0:10, 50, 1/3))
[1] 0.02844031
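The same tail probabilities can be reproduced in stdlib Python (my own sketch mirroring the R sums above):

```python
from math import comb

def binom_tail(n, p, k_min):
    # P(S >= k_min) for S ~ Binomial(n, p)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

alpha = binom_tail(50, 1/6, 11)        # P(S >= 11 | p = 1/6)
beta = 1 - binom_tail(50, 1/3, 11)     # P(S <= 10 | p = 1/3)
print(alpha, beta, 1 - beta)
```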
Note: In many practical applications it seems reasonable to design an experiment so that the significance level $\alpha$ is approximately the same as the power $1 - \beta.$
Here, perhaps you chose to make them different because you think it would be more serious to sell a loaded die to a customer
who wants a fair one, than to sell a fair die to someone in the market for a loaded one.
[If you wanted significance level and power to be more nearly equal, you could pick the critical value $c$ to be near the middle of the region where the two distributions 'overlap'.]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2994150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Conditional Probability Drawing Candy I figured out the first one and need help with the second, with the second one could you provide solutions for both with and without replacement?
I have two bowls of candy. One is supposed to be filled with candy-covered chocolates, the other filled with fruit-flavored candies with candy shells. Unfortunately, someone has nefariously mixed up the contents of the two bowls, mixing a portion of one bowl into the second, and vice versa. Even worse, the candies are visually indistinguishable, and can only be sorted out by taste.
I know that one of the bowls (A) has candies in the following proportions: A ~ {2/3 chocolate, 1/3 fruity}, while the other bowl (B) has candies in the following proportion: B ~ {1/4 chocolate, 3/4 fruity}. Assume for the questions below that both bowls are large enough that small samples will not have a practical effect on the distribution of candies.
I select a bowl at random and choose a random candy. It's a chocolate candy! What is the probability that I've picked up bowl A (the one weighted towards chocolates)?
ANS:24/33
Continuing with the same bowl as before, I select two more candies. They are both fruity candies. What is the probability that I've picked up bowl A (the one weighted towards chocolates) now? (To clarify, there are three candies drawn in all: one chocolate and two fruity.)
| With replacement:
$$P(A|CFF) = \frac{\frac{2}{3}\cdot (\frac{1}{3})^2}{\frac{2}{3}\cdot (\frac{1}{3})^2+\frac{1}{4}\cdot (\frac{3}{4})^2} = \frac{128}{371}$$
There is an inconsistency here. That is, if both bowls are large enough not to effect the proportion ratio then solutions with and without replacement are theoretically the same. Otherwise, one would need to know the quantities in the bowls to determine the "without replacement" solution.
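Under the with-replacement (large-bowl) assumption, the posterior can be checked with exact rational arithmetic (my own sketch):

```python
from fractions import Fraction

prior = Fraction(1, 2)                            # each bowl equally likely a priori
like_A = Fraction(2, 3) * Fraction(1, 3) ** 2     # P(choc, fruity, fruity | A)
like_B = Fraction(1, 4) * Fraction(3, 4) ** 2     # P(choc, fruity, fruity | B)
posterior_A = prior * like_A / (prior * like_A + prior * like_B)
print(posterior_A)              # 128/371
```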
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2994224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Hints about the limit $\lim_{x \to \infty} ((1+x^2)/(x+x^2))^{2x}$ without l'Hôpital's rule? I've tried to evaluate $\lim_{x \to \infty} \left(\frac {1+x^2}{x+x^2}\right)^{2x}$ as $$\lim_{x \to \infty} \left(\left(\frac {1+ \frac{1}{x^2}}{1+ \frac{1}{x}}\right)^{x}\right)^{2}$$
So the denominator goes to $e^2$, but I don't know how to solve the numerator, because of the $x^2$. Any hint?
Thanks in advance!
| HINT
We have
$$\left(\frac {1+x^2}{x+x^2}\right)^{2x}=\left(\frac {x+x^2+1-x}{x+x^2}\right)^{2x}=\left[\left(1+\frac {1-x}{x+x^2}\right)^{\frac {x+x^2}{1-x}}\right]^{\frac {2x(1-x)}{x+x^2}}$$
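Numerically, the hint's exponent $\frac{2x(1-x)}{x+x^2}=\frac{2(1-x)}{1+x}$ tends to $-2$, so the expression should approach $e^{-2}$; a quick check (my own addition):

```python
import math

def f(x):
    return ((1 + x * x) / (x + x * x)) ** (2 * x)

values = [f(10.0 ** k) for k in range(1, 7)]
print(values, math.exp(-2))
```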
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2994404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Find the smallest positive odd integer n such that φ(n)/n = 7680/12121 In a previous problem, I was able to deduce that if you have φ(n)/n = a/b where gcd(a,b) = 1 then the largest prime factor of b must also be the largest prime factor of n.
I found the prime factorization of both integers:
$7680 = 2^9\cdot 3\cdot 5$
$12121 = 17\cdot 23\cdot 31$
I believe that in the prime factorization of n we must have the following numbers: 31, 4, 2 since
$$\frac{\varphi(n)}{n} = \frac{(p_1-1)(p_2-1)\cdots(p_r-1)}{p_1 p_2\cdots p_r}$$
I'm not sure where to go from this. Thanks for your help!
| From "previous problem", you know that $31$ is the largest prime factor of $n$. If $n=31^km$ with $31\nmid m$ and $k\ge 1$, then $\phi(n)/n=\frac{30}{31}\phi(m)/m$ does not depend on $k$, so we take $k=1$ in order to minimize. This leaves us with the new (simpler) problem to find the minimal $m$ with
$$ \frac{\phi(m)}{m}=\frac{256}{391}.$$
As before, the largest prime factor of the denominator, $23$, must be the largest prime factor of $m$. We rinse and repeat: to minimize, we take $23$ only to the first power, so $m=23r$ with $$\frac{\phi(r)}r=\frac{256}{391}\cdot \frac {23}{22}=\frac{128}{187}.$$
As before, we find $17$ as next factor and arrive at the new problem
$$\frac{\phi(s)}s=\frac{128}{187}\cdot\frac{17}{16}=\frac8{11},$$
then after finding $11$ at
$$\frac{\phi(t)}t=\frac8{11}\cdot \frac{11}{10}=\frac45,$$
where the journey ends with the last factor $5$.
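As a sanity check, the resulting $n = 5\cdot 11\cdot 17\cdot 23\cdot 31$ can be verified numerically (a sketch; `phi` is a small helper computing the totient by trial division):

```python
from fractions import Fraction

def phi(n):
    # Euler's totient via trial-division factorization (fine for small n).
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        result -= result // m
    return result

n = 5 * 11 * 17 * 23 * 31   # 666655, odd
print(n, Fraction(phi(n), n))   # 666655 7680/12121
```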
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2994476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
A Nerve functor into any $\infty$-cosmos $\mathcal{N}: Cat \to \mathcal{K}$ I believe there is a notion of a nerve functor into any $\infty$-cosmos $\mathcal{K}$. My inclination is that it would be defined as the colimit of the constant functor that sends all objects to the terminal object in $\mathcal{K}$, which exists by the definition of an $\infty$-cosmos. Is this the correct definition? I cannot seem to find anywhere in Riehl and Verity's work where this is explicitly defined. Thanks!
| Your formula would, interpreted literally, produce a coproduct of terminal objects indexed by the connected components of your category. There is an interpretation of your proposal as a lax colimit which works when these exist, but they are not assumed in a general cosmos. It's probably clearer to define $\mathcal N(J)$ as the simplicial tensor of the terminal object with the ordinary simplicial set $N(J)$.
But those tensors are also not generally assumed to exist, and they shouldn't be. For instance, in the $\infty$-cosmos of Kan complexes, to get a tensor $*\otimes \Delta^1$ we would need a Kan complex $I$ such that maps out of $I$ were naturally isomorphic to maps out of $\Delta^1$, which is impossible. Of course, any Kan fibrant replacement of $\Delta^1$ works up to weak equivalence, but that's not strict enough for the $\infty$-cosmos approach. Weak tensors like this exist in any cocomplete $(\infty,2)$-category $\mathcal K$, and in this way such a thing does always admit a "nerve" like your $\mathcal N$. This won't necessarily be a strict functor, if such a thing even makes sense in your setting for $(\infty,2)$-categories.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2994646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Random Variable with Characteristic function $\frac{1}{2-\phi(t)}$ I am given that $X$ has c.f. $\phi(t)$, I need to find the random variable whose c.f. is equal to $\frac{1}{2-\phi(t)}$ in terms of $X$.
My idea is that express $\frac{1}{2-\phi(t)}$ as a series, since $|\phi(t)| \leq 1$ so we have
$$
\frac{1}{2-\phi(t)} = \sum_{n = 0}^{\infty}\frac{\phi(t)^n}{2^{n+1}}
$$
From the question asked here, I am guessing that this c.f. corresponds to the random variable (I may be wrong):
$$
Z = \sum_{n = 0}^{\infty}I(A = n)Z_n
$$
where $P(A = n) = \frac{1}{2^{n+1}}, \ n = 0,1,2,...$ and $Z_n = \sum_{i = 1}^{n}Y_i$, $Z_0 = 0$, $Y_i$ are iid r.v.'s s.t. $Y_i \sim X$.
But I don't know how to prove this, can anyone point out a general direction? Thanks so much!
| You already got the answer, you are just not completely writing that out. In probability context we usually write
$$\displaystyle Z = \sum_{i=1}^A Y_i$$
as a random sum, with the convention that $Z = 0$ when $A = 0$. We assumed $A$ is independent of $Y_i$ also, and they are defined as what you written. Then by law of total expectation, the characteristic function of $Z$ is
$$ \begin{align}
\phi_Z(t) &= E\left[\exp\left\{itZ\right\}\right] \\
&= E\left[\exp\left\{it\sum_{i=1}^A Y_i\right\}\right] \\
&= \sum_{n=0}^{\infty} E\left[\exp\left\{it\sum_{i=1}^A Y_i\right\} \Bigg| A=n \right]
\Pr\{A = n\} \\
&= \Pr\{A = 0\} + \sum_{n=1}^{\infty} E\left[\exp\left\{it\sum_{i=1}^n Y_i\right\}\right]
\Pr\{A = n\} \\
&= \frac {1} {2} + \sum_{n=1}^{\infty}
E\left[\prod_{i=1}^n\exp\left\{it Y_i\right\}\right] \frac {1} {2^{n+1}} \\
&= \frac {1} {2} + \sum_{n=1}^{\infty}
\prod_{i=1}^n E\left[\exp\left\{it Y_i\right\}\right] \frac {1} {2^{n+1}} \\
&= \frac {1} {2} + \sum_{n=1}^{\infty} \phi_X(t)^n \frac {1} {2^{n+1}} \\
&= \sum_{n=0}^{\infty} \frac {\phi_X(t)^n} {2^{n+1}} \\
&= \frac {1/2} {1 - \phi_X(t)/2} \\
&= \frac {1} {2 - \phi_X(t)}
\end{align}$$
where
line $1$ using the definition of characteristic function,
line $2$ using the definition of $Z$,
line $3$ using the law of total expectation,
line $4$ using the independence of $A$ and $Y_i$ and the convention of the random sum,
line $5$ using the basic property of exponential function,
line $6$ using the independence of $Y_i$,
line $7$ using the definition of characteristic function of $Y_i$ and they are identically distributed with the identical CF $\phi_X$,
and the remaining lines are just some algebra to simplify the expression.
I have not studied complex analysis, so if there are any holes in the above steps regarding complex numbers, please help to fill them. I guess this should work.
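The identity behind the computation is also easy to confirm numerically. As a sketch, take the $Y_i$ to be standard normal (an arbitrary concrete choice, so $\phi_X(t)=e^{-t^2/2}$), and compare the truncated mixture series with $\frac{1}{2-\phi_X(t)}$:

```python
import math

def phi_X(t):
    # CF of a standard normal variable -- an assumed concrete example.
    return math.exp(-t * t / 2)

def phi_Z(t, terms=200):
    # sum over n of P(A = n) * phi_X(t)^n with P(A = n) = 1/2^(n+1).
    return sum(phi_X(t) ** n / 2 ** (n + 1) for n in range(terms))

for t in (0.0, 0.5, 1.0, 3.0):
    print(t, phi_Z(t), 1 / (2 - phi_X(t)))
```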
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2994784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
How to prove $\sum_{d\mid q}\frac{\mu(d)\log d}{d}=-\frac{\phi(q)}{q}\sum_{p\mid q}\frac{\log p}{p-1}$? Prove that $$\sum_{d\mid q}\frac{\mu(d)\log d}{d}=-\frac{\phi(q)}{q}\sum_{p\mid q}\frac{\log p}{p-1},$$
where $\mu$ is Möbius function, $\phi$ is Euler's totient function, and $q$ is a positive integer.
I can get
\begin{align}
\sum_{d\mid q} \frac{\mu(d)\log d}{d}& = \sum_{d\mid q}\frac{\mu(d)}{d}\sum_{p\mid d}\log p \\
& = \sum_{p\mid q} \log p \sum_{\substack{d\mid q \\ p\mid d}} \frac{\mu(d)}{d}
= \sum_{p\mid q} \log p \sum_{\substack{d \\ p\mid d \mid q}} \frac{\mu(d)}{d},
\end{align}
Let $d=pr$, then $\mu(d)=\mu(p)\mu(r)=-\mu(r)$,
$$ \sum_{p\mid q} \log p \sum_{\substack{d \\ p\mid d \mid q}} \frac{\mu(d)}{d}= - \sum_{p\mid q} \frac{\log p}{p} \sum_{\substack{r\mid q \\ p \nmid r}} \frac{\mu(r)}{r}.$$
But I don't know why
$$- \sum_{p\mid q} \frac{\log p}{p} \sum_{\substack{r\mid q \\ p \nmid r}} \frac{\mu(r)}{r}=-\frac{\phi(q)}{q} \sum_{p\mid q} \frac{\log p}{p-1}?$$
Can you help me?
| Let me write $n$ instead of $q$.
We have
\begin{align}
\sum_{d|n}\frac{\mu(d)\log(d)}d
&=\sum_{d|n}\frac{\mu(d)}d\sum_{p|d}\log(p)\\
&=\sum_{p|n}\log(p)\sum_{p|d|n}\frac{\mu(d)}d\\
&=\frac 1n\sum_{p|n}\log(p)\sum_{p|d|n}\mu(d)\frac nd
\end{align}
Write $n=p^em$ with $p\nmid m$.
Then $\varphi(n)=p^{e-1}(p-1)\varphi(m)$ and
\begin{align}
\sum_{p|d|n}\mu(d)\frac nd
&=\sum_{d\mid m}\sum_{i=1}^e\mu(p^id)\frac{p^em}{p^id}\\
&=\sum_{d\mid m}\mu(pd)\frac{p^em}{pd}\\
&=-\sum_{d\mid m}\mu(d)\frac{p^em}{pd}\\
&=-p^{e-1}\varphi(m)\\
&=-\frac{\varphi(n)}{p-1}
\end{align}
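The identity is easy to spot-check numerically (a sketch in floating point; `factorize` and `mu` are small helpers written for the check, not part of the proof):

```python
import math

def factorize(n):
    fac, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            fac[p] = fac.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        fac[n] = fac.get(n, 0) + 1
    return fac

def mu(n):
    fac = factorize(n)
    if any(e > 1 for e in fac.values()):
        return 0
    return (-1) ** len(fac)

def check(q):
    lhs = sum(mu(d) * math.log(d) / d for d in range(1, q + 1) if q % d == 0)
    fac = factorize(q)
    phi = q
    for p in fac:
        phi -= phi // p
    rhs = -(phi / q) * sum(math.log(p) / (p - 1) for p in fac)
    return abs(lhs - rhs) < 1e-9

print(all(check(q) for q in range(2, 200)))   # True
```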
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2994900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Nullity of the same matrix over a finite and infinite field Is there a relation between the nullity of the same matrix over a finite field and a infinite field?
Or is there a prime power $q$ such that the difference between the nullities of a matrix over, say, the rational numbers and over the field $\Bbb{F_q}$ is small?
| Your question makes most sense if you ask it about a matrix of integers, say $A$. It is slightly easier to talk about the rank, so I will do that: the Rank-Nullity Theorem lets one translate to nullities.
Given such a matrix $A$ we can find (by the Smith Normal Form algorithm) invertible matrices of integers $P,Q$ of determinant $\pm 1$ such that $PAQ=D$ where
$$
D=\begin{pmatrix}
d_1 & 0 & \dots & 0 & \dots & 0\\
0 & d_2 & \dots & 0 & \dots& 0\\
\vdots&\vdots&\ddots & 0 &\dots& 0\\
0 & 0 &\dots& d_k & \dots &0\\
\vdots &\vdots& & \vdots& & \vdots\\
0 & 0 &&0&&0
\end{pmatrix} \text{ (all other entries $0$)}
$$
with non-zero integers $d_1|d_2|\dots|d_k$.
If now $K$ is a field of characteristic $0$, the rank of $A$ is $k$.
If $K$ is a field of characteristic $p$, then the rank of $A$ is the number of the $d_i$ not divisible by $p$. Since $d_1\mid d_2\mid\dots\mid d_k$, this is the largest $s$ such that $p\nmid d_s$ (and $0$ if $p\mid d_1$).
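To see the phenomenon concretely, here is a small sketch that computes the rank directly by Gaussian elimination (over $\Bbb Q$ with exact fractions, or over $\Bbb F_p$), rather than via the Smith Normal Form:

```python
from fractions import Fraction

def rank(rows, p=None):
    """Count pivots after row reduction: over Q if p is None, else over F_p."""
    A = [[Fraction(x) if p is None else x % p for x in row] for row in rows]
    r = 0
    for c in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        # Normalize the pivot row (pow(..., -1, p) is the modular inverse, Python 3.8+).
        inv = 1 / A[r][c] if p is None else pow(A[r][c], -1, p)
        A[r] = [x * inv if p is None else (x * inv) % p for x in A[r]]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c]
                A[i] = [(x - f * y) if p is None else (x - f * y) % p
                        for x, y in zip(A[i], A[r])]
        r += 1
    return r

# diag(1, 2) is already its own Smith Normal Form: d_1 = 1, d_2 = 2,
# so the rank is 2 over Q but drops to 1 over F_2.
M = [[1, 0], [0, 2]]
print(rank(M), rank(M, p=2))   # 2 1
```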
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2995107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Egorov’s Theorem (?) Let $(X, \mathbb A, m)$ be a measurable space and let $\{f_n : X \to \mathbb R\}_{n \in \mathbb N}$ be a sequence of Borel measurable functions. If such sequence converges $m$-almost everywhere to some Borel measurable $f: X \to \mathbb R$, I have to prove that for any $\epsilon > 0$ there exists $A \in \mathbb A$ with $m(A) < \epsilon$ and such that $$\sup_{x \in X \setminus A} |f_n(x) - f(x) | \to 0$$ as $n \to + \infty$.
My question is simple: Isn't this just Egorov’s Theorem?
| Yes, it is, and additionally you need a finite measure space. The statement fails for non-finite measures: for example, take $f_n = 1_{[n,n+1]}$; then $f_n \rightarrow 0$ pointwise, but if $\lambda(A) < \varepsilon < 1$, then we must have $\lambda([n,n+1] \setminus A ) >0$ for every $n$. Thus for each $n$,
$$\sup_{x \in \mathbb{R} \setminus A} |f_n(x) - 0| =1.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2995213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
contradiction proof on divides Suppose a,b ∈ Z. If 4 | $(a^2 + b^2)$ then a and b are not both odd.
So, assuming that 4 | $(a^2 + b^2)$ and $a$ and $b$ are odd
this gives $4k=(2l+1)^2+(2u+1)^2$ for some $k,l,u\in z$
eventually leading to $4k=4(l^2+l+u)+2(u^2+1)$
The RHS is not a multiple of 4 when $u=2$ contradiction.
Is this valid, thanks.
| It's not yet valid, because you haven't shown why the RHS cannot be a multiple of $4$. You cannot simply set $u=2$, because the $u$ you have is already determined by $b$, since $b=2u+1$.
To correct your proof, re-think how you got from
$$4k=(2l+1)^2 + (2u+1)^2$$
to
$$4k = 4(l^2+l+u) + 2(u^2+1)$$
because I think you were a bit sloppy here.
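For intuition, a brute-force check confirms the fact the corrected algebra should deliver: for odd $a, b$, the sum $a^2+b^2$ is always $\equiv 2 \pmod 4$ (an empirical sketch, not the proof itself):

```python
# Odd squares are 1 mod 4, so a^2 + b^2 is 2 mod 4 for odd a, b.
residues = {(a * a + b * b) % 4 for a in range(1, 100, 2) for b in range(1, 100, 2)}
print(residues)   # {2}
```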
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2995327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Short mathematical proofs for teaching I am looking for short proofs in order to illustrate undergraduate notions. Most of the time students struggle with technical exercises without having the time, before going on with the following semester, to realize some great applications or insights about the objects introduced.
I would like to have some such topics to present in detail, in between 10 and 30 minutes on blackboard. To show some examples in my mind:
*
*Poisson formula used to prove Minkowski theorem
*Dimension of spaces of modular forms
*Isoperimetric inequalities
*Prime Number Theorem (or weakened versions) by analytic methods
I would like to develop these themes in many other directions: group actions, representation theory, complex analysis, algebraic number theory, local inversion, PDEs, etc.
| For a class on abstract algebra, there is the proof that the complex numbers are an algebraically closed field, following Artin. This is very short and can be presented in $10$ to $30$ minutes on the blackboard. One obtains it as a "great application" of Galois theory. For details, see also here:
Is there a purely algebraic proof of the Fundamental Theorem of Algebra?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2995474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Integrate squared trigonometric function I'm trying to integrate $\int_a^b \left( \frac{1}{1+x^2} \right)^2 dx$
I know that $\frac{d}{dx} \arctan(x) = \frac{1}{1+x^2}$, but how can I integrate with the squared part?
I've tried substitution with no success.
| Full Work
Perform a substitution:
$$x = \tan(u), \qquad dx = \sec^2(u)\,du$$
Then:
$$\int\frac{\sec^2(u)}{(1+\tan^2(u))^2}\, du$$
Use the trig identity:
$$1+\tan^2(u) = \sec^2(u)$$
Then:
$$\int\frac{\sec^2(u)}{\sec^4(u)}\, du = \int\frac{du}{\sec^2(u)} = \int\cos^2(u)\, du$$
Use the identity:
$$\cos^{2}(u) = \frac{1}{2}(1+\cos(2u))$$
Then:
$$\frac{1}{2}\int(1+\cos(2u))\, du = \frac{1}{2}u + \frac{1}{4}\sin(2u) + C$$
Now either substitute back with $u = \arctan(x)$, or solve for new limits by plugging $a$ and $b$ into $u = \arctan(x)$.
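As a numerical cross-check (a sketch; the antiderivative is rewritten in terms of $x$ using $u=\arctan(x)$ and $\sin(2u)=\frac{2x}{1+x^2}$):

```python
import math

def F(x):
    # (1/2)u + (1/4)sin(2u) with u = arctan(x), i.e.
    # arctan(x)/2 + x / (2(1 + x^2)).
    return math.atan(x) / 2 + x / (2 * (1 + x * x))

def integrand(x):
    return 1 / (1 + x * x) ** 2

# Central-difference check that F' matches the integrand.
h = 1e-6
for x in (-2.0, 0.0, 0.5, 3.0):
    print(x, (F(x + h) - F(x - h)) / (2 * h), integrand(x))
```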
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2995619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 5
} |
Longest element of Weyl Group for $G_2$ Let $\mathfrak{g}$ be a semisimple Lie Algebra, $\mathfrak{t}$ a Cartan Subalgebra, $\Phi$ the corresponding set of roots, $\Delta \subset \Phi$ a root basis and $W$ the Weyl Group with respect to $\Delta$
I am having trouble finding the longest element of the Weyl Group $w_0$ as a product of the simple reflections $w_\alpha \in W$ in the case of $G_2$.
Here are my thoughts:
Taking $\Delta = \{\alpha, \beta\}$ where in the drawing of $G_2$ (that takes the shape of the star of david), the upper-left corner is $\beta$ and the point to the right of $0$ is $\alpha$.
Then this indeed qualifies as a root basis, and using the fact that for $G_2$ we have $W \cong D_{12}$ (a quoted result from earlier in my course), then:
Letting $w_\alpha = s \in D_{12}$, we see that $w_\beta = r^2s$ where $r$ is a clockwise rotation by $\frac{\pi}{3}$.
Further, we may note that $w_0$ must send $\Delta$ to $-\Delta$ and so $w_0 = w_\alpha w_{3\alpha + 2\beta}$
But, continuing to identify $W$ with $D_{12}$, we find that $w_{3\alpha+2\beta} = r^3s$ which cannot be generated by $s, r^2s$ which seems to imply that $W$ is not generated by the simple reflections.
Clearly something has gone wrong here, and I am really struggling to find what that might be.
| While Travis' answer gives a nice hands-on calculation, I like to point out two answers to related questions which put things in perspective:
Anton Geraschenko's answer here states, among other things, that the longest element in most simple types (actually, all except $A_{n \ge 2}, D_{2n+1}$ and $E_6$) is just $-id$. So this is also the case here, $w_0$ must be multiplication with $-1$ (which, since we are in two dimensions, is the same as rotating by $\pi$).
Allen Knutson's answer here on MathOverflow gives a nice general method to express $w_0$ as product of simple reflections. In the case of type $G_2$, we can choose $w=w_\alpha, b=w_\beta$, and the Coxeter number for type $G_2$ is $h=6$, so the general formula there gives $w_0 =(w_\alpha w_\beta)^{h/2} = (w_\alpha w_\beta)^{3}$. Switching the roles of $w$ and $b$, which is allowed, gives alternatively $w_0=(w_\beta w_\alpha)^{3}$, as in the other answer.
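One can also verify $(w_\alpha w_\beta)^3=-\operatorname{id}$ directly with $2\times 2$ matrices (a sketch, assuming the standard planar realization with $\alpha$ along the $x$-axis and $\beta$ at $150^\circ$, the angle between the simple roots in type $G_2$):

```python
import math

def refl(theta):
    # Reflection fixing the line perpendicular to the unit vector at angle theta.
    c, s = math.cos(theta), math.sin(theta)
    return [[1 - 2 * c * c, -2 * c * s], [-2 * c * s, 1 - 2 * s * s]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

s_a = refl(0.0)               # reflection for alpha = (1, 0)
s_b = refl(5 * math.pi / 6)   # reflection for beta at 150 degrees

w = matmul(s_a, s_b)          # a rotation by pi/3
w0 = matmul(matmul(w, w), w)  # (s_a s_b)^3
print(w0)                     # numerically -identity
```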
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2995727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Proving Equivalence Relations, Constructing and Defining Operations on Equivalence Classes I think I have an intuitive sense of how ordered pairs can function to specify equivalence classes when used in the construction of integers and rationals, for example. I put the cart before the horse, however, and am less well versed in how to prove an equivalence relation, construct equivalence classes, and define operations on equivalence classes. I have received feedback that one may not define operations on equivalence classes by appealing to individual elements (e.g., it is insufficient to indicate [(a,b)] + [(c,d)] = [(a+c,b+d)]). How does one prove equivalence relations, construct equivalence classes, and define operations on those classes? If the answer is too long for this forum, is there a good demonstration available online?
| Equivalence relations can be defined on any set $X$.
For a relation $R$ on a set $X$, $R$ is an equivalence relation if $R$ is reflexive, symmetric, and transitive. So to check/prove that a relation $R$ on $X$ is an equivalence relation, you need to check that $R$ satisfies those three properties.
Now given an equivalence relation $R$ on $X$, for any $x\in X$, the equivalence class of $x$ is $[x]_R=\{y\in X:yRx\}$. So the equivalence class of some $x\in X$ is the set of all $y\in X$ that are related to $x$. These equivalence classes are not really "constructed" as much as determined by $R$.
And you can define whatever operation you want on equivalence classes.
EXAMPLE: Let $X$ be the set of strings of length 1 to 4 formed by the English alphabet and define a relation $R$ on $X$ by $xRy$ iff $x$ has the same length as $y$. You can show that $R$ is an equivalence relation.
Now take any string from $X$, say 'xyza', then the equivalence class of 'xyza' is $[xyza]_R=\{y\in X:yRxyza\}$, the set of strings with the same length as 'xyza', i.e. the set of strings with length 4.
There are many operations you can define on the equivalence classes but the important thing is that any such operation can only have one output. You could say, given two equivalence classes $[x]$ and $[y]$, define the operation $+$, by $[x]+[y]=[xy]$. So $+$ outputs the set of all strings with the same length as 'xy', the string formed by concatenating 'x' and 'y'.
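The example can be made concrete in code (a sketch shrunk to a 3-letter alphabet and lengths 1 to 3 so the set stays small; the names are made up for illustration):

```python
from itertools import product

letters = "abc"
X = ["".join(t) for n in (1, 2, 3) for t in product(letters, repeat=n)]

def eq_class(x):
    # All strings in X related to x, i.e. of the same length.
    return {y for y in X if len(y) == len(x)}

# The operation [x] + [y] = [xy] is well defined: the result depends only on
# len(x) + len(y), not on which representatives were chosen.
assert eq_class("ab") == eq_class("ca")
print(len(eq_class("a")), len(eq_class("ab")), len(eq_class("abc")))   # 3 9 27
```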
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2995887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Testing whether the following integral converges
Determine whether the following integral converges or diverges:
$$I=\int_0^\infty\frac{x^{80}+\sin(x)}{\exp(x)}\,dx.$$
Since
$$0\overset{?}{<}\frac{x^{80}+\sin(x)}{e^x}\leq\frac{x^{80}+1}{e^x}\sim\frac{x^{80}}{e^x}$$
and if we let $f(x)=x^{80}/e^x$ and $g(x)= \sqrt{x}\,e^{-\sqrt{x}}\,$ we have that
$$\lim_{x\to\infty}\frac{f(x)}{g(x)}=0$$ therefore if $\int_0^\infty g(x)\,dx$ converges so does $I$. But
$$\int_0^\infty\sqrt{x}\,e^{-\sqrt{x}}\,dx\to {\small{\begin{bmatrix}&u=\sqrt{x}&\\&dx=2u\,du&\end{bmatrix}}}
\to\int_0^\infty 2u^2\,e^{-u}\,du=2\Gamma(3)=4.$$
Thus $I$ does converge.
Now, the limit used above is not too trivial to show that it is $0$. Intuitively, it is easy to see why, but on a more rigorous aspect, L'Hôpital's Rule here would get messy. Thus, my question, is there a more straightforward way to show that the above integral converges?
|
Is there a more straightforward way to show that the above integral converges?
We can actually evaluate the integral of interest. Note that we have
$$\begin{align}
\int_0^\infty (x^{80}+\sin(x))e^{-x}\,dx&=\int_0^\infty x^{80}e^{-x}\,dx+ \int_0^\infty \sin(x)e^{-x}\,dx\\\\
&=\Gamma(81)+\frac12\\\\
&=80!+0.5
\end{align}$$
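The value of the sine integral can be confirmed numerically (a sketch using composite Simpson's rule; truncating at $x=50$ is harmless since the tail is bounded by $e^{-50}$):

```python
import math

def simpson(f, a, b, n=100000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

val = simpson(lambda x: math.sin(x) * math.exp(-x), 0.0, 50.0)
print(val)   # close to 0.5
```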
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2995984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Understanding abstract algebra proof of Fermat's Little Theorem The following proof of Fermat's Little Theorem is taken verbatim from Fraleigh's A First Course in Abstract Algebra:
For any field, the nonzero elements form a group under the field multiplication. In particular, for $\mathbb Z_p$, the elements $$1,2,3,\ldots,p-1$$ form a group of order $p-1$ under multiplication modulo $p$. Since the order of any element in a group divides the order of the group, we see that for $b\neq0$ and $b\in\mathbb Z_p$, we have $b^{p-1}=1$ in $\mathbb Z_p$. Using the fact that $\mathbb Z_p$ is isomorphic to the ring of cosets of the form $a+p\mathbb Z$, we see at once that for any $a\in\mathbb Z$ not in the coset $0+p\mathbb Z$, we must have $$a^{p-1}\equiv1\pmod{p}.$$
I have no problem following the proof, except for one thing. The whole proof makes no mention at all about how $p$ must be a prime. Hence it seems like this proof demonstrates Fermat's Little Theorem for all numbers $p$, not just primes, which is absurd! Where precisely does this proof break down when $p$ is not prime? What am I missing? (Sorry if this question is too trivial, but I couldn't find an explanation elsewhere.)
| Due to the hint from Bill Dubuque and Doug M, I think I have resolved my own problem. The bolded claim in the proof, which is integral to the rest of the argument, is only applicable when $\mathbb Z_p$ is indeed a field. That requires every nonzero element to have a multiplicative inverse, i.e. each of $1,2,3,\ldots,p-1$ must be coprime to $p$, and this holds precisely when $p$ is prime.
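A short computational illustration (a sketch, not part of the proof): $b^{n-1}\equiv 1 \pmod n$ for every nonzero $b$ exactly when $\mathbb Z_n$ is a field, i.e. when $n$ is prime:

```python
def flt_holds(n):
    # True iff b^(n-1) = 1 (mod n) for every b in 1..n-1.
    return all(pow(b, n - 1, n) == 1 for b in range(1, n))

primes = [n for n in range(2, 40) if flt_holds(n)]
print(primes)   # exactly the primes below 40
```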
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2996246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Find the sum of $1-\frac17+\frac19-\frac1{15}+\frac1{17}-\frac1{23}+\frac1{25}-\dots$
Find the sum of $$1-\frac17+\frac19-\frac1{15}+\frac1{17}-\frac1{23}+\frac1{25}-\dots$$
a) $\dfrac{\pi}8(\sqrt2-1)$
b) $\dfrac{\pi}4(\sqrt2-1)$
c) $\dfrac{\pi}8(\sqrt2+1)$
d) $\dfrac{\pi}4(\sqrt2+1)$
I have tried a lot.. But i can't find any way to solve it, plz help me.. Advance thanks to u
| We are looking for
$$1+\sum_{k=1}^\infty \left(\frac1{8k+1}-\frac1{8k-1}\right)=1-\sum_{k=1}^\infty \frac{2}{64k^2-1}\approx 1-\frac1{32}\zeta(2)\approx \frac{\pi}{8}(\sqrt2+1)$$
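The partial sums bear this out numerically (a sketch; the target value is option (c)):

```python
import math

def partial(N):
    # 1 - sum_{k=1}^{N} 2/(64 k^2 - 1)
    return 1 - sum(2 / (64 * k * k - 1) for k in range(1, N + 1))

target = math.pi / 8 * (math.sqrt(2) + 1)
print(partial(10**6), target)
```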
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2996420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 2
} |
How are the known digits of $\pi$ guaranteed? When discussing with my son a few of the many methods to calculate the digits of $\pi$ (15 yo school level), I realized that the methods I know more or less (geometric approximation, Monte Carlo and basic series) are all convergent but none of them explicitly states that the $n$-th digit calculated at some point is indeed a true digit (that it will not change in further calculations).
To take an example, the Gregory–Leibniz series gives us, for each step:
$$
\begin{align}
\frac{4}{1} & = 4\\
\frac{4}{1}-\frac{4}{3} & = 2.666666667...\\
\frac{4}{1}-\frac{4}{3}+\frac{4}{5} & = 3.466666667...\\
\frac{4}{1}-\frac{4}{3}+\frac{4}{5}-\frac{4}{7} & = 2.895238095...
\end{align}
$$
The integer part has changed four times in four steps. Why would we know that $3$ is the correct first digit?
Similarly in Monte Carlo: the larger the sample, the better the result but do we mathematically know that "now that we tried [that many times], we are mathematically sure that $\pi$ starts with $3$".
In other words:
*
*does each of the techniques to calculate $\pi$ (or at least the major ones) have a proof that a given digit is now correct?
*if not, what are examples of the ones which do and do not have this proof?
Note: The great answers so far (thank you!) mention a proof on a specific technique, and/or a proof that a specific digit is indeed the correct one. I was more interested to understand if this applies to all of the (major) techniques (= whether they all certify that this digit is guaranteed correct).
Or that we have some which do (the ones in the two first answers for instance) and others do not (the further we go, the more precise the number but we do not know if something will not jump in at some step and change a previously stable digit. When typing this in and thinking on the fly, I wonder if this would not be a very bad technique in itself, due to that lack of stability)
| The simplest method to explain to a child is probably the polygon method, which states that the circumference of a circle is bounded from below by the circumference of an inscribed regular $n$-polygon and from above by the circumference of a circumscribed polygon.
Once you have a bound from below and above, you can guarantee some digits. For example, any number between $0.12345$ and $0.12346$ will begin with $0.1234$.
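Archimedes' doubling recurrence makes this concrete (a sketch; starting from the regular hexagon, $b = 3$ and $a = 2\sqrt{3}$ are the inscribed and circumscribed semi-perimeters for the unit circle):

```python
import math

a, b = 2 * math.sqrt(3), 3.0   # circumscribed / inscribed hexagon semi-perimeters
for _ in range(10):            # doubling up to a 6144-gon
    a = 2 * a * b / (a + b)    # circumscribed value for the doubled polygon
    b = math.sqrt(a * b)       # inscribed value for the doubled polygon
    print(b, "< pi <", a)
```

Since $b < \pi < a$ at every step, any leading digits shared by $a$ and $b$ are guaranteed digits of $\pi$.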
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2996541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40",
"answer_count": 11,
"answer_id": 1
} |