How do I compute the following complex number? This was the problem I was given:
Compute the complex number for $\frac{(18-7i)}{(12-5i)}$.
I was told to write this in the form of $a+bi$.
So please give me a hint of how to do this. :)
|
You can "rationalize", or more accurately "real-ize", the denominator by multiplying the numerator and denominator by the denominator's conjugate. It is just like with radicals. You will get:
$$\frac{18-7i}{12-5i}=\frac{(18-7i)(12+5i)}{(12-5i)(12+5i)}=\frac{216-84i+90i-35i^2}{144-60i+60i-25i^2}=\frac{216+6i+35}{144+25}=\frac{251+6i}{169}$$
Written in the requested form $a+bi$, this is: $$\frac{251+6i}{169}=\frac{251}{169}+\frac{6}{169}i$$
If you want the decimal answer, it is around $1.485207101+0.035502959i$.
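As a quick sanity check, any language with built-in complex arithmetic reproduces this; here is a minimal Python sketch (the tolerance is arbitrary):

```python
# Check (18 - 7i)/(12 - 5i) = 251/169 + (6/169)i by direct complex division.
z = complex(18, -7) / complex(12, -5)
expected = complex(251 / 169, 6 / 169)
assert abs(z - expected) < 1e-12
```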
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1757021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Conditional probability involving coin flips A coin has an unknown head probability $p$. Flip $n$ times, and observe $X=k$ heads. Assuming a uniform prior for $p$, the posterior distribution of $p$ is $\mathrm{Beta}(\alpha = k + 1, \beta = n - k + 1)$. Consider $Y$ = number of additional flips required until the first head appears. Find the following distributions:
*
*$P(Y=j|p=\theta)$, for j = 1,2,3,...
*$P(Y=j|X=k)$, for j = 1,2,3,...
I think $P(Y=j|p = \theta) = \frac{P(Y=j,p=\theta)}{P(p=\theta)}$ and similarly for part 2. But are the RV's independent? And how do I find their joint pdf?
|
$\mathsf P(Y=j\mid p=\theta)$ is the conditional probability of $j$ additional flips until another head shows, for a given bias of the coin (after $X=k$ heads in $n$ flips). What (conditional) distribution does this follow? (Hint: you do not need Bayes' rule for this; just identify the model.)
$\mathsf P(p=\theta)$ is the prior distribution of the bias; you are told to evaluate it in two cases: (1) $p$ is Uniform$(0,1)$, (2) $p$ is Beta$(\alpha:=k+1,\beta:=n-k+1)$.
The random variables $p, Y$ are not independent; their joint probability is the product of the prior of $p$ and the conditional of $Y$ given $p$: $\mathsf P(Y=j\mid p=\theta)\,\mathsf P(p=\theta)$.
Then, for each prior, you are asked to find the posterior $\mathsf P(p=\theta\mid Y=j)$, and this is where you apply Bayes' rule:
$$\mathsf P(p=\theta\mid Y=j)~=~\dfrac{\mathsf P(Y=j\mid p=\theta)~\mathsf P(p=\theta)}{\int\limits_0^1\mathsf P(Y=j\mid p=t)~\mathsf P(p=t)\operatorname d t}$$
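If it helps to see the pieces fit together numerically, here is a Python sketch (the helper names `predictive` and `predictive_quad` are mine). Integrating the geometric likelihood $\theta(1-\theta)^{j-1}$ against the Beta$(k+1, n-k+1)$ posterior gives the closed form $\mathsf P(Y=j\mid X=k)=B(k+2,\,n-k+j)/B(k+1,\,n-k+1)$, which the code compares against a direct numerical integration:

```python
import math

def log_beta(a, b):
    # log of the Beta function B(a, b)
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def predictive(j, n, k):
    # Closed form of P(Y=j | X=k): geometric pmf integrated against Beta(k+1, n-k+1)
    return math.exp(log_beta(k + 2, n - k + j) - log_beta(k + 1, n - k + 1))

def predictive_quad(j, n, k, steps=100000):
    # Midpoint-rule approximation of the same integral.
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) / steps
        post = math.exp(k * math.log(t) + (n - k) * math.log(1 - t)
                        - log_beta(k + 1, n - k + 1))   # Beta(k+1, n-k+1) density
        total += t * (1 - t) ** (j - 1) * post / steps
    return total

n, k = 10, 4
assert abs(predictive(3, n, k) - predictive_quad(3, n, k)) < 1e-6
assert abs(sum(predictive(j, n, k) for j in range(1, 2000)) - 1) < 1e-3  # sums to 1
```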
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1757142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Show that there is a step function $g$ over $[a,b]$
Assume $f$ is integrable over $[a,b]$ and $\epsilon > 0$. Show that there is a step function $g$ over $[a,b]$ for which $g(x) \leq f(x)$ for all $x \in [a,b]$ and $\displaystyle \int_{a}^b (f(x)-g(x))dx < \epsilon$.
I am having trouble coming up with a step function that satisfies the second condition. Given any $f(x)$, it is easy to come up with a step function such that $g(x) \leq f(x)$ for all $x \in [a,b]$. But how do we deal with the second condition?
|
This follows from the definition of Riemann integral:
For given $\epsilon>0$ there exists $\delta>0$ such that for every partition of $[a,b]$ that is finer than $\delta$, the lower and upper Riemann sum for that partition differ by less than $\epsilon$ from $\int_a^bf(x)\,\mathrm dx$, which is between them. Let $g$ be the step function corresponding to the lower Riemann sum. Then $g(x)\le f(x)$ for all $x$ and the lower Riemann sum is just $\int_a^bg(x)\,\mathrm dx$. Hence $\int_a^b(f(x)-g(x))\,\mathrm dx<\epsilon$, as desired.
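To make this concrete, here is a small Python sketch with an increasing $f$ (so the infimum on each piece sits at the left endpoint): the upper-minus-lower gap, which dominates $\int_a^b(f-g)\,dx$, drops below any given $\epsilon$ once the partition is fine enough.

```python
def lower_gap(f, a, b, pieces):
    # Upper sum minus lower sum on a uniform partition; for increasing f the
    # infimum / supremum on each piece sit at the left / right endpoints.
    h = (b - a) / pieces
    return sum((f(a + (i + 1) * h) - f(a + i * h)) * h for i in range(pieces))

eps = 1e-3
pieces = 1
while lower_gap(lambda x: x * x, 0.0, 1.0, pieces) >= eps:
    pieces *= 2   # refine until the lower step function g is within eps in integral

assert pieces == 1024  # for f(x)=x² the gap telescopes to (f(1)-f(0))/pieces
```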
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1757238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Understanding the proof of Möbius inversion formula I am trying to understand one step in the proof of the Möbius inversion formula.
The theorem is
Let $f(n)$ and $g(n)$ be functions defined for every positive integer $n$ satisfying $$f(n) = \sum_{d|n}g(d)$$
Then, g satisfies $$g(n)=\sum_{d|n}\mu(d) f(\frac{n}{d})$$
The proof is as follows:
We have $$\sum_{d|n}\mu(d)f(\frac{n}{d}) = \sum_{d|n}\mu(\frac{n}{d})f(d) = \sum_{d|n}\mu(\frac{n}{d}) \sum_{d'|d}g(d') = \sum_{d'|n}g(d')\sum_{m|(n/d')}\mu(m)$$
where, in the last expression, the inner sum is $0$ unless $d'=n$.
My question is: how do we get the last equation? I don't really understand how the author interchanges the summation signs.
Thanks for any help!
|
First, considering the sum
\begin{align*}
\sum_{d|n}\mu(\frac{n}{d})\sum_{m|d}g(m),
\end{align*}
let's take a look into the indices
$$
n=\frac{n}{d}d=\frac{n}{d}\frac{d}{m}m=khm,
$$
with
$$
\frac{n}{d}=k, \frac{d}{m}=h.
$$
Thus, we have
\begin{align*}
\sum_{d|n}\mu(\frac{n}{d})\sum_{m|d}g(m)
&=\sum_{dk=n}\mu(k)\sum_{hm=d}g(m)\\
&=\sum_{khm=n}\mu(k)g(m)\\
&=\sum_{mkh=n}g(m)\sum_{kh=\frac{n}{m}}\mu(k)\\
&=\sum_{m|n}g(m)\sum_{k|\frac{n}{m}}\mu(k)\\
&=\sum_{m|n}g(m)[\frac{m}{n}]=g(n).
\end{align*}
If you are reading Apostol, then in his book's convention, we can see that we're in fact doing the convolution
$$
f*\mu=(g*U)*\mu=g*(U*\mu)=g*I=g,
$$
with $U(n)=1$ and $I(n)=[\frac{1}{n}].$
The tricky part here is that there are really three arithmetic functions being convolved with one another, namely $f$, $g$ and $U$.
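A numerical spot check of the inversion formula (the `mobius` implementation below is a standard trial-division sketch):

```python
def mobius(n):
    # μ(n): 0 if n has a squared prime factor, else (-1)^(number of prime factors)
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

# Take an arbitrary g, build f(n) = Σ_{d|n} g(d), then recover g via inversion.
g = {n: n * n + 3 for n in range(1, 61)}
f = {n: sum(g[d] for d in divisors(n)) for n in g}
for n in g:
    assert sum(mobius(d) * f[n // d] for d in divisors(n)) == g[n]
```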
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1757352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
}
|
number of strings of $5$ lowercase letters $a\cdots z$ that do not contain any letter twice or more What is the number of strings of $5$ lowercase letters $a\cdots z$ that do not contain any letter twice or more?
I think it would be $26\cdot25\cdot24\cdot23\cdot22$, because the first position can be filled in $26$ ways (there are $26$ possible lowercase characters), the second in $25$ ways, and so on.
|
community wiki answer so the question can be closed
Your answer is correct.
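For completeness, the falling-factorial count agrees with Python's `math.perm`:

```python
import math

# 26·25·24·23·22 falling-factorial count, cross-checked against math.perm
count = 26 * 25 * 24 * 23 * 22
assert count == math.perm(26, 5) == 7893600
```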
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1757477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Does the $\cos(\alpha-\beta)$ formula always need $\alpha > \beta$ or not? I'm a beginner student studying the proofs of the sum and difference trigonometry formulas.
There is a formula that:
$$\cos(\alpha-\beta) = \cos(\alpha)\cos(\beta)+\sin(\alpha)\sin(\beta)$$
The tutorial shows only the case where $\alpha > \beta$.
Can this formula be used when $\alpha < \beta$? Is the answer mathematically legitimate? Why, or why not?
Please give me a very detailed, basic, step-by-step explanation. I'm a newbie ^^"
|
The formula holds for all $\alpha,\beta$. If you only know it for the case that $\alpha-\beta>0$ note that for $\alpha-\beta<0$,
$$ \cos(\alpha-\beta)=\cos(\beta-\alpha)=\cos(\beta)\cos(\alpha)+\sin(\beta)\sin(\alpha)$$
(and of course for $\alpha=\beta$, $1=\cos 0=\cos^2(\alpha)+\sin^2(\alpha)$).
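A quick numeric spot check, deliberately including pairs with $\alpha<\beta$:

```python
import math, random

# Spot-check cos(α-β) = cosα·cosβ + sinα·sinβ for random signs and orders.
random.seed(0)
for _ in range(1000):
    a = random.uniform(-10, 10)
    b = random.uniform(-10, 10)
    lhs = math.cos(a - b)
    rhs = math.cos(a) * math.cos(b) + math.sin(a) * math.sin(b)
    assert abs(lhs - rhs) < 1e-12
```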
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1757604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Then there exists a unique natural number $b$ less than $p$ such that $ab \equiv 1 \pmod{p}$. Full question: Let $p$ be a prime and let $a$ be an integer such that $1 \leq a < p$. Then there exists a unique natural number $b$ less than $p$ such that $ab \equiv 1 \pmod{p}$.
Looking for the proof. Is Fermat's little theorem necessary?
|
Hint: When does a linear congruence equation $ax\equiv b \pmod m$ have a solution?
EDIT: If you know the rule regarding division in modular arithmetic, you can find uniqueness.
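A brute-force check of existence and uniqueness for one prime, plus a comparison with the Fermat-little-theorem route, sketched in Python:

```python
# For prime p and 1 ≤ a < p, check that b with ab ≡ 1 (mod p) exists and is unique.
p = 101
for a in range(1, p):
    inverses = [b for b in range(1, p) if (a * b) % p == 1]
    assert len(inverses) == 1                  # exists and is unique
    assert inverses[0] == pow(a, -1, p)        # Python's built-in modular inverse
    assert inverses[0] == pow(a, p - 2, p)     # Fermat's little theorem route
```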
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1758111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
}
|
If $f(x-f(y))=f(-x)+(f(y)-2x)\cdot f(-y)$ what is $f(x)$
Determine all functions $f:\mathbb{R}\rightarrow\mathbb{R}$ such that $$f(x-f(y))=f(-x)+(f(y)-2x)\cdot f(-y), \quad \forall x,y \in \mathbb{R}$$
It's easy to see that $f(x)=x^2$ is a function satisfying the above equation. Thus I thought it would be wise to first prove that $f$ is an even function. The best I did is to conclude that $f(-f(y))=f(-f(-y))$. Then I tried to prove that $f(0)=0$ but failed.
|
This is a loose derivation.
Let $x = 0$, to have:
$$
f(-f(y))=f(0)+f(y)\cdot f(-y)
$$
Let $ y = -y$:
$$
f(-f(-y))=f(0)+f(-y)\cdot f(y)
$$
So $f(-f(y)) = f(-f(-y))$; I think this is sufficient to conclude that $f$ is even, by applying $f^{-1}$ to both sides and multiplying by $-1$.
Now with $f$ even,
$$
f(-f(y))=f(0)+f(y)\cdot f(y)
$$
Let $f(y) = x$, we have:
$$
f(-x)=f(x)=f(0)+x^2
$$
You cannot determine what $f(0)$ is.
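One can at least confirm numerically that $f(x)=x^2$ satisfies the original equation:

```python
import random

# Check that f(x) = x² satisfies f(x - f(y)) = f(-x) + (f(y) - 2x)·f(-y).
f = lambda t: t * t
random.seed(1)
for _ in range(1000):
    x = random.uniform(-5, 5)
    y = random.uniform(-5, 5)
    lhs = f(x - f(y))
    rhs = f(-x) + (f(y) - 2 * x) * f(-y)
    assert abs(lhs - rhs) < 1e-8
```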
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1758348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Tridiagonal matrix inner product inequality I want to show that there is a $c>0$ such that
$$
\left<Lx,x\right>\ge c\|x\|^2,
$$
for all $x\in \ell^2(\mathbb{Z})$, where
$$
L=
\begin{pmatrix}
\ddots & \ddots & & & \\
\ddots & 17 & -4 & 0 & \\
\ddots & -4 & 17 & -4 & \ddots \\
& 0 & -4 & 17 & \ddots \\
& & \ddots & \ddots &\ddots
\end{pmatrix},
$$
is a tridiagonal matrix and
$$
x=
\begin{pmatrix}
\vdots \\
x_{1} \\
x_0 \\
x_{-1} \\
\vdots
\end{pmatrix}.
$$
I know that the following holds
\begin{align}
\left<Lx,x\right>&=\left<\begin{pmatrix}
\ddots & \ddots & & & \\
\ddots & 17 & -4 & 0 & \\
\ddots & -4 & 17 & -4 & \ddots \\
& 0 & -4 & 17 & \ddots \\
& & \ddots & \ddots &\ddots
\end{pmatrix}\begin{pmatrix}
\vdots \\
x_{1} \\
x_0 \\
x_{-1} \\
\vdots
\end{pmatrix},\begin{pmatrix}
\vdots \\
x_{1} \\
x_0 \\
x_{-1} \\
\vdots
\end{pmatrix}\right>\\
&=\left< -4\begin{pmatrix}
\vdots \\
x_{2} \\
x_1 \\
x_{0} \\
\vdots
\end{pmatrix} +17\begin{pmatrix}
\vdots \\
x_{1} \\
x_0 \\
x_{-1} \\
\vdots
\end{pmatrix} -4\begin{pmatrix}
\vdots \\
x_{0} \\
x_{-1} \\
x_{-2} \\
\vdots
\end{pmatrix},\begin{pmatrix}
\vdots \\
x_{1} \\
x_0 \\
x_{-1} \\
\vdots
\end{pmatrix} \right>\\
&=-4\left<\begin{pmatrix}
\vdots \\
x_{2} \\
x_1 \\
x_{0} \\
\vdots
\end{pmatrix},\begin{pmatrix}
\vdots \\
x_{1} \\
x_0 \\
x_{-1} \\
\vdots
\end{pmatrix} \right> + 17\left<\begin{pmatrix}
\vdots \\
x_{1} \\
x_0 \\
x_{-1} \\
\vdots
\end{pmatrix},\begin{pmatrix}
\vdots \\
x_{1} \\
x_0 \\
x_{-1} \\
\vdots
\end{pmatrix} \right> -4\left<\begin{pmatrix}
\vdots \\
x_{0} \\
x_{-1} \\
x_{-2} \\
\vdots
\end{pmatrix},\begin{pmatrix}
\vdots \\
x_{1} \\
x_0 \\
x_{-1} \\
\vdots
\end{pmatrix} \right>.
\end{align}
Hence,
$$
\left<Lx,x\right>=-4k+17\|x\|^2,
$$
where
$$
k=\sum_{j\in \mathbb{Z}}{x_j(x_{j+1}+x_{j-1})}.
$$
Obviously $\|x\|^2\ge 0$, but how can I choose $c$ such that the inequality holds? Here I get stuck, any hints?
|
You can also finish your proof by noting that $k \le 2 \, \|x\|^2$ (by the Cauchy–Schwarz inequality, i.e. Hölder with $p=q=2$). Hence,
$$\langle L \, x , x \rangle \ge -4 \, k + 17 \, \|x\|^2 \ge 9 \, \|x\|^2.$$
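A finite-dimensional sanity check (truncating $L$ to an $n\times n$ tridiagonal block; the identity $\langle Lx,x\rangle = 17\|x\|^2 - 8\sum_j x_j x_{j+1}$ and the bound $\ge 9\|x\|^2$ survive truncation):

```python
import random

# Finite truncation of L: 17 on the diagonal, -4 on the off-diagonals.
def quad_form(x):
    n = len(x)
    norm2 = sum(t * t for t in x)
    cross = sum(x[j] * x[j + 1] for j in range(n - 1))
    return 17 * norm2 - 8 * cross, norm2

random.seed(2)
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(50)]
    q, norm2 = quad_form(x)
    assert q >= 9 * norm2 - 1e-9   # <Lx, x> ≥ 9‖x‖²
```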
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1758470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Solution of $x^2e^x = y$ The other day, I came across the problem (or something that reduced to the problem):
Solve for $x$ in terms of $y$ and $e$: $$x^2e^x=y$$
I tried for a while to solve it with logarithms, roots, and the like, but simply couldn't get $x$ onto one side by itself without having $x$ on the other side too.
So, how can I solve this, step-by-step?
More generally, how can I solve equations that involve both polynomials (e.g. $x^2$, $x^3$) and exponentials (e.g. $e^x$,$10^x$)?
EDIT - I now remember why this question came up. I was reading something about complexity theory (the basics: P, NP, NP-hard, etc.), and I got to a part that talked about how polynomial time is more efficient than exponential time. So, I decided to take a very large polynomial function and a very small exponential function and see where they met. Hence, I had to solve an equation with both polynomials and exponentials, which I figured could reduce to $x^2e^x=y$.
|
Solution with Lambert W:
$$
x^2 e^x=y
\\
x e^{x/2} = \sqrt{y}
\\
\frac{x}{2}\;e^{x/2} = \frac{\sqrt{y}}{2}
\\
\frac{x}{2} = W\left(\frac{\sqrt{y}}{2}\right)
\\
x = 2\;W\left(\frac{\sqrt{y}}{2}\right)
$$
One solution for each branch of the W function.
Other solutions by taking the other square-root:
$$
x = 2\;W\left(\frac{-\sqrt{y}}{2}\right)
$$
Is there "no point" in these solutions? Perhaps. There are known properties of W. Your CAS may already have W coded.
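If your CAS lacks W, a few Newton iterations on $we^w=t$ suffice for the principal branch; this sketch then verifies $x=2\,W(\sqrt y/2)$ against the original equation:

```python
import math

def lambert_w(t, tol=1e-14):
    # Newton's method for the principal branch of W (solves w·e^w = t, t ≥ 0).
    w = math.log(t + 1.0)           # rough starting guess, fine for t >= 0
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - t) / (e * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

for y in [0.5, 1.0, 7.0, 100.0]:
    x = 2 * lambert_w(math.sqrt(y) / 2)
    assert abs(x * x * math.exp(x) - y) < 1e-9 * max(1.0, y)
```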
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1758587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Limit of a complex sequence So I wanted to calculate $$\lim_{n\rightarrow\infty}\frac{n^2}{(4+5i)n^2+(3+i)^n}$$
I thought that I could do it more easily if I calculated $\lim_{n\rightarrow\infty}\frac{(3+i)^n}{n^2}$. First I write $\phi=\arctan(\frac{1}{3})$ so that $3+i=\sqrt{10}(\cos\phi+i\cdot\sin\phi)$. Now we have $\lim_{n\rightarrow\infty}\frac{(3+i)^n}{n^2}=\lim_{n\rightarrow\infty}\frac{(\sqrt{10}(\cos\phi+i\cdot\sin\phi))^n}{n^2}=\lim_{n\rightarrow\infty}\frac{10^{n/2}(\cos(n\cdot\phi)+i\cdot\sin(n\cdot\phi))}{n^2}$. Looking now at the limit of the absolute value of the real and imaginary parts, we see both go to $\infty$. Knowing that, we then have that the complex number should go to $\pm\infty\pm i\infty$. Adding $4+5i$ there doesn't change a lot. If we now look at $\frac{1}{\pm\infty\pm i\infty}$, can we say it equals $0$? I am still a bit confused by the complex infinity, but in theory it should. Is there maybe a better proof of this limit?
|
The correct way of doing this is to show that
$$\lim_{n \to \infty} \left| \frac{n^2}{(4+5i)n^2 + (3+i)^n} \right| =0$$
Now, write
$$\frac{n^2}{(4+5i)n^2 + (3+i)^n} = \frac{1}{(4+5i) + (3+i)^n/n^2}$$
and using the triangle inequality, $$|(4+5i) + (3+i)^n/n^2| \ge |(3+i)^n/n^2| - |4+5i| = |3+i|^n/n^2 - |4+5i| = \frac{\sqrt{10}^n}{n^2} - \sqrt{41} \to + \infty$$
so that by comparison $\lim_{n \to \infty} |(4+5i) + (3+i)^n/n^2| = + \infty$ and you conclude.
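The absolute values can be tabulated directly; in Python:

```python
# |n² / ((4+5i)n² + (3+i)ⁿ)| shrinks because |3+i|ⁿ = (√10)ⁿ dominates n².
terms = [abs(n**2 / ((4 + 5j) * n**2 + (3 + 1j) ** n)) for n in range(1, 40)]
assert terms[-1] < 1e-15          # essentially zero by n = 39
assert terms[0] > terms[10] > terms[-1]
```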
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1758708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Show the Cauchy-Riemann equations hold but f is not differentiable
Let $$f(z)={x^{4/3} y^{5/3}+i\,x^{5/3}y^{4/3}\over x^2+y^2}\text{ if }z\neq0
\text{, and }f(0)=0$$
Show that the Cauchy-Riemann equations hold at $z=0$ but $f$ is not differentiable at $z=0$
Here's what I've done so far:
$\quad$As noted above, there are two cases: $z=0$ and $z\neq0$. The Cauchy-Riemann equations hold at $\quad z\neq0$ because $f(x)$ is analytic everywhere except at the origin.
$\quad$Now we consider $z=0$. In this case,
$$f_x(0,0) = \lim_{x\to 0} \frac{f(x,0)-f(0,0)}{x}= \lim_{x\to 0} \frac{0-0}{x}=0$$
$\quad$and
$$f_y(0,0) = \lim_{y\to 0} \frac{f(0,y)-f(0,0)}{y}= \lim_{y\to 0} \frac{0-0}{y}=0$$
$\quad$So, we see that $\frac{\partial u}{\partial x}=0=\frac{\partial v}{\partial y}$ and $\frac{\partial u}{\partial y}=0= -\frac{\partial v}{\partial x}$. Therefore, the Cauchy-Riemann equations $\quad$hold at $z=0$.
From here, I'm a little confused about how to show that $f$ is not differentiable at $z=0$. I instinctively believe that $f$ is not continuous at $z=0$, which would imply that $f$ is not differentiable at $z=0$. However, I'm not sure how I can prove this.
Any help would be greatly appreciated. Thank you!!
EDIT:
I now see that $f$ is continuous at $z=0$, but I'm not sure how to prove that it is not differentiable. I know I should use $\lim_{z\to 0} \frac{f(z) - f(0)}{z-0}$ to determine differentiability, and that I should get two different answers when I approach the limit from two different paths (for example: $x=0$, $y=0$, or $x=y$), but I'm not sure how to evaluate $\lim_{z\to 0} \frac{f(z) - f(0)}{z-0}$ for $x=0$, $y=0$ or $x=y$.
As before, any help would be much appreciated. Thanks!
|
HINT: Are the limits as $x \to 0$ and $y \to 0$ the same? Because if the limit does not exist and equal the same value for EVERY direction of approach to the origin, then the limit does not exist there.
EDIT: The same argument works with directional derivatives at the origin: if any two are not equal, then the function (considered as a function on $\mathbb{R}^2$) cannot be differentiable, and hence whether or not the Cauchy-Riemann equations hold is insufficient.
However, do note that even if all directional derivatives exist and are equal, this is still not sufficient to prove the differentiability at the origin; it is only a necessary, but not a sufficient condition.
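Numerically, the two difference quotients indeed disagree for this $f$: along the positive real axis the quotient $f(z)/z$ is $0$, while along $y=x$ (with $x>0$) it is $\tfrac12$. A Python sketch:

```python
# Difference quotient f(z)/z along two paths into the origin.
def f(x, y):
    if x == y == 0:
        return 0j
    return (x ** (4 / 3) * y ** (5 / 3) + 1j * x ** (5 / 3) * y ** (4 / 3)) / (x * x + y * y)

for t in [1e-2, 1e-4, 1e-6]:
    along_x = f(t, 0.0) / complex(t, 0.0)        # path y = 0
    along_diag = f(t, t) / complex(t, t)         # path y = x
    assert abs(along_x) < 1e-9
    assert abs(along_diag - 0.5) < 1e-9
```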
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1758824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Show that there does not exist a group $(G,*)$ such that $\mathbb{R}$ is closed under $*$ and the restriction of $*$ to $\mathbb{R}$ is the usual multiplication
Show that there does not exist a group $(G,*)$ with $\mathbb{R}\subset G$ such that $\mathbb{R}$ is closed under $*$ and the restriction of $*$ to $\mathbb{R}$ is the usual multiplication of $\mathbb{R}$.
My approach was: suppose such a group $G$ exists. But I cannot see how to use the fact that the restriction to $\mathbb{R}$ is the usual multiplication. Any hint? Thanks.
|
In a group, every element has an inverse. $1$ is the identity element of the group under multiplication. Now, does zero have a multiplicative inverse in $\mathbb{R}$? That will answer your question.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1758911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Derivative of unknown compound function The problem says:
What is $f'(0)$, given that
$f\left(\sin x −\frac{\sqrt 3}{2}\right) = f(3x − \pi) + 3x − \pi$, $x \in [−\pi/2, \pi/2]$.
So I called $g(x) = \sin x −\dfrac{\sqrt 3}{2}$ and $h(x)=3x − \pi$.
Since $f(g(x)-h(x))=3x − \pi$, I called $g(x)-h(x) = j(x) = \sin x −\frac{\sqrt 3}{2} - 3x +\pi$
I imagined that I should find the function $f$, by finding the inverse of $j(x)$ and doing $(f \circ j \circ j^{-1})(x)$ but I discovered that it is too much difficult.
I guess that's not the best way.
What should I do?
|
Given: $$f(\sin x - \frac{\sqrt 3}{2}) = f(3x-\pi) + 3x - \pi$$
Differentiating both sides with respect to $x$, we get
$$f'\left(\sin x - \frac{\sqrt 3}{2}\right)\cdot\cos x = 3f'(3x-\pi) + 3$$
Putting $x=\frac{\pi}{3}$, we get
$$f'\left(\sin \frac{\pi}{3} - \frac{\sqrt 3}{2}\right)\cdot\cos \frac{\pi}{3} = 3f'\left(3\cdot\frac{\pi}{3}-\pi\right) + 3$$
$$f'(0)\cdot\frac{1}{2} = 3f'(0) + 3$$
Solving for $f'(0)$, we get,
$$f'(0)=-\frac{6}{5}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1759027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Finding the Limit of a Sequence (No L'Hopital's Rule) Okay, I feel almost silly for asking this, but I've been on it for a good hour and a half. I need to find:
$$\lim_{n \to\infty}\left(1-\frac{1}{n^{2}}\right)^{n}$$
But I just can't seem to figure it out. I know it's pretty easy using L'Hopital's rule, and I can "see" that it's going to be $1$, but apparently it's possible to compute using only the results I've proved so far. That is, the sandwich theorem, the addition/subtraction/multiplication/division/constant multiple limit laws and the following standard limits:
$\displaystyle\lim_{n \to\infty}\left(n^{\frac{1}{n}}\right)=1$
$\displaystyle\lim_{n \to\infty}\left(c^{\frac{1}{n}}\right)=1$, where $c$ is a real number, and
$\displaystyle\lim_{n \to\infty}\left(1+\frac{a}{n}\right)^{n}=e^{a}$.
As well as those formed from the ratio of terms in the following hierarchy:
$$1<\log_e(n)<n^{p}<a^{n}<b^{n}<n!<n^{n}$$
Where $p>0$ and $1<a<b$
e.g. $\displaystyle\lim_{n \to\infty}\left(\frac{\log_e(n)}{n!}\right)=0$
At first I thought maybe I could express the limit in the form of the $e^{a}$ standard limit, but I couldn't seem to get rid of the square on $n$. I've also tried the sandwich theorem, it's obviously bounded above by $1$, but I just couldn't find any suitable lower bounds. I'd really appreciate some help, or even just a little hint, thanks!
|
$(1 - \frac1{n^2} )^{n^2} \to e^{-1}$ as $n \to \infty$.
Thus $(1 - \frac1{n^2} )^{n^2} \in [\frac12,1]$ for all sufficiently large $n$.
Now $(1 - \frac1{n^2} )^n = \left( (1 - \frac1{n^2} )^{n^2} \right)^\frac1n \in \left[\left(\tfrac12\right)^\frac1n,1\right]$ for such $n$, and both endpoints tend to $1$ (by the standard limit $c^{1/n}\to1$).
Thus by the squeeze theorem $(1 - \frac1{n^2} )^n \to 1$ as $n \to \infty$.
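A numerical illustration of both limits used here:

```python
import math

# The n²-power tends to 1/e, while the n-power tends to 1.
inner = (1 - 1 / 10**6) ** (10**6)          # n = 1000, so n² = 10^6
assert abs(inner - math.exp(-1)) < 1e-3

ns = (10, 100, 1000, 10000)
vals = [(1 - 1 / n**2) ** n for n in ns]
# (1 - 1/n²)ⁿ ≈ e^(-1/n) ≈ 1 - 1/n, so the gap to 1 shrinks like 1/n
assert all(abs(v - 1) < 1.1 / n for v, n in zip(vals, ns))
```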
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1759124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 8,
"answer_id": 3
}
|
proving every nonempty open set is the disjoint union of a countable collection of open intervals I'm studying real analysis with Royden, and I've looked up similar questions and answers, but I couldn't get the exact answer that I need.
I don't need the whole proof; I'm confused by a certain phrase.
First let me show you what I read:
[excerpt of the proof from Royden]
I know the procedure in this proof, but I don't know why $\{I_x\}$ is countable.
They explain this in line 12, but I can't understand it.
In particular, when I try to construct $\{I_x\}$, there are repeated open intervals for a given component interval of the open subset $O$, so many of the $I_x$ coincide.
Then how can we say that $\{I_x\}$ is countable?
For a specific example: if $(3,10)$ is one interval of the open set $O$, then $I_7$ and $I_8$ are the same, so we write only one of them into $\{I_x\}$; it seems that unless $O$ has countably many component intervals, $\{I_x\}$ cannot be countable.
What did I miss or misunderstand? Let me know.
|
Each $I_x$ contains a rational number say $q_x$.
Since the $I_x$ are disjoint, each $q_x$ belongs to exactly one $I_x$.
This produces a bijection between the sets $\{I_x\}$ and $\{q_x\}$, which is a subset of the rationals and hence countable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1759246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What's the value of $x$ in the following equation?
So this is how I approached this question, the above equations could be simplified to :
$$a = \frac{4(b+c)}{b+c+4}\tag{1}$$
$$b = \frac{10(a+c)}{a+c+10}\tag{2}$$
$$c=\frac{56(a+b)}{a+b+56}\tag{3}$$
From above, we can deduce that $4 > a$ since $\frac{(b+c)}{b+c+4} < 1$ similarly $10 > b, 56 > c$ so $a + b + c < 70$
Let, $$(a + b + c)k = 70\tag4$$
Now let, $$\alpha(b+c) = b+c+4\tag{1'}$$
$$\beta(a+c) = a+c+10\tag{2'}$$
$$\gamma( a+b ) = a+b+56\tag{3'}$$
Now adding the above 3 equations we get :
$$2(a+b+c) + 70 = a(\gamma + \beta) + c(\beta + \alpha) + b(\alpha + \gamma) \rightarrow (2 + k)(a+b+c) = a(\gamma + \beta) + c(\beta + \alpha) + b(\alpha + \gamma)$$
Now from above we see that coefficient of $a,b,c$ must be equal on both sides so, $$(2 + k) = (\alpha + \beta) = (\beta + \gamma) = (\alpha + \gamma)$$
Which implies $\beta = \gamma = \alpha = 1+ \frac{k}{2} = \frac{2 + k}{2}$,
Now from $(1)$ and $(1')$ we get $a = \frac{4}{\alpha} = \frac{8}{2+k}$ similarly from $(2),(2')$ and $(3),(3')$ we find, $b = \frac{20}{2+k}, c = \frac{112}{2+k}$
Thus from above we get $a+b+c = \frac{140}{2+k}$ and from $(4)$ we get: $\frac{140}{2+k} = \frac{70}{k}$ from which we can derive $k = 2$
Thus we could derive $a = 2, b = 5, c = 28$ but, the problem now is $a, b, c$ values don't satisfy equation $(4)$ above for $k =2$
Well, where do I err? And did I take the right approach? Do post the solution of how you solved for $x$.
|
Your deductions are wrong and that is what is misleading you. Integers can be both positive and negative.
If you solve equations (1), (2) and (3) simultaneously you can find a, b and c.
I did this to find $$a=3$$
$$b=5$$
$$c=7$$
You can then plug this into the fourth equation given in the problem to solve for $x$:
$$x = \frac{abc}{ a + b + c} $$
which solves to give $$x=\frac{105}{15}=7$$
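Plugging the values back in (using the simplified equations from the question and the final relation for $x$):

```python
# Verify a = 3, b = 5, c = 7 against the simplified equations, then compute x.
a, b, c = 3, 5, 7
assert a == 4 * (b + c) / (b + c + 4)      # equation (1)
assert b == 10 * (a + c) / (a + c + 10)    # equation (2)
assert c == 56 * (a + b) / (a + b + 56)    # equation (3)

x = a * b * c / (a + b + c)
assert x == 7
```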
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1759327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Closed form for binomial sum with absolute value Do you know whether the following expression has a (nice) closed form or a close enough approximation?
$$\frac{1}{2^n}\sum_{k=0}^{n} \binom{n}{k}|n-2k|$$
Thanks a lot :)
Cheers,
M.
|
If we assume that $n$ is even, $n=2m$, our sum times $2^n$ equals:
$$ \sum_{j=0}^{m-1}\binom{2m}{j}(2m-2j)+\sum_{j=m+1}^{2m}\binom{2m}{j}(2j-2m) =\sum_{j=0}^{m-1}\binom{2m}{j}(4m-4j)$$
where:
$$\sum_{j=0}^{m-1}\binom{2m}{j} = \frac{4^m-\binom{2m}{m}}{2}$$
and:
$$\sum_{j=0}^{m-1}\binom{2m}{j}j = 2m\sum_{j=0}^{m-2}\binom{2m-1}{j}=2m\left(2^{2m-2}-\binom{2m-1}{m-1}\right).$$
The case $n=2m+1$ can be managed in a similar way.
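A brute-force check for even $n$ of the direct sum and the two partial-sum identities (with the weighted one in the form $2m\bigl(2^{2m-2}-\binom{2m-1}{m-1}\bigr)$; combining the pieces, the whole sum collapses to $n\binom{n}{n/2}$):

```python
import math

for m in range(1, 12):
    n = 2 * m
    # direct evaluation of the full sum (before dividing by 2^n)
    direct = sum(math.comb(n, k) * abs(n - 2 * k) for k in range(n + 1))
    assert direct == n * math.comb(n, m)        # the closed form the pieces combine to
    # partial sum of binomial coefficients below the middle
    assert sum(math.comb(n, j) for j in range(m)) == (4**m - math.comb(n, m)) // 2
    # weighted partial sum, via j·C(2m, j) = 2m·C(2m-1, j-1)
    assert sum(math.comb(n, j) * j for j in range(m)) \
        == 2 * m * (2**(2 * m - 2) - math.comb(2 * m - 1, m - 1))
```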
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1759420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
limit to infinity : trouble with l'hopital Given the following limit for s positive constant
$\lim_{x\to \infty} xe^{-sx}(\sin x-s\cos x) $
how can I prove that the above is equal to $0$ ?
I re-write the limit as $ \frac{x(\sin x-s\cos x)}{e^{sx}} $ and then I use de l'Hopital theorem but it seems that I only go round and round..
I would appreciate any help! Thanks in advance!!
|
Since $|\sin x|\le 1$ and $|\cos x|\le 1$, we have $|x e^{-sx}(\sin x - s\cos x)| \le (1+s)\,x\,e^{-sx}$. The exponential $e^{sx}$ grows faster than any polynomial, so this bound tends to $0$, and by the squeeze theorem the limit as $x\to\infty$ is $0$.
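Numerically, since $|\sin x - s\cos x|\le 1+s$, the whole expression is squeezed by $(1+s)\,x\,e^{-sx}$, which collapses quickly; for example with $s=\tfrac12$:

```python
import math

s = 0.5
xs = [10.0, 50.0, 200.0]
vals = [x * math.exp(-s * x) * (math.sin(x) - s * math.cos(x)) for x in xs]
bounds = [(1 + s) * x * math.exp(-s * x) for x in xs]
assert all(abs(v) <= b for v, b in zip(vals, bounds))   # the squeeze bound holds
assert abs(vals[-1]) < 1e-40                            # already tiny at x = 200
```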
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1759511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
These two spaces are not homeomorphic... right? Why is $\Bbb R\times[0,1]\not \cong \Bbb R^2$? We can't use the popular argument of deleting a point and finding that one space has more path components than the other here.
So my idea is to delete a strip $\{0\}\times[0,1]$ from $\Bbb R\times[0,1]$.
But is $\Bbb R^2-f(\{0\}\times[0,1])$ always path-connected when $f:\Bbb R\times[0,1]\to \Bbb R^2$ is a homeomorphism?
|
The property of simple connectivity will distinguish $\Bbb{R} \times[0,1]$ from $\Bbb{R}^2$. Removing a boundary point, say $(0,0)$, from $\Bbb{R} \times[0,1]$ leaves a simply connected space, whereas removing any point from $\Bbb{R}^2$ leaves a space that is not simply connected.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1759660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 1
}
|
Solve these functional equations: $\int_0^1\!{f(x)^2\, \mathrm{dx}}= \int_0^1\!{f(x)^3\, \mathrm{dx}}= \int_0^1\!{f(x)^4\, \mathrm{dx}}$
Let $f : [0,1] \to \mathbb{R}$ be a continuous function such that
$$\int_0^1\!{f(x)^2\, \mathrm{dx}}= \int_0^1\!{f(x)^3\, \mathrm{dx}}= \int_0^1\!{f(x)^4\, \mathrm{dx}}$$
Determine all such functions $f$.
So far, I've managed to show that $\int_0^1\!{f(x)^t\, \mathrm{dx}}$ is constant for $t \geq 2$. Help would be appreciated.
|
Calculate the difference of first and second integral then second and third.
$$\int_0^1f^2(x)-f^3(x)dx=0$$
$$\int_0^1f^3(x)-f^4(x)dx=0$$
Subtract both equations:
$$\int_0^1f^2(x)-2f^3(x)+f^4(x)dx=0$$
$$\int_0^1f^2(x)\left[1-2f(x)+f^2(x)\right]dx=0$$
$$\int_0^1f^2(x)(1-f(x))^2dx=0$$
Look at the integrand: it is the product of two squares, hence nonnegative, so the integral can only be $0$ if the integrand vanishes identically, i.e. $f(x)=0$ or $f(x)=1$ at each point. Since $f$ is continuous, it must be identically $0$ or identically $1$.
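A quick midpoint-rule check that the constant solutions work while a non-constant choice such as $f(x)=x$ does not:

```python
def integrate(func, steps=100000):
    # midpoint rule on [0, 1]
    return sum(func((i + 0.5) / steps) for i in range(steps)) / steps

for f, should_match in [(lambda x: 1.0, True), (lambda x: 0.0, True), (lambda x: x, False)]:
    i2, i3, i4 = (integrate(lambda x: f(x) ** p) for p in (2, 3, 4))
    matches = abs(i2 - i3) < 1e-6 and abs(i3 - i4) < 1e-6
    assert matches == should_match
```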
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1759736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 3,
"answer_id": 2
}
|
How do I calculate the number of unique permutations in a list with repeated elements? I know that I can get the number of permutations of items in a list without repetition using
(n!)
How would I calculate the number of unique permutations when some of the $n$ elements are repeated?
For example
ABCABD
I want the number of unique permutations of those 6 letters (using all 6 letters).
|
Do you want the number of combinations of a fixed size? Or all sizes up to 6? If your problem is not too big, you can compute the number of combinations of each size separately and then add them up.
Also, do you care about order? For example, are AAB and ABA considered unique combinations? If these are not considered unique, consider trying stars-and-bars: How to use stars and bars (combinatorics)
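If order does matter and all 6 letters are used (the original question), brute force over `itertools.permutations` agrees with the multiset-permutation formula $\frac{6!}{2!\,2!}$, dividing by the repeats of A and of B:

```python
import math
from itertools import permutations

# Distinct arrangements of ABCABD: brute force vs 6! / (2!·2!).
letters = "ABCABD"
distinct = {"".join(p) for p in permutations(letters)}
assert len(distinct) == math.factorial(6) // (math.factorial(2) * math.factorial(2)) == 180
```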
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1759845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Understanding a proof by induction In the following proof by induction:
Problem: Prove by induction that $1+3+ \ldots+ \ (2n-1)=n^2$
Answer:
a) $P(1)$ is true since $1^2=1$
b)Adding $2n+1$ to both sides we obtain:
$$
1+3+..+(2n-1)+(2n+1)=n^2+2n+1=(n+1)^2
$$
Why $2n+1$ ? Where does this come from?
And how does knowing that
$$
1+3+ \ldots +(2n-1)+(2n+1)=(n+1)^2
$$
prove anything?
Appreciate any light, thanks.
|
$P(n)$ is the statement $1+3+\dots+(2n-1)=n^2$. To carry out a proof by induction, you must establish the base case $P(1)$, and then show that if $P(n)$ is true then $P(n+1)$ is also true.
In this problem, $P(n+1)$ is the statement $1+3+\dots+(2n-1)+(2n+1)=(n+1)^2$, because $2(n+1)-1=2n+1$. So by starting with $P(n)$ and adding $2n+1$ to both sides, you can prove $P(n+1)$.
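The identity itself is easy to spot-check by brute force, which can be reassuring alongside the induction:

```python
# 1 + 3 + ... + (2n-1) = n² for many n
for n in range(1, 500):
    assert sum(2 * k - 1 for k in range(1, n + 1)) == n * n
```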
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1760021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
Is a ball always connected in a connected metric space? If I have a connected metric space $X$, is any ball around a point $x\in X$ also connected?
|
There are even complete path-connected metric spaces that contain a point $x$ such that no ball around $x$ is connected, for example
$$ \{\langle x,0\rangle \mid x\ge 1\} \cup
\{\langle 1,y\rangle \mid 0\le y\le 1 \} \cup
\{\langle x,\tfrac1x \rangle \mid x\ge 1 \} \cup
\bigcup_{n=3}^\infty \{ \langle x,\tfrac1n \rangle \mid 2 \le x \le n \}
$$
No ball of finite radius around $\langle 2,0\rangle$ is connected.
And here's a sketch of a complete and connected (but not path-connected) subspace of $\mathbb R^2$ where no ball whatsoever, around any point, is connected:
Each of the gaps in the Cantor set on the left will eventually be contracted -- but the smaller the gap, the farther to the right will this happen.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1760148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 5,
"answer_id": 0
}
|
Picking two random points on a disk I try to solve the following:
Pick two points $M$ and $N$ independently and uniformly at random on the disk $\{(x,y)\in\mathbb{R}^2:x^2+y^2 \leq 1\}$. Let $P$ be the distance between those points, $P=d(M,N)$. What is the probability of $P$ being smaller than the radius of the disk?
Picking a specific point $s$ as the first point $M$ on the disk has probability $P(M=s)=0$, since we are looking at the real numbers. My intuition, which is probably wrong, is that choosing the second point $N$ such that the distance between them is bigger than the radius of the disk is as likely as it being smaller.
I have absolutely no idea how to solve this.
|
Let $R$ be the distance between $O$, the origin, and $M$. The probability that $R$ is less than or equal to a value $r$ is
$$P(R\le r) = \begin{cases}
\frac{\pi r^2}{\pi\cdot 1^2} = r^2, & 0\le r\le 1\\
1, &r>1\\
0, &\text{otherwise}
\end{cases}$$
The probability density function of $R$ is
$$f_R(r) = \frac{d}{dr}P(R\le r) = \begin{cases}
2r, & 0< r< 1\\
0, &\text{otherwise}
\end{cases}$$
If $OM = d$, $0\le d\le 1$, then the area of the intersection of the unit circles centred at $O$ and $M$ respectively is
$$\begin{align*}
A(d) &= 4\left(\frac12\cos^{-1}\frac d2\right) - 2\left(\frac12d\sqrt{1^2-\frac {d^2}{2^2}}\right)\\
&= 2\cos^{-1}\frac d2 - \frac 12 d\sqrt{2^2-d^2}\\
\frac{A(d)}{\pi\cdot 1^2}&=\frac{2}{\pi}\cos^{-1}\frac d2 - \frac{1}{2\pi} d\sqrt{2^2-d^2}
\end{align*}$$
The last line is the probability that $N$ is within a unit distance of $M$, as a function of distance $d = OM$.
Combining with the probability density function $f_R$ above, the required probability is
$$p = \int_0^1\left(\frac{2}{\pi}\cos^{-1}\frac r2 - \frac{1}{2\pi} r\sqrt{2^2-r^2}\right)\cdot 2r\ dr \approx 0.58650$$
(WolframAlpha)
$$\begin{align*}
I_1 &= \int_0^1 r\cos^{-1}\frac r2\ dr\\
&= \int_{\pi/2}^{\pi/3} 2\theta\cos\theta\cdot(-2)\sin\theta \ d\theta && (r = 2\cos \theta)\\
&= -2\int_{\pi/2}^{\pi/3}\theta\sin 2\theta \ d\theta\\
&= \int_{\pi/2}^{\pi/3}\theta\ d\cos2\theta\\
&= \left[\theta\cos2\theta\right]_{\pi/2}^{\pi/3} - \int_{\pi/2}^{\pi/3}\cos2\theta\ d\theta\\
&= \left[\theta\cos2\theta - \frac 12 \sin2\theta\right]_{\pi/2}^{\pi/3}\\
&= -\frac{\pi}{6} - \frac{\sqrt3}{4} +\frac\pi2\\
&= \frac\pi 3 - \frac{\sqrt3}{4}\\
\frac4\pi I_1 &= \frac 43 - \frac{\sqrt3}{\pi}
\end{align*}$$
$$\begin{align*}
I_2 &= \int_0^1 r^2\sqrt{2^2-r^2}\ dr\\
&= -16\int_{\pi/2}^{\pi/3} \cos^2\theta\sin^2\theta\ d\theta&&(r=2\cos \theta)\\
&= -4\int_{\pi/2}^{\pi/3} \sin^2 2\theta\ d\theta\\
&= -4\int_{\pi/2}^{\pi/3} \frac{1-\cos4\theta}{2}\ d\theta\\
&= -2\left[\theta-\frac{1}{4}\sin4\theta\right]_{\pi/2}^{\pi/3}\\
&= -2\left(\frac\pi3+\frac{\sqrt3}8-\frac\pi2\right)\\
&= \frac\pi3-\frac{\sqrt3}{4}\\
\frac1\pi I_2 &= \frac13-\frac{\sqrt3}{4\pi}
\end{align*}$$
$$p = 1-\frac{3\sqrt3}{4\pi}$$
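As a sanity check (not part of the original answer), a quick Monte Carlo simulation reproduces the closed form $p = 1-\frac{3\sqrt3}{4\pi}\approx 0.5865$; the rejection sampler, seed, and sample size below are my own arbitrary choices:

```python
import math
import random

def sample_disk(rng):
    # Rejection sampling: draw uniformly from the square, keep points in the disk.
    while True:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y <= 1:
            return x, y

rng = random.Random(0)
n = 200_000
hits = 0
for _ in range(n):
    x1, y1 = sample_disk(rng)
    x2, y2 = sample_disk(rng)
    if math.hypot(x1 - x2, y1 - y2) < 1:   # distance smaller than the radius
        hits += 1

p_mc = hits / n
p_exact = 1 - 3 * math.sqrt(3) / (4 * math.pi)   # closed form derived above
print(p_mc, p_exact)   # both close to 0.5865
```

With 200,000 trials the Monte Carlo standard error is about 0.001, so agreement to two decimal places is expected.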
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1760230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Is a map that preserves the hyperbolic distance biholomorphic? Let $\lVert z \rVert_w = \frac{|z|}{1 - |w|^2}$ be the hyperbolic length of a tangent vector $z$ at the point $w \in \mathbb{D}$, and let the hyperbolic metric be $d(z, w) = \inf_\gamma \int_0^1 \lVert \gamma'(t) \rVert_{\gamma(t)} \, dt$. Automorphisms of $\mathbb{D}$ preserve this metric. I'm looking to prove the converse, that is, if $f \colon \mathbb{D} \to \mathbb{D}$ preserves $d$ then it is a biholomorphism from $\mathbb{D}$ to itself, or $\overline{f}$ is.
If $f$ is assumed $C^1$ it is mostly not so hard - you can get that at every point either $f$ is holomorphic or its conjugate is by considering $\psi_{f(0)} \circ f$, which preserves the origin and must also preserve the ordinary absolute value, hence is isotropic hence either $f$ or $\overline{f}$ is holomorphic at the origin, and since $f$ was arbitrary this is true of all points. (And hopefully this gives that one of them is globally holomorphic! But I haven't figured out that part yet.)
If nothing is assumed of $f$, is the result even true? I see why $f$ must be continuous, but does preservation of the hyperbolic metric imply $C^1$-ness?
|
Yes, this is true. The proof is not specific to hyperbolic metric: one can argue the same way about the Euclidean metric on the plane, or in higher dimensions.
Step 1: Compose $f$ with a Möbius transformation that sends $f(0)$ to $0$. This reduces the problem to the case $f(0)=0$.
Step 2: Since $f$ is an isometry, every circle around the origin is mapped to itself. By composing $f$ with a rotation, we reduce the problem to the case $f(1/2)=1/2$.
Step 3: Consider the (hyperbolic) circles with centers $0,1/2$ passing through $i/2$. Like any two distinct circles, they have at most two points in common: specifically, $i/2$ and $-i/2$. Hence, $f(i/2)$ must be one of these points. By composing $f$ with conjugation, the problem reduces to the case $f(i/2)=i/2$.
Step 4. The same reasoning as in step 3 shows that $f(z)$ is either $z$ or $\bar z$ for every $z$. And since $d(f(z), i/2) = d(z,i/2)$, it must be $f(z)=z$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1760375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Type 2 Error Question - How to calculate for a two tailed? The modulus of rupture (MOR) for a particular grade of pencil lead is known to have
a standard deviation of 250 psi. Process standards call for a target value of 6500 psi
for the true mean MOR. For each batch, an inspector tests a random sample of 16
leads. Management wishes to detect any change in the true mean MOR. (Assume normal
distribution.)
QUESTION : Find the probability of type II error of the test when the true mean MOR is 6400.
How do we solve for two tails? I am able to get 0.4821, but with a 1-tail method:
$(6397-6400)/62.5 \approx -0.045$
then using normcdf in MATLAB, I got this value.
|
You have not said what significance level you are using, but it seems that what you did for one tail used $5\%$ and
*
*standard error of the mean $\dfrac{250}{\sqrt{16}} = 62.5$
*If the true population mean is $6500$ then there is a $5\%$ probability that the sample mean will be below $6500 + 62.5 \Phi^{-1}(0.05) \approx 6397$
*If the true population mean is $6400$ then there is about a $\Phi\left(\dfrac{6397-6400}{62.5}\right) \approx 48.21\%$ probability that the sample mean will be below $6397$
You need to do something similar for two tails:
*
*If the true population mean is $6500$ then there is a $2.5\%$ probability that the sample mean will be below $6500 + 62.5 \Phi^{-1}(0.025) \approx 6377.5$
*If the true population mean is $6500$ then there is a $2.5\%$ probability that the sample mean will be above $6500 + 62.5 \Phi^{-1}(0.975) \approx 6622.5$
*If the true population mean is $6400$ then there is about a $\Phi\left(\dfrac{6377.5-6400}{62.5}\right) \approx 35.94\%$ probability that the sample mean will be below $6377.5$
*If the true population mean is $6400$ then there is about a $1-\Phi\left(\dfrac{6622.5-6400}{62.5}\right) \approx 0.02\%$ probability that the sample mean will be above $6622.5$
*So the test rejects with probability about $35.94\% + 0.02\% = 35.96\%$ when the true mean is $6400$ (this is the power), and hence the probability of a type II error, i.e. of failing to detect the change, is about $1 - 35.96\% = 64.04\%$
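The thresholds and tail probabilities in the bullets above can be reproduced with Python's standard library (assuming, as the answer does, a $5\%$ two-sided significance level; `NormalDist` plays the role of MATLAB's `normcdf`/`norminv`):

```python
from statistics import NormalDist

Z = NormalDist()                       # standard normal, like normcdf/norminv
se = 250 / 16 ** 0.5                   # standard error of the mean = 62.5
lo = 6500 + se * Z.inv_cdf(0.025)      # lower acceptance limit, about 6377.5
hi = 6500 + se * Z.inv_cdf(0.975)      # upper acceptance limit, about 6622.5

# Tail probabilities of the sample mean when the true mean is 6400
p_low = Z.cdf((lo - 6400) / se)        # about 0.3594
p_high = 1 - Z.cdf((hi - 6400) / se)   # about 0.0002
print(lo, hi, p_low, p_high)
```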
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1760432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Is the unit circle "stretchy" with respect to its norm? Suppose we have a collection of metric spaces on $\mathbb{R}^n$, each of which has a different p-norm, $1\leq p \leq \infty$. ($p=2$ is Euclidean distance, $p=1$ is taxicab distance, etc.) Then, suppose we have a point $x$ in $\mathbb{R}^2 $ that's within the unit circle of the metric space with the $\infty$-norm, but NOT within the unit circle of the metric space with the 1-norm. I'd like to know if there's a $p$ for which $\|x\|_p$ is exactly 1. Intuitively, I'm asking if, by adjusting the $p$-norm, I can "stretch" the unit circle to sit anywhere I like, as long as it's between the 1-norm and the $\infty$-norm, as suggested by this picture.
I feel like it must be true, and maybe I'm missing something obvious.
|
Yes, this is true by a simple continuity argument. Note that for any fixed $x\in\mathbb{R}^2$, the map $p\mapsto \|x\|_p$ is a continuous function $[1,\infty]\to\mathbb{R}$ (it is obvious that this is continuous for $p<\infty$; continuity as $p\to \infty$ requires a little work but is not hard). So by the intermediate value theorem, if $\|x\|_1>1$ but $\|x\|_\infty<1$ there is some $p$ such that $\|x\|_p=1$.
(Here are the details of the continuity as $p\to \infty$. If $x=(a,b)$ with (WLOG) $a\geq b\geq 0$, then $$a\leq \|x\|_p=(a^p+b^p)^{1/p}\leq (2a^p)^{1/p}=2^{1/p}a.$$ As $p\to\infty$, $2^{1/p}a\to a$, so $\|x\|_p$ is squeezed and must converge to $a$.)
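To make the intermediate-value argument concrete, here is a small numeric illustration (the sample point and bisection bounds are arbitrary choices of mine): for a point strictly inside the $\infty$-norm ball but outside the $1$-norm ball, bisection on $p$ finds the norm that puts it exactly on the unit sphere, using the fact that $\|x\|_p$ is nonincreasing in $p$:

```python
import math

def pnorm(x, p):
    return sum(abs(t) ** p for t in x) ** (1 / p)

x = (0.8, 0.7)   # inside the sup-norm unit ball, outside the 1-norm unit ball
assert max(abs(t) for t in x) < 1 and sum(abs(t) for t in x) > 1

# ||x||_p is nonincreasing in p, so bisect for the p with ||x||_p = 1
lo, hi = 1.0, 50.0
for _ in range(100):
    mid = (lo + hi) / 2
    if pnorm(x, mid) > 1:
        lo = mid
    else:
        hi = mid

p_star = (lo + hi) / 2
print(p_star, pnorm(x, p_star))   # p_star near 2.4, norm essentially 1
```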
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1760556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
inequality between median length and perimeter Is there an inequality between the sum of median lengths and the perimeter? If there is, can you specify a proof as well? I need to use this to solve a question.
I tried using Apollonius theorem.
$$m_a=\sqrt{\frac{2b^2+2c^2-a^2}{4}}$$
|
Yes, there is :
$$\frac 34(\text{the perimeter})\lt \text{(the sum of median lengths)}\lt \text{(the perimeter)}$$
Proof :
Let $G$ be the centroid of $\triangle{ABC}$, and let $X,Y,Z$ be the midpoints of the sides $BC,CA,AB$ respectively.
First of all,
$$GY+GZ\gt YZ,\quad GZ+GX\gt ZX,\quad GX+GY\gt XY$$
and so
$$2(GX+GY+GZ)\gt XY+YZ+ZX=\frac 12(AB+BC+CA),$$
i.e.
$$AX+BY+CZ\gt \frac 34(AB+BC+CA)\tag1$$
Also, since we have
$$AX\lt AZ+ZX=AZ+AY$$
$$BY\lt BX+XY=BX+BZ$$
$$CZ\lt CY+YZ=CY+CX$$
we can have
$$AX+BY+CZ\lt (BX+CX)+(CY+AY)+(AZ+BZ)=AB+BC+CA\tag2$$
From $(1)(2)$,
$$\frac 34(AB+BC+CA)\lt AX+BY+CZ\lt AB+BC+CA,$$
i.e.
$$\frac 34(\text{the perimeter})\lt \text{(the sum of median lengths)}\lt \text{(the perimeter)}$$
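A quick numerical spot-check of the double inequality, using the Apollonius formula quoted in the question (the random sampling is my own addition, not part of the proof):

```python
import math
import random

def medians(a, b, c):
    # Apollonius: m_a = sqrt((2b^2 + 2c^2 - a^2)/4), and cyclically
    return (math.sqrt((2*b*b + 2*c*c - a*a) / 4),
            math.sqrt((2*c*c + 2*a*a - b*b) / 4),
            math.sqrt((2*a*a + 2*b*b - c*c) / 4))

rng = random.Random(1)
checked = 0
while checked < 10_000:
    a, b, c = (rng.uniform(0.1, 10) for _ in range(3))
    if a + b <= c or b + c <= a or c + a <= b:
        continue                       # not a valid triangle, resample
    perim = a + b + c
    msum = sum(medians(a, b, c))
    assert 0.75 * perim < msum < perim
    checked += 1
print("inequality held in", checked, "sampled triangles")
```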
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1760826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Prove: $\arcsin\left(\frac 35\right) - \arccos\left(\frac {12}{13}\right) = \arcsin\left(\frac {16}{65}\right)$ This is not a homework question, its from sl loney I'm just practicing.
To prove :
$$\arcsin\left(\frac 35\right) - \arccos\left(\frac {12}{13}\right) = \arcsin\left(\frac {16}{65}\right)$$
So I changed all the angles to $\arctan$ which gives:
$$\arctan\left(\frac 34\right) - \arctan\left(\frac {12}{5}\right) = \arctan\left(\frac {16}{63}\right)$$
But the problem is after applying formula of $\arctan(X)-\arctan(Y)$ the lhs is negative and not equal to rhs? Is this because I have to add pi somewhere please help.
|
How exactly did you convert to arctan? Careful:
$$\arccos\left(\frac {12}{13}\right) = \arctan\left(\frac {5}{12}\right)
\ne \arctan\left(\frac {12}{5}\right)$$
Draw a right triangle with hypotenuse of length 13, adjacent side (from an angle $\alpha$) with length 12 and opposite side with length 5; then $\cos\alpha = 12/13$ and $\tan\alpha = 5/12$.
Perhaps easier: take the sine of both sides in the original equation:
$$\sin\left(\arcsin\left(\frac 35\right) - \arccos\left(\frac {12}{13}\right)\right) = \sin\left( \arcsin\left(\frac {16}{65}\right)\right)$$
The RHS is $16/65$ and simplify the LHS with $\sin(a-b)=\sin a \cos b - \cos a \sin b$ to get:
$$\frac{3}{5}\frac{12}{13}-\sqrt{1-\frac{9}{25}}\sqrt{1-\frac{144}{169} } = \frac{3}{5}\frac{12}{13}-\frac{4}{5}\frac{5}{13}= \cdots$$
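A floating-point sanity test of both the conversion and the final identity (added for illustration only):

```python
import math

# arccos(12/13) equals arctan(5/12), not arctan(12/5)
assert abs(math.acos(12/13) - math.atan(5/12)) < 1e-12

lhs = math.asin(3/5) - math.acos(12/13)
rhs = math.asin(16/65)
print(lhs, rhs)   # both approximately 0.2487
```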
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1760940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
finding sup and inf of $\{\frac{n+1}{n}, n\in \mathbb{N}\}$ Please just don't present a proof, see my reasoning below
I need to find the sup and inf of this set:
$$A = \{\frac{n+1}{n}, n\in \mathbb{N}\} = \{2, \frac{3}{2}, \frac{4}{3}, \frac{5}{4}, \cdots\}$$
Well, we can see that:
$$\frac{n+1}{n} = 1+\frac{1}{n} > 1$$
Therefore, $1$ is a lower bound for $A$, however I still need to show that it's the greatest lower bound of $A$. Suppose that $1+\frac{1}{n}>c>1$, then $\frac{1}{n}>c-1\implies \frac{1}{c-1}>n\implies$ c has to be $>1$, which is not a problem :c.
Now, for the sup, we have:
$$\frac{n+1}{n} = 1+\frac{1}{n}\le 2$$
because if $n>1$ then $\frac{1}{n}<1$ then $1+\frac{1}{n}<1+1=2$. So, $2$ is an upper bound of $A$, but I still have to show that $2$ is the least upper bound of $A$. I've read that if $a\in A$ and $a$ is an upper bound, then $a$ is the sup (how do I prove it?). But suppose that I didn't know this theorem; then I would have to prove that there is no $c$ such that
$$1+\frac{1}{n}<c<2$$
and such that $c\ge a, \forall a\in A$. Oh, wait, I might have been able to prove the claim above: suppose $c\ge a, \forall a\in A$, with $c\in A$. Then if there exists another $b$ such that $c>b\ge a$, I can see that this $b$ is greater than every member of $A$, but not $c$; therefore there is no such $b$ that is both greater than every member of $A$ and strictly between $a$ and $c$.
|
Answer on "how do I prove it?"
If $a\in A$ is an upper bound of $A$ then any $c$ with $c<a$ is not an upper bound of $A$ since $a$ is an element of $A$ that does not satisfy $a\leq c$.
We conclude that $a$ must be the least upper bound of $A$.
*
*$1\leq1+\frac{1}{n}$ for each $n$ so $1$ is a lower bound of $A$.
*If $c>1$ we can find an element $1+\frac{1}{n}\in A$ with $1+\frac{1}{n}<c$
so if $c>1$ then it is not a lower bound of $A$.
This together justifies the conclusion that $1=\inf A$.
*
*$1+\frac{1}{n}\leq2$ for each $n$ so $2$ is an upper bound of $A$.
*If $c<2$ then we have element $2\in A$ with $c<2$ so if $c<2$
then it is not an upper bound of $A$.
This together justifies the conclusion that $2=\sup A$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1761130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
In MATLAB, $\pi$ value is given as 355/113. why? $\pi$ is an irrational number. MATLAB shows it equal to 355/113 in fractional format. Is there no better fractional representation than 355/113 within the limits of the finite precision the computers use? How is the value arrived at?
|
The irreducible fraction $\frac{355}{113}$ used by MATLAB is certainly a good approximation, as @almagest's comment notes. One could argue that the irreducible $\frac{208341}{66317}$ (which equals $\frac{22}{17}+\frac{37}{47}+\frac{88}{83}$) is another rational approximation, accurate to nine decimal digits. However, it is true that $\frac{355}{113}$ is easier to manipulate than $\frac{208341}{66317}$, which is presumably MATLAB's rationale.
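For context (my addition): $\frac{355}{113}$ is the fourth continued-fraction convergent of $\pi$, which is why it is unusually accurate for its size, and MATLAB's rational display (`rat`) is, as I understand its documentation, based on a truncated continued-fraction expansion. The convergents can be generated in a few lines:

```python
import math
from fractions import Fraction

# Continued-fraction convergents of pi: 3, 22/7, 333/106, 355/113, ...
x = math.pi
convergents = []
h, h1 = 1, 0     # p_{-1}, p_{-2} of the standard recurrence p_n = a_n p_{n-1} + p_{n-2}
k, k1 = 0, 1     # q_{-1}, q_{-2}
for _ in range(4):
    a = int(x)
    h, h1 = a * h + h1, h
    k, k1 = a * k + k1, k
    convergents.append(Fraction(h, k))
    x = 1 / (x - a)

print(convergents)                                       # [3, 22/7, 333/106, 355/113]
print(float(abs(convergents[-1] - Fraction(math.pi))))   # error about 2.7e-7
```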
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1761234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Finding Laplace Transform of $te^{-t}$ I started with this integral:
$$ \int_{0}^{\infty} e^{-st}\cdot te^{-t}dt$$
= $$\int_{0}^{\infty} te^{-(s+1)t}dt$$
let $dv=e^{-(s+1)t}dt, u=t$ and thus $v=-\frac{1}{s+1}e^{-(s+1)t}, du=dt$
$\rightarrow$
$-\frac{t}{s+1}e^{-(s+1)t}|_0^\infty + \frac{1}{s+1}\int_{0}^{\infty}e^{-(s+1)t}dt$
= $-\frac{1}{(s+1)^2}e^{-(s+1)t}|_0^\infty$
What mistakes have I made or do I just use L'Hopital's rule to finish?
|
If you see an integral of the form $$\int t f(t) \,dt$$ then try partial integration!
Assuming $\operatorname{Re}(s)>-1$ (so that the integral converges),
\begin{align}\int_0^{\infty} t f(t) \,dt &= \left[t\frac{-1}{s+1}e^{-(s+1)t} \right]_{t=0}^\infty-\int_0^{\infty} \frac{-1}{s+1}e^{-(s+1)t} \,dt \\&= 0 - 0 + \left[ \frac{-1}{(s+1)^2}e^{-(s+1)t} \right]_{t=0}^\infty \\ &= \frac{1}{(s+1)^2}\end{align}
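A quick numeric cross-check of $\mathcal{L}\{te^{-t}\}(s)=\frac{1}{(s+1)^2}$ using only the standard library (truncating the integral at $T=60$, where the integrand is negligible; the step count is an arbitrary choice of mine):

```python
import math

def laplace_numeric(s, T=60.0, n=200_000):
    # Composite trapezoid rule for the integral of t*e^{-t}*e^{-st} over [0, T];
    # the tail beyond T = 60 is negligible for the s values tried here.
    h = T / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * t * math.exp(-(s + 1) * t)
    return total * h

for s in (0.5, 1.0, 3.0):
    print(s, laplace_numeric(s), 1 / (s + 1) ** 2)   # the two columns agree
```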
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1761367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
A variant of the exponential integral Consider the following integral (for $x,y\in \mathbb{R}_{>0}$)
$$E(x,y) = \int_0^1 \frac{\mathrm{e}^{-x/s-ys}}{s}\,\mathrm{d}s,$$
which is a variant of the usual exponential integral $E_1(x)$ to which it reduces when $y=0$.
I am interested in the efficient evaluation of $E(x,y)$, either by numerical or analytical means. One example would be e.g. a series representation; to give an example, consider expansion around $|ys|\ll 1$, which yields a series representation in generalized exponential integrals $E_n(x)$
\begin{align}
E(x,y) &= \int_0^1 \frac{\mathrm{e}^{-x/s}}{s}\sum_{n=0}^\infty \frac{(-ys)^n}{n!} \,\mathrm{d}s
= \sum_{n=0}^\infty \frac{(-1)^n y^n}{n!}\int_0^1 \mathrm{e}^{-x/s} s^{n-1}\,\mathrm{d}s\\
&= \sum_{n=0}^\infty \frac{(-1)^n y^n}{n!}E_{n+1}(x).
\end{align}
This series, however, is rather poorly convergent when $y>1$. Can we do better?
Context: This integral arises in the Ewald summation technique for 1D-periodic systems embedded in a 2D coordinate system; in my case, $x$ and $y$ take values from $0$'ish to $10$'ish. Presently, I'm forced to evaluate the integral by numerical quadrature, which is painfully slow.
|
$\int_0^1\dfrac{e^{-\frac{x}{s}-ys}}{s}~ds$
$=\int_\infty^1se^{-xs-\frac{y}{s}}~d\left(\dfrac{1}{s}\right)$
$=\int_1^\infty\dfrac{e^{-xs-\frac{y}{s}}}{s}~ds$
$=K_0(x,y)$ (according to https://core.ac.uk/download/pdf/81935301.pdf)
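The substitution can be verified numerically: the two integrals below agree for arbitrary $x,y>0$ (the quadrature and truncation parameters are my own ad-hoc choices; for production use one would want a proper quadrature routine):

```python
import math

def trap(f, a, b, n=50_000):
    # plain composite trapezoid rule
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def E_unit(x, y):
    # integral of e^{-x/s - y s}/s over (0, 1]; the integrand tends to 0 as s -> 0+
    f = lambda s: math.exp(-x / s - y * s) / s if s > 0 else 0.0
    return trap(f, 0.0, 1.0)

def E_tail(x, y, S=80.0):
    # integral of e^{-x s - y/s}/s over [1, infinity), truncated at s = S
    f = lambda s: math.exp(-x * s - y / s) / s
    return trap(f, 1.0, S)

for x, y in [(0.5, 2.0), (1.0, 1.0), (2.0, 5.0)]:
    print(E_unit(x, y), E_tail(x, y))   # the two columns agree
```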
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1761491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Prove: $\frac{a+c}{b+d}$ lies between $\frac{a}{b}$ and $\frac{c}{d}$ (for positive $a$, $b$, $c$, $d$) I am looking for proof that, if you take any two different fractions and add the numerators together then the denominators together, the answer will always be a fraction that lies between the two original fractions.
Would be grateful for any suggestions!
|
Here's one way to look at it:
You're taking a class. Suppose you get $a$ points out of $b$ possible on Quiz 1, and $c$ points out of $d$ possible on Quiz 2.
Your overall points are $a+c$ out of $b+d$ possible. And your overall percentage should be between your lower quiz score and your higher quiz score.
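The quiz intuition can be pinned down with one line of algebra. Assuming WLOG $\frac ab \le \frac cd$, which for positive $b,d$ is equivalent to $bc-ad\ge 0$:

```latex
\frac{a+c}{b+d}-\frac{a}{b}
  =\frac{b(a+c)-a(b+d)}{b(b+d)}
  =\frac{bc-ad}{b(b+d)}\ \ge\ 0,
\qquad
\frac{c}{d}-\frac{a+c}{b+d}
  =\frac{c(b+d)-d(a+c)}{d(b+d)}
  =\frac{bc-ad}{d(b+d)}\ \ge\ 0 .
```

Both differences share the numerator $bc-ad$, so the mediant lies weakly between the two fractions, strictly between them when they are distinct.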
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1761557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 4
}
|
Linear Algebra with functions Basically my question is - How to check for linear independence between functions ?!
Let the group $\mathcal{F}(\mathbb{R},\mathbb{R})$
be the group of real-valued functions,
i.e. $\mathcal{F}(\mathbb{R},\mathbb{R})=\left\{ f:\mathbb{R}\rightarrow\mathbb{R}\right\} $
Let 3 functions $f_{1},f_{2},f_{3}$
be given such that
$\forall x\in\mathbb{R}\,\,\,f_{1}=e^{x},\,\,f_{2}=e^{2x},\,\,f_{3}=e^{3x}$
$W=sp(f_{1},f_{2},f_{3})$
what is $dim(W)$
?
How to approach this question ? (from a linear algebra perspective)
I know that every element of $W$ has the form $\alpha e^{x}+\beta e^{2x}+\gamma e^{3x}$
And to get the dimension I need to find the base of $W$
so I need to check whether the following holds true :
$\forall x\in\mathbb{R}\,\,\alpha e^{x}+\beta e^{2x}+\gamma e^{3x}=0\,\Leftrightarrow\,\alpha,\beta,\gamma=0$
However when $x=0$
I get $\alpha+\beta+\gamma=0$
which leads to infinitely many solutions.
How to approach this question ?
|
Hint: Use the Wronskian and show that the Wronskian determinant does not vanish.
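For these particular functions the Wronskian can even be checked numerically: with $f_k = e^{kx}$, the $j$-th derivative of $f_k$ is $k^j e^{kx}$, and the determinant works out to the Vandermonde value $2e^{6x}$, which never vanishes. A quick stdlib sketch (not required for the proof):

```python
import math

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def wronskian(x):
    # row j holds the j-th derivatives: d^j/dx^j of e^{kx} is k^j * e^{kx}
    return det3([[k ** j * math.exp(k * x) for k in (1, 2, 3)] for j in (0, 1, 2)])

for x in (-1.0, 0.0, 2.0):
    print(x, wronskian(x))   # equals 2*exp(6x), never zero
```

Since the Wronskian is nonzero, $f_1,f_2,f_3$ are linearly independent and $\dim W = 3$.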
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1761628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 7,
"answer_id": 2
}
|
A human way to simplify $ \frac{((\sqrt{a^2 - 1} - a)^2 - 1)^2}{(\sqrt{a^2 - 1} - a)^22 \sqrt{a^2 - 1}} - 2 a $ I end up with simplifying the following fraction when I tried to calculate an integral(*) with the residue theory in complex analysis:
$$
\frac{((\sqrt{a^2 - 1} - a)^2 - 1)^2}{(\sqrt{a^2 - 1} - a)^22 \sqrt{a^2 - 1}} - 2 a
$$
where $a>1$. With Mathematica, I can quickly get
$$
\frac{((\sqrt{a^2 - 1} - a)^2 - 1)^2}{(\sqrt{a^2 - 1} - a)^22 \sqrt{a^2 - 1}} - 2 a=2(\sqrt{a^2 - 1} - a).
$$
Would anybody give a calculation for the simplification in a human way?
(*)The integral I did is
$$
\int_{-\pi}^\pi\frac{\sin^2 t}{a+\cos t}\ dt
$$
with $a>1$.
|
Start with
$$\begin{align}\left(\sqrt{a^2-1}-a\right)^2-1&=a^2-1-2a\sqrt{a^2-1}+a^2-1\\
&=\sqrt{a^2-1}\left(2\sqrt{a^2-1}-2a\right)\\
&=2\sqrt{a^2-1}\left(\sqrt{a^2-1}-a\right)\end{align}$$
So you are now down to
$$\frac{\left(2\sqrt{a^2-1}\left(\sqrt{a^2-1}-a\right)\right)^2}{\left(\sqrt{a^2-1}-a\right)^2\cdot2\sqrt{a^2-1}}-2a=2\sqrt{a^2-1}-2a$$
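A quick floating-point spot check of the simplification, sampling a few values of $a>1$ (my own sanity test, not part of the derivation):

```python
import math

def lhs(a):
    r = math.sqrt(a * a - 1)
    return ((r - a) ** 2 - 1) ** 2 / ((r - a) ** 2 * 2 * r) - 2 * a

def rhs(a):
    return 2 * (math.sqrt(a * a - 1) - a)

for a in (1.5, 2.0, 10.0):
    print(a, lhs(a), rhs(a))   # the two columns agree
```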
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1761694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How to show that $X^p-t\in\mathbb{F}_p(t)[x]$ is irreducible? This question is previously asked here, but there is no complete solution of it.
I understand that the root $\alpha$ exists in the algebraic closure of $\mathbb{F}_p(t)$, and it is the only root because $f(X)=(X-\alpha)^p$, but how do we proceed to show that $\alpha\not\in\mathbb{F}_p(t)$?
One solution I see writes $f(X)=g(X)h(X)$, and argues that since $g(X)=(X-\alpha)^i$ and $i<p$, then $\alpha\in \mathbb{F}_p(t)$. How does this follow?
|
Because it only has one root, and none of them are in $\Bbb F_p(t)$. Recall for any root $\zeta$ we have $\zeta^p=t$, but then since $\Bbb F_p(t)[x]$ is a UFD, it means that $(x-\zeta)^p=x^p-t$ has just the one root. So if it is reducible, it reduces all the way, and in fact there is an element of $\Bbb F_p(t)$ such that $\left(\displaystyle{q(t)\over r(t)}\right)^p=t$.
But clearly this is impossible since then $r(t)^p = q(t)^p\cdot t$ and taking derivatives on both sides we get
$$0=pr(t)^{p-1}r'(t)=pq(t)^{p-1}\cdot q'(t)\cdot t + q(t)^p = q(t)^p$$
which would imply that $q(t) = 0$ which implies $t=0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1761815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Evaluation of $ \int_0^\infty\frac{x^{1/3}\log x}{x^2+1}\ dx $ The following is an exercise in complex analysis:
Use contour integrals with $-\pi/2<\operatorname{arg} z<3\pi/2$ to compute
$$
I:=\int_0^\infty\frac{x^{1/3}\log x}{x^2+1}\ dx.
$$
I don't see why the branch $-\pi/2<\operatorname{arg} z<3\pi/2$ would work. Let
$$
f(z)=\frac{z^{1/3}\log z}{z^2+1}.
$$
Denote $\Gamma_r=\{re^{it}:0\leq t\leq \pi\}$. One can then consider a contour consisting of $\Gamma_R$, $\Gamma_r$, and $[-R,-r]\cup[r,R]$. The integral along $[r,R]$ will give $I$. But without symmetries, how could one deal with the integral along $[-R,-r]$?
|
Sketch of a possible argument. Use the branch with the argument from
zero to $2\pi$ of the logarithm, the function
$$f(z) = \frac{\exp(1/3\log z)\log z}{z^2+1}$$
and a keyhole contour with the slot aligned with the positive real
axis.
The sum of the residues at $z=\pm i$ is (take care to use the chosen branch of the logarithm in the logarithm as well as in the power/root)
$$\rho_1+\rho_2 = \frac{1}{8}\,\pi \,\sqrt {3}-\frac{5}{8}\,i\pi.$$
Now below the keyhole we get
$$\exp(1/3\log z)\log z
= \exp(2\pi i/3) \exp(1/3\log x)
(\log x + 2\pi i).$$
Introduce the labels
$$J = \int_0^\infty \frac{x^{1/3}\log x}{x^2+1} dx$$
and
$$K = \int_0^\infty \frac{x^{1/3}}{x^2+1} dx$$
The contribution from the line below the slot is
$$-\exp(2\pi i/3) J - 2\pi i\exp(2\pi i/3) K.$$
We evaluate $K$ and obtain
$$K = \pi\frac{\sqrt{3}}{3}.$$
Solving $$2\pi i (\rho_1+\rho_2) =
(1-\exp(2\pi i/3)) J
- 2\pi i\exp(2\pi i/3) \pi\frac{\sqrt{3}}{3}$$
we obtain
$$J =\frac{\pi^2}{6}.$$
Remark.
The evaluation of $K$ is recursive and uses the same contour.
We get for the two residues
$$\rho_1+\rho_2 = -\frac{1}{4} i\sqrt {3}-\frac{1}{4}$$
and hence we have
$$2\pi i (\rho_1+\rho_2) =
(1-\exp(2\pi i/3)) K$$
which we may solve for $K$ to get the value cited above.
Remark II. The fact that the circular component vanishes follows
by the ML inequality which gives in the case of evaluating $J$
$$2\pi R \times \frac{R^{1/3}\log R}{R^2-1}
\rightarrow\ 0$$
as $R\rightarrow\infty.$
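Both values can be confirmed numerically. With the substitution $x=e^u$, the integral $J$ becomes $\int_{-\infty}^{\infty} u\,e^{4u/3}/(e^{2u}+1)\,du$, which a plain trapezoid rule handles well since the integrand decays rapidly at both ends (truncation bounds and step count below are my own ad-hoc choices):

```python
import math

def integrand(u):
    # after x = e^u:  x^{1/3} ln(x)/(x^2 + 1) dx  becomes  u e^{4u/3}/(e^{2u} + 1) du
    return u * math.exp(4 * u / 3) / (math.exp(2 * u) + 1)

a, b, n = -30.0, 60.0, 100_000        # integrand is negligible outside [a, b]
h = (b - a) / n
J = h * (0.5 * (integrand(a) + integrand(b))
         + sum(integrand(a + i * h) for i in range(1, n)))
print(J, math.pi ** 2 / 6)            # both approximately 1.6449
```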
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1761922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
$|f|$ is Lebesgue integrable; does it imply $f$ is also? If $f$ is Lebesgue integrable then $|f|$ is Lebesgue integrable, but is the converse also true?
|
Think of $|f|$ as splitting $f$ into two functions: $f_+$ and $f_-$.
$f_+$ we define as equal to $f$ on the set $\{x: f(x) \text{ is non-negative}\}$, and $0$ at all other $x$.
$f_-$ we define as equal to $-f$ on the set $\{x: f(x) \text{ is negative}\}$ and $0$ elsewhere. Then $|f|=f_+ + f_-$ and $f = f_+ - f_-$.
If $\int|f|$ is finite and $f$ is measurable, then necessarily both $\int f_+$ and $\int f_-$ are finite, so $f$ is integrable. The measurability hypothesis matters: if $E$ is a non-measurable subset of $[0,1]$ and $f = 1$ on $E$, $f = -1$ off $E$, then $|f|\equiv 1$ is integrable while $f$ is not even measurable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1762006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
}
|
Right Triangle's Proof A right triangle has all three sides integer lengths. One side has length 12. What are the possibilities for the lengths of the other two sides? Give a proof to show that you have found all possibilities.
EDIT: I figured out that there are a total of 4 combinations for a side with length 12.
$$a^2= c^2 -b^2$$
factoring the right side gives
$$a^2 = (c+b)(c-b)$$
so from there I just looked for values of c and b which would make the equation true. And I got: $(37,35), (13,5), (15,9), (20,16)$.
My only problem now is proving that these are all the possibilities. I have an intuition as to why that's true but I don't know how to go about a full proof. I'm guessing I need to make use of the fact that all sides are to be integer lengths and that $12$ itself cannot be equal to $c$.
I know that if I were to try values for $b$ and $c$ whose absolute difference is greater than 8, then the equation would not hold true.
Ex:
$(37,35)$ has a difference of $2$ and works.
$(13,5)$ has a difference of $8$ and works.
$(15,9)$ has a difference of $6$ and works.
$(20,16)$ has a difference of $4$ and works.
But if I were to pick any pair with an absolute difference greater than 8 it would not work. Can I just prove this is true with a couple of examples? Or do I need a full generic proof ? If so, how would I go about it?
|
Wolfram MathWorld gives the number of ways in which a number $n$ can be a leg (other than the hypotenuse) of a primitive or non-primitive right triangle. For $n=12$, the number is $4$. It also gives the number of ways in which a number $n$ can be the hypotenuse of a primitive or non-primitive right triangle. For $n=12$, the number is $0$. So the four triples you found are the only ones.
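The OP's factorization $144=(c+b)(c-b)$ can also be exhausted by brute force: both factors must have the same parity (here necessarily both even, since 144 is even), which is exactly what limits $c-b$ to at most $8$. A short search confirms the four triples and that $12$ cannot be a hypotenuse:

```python
# 12 as a leg: 144 = (c+b)(c-b); both factors must have the same parity,
# hence both even, which forces c - b into {2, 4, 6, 8}.
leg_solutions = []
for d in range(1, 12):                 # d = c - b
    if 144 % d == 0:
        e = 144 // d                   # e = c + b
        if e > d and (d + e) % 2 == 0:
            leg_solutions.append(((e + d) // 2, (e - d) // 2))   # (c, b)

# 12 as a hypotenuse: a^2 + b^2 = 144 with 0 < a <= b < 12
hyp_solutions = [(a, b) for a in range(1, 12) for b in range(a, 12)
                 if a * a + b * b == 144]

print(leg_solutions)   # [(37, 35), (20, 16), (15, 9), (13, 5)]
print(hyp_solutions)   # []
```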
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1762101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
}
|
Can we find a non central element of order 2 in a specific 2-group? Let $G$ be a non-abelian group of order $2^5$ and center $Z(G)$ is non cyclic. Can we always find an element $x\not\in Z(G)$ of order $2$ if for any pair of elements $a$ and $b$ of $Z(G)$ of order $2$, the factor groups $G/\langle a\rangle$ and $G/\langle b\rangle$ are isomorphic?
Note: I do not find any contradiction of above statement so I wish to prove it.
|
This is not a complete answer, but partial information which, in addition to Holt's comment, may simplify your job. (I will try to write a complete proof as I get some directions on it.)
Suppose $G$ is a group of order $2^5$ satisfying conditions in your question. We show that $Z(G)=C_2\times C_2$.
For this, by hypothesis, $|Z(G)|=2^2$ or $2^3$. If $|Z(G)|=2^3$, then $G/Z(G)=C_2\times C_2$. Hence there exist $x,y\in G$ such that $G=\langle x,y,Z(G)\rangle$. Then $G'=\langle [x,y]\rangle$ and the order of this commutator subgroup must be $2$ (fact: if $G/Z(G)$ is abelian then the exponent of $G'$ divides that of $G/Z(G)$). Note that $G/Z(G)$ being abelian implies $G'\subseteq Z(G)$.
Since $Z(G)$ is non-cyclic, it should contain a subgroup of order $2$ other than $G'$; call it $N$. Then $G/N$ is non-abelian, whereas $G/G'$ is abelian. Thus, we get two subgroups of order $2$ in $Z(G)$ with non-isomorphic quotients.
Thus $|Z(G)|$ must be $4$ and it should be then $C_2\times C_2$ (it is non-cyclic).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1762328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $a \sqrt{b + c} + b \sqrt{c + a} + c \sqrt{a + b} \le \sqrt{2(a+b+c)(bc + ac + ab)}$ for $a, b, c > 0$ Prove for $a, b, c > 0$ that
$$a \sqrt{b + c} + b \sqrt{c + a} + c \sqrt{a + b} \le \sqrt{2(a+b+c)(bc + ac + ab)}$$
Could you give me some hints on this?
I thought that Jensen's inequality might be of use in this exercise, but I haven't managed to solve this on my own.
|
Use the Cauchy-Schwarz inequality on the two vectors $(\sqrt a, \sqrt b, \sqrt c)$ and $(\sqrt{a(b+c)}, \sqrt{b(a+c)}, \sqrt{c(a+b)})$ (then take the square root on both sides or not, depending on which version of the CS inequality you use), and lastly note that we have:
$$
a(b+c) + b(a + c) + c(a + b) = 2(bc + ac + ab)
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1762452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Example of a function $u \in W^{1,p}(\Omega)$ whose extension $\hat{u}$ to be $0$ outside $\Omega$ $\hat{u} \notin W^{1,p}(\mathbb{R}^n)$ Let's consider a function $u \in W^{1,p}(\Omega)$, where $W^{1,p}(\Omega)$ is the Sobolev Space and $\Omega$ is an open set. When we extend $u$ to $\hat{u}$ like this:
$$\hat{u}(x)=\left\{\begin{matrix}
u(x) & in \ \Omega\\
0 & \quad \quad \ in \ \mathbb{R}^n\setminus\Omega
\end{matrix}\right. $$
I know that this extension does not need to be in $W^{1,p}(\mathbb{R}^n)$, but I would like to have a counterexample. Certainly there is no problem with $\hat{u}$ being in $L^p(\mathbb{R}^n)$; but is the weak derivative of $\hat{u}$ just the extension-by-zero of the derivative of $u$?
|
If $\Omega$ is bounded, it suffices to take $u(x)\equiv 1$. Then $\hat u$ cannot be in $W^{1,p}(\mathbb R^n)$ for $p>n$ since it is discontinuous. (Sobolev embedding)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1762543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
generalization of the Pythagorean theorem In school, students learn that in a triangle $ABC$, $\angle ACB$ is a right angle if and only if $AB^2=AC^2+BC^2$. This deep relation between geometry and numbers is actually only a partial result, as one can say much more: the angle at $C$ is
*
*acute if and only if $AB^2 < AC^2+BC^2$,
*right if and only if $AB^2 = AC^2+BC^2$,
*obtuse if and only if $AB^2 > AC^2+BC^2$.
I was wondering why this relation is unknown to almost all people and never taught in school. Does it require more mathematical understanding? Does it require analytic geometry? Thanks in advance for comments :)
|
The relation can actually be thought of in terms of the cosine rule:
$a^2 = b^2 + c^2 - 2bc \cos(A)$, where $a, b, c$ are the sides of the triangle and $A$ is the angle opposite to side $a$.
Clearly, if $A = 90^\circ$, then $a^2 = b^2 + c^2$
If $A < 90^\circ$, then $\cos(A) > 0$, hence $a^2 < b^2 + c^2$ and vice versa if the triangle is obtuse.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1762687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
GCH is preserved when forcing with $Fn(\lambda,\kappa)$. Given a countable transitive model $M$ where $GCH$ holds it is an exercise from Kunen's book to show that GCH also holds in $M[G]$ when $G$ is a $P-$generic filter over $M$, and $P=Fn(\lambda,\kappa)$ ($\aleph_0\leq\kappa<\lambda$ in $M$). Recall that $Fn(\kappa,\lambda)$ is the set of partial functions from $\kappa$ to $\lambda$ with finite domain, ordered by reverse inclusion.
Does anybody have any idea to prove it? Thank you.
|
Regarding CH:
Working in $M$ we have $\mid Fn(\kappa,\lambda)\mid=\kappa^\lambda{=}\lambda^+$ and also $(\lambda^+)^{\aleph_0}=\lambda^+$ because $cf(\lambda^+)=\lambda^+>\aleph_0$ and $GCH$ holds in $M$. So using the well-known argument with nice names and $\lambda^+-cc$-ness, we conclude $(2^{\aleph_0}\leq (\lambda^+)^{M})^{M[G]}$. Since $Fn(\kappa,\lambda)$ collapses every cardinal in $M$ below $\lambda^+$, then $(\lambda^+)^M=\aleph_1^{M[G]}$ and finally $(2^{\aleph_0}\leq \aleph_1)^{M[G]}$
Regarding GCH:
Let us define
*
*$\kappa_0=(\lambda^+)^M$.
*$\kappa_{\alpha+1}=(\kappa_\alpha^+)^M$.
*$\kappa_\lambda=\sup_{\alpha<\lambda} \kappa_\alpha$ if $\lambda$ is a limit ordinal.
Now by induction it can be shown that $\kappa_\alpha=\aleph^{M[G]}_{\alpha + 1}$ and also $(2^{\aleph_\alpha}\leq \kappa_\alpha)^{M[G]}$. For the last one statement it is enough to apply the same argument as used in $CH$ noting that $cf^M(\kappa_\alpha)=cf^M(\aleph^{M[G]}_{\alpha +1})>\aleph_\alpha^{M[G]}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1762782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Find the points on the sphere $x^2 + y^2 + z^2 = 4$ that are closest to, and farthest from the point (3, 1, -1). Find the points on the sphere $x^2 + y^2 + z^2 = 4$ that are closest to, and farthest from the point $(3, 1, -1)$.
I identified that this is a constrained optimisation problem which I will solve using an auxiliary function and a Lagrange multiplier however I am struggling to continue from there as I am unsure which is the constraining function.
Any help would be greatly appreciated!
|
$(2/\sqrt{11})(3,1,-1)$ is the closest point and $(-2/\sqrt{11})(3,1,-1)$ is the most distant point.
If you have to use Lagrange multipliers....
minimize/maximize: $(x-3)^2 + (y-1)^2 + (z+1)^2$ (this is the distance squared from x,y,z to your point.)
constrained by: $x^2 + y^2 + z^2 = 4$
$F(x,y,z,\lambda) = (x-3)^2 + (y-1)^2 + (z+1)^2 - \lambda(x^2 + y^2 + z^2 - 4)$
Now find, $\frac{\partial F}{\partial x},\frac{\partial F}{\partial y},\frac{\partial F}{\partial z},\frac{\partial F}{\partial \lambda}$ and set them all equal to $0,$ and solve the system of equations.
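To sanity-check the answer numerically: the Lagrange system forces $(x,y,z)$ to be parallel to $(3,1,-1)$, so the two candidates are just that vector rescaled to radius $2$. A quick check (variable names are mine):

```python
import math

p = (3.0, 1.0, -1.0)
r = 2.0
norm_p = math.sqrt(sum(t * t for t in p))            # sqrt(11)

closest = tuple(r * t / norm_p for t in p)           # (2/sqrt(11)) * (3,1,-1)
farthest = tuple(-r * t / norm_p for t in p)         # -(2/sqrt(11)) * (3,1,-1)

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Both candidates lie on the sphere of radius 2 ...
assert abs(math.sqrt(sum(t * t for t in closest)) - r) < 1e-12
# ... and their distances to p are sqrt(11) - 2 and sqrt(11) + 2.
print(dist(closest, p), dist(farthest, p))
```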
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1762885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Function with continuous second derivative Let $f:[0,1]\rightarrow\mathbb{R}$ be a function whose second derivative $f''(x)$ is continuous on $[0,1]$. Suppose that $f(0)=f(1)=0$ and that $|f''(x)|<1$ for any $x\in [0,1]$. Then $$|f'(\tfrac{1}{2})|\leq\tfrac{1}{4}.$$
I tried to use the mean value theorem to prove it, but I found no place to use the condition on the second derivative, and I cannot complete the proof.
|
A bit more tricky than I thought at first. The idea is easy; the calculation may look complicated. The idea is to find a second-order polynomial with second derivative $=1$ which has the same values as $f$ at $x=0$ and $x= \frac{1}{2}$, and then to show that this polynomial minus $f$ is convex, which allows us to get an estimate for the derivative at the boundary.
First note that the assumptions and the claim are invariant with respect to multiplication by $-1$.
Now, since we may look at $-f$ instead of $f$, we may assume $f(\frac{1}{2})\le 0$
We will bound $f^\prime(\frac{1}{2})$ by looking at the comparison function
$$ h(x) = \frac{1}{2}x^2 + \left(2f(\frac{1}{2}) -\frac{1}{4}\right)x $$
You'll easily verify $h(0) = 0, \, h(\frac{1}{2}) =f(\frac{1}{2})$ and $h^{\prime \prime}(x) = 1$.
So for
$$\psi(x) = h(x)-f(x)$$
we have $\psi(0)= 0 = \psi(\frac{1}{2})$ and $ \psi^{\prime \prime}(x)=1- f^{\prime \prime}(x)>0$. This means $\psi$ is strictly convex with zero boundary values at $x=0$ and $x= \frac{1}{2}$, which implies $\psi^\prime(\frac{1}{2})>0$
Since $h^\prime(x) = x+2f(\frac{1}{2})-\frac{1}{4}$ we have $h^\prime(\frac{1}{2}) = 2f(\frac{1}{2})+\frac{1}{4}$ and so
$$ \begin{eqnarray}
0< \psi^\prime(\frac{1}{2}) = 2f(\frac{1}{2})+\frac{1}{4} -f^\prime (\frac{1}{2})
\end{eqnarray}
$$
which implies (using the assumption on the sign of $f$)
$$ f^\prime (\frac{1}{2})< 2f(\frac{1}{2})+\frac{1}{4} \le \frac{1}{4}$$
To get a bound $f^\prime (\frac{1}{2})\ge- \frac{1}{4}$ with the same assumption $f(\frac{1}{2})\le 0$ the same trick is applied on the interval $[\frac{1}{2},1]$ using the comparison function
$$h(x)=\frac{1}{2}(x-1)^2 - \left( 2f(\frac{1}{2})-\frac{1}{4}\right)(x-1) $$
If you now look at $\phi=h-f$ this is again convex with zero boundary values for $x= \frac{1}{2}$ and $x=1$, so $\phi^\prime(\frac{1}{2})< 0$ this time. The calculations are the same.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1763014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Using contour integration to solve $ \int ^\infty _0 \frac {\ln x} {(x^2+1)} dx$ Question:
Find the value of contour integration $$ \int ^{\infty}_0 \frac {\ln x} {(x^2+1)} dx$$
Attempt:
I just calculate
$$\text{Res}(f,z=i) = 2\pi i\lim_{z\to i}(z - i)\frac{\ln z}{z^2+1} = \frac{\pi^2 i}2$$
I'm not too sure how to move on from here. Any ideas?
|
Consider the branch $f(z) = \frac{\ln z}{z^2 +1}$ where $|z| > 0 , -\frac{\pi}{2}< \arg z < \frac{3\pi}{2}$. Take the path $C = L_2 + L_1 + C_{\rho} + C_R$ where $\rho < 1 < R$ and $C_R$ and $C_{\rho}$ are the semi-circles with radius $R$ and $\rho$ respectively. See the figure below.
$\hskip.75in$
By Cauchy's residue theorem we have
$$\int_{L_1} f(z ) dz + \int_{L_2} f(z) \, dz + \int_{C_\rho} f(z) \, dz + \int_{C_R} f(z) \, dz = 2\pi i \,\,\mathrm {Res}_{z = i} f(z)$$
Now if $z = re^{i\theta}$ then we may write
$$f(z) = \frac{\ln r + i\theta}{r^2e^{2i\theta} + 1}$$
and use the parametric representations
$$z = re^{i0} = r \,\,(\rho \leq r \leq R) \,\,\, \text{and}\,\,\,z = re^{i\pi} \,\,(\rho \leq r \leq R)$$
for the legs $L_1$ and $-L_2$ respectively, which yields
$$\int_{L_1} f(z)\, dz - \int_{-L_2} f(z) \, dz = \int_{\rho}^R \frac{\ln r}{r^2 +1} \, dr + \int_{\rho}^R \frac{\ln r + i\pi}{r^2 +1} \, dr$$
The residue at $z = i$ is $\mathrm {Res}_{z=i} = \frac{\pi}{4}$, then
$$\int_{\rho}^R \frac{\ln r}{r^2 +1} \, dr + \int_{\rho}^R \frac{\ln r + i\pi}{r^2 +1} \, dr = \frac{\pi^2i}{2} -\int_{C_\rho} f(z) \, dz - \int_{C_R} f(z) \, dz $$
equating the real parts
$$2 \int_{\rho}^R \frac{\ln r}{r^2 +1} \, dr = -\int_{C_\rho} f(z) \, dz - \int_{C_R} f(z) \, dz $$
It remains only to show that $\displaystyle \lim_{\rho \to 0} \int_{C_\rho} f(z) \, dz = 0 $ and $\displaystyle \lim_{R\to \infty} \int_{C_R} f(z) \, dz = 0$.
Do you think you can take it from here?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1763174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Show that the set $M:=\{x\in \Bbb R^3 : x_1^2\ge2(x_2^3+x_3^3) \}$ is closed I have to show this with respect to the Euclidean metric.
I've already shown that it isn't bounded, by showing that $d(x,y)$ for $x,y \in M$ isn't bounded.
I know that in order to show closedness I have to show that the complement is open, but how do I show that $$M^c=\{x\in \Bbb R^3 : x_1^2\lt 2(x_2^3+x_3^3) \}$$ is open? (I know that a set is open if around every element there is an $\epsilon$-ball contained in the set, but how do I show this in this particular case?)
I have to determine whether the set is closed and whether it's bounded for the following set too, but I don't have any ideas how to show either:
$$M:=\{x\in \Bbb R^3 : x_1 \ge -1, \:x_2\lt2, ||x||_2\le\sqrt5\}$$
Thanks in advance
|
Let $f(x_1,x_2,x_3)= x_1^2-2(x_2^3+x_3^3)$, which is a polynomial, hence continuous. Then the set in question is $f^{-1}([0,\infty))$ which is closed, hence by the definition of continuity, the inverse image of this closed set is closed.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1763255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Continuous function injective over a compact set, and locally injective on each point of the set Suppose we have a function $F: \mathbb R^n \rightarrow \mathbb R^k$ continuous on some open set $U \subset \mathbb R^n$, and let the compact set $K \subset U$. $F$ satisfies the following properties:
1) F is injective over K
2) For every $x \in K$, there is some open set $U_x$ such that F is injective on $U_x$.
Show that there exists an $\epsilon > 0$ such that F is injective over the set $\{x\in \mathbb R^n | dist(x,K) < \epsilon \}$.
My attempt:
I can form an open cover of $K$ by forming $W = \bigcup_{x\in K}(U_x \cap U)$, then consider the boundary of this open set $W$. The boundary is certainly compact, so $\mathrm{dist}(x,K)$ attains a minimum over it (it's easy to show this value is greater than $0$), which we may call $\epsilon$.
But now I am stuck. I am not sure how to show that F is injective on all of $W$...
|
This is a very nice problem indeed. Thank you.
Let's call $B(K,\epsilon)$ the set $\{x\in \mathbb{R}^n : \mathrm{dist}(x,K)
<\epsilon\}$. It is easy to see that for any $\epsilon$,
$\overline{B(K,\epsilon)}$ is compact, and that there exists an
$n\in\mathbb{N}$ such that $B(K,\frac{1}{n})\subseteq U$. We may assume without loss of
generality that $\overline{B(K,1)}\subseteq U$ (otherwise, shift
subindices below). Also, it can be shown that
$$
K=\bigcap_{n\geq 1} B(K,\tfrac{1}{n})\qquad\qquad (*).
$$
Now assume that the assertion is false. That is, for every
$n\in\mathbb{N}$ there exist $x_n,y_n\in B(K,\frac{1}{n})$ such that
$x_n\neq y_n$ $(**)$ and
$F(x_n)=F(y_n)$. Since $\{x_n\}_n$ and $\{y_n\}_n$ are sequences in
the compact $\overline{B(K,1)}$, there are convergent subsequences
$\{x_{n_j}\}_j$ and $\{y_{n_j}\}_j$ with limits $x$ and $y$, respectively. By
$(*)$, $x,y\in K$. But then $x=y$, since $F$ is injective on $K$. Now,
for sufficiently large $j$, $x_{n_j},y_{n_j}\in U_x$. Since $F$ is
injective on $U_x$, $x_{n_j}=y_{n_j}$ for all such $j$. But this
contradicts $(**)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1763351",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
(Elegant) proof of : $x \log_2\frac{1}{x}+(1-x) \log_2\frac{1}{1-x} \geq 1- (1-\frac{x}{1-x})^2$ I am looking for the most concise and elegant proof of the following inequality:
$$
h(x) \geq 1- \left(1-\frac{x}{1-x}\right)^2, \qquad \forall x\in(0,1)
$$
where $h(x) = x \log_2\frac{1}{x}+(1-x) \log_2\frac{1}{1-x}$ is the binary entropy function. Below is a graph of the two functions.
Of course, an option would be to differentiate $1,2,\dots,k$ times, and study the function this way — it may very well work, but is not only computationally cumbersome, it also feels utterly inelegant. (For my purposes, I could go this way, but I'd rather not.)
I am looking for a clever or neat argument involving concavity, Taylor expansion around $1/2$, or anything — an approach that would qualify as "proof from the Book."
|
Hopefully, this is right!
Note that from the weighted AM-GM inequality, we have that $$h(x)=\log_2{\frac{1}{x^x(1-x)^{1-x}}} \ge \log_2\frac{1}{x^2+(1-x)^2}$$
Thus we have to show $$\left(1-\frac{x}{1-x}\right)^2 \ge 1-\log_2\frac{1}{2x^2-2x+1}=\log_2{(4x^2-4x+2)}$$ Substitute $x=\frac{a+1}{a+2}$, and we have $$f(a)=a^2 -\log_2\left(\frac{a^2}{(a+2)^2}+1 \right) \ge 0$$
For $a \ge -1$. Differentiating gives $$f'(a)=2a\left(1-\frac{1}{(a+2) (a^2+2 a+2) \log(2)}\right)$$ and this $f'(a)>0$ for $a>0$ alternatively $f(a) \ge f(0)=0$ for $a \ge 0$.
Also, notice that the local maximum lies between $-1$ and $0$. But since $f(-1)=f(0)=0$, we have that $f(a) \ge 0$ for $-1 \le a \le 0$.
Our proof is done.
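The inequality can also be checked numerically on a grid, which is a useful sanity test alongside the proof (a sketch; the function names are mine, and equality holds at $x=\frac12$, hence the tiny tolerance):

```python
import math

def h(x):
    """Binary entropy in bits."""
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def rhs(x):
    return 1 - (1 - x / (1 - x)) ** 2

# Check h(x) >= rhs(x) on a fine grid of the open interval (0, 1).
for k in range(1, 10000):
    x = k / 10000
    assert h(x) >= rhs(x) - 1e-12, x
print("inequality verified on the grid")
```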
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1763483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 9,
"answer_id": 1
}
|
How many draws to have a 90% chance of getting the ace of spades? You have a standard 52-card deck, and you want to take the minimum number of draws from a random/shuffled deck such that you have a 90% chance of drawing the ace of spades. How would you find the minimum number of draws to achieve this 90% probability of succeeding in drawing the ace of spades at least once, for both the case of replacement and non-replacement?
|
Without replacement is (for once) easier. The Ace of Spades is equally likely to be in any of the $52$ positions, so we need to draw $n$ cards, where $n$ is the smallest integer $\ge (0.90)(52)$.
For with replacement, the probability we don't see the Ace of Spades in $n$ draws is $\left(\frac{51}{52}\right)^n$. We want this to be $\le 0.1$. Thus we want $n\ln(51/52)\le \ln(0.1)$, or equivalently $n\ge \ln(10)/\ln(52/51)$.
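Both formulas are one-liners to evaluate (a sketch; variable names are mine):

```python
import math

# Without replacement: the Ace is uniform over the 52 positions,
# so we need the smallest n with n/52 >= 0.9.
n_without = math.ceil(0.9 * 52)

# With replacement: smallest n with 1 - (51/52)^n >= 0.9,
# i.e. n >= ln(10) / ln(52/51).
n_with = math.ceil(math.log(10) / math.log(52 / 51))

print(n_without, n_with)  # 47 119
```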
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1763559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Show that $\lim_{n \rightarrow \infty} n \int_{0}^{1}e^{-nx}f(x)dx=f(0)$ when $f:[0,1] \rightarrow \mathbb{R}$ is continuous. Let $f:[0,1] \rightarrow \mathbb{R}$ be a continuous function.
Show that $\lim_{n \rightarrow \infty} n \int_{0}^{1}e^{-nx}f(x)dx=f(0)$
I am making a claim,
$\lim_{n \rightarrow \infty} n \int_{0}^{1}e^{-nx}f(0) dx=f(0).$ Is it really true?
|
Use integration by parts (note this argument needs the extra assumption that $f$ is continuously differentiable; for merely continuous $f$ one can instead substitute $u=nx$):
$$ n\int_{0}^{1}e^{-nx}f(x)\,\mathrm{d}x = n\left[\left.-\frac{f(x)e^{-nx}}{n}\right|_0^1+\frac{1}{n}\int_{0}^{1}f'(x)e^{-nx}\,\mathrm{d}x\right] = f(0) - f(1)e^{-n} + \int_{0}^{1}f'(x)e^{-nx}\,\mathrm{d}x $$
Since $|f'|$ is bounded on $[0,1]$, the last integral is $O(1/n)$, and therefore
$$ \lim_{n \rightarrow \infty} n \int_{0}^{1}e^{-nx}f(x)\,\mathrm{d}x = f(0) $$
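As a numerical sanity check of the limit (not part of the proof): with $f(x)=\cos x$, so $f(0)=1$, composite Simpson's rule gives values approaching $1$ as $n$ grows (the helper function is my own sketch):

```python
import math

def simpson(f, a, b, m=20000):
    """Composite Simpson's rule with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

for n in (10, 100, 1000):
    val = n * simpson(lambda x: math.exp(-n * x) * math.cos(x), 0.0, 1.0)
    print(n, val)   # tends to f(0) = 1 as n grows
```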
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1763652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Accurate summation of mixed-sign floating-point values Due to the finite representation, floating-point addition loses significant bits. This is particularly noticeable when there is catastrophic cancellation, such that all the significant bits can disappear (e.g. $10^{-10}+10^{10}-10^{10}$).
Reordering the terms of the sum can improve accuracy. In particular, it is well-known that adding positive terms in increasing order is more accurate than otherwise, as the low-order bits are kept longer. I guess that this method is optimal or close to.
But for sums with both signs, is there a known strategy to improve the accuracy by reordering or grouping the terms ? Would separate summation of the positive and negative terms be a good idea (presumably not) ? Can we do better (with a reasonable increase in time complexity allowed) ?
Update:
I am quoting from the Higham reference given by @gammatester: "In designing or choosing a summation method to achieve high accuracy,
the aim should be to minimize the absolute values of the
intermediate sums ..." and " However, even if we concentrate on a specific set of data the aim is difficult to achieve, because minimizing the bound ... is known to be NP-hard".
Anyway, there seem to be good heuristics available.
|
I would use compensated summation. Basically you can recover most of the error
from a single floating point addition. This was noticed long time ago by e.g. Dekker, Knuth, and others.
There are a lot of references, e.g.
T. Ogita, S.M. Rump, and S. Oishi, Accurate sum and dot product,
SIAM J. Sci. Comput., 26 (2005), pp. 1955-1988. Available as
http://www.ti3.tu-harburg.de/paper/rump/OgRuOi05.pdf
In chapter 4 of N.J. Higham, Accuracy and Stability of Numerical Algorithms, 2nd ed., Philadelphia, 2002, http://www.maths.manchester.ac.uk/~higham/asna/ compensated and other methods are described and analysed.
A simple fast method is https://en.wikipedia.org/wiki/Kahan_summation_algorithm
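As a concrete illustration of compensated summation (this sketch uses Neumaier's improved Kahan–Babuška variant rather than plain Kahan, since plain Kahan can still lose the correction when an incoming term dominates the running sum):

```python
def neumaier_sum(xs):
    """Compensated summation (Neumaier's variant of Kahan's algorithm):
    recover the rounding error of each addition and add it back at the end."""
    s = 0.0
    c = 0.0                      # running compensation for lost low-order bits
    for x in xs:
        t = s + x
        if abs(s) >= abs(x):
            c += (s - t) + x     # low-order bits of x were lost
        else:
            c += (x - t) + s     # low-order bits of s were lost
        s = t
    return s + c

data = [1.0, 1e100, 1.0, -1e100]   # exact sum is 2.0
print(sum(data))                   # 0.0 -- naive left-to-right summation
print(neumaier_sum(data))          # 2.0
```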
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1763764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Existence of how many sets is asserted by the axiom of choice in this case? Applying the axiom of choice to $\{\{1,2\}, \{3,4\}, \{5,6\},\ldots\}$, does only one choice set necessarily exist, or all of the $2^{\aleph_0}$ I "could have" chosen? Or something in between? It seems if only one, then I didn't really have much of a choice; I just had to take what the universe gave me. But letting them all exist has its own problems.
|
In this case you don’t need the axiom of choice to get a choice function: just pick the smaller member of each pair. Various other choice functions are explicitly definable: we could just as well pick the larger member of each pair, for instance. Or we could pick the member that is divisible by $3$ if there is one, and the smaller one otherwise.
In general the axiom of choice gives you more than one choice function. Let $\{A_i:i\in I\}$ be a family of non-empty sets, and let $\varphi:I\to\bigcup_{i\in I}A_i$ be a choice function. For each $i\in I$ let
$$B_i=\begin{cases}
A_i,&\text{if }|A_i|=1\\
A_i\setminus\{\varphi(i)\},&\text{otherwise}\;;
\end{cases}$$
then the axiom of choice gives us a choice function $\psi:I\to\bigcup_{i\in I}B_i$ for the family $\{B_i:i\in I\}$. This $\psi$ is clearly also a choice function for the original family, and it’s different from $\varphi$ unless every $A_i$ was a singleton. Eventually, once the appropriate cardinal arithmetic has been developed, one can prove that it gives us
$$\prod_{i\in I}|A_i|$$
choice functions.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1763875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Can you use both sides of an equation to prove equality? For example:
$\color{red}{\text{Show that}}$$$\color{red}{\frac{4\cos(2x)}{1+\cos(2x)}=4-2\sec^2(x)}$$
In high school my maths teacher told me
To prove equality of an equation; you start on one side and manipulate it algebraically until it is equal to the other side.
So starting from the LHS: $$\frac{4\cos(2x)}{1+\cos(2x)}=\frac{4(2\cos^2(x)-1)}{2\cos^2(x)}=\frac{2(2\cos^2(x)-1)}{\cos^2(x)}=\frac{4\cos^2(x)-2}{\cos^2(x)}=4-2\sec^2(x)$$ $\large\fbox{}$
At University, my Maths Analysis teacher tells me
To prove a statement is true, you must not use what you are trying to prove.
So using the same example as before:
LHS = $$\frac{4\cos(2x)}{1+\cos(2x)}=\frac{4(2\cos^2(x)-1)}{2\cos^2(x)}=\frac{2(2\cos^2(x)-1)}{\cos^2(x)}=\frac{2\Big(2\cos^2(x)-\left[\sin^2(x)+\cos^2(x)\right]\Big)}{\cos^2(x)}=\frac{2(\cos^2(x)-\sin^2(x))}{\cos^2(x)}=\bbox[yellow]{2-2\tan^2(x)}$$
RHS =$$4-2\sec^2(x)=4-2(1+\tan^2(x))=\bbox[yellow]{2-2\tan^2(x)}$$
So I have shown that the two sides of the equality in $\color{red}{\rm{red}}$ are equal to the same highlighted expression. But is this a sufficient proof?
Since I used both sides of the equality (which is effectively; using what I was trying to prove) to show that $$\color{red}{\frac{4\cos(2x)}{1+\cos(2x)}=4-2\sec^2(x)}$$
One of the reasons why I am asking this question is because I have a bounty question which is suffering from the exact same issue that this post is about.
EDIT:
Comments and answers below seem to indicate that you can use both sides to prove equality. So does this mean that my high school maths teacher was wrong?
$$\bbox[#AFF]{\text{Suppose we have an identity instead of an equality:}}$$ $$\bbox[#AFF]{\text{Is it possible to manipulate both sides of an identity to prove that the identity holds?}}$$
Thank you.
|
It is enough. Consider this example:
To prove:
$a=b$
Proof:
$$a=c$$
$$b=c$$
Since $a$ and $b$ are equal to the same thing, $a=b$.
That is the exact technique you are using and it sure can be used.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1763978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32",
"answer_count": 7,
"answer_id": 6
}
|
How to find the radius of convergence of $\sum_{n=0}^{\infty}[2^{n}z^{n!}]$ I tried with $$1/R = \lim_{n\to\infty}{\sup({\sqrt[n]{2^n}})} = \lim_{n\to\infty}{2} = 2$$
But that doesn't seem correct.
Thank you for your help!
|
We have
$$\sqrt[n]{2^n\left|z^{n!}\right|}= 2|z|^{(n-1)!} \to \begin{cases}0&,|z|<1\\\\2&,|z|=1\\\\\infty&,|z|>1\end{cases}$$
and the series converges when $|z|<1$ and diverges otherwise; hence the radius of convergence is $1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1764098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Explain why a gamma random variable with parameters $(t, \lambda)$ has an approximately normal distribution when $t$ is large. Explain why a gamma random variable with parameters $(t, \lambda)$ has an approximately normal distribution when $t$ is large.
What I have come up with so far is:
Let $X=$ the sum of all $X_i$, where $X_1, X_2, \ldots , X_i \sim \mathrm{Exp}(\lambda)$. Then because of the CLT, as $t$ gets sufficiently large, it can be approximated by a normal distribution.
Is this sufficient or am I missing a key piece of this proof?
|
That's pretty much it except that you assumed $t$ is an integer and you used "$i$" rather then "$t$" at one point. One way of dealing with non-integer values of $t$ is to go back to the proof of the CLT that uses characteristic functions and make a minor modification in the argument to accomodate non-integers.
(BTW, one writes $Y\sim\mathrm{Exp}$, with that entire expression in MathJax and with \mathrm{} or the like, not $Y$ ~ $Exp$.)
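A quick simulation illustrates the approximation for large shape parameter (a sketch using only the standard library; note that `random.gammavariate(alpha, beta)` takes shape and *scale* $=1/\lambda$, and the tolerances below are loose statistical checks, not exact values):

```python
import math
import random

random.seed(42)
t, lam = 100.0, 1.0            # shape t, rate lam: mean t/lam, variance t/lam^2
N = 20000

sample = [random.gammavariate(t, 1.0 / lam) for _ in range(N)]

mean = sum(sample) / N
var = sum((x - mean) ** 2 for x in sample) / N

# For a normal distribution about 68.3% of the mass lies within one
# standard deviation of the mean; the gamma sample should be close.
sd = math.sqrt(var)
within = sum(abs(x - mean) <= sd for x in sample) / N
print(mean, var, within)       # roughly 100, 100, 0.68
```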
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1764277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Is the set of periodic functions from $\mathbb{R}$ to $\mathbb{R}$ a subspace of $\mathbb{R}^\mathbb{R}$? A function $f: \mathbb{R} \to \mathbb{R}$ is called periodic if there exists a positive number
$p$ such that $f(x) = f(x + p)$ for all $x \in \mathbb{R}$. Is the set of periodic
functions from $\mathbb{R}$ to $\mathbb{R}$ a subspace of $\mathbb{R}^\mathbb{R}$? Explain
So I have seen a solution to this question and my question has more to do with what thought process was used to even think of the sort of function to show that the set of periodic functions is not a subspace? First I do have a question of what $\mathbb{R}^{\mathbb{R}}$ would look like? I'm visualizing elements being of some sort of infinite list of the sort $(x_1, x_2, x_3,..........), x_i \in \mathbb{R}$.
But to the main question. So the function chosen was $$h(x) = \sin(\sqrt{2}\,x) + \cos(x)$$ where $f(x) = \sin(\sqrt{2}\,x)$ and $g(x) = \cos(x)$.
Using these functions, the author arrived at a contradiction with regards to $\sqrt{2}$ being shown to be rational (which it is not). Working this out after being given that function was fine, but what was the motivation to use that function? Where did the idea come from that showing something is irrational would help to disprove a set being a subspace? It almost feels like it arose from osmosis and brilliance.....
|
The idea is that if $f$ is $a$-periodic and $g$ is $b$-periodic, and $\frac{a}{b}\in \mathbb{Q}$, then it's easy to see that $f+g$ is periodic : if $\frac{a}{b} = \frac{p}{q}$ with $p,q\in \mathbb{N}$ then put $c = qa=pb$.
Since $f$ is $a$-periodic, it's also $c$-periodic. Likewise, $g$ is $c$-periodic since it's $b$-periodic.
Thus $f+g$ is $c$-periodic.
So if you want a counterexample, looking at periods whose quotient is irrational is necessary.
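A quick numerical illustration of the rational-ratio case in the answer (periods $a=3$, $b=2$, so $\frac{a}{b}=\frac{3}{2}$ and the common period is $c=qa=pb=6$; the names are mine):

```python
import math

a, b = 3.0, 2.0          # periods with rational ratio a/b = 3/2
p, q = 3, 2
c = q * a                # common period: c = q*a = p*b = 6

def f(x):
    return math.sin(2 * math.pi * x / a)

def g(x):
    return math.sin(2 * math.pi * x / b)

def s(x):
    return f(x) + g(x)

# f+g repeats with period c at every sample point checked.
for x in (0.0, 0.1, 1.7, 2.5, 4.9):
    assert abs(s(x + c) - s(x)) < 1e-9
print("f+g has period", c)
```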
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1764349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
}
|
What are those weighted graphs called? Let $G = (V, E)$ be a directed graph, and define the weight function $f : V \sqcup E \to \mathbb{R}^+$ as follows:
*
*sum of weights of vertices is 1,
*if a vertex has edges coming out of it, their weights sum to 1, and
*if a vertex has edges coming into it, their weights sum to 1.
I am curious if this weighted graph has a name in the literature.
|
For undirected graphs, constant weight of neighboring vertices has been called "weighted-regular", generalizing the concept of regular graph (when all the vertex weights are equal).
That definition is not used often enough to have become a standard term.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1764424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Can $A \cap (B' \cap C')$ be $(A \cap B') \cap (A \cap C')$? If I use the above statement, provided that it is right, in a question, would I have to prove it as well?
|
In this case, the proof is rather trivial (writing $B$ for $B'$ and $C$ for $C'$, since those are just sets): since the $\cap$ operator is both associative and commutative, all the following are equivalent:
$$ (A \cap B) \cap (A \cap C) $$
$$ A \cap B \cap A \cap C $$
$$ A \cap A \cap B \cap C $$
$$ A \cap B \cap C $$
$$ A \cap (B \cap C) $$
As others said, whether or not you actually have to include the proof is mostly a matter of context.
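Since the identity involves only finitely many set operations, it can also be verified exhaustively over a small universe (a brute-force sketch; $B'$ and $C'$ are taken as complements relative to the universe $U$):

```python
from itertools import product

U = {0, 1, 2, 3}

def subsets(s):
    elems = sorted(s)
    for bits in product((0, 1), repeat=len(elems)):
        yield {x for x, b in zip(elems, bits) if b}

# Verify A ∩ (B' ∩ C') == (A ∩ B') ∩ (A ∩ C') for all A, B, C ⊆ U.
for A in subsets(U):
    for B in subsets(U):
        for C in subsets(U):
            Bc, Cc = U - B, U - C
            assert A & (Bc & Cc) == (A & Bc) & (A & Cc)
print("identity holds for all subsets of", U)
```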
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1764511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Maximum concyclic points Given $n$ points, find an algorithm to get a circle passing through the maximum number of them.
|
The method of choosing all sets of 3 points, finding the circle that passes through that set, and seeing which other points lie on that circle has one big problem: roundoff error. If you try to use any method that involves taking square roots, roundoff can cause problems.
Here is a method, based on some previous work of mine, that allows this to be done without any error, assuming that the points all have rational coordinates. All operations in the following are assumed to be done with exact rational arithmetic.
Do the following for all sets of three points. Some optimizations can be done by keeping track of when a point is found to be on a circle passing through a set of three points.
Let the points be $(x_i, y_i), i=1, 2, 3$. Let the circle be $(x-a)^2+(y-b)^2=r^2$, or $x^2-2ax+y^2-2by=r^2-a^2-b^2$, or $2ax+2by+c=x^2+y^2$ where $c = r^2-a^2-b^2$.
Solve the linear system $2ax_i+2by_i+c=x_i^2+y_i^2, i=1,2,3$ for $a, b, c$ exactly, using rational arithmetic. If there is no solution, try the next trio.
Then, for every other point $(x, y)$, check if $2ax+2by+c=x^2+y^2$. If so, the point is on the circle. As before, this can be done exactly with rational arithmetic.
At every stage, keep track of the set of three points that has the most points on its circle. If you use $O(n^3)$ storage, you can keep track of all previous computation. At the end, you will have the circle with most points on it.
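A minimal sketch of the exact step, using Python's `fractions` module (function names are mine). Note that rearranging the circle equation gives $2ax+2by+c=x^2+y^2$ with $c=r^2-a^2-b^2$, and membership can then be tested with exact equality:

```python
from fractions import Fraction as F

def circle_through(p1, p2, p3):
    """Find the circle 2a*x + 2b*y + c = x^2 + y^2 through three points,
    exactly, with rational arithmetic.  Here (a, b) is the center and
    c = r^2 - a^2 - b^2.  Returns None if the points are collinear."""
    rows = [[F(2 * x), F(2 * y), F(1), F(x * x + y * y)]
            for x, y in (p1, p2, p3)]
    for i in range(3):                       # exact Gaussian elimination
        piv = next((r for r in range(i, 3) if rows[r][i] != 0), None)
        if piv is None:
            return None                      # singular: collinear points
        rows[i], rows[piv] = rows[piv], rows[i]
        d = rows[i][i]
        rows[i] = [v / d for v in rows[i]]
        for r in range(3):
            if r != i:
                m = rows[r][i]
                rows[r] = [v - m * w for v, w in zip(rows[r], rows[i])]
    return rows[0][3], rows[1][3], rows[2][3]    # (a, b, c)

def on_circle(point, abc):
    a, b, c = abc
    x, y = point
    return 2 * a * x + 2 * b * y + c == x * x + y * y   # exact test

abc = circle_through((0, 0), (2, 0), (0, 2))
print(abc)                      # (1, 1, 0): center (1, 1), r^2 = 2
print(on_circle((2, 2), abc))   # True: (2,2) is on the same circle
```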
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1764722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Shortest path tree different than minimum spanning tree My professor brought up in class that a shortest path tree can be different than minimum spanning tree for an undirected graph. However, I have not been able to find a case where this is possible.
|
Remember that the shortest path is between two points, while the minimum spanning tree spans the entire graph, not just two points.
If you consider a triangle with side lengths of 1, can you see the MST and the shortest path between all pairs of vertices in your head? Do they differ for one pair of vertices?
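The hint can be made concrete in code: build the unit triangle, extract a minimum spanning tree, and compare path lengths (a sketch; the component-merging step is done naively since there are only three vertices):

```python
import heapq

# Triangle a-b-c with all edge weights 1.
edges = [("a", "b", 1), ("b", "c", 1), ("a", "c", 1)]

# Kruskal's algorithm: add cheapest edges that join distinct components.
comp = {v: v for v in "abc"}
mst = []
for u, v, w in sorted(edges, key=lambda e: e[2]):
    if comp[u] != comp[v]:
        mst.append((u, v, w))
        old, new = comp[v], comp[u]
        comp = {k: (new if c == old else c) for k, c in comp.items()}

def to_adj(es):
    adj = {}
    for u, v, w in es:
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    return adj

def dijkstra(adj, src):
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

d_full = dijkstra(to_adj(edges), "a")["c"]   # 1: the direct edge a-c
d_mst = dijkstra(to_adj(mst), "a")["c"]      # 2: the MST forces a-b-c
print(d_full, d_mst)
```

So the MST (two edges of the triangle) is minimal in total weight, yet the path it offers between the remaining pair of vertices is twice the true shortest path.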
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1764853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
multiplication of finite sums (inner product space) I am having difficulty understanding the first line of the proof of Theorem 3.22 below (taken from a linear analysis book).
Why do the indices need to be different, i.e. $m,n$, when multiplying the two sums? This is very basic, but I really need help with the explanation.
|
Orthornormality refers to the basis $e_i$. When a basis is orthonormal it means the inner product between any two elements of the basis $e_i,e_j$ is $\langle e_i, e_j \rangle = \delta_{ij}$ (see kronecker delta). More generally, two vectors $u,v$ are orthogonal if $\langle u, v \rangle = 0$. The normality part comes from elements of the basis having norm $1$.
The part on algebraic properties refers simply to the bilinearity. If $u,v,w$ are vectors and $\alpha, \beta \in \mathbb{R}$, then $\langle \alpha \cdot u, v\rangle=\alpha \cdot \langle u, v \rangle = \langle u, \alpha \cdot v \rangle$ and $\langle u + v, w\rangle = \langle u, w\rangle + \langle v, w\rangle$ and $\langle u,v+w\rangle = \langle u, v\rangle+\langle u, w\rangle$.
The proof uses the fact that $\lVert v \rVert^2 = \langle v, v \rangle$ and the properties above to expand the inner product, then the orthonormality to simplify it.
As for the indices: the two sums are independent, so each needs its own dummy index. Writing $\left\langle \sum_m a_m e_m, \sum_n a_n e_n\right\rangle$ with distinct indices lets bilinearity produce every cross term $a_m a_n\langle e_m,e_n\rangle$; reusing the same index would wrongly pick out only the diagonal terms.
In the end, this theorem says that for orthonormal bases, you can use Pythagoras's Theorem to calculate lengths.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1764927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
}
|
Mistake while evaluating the gaussian integral with imaginary term in exponent I am trying to evaluate the integral $I=\int_0^\infty e^{-ix^2}\,dx$ as one component of evaluating a contour integral but I am dropping a factor of $1/2$ and after checking my work many times, I worry that I am making a conceptual mistake in moving to polar coordinates for the interval $(0,\infty)$ rather than $(-\infty,\infty)$. Below is my work:
$$
\begin{align*}
\text{Define } I&=\int_0^R e^{-ix^2} \, dx=\int_0^R e^{-iy^2} \, dy \\
\text{then } I^2&=\int_0^R e^{-ix^2} \, dx \int_0^R e^{-iy^2} \, dy \\
&\Rightarrow I^2=\int_0^R\int_0^R e^{-ix^2}e^{-iy^{2}} \, dx \, dy \\
&\Rightarrow I^2=\int_0^R\int_0^R e^{-i(x^2+y^2)} \, dx \, dy
\end{align*}
$$
And converting to polar coordinates, with jacobian $r$ for the integral,
and taking $\theta\in[0,\pi]$ since we are integrating in the first quadrant,
we have:
\begin{align*}
I^2&=\int_0^\pi\int_0^R e^{-i(r^2\cos^2(\theta)+r^2 \sin^2(\theta))} r \, dr \, d\theta\\
I^2&=\int_0^\pi\int_0^R e^{-i(r^2)} r \, dr \, d\theta\\
\end{align*}
Which we can now perform a u substitution on:
$u=r^2\Rightarrow \frac{du}{2r}=dr$, yielding
\begin{align*}
I^2&=\frac{1}{2}\int_0^\pi\int_{u=0}^{u=R^2}e^{-iu} \, du \, d\theta\\
&\Rightarrow I^2=\frac{\pi}{2} \left[\frac{-e^{-iR^2}}{i}+\frac{1}{i}\right] \\
&\Rightarrow I^2=\frac{\pi}{2i}[1-e^{-iR^2}] \\
&\Rightarrow I=\sqrt{\frac{\pi}{2i}}\sqrt{1-e^{-iR^2}}
\end{align*}
Then in the limit $R\rightarrow \infty$ we have
\begin{equation*}
\lim_{R\rightarrow \infty}\sqrt{\frac{\pi}{2i}}\sqrt{1-e^{-iR^{2}}}=
\sqrt{\frac{\pi}{2i}}
\end{equation*}
|
The mistake, in my eyes, is that you allow the polar angle to range over $\phi\in[0,\pi]$ although you are integrating only over a quadrant in Cartesian coordinates, $(x,y)\in[0,R]^2$. I'd suggest putting $\phi\in[0,\pi/2]$ in order to account for the integration in the first quadrant. This modification of your calculation results in an additional factor of $1/2$ when going to polars $(r,\phi)$.
$I = \sqrt{\frac{\pi}{4i}}\sqrt{1-e^{-iR^2}}$.
The limit $R\to\infty$ works for practical purposes (strictly speaking, $e^{-iR^2}$ oscillates as $R\to\infty$, so making the limit rigorous requires an extra argument, e.g. a regularization $e^{-(\varepsilon+i)x^2}$ with $\varepsilon\to 0^+$).
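As a quick check of the corrected constant: $\sqrt{\pi/(4i)}$ should agree with the known Fresnel values $\int_0^\infty\cos(x^2)\,dx=\int_0^\infty\sin(x^2)\,dx=\sqrt{\pi/8}$, i.e. $I=\sqrt{\pi/8}\,(1-i)$:

```python
import cmath
import math

val = cmath.sqrt(cmath.pi / 4j)        # principal square root of pi/(4i)
fresnel = math.sqrt(math.pi / 8)       # = 0.6266...

print(val)                             # approximately 0.6266 - 0.6266j
print(complex(fresnel, -fresnel))      # the same value
```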
Does this help you?
David
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1765002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Triangle group $(\beta,\beta,\gamma)$ is a subgroup of the triangle group $(2,\beta, 2\gamma)$. Let $1/\alpha+1/\beta+1/\gamma<1$, and let us consider the triangle group $(\alpha,\beta,\gamma)$, i.e. the subgroup of $\mathbb{P}\mathrm{SL}(2,\mathbb{R})$ induced by the hyperbolic triangle with angles
$\pi/\alpha,\pi/\beta,\pi/\gamma$.
I have read that the triangle group $(\beta,\beta,\gamma)$ is a subgroup of the triangle group $(2,\beta, 2\gamma)$. How could we prove it? This is probably very standard, but I am not used to work with triangle groups. Any help would be appreciated.
|
It's the subgroup $\langle xyx,y \rangle$ of $\langle x,y \mid x^2,y^\beta,(xy)^{2\gamma} \rangle$.
The index of this subgroup is $2$, so checking that the subgroup has the presentation $\langle z,y \mid z^\beta,y^\beta,(zy)^\gamma \rangle$ (with $z=xyx$) is routine.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1765144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Calculate $\int_D \lvert x-y^2 \rvert \, dx \, dy $ $$\int_D \lvert x-y^2 \rvert \, dx \, dy $$
$D$ is the shape that is delimited from the lines:
$$
y=x \\
y=0 \\
x=1 \\$$
$$D=\{ (x,y) \in \mathbb{R}^2: 0 \le x \le 1 \ , \ 0 \le y \le x \}$$
$$\lvert x-y^2 \rvert=x-y^2 \qquad \forall (x,y) \in D $$
$$\int_0^1 \Big( \int_0^x (x-y^2) \ dy \Big) \ dx= \int_0^1 \Big[ xy-\frac{1}{3} y^3 \Big]_0^x \ dx= \int_0^1 \left(x^2-\frac{1}{3} x^3\right) \ dx=\Big[ \frac{1}{3} x^3-\frac{1}{12} x^4 \Big]_0^1=\frac{1}{4}$$
Is it correct?
|
This is correct. In particular, you are right that $\lvert x-y^2\rvert = x-y^2$ on the whole region: for $(x,y)\in D$ we have $0 \le y \le x \le 1$, so $y^2 \le y \le x$.
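A quick brute-force check of the value $1/4$ (my own Python sketch, midpoint rule over the triangle):

```python
n = 400
h = 1.0 / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        if y < x:                        # the triangle 0 <= y <= x <= 1
            total += (x - y ** 2) * h * h
assert abs(total - 0.25) < 0.01
```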
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1765260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Multiplying two logarithms (Solved) I was wondering how one would multiply two logarithms together?
Say, for example, that I had:
$$\log x·\log 2x < 0$$
How would one solve this? And if it weren't possible, what would its domain be?
Thank you!
(I've uselessly tried to sum the logs together but that obviously wouldn't work. Was just curious as to what it would give me)
EDIT: Thank you everyone for the answers!
|
There is no particular rule for the product of logarithms, unlike for the sum.
Applying the latter, you can rewrite
$$\log(x)\log(2x)=\log(x)(\log(x)+\log(2))=t(t+\log(2))$$
and proceed as usual to find the domain of $t$. Then $x=e^t$.
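Concretely, with natural logs, $t(t+\log 2)<0$ forces $-\log 2 < t < 0$, i.e. $x\in(1/2,1)$; a small Python check (sample points are my own choice):

```python
import math

def f(x):
    return math.log(x) * math.log(2 * x)

# negative exactly on (1/2, 1), i.e. t = log(x) in (-log 2, 0)
for x in (0.55, 0.7, 0.9, 0.99):
    assert f(x) < 0
for x in (0.1, 0.4, 1.01, 2.0, 5.0):
    assert f(x) > 0
```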
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1765403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 1
}
|
Maximum of $xy+y^2$ subject to right-semicircle $x\ge 0,x^2+y^2\le 1$ Maximum of:
$$
xy+y^2
$$
Domain:
$$
x \ge 0, x^2+y^2 \le1
$$
I know that the result is:
$$
\frac{1}{2}+\frac{1}{\sqrt{2}}
$$
for
$$
(x,y)=\left(\frac{1}{\sqrt{2(2+\sqrt{2})}},\frac{\sqrt{2+\sqrt{2}}}{2}\right)
$$
But I don't know how to get this result.
I know that:
$$
xy+y^2 \le \frac{1}{2}+y^2
$$
so:
$$
xy \le \frac{1}{2}
$$
And also:
$$
xy+y^2 \le xy+1-x^2 \equiv 1+x(y-x)
$$
But I don't know what to do next...
|
Hint$$x^2+y^2=x^2+(3-2\sqrt{2})y^2+(2\sqrt{2}-2)y^2 $$
Now notice $$x^2+(3-2\sqrt{2})y^2+(2\sqrt{2}-2)y^2 \ge 2(3-2\sqrt{2})^{\frac{1}{2}}xy+(2\sqrt{2}-2)y^2 (\because \text{AM-GM})$$Now note $(\sqrt{2}-1)^2=3-2\sqrt{2}$.
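The claimed maximum $\frac12+\frac1{\sqrt2}$ can be confirmed numerically (a Python sketch; the grid size is arbitrary). Since $xy+y^2$ is homogeneous of degree $2$, its positive maximum over the half-disc is attained on the boundary arc $x=\cos t$, $y=\sin t$:

```python
import math

# scan the boundary arc of the half-disc, t in [-pi/2, pi/2]
best = max(math.cos(t) * math.sin(t) + math.sin(t) ** 2
           for t in (-math.pi / 2 + k * math.pi / 100000 for k in range(100001)))
assert abs(best - (0.5 + 1 / math.sqrt(2))) < 1e-8
```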
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1765612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Given $\tan a = -7/24$ in $2$nd quadrant and $\cot b = 3/4$ in $3$rd quadrant find $\sin (a + b)$. Say $\tan a = -7/24$ (second quadrant) and $\cot b = 3/4$ (third quadrant), how would I find $\sin (a + b)$?
I figured I could solve for the $\sin/\cos$ of $a$ & $b$, and use the add/sub identities, but I got massive unwieldy numbers outside the range of $\sin$. How ought I to go about this problem? Thanks.
|
$\tan(a) = -7/24$
Opposite side $= 7$ and adjacent side $= 24$
Pythagorean theorem
$\Rightarrow$ hypotenuse $= \sqrt{49+576} = 25$
$\sin(a) = 7/25$ (sin is positive in second quadrant)
$\cos(a) = - 24/25$ (cos is negative in second quadrant)
$\cot(b) = 3/4 \Rightarrow \tan(b) = 4/3$
Opposite side $= 4$ and adjacent side $= 3$
Pythagorean theorem
$\Rightarrow$ hypotenuse $= \sqrt{9+16} = 5$
$\sin(b) = - 4/5$ (sin is negative in third quadrant)
$\cos(b) = -3/5$ (cos is negative in third quadrant)
$\sin(a+b) = \sin a \cos b + \cos a \sin b$
$\sin(a+b) = \frac{7}{25}\left(-\frac{3}{5}\right) + \left(-\frac{24}{25}\right)\left(-\frac{4}{5}\right) = \frac{-21+96}{125} = \frac{75}{125} = \frac{3}{5}$
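One can double-check this by constructing the angles in the right quadrants with `atan2` (a Python sketch, not part of the original answer):

```python
import math

a = math.atan2(7 / 25, -24 / 25)   # second quadrant: sin a = 7/25, cos a = -24/25
b = math.atan2(-4 / 5, -3 / 5)     # third quadrant:  sin b = -4/5, cos b = -3/5
assert abs(math.sin(a + b) - 3 / 5) < 1e-12
```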
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1765720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Showing a C* Algebra contains a compact operator In my functional analysis class we are currently dealing with C* Algebras, and I just met this problem:
Let $ \mathbb{H} $ be a separable Hilbert space, and suppose we have $ A \subset B(\mathbb{H}) $, a C*-algebra of bounded operators on $\mathbb{H}$. Now suppose there exist $ a \in A $ and a compact operator $ K \in K(\mathbb{H}) $ such that $ \|a-K\| < \|a\| $. We are to show that $A$ contains a compact operator on $\mathbb{H}$ that is not the zero operator, that is, $ A \cap K(\mathbb{H}) \neq \{0\} $.
I am quite new to C* Algebras and I still have not much intuition but I cannot really see how to do this. I cannot seem to find a way of doing this. I certainly appreciate all help.
|
Here is an answer in terms of a standard result of C*-algebra theory.
Theorem: Every injective $*$-homomorphism from one C*-algebra to another is isometric.
In addition to this result, we will need to know that the quotient of a C*-algebra by a closed ideal makes sense, and that the result is a C*-algebra. In particular, we have the Calkin algebra $Q(H)=B(H)/K(H)$, the C*-algebra of bounded operators modulo the compact operators.
Consider now the restriction to your algebra $A$ of the quotient map $\pi:B(H) \to Q(H)$. Supposing that your algebra $A$ contains no compact operator, this restriction is injective, hence isometric by the theorem. That is, for every $a \in A$, the norm of $a$ equals the norm of $\pi(a) \in Q(H)$. But, recalling the definition of the quotient norm, this says exactly that
$$\|a\| = \inf_{k \in K(H)} \|a-k\|$$
i.e. the distance from $a$ to the compacts equals the norm of $a$. Therefore there is no $k \in K(H)$ with $\|a\| > \|a-k\|$, contradicting the hypothesis; so $A$ must contain a nonzero compact operator.
It would probably be an instructive exercise to unpack the proof of the theorem and see how it applies to this particular case.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1765816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Does there exist a non-parallelisable manifold with exactly $k$ linearly independent vector fields?
For any positive integer $k$, is there a smooth, closed, non-parallelisable manifold $M$ such that the maximum number of linearly independent vector fields on $M$ is $k$?
Note that any such $M$, for any $k$, must have Euler characteristic zero by the Poincaré-Hopf Theorem.
Without the non-parallelisable hypothesis, the $k$-dimensional torus is an easy example.
One might think that the product of a $k$-dimensional torus with a manifold which admits no nowhere-zero vector fields provides an example, but this doesn't necessarily work. For example, the product of a torus and an even dimensional-sphere is actually parallelisable. The reason this approach fails with the even-dimensional sphere is that it is has stably trivial tangent bundle; maybe using a manifold with non-stably trivial tangent bundle might work.
Determining the maximal number of linearly independent vector fields on a manifold is not easy in general. Even for spheres, the answer is complicated: if $n + 1 = 2^{4a+b}u$ where $a \geq 0$, $0 \leq b \leq 3$ and $u$ is odd, then the maximal number of linearly independent vector fields on $S^n$ is $8a + 2^b - 1$. So for the first few odd values of $n$ we have
$$
\begin{array}{c|c}
& \text{maximum number of linearly}\\
n &\text{independent vector fields on}\ S^n\\
\hline
1 & 1\\
3 & 3\\
5 & 1\\
7 & 7\\
9 & 1\\
11 & 3\\
13 & 1\\
15 & 8
\end{array}
$$
The above result shows that the answer to the initial question is yes for $k \equiv 0, 1, 3, 7 \bmod 8$.
|
Consider $M$, the product of the Klein bottle $K^2$ with the $(k-1)$-torus $T^{k-1}$. This manifold is nonorientable, hence nonparallelizable. On the other hand, it is the total space of a circle bundle over $T^k$, since $K^2$ is a circle bundle over the circle. Let $H$ be a (smooth) connection on this bundle. Take $k$ independent vector fields $X_1,\ldots,X_k$ on $T^k$ and lift them to $M$ via the connection $H$. Hence, $M$ admits $k$ independent vector fields. It cannot have $k+1$ independent vector fields since it is not parallelizable. I am sure there are orientable examples as well, but this requires more work.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1765967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
Sum of sum of binomial coefficients $\sum_{j=1}^{n}{\sum_{k=0}^{j}{{n}\choose{k}}}$ I know there is no simple way to solve the sum:
$$\sum_{k=0}^{j}{{n}\choose{k}}$$
But what about the equation:
$$\sum_{j=1}^{n}{\sum_{k=0}^{j}{{n}\choose{k}}}$$
Are there any simplifications or good approximations for this equation?
|
$$\begin{align}
\sum_{j=\color{red}0}^{n}{\sum_{k=0}^{j}{{n}\choose{k}}}
&=\sum_{k=0}^n\sum_{j=k}^n\binom nk\\
&=\sum_{k=0}^n (n-k+1)\binom n{n-k}\\
&=\sum_{j=0}^n (j+1)\binom nj
&&\text{putting }j=n-k\\
&=\sum_{j=0}^n j\binom nj+\sum_{j=0}^n \binom nj\\
&=n\, 2^{n-1}+2^n\\
\sum_{j=\color{red}1}^n\sum_{k=0}^j\binom nk&=\sum_{j=\color{red}0}^n\sum_{k=0}^j\binom nk-1\\
&=n\, 2^{n-1}+2^n-1\\
&=(n+2)2^{n-1}-1\quad\blacksquare
\end{align}$$
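The closed form $(n+2)2^{n-1}-1$ is easy to verify by brute force (a Python sketch of mine):

```python
from math import comb

for n in range(1, 13):
    lhs = sum(sum(comb(n, k) for k in range(j + 1)) for j in range(1, n + 1))
    assert lhs == (n + 2) * 2 ** (n - 1) - 1
```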
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1766025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
}
|
Cosine Inequality Show that given three angles $A,B,C\ge0$ with $A+B+C=2\pi$ and any positive numbers $a,b,c$ we have $$bc\cos A + ca \cos B + ab \cos C \ge -\frac {a^2+b^2+c^2}{2}$$
This problem was given in the course notes for a complex analysis course, so I anticipate using $$bc\cos A + ca \cos B + ab \cos C=\mathbf R\mathbf e (bc\text{cis} A + ca \text{cis} B + ab \text{cis} C )\qquad=\mathbf R\mathbf e (c\text{cis}(-C)b\text{cis}(-B)+c\text{cis}(-C)a\text{cis}(-A)+a\text{cis}(-A)b\text{cis}(-B))$$
where $\mathbf R\mathbf e(x)$ denotes the real part of $x$,
is the start, although following this through does not lead me to the inequality I am after.
Also I am unsure of the geometric interpretation of this identity.
This is a similar inequality to $$bc+ca+ab \le a^2+b^2+c^2$$
but the constraints on the angle allow a stronger inequality.
I want some geometric intuition for what the equality is trying to prove, but also I want to see where the inequality comes from.
|
I applied the Cosine Rule three times (from each side to the opposite angle) and added to arrive at the inequality (actually an equality in a triangle?)
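Numerically the inequality, equivalently $a^2+b^2+c^2+2(bc\cos A+ca\cos B+ab\cos C)\ge 0$, holds on random samples; a Python sketch (my own check, not part of the original answer):

```python
import math, random

random.seed(0)
for _ in range(2000):
    A = random.uniform(0, math.pi)
    B = random.uniform(0, math.pi)
    C = 2 * math.pi - A - B            # so A + B + C = 2*pi and C >= 0
    a, b, c = (random.uniform(0.01, 10.0) for _ in range(3))
    lhs = b * c * math.cos(A) + c * a * math.cos(B) + a * b * math.cos(C)
    assert lhs >= -(a * a + b * b + c * c) / 2 - 1e-9
```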
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1766099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
}
|
Distribution of $aXa^T$ for normal distributed vector $a$ Let $a$ be a $1\times n$ random vector with entries chosen independently from the normal distribution with zero mean and unit variance. What is the distribution of $aXa^T$ for a given $n\times n$ matrix $X$?
If $X$ is a symmetric matrix, then the above is a Wishart distribution. What if $X$ is not symmetric?
|
$X$ can be written as a sum $X_{s} + X_a$, where $X_{s}$ is symmetric and $X_a$ is antisymmetric. But $a X_a a^\text{T} = 0$ for any $a$ and any antisymmetric $X_a$, so WLOG, suppose $X = X_s$. Then your penultimate sentence applies.
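The decomposition argument is easy to test numerically (a Python sketch I added, using plain lists rather than any linear-algebra library):

```python
import random

random.seed(1)
n = 4
X = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
Xs = [[(X[i][j] + X[j][i]) / 2 for j in range(n)] for i in range(n)]  # symmetric part
Xa = [[(X[i][j] - X[j][i]) / 2 for j in range(n)] for i in range(n)]  # antisymmetric part
a = [random.gauss(0, 1) for _ in range(n)]

def quad(v, M):
    # the quadratic form v M v^T
    return sum(v[i] * M[i][j] * v[j] for i in range(n) for j in range(n))

assert abs(quad(a, Xa)) < 1e-12          # antisymmetric part contributes nothing
assert abs(quad(a, X) - quad(a, Xs)) < 1e-12
```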
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1766203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Is $\langle A,B\rangle =\operatorname{trace}(AB^T)$ an inner product in $\mathbb R^{n\times m}$? I don't understand why one should take transpose of $\operatorname{tr}(AB^T)$ and why we use the fact that $\operatorname{tr}(M)=\operatorname{tr}(M^T)$ for any $M$ that is a square matrix to solve the problem.
|
If you grind through the details, you will see that
$\operatorname{tr} (A B^T) = \sum_{i,j} [A]_{ij} [B]_{ij}$, hence
this is the 'standard' inner product if you view the matrices
$A,B$ as giant columns.
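The identity $\operatorname{tr}(AB^T)=\sum_{i,j}A_{ij}B_{ij}$ can be checked directly (a Python sketch of my own, with explicit loops so no library is assumed):

```python
import random

random.seed(2)
n, m = 3, 4
A = [[random.uniform(-1, 1) for _ in range(m)] for _ in range(n)]
B = [[random.uniform(-1, 1) for _ in range(m)] for _ in range(n)]

# form A B^T explicitly, then take its trace
ABt = [[sum(A[i][k] * B[j][k] for k in range(m)) for j in range(n)] for i in range(n)]
trace = sum(ABt[i][i] for i in range(n))
elementwise = sum(A[i][j] * B[i][j] for i in range(n) for j in range(m))
assert abs(trace - elementwise) < 1e-12
```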
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1766303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find how many solutions the congruence $x^2 \equiv 121 \mod 1800$ has I want to find how many solutions the congruence $x^2 \equiv 121 \mod 1800$ has.
What is the method to find it without calculating all the solutions?
I can't use euler criterion here because 1800 is not a primitive root. thanks!!!
|
Here is a different approach to find the number of solutions to the congruence
$$ x^2 \equiv a \space(\bmod n)$$
If
$$ n = 2^kp_1^{k_1}\cdots p_r^{k_r}$$
where $p_1, \dots ,p_r$ are odd different primes, $k \ge 0$ and $k_1,\dots,k_r \ge 1$.
If the congruence has solutions, the number of solutions will be equal to $2^r \cdot 2^m,$ where $$
m= \begin{cases}
0, & k=0,1\\
1, & k=2\\
2, & k \ge 3
\end{cases} $$
In our case, $$ x^2 \equiv 121 \space(\bmod 1800)$$
$$n = 1800 =2^3\cdot 5^2 \cdot 3^2 \Longrightarrow k=3,r=2$$
And the number of solutions is $2^2 \cdot 2^2 = 16$.
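The count $16$ is small enough to confirm by exhaustive search (a one-off Python check I added):

```python
# count the roots of x^2 = 121 in Z/1800Z by brute force
count = sum(1 for x in range(1800) if (x * x - 121) % 1800 == 0)
assert count == 16
```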
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1766505",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Show that $3P_{\lceil n \rceil}-2=\sum_{k=1}^{A}\left(4-\left\lceil \frac{\pi(k)}{n}\right\rceil^2\right) $ We proposed a formula for calculating nth prime number using the prime counting function.
Where $\lfloor x\rfloor$ is the floor function and $\lceil x\rceil$ is a ceiling function.
$\pi(k)$ is prime counting function and $ P_n $ the nth prime number.
Let $A=\lfloor 2n\ln(n+1)\rfloor$
Where $n \ge 0.9 $ (n MUST be a decimal number). [why n must be a decimal number I don't know, we will leave that to the authors to explain to us the reason for it]
Formula for calculating the nth prime,
$$ 3P_{\lceil n \rceil}-2=\sum_{k=1}^{A}\left(4-\left\lceil \frac{\pi(k)}{n}\right\rceil^2\right) $$
|
Dusart showed that $\pi(n) \ge n (\log n + \log \log n - 1)$ for $n\ge 2$. From this it's not too hard to calculate that for any $n\ge 2$,
$$\pi(2n) \ge 2n (\log 2n + \log \log 2n - 1) \ge 2n( \log n + \log \log 2n - 0.307) > A.$$
In particular (ignoring very small values of $n$), for all values of $k$ in the sum, $\pi(k) < 2n$, which means the summand is never negative. It is also precisely $0$ whenever $n < \pi(k) < 2n$. So the only contribution is from $\pi(k) \le n$.
Since $n$ is fractional, $\pi(k)$ is never exactly $n$. The first value of $k$ for which $\pi(k) > n$ is $P_{\lceil n \rceil}$, so the last contributing term is $k=P_{\lceil n \rceil}-1$. Meanwhile $\pi(k)$ is always positive except for the first term $\pi(1)=0$. So the RHS simplifies to
$$4 + \underbrace{3 + 3 + \cdots + 3}_{P_{\lceil n \rceil}-2} + 0,$$
which is a very boring sum that simplifies to $3P_{\lceil n \rceil} - 2$ for elementary reasons that have nothing to do with number theory. Consequently, this "formula" for the $n$th prime provides no insight whatsoever into the distribution of primes because it is essentially the trivial identity that $3p = \sum_1^p 3$.
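To see the collapse concretely, a brute-force Python check (sieve bound and variable names are mine) reproduces the identity for a sample fractional $n$:

```python
import math

def prime_sieve(n):
    sieve = [False, False] + [True] * (n - 1)
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sieve

n = 5.5                                   # a fractional n, as the formula demands
A = math.floor(2 * n * math.log(n + 1))   # A = 20 here
sieve = prime_sieve(max(A, 100))
pi = [0] * (A + 1)                        # prime counting function up to A
for k in range(1, A + 1):
    pi[k] = pi[k - 1] + sieve[k]
total = sum(4 - math.ceil(pi[k] / n) ** 2 for k in range(1, A + 1))
primes = [p for p, is_p in enumerate(sieve) if is_p]
P = primes[math.ceil(n) - 1]              # the ceil(n)-th prime, here 13
assert total == 3 * P - 2
```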
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1766606",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Sign of composition of transpositions Let $\sigma \in S_n$.
Definition: Suppose that $\text{sign}\sigma=(-1)^N$, where $N$ - number of inversions in permutation $\sigma$.
Suppose that $\tau_1$ and $\tau_2$ transpositions. How to prove that $\text {sign}(\tau_1\circ \tau_2)=\text {sign}\tau_1\cdot \text {sign}\tau_2?$
I tried to prove it but any results.
Can anyone show the rigorous proof please.
|
During a long drive this evening I realized my other answer, involving determinants, was just a long trip around Robin Hood's Barn. Here's the short proof.
Suppose that
$\tau_1$ is a composition of $k$ transpositions $X_1, \ldots, X_k$,
$$
\tau_1 = X_k X_{k-1} \cdots X_2 X_1,
$$
and that
$\tau_2$ is a composition of $p$ transpositions $Y_1, \ldots, Y_p$. Then $\text{sign}(\tau_1) = (-1)^k$, and similarly $\text{sign}(\tau_2) = (-1)^p$.
To compute the sign of $\tau_2 \circ \tau_1$, observe that $\tau_2 \circ \tau_1$ can be written as
$$
\tau_2 \circ \tau_1 = Y_p Y_{p-1} \cdots Y_2 Y_1 X_k X_{k-1} \cdots X_2 X_1,
$$
so that its sign is $(-1)^{p+k}$. And since
$$
(-1)^{p+k} = (-1)^p (-1)^k
$$
we get the desired conclusion.
There's an important question here: why can a single permutation not be written both as a product of $k$ transpositions and as a product of $b$ transpositions, where exactly one of $b$ and $k$ is odd? I.e., why is the sign well-defined in the first place? I'm assuming that question has already been covered in your text...
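As a sanity check, multiplicativity of the sign (computed via the inversion count, as in the question's definition) can be verified on random permutations; a Python sketch of mine:

```python
import random

def sign(perm):
    # sign via the inversion count, matching the definition in the question
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return (-1) ** inv

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return [p[q[i]] for i in range(len(p))]

random.seed(3)
n = 6
for _ in range(300):
    p = list(range(n)); random.shuffle(p)
    q = list(range(n)); random.shuffle(q)
    assert sign(compose(p, q)) == sign(p) * sign(q)
```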
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1766735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How to calculate the radius of a circle inscribed in a regular hexagon? If I know how long one side of a regular hexagon is, what's the formula to calculate the radius of a circle inscribed inside it?
Illustration:
|
Draw the six isosceles triangles.
Divide each of these triangles into two right angled triangles.
Then you have
$s = 2x = 2\,(r \tan \theta)$
where $r$ is the radius of the inscribed circle (the apothem, which is the leg adjacent to the top angle in the right angled triangles), $\theta$ is the top angle, and $x$ is the short side in these right angled triangles; $s$ is of course the outer side in the isosceles triangles, i.e. the side length you say you know. There are in total $12$ of these triangles, so $\theta = 2\pi/12 = \pi/6$.
Hence the formula for the radius is
$$r = \frac{s}{2 \tan \theta} = \frac{s}{2\tan(\pi/6)} = \frac{\sqrt{3}}{2}\,s$$
(Note that $s = 2r\sin\theta$ would instead make $r$ the hypotenuse, i.e. the circumradius of the hexagon.)
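A quick coordinate check (my own Python sketch): the inscribed circle's radius is the distance from the center to the midpoint of a side, and for side length $s$ this works out to $\frac{\sqrt3}{2}s = s/(2\tan\frac\pi6)$.

```python
import math

s = 1.0                                    # hexagon side length
# regular hexagon: circumradius equals the side, vertices at 60-degree steps
verts = [(s * math.cos(k * math.pi / 3), s * math.sin(k * math.pi / 3))
         for k in range(6)]
mid = ((verts[0][0] + verts[1][0]) / 2, (verts[0][1] + verts[1][1]) / 2)
r = math.hypot(*mid)                       # inradius = distance to a side's midpoint
assert abs(r - s * math.sqrt(3) / 2) < 1e-12
assert abs(r - s / (2 * math.tan(math.pi / 6))) < 1e-12
```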
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1766870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
}
|
Which are the connected components of $K=\{1/n\mid n\in\mathbb{N}\}\cup\{0\}$? Which are the connected components of the topological subspace $K=\left\{\left.\dfrac{1}{n}\ \right|\ n\in\mathbb{N}\right\}\cup\{0\}$ of $\mathbb{R}$?
I think they are every single point, but I can't find open set of $\mathbb{R}$ separating the point $0$ from the others, thanks.
|
You cannot find such an open set. What you need to check is that all points of the form $\frac{1}{n}$ are isolated points.
Any subset $C$ with more than one point and at least one isolated point $p$ is disconnected (use $\{p\}$ and its complement in $C$, both of which are open and closed in $C$). Since any subset of $K$ with more than one point contains some $1/n$, which is isolated, the connected components of $K$ are exactly the singletons, including $\{0\}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1767072",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Prove that the sum of the squares of two odd integers cannot be the square of an integer. Prove that the sum of the squares of two odd integers cannot be the square of an integer.
My method:
Assume to the contrary that the sum of the squares of two odd integers can be the square of an integer. Suppose that $x, y, z \in \mathbb{Z}$ such that $x^2 + y^2 = z^2$, and $x$ and $y$ are odd. Let $x = 2m + 1$ and $y = 2n + 1$. Hence, $x^2 + y^2$ = $(2m + 1)^2 + (2n + 1)^2$
$$= 4m^2 + 4m + 1 + 4n^2 + 4n + 1$$
$$= 4(m^2 + n^2) + 4(m + n) + 2$$
$$= 2[2(m^2 + n^2) + 2(m + n) + 1]$$
Since $2(m^2 + n^2) + 2(m + n) + 1$ is odd it shows that the sum of the squares of two odd integers cannot be the square of an integer.
This is what I have so far but I think it needs some work.
|
Let $a=2n+1$, $b=2m+1$. Then $a^2 + b^2=4n^2 + 4n +4m^2 +4m+2$. This is divisible by $2$, a prime number, but not by $4=2^2$. Hence it cannot be the square of an integer.
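A brute-force confirmation for small odd values (a Python sketch of mine, using `isqrt` to test squareness):

```python
from math import isqrt

for x in range(1, 200, 2):
    for y in range(1, 200, 2):
        s = x * x + y * y
        assert isqrt(s) ** 2 != s          # never a perfect square
```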
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1767200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
}
|
Find all prime p such that Legendre symbol of $\left(\frac{10}{p}\right)$ =1 In the given question I have been able to break down $\left(\frac{10}{p}\right)$=
$\left(\frac{5}{p}\right)$ $\left(\frac{2}{p}\right)$. But what needs to be done further to obtain the answer.
|
Hint:
By the second supplementary law of quadratic reciprocity,$\biggl(\dfrac 2p\biggr)=1$ if and only if $p\equiv \pm 1\mod 8$.
On the other hand, $\biggl(\dfrac 5p\biggr)=\biggl(\dfrac p5\biggr)=1\;$ if and only if $p\equiv \pm 1\mod 5$.
So you have to solve the systems of congruences:
\begin{align*}
\begin{cases}
p\equiv \pm 1\mod8\\p\equiv\pm 1\mod 5
\end{cases}&&
\begin{cases}
p\equiv \pm3\mod8\\p\equiv\pm 2\mod 5
\end{cases}
\end{align*}
This is done with the Chinese remainder theorem. I'll show one case, say $p\equiv 3\mod 8,\;p\equiv 2\mod 5$: start from the Bézout's relation $\:2\cdot 8-3\cdot 5=1$. Since $-3\cdot 5=-15\equiv 1\mod 8$ and $2\cdot 8=16\equiv 1\mod 5$, the solutions are then
$$p\equiv 3\cdot (-3\cdot 5)+2\cdot(2\cdot 8)\equiv -45+32\equiv 27\mod 40.$$
This congruence class does contain primes, for example $p=67$ and $p=107$.
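Euler's criterion gives a quick computational check of the two kinds of cases (a Python sketch I added; the sample primes are my own):

```python
def legendre(a, p):
    # Euler's criterion for an odd prime p not dividing a
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

# p = 41 satisfies p = 1 (mod 8) and p = 1 (mod 5): both factors are +1
assert legendre(2, 41) == 1 and legendre(5, 41) == 1 and legendre(10, 41) == 1
assert 16 * 16 % 41 == 10                  # an explicit square root of 10 mod 41
# p = 43 satisfies p = 3 (mod 8) and p = -2 (mod 5): both factors are -1
assert legendre(2, 43) == -1 and legendre(5, 43) == -1 and legendre(10, 43) == 1
assert 15 * 15 % 43 == 10                  # an explicit square root of 10 mod 43
```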
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1767306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Help solving the inequality $2^n \leq (n+1)!$, n is integer I need help solving the following inequality I encountered in the middle of a proof in my calculus I textbook:
$2^n \leq (n+1)!$
Where $\mathbf{n}$ in an integer.
I've tried applying lg to both members, but got stuck at:
$n \leq \lg(n+1) + \lg(n) + \lg(n-1) + ... + \lg(3) + 1$
A proof by induction is acceptable, but I wanted an algebraic one. I find it more... elegant?
|
$(n+1)!=2\cdot 3\cdot \dots\cdot(n+1)$ is a product of $n$ factors, each of which is at least $2$, so $(n+1)!\ge 2^n$ and the result follows...
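A one-line numerical confirmation for small $n$ (my own Python sketch):

```python
from math import factorial

for n in range(0, 25):
    assert 2 ** n <= factorial(n + 1)      # product of n factors, each >= 2
```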
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1767427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Problem based on area of triangle In the figure, E,C and F are the mid points of AB, BD and ED respectively. Prove that: $8\triangle CEF=\triangle ABD$
From the given,
$ED$ is the median of $\triangle ABD$
So, $\triangle AED=\triangle BED$
Also, by mid point Theorem
$EC||AD$ and $CF||AB$.
Now, what should I do next?
|
Well $\triangle$AED=$\frac{1}{2}\triangle ABD$.
So the problem reduces to showing that
$$\triangle CEF = \frac{1}{4} \triangle BED$$
Since F is the midpoint of ED and C is the midpoint of BD, $\triangle DFC$ is similar to $\triangle DEB$ with ratio $1/2$, so:
$$\triangle CFD = \frac{1}{4} \triangle BED$$
Likewise, since C is the midpoint of BD, $\triangle BEC$ has half the base BD of $\triangle BED$ and the same height from E, so:
$$\triangle BEC = \frac{1}{4} \triangle ABD = \frac{1}{2} \triangle BED$$
Subtracting, $\triangle CEF = \triangle BED - \triangle CFD - \triangle BEC = \left(1-\frac{1}{4}-\frac{1}{2}\right)\triangle BED = \frac{1}{4} \triangle BED$, and hence we are done.
Note that the factors of 1/4 come from the scaling of both the base and the height by 1/2 due to similarity, hence the area is scaled by $(1/2)^2 = 1/4$ due to similarity.
The scaling factor of 1/2 for the bases and heights follows from the fact that the relevant line segments are medians and the midpoint theorem and the subsequent similarity.
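The statement $8\,\triangle CEF=\triangle ABD$ itself is easy to confirm with random coordinates (a Python sketch of mine, using the cross-product area formula):

```python
import random

def area(P, Q, R):
    # absolute area of triangle PQR via the cross product
    return abs((Q[0] - P[0]) * (R[1] - P[1]) - (Q[1] - P[1]) * (R[0] - P[0])) / 2

def mid(P, Q):
    return ((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)

random.seed(5)
for _ in range(200):
    A, B, D = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(3)]
    E, C = mid(A, B), mid(B, D)            # midpoints of AB and BD
    F = mid(E, D)                          # midpoint of ED
    assert abs(8 * area(C, E, F) - area(A, B, D)) < 1e-9
```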
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1767501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
}
|
Multiplicative inverse of $x+f(x)$ in $\Bbb Q[x]/(f(x))$ So I have $f(x) = x^3-2$ and I have to find the multiplicative inverse of $x + f(x)$ in $\mathbb{Q}[x]/(f(x))$. I'm slightly confused as to how to represent $x + (f(x))$ in $\mathbb{Q}[x]/(f(x))$. Would I be just finding the inverse of $x+1$? How would I do that? Thanks a lot!
|
First note that $x+(f(x))$ denotes the coset of $x$ in $\mathbb{Q}[x]/(f(x))$, not the polynomial $x+1$. Its inverse is $\frac{x^2}{2}+\langle f\rangle$, since $x\cdot\frac{x^2}{2}=\frac{x^3}{2}\equiv 1 \pmod{x^3-2}$. If you do want the inverse of the coset of $x+1$: since $1=(x+1)(\frac{x^2-x+1}{3})-\frac{x^3-2}{3}$, then $\frac{x^2-x+1}{3}$ is the multiplicative inverse of $x+1+\langle f\rangle$ in $\frac{\mathbb{Q}[x]}{\langle f\rangle}.$
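Such identities can be sanity-checked numerically by evaluating at the real cube root of $2$ (a Python sketch I added; evaluation at a root of $f$ respects the quotient):

```python
c = 2 ** (1 / 3)                           # real root of x^3 - 2

# inverse of the coset of x:      x * (x^2 / 2)          = x^3 / 2     = 1
assert abs(c * (c ** 2 / 2) - 1) < 1e-12
# inverse of the coset of x + 1:  (x + 1)(x^2 - x + 1)/3 = (x^3 + 1)/3 = 1
assert abs((c + 1) * ((c ** 2 - c + 1) / 3) - 1) < 1e-12
```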
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1767591",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
A different notion of convergence for this sequence? I was thinking about sequences, and my mind came to one defined like this:
-1, 1, -1, 1, 1, -1, 1, 1, 1, -1, 1, 1, 1, 1, ...
Where the first term is -1, and after the nth occurrence of -1 in the sequence, the next n terms of the sequence are 1, followed by -1, and so on. Which led me to perhaps a stronger example,
-1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, 1, 1, 1, 1, 1, 1, 1, 1, -1, ...
Where the first term is -1, and after the nth occurrence of -1 in the sequence, the next $2^n$ terms of the sequence are 1, followed by -1, and so on.
By the definition of convergence or by Cauchy's criterion, the sequence does not converge, as any N one may choose to define will have an occurrence of -1 after it, which must occur within the next N terms (and certainly this bound could be decreased)
However, due to the decreasing frequency of -1 in the sequence, I would be tempted to say that there is some intuitive way in which this sequence converges to 1. Is there a different notion of convergence that captures the way in which this sequence behaves?
|
As André Nicolas wrote, the Cesaro mean, for which the $n$-th term is the average of the first $n$ terms, will do what you want. In both your cases, if $(a_n)$ is your sequence and
$b_n = \frac1{n}\sum_{k=1}^n a_k$,
then $b_n \to 1$, since the number of $-1$'s gets arbitrarily small compared to the number of $1$'s.
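For the first sequence in the question, the Cesaro means can be computed directly (a Python sketch of my own; the cutoff is arbitrary):

```python
N = 20000
seq, block = [], 1
while len(seq) < N:
    seq.append(-1)
    seq.extend([1] * block)                # n ones after the n-th occurrence of -1
    block += 1

cesaro = sum(seq[:N]) / N
assert cesaro > 0.9                        # the Cesaro mean tends to 1
```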
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1767682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Determine the derivative $\frac{dy}{dx}$ of the integral
Determine the derivative of the integral $$ \,\int_{\sqrt x}^{0}\sin (t^2)dt $$
What does this question mean.
I do not understand it and I think you can't integrate $\sin t^2\,$.
|
You can take the derivative of an integral, even though you can't directly integrate it.
let
$I = \int \sin t^2 dt$
The value that you care about is $$R = I(0) - I(\sqrt x)$$
since we are asked to evaluate the integral from $\sqrt x$ to $0$.
Now, differentiating $R$ with respect to $x$, we get the expression
$$\frac{dR}{dx} = \frac{d I(0)}{dx} - \frac{dI(\sqrt x)}{dx}$$
Since $I(0)$ is a constant, its derivative is simply $0$.
Using the chain rule on the second term,
$$\frac{dR}{dx} = 0 - \frac{dI(\sqrt x)}{dx} = - I'(\sqrt x) \cdot \frac{1}{2 \sqrt x}$$
Now, we need to find $I'$. Since $$I = \int \sin t^2 \, dt, \qquad I'(t) = \sin t^2$$
Hence, $$I'(\sqrt x) = \sin ({\sqrt x}^2) = \sin x$$
Therefore,
$$\frac{dR}{dx} = - \sin x \cdot \frac{1}{2 \sqrt x} = \frac{- \sin x}{2 \sqrt x}$$
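The result can be checked numerically by differencing a quadrature of the integral (a Python sketch I added; step sizes are arbitrary):

```python
import math

def F(x, n=20000):
    # midpoint-rule value of the integral from sqrt(x) down to 0 of sin(t^2)
    u = math.sqrt(x)
    h = u / n
    return -sum(math.sin(((k + 0.5) * h) ** 2) for k in range(n)) * h

x, h = 1.3, 1e-3
numeric = (F(x + h) - F(x - h)) / (2 * h)  # central difference
claimed = -math.sin(x) / (2 * math.sqrt(x))
assert abs(numeric - claimed) < 1e-4
```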
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1767804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Legendre polynomial with argument zero $P_n ( 0 )$ I want to find the expression for $P_n(x)$ with $x = 0$, ie $P_n(0)$ for any $n$. The first few non-zero legendre polynomials with $x=0$ are
$P_0(0) = 1$, $P_2(0) = -\frac{1}{2}$, $P_4(0) = \frac{3}{8}$, $P_6(0) = -\frac{5}{16}$, $P_8(0) = \frac{35}{128}$ but I can't find a relationship between them to write as an equation for arbitrary $n$. Any help is appreciated.
|
One way to define Legendre polynomials is through its generating function:
$$\frac{1}{\sqrt{1-2xt+t^2}} = \sum_{n=0}^\infty P_n(x)t^n$$
Together with following sort of well known expansion:
$\displaystyle\;\frac{1}{\sqrt{1-4z}} = \sum_{k=0}^\infty \binom{2k}{k} z^k$,
we have
$$\sum_{n=0}^\infty P_n(0) t^n = \frac{1}{\sqrt{1+t^2}} = \sum_{k=0}^\infty \binom{2k}{k}\left(-\frac{t^2}{4}\right)^k
\quad\implies\quad
P_n(0) = \begin{cases}
\frac{(-1)^k}{4^k} \binom{2k}{k}, & n = 2k\\
0, & \text{ otherwise }
\end{cases}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1767969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Does every invertible matrix A has a matrix B such that A=Adj(B)? I'm trying to understand if it's always true, always true over $\mathbb C$ or never true.
I know that if $A$ is invertible, than there exists $A^{-1}$.
$$A=\frac{1}{det (A^{-1})}Adj(A^{-1})$$
So I have an adjoint matrix multiplied by a scalar, but how do I know if the result is an adjoint by itself?
|
There is no solution over $\mathbb R$ if $n \ge 3$ is odd and $\det(A) < 0$.
$\det(\text{adj}(B))= \det(\det(B) B^{-1}) = \det(B)^{n-1}$, which can't be negative in this case, since $n-1$ is even.
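For $n=3$ the identity $\det(\operatorname{adj}B)=\det(B)^2$ is easy to verify with explicit cofactors (a self-contained Python sketch I added):

```python
import random

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def adj3(M):
    # adjugate: transpose of the cofactor matrix
    cof = [[(-1) ** (r + c) * det2([[M[i][j] for j in range(3) if j != c]
                                    for i in range(3) if i != r])
            for c in range(3)] for r in range(3)]
    return [[cof[c][r] for c in range(3)] for r in range(3)]

random.seed(4)
B = [[random.uniform(-2, 2) for _ in range(3)] for _ in range(3)]
assert abs(det3(adj3(B)) - det3(B) ** 2) < 1e-9
assert det3(adj3(B)) >= 0                  # det(B)^2: never negative for n = 3
```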
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1768076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How to show $\frac{19}{7}<e$ How can I show $\dfrac{19}{7}<e$ without using a calculator and without knowing any digits of $e$?
Using a calculator, it is easy to see that $\frac{19}{7}=2.7142857...$ and $e=2.71828...$
However, how could this be shown in a testing environment where one does not have access to a calculator?
My only thought is to use the Taylor series for $e^x$ with $x=1$ to calculate $\displaystyle e\approx\sum \limits_{n=0}^{7}\frac{1}{n!}=\frac{685}{252}=2.7182...$
However, this method seems very time consuming and tedious, finding common denominators and performing long division. Does there exist a quicker, more elegant way?
|
$$ \int_{0}^{1} x^2 (1-x)^2 e^{-x}\,dx = 14-\frac{38}{e},$$
but the LHS is the integral of a positive function on $(0,1)$.
Another chance is given by exploiting the great regularity of the continued fraction of $\coth(1)$:
$$\coth(1)=[1;3,5,7,9,11,13,\ldots] =\frac{e^2+1}{e^2-1}$$
gives the stronger inequality $e>\sqrt{\frac{133}{18}}$.
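The integral identity (and hence the bound) is easy to confirm numerically; a Python sketch of my own using the midpoint rule:

```python
import math

def g(x):
    return x ** 2 * (1 - x) ** 2 * math.exp(-x)

n = 200000
h = 1.0 / n
I = sum(g((k + 0.5) * h) for k in range(n)) * h   # midpoint rule on [0, 1]
assert abs(I - (14 - 38 / math.e)) < 1e-8
assert I > 0 and 19 / 7 < math.e                  # positivity gives 38/e < 14
```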
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1768195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 5,
"answer_id": 1
}
|
Evaluating integral with $e^{\sin x}$ I had this integral $ \int e^{\sin(x)} {\sin(2x)} dx$
I tried to split it up using integration by parts but I can't evaluate integral of $e^{\sin x}$
|
By parts works perfectly,
$$\int2\sin(x)\left(\cos(x)e^{\sin(x)}\right)dx=2\sin(x)e^{\sin(x)}-2\int\cos(x)e^{\sin(x)}dx\\
=2\sin(x)e^{\sin(x)}-2e^{\sin(x)}+C$$
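One can verify the antiderivative by numerical differentiation at a few sample points (a Python sketch, not part of the original answer):

```python
import math

def F(x):
    # proposed antiderivative
    return 2 * math.sin(x) * math.exp(math.sin(x)) - 2 * math.exp(math.sin(x))

def f(x):
    # integrand
    return math.exp(math.sin(x)) * math.sin(2 * x)

h = 1e-5
for x in (0.3, 1.1, 2.5, -0.7):
    assert abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) < 1e-6
```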
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1768333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
If $A,B,C$ are $3\times 3$ matrices such that $A,(A-B)$ are invertible and if $(A-B)C=BA^{-1}$, show that $C(A-B)=A^{-1}B$.
If $A,B,C$ are $3\times 3$ matrices such that $A,(A-B)$ are invertible and if $(A-B)C=BA^{-1}$, show that $C(A-B)=A^{-1}B$.
Usually, $AB$ may not be equal to $BA$.
I tried starting from the answer.
$$C(A-B)-A^{-1}B+I=I$$
$$C(A-B)-A^{-1}(B-A)=I$$
$$(C+A^{-1})(A-B)=I$$
From the given equation,
$$(A-B)C-BA^{-1}+I=I$$
$$AC-BC-BA^{-1}+I=I$$
$$A(C+A^{-1})-B(C+A^{-1})=I$$
$$(A-B)(C+A^{-1})=I$$
I can only prove the equation given if I can show that $(C+A^{-1})(A-B)=(A-B)(C+A^{-1})$
How can I do that?
|
For square matrices, $AB=I$ already implies $BA=I$ (in finite dimensions a right inverse is automatically a two-sided inverse), so a matrix always commutes with its inverse.
Here $A-B$ is invertible by assumption, so $(A-B)(C+A^{-1})=I$ gives $C+A^{-1}=(A-B)^{-1}$, and therefore also $(C+A^{-1})(A-B)=I$. This is exactly what you need to prove your claim.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1768412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Using Euclid's algorithm to find Multiplicative Inverse 71 mod 53 I begin by writing out the recursion until a mod b == 0
53 -> 71 -> 53 -> 18 -> 17 -> 1 -> 0
to get in the form $sa+tn$
starting with $1 = 18-17$ I then substitute $17 = 53-(18\cdot2)$
this gives me $18\cdot3-53$
I then substitute $18 = (71-53)$ which gives me
$$71\cdot3 - 53\cdot4$$
this has me stuck because I know I need to substitute $53$ in a form of $x\cdot53-y\cdot71$ but I am unsure how to find this form without a calculator
|
You have done almost all the work yourself. You just need to interpret what you already have.
Your arrangement in the second last line gives you $71\cdot3-53\cdot4=1$ which on rearrangement is $71\cdot3=53\cdot4 + 1$ which exactly implies by modular property that $3\cdot71=1 \pmod{53}$ i.e. in modulo group $\mathbb{Z}_{53}$, $\overline{3} \cdot \overline{71} = \overline{1}$. So your inverse is $\overline{3}$.
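For completeness, here is a short stdlib sketch (my addition, not part of the answer) of the extended Euclidean algorithm; it reproduces the coefficients in $71\cdot3-53\cdot4=1$ and cross-checks the inverse with Python's built-in modular `pow` (available in Python 3.8+):

```python
def egcd(a, b):
    """Extended Euclid: returns (g, s, t) with s*a + t*b == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, s, t = egcd(b, a % b)
    return (g, t, s - (a // b) * t)

g, s, t = egcd(71, 53)
assert (g, s, t) == (1, 3, -4)     # 71*3 - 53*4 = 1, as derived above
assert (s * 71) % 53 == 1          # so 3 is the inverse of 71 mod 53
assert pow(71, -1, 53) == 3        # built-in cross-check
```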
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1768773",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Show that $\lim_{x \to 0}\sqrt{1-x^2} = 1$ with the help of the definition of a limit The original problem is to calculate $\lim_{x \to 0}\dfrac{1-\sqrt{1-x^2}}{x}$
I simplified the expression to $\lim_{x\to 0}\dfrac{x}{1 + \sqrt{1-x^2}}$
The only definitions and theorems I can use are the definition of a limit and the theorems which states that for two functions $f$ and $g$ that approaches $L_1$ and $L_2$, respectively, near $a$ it is true that
(1.) $\lim_{x\to a} (f + g) = L_1 + L_2$
(2.) $\lim_{x\to a} fg = L_1L_2$
(3.) $\lim_{x\to a} \dfrac{1}{f} = \dfrac{1}{L_1}, \quad $ if $L_1 \neq 0$
In order to use (2.) for the simplified expression I first need to establish that I can use (3.) by showing that $\lim_{x\to 0} 1 + \sqrt{1-x^2} \neq 0, \quad$ so I need to find $\lim_{x\to 0} \sqrt{1-x^2}$ with the help of the definition, since none of the theorems says anything about the composition of functions. I know intuitively that the limit is $1$, so I tried to work out a suitable epsilon-delta proof, but I am stuck, because I feel like requiring $|x| < \delta$ will only make $|1 + \sqrt{1-x^2} - 1| = \sqrt{1-x^2}$ bigger than some $\epsilon$, not smaller.
|
Let $f(x)$ be our function. We want to show that for any given $\epsilon\gt 0$, there is a $\delta$ such that if $0\lt |x-0|\lt\delta$, then $|f(x)-0|\lt \epsilon$.
Note that $1+\sqrt{1-x^2}\ge 1$, at least when $|x|\le 1$. (When $|x|\gt 1$, it is not defined.) It follows that for such $x$ we have
$$\left|\frac{x}{1+\sqrt{1-x^2}}\right|\le |x|.$$
Let $\delta=\min(1,\epsilon)$. If $0\lt |x-0|\lt \delta$, then $|f(x)-0|\lt \epsilon$.
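The inequality $\left|\frac{x}{1+\sqrt{1-x^2}}\right|\le|x|$ that drives the proof is easy to spot-check numerically (a throwaway sketch of my own):

```python
import math

# On (-1, 1) the denominator 1 + sqrt(1 - x^2) is at least 1,
# so |x / (1 + sqrt(1 - x^2))| <= |x|.
for x in [0.99, 0.5, 0.1, -0.3, -0.9]:
    f = x / (1 + math.sqrt(1 - x * x))
    assert abs(f) <= abs(x)
```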
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1768873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How to prove that $\lim_{x \to 0^{+}} \frac{e^{-1/x}}{x} = 0$ Today someone asked me how to calculate $\lim_{x \to 0^{+}} \frac{e^{-1/x}}{x}$. At first sight that limit is $0$, because the exponential decreases faster than the linear term in the denominator. However, I didn't know how to prove it formally.
I thought of expressing the exponential with its series expansion, but the person who asked me this didn't know about series expansions yet. Neither does he know about L'Hopital.
So, is there a "simple" way to prove that $\lim_{x \to 0^{+}} \frac{e^{-1/x}}{x}=0$?
|
Putting $t = 1/x$ we see that $t \to \infty$ as $x \to 0^{+}$. Also the function is transformed into $t/e^{t}$. Next we put $e^{t} = y$ so that $y \to \infty$ as $t \to \infty$. Thus the function is transformed into $(\log y)/y$. Since $y \to \infty$ we can assume $y > 1$ so that $\sqrt{y} > 1$. We have the inequality $$\log x \leq x - 1$$ for all $x \geq 1$. Replacing $x$ by $\sqrt{y}$ we get $$0 \leq \log\sqrt{y} \leq \sqrt{y} - 1 < \sqrt{y}$$ or $$0 \leq \log y < 2\sqrt{y}$$ or $$0 \leq \frac{\log y}{y} < \frac{2}{\sqrt{y}}$$ for $y > 1$. Applying squeeze theorem when $y \to \infty$ we get $$\lim_{y \to \infty}\frac{\log y}{y} = 0$$ and therefore the desired limit is $0$.
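The squeeze $0 \leq \frac{\log y}{y} < \frac{2}{\sqrt{y}}$ and the behavior of the original expression can both be tabulated numerically; the sketch below is my own illustration of the answer's bound:

```python
import math

# Check the squeeze 0 <= log(y)/y < 2/sqrt(y) for a few large y
for y in [10.0, 1e3, 1e6, 1e9]:
    assert 0 <= math.log(y) / y < 2 / math.sqrt(y)

# The original expression e^{-1/x}/x shrinks rapidly as x -> 0+
vals = [math.exp(-1 / x) / x for x in (0.1, 0.05, 0.01)]
assert vals[0] > vals[1] > vals[2] > 0
```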
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1769000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 5
}
|
Prove that $\mathscr{F}[f] \in L^2(\mathbb{R})$ Let $f \in L^2(\mathbb{R})$ (square integrable functions), I'm trying to prove that his Fourier transform also does: $\mathscr{F}[f] \in L^2(\mathbb{R})$.
I have tried to bound it
\begin{align}
\int_{-\infty}^{+\infty}|\ \hat{f}(\omega)\ |^2 d\omega &= \int_{-\infty}^{+\infty} \left| \int_{-\infty}^{+\infty} f(t)\ e^{-i\omega t}\,dt \right|^2 d\omega \\
&\leq \int_{-\infty}^{+\infty} \left( \int_{-\infty}^{+\infty}|\ f(t)\ e^{-i\omega t}\ |\,dt \right)^2 d\omega \\
&= \int_{-\infty}^{+\infty} \left( \int_{-\infty}^{+\infty}|\ f(t)\ |\,dt \right)^2 d\omega \\
\end{align}
but the last expression is not necessarily finite. Any help will be helpful!
|
Let $f$ be absolutely and absolutely square integrable on $\mathbb{R}$. Then,
$$
\overline{\hat{f}(s)}=\overline{\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(t)e^{-ist}dt}=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\overline{f(t)}e^{ist}dt = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\overline{f(-t')}e^{-ist'}dt'
$$
Let $g(t)=\overline{f(-t)}$. Then $\overline{\hat{f}(s)}=\hat{g}(s)$. By the convolution theorem, $f\star g$ is absolutely integrable
$$
|\hat{f}(s)|^2 = \hat{f}(s)\hat{g}(s)=\frac{1}{\sqrt{2\pi}}\widehat{(f\star g)}(s)
$$
Then you can evaluate the $L^2$ norm of the Fourier transform
\begin{align}
\int_{-\infty}^{\infty}|\hat{f}(s)|^2ds & =\left.\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\widehat{f\star g}(s)e^{isx}ds\right|_{x=0} \\
& = (f\star g)(0) \\
& = \int_{-\infty}^{\infty}f(t)g(0-t)dt \\
& = \int_{-\infty}^{\infty}|f(t)|^2dt.
\end{align}
This is an outline of the shortest argument I know. Everything goes through if $f\star g$ is differentiable at $0$ because then you have the classical Fourier inversion result at $0$, and that's what is being used in the last equation string. At least it gives you an idea of why the result is true for the Fourier transform.
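The norm identity $\|\hat f\|_2=\|f\|_2$ has a discrete analogue (Parseval's theorem for the DFT) that is easy to verify numerically. The naive transform below is my own stdlib illustration, not part of the proof; with the unnormalized DFT convention, $\sum_k|F_k|^2 = N\sum_n|f_n|^2$:

```python
import cmath
import math

def dft(f):
    """Naive, unnormalized discrete Fourier transform (fine for a small check)."""
    N = len(f)
    return [sum(f[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A rapidly decaying sampled f; discrete Parseval mirrors ||f||_2 = ||f^||_2
N = 64
f = [math.exp(-((n - N / 2) ** 2) / 20) for n in range(N)]
F = dft(f)
lhs = sum(abs(x) ** 2 for x in f)
rhs = sum(abs(X) ** 2 for X in F) / N
assert abs(lhs - rhs) < 1e-9
```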
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1769101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is the sum of all the Fibonacci numbers from 1 to infinity. Today I believe I have found the sum of all the Fibonacci numbers from $1$ to infinity, meaning I have found $F$ in the equation
$F = 1+1+2+3+5+8+13+21+\cdots$
I believe the answer is $-3$, however, I don't know if I am correct or not. So I'm asking if someone can show how $F = -3$.
|
None of these answers adhere to analytic continuation, which is clearly what you are looking for. A fairly non-rigorous method for doing this is to use the generating function for the Fibonacci series, namely
$$\frac{1}{1-x-x^2} = 1 + 1x + 2x^2 + 3x^3 + 5x^4 + 8x^5 +\cdots$$
Clearly we get the answer $-1$ if we plug the number $1$ into each side. This does not give $-3$.... I am not sure how you would even get that. Again, this is VERY non-rigorous... see this post by Terence Tao for a better foundation into the topic. While it is true the sum diverges in the classical sense, it is often possible to give finite values to sums which have meanings in some contexts (and are simply absurd in others!). For a basic example, look up Cesaro summation, or even Abel summation.
https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/
EDIT
Please see this link! Apparently a zeta regularization for the Fibonacci series has been done, and matches my answer.
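A quick stdlib check (my own addition) confirms both claims above: the power-series coefficients of $1/(1-x-x^2)$ are the Fibonacci numbers, and the closed form evaluated at $x=1$ gives $-1$, not $-3$:

```python
from fractions import Fraction

# Coefficients of 1/(1 - x - x^2): multiplying out
# (1 - x - x^2)(c0 + c1 x + ...) = 1 gives c0 = 1, c1 = 1, c_n = c_{n-1} + c_{n-2}
c = [1, 1]
for _ in range(10):
    c.append(c[-1] + c[-2])
assert c[:8] == [1, 1, 2, 3, 5, 8, 13, 21]   # the Fibonacci numbers

# The closed form at x = 1 yields the regularized value -1
x = Fraction(1)
assert 1 / (1 - x - x * x) == -1
```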
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1769145",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 4
}
|
Trying to show that $\ln(x) = \lim_{n\to\infty} n(x^{1/n} -1)$ How do I show that $\ln(x) = \lim_{n\to\infty} n (x^{1/n} - 1)$?
I ran into this identity on this stackoverflow question. I haven't been able to find any proof online and my efforts to get from $\ln(x) := \int_1^x \frac{\mathrm dt}t$ to that limit have been a failure.
|
You can even do a bit more using Taylor series $$x^{\frac 1n}=e^{\frac 1 n \log(x)}=1+\frac{\log (x)}{n}+\frac{\log ^2(x)}{2 n^2}+O\left(\frac{1}{n^3}\right)$$ which makes $$n(x^{\frac 1n} -1)=\log (x)+\frac{\log ^2(x)}{2 n}+O\left(\frac{1}{n^2}\right)$$ which shows the limit and also how it is approached.
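Numerically, $n(x^{1/n}-1)$ converges to $\log x$ with an error shrinking like $\frac{\log^2 x}{2n}$, exactly as the expansion predicts; the sketch below (my addition) tabulates this for $x=5$:

```python
import math

x = 5.0
for n in [10, 100, 1000, 10000]:
    approx = n * (x ** (1 / n) - 1)
    err = abs(approx - math.log(x))
    # The error is dominated by the log(x)^2/(2n) term from the expansion;
    # the 1.5 factor allows for the higher-order terms at small n.
    assert err < 1.5 * math.log(x) ** 2 / (2 * n)

# For very large n the approximation is tight
assert abs(10**6 * (x ** 1e-6 - 1) - math.log(x)) < 1e-5
```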
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1769256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 8,
"answer_id": 2
}
|