Find $P(X^2+Y^2<1)$ if $X,Y$ are independent standard normal variables Suppose that $X$ and $Y$ are independent $n(0,1)$ random variables.
(a) Find $P(X^2+Y^2<1)$
My solution:
since they are independent, the joint density is $f_{X,Y}(x,y)=f_X(x)f_Y(y)=\dfrac{1}{2\pi} e^{-(x^2+y^2)/2}$
then $P[X^2+Y^2<1]=\iint f_{X,Y}\,dx\,dy$
I am not sure how to define the boundary of the integrals.
|
If X~N(0,1), then the pdf of $X^2$ is not the square of a normal pdf. Even if you integrate what you have, you'll get the wrong answer. In fact, $X^2$ has a chi-squared distribution with one degree of freedom.
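For what it's worth, since $X^2+Y^2$ is chi-squared with $2$ degrees of freedom (an exponential with rate $1/2$), the closed form is $P(X^2+Y^2<1)=1-e^{-1/2}\approx 0.3935$. A minimal Monte Carlo sketch (the sample size and seed are arbitrary choices) to check this:

```python
import math
import random

def estimate_prob(samples=200_000, seed=0):
    """Monte Carlo estimate of P(X^2 + Y^2 < 1) for independent N(0,1) X, Y."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(samples)
               if rng.gauss(0, 1) ** 2 + rng.gauss(0, 1) ** 2 < 1)
    return hits / samples

# Closed form: X^2 + Y^2 is chi-squared with 2 df, i.e. Exponential(rate 1/2),
# so P(X^2 + Y^2 < 1) = 1 - exp(-1/2).
exact = 1 - math.exp(-0.5)
```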
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1998691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Let $X$ be a variety such that $\forall x,y\in X$ there exists an open affine $U\subseteq X$ containing $x,y$. Then $X$ is separated I am struggling to prove this. Note that I do not know anything about schemes, so please no schemes.
I know that in order to show $X$ is separated I need to show that $\Delta_{X}=\{(x,x)\in X\times X\}$ is closed in $X\times X$. I'm really stuck...
A hint or answer would be greatly appreciated.
|
You want to show that the complement of $\Delta_X$ is open in $X\times X$. Let $x\ne y$, then $(x,y)$ is in the complement of $\Delta_X$, so we want to produce an open neighborhood of $(x,y)$ not intersecting $\Delta_X$. If $x,y\in U$, then it suffices to show that $(U\times U) - \Delta_X = (U\times U) - \Delta_U$ is open, but this is automatic, since affine varieties are always separated.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1998787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to identify the pattern of a sequence
Are there some particular methods for identifying the following types of number series?
*
*$6, 10, 19, 44, 93, \cdots$ (differences are the squares of the primes, starting from $2$)
*$1, -2, 15, 52, -515, \cdots $ ( $\times\,2-4,\ \times(-6)+3,\ \times\,4-8,\ \times(-10)+5$, and so on)
*$4, -2, -7, 25, 95,\cdots$ ( $\times(-1)+2,\ \times\,2-3,\ \times(-3)+4,\ \times\,4-5$, and so on)
I mean, they do not follow an arithmetic or geometric series, nor do their common differences seem to follow any AM/GM pattern. So, are there any general mathematical theorems on these types of number series? Or do we have to proceed by trial and error, using intuition?
|
You ask
is there any generalized mathematical theorems on these types of
number series?
I'll risk an unsatisfactory answer too long for a comment: essentially, "no".
The sequences school kids work on often come from arithmetic or geometric series, which is probably why you suggest trying them first. But there are no general rules for "these types of number series".
When mathematicians look for patterns they usually have some reason to expect a particular form, so their intuition informs the search. Knowing the source of a sequence in advance really matters. If I encountered your first one while thinking about number theory I might guess something involving primes after I noticed that the differences were all squares but that 16 and 36 were missing.
Any finite sequence can be continued in many ways that look as if they extend a pattern - you can always do this with a polynomial by taking enough differences. (This google search for successive differences polynomial finds lots of links.)
When a mathematician thinks she's found a new pattern she then tries to prove it goes on forever - that requires more than checking the next few terms. It's fun.
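The "taking enough differences" remark above can be made concrete. A minimal sketch (the sequences are just the ones from the question and an arbitrary cubic):

```python
def differences(seq):
    """Successive differences of a sequence: seq[i+1] - seq[i]."""
    return [b - a for a, b in zip(seq, seq[1:])]

# First sequence from the question: the differences are squares of primes.
seq = [6, 10, 19, 44, 93]
d = differences(seq)  # [4, 9, 25, 49] = 2^2, 3^2, 5^2, 7^2

# Any finite sequence is matched by some polynomial; for a degree-3
# polynomial the third successive differences are constant (here 3! = 6).
cubic = [n ** 3 for n in range(6)]
d3 = differences(differences(differences(cubic)))
```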
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1998900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Calculate commission percentage by amount In our system, we got a booking with price 1100; the default commission percentage is 15%, hence the raw price is
1100 / 1.15 = 956.52173913
Which means that the current commission amount is
1100 - 956.52173913 = 143.47826087
I need to change the commission percentage of this booking, but I only know the commission amount, which is 100.
How do I calculate the new commission rate, based only on the commission amount?
|
More generally, the following formula can be solved for $x$:
$P\cdot \left(1-\frac{1}{1+x}\right)=C$
with $P=$ gross price, $x=$ commission rate and $C=$ commission.
Multiplying out the brackets:
$P-\frac{P}{1+x}=C$
$P-C=\frac{P}{1+x}$
Interchanging numerators and denominators
$\frac{1}{P-C}=\frac{1+x}{P}$
$\frac{P}{P-C}=1+x$
$\frac{P}{P-C}-1=x$
$\frac{P}{P-C}-\frac{P-C}{P-C}=x$
$\boxed{\frac{C}{P-C}=x}$
With $P=1100$ and $C=100$
$x=\frac{100}{1100-100}=0.1=10\%$
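The boxed formula translates directly into code; a minimal sketch (function names are my own choice):

```python
def commission_rate(gross_price, commission):
    """Rate x such that gross_price * (1 - 1/(1+x)) == commission,
    i.e. x = C / (P - C)."""
    if commission >= gross_price:
        raise ValueError("commission must be smaller than the gross price")
    return commission / (gross_price - commission)

def raw_price(gross_price, rate):
    """Net price before commission: gross / (1 + rate)."""
    return gross_price / (1 + rate)
```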
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1999024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Does $E(X_n)\rightarrow E(X)$ as $n\to\infty$ If there is a sequence of random variables $X_n$ converging almost surely to $X$, then is it true that $E(X_n)\rightarrow E(X)$ as $n\to\infty$? The only thing given is that $E(X_n)\le 23$ for all $n$.
I am not getting how to do it. I can't use DCT here, can I?
|
No, it's not.
Consider: $$X_n(\omega) = \begin{cases}
n & \mbox{ if } \omega\in [0,\frac{1}{n}] \\
0 & \mbox{ otherwise}
\end{cases}$$
Then $X_n \to X$ almost surely, where $X \equiv 0$, and $E[X_n] = 1 \le 23$, so $\lim\limits_{n\to\infty} E[X_n] = 1$ but $E[X] = 0$.
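This counterexample on $([0,1],$ Lebesgue$)$ can be written out explicitly; a minimal sketch using exact rationals to avoid floating-point issues:

```python
from fractions import Fraction

def X_n(n, omega):
    """The counterexample: X_n(omega) = n on [0, 1/n], else 0."""
    return n if omega <= Fraction(1, n) else 0

def expectation(n):
    """E[X_n] under Lebesgue measure on [0,1]: value n on a set of measure 1/n."""
    return n * Fraction(1, n)

# Pointwise convergence: for any fixed omega > 0, X_n(omega) = 0 once n > 1/omega,
# yet E[X_n] = 1 for every n, so the expectations do not converge to E[X] = 0.
```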
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1999156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proving this modulo question Let $a,b$ be integers and let $p$ be prime.
$a^pb^{p^2} \equiv 0\pmod p \Rightarrow a \equiv 0\pmod p$ or $b \equiv 0\pmod p$
Using the elimination method:
$a^pb^{p^2} \equiv 0\pmod p$ and $a \not\equiv 0\pmod p \Rightarrow b \equiv 0\pmod p$
from here
since $a \not\equiv 0\pmod p \Rightarrow p \nmid a \Rightarrow \gcd(a,p) = 1$
by Fermat's little theorem we get
$a^p \equiv a\pmod p$
so,
$ab^{p^2} \equiv 0\pmod p$
again since $ p \nmid a$
so
$p \mid b^{p^2}$
I'm pretty sure $p \mid b^{p^2} \Rightarrow p \mid b$ is wrong
I can't seem to find a way to make it so that $p \mid b$ to get
$b \equiv 0\pmod p$
|
$\Bbb Z/p$ is a field, so $a^pb^{p^2}\equiv 0 \pmod p$ implies that $a^p\equiv 0 \pmod p$ or $b^{p^2}\equiv 0 \pmod p$.
If $a^p\equiv 0 \pmod p$, Fermat's little theorem gives $a^p\equiv a \pmod p$, so $a\equiv 0 \pmod p$; done.
If $b^{p^2}\equiv 0 \pmod p$, you have $b^{p^2}=(b^p)^p\equiv b^p\equiv b \pmod p$ by Fermat's little theorem, so $b\equiv 0 \pmod p$.
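Both the statement and the Fermat step can be checked by brute force over all residues; a minimal sketch for a few small primes:

```python
def check_prime_divides(p):
    """Brute-force: p | a^p * b^(p^2) implies p | a or p | b, for 0 <= a, b < p."""
    for a in range(p):
        for b in range(p):
            lhs = pow(a, p, p) * pow(b, p * p, p) % p
            if lhs == 0 and not (a % p == 0 or b % p == 0):
                return False
    return True
```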
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1999544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Slopes and lines Given two lines having slope $m_1$ and $m_2$, the angle between them is given by $\tan(\theta)=\frac{m_2- m_1}{1+m_1m_2}$
Does the order of $m_1$ and $m_2$ matter here, and if so what is the significance? Graphical aid would be helpful.
|
Note that $m_1=\tan \theta_1$ and $m_2=\tan \theta_2$ are the tangents of the angles $\theta_1$ and $\theta_2$ between each line and the $x$-axis. So:
$$
\frac{m_2-m_1}{1+m_1m_2}=\frac{\tan \theta_2-\tan \theta_1}{1+\tan \theta_1\tan \theta_2}=\tan (\theta_2-\theta_1)
$$
and:
$$
\frac{m_1-m_2}{1+m_2m_1}=\frac{\tan \theta_1-\tan \theta_2}{1+\tan \theta_2\tan \theta_1}=\tan (\theta_1-\theta_2)
$$
so the two formulas give the two possible orientations for the angle $\theta$ between the two lines.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1999702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
preimages under group homomorphism Let $\varphi : G \rightarrow H$ be a group homomorphism with kernel $K$ and let $a,b \in \varphi(G)$. Let $X = \varphi^{-1}(a)$ and $Y = \varphi^{-1}(b)$. Fix $u \in X$. Let $Z=XY$. Prove that for every $w \in Z$ that there exists $v \in Y$ such that $uv=w$. This is Dummit and Foote exercise 3.1.2.
My attempt:
Set $v = u^{-1}w$.
I try to show that $v \in Y$
$\varphi(v) = \varphi(u^{-1})\varphi(w) = a^{-1}\varphi(w)$
If I could somehow show $\varphi(w) = ab$, then $\varphi(v) = b$ so that $v \in Y$ but I think I am going in circles.
|
Since $w\in Z$, you know that $w=xy$, for some $x\in X$ and $y\in Y$.
By definition, $\varphi(x)=a$ and $\varphi(y)=b$.
Also $\varphi(u)=a$, which implies $u^{-1}x\in\ker\varphi$.
Then
$$
w=xy=u(u^{-1}xy)
$$
Can you finish?
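A concrete instance may help; here is a minimal sketch with a hypothetical choice of groups, $\varphi:\Bbb Z/6\to\Bbb Z/3$ given by reduction mod 3 (written additively, so $uv$ becomes $u+v$):

```python
# phi: Z/6 -> Z/3, phi(g) = g mod 3. Kernel K = {0, 3}.
G = range(6)
phi = lambda g: g % 3

a, b = 1, 2
X = [g for g in G if phi(g) == a]                # preimage of a
Y = [g for g in G if phi(g) == b]                # preimage of b
Z = sorted({(x + y) % 6 for x in X for y in Y})  # XY, written additively

u = X[0]                                         # fix u in X
# For every w in Z there should be v in Y with u + v = w:
witnesses = {w: (w - u) % 6 for w in Z}
```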
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1999834",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Solving $\sin z = i$ I know that
$$\sin z = \frac{e^{iz}-e^{-iz}}{2i}$$
so:
$$\frac{e^{iz}-e^{-iz}}{2i} = i\implies e^{iz}-e^{-iz} = -2$$
but I can't take anything useful from here. How do I solve such equations?
What about $\tan z = 1$? Are there any solutions?
|
For $\tan z = 1$: the equation $\sin z = \cos z$ comes down to
$$ e^{iz}-e^{-iz} = i( e^{iz} + e^{-iz} ) $$
$$ (1-i)e^{iz}= (1+i)e^{-iz} $$
$$ e^{2iz}= \frac{ 1+i}{1-i} =i=e^{i( \frac\pi2+2n\pi )}$$
So $$z=\frac\pi4 +n\pi$$
All solutions are real.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1999924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
}
|
Is the Post Office Metric applicable in $\Bbb{R}^n$ for all $n$? I was required to provide a metric space $(X,d)$ with $x,y\in X$ and $0<r<R$ such that $B_R(x)\subsetneq B_r(y)$. After a lot of thinking and reading, I came by a metric function called the "Post Office Metric", always attributed to $\Bbb{R}^2$, in particular when giving examples for a metric space such as the above. I constructed a metric function (defined on $\Bbb{R}$) of the same concept as follows: $d(x,y)=
\begin{cases}
|x|+|y|, & \text{if } x\ne y \\
0, & \text{otherwise}
\end{cases}$,
and as far as I checked, it seems to be a legitimate metric. Then I looked for it and found nothing like it, so I began to feel I was doing something wrong. I would appreciate your thoughts on the matter.
|
The idea is fundamentally similar to Mike F's answer, but perhaps simpler or at least more common in analysis.
Consider $X=[0,1]$ with the induced Euclidean metric. Let $x=0$; then $B_R(x)=[0,R)$. Take, say, $R=1/2$.
Then, taking say $y= 1/3$ and $r=2/5$, we have that $B_r(y)=[0,11/15)\simeq [0,0.73)$, so $B_R(x)\subsetneq B_r(y)$ even though $r<R$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2000058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
If $f$ is differentiable in $B(a)$ and $f(x) \leq f(a)$ for all $x$ in $B(a)$, then $\nabla f(a) = 0$
Assume $f$ is differentiable at each point of an n-ball $B(a)$. Prove that if $f(x) \leq f(a)$ for all $x$ in $B(a)$, then $\nabla {f(a)} = 0.$
I had my proof, but I'm not sure it is correct.
Proof:
Since f is differentiable at each point of the n-ball B(a), meaning
$$\lim_{h \to 0} \frac{f(a+hy)-f(a)}{h} = \nabla f(a) \cdot y$$
, where y is an arbitrary unit vector.
From the mean value theorem, we know that
$$\lim_{h \to 0} \frac{f(a+hy)-f(a-hy)}{h} = \nabla f(c) \cdot y$$
for some c where $||c|| < r$.
Since
$$\lim_{h \to 0} \frac{f(a+hy)-f(a)}{h} = \nabla f(c) \cdot y = - \lim_{h \to 0} \frac{f(a)-f(a-hy)}{h}$$
Since the RHS of the the first equation is 0, we have $\nabla f(a) = 0$.
So, is there any mistake, or any suggestion about a point I can improve, either mathematically or in the way that I wrote it?
|
You never used the fact that $f(x)\leq f(a)$. I may be missing something, but it seems that your proof would imply that all differentiable functions have this property.
My proof is the following: Compute
$$\lim_{h\to 0 } \frac{f(a+hy)-f(a)}{h} = \nabla f(a) \cdot y,$$
and note that $f(a+hy)\leq f(a)$ for all $h$ sufficiently small, so $\nabla f(a)\cdot y\leq 0$. However, taking $y\mapsto -y=:\tilde{y}$ we again have
$$\lim_{h\to 0} \frac{f(a+h\tilde{y})-f(a)}{h} = \nabla f(a)\cdot \tilde{y},$$
implying $\nabla f(a) \cdot \tilde{y} \leq 0$. But, since $\tilde{y}=-y$ we have $\nabla f(a)\cdot y \geq 0$. Combining this with the first inequality we derive $\nabla f(a) \cdot y =0$. Since $y$ was arbitrary it immediately follows that $\nabla f(a)=0.$
EDIT: To make the last argument explicit: Our above work implies that $\nabla f(a) \cdot y =0 $ for any $y$. In particular, take $y=\nabla f(a)$ so that
$$0 = \nabla f(a) \cdot \nabla f(a) = |\nabla f(a)|^2.$$
Then, $|\nabla f(a)|^2 = 0$ only if $\nabla f(a) =0.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2000124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Help needed showing that $f(x,y)=U(x+y)+V(x-y)$ Let $h(u,v)=f(u+v,u-v)$ and $f_{xx}=f_{yy}$ for every $(x,y)\in\mathbb{R}^2$. In addition, $f\in{C^2}.$ Show that $f(x,y)=U(x+y)+V(x-y)$.
I think applying the Taylor theorem could be useful.
$$f(x,y)=f(x+h_1,y+h_2)-\left(\frac{\partial{f(x,y)}}{\partial{x}}h_1+\frac{\partial{f(x,y)}}{\partial{y}}h_2\right)-\frac 1 2\left(\frac{\partial^2{f(x,y)}}{\partial{x}^2}h_1^2+2\,\frac{\partial^2{f(x,y)}}{\partial{x}\partial{y}}h_1h_2+\frac{\partial^2{f(x,y)}}{\partial{y}^2}h_2^2\right)-R(h_1,h_2)$$
|
This is a d'Alembert form solution for the hyperbolic PDE
$$
f_{xx} - f_{yy} = 0
$$
One changes to variables
$$
\xi = x - y \\
\eta = x + y
$$
and uses the chain rule to get
$$
\frac{\partial f}{\partial x} =
\left(
\frac{\partial f}{\partial \xi}
\right)
\frac{\partial \xi}{\partial x}
+
\left(
\frac{\partial f}{\partial \eta}
\right)
\frac{\partial \eta}{\partial x}
=
\frac{\partial f}{\partial \xi}
+
\frac{\partial f}{\partial \eta}
\\
\frac{\partial f}{\partial y} =
\left(
\frac{\partial f}{\partial \xi}
\right)
\frac{\partial \xi}{\partial y}
+
\left(
\frac{\partial f}{\partial \eta}
\right)
\frac{\partial \eta}{\partial y}
=
-\frac{\partial f}{\partial \xi}
+
\frac{\partial f}{\partial \eta}
$$
and
$$
\left(
\frac{\partial}{\partial x}\frac{\partial}{\partial x}
\right) f
=
\left(
\frac{\partial}{\partial \xi}
+
\frac{\partial}{\partial \eta}
\right)
\left(
\frac{\partial}{\partial \xi}
+
\frac{\partial}{\partial \eta}
\right) f
=
\left(
\frac{\partial}{\partial \xi}
\right)^2 f
+
2
\left(
\frac{\partial}{\partial \xi}
\frac{\partial}{\partial \eta}
\right) f
+
\left(
\frac{\partial}{\partial \eta}
\right)^2 f \iff \\
f_{xx} = f_{\xi\xi} + 2 f_{\xi\eta} + f_{\eta\eta}
\\
\left(
\frac{\partial}{\partial y}\frac{\partial}{\partial y}
\right) f
=
\left(
-\frac{\partial}{\partial \xi}
+
\frac{\partial}{\partial \eta}
\right)
\left(
-\frac{\partial}{\partial \xi}
+
\frac{\partial}{\partial \eta}
\right) f
=
\left(
\frac{\partial}{\partial \xi}
\right)^2 f
-
2
\left(
\frac{\partial}{\partial \xi}
\frac{\partial}{\partial \eta}
\right) f
+
\left(
\frac{\partial}{\partial \eta}
\right)^2 f \iff \\
f_{yy} = f_{\xi\xi} - 2 f_{\xi\eta} + f_{\eta\eta}
$$
This gives the transformed PDE:
$$
f_{\xi\eta} = 0
$$
Integration with respect to $\eta$ gives
$$
f_\xi = C(\xi)
$$
where $C = C(\xi)$: the "constant" of integration is constant only with respect to $\eta$, so it may still depend on $\xi$, hence $C(\xi)$ instead of just $C$.
Another integration, now with respect to $\xi$, gives
$$
f = \underbrace{\int C(\xi) d\xi}_{E(\xi)} + D(\eta)
= E(\xi) + D(\eta)
= E(x - y) + D(x + y)
$$
where $E(\xi)$ is an antiderivative of $C(\xi)$ and $D$ is constant with respect to $\xi$, so it may still depend on $\eta$, hence $D(\eta)$.
If we rename the introduced functions, we get
$$
f = V(x-y) + U(x+y)
$$
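The change of variables above can be sanity-checked numerically: for any smooth $U$ and $V$ (arbitrary choices below, assumed only for illustration), $f(x,y)=U(x+y)+V(x-y)$ should satisfy $f_{xx}=f_{yy}$ up to finite-difference error.

```python
import math

# Arbitrary smooth choices of U and V:
U = math.sin                      # U(x + y)
V = lambda s: math.exp(-s * s)    # V(x - y)
f = lambda x, y: U(x + y) + V(x - y)

def fxx_minus_fyy(x, y, h=1e-3):
    """Central finite differences for f_xx - f_yy; should be ~0."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h ** 2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h ** 2
    return fxx - fyy
```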
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2000254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Calculate $\lim\limits_{n \to \infty} \frac1n\cdot\log\left(3^\frac{n}{1} + 3^\frac{n}{2} + \dots + 3^\frac{n}{n}\right)$
Calculate $L = \lim\limits_{n \to \infty} \frac1n\cdot\log\left(3^\frac{n}{1} + 3^\frac{n}{2} + \dots + 3^\frac{n}{n}\right)$
I tried putting $\frac1n$ as a power of the logarithm and taking it out of the limit, so I got
$$ L = \log\lim\limits_{n \to \infty} \left(3^\frac{n}{1} + 3^\frac{n}{2} + \dots + 3^\frac{n}{n}\right)^\frac1n $$
At this point I thought of the fact that $\lim\limits_{n \to \infty} \sqrt[n]{a_1^n+a_2^n+\dots+a_k^n} = \max\{a_1, a_2, \dots,a_k\}$, but this won't be of any use here, I guess. How can I calculate this limit, please?
|
Hint. $$0<3^{\frac n 2}+\cdots+3^{\frac n n}\le(n-1)3^{\frac n 2}=\frac{3^n}{3^{\frac n 2}/(n-1)}$$
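The hint says the terms after $3^{n/1}=3^n$ are negligible, so the limit is $\log 3$. A minimal numerical sketch (the particular values of $n$ are arbitrary):

```python
import math

def scaled_log_sum(n):
    """(1/n) * log(3^(n/1) + 3^(n/2) + ... + 3^(n/n))."""
    return math.log(sum(3 ** (n / k) for k in range(1, n + 1))) / n

# The dominant term is 3^n, so scaled_log_sum(n) -> log(3) as n grows.
```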
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2000378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
can a Car Registration Number, a concatenation of primes, be prime? While waiting in my car, I noticed that the registration number of the car parked in front of mine was 6737, a concatenation of the two prime numbers 67 and 37.
Now I know the following ways to check whether a number is prime or not.
Let $p$ be the number to be checked for primality:
1) If $p$ is divisible by any prime between $2$ and $\sqrt{p}$,
then it is not prime.
2) For all $i$ with $1< i < p$: if $p \bmod i$ equals $0$, then the number is not prime.
Do we have a method to check whether a concatenation of prime numbers (here 67 and 37) constitutes a prime number (6737) or not?
|
It is true for the prime numbers 3, 7, 109 and 673 that if you concatenate any two of these numbers in any order, the resulting number is prime. In this case, concatenating 7 at the end of 673 gives 6737, which is prime; concatenating 7 in front of 673 gives 7673, also prime. So are 1093, 1097, 3109 and 7109.
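A short brute-force check, using trial division as in method (1) of the question, confirms both the 6737 observation and the pairwise claim for $\{3, 7, 109, 673\}$; a minimal sketch:

```python
def is_prime(n):
    """Trial division up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def concat(a, b):
    """Concatenate the decimal digits of a and b."""
    return int(str(a) + str(b))

magic = [3, 7, 109, 673]
all_prime = all(is_prime(concat(a, b))
                for a in magic for b in magic if a != b)
```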
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2000483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
}
|
How is a morphism different from a function How is a morphism (from category theory) different from a function?
Intuitive explanation + maths would be great
|
If $G$ is a group, there is a famous example that constructs a category in which morphisms are not functions: you take it to consist of a single object called "$\bullet$" and state that the morphisms (of $\bullet$ to $\bullet$, because there is no other object available) are the elements of $G$. Check for yourself that all the properties defining a category are verified by this (admittedly strange) example.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2000595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Show that $f$ is unbounded below
Let $f:\Bbb R\to \Bbb R$ be continuous and satisfies $|f(x)|\ge |x| $ for all $x$.Also $f(x+y)=f(x)+f(y)$ for all $x,y$.Show that $f$ is bijective.
My try:
$f$ is injective ;since $f(x)=f(y)\implies f(x-y)=0\implies |x-y|\le 0\implies x=y$.
To show that $f$ is onto.
Every continuous injection is either strictly increasing or strictly decreasing. Hence if I can show that $f$ is both unbounded above and unbounded below, then by the intermediate value property we can conclude that $f$ is surjective.
Now since $|f(x)|\ge |x|$ for all $x$, $f$ is unbounded above. However, I am failing to show that $f$ is unbounded below as well.
Please help me out here.
|
From $f(x+y)=f(x)+f(y)$ and continuity you get
$f(x)=cx$
(Cauchy's functional equation): https://en.wikipedia.org/wiki/Cauchy's_functional_equation
From $|f(x)|\ge |x|$ you get $|c|\ge 1$, in particular $c\ne 0$.
So $f$ is bijective.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2000660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
}
|
For which values of positive integer k is it possible to divide the first 3k positive integers into three groups with the same sum? I'm on a GCSE/A-level syllabus currently, and I can't seem to think of any algebraic equation I could set up to solve this (with the GCSE/early A-level syllabus). The question in full is:
For which values of positive integer k is it possible to divide the first 3k positive integers into three groups with the same sum?
(e.g. if k = 3, then the first 3k integers are 1,2,3,4,5,6,7,8,9. You can split these into 3 groups of 15, for example {{1,2,3,4,5},{7,8},{6,9}}. so it is possible for k=3)
Any help would be appreciated. Thanks
|
The sum of the first $3k$ numbers is $\frac 12(3k)(3k+1)$, so we want three groups that each sum to $\frac 12k(3k+1)$.
Clearly $k=1$ doesn't work, because the desired sum is $2$ and we have a $3$, which is too big. $k=2$ does work, as we have $7=1+6=2+5=3+4$. Intuitively, as we have more numbers we have more freedom, so expect it to work. You have shown a solution for $k=3$; one with equal-sized groups is $\{1,5,9\},\{2,6,7\},\{3,4,8\}$, each summing to $15$.
Now note that any integer greater than $1$ can be expressed as a sum of $2$s and $3$s, so for $k\gt 1$ we can separate the numbers $1$ to $3k$ into consecutive runs of length $6$ and $9$. The examples for $k=2$ and $k=3$ above have the feature that each group is the same size, so we can add a constant to each entry and get a division of the numbers from $m$ to $m+5$ or $m+8$ into three groups of equal sum. Each run can be treated as one of our examples, so the problem can be solved for all $k \gt 1$.
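The claim "possible exactly for $k>1$" can be verified for small $k$ by an exhaustive backtracking search; a minimal sketch:

```python
def splittable(k):
    """Can 1..3k be split into three groups with equal sums? (backtracking)"""
    total = 3 * k * (3 * k + 1) // 2
    if total % 3:
        return False
    target = total // 3

    def place(n, sums):
        # Assign numbers n, n-1, ..., 1 (largest first, pruning overfull groups).
        if n == 0:
            return all(s == target for s in sums)
        for i in range(3):
            if sums[i] + n <= target:
                sums[i] += n
                if place(n - 1, sums):
                    return True
                sums[i] -= n
        return False

    return place(3 * k, [0, 0, 0])
```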
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2000768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Prove that a graph on $n$ vertices with girth $g$ has at most $\frac{g}{g-2}(n-2)$ edges I'm not sure how to go about this. One of my thoughts was to take the number of edges in a complete graph, and subtract all of the edges that would be needed to make a smaller cycle.
If you have $C_5$, there is only one way to make a $5$-cycle on $5$ vertices, but there are $10$ ways to make a $3$-cycle, and...
I have no idea how to do this without counting things multiple times.
|
This isn’t true as stated: the Petersen graph has $10$ vertices, $15$ edges, and girth $5$, and
$$15>\frac{40}3=\frac53(10-2)\;.$$
It is true for planar graphs.
HINT: Use Euler’s formula, $v-e+f=2$, where $v,e$, and $f$ are the numbers of vertices, edges, and faces, respectively, of a planar graph when it is embedded in the plane without any edge intersections.
If that’s not enough, here’s a further hint:
Count each edge once for each face in which it appears. Do this in two different ways, once by edges and once by faces, to get an inequality involving $e,f$, and $g$, and then use Euler’s formula to eliminate $f$. Finally, solve the inequality for $e$.
And the complete solution, bar a bit of algebra at the end.
Each face has at least $g$ edges. If you count each edge once for each face in which it appears, you get $2e$ when you count by edges, since each edge appears in $2$ faces, and you get a minimum of $gf$ when you count by faces. Thus, $2e\ge gf=g(2-v+e)$. Now solve this inequality for $e$.
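The Petersen counterexample above can be checked directly; a minimal sketch that builds the graph and computes its girth (shortest cycle through each edge, via BFS avoiding that edge):

```python
from collections import deque

# Petersen graph: outer 5-cycle, inner pentagram, five spokes.
edges = ([(i, (i + 1) % 5) for i in range(5)]            # outer cycle
         + [(5 + i, 5 + (i + 2) % 5) for i in range(5)]  # inner pentagram
         + [(i, i + 5) for i in range(5)])               # spokes

adj = {v: set() for v in range(10)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def shortest_path(u, v, banned):
    """BFS distance from u to v, not using the edge `banned`."""
    dist = {u: 0}
    q = deque([u])
    while q:
        x = q.popleft()
        if x == v:
            return dist[x]
        for y in adj[x]:
            if {x, y} == set(banned) or y in dist:
                continue
            dist[y] = dist[x] + 1
            q.append(y)
    return float("inf")

# Girth = min over edges (u, v) of 1 + shortest u-v path avoiding that edge.
girth = min(1 + shortest_path(u, v, (u, v)) for u, v in edges)
```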
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2000888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proof of the disjunction property I am trying to prove the disjunction property "if $\,\vdash\phi\lor\psi$ then $\,\vdash\phi$ or $\,\vdash\psi$" for intuitionistic propositional logic.
So far I thought about choosing two non-tautologies $\phi$ and $\psi$ with two Kripke countermodels:
$$(W, R_W, f_W)~~\mbox{such that}~~\exists (w\in W)(w\not\Vdash\phi)$$
$$(V, R_V, f_V)~~\mbox{such that}~~\exists (v\in V)(v\not\Vdash\psi)$$
Then create a new Kripke model $(W\cup V\cup\{u\}, R, f)$ with $R(u,w)$ and $R(u,v)$.
My reasoning is that $w\not\Vdash\phi\implies u\not\Vdash\phi$ and $v\not\Vdash\psi\implies u\not\Vdash\psi$, and therefore $u\not\Vdash\phi\lor\psi$, which provides a countermodel for $\phi\lor\psi$ being a tautology and $\phi$ and $\psi$ not being tautologies.
Is this sufficient to be a proof? For me it's unclear whether adding a world $u$ changes which sentences $w$ forces. So if I add a world $u$, can I be certain that still $w\not\Vdash\phi$?
|
Yes, your proof works; you have only to specify that the two countermodels must have disjoint frames $\langle W_1, R_1 \rangle$ and $\langle W_2, R_2 \rangle$.
The new model will have $W = \{ w_0 \} \cup W_1 \cup W_2$ and the "extended" accessibility relation will be:
$xRy$ iff $x=w_0$ or $xR_1y$ or $xR_2y$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2000978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Rolles Theorem - problem with interval I am trying to show that there is only one solution for the equation $x-\cos x = 1$ in the interval $]0,\frac{\pi}{2}[$.
Using $f(x) = x-\cos x -1 = 0$, I took the derivative $1 + \sin x$.
Now I would expect to find solutions for $1 + \sin x = 0$ within the interval, but the next candidate to the left is $-\frac{\pi}{2}$, which is outside of the interval. This does not prove the existence of a solution within the interval, does it?
What am I doing wrong?
|
Rolle's theorem does not apply here: for Rolle's theorem you need two distinct real numbers $a$ and $b$ with $\ f(a)=f(b)\ $. Here, no such pair within the interval $\ [0,\frac{\pi}{2}]\ $ exists.
The correct way is to use $\ f'(x)=1+\sin(x)>0\ $ on the interval to show that there is at most one solution, and to look at the signs of $f(0)$ and $f(\frac{\pi}{2})$ to see that there is at least one solution.
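The sign-change argument also gives a way to locate the root numerically; a minimal bisection sketch (note $f(0)=-2<0$ and $f(\pi/2)=\pi/2-1>0$):

```python
import math

f = lambda x: x - math.cos(x) - 1

def bisect(lo, hi, tol=1e-12):
    """Bisection for a continuous f with f(lo) < 0 < f(hi)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

root = bisect(0.0, math.pi / 2)
```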
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2001114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Multivariable Calculus Help with Laplacian in Polar coordinates I am trying to see why
$\big(\partial_{xx} + \partial_{yy}\big) u(r, \theta) = u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta}$
I first use the chain rule to say that:
$\frac{\partial u}{\partial x} = u_r r_x + u_{\theta} \theta_x$
And then I calculate:
$r_x = \frac{x}{\sqrt{x^2 + y^2}} = \frac{r\cos\theta}{r} = \cos\theta$
$\theta_x = \frac{-y}{x^2+y^2} = \frac{-r\sin\theta}{r^2} = \frac{-\sin\theta}{r}$
Plugging in gives
$\frac{\partial u}{\partial x} = u_r \cos\theta - u_{\theta} \frac{\sin\theta}{r}$
But I am unsure of how to take the next $x$ derivative and I am wondering if someone can help?
|
Let's start with
$$
\begin{cases}
x=r\cos(\theta)\\
y=r\sin(\theta).
\end{cases}
$$
We compute first $u_r:$
$$
u_r=u_xx_r+u_yy_r=\cos\theta u_x+\sin\theta u_y.
$$
$$
u_{rr}=\cos\theta u_{xr}+\sin\theta u_{yr}=\cos\theta (u_{xx}x_r+u_{xy}y_r)+\sin\theta(u_{xy}x_r +u_{yy}y_r)=\\
=\cos^2\theta u_{xx}+2\cos\theta\sin\theta u_{xy}+\sin^2\theta u_{yy}.
$$
Similarly
$$
u_\theta=u_x x_\theta+u_yy_\theta=-r\sin\theta u_x+r\cos\theta u_y.
$$
So
$$
u_{\theta\theta}=-r(\cos\theta u_x+\sin\theta u_y)+r^2(\sin^2\theta u_{xx}-2\cos\theta\sin\theta u_{xy}+\cos^2\theta u_{yy}).
$$
Dividing the $u_{\theta\theta}$ identity by $r^2$, adding it to $u_{rr}$ and $\frac 1r u_r$, and rearranging terms, we obtain:
$$
\Delta u=u_{rr}+\frac 1{r^2} u_{\theta\theta}+\frac 1r u_r
$$
A bit of calculation is omitted at the end; feel free to ask if something is unclear or wrong.
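The identity can also be checked numerically with finite differences: pick an arbitrary smooth $u$ (the choice below is just for illustration) and compare $u_{xx}+u_{yy}$ against $u_{rr}+\frac1r u_r+\frac1{r^2}u_{\theta\theta}$ at the same point.

```python
import math

u = lambda x, y: x ** 3 * y - math.sin(y)              # arbitrary smooth test function
U = lambda r, t: u(r * math.cos(t), r * math.sin(t))   # the same function in polar form

def cartesian_laplacian(x, y, h=1e-3):
    """Central finite differences for u_xx + u_yy."""
    return ((u(x + h, y) - 2 * u(x, y) + u(x - h, y)) / h ** 2
            + (u(x, y + h) - 2 * u(x, y) + u(x, y - h)) / h ** 2)

def polar_laplacian(r, t, h=1e-3):
    """Central finite differences for U_rr + U_r / r + U_tt / r^2."""
    U_rr = (U(r + h, t) - 2 * U(r, t) + U(r - h, t)) / h ** 2
    U_r = (U(r + h, t) - U(r - h, t)) / (2 * h)
    U_tt = (U(r, t + h) - 2 * U(r, t) + U(r, t - h)) / h ** 2
    return U_rr + U_r / r + U_tt / r ** 2
```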
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2001216",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Coefficients of a formal power series satisfying $\exp(f(z)) = 1 + f(q\,z)/q$ Let $(q;\,q)_n$ denote the $q$-Pochhammer symbol:
$$(q;\,q)_n = \prod_{k=1}^n (1 - q^k), \quad(q;\,q)_0 = 1.\tag1$$
Consider a formal power series in $z$:
$$f(z) = \sum_{n=1}^\infty \frac{(-1)^{n+1}P_n(q)}{n!\,(q;\,q)_{n-1}}z^n,\tag2$$
where $P_n(q)$ are some (yet unknown) polynomials in $q$:
$$P_n(q) = \sum_{k=0}^{m} c_{n,k} \, q^k,\tag3$$
where $m=\binom{n-1}2 = \frac{(n-1)(n-2)}2$ and $c_{n,k}$ are some integer coefficients.
Suppose the formal power series $f(z)$ satisfies the functional equation
$$\exp(f(z)) = 1 + f(q\,z)/q.\tag4$$
Expanding the left-hand side of $(4)$ in powers of $z$ using the exponential partial Bell polynomials, and comparing coefficients at corresponding powers of $z$ at both sides, we can obtain a system of equations, by solving which we can find the coefficients of the polynomials $P_n(q)$:
$$
\begin{align}
P_1(q) &= 1\\
P_2(q) &= 1\\
P_3(q) &= 2 + q\\
P_4(q) &= 6 + 6 q + 5 q^2 + q^3\\
P_5(q) &= 24 + 36 q + 46 q^2 + 40 q^3 + 24 q^4 + 9 q^5 + q^6\\
\dots
\end{align}\tag5
$$
This is quite a slow process, even when done on a computer. I computed the polynomials up to $n=27$ (they can be found here) using a Mathematica program that can be found here.
There are some patterns in the coefficients I computed (so far they are just conjectures):
$$
\begin{align}
c_{n,0} &= (n-1)!&\vphantom{\Huge|}\\
c_{n,1} &= \frac{(n-2)(n-1)!}2, &n\ge2\\
c_{n,2} &= \frac{(3n+8)(n-3)(n-1)!}{24}, &n\ge3\\
c_{n,3} &= \frac{(n^2 + 5 n - 34)\,n!}{48}, & n\ge4
\end{align}
\tag6
$$
and
$$
\begin{align}
c_{n,m} &= 1&\vphantom{\Huge|}\\
c_{n,m-1} &= \frac{(n+1)(n-2)}2, &n\ge2\\
c_{n,m-2} &= \frac{(3 n^3 - 5 n^2 + 6 n + 8)(n-3)}{24}, &n\ge3\\
c_{n,m-3} &= \frac{(n^4 - 10 n^3 + 43 n^2 - 74 n + 16) (n - 1) \, n}{48}, &n\ge4
\end{align}
\tag7
$$
where $m=\binom{n-1}2$. Other coefficients seem to follow more complicated patterns. We can also observe that
$$
\begin{align}
P_n(1) &= \frac{(n-1)!\,n!}{2^{n-1}}\\
P_{2n}(-1) &= \frac{(2n-1)!\,n!}{3^{n-1}}\\
P_{2n-1}(-1) &= \frac{(2n-1)!!\,(2n-2)!}{6^{n-1}},
\end{align}\tag8
$$
where $n!!$ denotes the double factorial.
I am trying to find a more direct formula for the polynomials $P_n(q)$ or their coefficients $c_{n,k}$ (possibly, containing finite products and sums, but not requiring to solve equations).
|
Using the first recurrence relation here, we can find a recurrence for the polynomials $P_n(q)$:
$$P_1(q) = 1, \quad P_n(q) = \sum_{k=1}^{n-1} {{n-1} \choose {k-1}} {{n-2} \brack {k-1}}_q P_k(q) \, P_{n-k}(q) \, q^{n-k-1},$$
where $n \choose k$ is the binomial coefficient, and ${n \brack k}_q$ is the $q$-binomial coefficient (also known as the Gaussian binomial coefficient). A Mathematica program that computes them using this recurrence can be found here.
If we introduce a notation for the coefficients of the formal power series $f(z)$, that are rational functions of $q$:
$$f(z) = \sum_{n=1}^\infty Q_n(q)\,z^n, \quad Q_n(q) = \frac{(-1)^{n+1}P_n(q)}{n!\,(q;\,q)_{n-1}},$$
then we can have a simpler recurrence for them:
$$Q_1(q) = 1, \quad Q_n(q) = \frac1{n \, (1-q^{1-n})}\sum_{k=1}^{n-1} k \, q^{-k} \, Q_k(q) \, Q_{n-k}(q).$$
It would be nice to find a more direct, non-recurrent formula for them.
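For readers without Mathematica, the $P_n(q)$ recurrence above can be implemented with integer coefficient lists; a minimal sketch (polynomials are lists indexed by the power of $q$):

```python
from math import comb
from functools import lru_cache

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

@lru_cache(maxsize=None)
def qbinom(n, k):
    """Gaussian binomial [n k]_q as a coefficient tuple,
    via [n k]_q = [n-1 k-1]_q + q^k [n-1 k]_q."""
    if k < 0 or k > n:
        return (0,)
    if k == 0 or k == n:
        return (1,)
    shifted = [0] * k + list(qbinom(n - 1, k))
    return tuple(poly_add(list(qbinom(n - 1, k - 1)), shifted))

def P(n):
    """P_n(q) via the recurrence in this answer; returns a coefficient list."""
    if n == 1:
        return [1]
    acc = [0]
    for k in range(1, n):
        term = poly_mul(P(k), P(n - k))
        term = poly_mul(term, list(qbinom(n - 2, k - 1)))
        term = [comb(n - 1, k - 1) * c for c in term]
        term = [0] * (n - k - 1) + term          # multiply by q^(n-k-1)
        acc = poly_add(acc, term)
    return acc
```

The first few outputs reproduce the polynomials listed in $(5)$ of the question.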
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2001325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
}
|
Is the axiom of choice needed to prove $|G/H| \times |H| = |G|$ (Lagrange's Theorem)? Consider the following sequence of assertions, each of which implies the next. Nothing below has any topology, we just have sets and discrete groups.
*
*If $F \hookrightarrow E \twoheadrightarrow B$ is fibre bundle of sets (this just means every fibre has the same cardinality), then $|E| = |F \times B|$
*If $H \hookrightarrow E \twoheadrightarrow B$ is a principal bundle, then $|E| = |H \times B|$
*If $H$ is a subgroup of $G$, then $|G| = |H \times G/H|$
*If $H$ is a normal subgroup of $G$, then $|G| = |H \times G/H|$
Question: Do all these assertions fail without the axiom of choice? Or, which of them hold? I actually only care about (3), but for some reason I thought adding these extra statements might somehow clarify things.
|
Yes, the axiom of choice is needed, to some extent.
As bof mentions in the comments, it is always the case that $\Bbb{ Q\hookrightarrow R\twoheadrightarrow R/Q}$. However, as explained by Andrés E. Caicedo on MathOverflow, it is consistent that $\Bbb{R/Q}$ cannot be linearly ordered, as a set. In that case, it is impossible that $\Bbb{|R|=|Q\times R/Q|}$, as that would imply that you can linearly order $\Bbb{R/Q}$.
This should be a counterexample for all the cases.
If you are in the situation where all the equivalence classes in $G/H$ have the same cardinality as $H$ (e.g. like in the $\Bbb{R/Q}$ case), then as explained in the second part of this answer on MathOverflow, you can get the equality from the assumption that "cardinal multiplication is just 'repeated summation'", which is equivalent to the Partition Principle. However, it is still open if the Partition Principle is equivalent to the axiom of choice.
So it might be the case that having $|G|=|H\times G/H|$ implies the axiom of choice, even just from this equality in the Abelian case.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2001438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
How to construct the set E invoving an almost constant function? Assume that $f$ is a function from $\mathbb{R}$ to $\mathbb{R}$, and for all $h\in \mathbb{R}$, the set $E_h=\{x:f(x+h)-f(x)\neq 0,x\in \mathbb{R}\}$ is a finite set which has no more than 2016 elements. Prove that there exists a set $E$ which has no more than $1008$ elements, such that $f$ is a constant in $\mathbb{R}\backslash E$.
To solve this problem, I think a keen observation is needed. My first thought was to prove that $f(\mathbb{R})$ has no more than 1008 elements, but that is hard for me. How can I do this?
|
Let $x_m=\min(E_1)+1$ and $x_M=\max(E_1)$.
We claim that $f(x)=f(x_m-1)$ for all $x<x_m$. In fact, assume by contradiction that $x<x_m$ and $f(x)\ne f(x_m-1)$. Then $f(x)=f(x-n)$ for all $n\in\Bbb{N}$, since otherwise $f(x-j)\ne f(x-j-1)$ for some $j\in\Bbb{N}_0$ and then $x-j-1\in E_1$, but $x-j-1<x_m-1=min(E_1)$. Similarly $f(x_m-1)=f(x_m-1-n)$ for all $n\in\Bbb{N}$.
But then $x-n\in E_h$ for $h=x_m-1-x$ and $n\in \Bbb N$, since
$$
f(x-n)=f(x)\ne f(x_m-1)=f(x_m-1-n)=f(x-n+h),
$$
and this is impossible, since $E_h$ is finite, and so we have proven the claim.
Similarly one proves that $f(x)=f(x_M+1)$ for all $x>x_M$.
So $f(x)=c$ for $x<x_m$ and $f(x)=C$ for $x>x_M$. But if $c\ne C$, then for $h>x_M-x_m$ we would have infinitely many $x\in E_h$, so $c=C$.
Consequently $f(x)=c$ for all $x$ outside the interval $[x_m,x_M]$.
Now take $h$ sufficiently big, for example take $h=3(x_M-x_m)$.
Then $f(x)\ne c$ for some $x\in [x_m,x_M]$, if and only if $x,x-h\in E_h$. Since there are at most 2016 elements in $E_h$, there are at most 1008 elements $x$ such that $f(x)\ne c$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2001523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
Show that $d \geq b+f$. Let $a, b, c, d, e, f$ be positive integers such that:
$$\dfrac{a}{b}<\dfrac{c}{d}<\dfrac{e}{f}$$
Suppose $af - be = -1$. Show that $d \geq b+f$.
Looked quite simple at first sight...but havent been able to solve this inequality. Have no idea where to start. Need help. Thanks!!
|
Hint: Try to derive that $bf<d$. What can you conclude from there?
(Hint 2: $bf = (b-1)(f-1) + (b+f) - 1$.)
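For completeness, here is one way the pieces can combine (a sketch of a standard argument, not necessarily the route the hint intends): since the three fractions increase and all entries are positive integers, the cross-differences $bc-ad$ and $de-cf$ are integers $\ge 1$, while $be-af=1$ by hypothesis. Then

```latex
\frac{1}{bf} \;=\; \frac{be-af}{bf} \;=\; \frac{e}{f}-\frac{a}{b}
\;=\; \Bigl(\frac{e}{f}-\frac{c}{d}\Bigr) + \Bigl(\frac{c}{d}-\frac{a}{b}\Bigr)
\;=\; \frac{de-cf}{df} + \frac{bc-ad}{bd}
\;\ge\; \frac{1}{df} + \frac{1}{bd} \;=\; \frac{b+f}{bdf}.
```

Multiplying through by $bdf>0$ gives $d \ge b+f$.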
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2001685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Show that $\mathbb{E}\left(\bar{X}_{n}\mid X_{(1)},X_{(n)}\right) = \frac{X_{(1)}+X_{(n)}}{2}$
Let $X_{1},\ldots,X_{n}$ be i.i.d. $U[\alpha,\beta]$ r.v.s., and let $X_{(1)}$ denote the $\min$, and $X_{(n)}$ the $\max$. Show that
$$
\mathbb{E}\left(\overline{X}_{n}\mid X_{(1)},X_{(n)}\right) = \frac{X_{(1)}+X_{(n)}}{2}.
$$
I know that $\displaystyle\mathbb{E}\left(\overline{X}_{n}\mid X_{(1)},X_{(n)}\right)=\mathbb{E}\left({X}_{1}\mid X_{(1)},X_{(n)}\right)$ but not much more.
|
After a translation and a rescaling, we can assume that $\alpha=0$ and $\beta=1$. Assume that $1\lt i\lt n$. Let $X_{(i)}$ denote the $i$th smallest element among $X_1,\dots,X_n$ (which is almost surely well-defined, as the vector $\left(X_1,\dots,X_n\right)$ has a continuous distribution). Then for each Borel subset $B$ of $\mathbb R^2$, we have, by symmetry,
$$
\mathbb E\left[\left(\frac{X_{(1)}+X_{(n)}}2-X_{(i)} \right)\mathbf 1\left\{\left(X_{(1)},X_{(n)}\right)\in B\right\} \right]\\ =
n!\mathbb E\left[\left(\frac{X_{1}+X_{n}}2-X_{i} \right)\mathbf 1\left\{\left(X_{1},X_{n}\right)\in B\right\} \mathbf 1\left\{X_1\lt X_2\lt \dots X_i\lt \dots \lt X_n \right\} \right].
$$
Now, we use the fact that if $a$ and $b$ are two fixed real numbers such that $0\leqslant a\lt b\leqslant 1$, then
$$\int_0^1\mathbf 1\left\{a\lt x_2\lt\dots \lt x_{i-1}\lt b\right\}\mathrm dx_2\dots dx_{i-1}=\frac{\left(b-a\right)^{i-2}}{(i-2)!} \mbox{ and } $$
$$\int_0^1\mathbf 1\left\{a\lt x_{i+1} \lt\dots \lt x_{n-1}\lt b\right\}\mathrm dx_{i+1} \dots dx_{n-1}=\frac{\left(b-a\right)^{n-i-1}}{(n-i-1)!} . $$
We get, using independence and then the fact that $\left(X_1,X_i,X_n\right)$ has the same distribution as $\left(X_1,X_2,X_3\right)$,
$$
\mathbb E\left[\left(\frac{X_{(1)}+X_{(n)}}2-X_{(i)} \right)\mathbf 1\left\{\left(X_{(1)},X_{(n)}\right)\in B\right\} \right]\\ =
\frac{ n!}{\left(i-2\right)!\left(n-i-1\right)!} \mathbb E\left[\left(\frac{X_{1}+X_{n}}2-X_{i} \right)\mathbf 1\left\{\left(X_{1},X_{n}\right)\in B\right\}\left(X_i-X_1\right)^{i-2}\left(X_n-X_i\right)^{n-i-1} \mathbf 1\left\{X_1\lt X_i\lt X_n \right\} \right]\\ =
\frac{ n!}{\left(i-2\right)!\left(n-i-1\right)!} \mathbb E\left[\left(\frac{X_{1}+X_{3}}2-X_{2} \right)\mathbf 1\left\{\left(X_{1},X_{3}\right)\in B\right\}\left(X_2-X_1\right)^{i-2}\left(X_3-X_2\right)^{n-i-1} \mathbf 1\left\{X_1\lt X_2\lt X_3 \right\} \right].
$$
Define $A :=\sum_{i=2}^{n-1} \mathbb E\left[\left(\frac{X_{(1)}+X_{(n)}}2-X_{(i)} \right)\mathbf 1\left\{\left(X_{(1)},X_{(n)}\right)\in B\right\} \right]$. In view of the previous computations, we have
$$A=\sum_{j=0}^{n-3}\frac{n!}{j!(n-3-j)!} \mathbb E\left[\left(\frac{X_{1}+X_{3}}2-X_{2} \right)\mathbf 1\left\{\left(X_{1},X_{3}\right)\in B\right\}\left(X_2-X_1\right)^{j}\left(X_3-X_2\right)^{n-3-j} \mathbf 1\left\{X_1\lt X_2\lt X_3 \right\} \right] \\
=n(n-1) (n-2) \mathbb E\left[\left(\frac{X_{1}+X_{3}}2-X_{2} \right)\mathbf 1\left\{\left(X_{1},X_{3}\right)\in B\right\} \left(X_3-X_1\right)^{n-3} \mathbf 1\left\{X_1\lt X_2\lt X_3 \right\} \right] .$$
Integrating the last expectation with respect to $X_2$ first, we derive that $A=0$. We have therefore shown that
$$\mathbb E\left[\sum_{i=2}^{n-1}X_{(i)}\mid X_{(1) },X_{(n)} \right] =\frac{n-2}2\left(X_{(1)} + X_{(n)} \right).$$
To conclude the desired result, it suffices to notice that $\sum_{i=1}^nX_i= \sum_{i=1}^nX_{(i)}$.
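As a sanity check, here is a small simulation (the sample size, seed, and the particular conditioning event are arbitrary choices): if the claim holds, the gap between the sample mean and the midrange should average out to zero even after conditioning on an event defined through $X_{(1)}$ and $X_{(n)}$.

```python
import random

rng = random.Random(0)
n, trials = 5, 200_000
gaps = []
for _ in range(trials):
    xs = [rng.random() for _ in range(n)]
    lo, hi = min(xs), max(xs)
    if lo < 0.2 and hi > 0.8:          # an event measurable w.r.t. (min, max)
        gaps.append(sum(xs) / n - (lo + hi) / 2)

avg_gap = sum(gaps) / len(gaps)        # should be very close to 0
```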
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2001780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
}
|
Every Real number is expressible in terms of differences of two transcendentals Is it true that for every real number $x$ there exist transcendental numbers $\alpha$ and $\beta$ such that $x=\alpha-\beta$?
(it is true if $x$ is an algebraic number).
|
Suppose there is some real number x that can't be written as the difference of two transcendental numbers. Then for every transcendental number y, the number z = y + x must be algebraic (otherwise x = z - y would be such a difference). But this means f(a) = a + x is an injective function from the transcendental numbers to the algebraic numbers, which implies the cardinality of the transcendental numbers is less than or equal to that of the algebraic numbers. But we know this is false, because the cardinality of the algebraic numbers (countable) is strictly less than that of the transcendentals (uncountable).
Q.E.D.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2001922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 5,
"answer_id": 2
}
|
Discrete Math composition functions Give examples of sets $X$, $Y$, $Z$ and functions $f: X \to Y$, $g: Y \to Z$, so that
the composition $g\circ f: X \to Z$ is a bijection, although neither $f$ nor $g$ is.
I have no idea how to begin thinking about this.
|
WLOG we can take $Z = X$ and $g \circ f = \mathrm{Id}$.
If you can apply $f$ without "losing information", it's because $f$ is bijective on its image, i.e. injective.
Thus take $Y$ bigger than $X$.
Then you just have to left invert $f$.
If you want to take $Z \neq X$ and $g \circ f \neq Id$, just left compose with a bijection from $X$ to $Z$.
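A concrete instance of the hint with tiny finite sets (this particular choice of sets and maps is mine, one of many):

```python
# X = Z = {0}, Y = {0, 1}: f embeds X into a bigger Y, g collapses Y back.
X, Y, Z = [0], [0, 1], [0]
f = {0: 0}            # f: X -> Y is not surjective (1 is never hit)
g = {0: 0, 1: 0}      # g: Y -> Z is not injective (0 and 1 collide)

gf = {x: g[f[x]] for x in X}   # yet g o f is the identity on X, a bijection
```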
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2002063",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
What kind of PDE is this and how can I solve it? I could not find a category of PDEs in which to classify this equation, and I do not know how to begin to solve it.
$\Delta u - u = 1$ in $\Omega = [0,\pi]\times [0,\pi]$
with the boundary conditions:
$u = 0$ on $[0,\pi]\times\{0\}\cup[0,\pi]\times\{\pi\}$ and
$\frac{\partial u}{\partial \eta} = 0$ on $\{0\}\times[0,\pi]\cup\{\pi\}\times[0,\pi]$
I tried with separation of variables, but without results.
|
The conditions given result in a function $u(x,y)$ that is constant in $x$. That is, $u(x,y)=U(y)$. And,
$$
U''(y)-U(y)=1,\;\;\; U(0)=0,\; U(\pi)=0.
$$
That can be solved with ODE methods and the annihilator method
$$
D(D-1)(D+1)U=0.
$$
The general solution is
$$
U(y)=-1+Be^{y}+Ce^{-y}
$$
where the constants $B,C$ are uniquely determined by the endpoint conditions.
$$ U(0)=-1+B+C=0 \\
U(\pi)=-1+Be^{\pi}+Ce^{-\pi}=0
$$
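A quick numerical check of this solution (the evaluation point and finite-difference step below are arbitrary):

```python
import math

# Solve B + C = 1 and B*e^pi + C*e^{-pi} = 1 for the two constants.
ep, em = math.exp(math.pi), math.exp(-math.pi)
B = (1 - em) / (ep - em)
C = 1 - B

def U(y):
    return -1 + B * math.exp(y) + C * math.exp(-y)

# Endpoint conditions, plus U'' - U = 1 checked by central differences at y = 1.
h = 1e-5
residual = (U(1 + h) - 2 * U(1) + U(1 - h)) / h**2 - U(1)   # should be ~1
```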
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2002140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Correct formulation of axiom of choice In a paper (Wayback Machine), Asaf Karagila writes:
Definition 1 (The Axiom of Choice). If $\{A_i
\mid i ∈ I\}$ is a set of non-empty sets, then there exists
a function $f$ with domain $I$ such that $f(i) ∈ A_i$
for all $i ∈ I$.
Does this formally make sense? Shouldn't it say
If $(A_i)_{i\in I}$ is a family of non-empty sets …
?
Because if we have a set $M$ of non-empty sets there may be no unique family $(A_i)_{i\in I}$ with $\{A_i\mid i\in I\}= M$.
|
It does make sense subject to a generous interpretation of the role of the indexing set $I$ and the indexing function $i \mapsto A_i$. It would be much better style (in my opinion) either to write it as you suggested, stating explicitly that the indexing function $i \mapsto A_i$ is part of the data or to write it without mentioning the indexing set at all: "if $U$ is a set of non-empty sets, then there is a function $f$ with domain $U$ such that $f(A) \in A$ for every $A \in U$". The two statements are equivalent (because any indexing function $i \mapsto A_i$ of the set $U$ factors through the identity function on $U$, which you can regard as a sort of "minimalist" indexing function).
[Aside: I don't think there is any general agreement that "family" means indexed family. Indeed, some authors explicitly state that they use terms like "set", "family" and "collection" as synonyms. So it's safest to say "indexed set" or "indexed family".]
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2002278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Steady State Differential Equation I am trying to find the steady-state solution of the following ODE:
$$(D^2+D+4.25I)y=22.1\cos(4.5t)$$
The answer from the back of my textbook is:
$$y_p = -1.28\cos(4.5t)+0.36\sin(4.5t)$$
I found
$$\begin{align}
y_p&=k_1\sin(4.5t)+k_2\cos(4.5t)\\
y_p'&=4.5k_1\cos(4.5t)-4.5k_2\sin(4.5t)\\
y_p''&=-20.25k_1\sin(4.5t)-20.25k_2\cos(4.5t)
\end{align}$$
But I am not sure how to plug back into initial equation to find the answer $y_p$. I don't understand how $D$ and $I$ work.
|
You are almost there, we choose the particular solution:
$$y_p(t) = a \cos(4.5 t) + b \sin(4.5 t)$$
We take:
$$(D^2 + D + 4.25 I)y_p = y_p'' + y_p' + 4.25 y_p = 22.1 \cos(4.5t)$$
Using your derivatives and adding all these terms and simplifying, we get:
$$(-4.5 a-16 b) \sin (4.5 t)+(-16 a+4.5 b) \cos (4.5 t) = 22.1 \cos(4.5t)$$
Equating like terms, we have to solve the $2\times2$ system:
$$-4.5 a-16 b = 0 \\-16 a + 4.5 b = 22.1$$
Solving this using your favorite approach (Cramer's, Substitution, Gaussian, ...), we get:
$$a=-1.28, b=0.36$$
Now you have:
$$y(t) = y_h(t) + y_p(t)$$
Now all that is left is finding the steady-state.
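The same computation can be scripted, using Cramer's rule for the $2\times2$ system and then checking the residual of $y_p''+y_p'+4.25\,y_p=22.1\cos(4.5t)$ at a few sample points:

```python
import math

# System: -4.5 a - 16 b = 0  and  -16 a + 4.5 b = 22.1  (Cramer's rule)
det = (-4.5) * 4.5 - (-16.0) * (-16.0)
a = (0.0 * 4.5 - (-16.0) * 22.1) / det          # a = -1.28
b = ((-4.5) * 22.1 - 0.0 * (-16.0)) / det       # b =  0.36

def yp(t):  return a * math.cos(4.5 * t) + b * math.sin(4.5 * t)
def yp1(t): return -4.5 * a * math.sin(4.5 * t) + 4.5 * b * math.cos(4.5 * t)
def yp2(t): return -20.25 * yp(t)

residuals = [yp2(t) + yp1(t) + 4.25 * yp(t) - 22.1 * math.cos(4.5 * t)
             for t in (0.0, 0.7, 2.3)]
```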
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2002380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Fibonacci Identity with Binomial Coefficients A friend showed me this cool trick: Take any row of Pascal's triangle (say, $n = 7$):
$$1, 7, 21, 35, 35, 21, 7, 1$$
Leave out every other number, starting with the first one:
$$7, 35, 21, 1$$
Then these are backwards base-5 "digits", so calculate:
$$7 + 35 \cdot 5 + 21 \cdot 5^2 + 1 \cdot 5^3 = 7 + 175 + 525 + 125 = 832$$
and divide by $2^{7-1} = 2^6 = 64$:
$$\frac{832}{64} = 13$$
and $F_7 = 13$ (the seventh Fibonacci number)!
He said it works for any $n$. I have worked out that this would be to prove that:
$$\frac{1}{2^{n-1}}\sum_{k = 0}^{\lfloor{\frac{n}{2}}\rfloor}{\left(5^k {n \choose 2k + 1} \right)} = F_n $$
I'm not sure how to proceed from here. Is there a neat combinatoric or easy algebraic proof I am missing? Thanks!
|
Since the factor $\frac{\sqrt5^k-\left(-\sqrt5\right)^k}{2\sqrt5}$ vanishes for even $k$ and equals $5^{(k-1)/2}$ for odd $k$, the sum can be rewritten over all $k$ as
$$
\begin{align}
\frac1{2^{n-1}}\sum_{k=0}^n\binom{n}{k}\frac{\sqrt5^k-\left(-\sqrt5\right)^k}{2\sqrt5}
&=\frac{\left(\frac{1+\sqrt5}2\right)^n-\left(\frac{1-\sqrt5}2\right)^n}{\sqrt5}\\
&=F_n
\end{align}
$$
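The identity is also easy to confirm with exact integer arithmetic, after clearing the denominator $2^{n-1}$ (the range of $n$ tested below is arbitrary):

```python
from math import comb

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(1, 31):
    lhs = sum(5**k * comb(n, 2 * k + 1) for k in range(n // 2 + 1))
    assert lhs == fib(n) * 2**(n - 1)    # e.g. n = 7: 832 == 13 * 64
```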
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2002702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 4,
"answer_id": 3
}
|
What is the general formula for a convergent infinite geometric series? This question is related, but different, to one of my previous questions (Does this infinite geometric series diverge or converge?). To avoid the previous question getting off-topic, I have created a separate question.
I'm looking for the general formula of a convergent infinite geometric series. I want to be able to calculate any convergent infinite geometric series I come across, regardless of where it starts at. Some examples of this are:
$$ \sum_{n=0}^\infty ar^n$$
$$ \sum_{n=1}^\infty ar^n$$
$$ \sum_{n=2}^\infty ar^n$$
...
$$ \sum_{n=5}^\infty ar^n$$
...
etc.
I would appreciate it if someone could present such a formula and explain the reasoning behind it. Also, please illustrate how the formula can be applied to the above examples.
Thank you.
|
In my opinion, the simplest way to memorize the formula is
$$ \frac{\text{first}}{1 - \text{ratio}} $$
So whether you're computing
$$ \frac{1}{8} + \frac{1}{16} + \frac{1}{32} + \ldots $$
or
$$ \sum_{n=3}^{\infty} 2^{-n} $$
or
$$ \sum_{n=0}^{\infty} \frac{1}{8} 2^{-n} $$
you can quickly identify the sum as
$$ \frac{ \frac{1}{8} }{1 - \frac{1}{2}} $$
Similarly, for a finite geometric sequence, a formula is
$$ \frac{\text{first} - \text{(next after last)}}{1 - \text{ratio}} $$
The infinite version can be viewed as a special case, where $(\text{next after last}) = 0$.
I find this formula more convenient written as
$$ \frac{\text{(next after last)} - \text{first}}{\text{ratio} - 1} $$
e.g.
$$ 2 + 4 + 8 + \ldots + 256 = \frac{512 - 2}{2 - 1}$$
But in a pinch, you can always just rederive the formula since the method is simple:
$$ \begin{align}(2-1) (2 + 4 + 8 + \ldots + 256)
&= (4 - 2) + (8 - 4) + (16 - 8) + \ldots + (512 - 256)
\\&= 512 - 2
\end{align}$$
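Both versions of the formula are easy to sanity-check numerically (the truncation point $60$ below is arbitrary):

```python
# Infinite case: sum_{n=3}^inf 2^{-n} = first / (1 - ratio) = (1/8) / (1/2).
closed = (1 / 8) / (1 - 1 / 2)                   # 0.25
partial = sum(2.0**(-n) for n in range(3, 60))   # numerically the same

# Finite case: 2 + 4 + ... + 256 = (next-after-last - first) / (ratio - 1).
finite = sum(2**k for k in range(1, 9))          # 510
formula = (512 - 2) // (2 - 1)
```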
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2002780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
When does the tangent line to the sine curve pass through the origin? I am trying to find values $a$ and $w$ for which the line $y=ax$ is tangent to the curve $y=\sin(x)$ at $x=w$.
One immediate solution is $a=1$ and $w=0$, but I would like $a<0$ and $w\in (\pi,3\pi/2)$, and I am having a hard time finding such a solution. Asking Mathematica doesn't help (Solve, Reduce, Minimize all say This system cannot be solved) and asking Google gives me elementary calculus problems. Here is a picture showing such values of $a$ and $w$ exist (and giving an impression of how the problem changes with $a$ changing):
It is clear that there is some $a\in(-.3,-.2)$ such that $y=ax$ is tangent to $\sin(x)$, and I could find a numerical approximation, but I want an exact solution.
Thought 1: Describe the tangent line to $\sin(x)$ at $(w,\sin(w))$ as $y=\sin(w)+\cos(w)(x-w)$, and find when that intersects the origin. This leads to solving the equation $\tan(w)=w$, seemingly impossible.
Thought 2: Given $a$, I can find a spot on the sine curve with slope $a$. Indeed, such a point is $\cos(w)=a$, or $w=\arccos(a)$. Due to the limited range of $\arccos$ and wanting $w\in (\pi,3\pi/2)$, I need $w=2\pi-\arccos(a)$. Now I have to solve $a(2\pi-\arccos(a))=\sin(2\pi-\arccos(a))$, which can be simplified by the periodicity of $\sin$ and knowing $\sin(\arccos(a))=\sqrt{1-a^2}$, but that doesn't help too much.
My problem is actually slightly more general, I have the curve $y=\sin(cx)$ for some changing $c$. I know there exists exactly one triple $(a,c,w)$, for $a<0$ and $w\in (\pi/c, 3\pi/2c)$ such that $y=ax$ is tangent to $y=\sin(cx)$ at $x=w$, but I am worried this is a problem that cannot be solved. I tried doing this in the (seemingly) simplest case $c=1$ above, but to no avail. I would be very glad to know if there are related problems / solutions / approaches.
|
I guess the equation you need to solve is
$$\frac{\sin(x)}{x} = \cos(x).$$
The left hand side is the slope of the line through the points $(x,\sin(x))$ and the right hand side is the slope of the line tangent to the graph of $y=\sin(x)$ at the point $x$. You want these to be equal.
The reason you're getting the complaint that "This system cannot be solved" is because you need numerical, rather than symbolic techniques to solve the system. The logical way to do this with Mathematica is to use the FindRoot command:
FindRoot[Cos[x] == Sin[x]/x, {x, Pi, 3 Pi/2}]
(* Out: {x -> 4.49341} *)
(The original answer concludes with a plot of $y=\sin(x)$ together with the tangent line through the origin at $x \approx 4.49341$.)
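If Mathematica is not at hand, plain bisection on $g(x)=\cos(x)-\sin(x)/x$ over $(\pi, 3\pi/2)$ does the same job (a sketch; the iteration count is deliberate overkill):

```python
import math

def g(x):
    return math.cos(x) - math.sin(x) / x

lo, hi = math.pi + 1e-9, 1.5 * math.pi   # g < 0 at the left end, g > 0 at the right
for _ in range(100):
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid

w = (lo + hi) / 2      # ~ 4.49341, agreeing with FindRoot
a = math.cos(w)        # slope of the tangent line y = a*x, ~ -0.217
```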
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2002921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
What is the smallest n for which n! has more than 10 digits? I saw this exercise from the open sourced "Book of Proof", which I'm using in an abstract math class. I could only find out the answer by manually calculating different factorials. But I'm also curious if there is better method for it. Any hint and answers are appreciated!
|
A way to reduce the complexity is to keep only a few (say $3$) significant digits when doing your computations, since you only care about the number of digits. But honestly speaking, what the question asks is not that computationally complex anyway.
If you had, for example, $10^{100}$ digits, then it may be useful to use a more or less refined Stirling approximation, depending on your needs.
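For this particular exercise, brute force settles it instantly, since Python integers are exact (reading "more than 10 digits" as at least 11):

```python
n, fact = 1, 1
while len(str(fact)) <= 10:
    n += 1
    fact *= n

# n is now the smallest integer whose factorial has more than 10 digits:
# n = 14, since 13! = 6227020800 (10 digits) and 14! = 87178291200 (11 digits)
```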
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2003015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Prove $f(x)\le0$ if $f(0)=0$ and $\int_0^xf(t)\,\mathrm{d}t\ge xf(x)$
$f(x)$ is a differentiable real valued function that satisfies following conditions:
$$f(0)=0$$
$$\int_0^xf(t)\,\mathrm{d}t\ge xf(x)$$
Prove that for all $x>0$
$$f(x)\le0$$
I tried but couldn't derive the conclusion from the given conditions. $f(x)$ seems to have to decrease around $x=0$ but not necessarily for all $x$.
|
Hint: write it as $$\int_0^x \big(f(t)-f(x)\big)\, dt \ge 0$$
[ EDIT ] The above implies that for a given $x$, there exists a $c \in (0,x)$ such that $f(c) \ge f(x)$. Therefore the set $C_1 = \{ c \in (0,x) \mid f(c) \ge f(x)\}$ is not empty and, being obviously bounded, it has an infimum $0 \le c_1 = \inf C_1 \lt x$. Also, it follows by continuity that $f(c_1) \ge f(x)$.
Repeat for interval $(0,c_1)$ and so on, building the decreasing sequence $x \gt c_1 \gt c_2 \gt \cdots$ with $f(x) \le f(c_1) \le f(c_2) \le \cdots$. Let $c = \lim_{n \to \infty} c_n$, then again by continuity $f(x) \le f(c)$.
If $c=0$ then $f(x) \le f(0) = 0$ and the proof is complete. Otherwise $c \gt 0$ and it follows fairly easily that $f(t) \le f(x)$ for $t \in (0,c)$. If $f$ is constant on $(0,c)$ then yet again by continuity $f(x) \le f(c) = f(0) = 0$. Otherwise there must exist a $t_0 \in (0,c)$ where strictly $f(t_0) \lt f(c)$. But then the strict inequality would hold on a neighborhood of $t_0$ which, together with $f(t) \le f(c)$, would cause $\int_0^c \big(f(t)-f(c)\big) dt \lt 0$, contradicting the premise.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2003158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
What is $(-1)^{\frac{1}{7}} + (-1)^{\frac{3}{7}} + (-1)^{\frac{5}{7}} + (-1)^{\frac{9}{7}} + (-1)^{\frac{11}{7}}+ (-1)^{\frac{13}{7}}$? The question is as given in the title. According to WolframAlpha, the answer is 1, but I am curious as to how one gets that. I tried simplifying the above into $$6(-1)^{\frac{1}{7}}$$ however that does not give 1.
|
We have
$$x^7+1=(x+1)(x^6-x^5+x^4-x^3+x^2-x+1).$$ Let $x=(-1)^{1/7}$ and get the result.
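Numerically, interpreting $(-1)^{t}$ as the principal value $e^{i\pi t}$, the six terms indeed sum to $1$:

```python
import cmath

ks = (1, 3, 5, 9, 11, 13)
s = sum(cmath.exp(1j * cmath.pi * k / 7) for k in ks)
# These six numbers are the roots of x^7 = -1 other than x = -1; since all
# seven roots sum to 0, the six of them sum to 0 - (-1) = 1.
```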
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2003317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Prove that the matrix $I_n - A$ is invertible Let $A$ be an $n \times n$ matrix with the property that $A^3 = O_n$, where $O_n$ denotes the $n \times n$ matrix which
has all the entries equal to $0$. Let $I_n$ be the $n \times n$ identity matrix. Prove that the matrix $I_n - A$
is invertible, and indicate how you would obtain its inverse.
|
I think that it should read $I_n -A$ instead of $I_n \rightarrow A$.
Observe that
$ I_n=I_n-A^3=(I_n-A)(A^2+A+I_n)$
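A concrete check in plain Python (the particular strictly upper-triangular $A$ below is an arbitrary choice; any such $3\times3$ matrix satisfies $A^3 = O_3$):

```python
def matmul(P, Q):
    m = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
A = [[0, 1, 2], [0, 0, 3], [0, 0, 0]]    # strictly upper triangular
A2 = matmul(A, A)

ImA = [[I[i][j] - A[i][j] for j in range(3)] for i in range(3)]
inv = [[I[i][j] + A[i][j] + A2[i][j] for j in range(3)] for i in range(3)]
product = matmul(ImA, inv)               # equals I, because A^3 = O_3
```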
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2003429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Regarding the Tricomi confluent hypergeometric function Is the following equation true for Tricomi confluent Hypergeometric function? $$\phi(1,0,ax)=1-ax\phi(1,1,ax)$$ here $\phi(.,.,.)$ is the Tricomi confluent hypergeometric function. Thanks in advance.
|
As usual examine carefully :)
Using DLMF 13.2.11, 13.6.6
$U(1,0,z)=z\cdot U\left(2,2,z\right)=e^{z}\cdot E_{2}\left(z\right)$
$z\cdot U(1,1,z)=z\cdot e^{z}E_{1}\left(z\right)$
So your question is
$E_{2}\left(z\right)+z\cdot E_{1}\left(z\right)=e^{-z}$
Which matches 8.19.12
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2003541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find the largest possible integer n such that $\sqrt{n}+\sqrt{n+60}=m$ for some non-square integer m. The solution to this problem says that:
Squaring both sides gives us that $n(n+60)$ is a perfect square
I do not understand this; can someone explain to me why this is true?
Squaring gives:
$2n+60+2\sqrt{n}\sqrt{n+60}=m^2$
Embarrassed that I missed it:
$\sqrt{n}\sqrt{n+60}$ must be equal to some positive integer which implies that
$n(n+60)$ must be equal to some squared integer
|
As you have found out it is necessary that $n(n+60)$ is a perfect square. Therefore we want $n(n+60)=(n+r)^2$, or $(60-2r)n=r^2$, for some integer $r\geq0$. It follows that $r$ has to be even, and writing $r=2s$ we arrive at the condition $$(15-s)n=s^2\tag{1}$$ for integer $s$ and $n\geq0$. Considering the cases $0\leq s\leq 15$ in turn we see that the following pairs $(s,n)$ solve $(1)$:
$$(0,0),\quad(6,4),\quad(10,20),\quad(12,48),\quad(14,196)\ .$$
The largest occurring $n$ in this list is $196$, and this $n$ indeed leads to an integer $m=\sqrt{196}+\sqrt{196+60}=30$. As $30$ is not a square, the answer to the question is $196$.
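A brute-force cross-check of this case analysis (the derivation shows any solution has $n \le 196$, so scanning to $1000$ is more than enough; note both square conditions are needed, since e.g. $n=20$ makes $n(n+60)$ a perfect square while $\sqrt n+\sqrt{n+60}$ is irrational):

```python
import math

def is_square(k):
    r = math.isqrt(k)
    return r * r == k

solutions = []
for n in range(1, 1001):
    p = n * (n + 60)
    if not is_square(p):
        continue
    m2 = 2 * n + 60 + 2 * math.isqrt(p)   # (sqrt(n) + sqrt(n + 60))^2
    if is_square(m2):
        solutions.append((n, math.isqrt(m2)))
# solutions == [(4, 10), (196, 30)]; both values of m are non-squares
```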
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2003624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
What is the limit $\lim_{n\to\infty}\sqrt[n]{n^2+3n+1}$? I tried the following, but I do not know how to continue:
\begin{align}
& \lim_{n\to\infty}\sqrt[n]{n^2+3n+1}=\lim_{n\to\infty}(n^2+3n+1)^{1/n} = e^{\lim_{n\to\infty} \frac{1}{n}\ln(n^2+3n+1)} \\[10pt]
= {} & e^{\lim_{n\to\infty}\frac{1}{n} \frac{\ln(n^2+3n+1)}{n^2+3n+1} (n^2+3n+1)}.
\end{align}
|
PRIMER
In THIS ANSWER, I showed using only the limit definition of the exponential function along with Bernoulli's Inequality that the logarithm function satisfies the inequalities
$$\frac{x-1}{x}\le \log(x) \le x-1 \tag 1$$
for $x>0$.
We will now use $(1)$ to show that $\lim_{n\to \infty}\frac1n \log(n)=0$. To do so, we see from $(1)$ that for any $\alpha>0$
$$\begin{align}
\alpha \log(x) &=\log(x^\alpha)\\\\
&\le x^\alpha -1\\\\
&<x^\alpha\\\\
\log(x)\le \frac{x^\alpha}{\alpha}
\end{align}$$
Hence, we can assert that
$$\frac1n \log(n)\le \frac{n^{\alpha -1}}{\alpha} \tag 2$$
for any $\alpha >0$.
Since $(2)$ is true for any positive $\alpha$, it is certainly true for $0<\alpha <1$. Therefore, for $0<\alpha <1$ we see that $\lim_{n\to \infty}\frac1n \log(n)=0$.
Next, using $\log(n^2+3n+1)\le \log(5n^2)=\log(5)+2\log(n)$, we see that
$$\begin{align}
\lim_{n\to \infty}\sqrt[n]{n^2+3n+1}&=\lim_{n\to \infty}e^{\frac1n \log(n^2+3n+1)}\\\\
&=e^{\lim_{n\to \infty}\frac1n \log(n^2+3n+1)}\\\\
&=e^0\\\\
&=1
\end{align}$$
as was to be shown!
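Numerically the convergence is slow but visible (the sample values of $n$ are arbitrary):

```python
def a(n):
    return (n * n + 3 * n + 1) ** (1.0 / n)

vals = [a(10), a(1000), a(10**6)]   # roughly 1.63, 1.014, 1.00003 -> limit 1
```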
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2003708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
}
|
Difference of Squares functional equation Find all twice differentiable functions mapping the reals to the reals such that $$f(x)^2-f(y)^2=f(x+y)f(x-y)$$.
From plugging in $x=y=0$, I got $f(0)=0$. Then from plugging in $x=0$, we get that $f(-y)=-f(y)$, i.e. that $f$ is odd.
However, I can't continue further. Any help?
|
Hint (suggested by the "twice differentiable" condition): take the derivative in $y$.
$$-2f(y)f'(y) = f'(x+y)f(x-y) - f(x+y)f'(x-y)$$
Now take the derivative in $x$.
$$
\begin{align}
0 & = f''(x+y)f(x-y) + f'(x+y)f'(x-y) - f'(x+y)f'(x-y) - f(x+y)f''(x-y) \\
& = f''(x+y)f(x-y) - f(x+y)f''(x-y)
\end{align}
$$
With $x+y \mapsto a$ and $x-y \mapsto b$ the above gives $\frac{f''(a)}{f(a)} = \frac{f''(b)}{f(b)}$ so in the end $\frac{f''(x)}{f(x)} = \text{const}$.
The above is a necessary condition, though not a sufficient one. Checking the candidate functions against the original equation, the eligible solutions are $f(x) = \lambda x$, $(\dagger)\;f(x) = \lambda \sin(\mu x)$, and $f(x) = \lambda \sinh(\mu x)$, with $\lambda, \mu \in \mathbb{R}$.
[ EDIT ] $\;\;(\dagger)\;\;$ As kindly pointed out by @CarlSchildkraut, the original answer was missing the $\sin$ family of solutions. To fill in the blanks left after deriving the differential equation $\frac{f''(x)}{f(x)} = c\;:$
*
*$c=0$ has $f(x) = \lambda x + \mu$ as solutions. The initial condition $f(0)=0$ implies $\mu = 0$, and the remaining functions $f(x)=\lambda x$ are easily verified to satisfy the original equation due to the algebraic identity $a^2 - b^2 = (a+b)(a-b)$.
*$c \lt 0$ gives $f(x) = \lambda \sin(\sqrt{-c}\,x) + \mu \cos(\sqrt{-c}\,x)$ with $\mu=0$ from the initial condition, so $f(x)= \lambda \sin(\sqrt{-c}\,x)$, which satisfies the original equation due to the trigonometric identity $\sin^2(a) - \sin^2(b) = \sin(a+b)\;\sin(a-b)$.
*$c \gt 0$ gives $f(x) = \lambda e^{\sqrt{c}\,x} + \mu e^{-\sqrt{c}\,x}$ with $\mu = -\lambda$ from the initial condition, so in the end $f(x) = \lambda \left( e^{\sqrt{c}\,x} - e^{-\sqrt{c}\,x}\right) = 2\lambda\sinh(\sqrt{c}\,x)$, which also satisfies the original equation, this time due to the hyperbolic identity $\sinh^2(a) - \sinh^2(b) = \sinh(a+b)\;\sinh(a-b)$.
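A numeric spot-check of the linear and sine families (parameter values, sample range and tolerance below are arbitrary):

```python
import math, random

def check(f, trials=500):
    rng = random.Random(1)
    for _ in range(trials):
        x, y = rng.uniform(-4, 4), rng.uniform(-4, 4)
        # f(x)^2 - f(y)^2 should equal f(x+y) * f(x-y)
        assert abs(f(x)**2 - f(y)**2 - f(x + y) * f(x - y)) < 1e-9

check(lambda x: -3.0 * x)                  # f(x) = lambda * x
check(lambda x: 2.0 * math.sin(0.7 * x))   # f(x) = lambda * sin(mu * x)
```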
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2003825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Evaluate the summation of $(-1)^k$ from $k=0$ to $k=n$ $$\sum_{k=0}^n(-1)^k$$
I know that the answer will be either $1$ or $0$ depending on whether the total number of terms is odd or even, but how can I determine this if $k$ goes to infinity (which I am thinking means there is neither an even nor an odd number of terms)?
How would I determine this?
|
Using the general form derived as:
\begin{align}
\sum_{k=0}^{n} x^{k} &= \sum_{k=0}^{\infty} x^{k} - \sum_{k=n+1}^{\infty} x^{k} = \frac{1}{1-x} - \frac{x^{n+1}}{1-x} = \frac{1 - x^{n+1}}{1-x}
\end{align}
then for $x = -1$ the result becomes
$$ \sum_{k=0}^{n} (-1)^{k} = \frac{1 + (-1)^{n}}{2}.$$
(The intermediate infinite sums require $|x|<1$, but the resulting identity $\sum_{k=0}^{n} x^{k} = \frac{1-x^{n+1}}{1-x}$ holds for every $x\neq 1$, as multiplying both sides by $1-x$ shows, so the substitution $x=-1$ is legitimate.)
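And a quick check of the closed form (the range of $n$ is arbitrary):

```python
for n in range(0, 25):
    s = sum((-1)**k for k in range(n + 1))
    assert s == (1 + (-1)**n) // 2    # 1 when n is even, 0 when n is odd
```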
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2003955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Power set of Cartesian product I'm trying to see the differences between a Power set of a cartisian product and the cartisian product of two power sets. I have these 2 examples:
$P(\{1, 2\} \times \{3, 4\})$
$P(\{1, 2\}) \times P(\{3, 4\})$
With the first one I can arrive here, but I'm not sure how to continue to compute the power set.
$P(\{(1,3), (1,4), (2,3), (2,4)\})$
With the second one, I'm stuck on the Cartesian product between the two results of the power sets.
$\{\{\}, \{1\}, \{2\}, \{1,2\}\} \times \{\{\}, \{3\}, \{4\}, \{3,4\}\} $
How can I keep going from there? And is the notation correct?
|
Well, on a practical and obvious level: $A \times B$ is a set of ordered pairs, so $P(A\times B)$ is a set of sets of ordered pairs. $P(A)$ is a set of sets of elements, so $P(A) \times P(B)$ is a set of ordered pairs of sets.
It might seem abstract, but a set of ordered pairs of sets, {({..},{....})}, is completely different from a set of sets of ordered pairs, {{(x,y)}}.
e.g.
P({1,2} X {3,4}) = P({(1,3),(1,4),(2,3),(2,4)}) = {$\emptyset$, {(1,3)},{(1,4)},{(2,3)},{(2,4)},{(1,3),(1,4)},{(1,3),(2,3)},{(1,3),(2,4)},{(1,4),(2,3)}{(1,4),(2,4)},{(2,3),(2,4)},{(1,3),(1,4),(2,3)},{(1,3),(1,4),(2,4)},{(1,3),(2,3),(2,4)},{(1,4),(2,3),(2,4)},{(1,3),(1,4),(2,3),(2,4)}}
whereas P({1,2}) X P({3,4}) = {$\emptyset$, {1},{2},{1,2}} X {$\emptyset$, {3},{4},{3,4}} =
= {($\emptyset, \emptyset$),($\emptyset$, {3}), ($\emptyset$, {4}),($\emptyset$, {3,4}),({1}, $\emptyset$),({1}, {3}), ({1}, {4}),({1}, {3,4}),({2}, $\emptyset$),({2}, {3}), ({2}, {4}),({2}, {3,4}),({1,2}, $\emptyset$),({1,2}, {3}), ({1,2}, {4}),({1,2}, {3,4})}
Different things. By coincidence they both have 16 elements.
In general:
$|A \times B| = |A| * |B|$
$|P(A)| = 2^{|A|}$.
So $|P(A \times B)| = 2^{|A|\,|B|}$ while $|P(A) \times P(B)| = 2^{|A|}\,2^{|B|}=2^{|A| + |B|}$, and since $|A|\,|B| \ne |A|+|B|$ in general,
we know the two sets cannot be equal in general.
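The two constructions are easy to compare concretely in Python (using `frozenset` so that sets can be elements of sets; the helper name is my own):

```python
from itertools import chain, combinations, product

def powerset(s):
    s = list(s)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(s, r)
                                         for r in range(len(s) + 1))]

A, B = {1, 2}, {3, 4}
P_of_prod = set(powerset(product(A, B)))             # sets of ordered pairs
prod_of_P = set(product(powerset(A), powerset(B)))   # ordered pairs of sets
# both have 16 elements here, yet they share no element at all
```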
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2004106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 0
}
|
Parametrization of the intersection between a sphere and a plane I can't find a way to get the parametric equation $\gamma(t)=(x(t),y(t),z(t))$ of a curve that is the intersection of a sphere and a plane (not parallel to any coordinate planes). That is
$$\begin{cases} x^2+y^2+z^2=r^2 \\ ax+by+cz=d \end{cases}$$
I don't know how to rearrange the variables in order to get something easily parameterizable.
Can anyone let me know the main steps to get the parameterization of this type of curve?
Example
$$\begin{cases} x^2+y^2+z^2=1 \\ x+y+z=0 \end{cases}$$
Answer: $\gamma(t)=\left(\frac{\sqrt{2}}{2}\cos t +\frac{\sqrt{6}}{6}\sin t,\ -\frac{\sqrt{2}}{2}\cos t +\frac{\sqrt{6}}{6}\sin t,\ \frac{\sqrt{6}}{3}\sin t\right), \quad t \in [0,2\pi]$
|
Just to present a vectorial approach to the problem
Consider the plane equation written as:
$$
\frac{{a\,x + b\,y + c\,z}}{{\sqrt {a^{\,2} + b^{\,2} + c^{\,2} } }} = \frac{d}{{\sqrt {a^{\,2} + b^{\,2} + c^{\,2} } }}
$$
That means that the points ${\bf p} = \left( {x,y,z} \right)$ on the plane
shall project onto the unit vector ${\bf n} = \left( {a,b,c} \right)/\sqrt {a^{\,2} + b^{\,2} + c^{\,2} } $
at a constant distance $\delta = d/\sqrt {a^{\,2} + b^{\,2} + c^{\,2} } $, i.e.
$$
{\bf p} \cdot {\bf n} = \delta
$$
so that the plane is distant $\delta$ from the origin.
At the same time we shall have
$$
\left| {\,{\bf p}\,} \right| = r
$$
So with reference to the sketch, we can put
$$
{\bf p} = \delta \,{\bf n} + \sqrt {r^{\,2} - \delta ^{\,2} } \,{\bf t} = \delta \,{\bf n} + \rho \,{\bf t}
$$
where ${\bf t}$ is a generic unit vector parallel to the plane, that is normal to ${\bf n}$.
We can express ${\bf t}$ by taking two unit vectors ${\bf u}$ and ${\bf v}$, normal to ${\bf n}$ and to each other,
and then putting
$$
{\bf t} = \cos \theta \,{\bf u} + \sin \theta \,{\bf v}
$$
To determine ${\bf u}$ and ${\bf v}$ we can take (if, e.g. $c \ne 0$)
$$
{\bf u} = \left( {0, - c,b} \right)/\sqrt {b^{\,2} + c^{\,2} } \quad \quad {\bf v} = {\bf n} \times {\bf u}
$$
example
with $r=4 \; a=1 \; b=2 \; c=3 \; d=7$ we get
$$
{\bf n} = \,\sqrt {14} /14\,\left( {\begin{array}{*{20}c}
1 \\
2 \\
3 \\
\end{array}} \right)\;\quad \delta = \sqrt {14} /2\quad \rho = 5\sqrt 2 /2
$$
$$
{\bf u} = \sqrt {13} /13\;\left( {\begin{array}{*{20}c}
0 \\
{ - 3} \\
2 \\
\end{array}} \right)\quad {\bf v} = \sqrt {182} /182\;\left( {\begin{array}{*{20}c}
{13} \\
{ - 2} \\
{ - 3} \\
\end{array}} \right)
$$
and thus
$$
\begin{array}{l}
{\bf p} = \delta \,{\bf n} + \rho \,\left( {\cos \theta \,{\bf u} + \sin \theta \,{\bf v}} \right)\quad \Rightarrow \\
\Rightarrow \quad \left( {\begin{array}{*{20}c}
x \\
y \\
z \\
\end{array}} \right) = \frac{1}{2}\,\left( {\begin{array}{*{20}c}
1 \\
2 \\
3 \\
\end{array}} \right) + \frac{{5\sqrt {364} }}{{364}}\,\left( {\sqrt {14} \cos \theta \;\left( {\begin{array}{*{20}c}
0 \\
{ - 3} \\
2 \\
\end{array}} \right) + \sin \theta \,\;\left( {\begin{array}{*{20}c}
{13} \\
{ - 2} \\
{ - 3} \\
\end{array}} \right)} \right) \\
\end{array}
$$
and you can verify that:
$$
{\bf p} \cdot {\bf n} = \delta \,\quad {\bf p} \cdot {\bf p} = r^{\,2}
$$
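As a numerical sanity check on the worked example above ($r=4$, $a=1$, $b=2$, $c=3$, $d=7$), here is a short Python sketch confirming that every parametrized point lies on both the plane and the sphere:

```python
import math

r, a, b, c, d = 4, 1, 2, 3, 7
norm = math.sqrt(a*a + b*b + c*c)              # sqrt(14)
n = (a/norm, b/norm, c/norm)                   # unit normal of the plane
delta = d / norm                               # distance of the plane from the origin
rho = math.sqrt(r*r - delta*delta)             # radius of the intersection circle

nb = math.sqrt(b*b + c*c)
u = (0, -c/nb, b/nb)                           # u = (0, -c, b)/sqrt(b^2 + c^2)

def cross(p, q):
    return (p[1]*q[2] - p[2]*q[1],
            p[2]*q[0] - p[0]*q[2],
            p[0]*q[1] - p[1]*q[0])

v = cross(n, u)                                # v = n x u

def point(theta):
    """p = delta*n + rho*(cos(theta)*u + sin(theta)*v)"""
    return tuple(delta*ni + rho*(math.cos(theta)*ui + math.sin(theta)*vi)
                 for ni, ui, vi in zip(n, u, v))

for theta in (0.0, 1.0, 2.5):
    p = point(theta)
    print(a*p[0] + b*p[1] + c*p[2], sum(x*x for x in p))   # ~7 and ~16 each time
```

Since $\bf u$ and $\bf v$ are orthonormal and both orthogonal to $\bf n$, the checks ${\bf p}\cdot{\bf n}=\delta$ and $|{\bf p}|^2=\delta^2+\rho^2=r^2$ hold for every $\theta$.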
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2004224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
Define equivalence relation on set of real numbers so distinct equivalence classes are $[2k,2k+2)$ Define an equivalence relation $\sim$ on $\mathbb{R}$ such that the distinct equivalence classes of $\sim$ are $[2k,2k+2)$, where $k$ is an integer (Hint: find an appropriate function $f$ with all real numbers as its domain and let $R = \{(x,y)|f(x)=f(y)\}$.)
I feel like I can do these problems, but starting out I almost never know what is being asked. I am so utterly confused. I know what an equivalence relation is, and I know what equivalence classes are, but how can equivalence classes be $[2k,2k+2)$ with $k $ integer? I think I'm confusing myself further.
|
When they say that $A $ is an equivalence class they mean that $a, b \in A \rightarrow a \sim b $. So they are asking you to create an equivalence relation $R $ for which the classes are the intervals of the form $[2k, 2k + 2) $.
Let $f(x) = \left\lfloor{\frac{x }{2}}\right\rfloor$
Now define the relation $R = \{(x, y) : f(x) = f(y)\} $.
That is, $xRy \iff f(x) = f(y) $.
You should now be able to prove that $R $ is an equivalence relation and that its equivalence classes are as asked.
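A minimal check of the hinted construction with $f(x)=\left\lfloor x/2 \right\rfloor$, sketched in Python:

```python
import math

def f(x):
    return math.floor(x / 2)

# all points of [2k, 2k+2) share the same value of f, so they form one class
print(f(4.0), f(5.0), f(5.999))    # 2 2 2 -> 4.0, 5.0, 5.999 are all equivalent
print(f(3.999), f(4.0))            # 1 2   -> 3.999 and 4.0 are not equivalent

# the class of any x is exactly [2*f(x), 2*f(x) + 2)
for x in (-3.5, 0.0, 7.25):
    k = f(x)
    print(f"{x} lies in [{2*k}, {2*k + 2})")
```

Note the floor works for negative reals too: $f(-3.5)=-2$, placing $-3.5$ in $[-4,-2)$ as required.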
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2004320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Analog of Gauss-Bonnet formula for principal bundles over manifolds with boundary The Gauss-Bonnet formula gives a topological invariant as an integral over a local density on the given manifold. In particular, when there is a boundary, GB formula has to be supplemented by a boundary term, for example the extrinsic curvature in two dimensional case.
I am wondering if there is an analogous topological invariant for a gauge theory (or principal bundle) on manifolds with boundary, and if so, what is the local density to be integrated over, including the boundary term. I am particularly interested in the case of the two dimensional base manifold with U(1) gauge group.
Since I am from a physics background and pretty new to this kind of subject, it would be especially great if one can also point to physics-oriented reference on this.
|
What you are looking for is the theory of characteristic classes of principal bundles. You can take any connection on the principal bundle, plug it in the appropriate $G$-invariant polynomial and what you get is an element of cohomology that descends to the base manifold. For a certain choice you would get the Chern class of the principal bundle which is integral in the sense that it corresponds to an element of integral singular cohomology via the de-Rham isomorphism.
If you want to learn the theory of connections, curvature and characteristic classes for principal bundles, I recommend the books 'Foundations of differential geometry' by Kobayashi and Nomizu.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2004431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
equivalent expressions for trace norm I understand the trace norm (or nuclear norm) of a matrix $X\in\mathbf{R}^{n\times m}$is usually defined as
$$\|X\|_{tr}=\sum_{i=1}^{\min\{m,n\}}\sigma_i$$
where $\sigma_i$'s are the singular values of $X$.
However, some papers uses an alternative definition:
$$\|X\|_{tr}=\min_{X=AB'}\|A\|_{Fro}\|B\|_{Fro}$$
where $X=AB'$ is some arbitrary decomposition of $X$ and $\|*\|_{Fro}$ is the Frobenius norm.
But why are these two definitions equivalent?
As far as I understand, if $X=USV'$ is the SVD of $X$ and if $A=UC, B=VSC^{-1}$ where $C$ is diagonal s.t. $X=AB$, then $C=S^{1/2}$ leads to the minimum of $\|A\|_{Fro}\|B\|_{Fro}$.
However, I don't see how to prove the equivalency when $A,B$ are some arbitrary decomposition (not constraint to the form $A=UC,B=VSC^{-1}$)
Can anyone help me ?
|
Note that, since $X^*X$ is positive-semidefinite (and square), the square roots of its eigenvalues are the eigenvalues of the square root. Thus
$$
\|X\|_{\rm tr}=\sum_j\lambda_j(X^*X)^{1/2}=\sum_j\lambda_j((X^*X)^{1/2})=\sum_j\lambda_j(|X|)=\text{Tr}\,(|X|)
$$
(hence the name of the norm). Now, if $X=AB$, then with $AB=W|AB|$ the polar decomposition,
\begin{align}
\|AB\|_{\rm tr}&=\text{Tr}\,(|AB|)=\text{Tr}\,((W^*A)B)\leq\text{Tr}(A^*WW^*A)^{1/2}\,\text{Tr}\,(B^*B)^{1/2}\\ \ \\
&=\text{Tr}(A^*A)^{1/2}\,\text{Tr}\,(B^*B)^{1/2}=\|A\|_2\,\|B\|_2
\end{align}
(where the inequality is due to Cauchy-Schwarz).
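The inequality $\|AB\|_{\rm tr}\le\|A\|_{F}\,\|B\|_{F}$ can be spot-checked numerically. Below is a small self-contained Python sketch (restricted to $2\times2$ matrices so the singular values come from the quadratic formula and no linear-algebra library is needed; the helper names are just for illustration):

```python
import math

def frob(M):
    """Frobenius norm of a matrix given as a list of rows."""
    return math.sqrt(sum(x*x for row in M for x in row))

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def nuclear_2x2(X):
    """Sum of singular values of a 2x2 matrix via eigenvalues of X^T X."""
    Xt = [[X[0][0], X[1][0]], [X[0][1], X[1][1]]]
    XtX = matmul(Xt, X)
    tr = XtX[0][0] + XtX[1][1]
    det = XtX[0][0]*XtX[1][1] - XtX[0][1]*XtX[1][0]
    disc = math.sqrt(max(tr*tr - 4*det, 0.0))
    return math.sqrt((tr + disc) / 2) + math.sqrt(max((tr - disc) / 2, 0.0))

A = [[1.0, 2.0], [0.0, 1.0]]
B = [[3.0, -1.0], [1.0, 2.0]]
X = matmul(A, B)
print(nuclear_2x2(X), frob(A) * frob(B))   # trace norm <= product of Frobenius norms
```

Here the trace norm of $X=AB$ comes out around $7.28$ while $\|A\|_F\|B\|_F\approx 9.49$, consistent with the Cauchy-Schwarz bound above.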
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2004542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
If $\dfrac x{x^{1.5}}=(8x)^{-1}$ and $x>0$, then $x=\;?$
If $\dfrac x{x^{1.5}}=(8x)^{-1}$ and $x>0$, then $x=\;?$
I solved this many different ways and every time I get a different answer...
Attempt #1:
$$\frac1{\sqrt x}=\frac1{8x}$$ $$8x=\sqrt x$$ $$64x^2=x$$ $$x(64x-1)=0$$ $$x=0,\;\frac1{64}$$ $$x=\frac1{64}$$
Attempt #2:
$$\frac1{\sqrt x}=\frac1{8x}$$ $$\frac{\sqrt x}x=\frac1{8x}$$ $$x=8x\sqrt x$$ $$\require{cancel}\cancel{8\sqrt x=0}\quad{8\sqrt x=1}$$ $$\cancel{x=0}\quad{\sqrt x=\frac18}$$ $$\cancel{\text{no solution}}\quad x=\frac1{64}$$
Attempt #3:
$$\require{cancel}\cancel{\frac x{x^{0.5}}=1}\quad \frac {8x}{x^{0.5}}=1$$ $$\cancel{\sqrt x=1}\quad 8x=\sqrt x$$ $$\cancel{x=1}\quad x=\frac1{64}$$
Can someone provide a solution and also explain what I did wrong in each of these attempts? Thanks.
|
Hello dear!
$$\frac{x}{x^{1.5}} = (8x)^{-1}$$
$$x^{(1-1.5)} = \frac{1}{8x}$$
$$ x^{-0.5}= \frac{1}{8x}$$
$$\frac{1}{\sqrt{x}} = \frac{1}{8x}$$
$$x = \frac{\sqrt{x}}{8}$$
On squaring both sides,
$$x^2=\frac{x}{64}$$
Given $x>0$:
$$x = \frac{1}{64}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2004639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Determining convergence with taking the limit of a function greater than it Why does the Sandwhich Theorem have to be used here? Why couldn't we just test for the result of the right side of the inequality and say that $a_n$ converges, since the function greater than itself converges?
|
You may have confused with this correct theorem (called comparison test):
If $\sum b_n$ converges and $0\leq a_n\leq b_n$ then $\sum a_n$ also converges.
What you mentioned: " $a_n$ converges, if the function greater than itself converges" is not correct:
Example: $a_n=(-1)^n\leq 2$, and 2 being a constant sequence surely converges, but $a_n$ does not converge.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2004796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Does $f\colon x\mapsto 2x+3$ mean the same thing as $f(x)=2x+3$? In my textbook there is a question like below:
If $$f:x \mapsto 2x-3,$$ then $$f^{-1}(7) = $$
As a multiple choice question, it allows for the answers:
A. $11$
B. $5$
C. $\frac{1}{11}$
D. $9$
If what I think is correct and I read the equation as:
$$f(x)=2x-3$$
then,
$$y=2x-3$$
$$x=2y-3$$
$$x+3=2y$$
$$\frac {x+3} {2} = y$$
therefore:
$$f^{-1}(7)=\frac {7+3}{2}$$
$$=5$$
|
As a matter of not merely style but of writing in complete sentences you should write $x=f^{-1}(7)\iff f(x)=7\iff 2x-3=7\iff (2x-3)+3=7+3\iff 2x=10\iff x=5.$ This reduces errors and shows the flow of the reasoning. In this case the sequence of "$\iff$" shows that the implications go in both directions. The importance of this is that we can conclude that $x=5\implies f(x)=7$ and that $x\ne 5\implies f(x)\ne 7.$ There are other equations that imply $x=5$ but in fact have no solutions. For example $x=x+1\implies x=5,$ but $x=5$ does not imply $x=x+1.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2004895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 6,
"answer_id": 4
}
|
Find all prime numbers p such that 2p+1 is a perfect cube. $$2p+1=n^3$$
$$ 2p = n^3 - 1$$
$$ 2p = (n-1)(n^2+n+1)$$
The only number that fits the criteria is $13$; how can I prove this?
|
HINT: The factorization of $2p$ shows that $n-1$ must be either $2$ or $p$. (The other two cases, $n-1=1$ and $n^2+n+1=1$, are trivially eliminated.) You already know what happens when $n-1=2$. Otherwise, $n-1$ must be an odd prime $p$. In that case $n$ is even. What does that tell you about $n^3-1$?
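Brute force agrees with the factorization argument; here is a quick hypothetical search helper in Python (the name `primes_with_cube` is just for illustration):

```python
def primes_with_cube(limit):
    """Primes p <= limit such that 2p + 1 is a perfect cube."""
    found = []
    n = 2
    while (n**3 - 1) // 2 <= limit:
        twop = n**3 - 1            # 2p = (n-1)(n^2+n+1)
        p = twop // 2
        if twop % 2 == 0 and p > 1 and all(p % d for d in range(2, int(p**0.5) + 1)):
            found.append(p)
        n += 1
    return found

print(primes_with_cube(10**6))     # [13]
```

Note that $n^2+n+1$ is always odd, so $n-1$ must carry the factor $2$, which is exactly what makes the search come up empty beyond $p=13$.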
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2005023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
$m>2$ and $n > 2$ are relatively prime $\Rightarrow$ no primitive root of $mn$
Show that if $m>2$ and $n > 2$ are relatively prime, there is no primitive root of $mn$
I know that $mn > 4$, and thus $\varphi(mn)$ is an even number so that I might write $\varphi(mn) = 2x$ for an integer $x$. If I could prove that $x = \frac{1}{2} \varphi(mn)$ is the order of some integer $a$ modulo $mn$, then I've proven that there is no primitive root of $mn$.
Since $m$ and $n$ are relatively prime, I can write the equation
\begin{align}
a^{\frac{1}{2} \varphi(mn)} \equiv 1 \pmod{mn}
\end{align}
as a set of congruences
\begin{align}
\begin{cases}
a^{\frac{1}{2} \varphi(mn)} \equiv 1 \pmod{m} \\
a^{\frac{1}{2} \varphi(mn)} \equiv 1 \pmod{n}
\end{cases}
\end{align}
This is where I get stuck. Am I on the right track? Or is there a better way to prove this?
|
Carmichael's reduced totient $\lambda(m)$ and $\lambda(n)$ are both even (since $m,n>2$), so $\lambda(mn)=\operatorname{lcm}(\lambda(m),\lambda(n))\le \frac{\lambda(m)\lambda(n)}{2}\le \frac{\varphi(m)\varphi(n)}{2}<\varphi(mn)$. Since the order of every unit divides $\lambda(mn)$, no element has order $\varphi(mn)$, so there is no primitive root.
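This can be verified by brute force: a small Python sketch checking that no unit mod $mn$ attains order $\varphi(mn)$ for several coprime pairs $m,n>2$ (helper names are illustrative):

```python
from math import gcd

def multiplicative_order(a, n):
    """Smallest k >= 1 with a^k = 1 (mod n); assumes gcd(a, n) = 1."""
    k, x = 1, a % n
    while x != 1:
        x = x * a % n
        k += 1
    return k

def has_primitive_root(n):
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    phi = len(units)
    return any(multiplicative_order(a, n) == phi for a in units)

# m, n > 2 coprime: mn never has a primitive root
for m, n in [(3, 4), (3, 5), (4, 5), (5, 7), (3, 8)]:
    print(m * n, has_primitive_root(m * n))   # all False
```

For contrast, a prime modulus such as $7$ does have a primitive root, which the same function confirms.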
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2005121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Will the resulting Euclidean space be Hilbertian? Consider the linear space of sequences $x=( x_{1}, x_{2}, \ldots )$, $x_{k}\in \mathbb{R}$, such that $\sum_{k=1}^{\infty }x_{k}^{2}<\infty$.
Let $(x,y)=\sum_{k=1}^{\infty }\lambda_{k}x_{k}y_{k}$, where $\lambda_{k}\in \mathbb{R}, 0<\lambda_{k}<1$. Will the resulting Euclidean space be Hilbertian?
|
The form $(x,y)$ is bilinear, symmetric and positive definite (the sum converges since $0<\lambda_k<1$), i.e. a scalar product on
$$
H=\{( x_{1}, x_{2}, ... )\in\mathbb R^\infty\;|\,\sum_{k=1}^{\infty }x_{k}^{2}<\infty\}
$$
Whether $H$ is Hilbertian or not depends on $\lambda_k$. For example, if
$$
\lambda_k=2^{-k},
$$
then $H$ is not hilbertian. In fact, the sequence
$$
x^{(k)}=(1,\ldots,1,0,\ldots)\qquad (\textrm{with } k \textrm{ ones})
$$
is Cauchy in the norm of the $\lambda$-scalar product, but has no limit in $H$.
Instead
$$
H'=\{( x_{1}, x_{2}, ... )\in\mathbb R^\infty\;|\,\sum_{k=1}^{\infty }\lambda_kx_{k}^{2}<\infty\}
$$
is Hilbertian
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2005198",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find all numbers such that "Product of all divisors=cube of number". While solving some old olympiad problems I came across this one. As I'm stuck on it, I'm here.
The problem is: Find all positive integers $N$ such that the product of all the positive divisors of $N$ is equal to $N^3$.
Since I was not able to solve this one mathematically, I tried a hit-and-trial method to find the pattern and then work from it. I got that:
12 has divisors 1, 2, 3, 4, 6, 12, the product of all of which gives 1728 ($12^3$). Similarly 18, 20, 28 also follow the same pattern. I noticed that all of them have 4 factors, but I don't think it can take me any further (I also think that a perfect power (such as $2^3$) will not follow the case).
After all my efforts I'm turning to you guys. Any mathematical formulation or suggestion is heartily welcome. Thanks.
|
Hint: The way to go is by prime factorisation. You need only consider the case up to four distinct prime factors (the rest should follow easily from your discussion). If you realise, all your found examples follow the pattern $pq^2$, where $p,q$ are primes.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2005297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 6,
"answer_id": 2
}
|
Jacobian and local invertibility of function Following is a question in the text book:
Show by an example that the condition that the Jacobian vanishes at a point is not necessary for a function to be locally invertible at that point..... and the answer given is $f : \mathbb{R}^2 \to \mathbb{R}^2$, $f(x,y) = (x^3, y^3)$
I am a bit confused by the question itself because my understanding is that for local invertibility we check that the Jacobian is not zero... also for $f(x,y) = (x^3, y^3)$, $J$ works out to $9x^2y^2$. So don't we say that this function is not invertible at either $x=0$ or $y=0$, since $J$ at these values becomes $0$? I am unable to understand the solution given in the textbook.
I request help to understand this problem and the given solution
|
It looks like a typo in the text. It should say "...the condition that the Jacobian does not vanish...".
$f(x,y)=(x^3,y^3)$ is invertible. Its inverse is $f^{-1}(x,y)=(\sqrt[3]{x},\sqrt[3]{y})$. What happens is that the inverse is not differentiable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2005447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
CCRT = Constant case CRT: $\,x\equiv a\pmod{\! 2},\ x\equiv a\pmod{\! 5}\iff x\equiv a\pmod{\!10}$ Problem: Find the units digit of $3^{100}$ using Fermat's Little Theorem (FLT).
My Attempt: By FLT we have $$3^1\equiv 1\pmod2\Rightarrow 3^4\equiv1\pmod 2$$ and $$3^4\equiv 1\pmod 5.$$ Since $\gcd(2,5)=1$ we can multiply the moduli and thus, $3^4\equiv 1\pmod {10}\Rightarrow3^{4\cdot 25}\equiv 1\pmod{10}.$ So the units digit is $1.$
|
Yours is a valid, clean argument. It is based on this:
If $m$ and $n$ divide $a$, then $lcm(m,n)$ divides $a$.
In your case, you have that $2$ and $5$ divide $3^4-1$, and so $10=lcm(2,5)$ divides $3^4-1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2005579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
}
|
What is $E(X^2)$ mean in literal terms? In my probability and statistics class we learned about expected value, or $E(X)$. We also did some work about finding expected values of functions and such, like $E(g(x))$. And in the case of finding the variance, one of the steps involve finding $E(X^2)$. Does this mean anything in real-life terms?
Using a geometric distribution as an example, is $E(X^2)$ the expected number of times a trial needs to happen until the event $X$ happens twice in a row?
|
Here's an example. Suppose $X$ is a random variable that represents the outcome of a roll of a die numbered $1$ to $6$ inclusive. No assumption is made about the fairness of the die. Then $X^2$ is a random variable that represents the outcome of the square of the roll; whereas $$X \in \{1, 2, 3, 4, 5, 6\},$$ we have $$X^2 \in \{1, 4, 9, 16, 25, 36\}.$$
Now, if we suppose that this die is indeed fair, then we can easily compute the expected value of $X$: $$\operatorname{E}[X] = \sum_{x=1}^6 x \Pr[X = x] = \frac{1}{6} (1 + 2 + 3 + 4 + 5 + 6) = \frac{7}{2}.$$ The expectation of $X^2$ is: $$\operatorname{E}[X^2] = \sum_{x=1}^6 x^2 \Pr[X = x] = \frac{1}{6}(1 + 4 + 9 + 16 + 25 + 36) = \frac{91}{6}.$$ This should give you some intuition behind the meaning of $X$ versus $X^2$ and their corresponding expectations.
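The two expectations above can be reproduced exactly with rational arithmetic; here is a short Python sketch (also computing the variance $\operatorname{E}[X^2]-\operatorname{E}[X]^2$ as a bonus):

```python
from fractions import Fraction

faces = range(1, 7)
p = Fraction(1, 6)                       # fair die

E_X  = sum(x * p for x in faces)         # 7/2
E_X2 = sum(x * x * p for x in faces)     # 91/6
var = E_X2 - E_X**2                      # 35/12

print(E_X, E_X2, var)
```

Using `Fraction` avoids any floating-point rounding, so the results match the hand computation exactly.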
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2005675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
}
|
Deriving Taylor series without applying Taylor's theorem. First, a neat little 'proof' of the Taylor series of $e^x$.
Start by proving with L'Hospital's rule or similar that
$$e^x=\lim_{n\to\infty}\left(1+\frac xn\right)^n$$
and then binomial expand into
$$e^x=\lim_{n\to\infty}1+x+\frac{n-1}n\frac{x^2}2+\dots+\frac{(n-1)(n-2)(n-3)\cdots(n-k+1)}{n^{k-1}}\frac{x^k}{k!}+\dots$$
Evaluating the limit, we are left with
$$e^x=1+x+\frac{x^2}2+\dots+\frac{x^k}{k!}+\dots$$
which is our well known Taylor series of $e^x$.
As dxiv mentions, we can exploit the geometric series:
$$\frac1{1-x}=1+x+x^2+\dots$$
$$\ln(1+x)=x-\frac12x^2+\frac13x^3-\dots$$
$$\arctan(x)=x-\frac13x^3+\frac15x^5-\dots$$
which are found by integrating the geometric series and variants of it.
I was wondering if other known Taylor series can be created without applying Taylor's theorem. Specifically, can we derive the expansions of $\sin$ or $\cos$?
|
Alan Turing, at a young age, derived the series expansion of $\arctan$ without using (and, purportedly without knowing) calculus whatsoever.
Using the identity
$$\tan 2x = \frac{2 \tan x}{1 - \tan^2 x},$$
he obtained
$$\tan(2 \arctan x) = \frac{2x}{1-x^2},$$
and
$$2 \arctan x = \arctan\left( \frac{2x}{1-x^2}\right).$$
Using the geometric series for $|x| < 1$,
$$\tag{1}2 \arctan x =\arctan [2x(1 + x^2 + x^4 + x^6 + \ldots)]$$
Assuming $\arctan x = a_0 + a_1x + a_2x^2 + \ldots$ and matching coefficients in the expansions of each side of (1), he obtained
$$\arctan x = a_1\left(x - \frac{1}{3} x^3 + \frac{1}{5}x^5 - \ldots \right).$$
Some basic trigonometry reveals
$$a_1 = \lim_{x \to 0} \frac{\arctan x}{x} = 1.$$
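Turing's series (with $a_1=1$) can be checked numerically against the library arctangent; a short Python sketch, valid for $|x|<1$:

```python
import math

def arctan_series(x, terms=200):
    """Partial sum of x - x^3/3 + x^5/5 - ... (converges for |x| < 1)."""
    return sum((-1)**k * x**(2*k + 1) / (2*k + 1) for k in range(terms))

for x in (0.1, 0.5, -0.7):
    print(x, arctan_series(x), math.atan(x))   # partial sum matches math.atan
```

With 200 terms the truncation error at these sample points is far below double precision, so the two columns agree to machine accuracy.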
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2005872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 3
}
|
Prove that if P(x) is a quartic polynomial, then there can only be at most one line that is tangent to it at two points. Prove that if P(x) is a quartic polynomial, then there can only be at most one line that is tangent to it at two points.
From what I have seen, the only case in which I have found a tangent line at 2 points is when the tangent line is horizontal. Is there any other way to approach it?
|
Suppose $P(x)$ is quartic (monic, say; otherwise divide through by the leading coefficient) and $T(x)=Ax+B$ is a line that is tangent to $P(x)$ at distinct points $\xi_1,\xi_2$. Then you can factor $P-T$ as
$$P(x)-T(x) = (x-\xi_1)^2(x-\xi_2)^2$$
Then $$P''(x)= 2((x-\xi_1)^2+4(x-\xi_1)(x-\xi_2)+ (x-\xi_2)^2))$$
So that $P''(\xi_1) = P''(\xi_2)= 2(\xi_1-\xi_2)^2$. This determines $\xi_1,\xi_2$ uniquely, since they must be symmetric points with respect to the vertex of the quadratic $P''$.
In the example given in the comments, $P(x)= x^4-2x^2+x+1$ so $P''(x) = 12x^2 -4$ and $\xi_1=-\xi_2$. Hence
$$12\xi_1^2- 4= 2(\xi_1+\xi_1)^2$$
$$\xi_1^2 = 1$$
whence $\xi_1 = -\xi_2 = 1$. In this case,
$$P(x)-T(x) = (x-1)^2(x+1)^2$$
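For the example from the comments, the factorization can be verified with plain coefficient arithmetic (Python sketch; the tangent line here works out to $T(x)=x$):

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, constant term first."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

sq = lambda p: poly_mul(p, p)
lhs = poly_mul(sq([-1, 1]), sq([1, 1]))    # (x-1)^2 (x+1)^2

# P(x) - T(x) with P = x^4 - 2x^2 + x + 1 and T(x) = x
P = [1, 1, -2, 0, 1]
T = [0, 1]
rhs = [a - b for a, b in zip(P, T + [0, 0, 0])]

print(lhs, rhs)   # both [1, 0, -2, 0, 1], i.e. x^4 - 2x^2 + 1
```

Both sides expand to $x^4-2x^2+1$, confirming the double tangency at $x=\pm1$.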
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2005999",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Whispering wall function After visiting the Whispering wall where a whisper can be clearly transmitted between two locations on the surface of a dam wall over more than 100 metres, I was puzzled as to how this occurs. I decided to try to solve the following problem: What must be the shape of a curved wall such that any sound emitted towards the wall from one corner will be reflected directly to the opposite corner as in the following diagram (co-ordinates are $x$ and $y$)?
From this I derived the following differential equation (derivation below), but myriad substitutions and changes of variable have still left me puzzled as to how to put it into a form which can be solved:
$$(y')^2+y'\left(\frac{a^2+y^2-x^2}{ay}\right)+\frac{x}{a}=0$$
for $-a<x<0$ with $y(\pm a)=0$ and $y'(0)=0$, but I'm not sure that this is right. Can anyone solve this equation, or else find a mistake in it and find the correct equation, to solve my geometrical problem?
Derivation (note that from the answers it would appear that this is slightly incorrect): Taking the following horrible paint diagram where $D$ is the point at which the sound wave is reflected and $AB$ is the tangent to the curve at that point, where we must have that the angles $BDC$ and $ADF$ marked $\theta$ are equal since a specular reflection is assumed to occur:
(Note that this is for negative $x$ only so $-x>0$; the vertical line is through the maximum of the curve). I then take the triangle $DBC$ and divide the lengths of the sides by $a-x$ to find that (using the relation between the tangent and the derivative):
$$\theta=\arctan{y’}+\arctan{\frac{y}{a-x}}$$
I then take the triangle $ADE$ and divide the lengths by $y$ to obtain:
$$\theta=\arctan{\frac{1}{y'}}-\arctan{\frac{a+x}{y}}$$
Setting these equal, taking $\tan$ of both sides, and using the formula for the tangent of a sum of angles, I obtain two fractions which I cross-multiply; collecting terms then gives the differential equation shown above.
(Note that interesting facts about similar walls which might better explain the Whispering wall I originally mentioned can be found here, here and here, but I am now specifically looking for a solution to my problem. Also note that the Wikipedia page in my very first link mentioned a 'parabolic effect', but I could not see how that would be relevant to my problem.)
|
Following @Djura Marinkov's remark, the correct equation to study is
\begin{equation}
y'^2 + \frac{x^2 - y^2 - a^2}{x y} y' -1 = 0. \tag{1}
\end{equation}
Using the substitution $y(x)^2 = z(\eta(x))$ where $\eta(x) = x^2$, we obtain
\begin{equation}
z'^2 + \left(1-\frac{z+a^2}{\eta}\right) z' - \frac{z}{\eta} = 0, \tag{2}
\end{equation}
which looks marginally easier than $(1)$, but not much. However, this changes when we take the derivative of $(2)$ to $\eta$: then we obtain
\begin{align}
z'' =& \frac{-z(1+z') + z'(-a^2 + \eta(1+z'))}{\eta(-a^2+\eta-z+2 \eta z')}\\
=& \frac{z'^2 + \left(1-\frac{z+a^2}{\eta}\right)z'-\frac{z}{\eta}}{-a^2+\eta-z+2 \eta z'}\\
=& 0,
\end{align}
where we used $(2)$ to obtain the last line. Therefore, $z$ must be a linear function, i.e.
\begin{equation}
y^2 = \alpha x^2 + \beta \tag{3}.
\end{equation}
You can now substitute $(3)$ in $(1)$ to find (a relation between) $\alpha$ and $\beta$, in terms of $a$. Note that this answer reflects (yes) some useful properties of ellipses, see here, here or here.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2006215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Why is the supremum this? $M=\{y\in \mathbb R |y=3x+10:x\in(9,14)\}$
From what I've learned, this means
$9<x<14$?
DEF: Supremum
A number $u\in \mathbb R$ such that

1. $a\leq u$ for all $a\in A$
2. for all $\epsilon>0$ there exists $a\in A$ such that $u-\epsilon<a$
How come 52 is said to be the supremum..? I would like to say that 49 is.
Sorry for the bad layout - I don't really know how to use LaTeX
|
Remember that $x$ doesn't have to be an integer! For example, take $x=13.9999$. Then $y=3x+10$ will be very very close to $52$ (specifically, $51.9997$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2006312",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Is it possible to get an example of two matrices $A,B\in M_4(\mathbb{R})$, both having $\operatorname{rank}<2$, such that $\det(A-\lambda B)\ne 0$, i.e. it is not identically the zero polynomial, where $\lambda$ is an indeterminate (a variable)? I want to say that $(A-\lambda B)$ is a full-rank matrix, viewing $A-\lambda B$ as a polynomial matrix whose entries are linear polynomials.
|
Note that $\operatorname{rank} A,\ \operatorname{rank} B < 2$ means their ranks are $1$ (if either rank is $0$, your question has a trivial answer). That means the columns of $A$ are $v, \alpha_v v, \beta_v v, \delta_v v$ and the columns of $B$ are $u, \alpha_u u, \beta_u u, \delta_u u$. Now note that the columns of $(A - \lambda B)$ will be sums of those multiples of $u$ with multiples of $v$. Regardless of how you pair the $v, \alpha_v v, \beta_v v, \delta_v v$ with the $u, \alpha_u u, \beta_u u, \delta_u u$, all columns of $(A - \lambda B)$ will be of the form $xu + yv$ for some scalars $x$ and $y$. Therefore all $4$ columns are linear combinations of the two vectors $u$ and $v$, and hence the rank of $(A - \lambda B)$ can never be greater than $2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2006437",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
}
|
Solve for $x$: $2\log_4(x+1) \le 1+\log_4 x$
I did:
$$2\log_4(x+1) \le 1+\log_4x \Leftrightarrow \log_4(x^2+1) \le 1+\log_4(x) \Leftrightarrow \log_4\left(\frac{x^2+1}{x}\right) \le 1 \Leftrightarrow 4 \ge \frac{x^2+1}{x} \Leftrightarrow 4x \ge x^2 +1 \Leftrightarrow 0\ge x^2 -4x +1$$
Using the quadratic formula this becomes $x \le 2 + \sqrt{3}$ $\lor$ $x\le 2 - \sqrt{3}$
But my book says that the solution is $[1,1] = [1]$
What did I do wrong?
|
I think you have made mistake in the first step. It should be
$$2\log_4(x+1)=\log_4(x+1)^2=\log_4(x^2+2x+1).$$
Correcting it should give you the solution in your book.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2006674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
$\lim \limits_{x \to e} (1-\log x)\log (x-e)$ without l'hopital's rule $\lim \limits_{x \to e}\space\space(1-\log x)\log (x-e)$
How is it possible to eliminate the indeterminate form without using L'Hospital's Rule? I tried to manipulate this formula but still have the same problem.
|
Let us put $$x=te$$ and compute the limit
$$\lim_{t\to1^-}(1-log(te))log(e-te)$$
$$=-\lim_{t\to1^-}log(t)(1+log(1-t))$$
$$=-\lim_{t\to1^-}log(1+(t-1))(1+log(1-t))$$
$$=-\lim_{t\to1^-} \frac{log(1+(t-1))}{t-1}(t-1+(t-1)log(1-t))$$
$=-1\cdot(0+0)=0$.
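The limit can also be observed numerically, approaching $e$ from the right (where $\log(x-e)$ is defined); a minimal Python sketch:

```python
import math

def g(x):
    return (1 - math.log(x)) * math.log(x - math.e)

e = math.e
for eps in (1e-2, 1e-4, 1e-6, 1e-8):
    print(eps, g(e + eps))   # the product tends to 0 as x -> e+
```

Near $e$ we have $1-\log x \approx -(x-e)/e$, so the product behaves like $-(x-e)\log(x-e)/e$, which vanishes.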
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2006772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How do i integrate this? Hyperbolic substitution? I'm trying to integrate
$\int_0^\pi \sqrt{2+t^2}\; dt$
I know it involves something along the lines of either a hyperbolic or trigonometric substitution.
I tried $t= \sqrt {2}\tan(u)$, i.e. $u=\arctan(\frac{t}{\sqrt{2}})$. But then I get lost and confused. Any guidance please?
|
Try out $t=\sqrt{2} \sinh(z)$. This leads to an easier calculation compared to the trigonometric substitution. More precisely, with $t=\sqrt{2} \sinh(z)$ we get $dt=\sqrt{2} \cosh(z)\, dz$ and $\sqrt{2+t^2}=\sqrt{2}\cosh(z)$, so our integral becomes:
$$ I= 2 \int \sqrt{1+\sinh^2(z)} \cosh(z) \, dz = 2 \int \cosh^2(z) \, dz = \int \left(1+\cosh(2z)\right) dz = z + \frac{1}{2}\sinh(2z) = \frac{1}{2} t \sqrt{2+t^2} + \sinh^{-1}\left(\frac{\sqrt{2}}{2} t \right). $$
Now keep in mind that $\sinh^{-1} (x)= \log(x + \sqrt{1+x^2})$. Finally, of course, evaluate the primitive in $t=0$ and $t=\pi$.
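The antiderivative can be sanity-checked numerically over the given interval $[0,\pi]$; a Python sketch with a homemade composite Simpson rule (no external libraries):

```python
import math

def F(t):
    """Antiderivative of sqrt(2 + t^2) found above."""
    return 0.5 * t * math.sqrt(2 + t*t) + math.asinh(t / math.sqrt(2))

def simpson(f, a, b, n=1000):
    """Composite Simpson rule with an even number n of subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) \
        + 4 * sum(f(a + (2*k - 1) * h) for k in range(1, n // 2 + 1)) \
        + 2 * sum(f(a + 2*k * h) for k in range(1, n // 2))
    return s * h / 3

exact = F(math.pi) - F(0.0)
numeric = simpson(lambda t: math.sqrt(2 + t*t), 0.0, math.pi)
print(exact, numeric)   # the two values agree to many digits
```

Here `math.asinh` plays the role of $\sinh^{-1}(x)=\log(x+\sqrt{1+x^2})$, and the numerical quadrature confirms the closed form.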
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2006879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Finding Polynomial Equations for $1^4 + 2^4 + 3^4 + \ldots n^4$ Find a polynomial expression for
$$ 1^4 + 2^4 + 3^4 + ... + n^4 $$
I know you have to use the big theorem but I can't figure out how you would start to compute the differences. Suggestions?
|
Even though this might be too much for the simple task you want (you can assume a polynomial of fifth degree and calculate the coefficients), I think it is worth mentioning that there is a general formula for the sum of $p$-th powers $1^p+2^p+\dots+n^p$; it is called Faulhaber's formula.
In short
$$1^p+2^p+\dots+n^p = \frac{1}{p+1} \sum_{j=0}^{p}(-1)^j\binom{p+1}{j}B_j n^{p+1-j}$$
where $B_j$ are called Bernoulli numbers, which you can compute for example as
$$
B_n = \sum_{k=0}^{n}\frac{1}{k+1}\sum_{r=0}^{k}(-1)^r \binom{k}{r} r^n
$$
giving the first few values $1, -1/2, 1/6, 0, -1/30, \dots$.
So in your case this becomes
\begin{align}
1^4+2^4+\dots+n^4 &= \frac{1}{5} \sum_{j=0}^{4}(-1)^j\binom{5}{j}B_j n^{5-j}\\
&= \frac{1}{5} ((-1)^0\binom{5}{0}B_0 n^{5}+(-1)^1\binom{5}{1}B_1 n^{5-1}\\
&+(-1)^2\binom{5}{2}B_2 n^{5-2}+(-1)^3\binom{5}{3}B_3 n^{5-3}+(-1)^4\binom{5}{4}B_4 n^{5-4})\\
&= \frac{1}{5} (B_0 n^{5}-5 B_1 n^{4}+10 B_2 n^{3}-10 B_3 n^{2}+5 B_4 n)\\
&= \frac{1}{5} (n^{5}+\frac{5}{2}n^{4}+\frac{10}{6} n^{3}-\frac{5}{30} n)\\
\end{align}
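Clearing denominators, the closed form is $(6n^5+15n^4+10n^3-n)/30$; a quick Python check against the direct sum:

```python
def sum_fourth_powers(n):
    # (1/5)(n^5 + (5/2)n^4 + (10/6)n^3 - (5/30)n) with denominators cleared
    return (6*n**5 + 15*n**4 + 10*n**3 - n) // 30

for n in (1, 5, 10, 100):
    print(n, sum_fourth_powers(n), sum(k**4 for k in range(1, n + 1)))
```

The integer division is exact for every $n$, since the numerator is always divisible by $30$.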
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2006979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 3
}
|
Is $\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\ldots}}}$ rational, algebraic irrational, or transcendental? Let
$$
x=\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\ldots}}}.
$$
Assume $x$ is algebraic irrational. By the Gelfond-Schneider Theorem, $x^x$, which is also $x$, is transcendental, a contradiction.
But I have no idea how to do the rest.
|
The only reasonable meaning of $\sqrt 2^{\sqrt2^{\sqrt2^{\cdots}}}$ would seem to be the limit of the sequence
$$ \sqrt2, \sqrt2^{\sqrt2}, \sqrt2^{\sqrt2^{\sqrt2}}, \ldots $$
which can also be defined recursively as
$$ x_0 = \sqrt 2 \\
x_{n+1} = \sqrt2^{x_n} $$
This sequence converges to the number $2$, which is rational.
Why? It is fairly easy to prove (using calculus) that if $x_n$ is less than $2$, then $x_{n+1}$ is between $x_n$ and $2$. Therefore the sequence of $x_n$s is bounded and strictly increasing, so it must converge towards something. But by continuity the limit must be a solution to $\sqrt 2^x=x$, and the only such solutions are $x=2$ and $x=4$. The latter cannot be a limit because it would require the sequence to increase past $2$, so $2$ is the only possibility.
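A short numerical illustration of this convergence (an addition to the answer):

```python
import math

x = math.sqrt(2)
trajectory = [x]
for _ in range(100):
    x = math.sqrt(2) ** x
    trajectory.append(x)

# strictly increasing early on (before floating-point convergence), with limit 2
assert all(a < b for a, b in zip(trajectory[:30], trajectory[1:31]))
assert abs(trajectory[-1] - 2) < 1e-9
```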
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2007083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Analysis problem: Show that$ f(x)$ is less or equal than$ g(x)$ Analysis problem:
Let $f$ and $g$ be differentiable on $\mathbb R$. Suppose that $f(0)=g(0)$ and that $f'(x) \le g'(x)$ for all $x \ge 0$. Show that $f(x) \le g(x)$ for all $x \ge 0$.
Is my proof correct?
I am trying to use the Generalized Mean Value Theorem:
As $f$ and $g$ are differentiable on $\mathbb R$, they are continuous on $\mathbb R$ and we can use the Generalized Mean Value Theorem. Using the starting condition $f(0)=g(0)$, we have that for any $b > 0$ there exists a $c \in (0,b)$ such that
$f' (c) g(b) = g' (c) f(b)$
By the starting conditions,
$f'(x) \le g'(x)$ for all $x \ge 0$.
Therefore, $f(b) \le g(b)$.
As $b$ is an arbitrary positive number,
$f(x) \le g(x)$ for all $x \ge 0$. Q.E.D.
|
Let $h(x)=f(x)-g(x);x\in [0,\infty)$
$h^{'}(x)=f^{'}(x)-g^{'}(x)\le 0\implies h$ is decreasing on $[0,\infty)\implies h(x)\le h(0)\forall x\in [0,\infty)\implies f(x)\le g(x)$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2007224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How can I check my solution for this ODE? I have $x''(t) - x(t) = e^{t}$ with boundary conditions $x(0) - x(1) = 0$ and $x'(0) - x'(1) = 0$.
I find the solution to be $x(t) = e^t \frac{4t-2}{8} + c_{1} e^{t} + c_{2}e^{-t}$, but finding $c_{1},c_{2}$ is tedious. I find $c_{1} = \frac{-\frac{1}{4} ((1-e) + e + 1)}{-(1-e^{-1}) - (1-e)} \cdot \frac{(1-e^{-1}) - \frac{e}{4} - \frac{1}{4}}{1-e}$ and $c_{2} = \frac{\frac{1}{4} ((1-e) + e + 1)}{-(1-e^{-1}) - (1-e)} $
How am I supposed to check if $c_{1},c_{2}$ are correct other than by computing by hand? This seems too messy, but I do not immediately see a mistake in my work.
|
To solve:
$$\text{x}''\left(t\right)-\text{x}\left(t\right)=e^t$$
Use Laplace transform:
$$\text{s}^2\text{X}\left(\text{s}\right)-\text{s}\text{x}\left(0\right)-\text{x}'\left(0\right)-\text{X}\left(\text{s}\right)=\frac{1}{\text{s}-1}$$
Solving $\text{X}\left(\text{s}\right)$:
$$\text{X}\left(\text{s}\right)=\frac{\frac{1}{\text{s}-1}+\text{s}\text{x}\left(0\right)+\text{x}'\left(0\right)}{\text{s}^2-1}$$
Now:
*
*$$\text{x}\left(0\right)-\text{x}\left(1\right)=0\space\Longleftrightarrow\space\text{x}\left(0\right)=\text{x}\left(1\right)$$
*$$\text{x}'\left(0\right)-\text{x}'\left(1\right)=0\space\Longleftrightarrow\space\text{x}'\left(0\right)=\text{x}'\left(1\right)$$
Using inverse Laplace transform:
$$\text{x}\left(t\right)=\frac{\cosh\left(t\right)\left(t+2\text{x}\left(0\right)\right)+\sinh\left(t\right)\left(t+2\text{x}'\left(0\right)-1\right)}{2}$$
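One way to check the closed form without redoing the algebra by hand (a numerical sketch; the initial data $x(0)$, $x'(0)$ are left arbitrary here, as in the answer, with sample values chosen for the test):

```python
import math

def x(t, x0=0.3, xp0=-0.7):
    # the closed form above, with arbitrary sample values for x(0) and x'(0)
    return (math.cosh(t) * (t + 2 * x0) + math.sinh(t) * (t + 2 * xp0 - 1)) / 2

h = 1e-4
for t in (0.0, 0.5, 1.3):
    x2 = (x(t + h) - 2 * x(t) + x(t - h)) / h ** 2   # finite-difference x''(t)
    assert abs(x2 - x(t) - math.exp(t)) < 1e-5        # x'' - x = e^t

assert abs(x(0.0) - 0.3) < 1e-12                      # x(0)  = x0
assert abs((x(h) - x(-h)) / (2 * h) + 0.7) < 1e-6     # x'(0) = xp0
```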
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2007351",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
If we are given $3$ positive integers $a,b,c$ such that $a>b>c$ , and $91b>92c>90a$ . What is the minimal value of $a+b+c$? If we are given $3$ positive integers $a,b,c$ such that $a>b>c$ , and $91b>92c>90a$ . What is the minimal value of $a+b+c$?
I am getting the bounds of the fractions $\frac{a}{b},\frac{b}{c},\frac{c}{a}$..
But I dont know what to do next
|
This follows up on my former comment and proves $413=139+138+136$ is the minimal sum.
$92 c \gt 90 a \implies 2c > 90(a-c) \ge 90 \cdot 2 = 180$ therefore $c \gt 90 \iff c \ge 91$.
$91 b \gt 92 c \implies 91(b-c) \gt c \ge 91$ therefore $b-c \gt \frac{91}{91} = 1 \iff b-c \ge 2 \iff b \ge c+ 2$.
Since $b \ge c+2$ and $a \gt b$ it follows that $a-c \ge 3$ and the first inequality $2c \gt 90(a-c) \ge 270$ gives the stronger bound $c \gt 135 \iff c \ge 136$.
Using the lowest values allowed by $c \ge 136$ and $b \ge c+2$ gives $c=136$, $b=c+2=138$, and $a=b+1=139$. The triplet satisfies $91 \cdot 138 = 12558 \gt 92 \cdot 136 = 12512 \gt 90 \cdot 139 = 12510$, so the minimal sum is $139+138+136 = 413$.
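A brute-force search over a bounded range gives an independent check of the argument above (any triple with sum at most $413$ has every entry below $414$, so the range below suffices):

```python
best = None
for c in range(1, 414):
    for b in range(c + 1, 414):
        if 91 * b <= 92 * c:
            continue
        a = b + 1                      # the smallest a with a > b minimizes the sum
        if 92 * c > 90 * a:
            s = a + b + c
            if best is None or s < best:
                best = s
assert best == 413
```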
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2007475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Sum of the digits of $N=5^{2012}$ The sum of the digits of $N=5^{2012}$ is computed.
The sum of the digits of the resulting sum is then computed.
The process of computing the sum is repeated until a single digit number is obtained.
What is this single digit number?
|
Hint : You can prove that if $d(n)$ denotes sum of digits of $n$, then $n$ is congruent to $d(n)$ mod 9. Also, $N<10^{2012}$ implies $d(N)<9*2012<20000$.
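Since Python has exact big integers, the repeated digit sum can also be computed directly (this confirms the mod-9 approach; the final digit is my own computation, not stated in the hint):

```python
N = 5 ** 2012

m = N
while m >= 10:
    m = sum(int(d) for d in str(m))   # repeated digit sum

assert m == 7
# consistent with the hint: the single digit equals N mod 9, since N is not a multiple of 9
assert N % 9 == 7
```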
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2007610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
}
|
Finding matrix BA given AB
Given a matrix $$AB = \begin{bmatrix}-2&-14&14\\5&15&-10\\4&8&-3\end{bmatrix},$$ where $A$ is a $3\times 2$ matrix, and $B$ a $2\times 3$ matrix, how do I find the matrix $BA$?
I was told to find the basis for the rowspace of $AB$, and that $(AB)^2 = 5AB$. However, I do not see how these 2 informations can help me find $BA$ at all.
Any help would be appreciated.
|
I will use isomorphisms of matrix algebras with corresponding linear operator spaces a lot. Thus, $A\colon\Bbb R^2\to\Bbb R^3$ and $B\colon\Bbb R^3\to \Bbb R^2$.
If you find basis for rowspace of $AB$ you will find that $r(AB) = 2$, where $r$ denotes rank. That also means that $n(AB) = 1$, by rank-nullity theorem (where $n$ is dimension of nullspace of $AB$).
Obviously, $n(B)\leq n(AB)$ because $\ker B\subseteq \ker (AB)$; hence, by the rank-nullity theorem we get that $n(B) = 1$ and $r(B) = 2$ ($r(B)$ is at most $2$, so $n(B)$ is at least $1$). Thus, $B$ is an epimorphism.
Similarly, $r(AB)\leq r(A)$, since $\operatorname{im} (AB)\subseteq \operatorname{im} A$. But $r(A)$ is at most $2$ by the rank-nullity theorem and thus $r(A) = 2$. We conclude that $A$ is a monomorphism.
Finally, $$5AB = (AB)^2\implies A(BA-5I)B = 0 \implies BA = 5I$$ since $A$ is a monomorphism and $B$ an epimorphism.
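Both facts used above ($(AB)^2=5AB$ and $r(AB)=2$) can be verified directly (a quick check added for completeness; the coefficients $\tfrac12, 1$ expressing row 3 were found by hand):

```python
AB = [[-2, -14, 14],
      [5, 15, -10],
      [4, 8, -3]]

# (AB)^2 = 5 AB
sq = [[sum(AB[i][k] * AB[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
assert sq == [[5 * v for v in row] for row in AB]

# rank(AB) = 2: rows 1, 2 are independent and row 3 = (1/2) row 1 + row 2
assert all(abs(0.5 * AB[0][j] + AB[1][j] - AB[2][j]) < 1e-12 for j in range(3))
```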
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2007728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
}
|
How do you actually calculate numbers like $2^{\pi}$ or $\sqrt[\leftroot{-2}\uproot{2}160]{2}$ My question is very simple, but I started to wonder how does one calculate numbers like $2^{\pi}$ or $\sqrt[\leftroot{-2}\uproot{2}160]{2}$?
For example I know that:
$$2^{3/2} = \sqrt{2^3}=\sqrt{8}\approx2.83,$$
which is easy to calculate. But what about the cases I gave as an example? How would one go about and calculate those numbers using nothing else but a pencil and paper without calculator allowed?
Would we use some kind of series here to approximate these numbers?
It is also a bit unclear to me what $2^{\pi}$ means. For example to me, $2^3$ means in words: "Multiply number $2$ three times by itself", so multiplying $2$ by itself $\pi$ times feels a bit weird when you're used to having integers in the exponent.
Thank you for any help and clarifications :)
|
How I would think about $2^{\pi}$ is the following:
Consider an infinite sequence $(a_{0},a_{1},a_{2}\ldots)$ such that:
$$\sum_{n=0}^{\infty}a_{n}=\pi$$
Then
$$2^{\pi}=2^{\sum_{n=0}^{\infty}a_{n}}=2^{a_0+a_1+a_2+\cdots}=2^{a_{0}}2^{a_{1}}2^{a_{2}}\cdots$$
Now $2^{\pi}$ can be represented as a infinite product:
$$2^{\pi}=\prod_{n=0}^{\infty}2^{a_{n}}$$
There are infinitely many series representations of $\pi$ (and hence choices for the $a_n$); here's a link to a few of them, if you want to know more.
http://mathworld.wolfram.com/PiFormulas.html
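The infinite-product representation can be tried out numerically, e.g. with the Leibniz series $\pi = 4\sum_{n\ge0} \frac{(-1)^n}{2n+1}$ (an illustration added here, not part of the answer):

```python
import math

# pi = 4 * sum_{n>=0} (-1)^n / (2n+1), so 2^pi = prod_n 2^(4(-1)^n/(2n+1))
product = 1.0
for n in range(10 ** 5):
    a_n = 4 * (-1) ** n / (2 * n + 1)
    product *= 2.0 ** a_n

assert abs(product - 2 ** math.pi) < 1e-3
```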
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2007859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
}
|
How to calculate the actual distance flown by a golf ball? I am interested in finding out what the actual distance is flown by a golf ball on a drive. You can assume no wind, 10 degree launch angle, goes straight down the middle of the fairway (no draw, no slice...) and no erratic backspin on the ball (so it wont "balloon" but will have stable flight). You can also assume that the ball flies 200 yards of linear distance (relative to the ground). So basically I would like to know how much "extra" distance does the ball fly relative to the 200 yards of linear distance? I was thinking somewhere on the order of 15% more (30 yards more) but that is just a guess.
The answer need not be exact.
Also I am not sure what tag I should use so if someone has a better one go ahead and change it or just tell me and I will change it.
|
This is a non-mathematical answer but still mildly interesting and worth mentioning. Someone who really wanted to know this but didn't have the math skills to solve it (like me) could record the flight of the ball on their smartphone so that the entire ball flight is captured. Then they would take a piece of string, trace the ball flight on the smartphone screen, and simply measure the length of the string relative to the length of the flat path (along the ground). Solutions like this are why I didn't major in math in college. I would always want to do things in a simpler (non-mathematical) way.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2007967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Analytical continuation of complete elliptic integral of the first kind I am dealing with a problem involving the complete elliptical function of the first kind, which is defined as:
$K(k)=\int_0^{\pi/2} d\theta \frac{1}{\sqrt{1-k^2\sin^2(\theta)}}=\int_0^1 dt \frac{1}{\sqrt{1-t^2}\sqrt{1-k^2 t^2}} $
for $k^2<1$. I am trying to find however how to analytically continue the function for cases where $k^2>1$, and especially when $k^2$ is complex and classify it depending on $\mathrm{Im}(k^2)$ is complex or not.
This looks like a very trivial question that must be written somewhere, but i couldn't find it explicitely not in several references or here in the forum. The closest solution i found has been in these notes:
http://www.damtp.cam.ac.uk/user/md327/fcm_2.pdf
Where they say in a last remark that the analytically continuation should be written in terms of $K(k^2) n +iK'(k')m$, where $k'$ is just the complimentary modulus and $n,m$ integers, however, right now i don't know how to prove it, and what are the $n,m$ depending on $\mathrm{Re,Im}(k^2)$
Someone can help?
|
On an abstract level, the elliptic integral $K$ is an inverse of the elliptic functions which are doubly periodic (in the complex plane) with periods $K$ and $iK'$ (this in a sense answers your question).
To be concrete, one can answer your question given the integral representation you have given (setting $m=k^2$) for convenience. We are interested to know the value of the integral for $m_\epsilon = m \pm i\epsilon$ with $m>1$ and $\epsilon>0$. We can evaluate
$$\lim_{\epsilon\to0} K(m_\epsilon) = \lim_{\epsilon\to0} \int_0^{\pi/2}\!d\theta\,\frac{1}{\sqrt{1-m_\epsilon \sin^2 \theta}} = \underbrace{\int_0^{\theta^*}\!d\theta\,\frac{1}{\sqrt{1-m \sin^2 \theta}}}_{I_1}\pm i
\underbrace{\int_{\theta^*}^{\pi/2}\!d\theta\,\frac{1}{\sqrt{m \sin^2 \theta-1}}}_{I_2}$$
with $\theta^*=\arcsin(1/m^{1/2})$.
Now, we treat the two integrals separately. We perform the substitution $$ m \sin^2\theta = \sin^2 \theta'$$ and obtain
$$I_1 = \frac{1}{\sqrt{m}} \int_{0}^{\pi/2} \!d\theta'\,\frac{1}{\sqrt{1- m^{-1} \sin^2 \theta'}} = \frac{K(m^{-1})}{\sqrt{m}}.$$
Similarly employing the substitution $1- \sin^2 \theta =(1-m^{-1}) \sin^2 \theta'$ , we find that
$$ I_2 = \frac{1}{\sqrt{m}} \int_0^{\pi/2}\!d\theta'\,\frac{1}{\sqrt{1-(1-m^{-1}) \sin^2 \theta'}} = \frac{K(1-m^{-1})}{\sqrt{m}}.$$
Thus, we have that
$$ K(m) = \frac{1}{\sqrt{m}}\left[K(m^{-1}) \pm i K(1-m^{-1}) \right] . \tag{1}$$
The right hand side is the analytical continuation through the branch cut at $m>1$.
With the formula (1), you can find all the additional branches. E.g., at the point $m<1$ the right hand side becomes invalid but then using (1) you can find a the analytical continuation again.
To get a connection to your formula, you have to note that
$$ \frac{K(1-m^{-1})}{\sqrt{m}} = K(1-m)= K'(m).$$ Thus, we have that (for $m>1$ as before)
$$\mathop{\rm Im} K(m) = \pm K'(m)$$
and thus the branches differ by (even) multiples of $i K'(m)$. Similarly, by analytically continuing $K'(m)$ you observe that you get additional branches displaces by $K(m)$.
You find some more information e.g. here.
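The two substitution identities $I_1 = K(m^{-1})/\sqrt m$ and $I_2 = K(1-m^{-1})/\sqrt m$, i.e. the real and imaginary parts of formula (1), can be checked numerically (a sketch assuming SciPy is available; note that `scipy.special.ellipk` uses the parameter convention $K(m)$ with $m=k^2$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk   # parameter convention: ellipk(m), m = k^2

m = 3.0
theta_star = np.arcsin(1 / np.sqrt(m))
I1, _ = quad(lambda t: 1 / np.sqrt(1 - m * np.sin(t) ** 2), 0, theta_star)
I2, _ = quad(lambda t: 1 / np.sqrt(m * np.sin(t) ** 2 - 1), theta_star, np.pi / 2)

assert abs(I1 - ellipk(1 / m) / np.sqrt(m)) < 1e-6
assert abs(I2 - ellipk(1 - 1 / m) / np.sqrt(m)) < 1e-6
```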
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2008090",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
Inequivalent cusps of $\Gamma_0(4)$ I have been trying to do this by finding the $\Gamma_0(4)$ orbit of $\infty$, then finding an element not in here (namely $0$) and considering the orbit of this. But I feel like there is a slight error in my computation as I suspect there are in fact 3 non-equivalent cusps.
Here is what I have done thus far:
$\Gamma_0(4)\cdot\infty = \{\left( \begin{array}{cc}
a & b \\
4c & d \end{array} \right)\infty : ad - 4bc = 1\} = \{\frac{a}{4c} : gcd(a,4c) = 1\} = \{\frac{p}{q} : gcd(p, q) = 1, 4|q\}$
and then similarly for $0$. This method would give just the two inequivalent cusps which I am pretty sure is wrong. I feel like the error is probably in my final equality but I cannot see why this would be incorrect.
|
I guess you know how to compute the number of inequivalent cusps of $\Gamma_{0}(N)$: $\sum_{d\mid N}\phi(\gcd(d,N/d))$. So the number of cusps of $\Gamma_{0}(4)$ has three inequivalent cusps. We know that the cusps $\infty$ and 0 are inequivalent. I claim that $1/2$ is not equivalent to neither $\infty$ nor $0$.
If $1/2$ were equivalent to $0$, then there would exist $j\in\mathbb{Z}$ such that $2j+1\equiv 0\pmod{4}$ (please see p. 99 of 'A first course in modular forms' written by Diamond and Shurman), which is absurd. We conclude that $1/2$ is not equivalent to $0$. Similarly, $1/2$ is not equivalent to $\infty$.
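The cusp-counting formula mentioned above is easy to evaluate in code (added for convenience):

```python
from math import gcd

def phi(n):
    # Euler's totient, by direct count
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def num_cusps(N):
    # number of inequivalent cusps of Gamma_0(N): sum over d | N of phi(gcd(d, N/d))
    return sum(phi(gcd(d, N // d)) for d in range(1, N + 1) if N % d == 0)

assert num_cusps(4) == 3
assert num_cusps(1) == 1 and num_cusps(2) == 2 and num_cusps(3) == 2
```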
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2008204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Isomorphism with Directional Graphs I was told isomorphism was when two graphs are the same but have different forms. In order for it to be the same two vertices must be adjacent across the graphs. So if vertex1 is adjacent to vertex2 and vertex3 in one graph, than it must do so in the other. Also that isomorphism can easily be applied to other varietys of graphs.
The simple graphs $G1 = (V_1, E_1)$ and $G2 = (V_2, E_2)$ are isomorphic if
there exists a one-to-one and onto function $f:V_1\to V_2$ with the
property that $a$ and $b$ are adjacent in $G_1$ if and only if $f(a)$ and $f(b)$ are adjacent in $G_2$, for all $a$ and $b$ in $V_1$. Such a function $f$ is called an isomorphism. ∗ Two simple graphs that are not isomorphic are called nonisomorphic.
Are these two graphs isomorphic? They retain the same shape, but the direction of the edge from vertex1 to vertex4 changes between the two, so it can't be isomorphic, right? Well, I was told it was, without an explanation. Are they wrong, or am I missing something? Any help would be appreciated.
|
In the original graph, vertices have following degrees.
$1$ — $2$ outs, $1$ in.
$2$ — $1$ out, $2$ ins.
$3$ — $2$ outs, $1$ in.
$4$ — $1$ out, $2$ ins.
Let's give the names to the vertices of the second graph:
Here's their degrees:
$A$ — $1$ out, $2$ ins.
$B$ — $2$ outs, $1$ in.
$C$ — $2$ outs, $1$ in.
$D$ — $1$ out, $2$ ins.
Let's try to assign numbers to the letters. For vertex $A$, we have only two options: either $A$ is $2$ or $4$.
Suppose $A$ is $2$. Then $D$ can only be $4$ (can you see why?) and $C$ is $3$. Then $B$ is $1$, which it absolutely can be, since it has the same degree and is adjacent to the same vertices. By showing the isomorphism, we effectively prove its existence.
So the error was here:
They retain the same shape, but the direction of the edge from vertex1 to vertex4 changes between the two, so it can't be isomorphic
On the second graph, there are no named vertices, so you can't just say that the direction between some named vertices changed. You need to have a more rigorous method of proving that, and often the one I've described above will suffice.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2008340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
$\sum_\limits{n=0}^{\infty} a_n$ converges $\implies \sum_\limits{n=0}^{\infty} a_n^2$ converges
Let $(a_n)$ be a sequence of positive terms and suppose that $\sum_\limits{n=0}^{\infty} a_n$ converges. Show that $\sum_\limits{n=0}^{\infty} a_n^2$ converges.
This is in the section on the Comparison Test so that must be what I'm supposed to use. But I don't see how. $(a_n)^2$ might be smaller or larger than $a_n$ depending on $a_n$. And I can't use the Comparison Test with some other series because there's no info here about how fast $\sum a_n$ converges. Hmm. Any hints?
|
For any $\epsilon>0$, there must be an $N$ such that $a_n<\epsilon$ for all $n>N$. If this were not true, you'd be adding infinitely many terms that don't approach zero, so the sum would diverge, contradicting the hypothesis.
Let $\epsilon=1$ so that we have some $N$ such that $a_n<1$ for all $n>N$. It should then be clear that for any $0<a_n<1$, we have $0<(a_n)^2<a_n<1$.
So, we have
$$\sum_{n>N}(a_n)^2<\sum_{n>N}a_n$$
And since $\sum_{n=0}^N(a_n)^2$ is finite, we show by the comparison test that
$$\sum_{n\ge0}(a_n)^2<\sum_{n=0}^N(a_n)^2+\sum_{n>N}a_n$$
which converges.
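A numerical illustration (not part of the proof), with $a_n = 1/n^2$:

```python
import math

N = 100000
S1 = sum(1 / n ** 2 for n in range(1, N))   # converges to zeta(2) = pi^2/6
S2 = sum(1 / n ** 4 for n in range(1, N))   # the squared series, zeta(4) = pi^4/90

assert S2 < S1
assert abs(S1 - math.pi ** 2 / 6) < 1e-4
assert abs(S2 - math.pi ** 4 / 90) < 1e-9
```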
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2008442",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 7,
"answer_id": 2
}
|
Solving ${\log_{\frac{2}{\sqrt{2-\sqrt{3}}}}(x^2-4x-2)=\log_{\frac{1}{2-\sqrt{3}}}(x^2-4x-3)}$ This was given to me by my Math Teacher almost a year ago and I've not been able to make much progress on it. I am hoping to see it resolved by our community members. $$\large{\log_{\frac{2}{\sqrt{2-\sqrt{3}}}}(x^2-4x-2)=\log_{\frac{1}{2-\sqrt{3}}}(x^2-4x-3)}.$$
|
Hint
If $x^2-4x-2<0$ or $x^2-4x-3<0$ then the corresponding $x$ cannot be a solution. But forget about that for a second and explain the meaning of the equation below:
$$\log_{\frac{2}{\sqrt{2-\sqrt{3}}}}(x^2-4x-2)=\log_{\frac{1}{2-\sqrt{3}}}(x^2-4x-3).$$
If for an $x$ the equation holds then there must exist some $A$ for which
$$\log_{\frac{2}{\sqrt{2-\sqrt{3}}}}(x^2-4x-2)=A$$
and
$$\log_{\frac{1}{2-\sqrt{3}}}(x^2-4x-3)=A.$$
By the definition of the logarithm one can say that for such an $A$:
$$\left(\frac{2}{\sqrt{2-\sqrt{3}}}\right)^A=x^2-4x-2$$
and
$$\left(\frac{1}{2-\sqrt{3}}\right)^A=x^2-4x-3.$$
Or
$$x^2-4x-\left(2+\left(\frac{2}{\sqrt{2-\sqrt{3}}}\right)^A\right)=0$$
and
$$x^2-4x-\left(3+\left(\frac{1}{2-\sqrt{3}}\right)^A\right)=0.$$
These are two quadratic equations which can be solved for $x$. The solutions will depend on the parameter $A$. Since the corresponding solutions have to be equal, one obtains equations for the possible $A$'s.
Given a suitable $A$, an $x$ can be calculated. (Finally, check that the arguments of the logarithms are positive.)
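Carrying the hint out numerically (a sketch added here; the key observation is that the two bases $b_1 = 2/\sqrt{2-\sqrt3}$ and $b_2 = 1/(2-\sqrt3)$ satisfy $b_2 = b_1^2/4$, and $b_1^A - b_2^A$ is increasing in $A>0$, so bisection finds the unique $A$):

```python
import math

b1 = 2 / math.sqrt(2 - math.sqrt(3))   # = sqrt(6) + sqrt(2)
b2 = 1 / (2 - math.sqrt(3))            # = 2 + sqrt(3); note b2 = b1^2 / 4
assert abs(b2 - b1 ** 2 / 4) < 1e-12

# solve b1^A - b2^A = 1 by bisection (the left side is increasing for A > 0)
lo, hi = 0.1, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if b1 ** mid - b2 ** mid < 1:
        lo = mid
    else:
        hi = mid
A = (lo + hi) / 2
assert abs(A - 2) < 1e-9               # b1^2 - b2^2 = (8 + 4*sqrt(3)) - (7 + 4*sqrt(3)) = 1

# recover x from x^2 - 4x - 2 = b1^A and check the original equation
u = b1 ** A + 2                        # u = x^2 - 4x
x = 2 + math.sqrt(4 + u)
assert abs(math.log(x * x - 4 * x - 2, b1) - math.log(x * x - 4 * x - 3, b2)) < 1e-9
```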
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2008534",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How do you prove the series $\sum_\limits{n=0}^{\infty}(2n+1)(-x)^n$ is convergent? The Wikipedia article about Formal Power Series states that
$$
S(x)=\sum_{n=0}^{\infty}(-1)^n(2n+1)x^n,
$$
if considered as a normal power series, has radius of convergence 1. How do we prove this?
I understand that for positive $x$ the series is alternating, and apart from a finite number of terms the sequence $(2n+1)x^n$ is monotone decreasing in $n$ for $0\le x<1$ and tends to zero for $n\to\infty$. Thus the alternating series test applies and the series is (conditionally) convergent for $0\le x<1$. But, since this is a power series, it has to be convergent in a disk in the complex plane, and thus we conclude the series is in fact convergent for complex $x$ when $|x|<1$. Is this argument correct?
Also, I checked with Maple and I get:
$$
S(x)=\frac{\sqrt{\pi}(-x)^{1/4}\text{LegendreP}\left(\frac12,\frac12,\frac{1-x}{1+x}\right)}{(1+x)^{3/2}},
$$
where the function in the numerator is the Legendre function of the first kind. This closed form solution, if correct, suggests that there is a branch cut for $x>0$ and $x<-1$, hence I would naively think based on this representation that true converge occurs for $-1<x<0$. Does anybody know how to prove this identity and where on the complex plane it holds?
|
The root test for me,
since I know that
$\lim_{n \to \infty} a^{1/n}
=\lim_{n \to \infty} n^{1/n}
=1$
for any $a > 0$.
Then
$\lim_{n \to \infty} (2n+1)^{1/n}
\le \lim_{n \to \infty} (3n)^{1/n}
\le \lim_{n \to \infty} 3^{1/n}\lim_{n \to \infty} n^{1/n}
=1
$
and
$\lim_{n \to \infty} (2n+1)^{1/n}
\ge 1$
so the radius of convergence is
$1$.
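Numerically (an illustration of the limit used above):

```python
values = [(2 * n + 1) ** (1 / n) for n in (10, 100, 1000, 100000)]
# (2n+1)^(1/n) -> 1, so the root test gives radius of convergence 1
assert values[0] > values[1] > values[2] > values[3]
assert abs(values[-1] - 1) < 1e-3
```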
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2008633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Find $1/\alpha$ when basis is {1, $\alpha$, $\alpha^2$, $\alpha^3$} I am trying to calculate a sub field and in the process, I need to state $1/\alpha$ in terms of $\alpha$.
Now, my $\alpha$ = $\sqrt{3+\sqrt{20}}$.
I can't for the life of me do this simple calculation!!
The minimal polynomial is $x^4-6x^2-11$ and I know I should be able to state $1/\alpha$ in terms of 1, $\alpha$, $\alpha^2$ and $\alpha^3$ but it is just not working for me.
Can someone please explain to me how to get the solution??
|
Note that $\alpha^4-6\alpha^2=11$. Divide on both sides by $11\alpha$, and you're done.
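Explicitly, this gives $\frac1\alpha = \frac{\alpha^3 - 6\alpha}{11}$, which is easy to confirm numerically (a check added for convenience):

```python
import math

alpha = math.sqrt(3 + math.sqrt(20))
inv = (alpha ** 3 - 6 * alpha) / 11    # 1/alpha in the basis {1, alpha, alpha^2, alpha^3}
assert abs(inv - 1 / alpha) < 1e-12
assert abs(alpha ** 4 - 6 * alpha ** 2 - 11) < 1e-10   # the minimal polynomial relation
```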
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2008736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Can you determine the remainder when divided by 6?
An integer $x$ gives the same remainder when divided by both $3$ and
$6$. It also gives a remainder of $2$ when divided by $4$. Can you
determine a unique remainder when $x$ is divided by $6$?
I feel like you can't since $x=4q+2$ for integer $q$. Listing out some $x$'s gives $x = 2, 6, 10, 14, 18, \cdots$. When you divide these numbers by 6 you get the remainders $2, 0, 4, 2, 0, 4, \cdots$ and when you divide these numbers by 3, you get $2, 0, 1, 2, 0, 1 \cdots$, so the remainders in common are $2$ and $0$, and so it's not enough to determine a unique remainder.
Could anyone show me a proper argument of this without actually having to list out all the numbers and manually "test" it?
|
The only value possible is 2. Because x gives also a remainder of 2 when divided by 4. It means that we can exclude 0 as common remainder.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2008851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Find all real solutions of this equation $x^2=2y-1$,$x^4+y^4=2$. Find all real solutions of this equation $x^2=2y-1$,$x^4+y^4=2$.
My attempt:I put the value of $x^2$ in the second equation.I get:
$(2y-1)^2+y^4=2 \Rightarrow [(2y-1)^2-1^2]+(y^4-1^4)=0 \Rightarrow 4y(y-1)+(y-1)(y+1)(y^2+1)=0 \Rightarrow (y-1)(y^3+y^2+5y+1)=0$
Now one solution is $y=1,x=1or-1$ but what about others is there another real solution?
|
There are no other real solutions.
From $2y-1=x^2\ge 0,$ $y$ has to satisfy $y\ge \frac 12$.
However, if $y\gt 0$, then $y^3+y^2+5y+1\gt 0$.
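Both steps (the factorization and the positivity of the cubic factor) can be spot-checked numerically (an addition, not part of the answer):

```python
# verify the factorization (2y-1)^2 + y^4 - 2 = (y-1)(y^3 + y^2 + 5y + 1)
for y in (-3, -1, 0, 0.5, 1, 2, 7):
    lhs = (2 * y - 1) ** 2 + y ** 4 - 2
    rhs = (y - 1) * (y ** 3 + y ** 2 + 5 * y + 1)
    assert abs(lhs - rhs) < 1e-9

# the cubic factor is positive for y >= 1/2, so y = 1 gives the only real solutions
assert all(y ** 3 + y ** 2 + 5 * y + 1 > 0 for y in (0.5, 1, 2, 10))
```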
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2009001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
(Geometry) Circle, angles and tangents problem Let P be an external point of a circle with center in O and also the intersection of two lines r and s that are tangent to the circle. If PAB is a triangle such that AB is also also tangent to the circle, find AÔB knowing that P = 40°.
I draw the problem:
Then I tried to solve it, found some relations, but don't know how to proceed.
I highly suspect that PAB is isosceles, but couldn't prove it.
|
First of all, note that $\angle PAB + \angle PBA = 140^\circ$. That means that $\angle MAB + \angle NBA = 220^\circ$.
Then we see that $AO$ bisects $\angle MAB$, and $BO$ bisects $\angle NBA$, so $\angle OAB + \angle OBA = 110^\circ$.
Lastly, looking at the quadrilateral $AOBP$, we see that $x = 360^\circ - 40^\circ - 140^\circ - 110^\circ = 70^\circ$.
There is no reason to believe $\triangle PAB$ to be isosceles. In fact, from just the given information it might not be. If we move $A$ closer to $M$, we see that $AB$ touching the circle will force $B$ closer to $P$. It's just that you've happened to draw the figure symmetrically.
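A coordinate-geometry check of the $70°$ answer (my own setup, not part of the original solution: unit circle at $O$, $P$ on the positive $x$-axis, and several choices of the tangent point of $AB$):

```python
import math

half = math.radians(20)                  # half of the 40-degree angle at P
d = 1 / math.sin(half)                   # |OP| for a unit circle centered at O
P = (d, 0.0)
beta = math.pi / 2 - half                # angular position of the tangent points M, N
M = (math.cos(beta), math.sin(beta))
N = (math.cos(beta), -math.sin(beta))

def hit(T, psi):
    # intersection of line P->T with the tangent line x cos(psi) + y sin(psi) = 1
    c, s = math.cos(psi), math.sin(psi)
    t = (1 - P[0] * c - P[1] * s) / ((T[0] - P[0]) * c + (T[1] - P[1]) * s)
    return (P[0] + t * (T[0] - P[0]), P[1] + t * (T[1] - P[1]))

angles = []
for psi_deg in (-30, 0, 25, 55):         # tangent point strictly between N and M
    A = hit(M, math.radians(psi_deg))
    B = hit(N, math.radians(psi_deg))
    dot = A[0] * B[0] + A[1] * B[1]
    angles.append(math.degrees(math.acos(dot / (math.hypot(*A) * math.hypot(*B)))))

assert all(abs(ang - 70) < 1e-7 for ang in angles)   # AOB = 70 degrees for any such AB
```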
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2009098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
Finding sum of infinite series $1+\frac{x^3}{3!}+\frac{x^6}{6!}+\frac{x^9}{9!}+\ldots $ So the question is 'express the power series $$1+\frac{x^3}{3!}+\frac{x^6}{6!}+\frac{x^9}{9!}+\ldots $$
in closed form'.
Now we are allowed to assume the power series of $e^x$ and also we derived the power series for $\cosh x$ using exponentials.
Now my question is the best way to approach the problem above. There is a hint that says 'use the fact that $\zeta^2+\zeta +1=0 $ where $\zeta $ is a cube root of unity'.
The way I solved this was to recognise that the third derivative of the series was equal to the series itself. So I just solved a linear ODE of order 3 and then found the constants (and so the hint made sense - in a way). But I don't think the question was designed for me to do this and so I feel as though I'm missing something obvious that makes this problem very easy.
Can anyone see any alternatives that make use of the hint in a more natural way?
|
Let $1, w, w^2$ be the cube roots of unity. Let $f(x) = (e^x + e^{xw} + e^{xw^2})/3$. Expand $f$ in a Taylor series. Because the cube roots of unity sum to 0, all terms vanish except where the exponents are multiples of 3, in which case they give the coefficients of your series. You can convert f(x) to $\frac{e^x}{3} +\frac{2e^{-x/2}}{3} cos(\frac{\sqrt(3) x}{2})$ by using $cos(z) = \frac{(e^{iz}+e^{-iz})}{2}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2009189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Are these two scenarios equivalent ? (random walks on chessboard) The random walks start at $(x,y)=(0,0)$ on an infinite chessboard which covers the whole upper plane. Let's say $(0,0)$ is white.
Random walk 1: At every step, I always go up one square, and either one square to the left or to the right with probability $1/2$ for each. (so the displacement occurs only on white squares)
After many runs of random walk 1, each stopped at step $t$, the final values of $x$ (the positions of the ends of the paths) are stored and their standard deviation $\sigma_1$ is computed.
Random walk 2: On each square of the chessboard, a normal number following $N(0,1)$ is placed randomly. An algorithm computes for every connected path of $t$ steps on white squares the sum of the Gaussian numbers encountered and chooses the path whose corresponding sum is minimal. The end values of $x$ are stored and their standard deviation $\sigma_2$ is computed, for many random walks of length $t$ (the same amount of samples than for random walk $1$).
Should $\sigma_1$ and $\sigma_2$ differ ?
Context: I am asking this question because I was tasked to design an algorithm which picks the path minimizing the sum of Gaussian numbers, and wanted to check my results with a simpler problem which I think is equivalent. I programmed the two algorithms, and get different values for the $\sigma_i$, although I can't see how a significant difference can be justified.
Actually, for the random walk $1$, I didn't need to even code anything, I could solve it using combinatorics too. A random path is basically picked uniformly. All paths have the same length, but there are more paths leading to an end position close to the center. Computing the probabilities I can infer $\sigma_1$ easily.
In random walk $2$, the setting seems to be the same... there can only be one path minimizing the sum, and each path has the same length, so every path has the same probability to be chosen. Using the combinatorics argument, $\sigma_2$ should be the same as $\sigma_1$
So, are my maths wrong or is my algorithm faulty ?
|
This is a partial answer. In RW1 the end point will be $(X,t)$ where $X=\sum_1^t Y_k$ and the $Y_k$'s are independent random variables with $P(Y_k=-1)=P(Y_k=1)=1/2$. The variance of each of the $Y_k$'s is $1$ and therefore
$$
\sigma_1=\sqrt{t}.
$$
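A simulation of random walk 1 confirms $\sigma_1=\sqrt t$ (an added illustration; the sample size and seed are arbitrary choices):

```python
import random
import statistics

random.seed(0)
t, trials = 100, 5000
finals = [sum(random.choice((-1, 1)) for _ in range(t)) for _ in range(trials)]
sigma1 = statistics.pstdev(finals)
assert abs(sigma1 - t ** 0.5) < 0.5    # expected standard deviation sqrt(100) = 10
```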
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2009309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
$\lim_{ n\to \infty} \frac{n^{100}}{1.01^n} = ?$ And how to prove it? There are many similar limits, but it seems that each proof is isolated; are there any good ways to solve them?
|
In all generality,
$$\lim_{n\to\infty}n^ab^n=\lim_{n\to\infty}\left(nb^{n/a}\right)^a=\lim_{n\to\infty}\left(nc^n\right)^a=\left(\lim_{n\to\infty}nc^n\right)^a,$$ if it exists, with $c:=\sqrt[a]b$.
Then every time you increment $n$, the expression between the parenthesis is multiplied by $$\frac{n+1}nc=\left(1+\frac1n\right)c.$$
The first factor gets closer and closer to $1$ so that in the end $c$ makes the decision.
If $c>1$, the factor is larger than $1$ and the limit goes to infinity.
If $c<1$, the factor will end up being smaller than $1$ and the limit goes to zero.
Then if $c=1$, the limit is just that of $n$.
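For the concrete limit in the question, the ratio overflows floating point near its peak, so it is easiest to compare logarithms (an added numerical illustration):

```python
import math

def log_ratio(n):
    # log of n^100 / 1.01^n
    return 100 * math.log(n) - n * math.log(1.01)

peak = 100 / math.log(1.01)              # about 10050, where the ratio is largest
assert log_ratio(10) < log_ratio(round(peak))
assert log_ratio(10 ** 7) < -90000       # the ratio still tends to 0 afterwards
```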
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2009397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
}
|
Weyl Tensor undefined or vanishing? So we have that the Weyl tensor in component form satisfies in dimension $n$
\begin{equation}
C_{abcd}=R_{abcd} -\frac{2}{n-2}\left(R_{a[c}g_{b]d}+R_{b[d}g_{c]a} \right)+\frac{2}{(n-1)(n-2)}Rg_{a[c}g_{b]d}
\end{equation}
The confusion is simple really. Many texts say that when $n=2$, the Weyl tensor vanishes; but I stumbled across a text today (Gravitation: Foundations and Frontiers, by T. Padmanabhan - a great read, to add) which says that the Weyl tensor is in fact undefined in the first place for $n=2$!
After having used the fact that in two dimensions, the Riemann tensor can be expressed for $n=2$ as
\begin{equation}
R_{abcd}=Rg_{a[c}g_{d]b}=\frac{1}{2}R(g_{ac}g_{bd}-g_{ad}g_{bc})
\end{equation}
Without substituting the value $n=2$, the above expression for the Weyl tensor seems to reduce to (using the Riemann tensor defined above)
\begin{equation}
C_{abcd}=\frac{1}{2}R \left( \frac{n+1}{n-1} \right)(g_{ad}g_{bc}-g_{ac} g_{bd})
\end{equation}
which does not seem to vanish in general for $n=2$. So is it just the case that, looking at the above formula, the $n=2$ case is ill defined (as mentioned in the book) for some specific reason, or does the Weyl tensor in fact always vanish?
Thanks.
|
Your formula for the Weyl tensor is wrong. I haven't double-checked my computation, but I think the first term in parentheses should be $R_{a[c}g_{d]b}$, not $R_{a[c}g_{b]d}$, and the last term should be a multiple of $Rg_{a[c}g_{d]b}$.
In any case, there has to be something wrong with your formula, because the curvature tensor satisfies $R_{abcd}=\frac{1}{2}R(g_{ac}g_{bd}-g_{ad}g_{bc})$ for any metric with constant sectional curvature in any dimension, and for all such metrics, the Weyl tensor is identically zero.
The question of whether Weyl is zero or undefined in dimension $2$ depends on how you define it. If you define it by (the corrected version of) the formula you wrote down, then it clearly is undefined in dimension $2$ because of the $(n-2)$'s in the denominator. But there's another way to define it that does make sense in dimension $2$: Let $V$ be a finite-dimensional inner product space, and let $\mathscr R(V^*)$ denote the space of all algebraic curvature tensors on $V$ (covariant $4$-tensors that have the symmetries of the curvature tensor). Define $\operatorname{trace}\colon \mathscr R(V^*)\to \operatorname{Sym}^2(V^*)$ by
$$
(\operatorname{trace}R)_{ac} = g^{bd} R_{abcd}.
$$
Then you can define the Weyl tensor as the orthogonal projection of the Riemann tensor onto the kernel of the trace operator. This yields the usual formula in dimensions $4$ and up, but it yields zero in dimensions $2$ and $3$ because the trace operator is injective in those dimensions. So in that sense, the Weyl tensor is identically zero in dimensions $2$ and $3$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2009500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Cancellation in quotient of fractional ideals When reading about fractional ideals of rings of integers, I came upon the following footnote:
For fractional ideals $\mathfrak{a}$, $\mathfrak{b}$ and $\mathfrak{c}$ with $\mathfrak{a} \supset \mathfrak{b}$, $$\displaystyle ^{\mathfrak{a}\mathfrak{c}}/_{\mathfrak{b}\mathfrak{c}} \simeq \ ^{\mathfrak{a}}/_{\mathfrak{b}}$$ as $\mathcal{O}_K$-modules.
This was not obvious to me, so I tried to prove it, however did not succeed. I think it must be connected to the unique product decomposition in Dedekind domains. I also found this question, where someone was also not sure how to prove this isomorphism, but did not succeed either.
Any help is greatly appreciated! Thanks in advance!
|
Here is the main statement:
Proposition: Let $R$ be a Dedekind domain (for instance the ring of integers $R=\mathcal O_K$ of some finite extension $K$ of $\Bbb Q$).
Let $\mathfrak{a}$, $\mathfrak{b}$ and $\mathfrak{c}$ be non-zero fractional ideals of $R$ with with $\mathfrak{a} \supset \mathfrak{b}$.
Then there is an isomorphism of $R$-modules $$\dfrac{\mathfrak{a}\mathfrak{c}}{\mathfrak{b}\mathfrak{c}} \cong \dfrac{\mathfrak{a}}{\mathfrak{b}}.$$
We first need a little result:
Lemma: let $I,J$ be non-zero integral ideals of a Dedekind domain $R$. Then there is $x \in \mathrm{Frac}(R)$ such that $xI+J=R$.
This lemma is proved here ; we just take $x:=b/a \in \mathrm{Frac}(R)$ from the proof there.
Proof of proposition.
— Since $\frak a, b, c$ are fractional ideals, there are elements $\alpha, \beta, \gamma \in \mathrm{Frac}(R)$ such that
$\alpha \frak a, \beta \frak b, \gamma \frak c$ are all integral ideals of $R$.
We clearly have isomorphisms of $R$-modules
$$
\dfrac{\frak a}{\frak b} \cong \dfrac{\alpha \beta \frak a}{\alpha \beta \frak b},
\qquad
\dfrac{\frak ac}{\frak bc} \cong \dfrac{\alpha \beta \gamma \frak ac}{\alpha \beta \gamma \frak bc}
$$
We may therefore assume that all of $\frak a, b, c$ are integral ideals (i.e. contained in $R$) : replace them by $\alpha \beta \frak a, \alpha \beta \frak b, \gamma \frak c$ respectively.
— Then, by the lemma, there is $x \in \mathrm{Frac}(R)$ such that $x \mathfrak{c}^{-1} + \mathfrak{ba}^{-1} = R$, i.e.
$$x \frak{a + bc = ac}.$$
Consider the morphism
$f : \frak{a} \to \frak{ac/bc}$ defined by
$f(r) = [xr]$. It is clearly surjective. Moreover, the kernel of $f$ is
$$\mathfrak{a} \cap x^{-1}\mathfrak{bc} = (\mathfrak{c}^{-1} \cap x^{-1}\mathfrak{ba}^{-1}) \mathfrak{ac} = x^{-1}(x\mathfrak{c}^{-1} \mathfrak{ba}^{-1})\mathfrak{ac} = \frak b$$
The first equality comes from (13) in this answer and the second comes from (14), using $x \mathfrak{c}^{-1} + \mathfrak{ba}^{-1} = R$ (notice that the answer holds in our setting with fractional ideals of the Dedekind domain $R$, not only for integral ideals in $R$).
As a conclusion, $f$ induces an isomorphism of $R$-modules $\frak{a/b \cong ac/bc}$ as claimed.
$\hspace{1.5cm}\blacksquare$
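In the simplest Dedekind domain $R=\Bbb Z$, every fractional ideal is principal, and for integral ideals the proposition just says that $ac\,\Bbb Z/bc\,\Bbb Z$ and $a\Bbb Z/b\Bbb Z$ are both cyclic of the same order $b/a$. A toy numeric check (my own sketch):

```python
def index(a, b):
    """Index of bZ inside aZ, assuming a divides b (so that aZ contains bZ)."""
    assert b % a == 0
    return b // a

# multiplying both ideals by cZ does not change the index of the quotient
for (a, b, c) in [(2, 6, 15), (3, 12, 7), (1, 5, 5)]:
    assert index(a, b) == index(a * c, b * c)
print("quotients aZ/bZ and acZ/bcZ have the same size")
```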
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2009591",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Can an equivalence relation $C$ on $A$ relate two non-equal elements of a set $A$? I was working through one of the exercises in Topology: A First Course by Munkres, and I came across this:
Let $f : A \to B$ be a surjective function. Let us define a relation on $A$ by setting $a_0 C a_1$ if $f(a_0) = f(a_1)$. Show that $C$ is an equivalence relation.
Now essentially what this is saying is the following:
$$C = \{(a, b) : f(a) = f(b) \ \ \text{where} \ \ a,b \in A\}$$
But since $f$ is surjective and not injective we could have $f(a) = f(b)$ when $a \neq b$. But clearly by what we have above $aCb$, where $C$ is an equivalence relation on $A$.
Is this an error or a misunderstanding on my part or can an equivalence relation relate two elements of a set that are not in fact equal to each other?
|
Yes, an equivalence relation can, and usually does, relate elements that are not originally equal to each other, generalizing what we know as equality.
Let us take, for instance, $\mathbb{Z}$ with the following equivalence relation: $x \sim y$ iff $x - y$ is even. With the above relation, we generalize the idea of "equality" saying that two numbers are "equal" if they are both even or both odd, e. g., $1 \equiv 3$.
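Concretely, the relation $C$ partitions $A$ into the fibers of $f$, i.e. the sets of elements with the same image. A small sketch of my own, with an illustrative $f$:

```python
from collections import defaultdict

def equivalence_classes(A, f):
    """Group the elements of A into classes of the relation a C b iff f(a) == f(b)."""
    classes = defaultdict(set)
    for a in A:
        classes[f(a)].add(a)
    return list(classes.values())

print(equivalence_classes(range(6), lambda x: x % 2))
# two classes, the evens {0, 2, 4} and the odds {1, 3, 5}; unequal elements
# such as 1 and 3 are related because f(1) == f(3)
```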
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2009734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Sum of all possible combinations Guys I just discovered something amazing. Can someone please confirm this? The sum of all possible ways to form a number with $n$ digits, using its digits, without repetition, is equal to $11\ldots1\cdot m(n-1)!$, where $m$ is the sum of the digits of the number, and the number of $1$'s is equal to $n$. For example, $123$ can be arranged as $123, 132, 213, 231, 312, 321$. The sum of these six numbers is equal to $1332 = (111)(6)(2)$. I'll be waiting for my Fields Medal.
|
Each digit has a "chance" to "occupy" every column.
Hence the sum is that digit multiplied by $\underbrace{111\cdots 1}_n$.
Each column will be occupied by every digit, which when summed, gives $m$ for that column.
...Multiply by $m$ to give $\underbrace{111\cdots 1}_n\cdot m$
When a given digit occupies a given column, the other digits permute
amongst themselves in $(n-1)!$ ways, so the number of times it occupies that column is $(n-1)!$.
...Multiply by $(n-1)!$ to give $\color{red}{\underbrace{111\cdots 1}_n\cdot m (n-1)!}$
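The claim is easy to confirm by brute force for small cases; a sketch of my own, assuming the digits are distinct so that all $n!$ arrangements are distinct numbers:

```python
from itertools import permutations
from math import factorial

def perm_sum(digits):
    """Sum of all numbers formed by permuting the given digit string."""
    return sum(int("".join(p)) for p in permutations(digits))

def formula(digits):
    """Repunit * (digit sum) * (n-1)!, as claimed in the question."""
    n = len(digits)
    repunit = int("1" * n)
    m = sum(int(d) for d in digits)
    return repunit * m * factorial(n - 1)

print(perm_sum("123"), formula("123"))    # both 1332
print(perm_sum("1234"), formula("1234"))  # both 66660
```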
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2009888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
}
|
Prove $\gcd(a,b)=\gcd(a+b,\operatorname{lcm}(a,b))$ Any ideas on how to prove this equality?
I tried various methods, using properties of gcd and lcd, but I can't prove it.
|
You may want to try and prove first that
$\operatorname{gcd}(a, b) = \operatorname{gcd}(b, a - b)$
This identity should be enough to get you rolling. You will just have to be able to use it correctly.
Alternatively, try this approach.
Suppose $d $ is the gcd of $a $ and $b $. Show it divides $a+b $ and $\operatorname {lcm}(a, b) $.
Then suppose $d'$ is the gcd of $a+b $ and $\operatorname {lcm}(a, b) $. Show $d'$ divides both $a $ and $b $. Hence $d $ divides $d'$ which divides $d $, meaning they must be equal.
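The identity is also easy to sanity-check numerically before proving it; a brute-force sketch of my own:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# verify gcd(a, b) == gcd(a + b, lcm(a, b)) over a grid of small pairs
for a in range(1, 60):
    for b in range(1, 60):
        assert gcd(a, b) == gcd(a + b, lcm(a, b))
print("identity holds for all pairs tested")
```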
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2009979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Two men, each placing one ball in five out of ten boxes There are 10 boxes and each box can hold any number of balls. A man having 5 balls puts one ball in each of five arbitrarily chosen boxes. Then another man having five balls again puts one ball in each of five arbitrarily chosen boxes. What is the probability that there are ball(s) in at least 8 boxes? Assume all boxes and balls are identical. Both men can choose any arbitrary boxes.
My attempt:
I was really not able to pull through the solution. What I thought was to let the first man choose five boxes in $^{10}C_5$ ways and put one ball in each of those 5 boxes; once the boxes are chosen, there is $1$ way to do that.
Now the second man comes, chooses his 5 boxes, and puts one ball in each of them. This can be done in $^{10}C_5 * 1$ ways.
Now, the following are the ways the boxes can be filled:
$$1111112200$$
$$1111111120$$
$$1111111111$$
Now I am not able to understand how to apply combinatorics and find the number of ways these conditions can happen. Please correct me if I was wrong in my attempt at the solution.
|
Each of the three conditions you present represents a different way to distribute the balls among the ten boxes so that at least 8 boxes contain at least one ball.
In the first case, you have two boxes with two balls, two boxes with zero balls, and six boxes with one ball. If you label the ten boxes, you are then looking for the number of ways to choose 2 boxes to have two balls, 2 boxes to have zero balls, and 6 boxes to have one ball. Solving for this case gives:
\begin{align*}
{10\choose2}{10 - 2 \choose 2}{10 - (2 + 2) \choose 6}
= {10\choose2}{8\choose2}{6\choose6} =
\frac{10!}{2! * 2! * 6!}
\end{align*}
Repeat this for your remaining two cases, and you will have all the ways to arrange the balls in the ten boxes so that you have at least 8 boxes with 1 ball.
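The counts for all three distributions can be computed with the same multinomial idea; a sketch of my own (the helper name is mine):

```python
from math import comb, factorial

def arrangements(counts):
    """Ways to assign a multiset of per-box ball counts to labelled boxes."""
    total = factorial(len(counts))
    for c in set(counts):
        total //= factorial(counts.count(c))
    return total

case1 = arrangements([1] * 6 + [2, 2, 0, 0])   # 10!/(6! 2! 2!) = 1260
case2 = arrangements([1] * 8 + [2, 0])         # 10!/(8! 1! 1!) = 90
case3 = arrangements([1] * 10)                 # 1
print(case1, case2, case3)
```

Note that `case1` agrees with the answer's $\binom{10}{2}\binom{8}{2}\binom{6}{6} = 1260$.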
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2010094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Finding elliptic curve with exactly p+1 number points over F_p Hi all, I am a beginner in elliptic curves. I want to design an elliptic curve with exactly $p+1$ points over $\mathbb{F}_p$. Any approach towards starting to solve this problem, recent progress, or references would be really helpful.
Thanks
|
The quantity $1+p - |E(\mathbb F_p)|$ is usually denoted $a_p$, and so you are asking for curves for which $a_p = 0$. When $p \geq 5$, this condition is equivalent to the elliptic curve being supersingular.
In general, there is a finite positive number of supersingular elliptic curves over $\overline{\mathbb F}_p$ (up to isomorphism), and they can always be defined over $\mathbb F_{p^2}$ (i.e. their $j$-invariants necessarily lie in this field). You are looking for supersingular $j$-invariants that actually lie in $\mathbb F_p$.
One way to find such curves is to take curves with CM by the ring of integers in $\mathbb Q(\sqrt{-p})$, which are defined over the Hilbert class field of $\mathbb Q(\sqrt{-p})$, and then reduce them modulo a prime
lying over $\sqrt{-p}$ (perhaps after making an extension totally ramified
over this prime in order to obtain good reduction). More details of this construction are given in section 3 of this paper by Andrew Baker.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2010212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Rewrite rational function $f(x)$ as a series if the quadratic expression in the denominator has no roots A function of the type
$$f(x)=\frac{ex+f}{ax^2+bx+c}$$
with $b^2-4 a c \geq 0$ can be written as a series using partial fraction decomposition and geometric series.
But what if one has the same function with $b^2-4 a c < 0$? Partial fraction decomposition (over the reals) does not help, and neither do geometric series, so what is the strategy to rewrite $f(x)$ as a series in that case?
Example
$$f(x)=\frac{x+1}{x^2+x+3}$$
|
More than likely, too simplistic!
You wrote the function $$F(x)=\frac{ex+f}{ax^2+bx+c}$$ Since you look for an expansion around $x=0$, let us rewrite it as $$F(x)=\frac{f+ex}{c+bx+ax^2}$$ and now use the long division to get $$F(x)=\frac{f}{c}+\frac{ c e-b f}{c^2}x+\frac{ b^2f-a c f-b c
e}{c^3}x^2+O\left(x^3\right)$$ I did not write the next terms because the expression becomes too long.
Using your example, we should get $$\frac{x+1}{x^2+x+3}=\frac{1+x}{3+x+x^2}=\frac{1}{3}+\frac{2 x}{9}-\frac{5 x^2}{27}-\frac{x^3}{81}+\frac{16
x^4}{243}+O\left(x^5\right)$$ I am sure that you notice the powers of $3$ as denominators of the coefficients.
Let us try an easy value $x=\frac 1{10}$. The exact value is $\frac{110}{311}\approx 0.3536977492$ while the expansion gives $\frac{429743}{1215000}\approx 0.3536979424$.
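The long division above is equivalent to the recurrence obtained from $(c+bx+ax^2)\sum_k s_k x^k = f+ex$, namely $cs_0=f$, $cs_1+bs_0=e$, and $cs_k+bs_{k-1}+as_{k-2}=0$ for $k\ge 2$ (only $c\ne 0$ is needed, not $b^2-4ac\ge 0$). A sketch of my own with exact arithmetic, checked against the example:

```python
from fractions import Fraction

def series_coeffs(a, b, c, e, f, terms):
    """Taylor coefficients of (f + e*x)/(c + b*x + a*x**2) around x = 0."""
    a, b, c, e, f = map(Fraction, (a, b, c, e, f))
    s = [f / c]
    s.append((e - b * s[0]) / c)
    for k in range(2, terms):
        s.append(-(b * s[k - 1] + a * s[k - 2]) / c)
    return s[:terms]

coeffs = series_coeffs(a=1, b=1, c=3, e=1, f=1, terms=5)
print(coeffs)
# [1/3, 2/9, -5/27, -1/81, 16/243], matching the expansion of (x+1)/(x^2+x+3)
```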
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2010337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Doolittle transformation is non-unique for singular matrices Decomposing the singular matrix $$A = \begin{bmatrix}
1 & 2 \\
1 & 2
\end{bmatrix} = \begin{bmatrix}1 & 0 \\ 1 & 1\end{bmatrix}\begin{bmatrix}1 & 2 \\ 0 & 0\end{bmatrix}=LU$$
by Doolittle decomposition seems to be unique in this case. But how does one prove that such a decomposition is not necessarily unique?
|
The row of zeroes in your $U $ allows you to play with the second column of $L $. You have
$$
\begin{bmatrix}
1 & 2 \\
1 & 2
\end{bmatrix} = \begin{bmatrix}1 & 0 \\ 1 & x\end{bmatrix}\begin{bmatrix}1 & 2 \\ 0 & 0\end{bmatrix}
$$ for any choice of $x $.
If you need $x=1$, there is no other choice and the decomposition is unique for that matrix. Such is the case for any singular $2\times2$ matrix: if
$$
A=\begin{bmatrix}r&s\\ tr&ts\end{bmatrix}
=
\begin{bmatrix}1&0\\
x&1\end{bmatrix}
\begin{bmatrix}a&b\\
0&c\end{bmatrix},
$$ it follows immediately that $a=r $, $b=s $, $x=t $, $c=0$.
For $3\times3$, here is an example where you are free to choose $z $:
$$
A=\begin{bmatrix}1&1&1\\1&1&1\\1&1&1\end{bmatrix}
=
\begin{bmatrix}1&0&0\\
1&1&0\\
1&z&1\end{bmatrix}
\begin{bmatrix}1&1&1\\
0&0&0\\ 0&0&0\end{bmatrix}.
$$
On the other hand, the decomposition is always unique when $A$ is non-singular and we require one of the two triangular matrices to have all ones in the diagonal, like
$$
A=\begin{bmatrix}1&2&3\\4&6&9\\5&8&11\end{bmatrix}
=
\begin{bmatrix}1&0&0\\
4&1&0\\
5&1&1\end{bmatrix}
\begin{bmatrix}1&2&3\\
0&-2&-3\\ 0&0&-1\end{bmatrix}
$$
is unique.
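The non-uniqueness exhibited in the first display (with the unit-diagonal requirement on $L$ dropped) is easy to verify numerically; a sketch of my own with plain nested lists:

```python
def matmul(L, U):
    """Multiply two matrices given as nested lists."""
    n, m, k = len(L), len(U[0]), len(U)
    return [[sum(L[i][t] * U[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

A = [[1, 2], [1, 2]]
U = [[1, 2], [0, 0]]
for x in (0, 1, -3, 7.5):
    L = [[1, 0], [1, x]]
    assert matmul(L, U) == A   # every x gives a valid factorization of A
print("the second column of L carries a free parameter, so LU is not unique here")
```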
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2010470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to calculate wedge product of differential forms Consider the differential forms on $\mathbb{R}^3$,
$\omega_1 = xy \space dx + z \space dy + dz$ , $\omega_2 = x \space dy + z \space dz$.
I need to determine $\omega_1 \wedge \omega_2$.
However, I do not know how to find such wedge products. Any help is appreciated.
|
$$ \sum_I a_I dx^I \wedge \sum_J b_J dx^J: = \sum_{I, J} (a_I b_J)\ dx^I \wedge dx^J$$
$\textbf{Example}$:
$$(x dx + y dy) \wedge (2 dx - dy) = 2x \ dx \wedge dx- x \ dx \wedge dy + 2y \ dy \wedge dx- y \ dy \wedge dy\\ \hspace{-.41in}= (-x-2y)\ dx \wedge dy$$
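Applied to the question's forms (my own computation, not part of the answer), the rule gives $\omega_1 \wedge \omega_2 = x^2y \, dx\wedge dy + xyz \, dx\wedge dz + (z^2-x)\, dy\wedge dz$. Here is a small numerical sanity check of that computation, storing a 1-form $P\,dx+Q\,dy+R\,dz$ as the coefficient triple $(P,Q,R)$:

```python
def wedge(w1, w2):
    """Wedge of two 1-forms on R^3 given as coefficient triples (P, Q, R).
    Returns the coefficients of (dx^dy, dx^dz, dy^dz), using antisymmetry."""
    p, q, r = w1
    s, t, u = w2
    return (p * t - q * s, p * u - r * s, q * u - r * t)

for (x, y, z) in [(1.0, 2.0, 3.0), (-0.5, 0.25, 2.0)]:
    w1 = (x * y, z, 1.0)   # omega_1 = xy dx + z dy + dz
    w2 = (0.0, x, z)       # omega_2 = x dy + z dz
    assert wedge(w1, w2) == (x * x * y, x * y * z, z * z - x)
print("coefficients match x^2 y, xyz, z^2 - x at the sample points")
```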
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2010563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Equivalent of the integral of a function sequence I gave an exercise to a student last Wednesday, but I had some trouble finding a correct solution to the last question!
For $n\ge2$, let $f_n$ be the function
$$f_n:[0,+\infty[\to\mathbb R,\,t\mapsto \frac{t^n}{1+t+t^{n-1}}$$
The first question asks for the pointwise limit of the sequence $(f_n)$; no problem, it's the function
$$f:x\mapsto\begin{cases}0&\text{if }0\le x<1 \\
\frac{1}{3}&\text{if }x=1 \\
x&\text{if }x>1 \end{cases}$$
The convergence cannot be uniform, since the limit is not continuous.
Now by the dominated convergence theorem (not usable at this time of the year), or a simple upper bound, it's easy to prove that
$$\lim_{n\to\infty} \int_0^1 f_n=\int_0^1 f$$
The problem I had was to find an equivalent of $\int_0^1 (f_n-f)$. A usual technique is integration by parts, but that leads to a troubling $n$ from differentiating the denominator, which doesn't permit any conclusion.
Another technique is to guess the equivalent by getting rid of troubling terms. I thought of $\int_0^1 \frac{t^n}{1+t}{\rm d}t$, or $\int_0^1 \frac{t^n}{3}{\rm d}t$, but I can't find a good bound for the difference.
Do you have any thoughts to share about this ?
Thanks.
|
HINT:
The substitution $t\to t^{1/n}$ yields
$$\int_0^1 \frac{t^n}{1+t+t^{n-1}}\,dt=\frac1n\int_0^1 \frac{t^{1/n}}{1+t^{1/n}+t^{(n-1)/n}}\,dt\le \frac1n$$
Alternatively, simply note that $\left|\int_0^1 \frac{t^n}{1+t+t^{n-1}}\,dt\right|\le \int_0^1 t^n\,dt=\frac{1}{n+1}$.
EDIT: The OP is requesting the first-order asymptotic term for the integral of interest
To develop the first term in the asymptotic (large $n$) expansion of the integral of interest we enforce the substitution $t=e^{-x/(n-1)}$. Then, we have
$$n\int_0^1 \frac{t^n}{1+t+t^{n-1}}\,dt=\frac{n}{n-1}\int_0^\infty \frac{e^{-(n+1)x/(n-1)}}{1+e^{-x}+e^{-x/(n-1)}}\,dx \tag 1$$
As $n\to \infty$, the limit of the integral on the right-hand side of $(1)$ is given by
$$\lim_{n\to \infty}\int_0^\infty \frac{e^{-x}}{2+e^{-x}}\,dx=\log(3/2)$$
Therefore, we see that
$$\lim_{n\to \infty}n\int_0^1 \frac{t^n}{1+t+t^{n-1}}\,dt=\log(3/2)$$
We can write therefore that
$$\int_0^1 \frac{t^n}{1+t+t^{n-1}}\,dt\sim \frac{\log(3/2)}{n}$$
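The asymptotic $\int_0^1 f_n \sim \log(3/2)/n$ can be checked numerically by applying Simpson's rule to the right-hand side of $(1)$; a sketch of my own (pure stdlib, loose tolerance):

```python
import math

def n_times_integral(n, upper=60.0, steps=20000):
    """Simpson approximation of n * int_0^1 t^n/(1+t+t^(n-1)) dt,
    via the substitution t = exp(-x/(n-1)) used in the answer."""
    def g(x):
        return math.exp(-(n + 1) * x / (n - 1)) / (
            1 + math.exp(-x) + math.exp(-x / (n - 1)))
    h = upper / steps
    s = g(0.0) + g(upper)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * g(i * h)
    return (n / (n - 1)) * s * h / 3

print(n_times_integral(1000), math.log(1.5))
# the two values agree closely; the discrepancy is the O(1/n) remainder
```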
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2010686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Finding the integral: $\int_{0}^{\large\frac{\pi}{4}}\frac{\cos(x)\:dx}{a\cos(x)+b \sin(x)}$ What is
$$\int_{0}^{\large\frac{\pi}{4}}\frac{\cos(x)\:dx}{a\cos(x)+b \sin(x)}?$$
$a,b \in \mathbb{R}$ appropriate fixed numbers.
|
$$\mathcal{I}\left(\text{a},\text{b}\right)=\int_0^{\frac{\pi}{4}}\frac{\cos\left(\text{x}\right)}{\text{a}\cos\left(\text{x}\right)+\text{b}\sin\left(\text{x}\right)}\space\text{d}\text{x}=$$
$$\frac{\text{a}}{\text{a}^2+\text{b}^2}\int_0^{\frac{\pi}{4}}1\space\text{d}\text{x}-\frac{\text{b}}{\text{a}^2+\text{b}^2}\int_0^{\frac{\pi}{4}}\frac{\text{a}\sin\left(\text{x}\right)-\text{b}\cos\left(\text{x}\right)}{\text{a}\cos\left(\text{x}\right)+\text{b}\sin\left(\text{x}\right)}\space\text{d}\text{x}$$
Now, use:
$$\int_0^{\frac{\pi}{4}}1\space\text{d}\text{x}=\frac{\pi}{4}$$
So, we get:
$$\mathcal{I}\left(\text{a},\text{b}\right)=\frac{\pi}{4}\cdot\frac{\text{a}}{\text{a}^2+\text{b}^2}-\frac{\text{b}}{\text{a}^2+\text{b}^2}\int_0^{\frac{\pi}{4}}\frac{\text{a}\sin\left(\text{x}\right)-\text{b}\cos\left(\text{x}\right)}{\text{a}\cos\left(\text{x}\right)+\text{b}\sin\left(\text{x}\right)}\space\text{d}\text{x}$$
Now, substitute $\text{u}=\text{a}\cos\left(\text{x}\right)+\text{b}\sin\left(\text{x}\right)$ and $\text{d}\text{u}=\left(\text{b}\cos\left(\text{x}\right)-\text{a}\sin\left(\text{x}\right)\right)\space\text{d}\text{x}$:
$$\int_0^{\frac{\pi}{4}}\frac{\text{a}\sin\left(\text{x}\right)-\text{b}\cos\left(\text{x}\right)}{\text{a}\cos\left(\text{x}\right)+\text{b}\sin\left(\text{x}\right)}\space\text{d}\text{x}=-\int_\text{a}^{\frac{\text{a}+\text{b}}{\sqrt{2}}}\frac{1}{\text{u}}\space\text{d}\text{u}=-\left(\ln\left|\frac{\text{a}+\text{b}}{\sqrt{2}}\right|-\ln\left|\text{a}\right|\right)$$
So, we get:
$$\mathcal{I}\left(\text{a},\text{b}\right)=\frac{\pi}{4}\cdot\frac{\text{a}}{\text{a}^2+\text{b}^2}+\frac{\text{b}}{\text{a}^2+\text{b}^2}\left(\ln\left|\frac{\text{a}+\text{b}}{\sqrt{2}}\right|-\ln\left|\text{a}\right|\right)$$
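A quick numerical spot-check of the closed form (my own sketch; $a=2$, $b=1$ are arbitrary sample values):

```python
import math

def closed_form(a, b):
    """The closed form derived above for the integral on [0, pi/4]."""
    return (math.pi / 4) * a / (a * a + b * b) + (b / (a * a + b * b)) * (
        math.log(abs((a + b) / math.sqrt(2))) - math.log(abs(a)))

def simpson(f, lo, hi, steps=2000):
    h = (hi - lo) / steps
    s = f(lo) + f(hi)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

a, b = 2.0, 1.0
num = simpson(lambda x: math.cos(x) / (a * math.cos(x) + b * math.sin(x)),
              0.0, math.pi / 4)
print(num, closed_form(a, b))  # the two values agree to many decimal places
```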
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2010770",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|