Given a convex set in a normed vector space, take a neighbourhood of it. Is it still convex? Consider a normed vector space and a set in it, call it $\mathrm{E}.$ Define the neighbourhood $\mathrm{E}^\eta$ of $\mathrm{E}$ with radius $\eta > 0$ as the set of vectors $v$ whose distance to $\mathrm{E}$ is less than $\eta;$ in symbols, set the function $$v \mapsto \rho(v, \mathrm{E}) = \inf_{e \in \mathrm{E}} \|v - e\|,$$ which is uniformly continuous since $|\rho(v_1, \mathrm{E}) - \rho(v_2, \mathrm{E})| \leq \|v_1 - v_2\|,$ and the neighbourhood is defined to be $\mathrm{E}^\eta = \rho( \cdot, \mathrm{E} )^{-1}(-\infty, \eta)$ (it is readily seen to be an open set).
If $\mathrm{E}$ is convex, is $\mathrm{E}^\eta$ convex?
I think I (almost) have a proof, but it is rather messy: take any two points in $\mathrm{E},$ consider the open balls of radius $\eta$ centred at them, and show that any segment starting in one ball and ending in the other remains in $\mathrm{E}^\eta;$ call $\mathrm{F}$ the union of all such segments. Then $\mathrm{F}$ ought to be convex and coincide with $\mathrm{E}^\eta.$
|
Let $x,y\in E^\eta$. Then there exist $x_E,y_E\in E$ such that $\|x-x_E\|$ and $\|y-y_E\|$ are less than $\eta$. Now take $t\in(0,1)$. Then $tx_E+(1-t)y_E\in E$ and
$$\|tx+(1-t)y-tx_E-(1-t)y_E\|\le t\|x-x_E\|+(1-t)\|y-y_E\|<t\eta+(1-t)\eta=\eta$$
So $tx+(1-t)y\in E^\eta$. This proves the convexity of $E^\eta$.
Perhaps another proof: This is only an unpolished idea.
$E$ is an intersection of half-spaces: $E=\bigcap_i H_i$. Translate the border of each $H_i$ a distance $\eta$ orthogonally, moving away from $E$. You get new open half-spaces $H_i^\eta$, that contain $E^\eta$. Then $E^\eta=\bigcap_i H_i^\eta$.
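A numerical sanity check of the convexity claim, with $\mathrm{E}$ taken to be a segment in $\mathbb{R}^2$ (illustrative only; the choice of segment, radius and sampling scheme are assumptions for the demo):

```python
import random

def rho(v, a=(0.0, 0.0), b=(1.0, 0.0)):
    # distance from v to E = the segment [a, b] in R^2 (a convex set)
    t = max(0.0, min(1.0, ((v[0]-a[0])*(b[0]-a[0]) + (v[1]-a[1])*(b[1]-a[1]))
                          / ((b[0]-a[0])**2 + (b[1]-a[1])**2)))
    px, py = a[0] + t*(b[0]-a[0]), a[1] + t*(b[1]-a[1])
    return ((v[0]-px)**2 + (v[1]-py)**2) ** 0.5

eta = 0.5
random.seed(0)
pts = []
while len(pts) < 200:                    # rejection-sample points of E^eta
    v = (random.uniform(-1, 2), random.uniform(-1, 1))
    if rho(v) < eta:
        pts.append(v)

for _ in range(1000):                    # convex combinations stay inside E^eta
    x, y, t = random.choice(pts), random.choice(pts), random.random()
    z = (t*x[0] + (1-t)*y[0], t*x[1] + (1-t)*y[1])
    assert rho(z) < eta
```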
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2093680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Complex Roots with improper fraction I'm having trouble with the following:
$(-16i)^{5/4}$
My calculation for the principal root is:
$32\left(\cos\left(\frac{3\pi}{2}\cdot\frac54\right) + i \sin\left(\frac{3\pi}{2}\cdot\frac54\right)\right)$
$=32\operatorname{cis}\left(\frac{15\pi}{8}\right)$
This answer does not agree with the online calculators: it gives a positive real value, while the online calculators show a principal angle in Quadrant 3.
I'm confused about what happened here.
|
Let $z=-16i$ and $n=\dfrac54$. To solve, you have to compute $r=|z|$ and the argument $\theta$, where $\tan\theta=\dfrac{y}{x}$.
then
$r=|z|=|-16i|=16$ and argument $\theta=\dfrac{3\pi}{2}$. Then write
$$z_k=r^n(\cos n\theta+i\sin n\theta)$$
But the argument is defined only up to multiples of $2\pi$, so we have
$$z_k=r^n\Big(\cos n(\theta+2k\pi)+i\sin n(\theta+2k\pi)\Big)$$
so
$$z_k=16^\dfrac54\Big(\cos\dfrac54(\dfrac{3\pi}{2}+2k\pi)+i\sin\dfrac54(\dfrac{3\pi}{2}+2k\pi)\Big)$$
and for $k=0,1,2,3$ write
$$z_k=32\Big(\cos(\dfrac{15\pi}{8}+\frac{5k\pi}{2})+i\sin(\dfrac{15\pi}{8}+\frac{5k\pi}{2})\Big)$$
finally for $k=0,1,2,3$ we conclude that
\begin{eqnarray}
k=0 &\Rightarrow& z_0=32\Big(\cos\dfrac{15\pi}{8}+i\sin\dfrac{15\pi}{8}\Big)=29.5-12.24i\\
k=1 &\Rightarrow& z_1=32\Big(\cos\dfrac{35\pi}{8}+i\sin\dfrac{35\pi}{8}\Big)=12.24+29.5i\\
k=2 &\Rightarrow& z_2=32\Big(\cos\dfrac{55\pi}{8}+i\sin\dfrac{55\pi}{8}\Big)=-29.5+12.24i\\
k=3 &\Rightarrow& z_3=32\Big(\cos\dfrac{75\pi}{8}+i\sin\dfrac{75\pi}{8}\Big)=-12.24-29.5i
\end{eqnarray}
This is the step-by-step solution.
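A quick cross-check with Python's `cmath` (not part of the original solution): the principal branch uses $\operatorname{Arg}(-16i) = -\pi/2$, which is why online calculators report a Quadrant 3 value, and each $z_k$ above is indeed a fourth root of $(-16i)^5$:

```python
import cmath

z = -16j
principal = z ** 1.25        # Python uses the principal branch, Arg(z) = -pi/2
# this lands in Quadrant 3 (about -12.25 - 29.56i), like the online calculators
assert principal.real < 0 and principal.imag < 0

# the four values z_k = 32 cis(15*pi/8 + 5*k*pi/2) all satisfy w^4 = (-16i)^5
for k in range(4):
    zk = 32 * cmath.exp(1j * (15 * cmath.pi / 8 + 5 * k * cmath.pi / 2))
    assert abs(zk**4 - z**5) < 1e-5
```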
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2093793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Big-Omega Proof So I'm trying to figure this one out; consider the following problem A.
$f(n) = n^2 - 2n$ and $g(n) = n^2$.
I want to prove that $f(n) \in \Omega(g(n))$ by showing a set of inequalities between $f(n)$ and $g(n)$ to derive the $c > 0$ and $n_0 > 0$.
For example, say
$f(n) = n^2 + 2n$ and $g(n) = n^2$ and $f(n) \in O(g(n))$.
Clearly, $n^2 + 2n \le n^2 + 2n^2 = 3n^2 \quad \forall n_0>0$ Thus, if $c = 3$ and $n_0 > 0$, we have proved $f(n) \in O(g(n))$.
How do I do this for the former problem, problem A?
Thanks.
|
For large $n$,
$$f(n)=n^2-2n \ge \frac{1}{2} n^2 = \frac{1}{2} g(n).$$
To find $n_0$ explicitly, note
$$n^2-2n - \frac{1}{2}n^2= \frac{1}{2} n(n-4) > 0$$
for $n>4$.
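A brute-force check of the constants $c = \frac12$, $n_0 = 4$ (illustrative):

```python
# brute-force check of c = 1/2, n0 = 4 for f(n) = n^2 - 2n in Omega(n^2)
for n in range(5, 10001):
    assert n*n - 2*n > 0.5 * n*n      # strict inequality for n > 4
assert 4*4 - 2*4 == 0.5 * 4*4         # equality holds exactly at n = 4
```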
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2093856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
$A^H=A^{-1}$ implies $\|x\|_2 = \|Ax\|_2$ for any $x\in \mathbb{C}^n$ Show that the following conditions are equivalent.
1) $A\in \mathbb{C}^{n\times n}$ is unitary. ($A^H=A^{-1}$)
2) for all $x \in \mathbb{C}^n$, $\|x\|_2 = \|Ax\|_2$, where $\|x\|_2$ is the usual Euclidean norm of $x \in \mathbb{C}^n. $
I am totally lost on this problem; I would appreciate any hint. Here is my argument.
From $1\to 2$, we get $A^{H}A=I$. By this problem "Prove that $\|A\|_2 = \sqrt{\|A^* A \|_2}$", I can say $\|Ax\|_2=\sqrt{\|A^HAx\|_2}$. Therefore, I get $\|Ax\|_2^2=\|x\|_2$.
I also have trouble showing the equality in the problem "Prove that $\|A\|_2 = \sqrt{\|A^* A \|_2}$".
|
Hint: $||x||^2_2 = x^H x$. What is $(Ax)^H$?
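Spelling out the hint, $\|Ax\|_2^2 = (Ax)^H(Ax) = x^H A^H A x = x^H x = \|x\|_2^2$. A small numerical illustration (building a random unitary via QR is an assumption for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
# build a random unitary A as the Q factor of a complex QR decomposition
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A, _ = np.linalg.qr(M)

assert np.allclose(A.conj().T @ A, np.eye(4))     # A^H A = I
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
# ||Ax||^2 = (Ax)^H (Ax) = x^H A^H A x = x^H x = ||x||^2
assert np.isclose(np.linalg.norm(A @ x), np.linalg.norm(x))
```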
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2093964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Is this induction question wrong or am I going insane? Here's a question that I've come across: Prove by induction that for every integer $n$ greater than or equal to $1$,
$${\sum_{i=1}^{2^n}} \frac{1}{i} \ge 1 +\frac{n}{2}.$$
Now I know how to prove by induction, but wouldn't this fail $p(1)$ since
$$\frac{1}{1} \ge 1 +\frac{1}{2}$$ is not true?
|
Neither: the question is not wrong and you are also not going insane ... You just made a little mistake. :)
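The little mistake: $P(1)$ sums $2^1 = 2$ terms, not one, so it reads $1 + \frac12 \ge 1 + \frac12$, which holds. A quick exact check with rational arithmetic:

```python
from fractions import Fraction

def lhs(n):
    # sum_{i=1}^{2^n} 1/i, computed exactly
    return sum(Fraction(1, i) for i in range(1, 2**n + 1))

# base case n = 1: the sum has 2^1 = 2 terms, so P(1) is 1 + 1/2 >= 1 + 1/2
assert lhs(1) == Fraction(3, 2)
for n in range(1, 9):
    assert lhs(n) >= 1 + Fraction(n, 2)
```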
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2094051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Show that if the sum of components of one vector adds up to 1 then the sum of the squares of the same vector is at least 1/n (NOTE: Already posted here, but closed without an answer)
Hi, I've been trying to complete the following question:
Suppose we have two vectors of $n$ real numbers, $[x_1,x_2,⋯,x_n]$ and $[y_1,y_2 ⋯,y_n]$ and the following inequality holds:
$$(x_1y_1+x_2y_2+⋯+x_ny_n)^2≤(x_1^2+x_2^2+⋯+x_n^2)(y_1^2+y_2^2+⋯+y_n^2)$$
Show that if the sum of components of one vector adds up to 1 then the sum of the squares of the same vector is at least $\frac 1n$.
I've tried a few avenues, and have come up with a proof that I am not confident is right.
Proof by induction:
Base case is $n=1$, which is trivial, since $x_1^2 = 1^2 = 1$ and so
$1 \ge \frac 1 1$. Therefore base case is true.
Assume it is true for n.
$$x_1^2+...+x_n^2+x_{n+1}^2 \ge \frac 1 {n+1}$$
Since $x_1^2+...+x_n^2 \ge \frac 1 n$ by our assumption,
$$\frac 1 n + x_{n+1}^2 \ge \frac 1 {n+1}$$ It is this step that I think is incorrect, as $x_1^2+...+x_n^2$ must get smaller in order to accommodate the new value of $x_{n+1}$ and still have the components sum to 1. Therefore I don't think I can do this step?
$$x_{n+1}^2 \ge \frac 1 {n+1} - \frac 1 n$$
The left hand side must always be $\ge 0$ and the right hand side must always be negative for values of $n \ge 1$. Therefore true for $n+1$, so must be true for all $n$. QED.
Can you confirm it is wrong? If it is wrong, could you please explain how to prove it in your answer. Thanks.
|
You can prove it using Jensen's inequality for $f(x) = x^2$. Since $x^2$ is convex (it opens upwards),
$$f(\text{avg}~ a) \le \text{avg}~f(a)$$
More specifically:
$$\left(\frac 1n \sum_{k=1}^n a_k\right)^2 \le \frac 1n \left(\sum_{k=1}^n a_k{}^2\right)$$
The rest is just algebra, use $\sum_{k=1}^n a_k = 1$ to establish $\sum_{k=1}^n a_k{}^2 \ge \frac{1}{n}$.
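A quick numerical illustration of the inequality (random components normalised to sum to 1; illustrative only):

```python
import random

random.seed(1)
n = 10
raw = [random.random() for _ in range(n)]
s = sum(raw)
a = [r / s for r in raw]                  # components now sum to 1

assert abs(sum(a) - 1) < 1e-12
assert sum(x * x for x in a) >= 1.0 / n   # Jensen: sum of squares >= 1/n
```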
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2094253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Natural deduction proof of $p \rightarrow q \vdash \lnot(p \land \lnot q)$ So yeah, the entire question is pretty much in the title.
$$p \rightarrow q \vdash \lnot(p \land \lnot q)$$
I've been able to derive the reverse, but I don't know how to logically go from the premise to the conclusion using natural deduction only. I can see that the two formulas are equivalent using transformations.
These are the rules I'm allowed to use:
Please help me understand how to do this.
|
$1.$ $p \rightarrow q \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $ -- (Premise)
$2.$ $p \wedge \neg q \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $ -- (Assume the contrary to what has to be proved in the conclusion)
$3.$ $p \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $ -- ($\wedge E$ on $2.$)
$4.$ $\neg q \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $ -- ($\wedge E$ on $2.$)
$5.$ $q \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $ -- (Modus ponens on $1.$ and $3.$)
$6.$ $\bot \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $ -- (bot introduction due to contradiction on $4.$ and $5.$)
$7.$ $\neg(p \wedge \neg q) \ \ \ \ \ \ \ \ \ \ \ $ --(assumption wrong due to arrival of contradiction)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2094334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
finding $ \int^{4}_{0}(x^2+1)d(\lfloor x \rfloor),$ given $\lfloor x \rfloor $ is the floor function of $x$ Find $\displaystyle \int^{4}_{0}(x^2+1)\,d(\lfloor x \rfloor),$ where $\lfloor x \rfloor $ is the floor function of $x$.
Assume $\displaystyle I = (x^2+1)\lfloor x \rfloor \bigg|^{4}_{0}-2\int^{4}_{0}x\lfloor x \rfloor dx$ ( integration by parts )
I have a doubt about the limits: I did not understand whether the limits correspond to $x$
or to $\lfloor x \rfloor$,
because when we take $\displaystyle \int^{b}_{a}f(x)dx,$ the limits correspond to $x$,
but when we take $\displaystyle \int^{b}_{a}f(x)d(\lfloor x \rfloor ),$ the limits correspond to $\lfloor x \rfloor$.
Please clarify my doubt and also explain what is wrong with my method above, thanks
|
About your attempt to integrate by parts: instead of doubting about the legitimacy of the change of the "corresponding limits", I advise you take the following steps:
1). Try to understand the basic theory of Riemann–Stieltjes integration.
2). Find a proof of integration by parts for R-S integrals, be careful about the assumptions, and then try to fully understand the proof. (A quick search on this site gives many, for example.)
A helpful answer would have to be based on the definition you adopt for this integral. Here I give two treatments, which should lead to the same result.
1). From a measure-theoretic perspective, we have to find the measure $\mu$ on $\Bbb R$ induced by $g(x)=\lfloor x\rfloor$, in a way such that $\mu((c,d])=g(d)-g(c)$ for $c<d$. Clearly, $\mu$ is an atomic measure which assigns mass $1$ to each integer point of $\Bbb R$ and $0$ elsewhere, which basically says that you need only care about the integer points, each carrying mass $1$, in your integration domain, i.e.
$$\int_A f(x)\mathrm dg(x)=\sum_{i\in \Bbb Z\cap A} f(i).$$
2) Yet another, simpler-to-understand treatment is the Riemann–Stieltjes integral. The details of the computation bear much resemblance to computing a Riemann integral by definition, the only change being to replace $\Delta x_i$ by $\Delta g_i$. For a fuller explanation, you should be able to check the definitions and relevant properties online (where resources are plenty) by yourself.
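For instance, with the convention $\mu((c,d])=g(d)-g(c)$, the atoms in $(0,4]$ are $1,2,3,4$, so $\int_0^4 (x^2+1)\,d\lfloor x\rfloor = f(1)+f(2)+f(3)+f(4) = 2+5+10+17 = 34$; a Riemann–Stieltjes sum on a fine partition agrees (a numerical sketch):

```python
import math

def f(x):
    return x * x + 1

# Riemann-Stieltjes sum on a fine uniform partition of [0, 4]
N = 100000
rs = 0.0
for i in range(N):
    x0, x1 = 4 * i / N, 4 * (i + 1) / N
    rs += f((x0 + x1) / 2) * (math.floor(x1) - math.floor(x0))

# the measure d(floor x) has unit atoms at the integers 1, 2, 3, 4 in (0, 4]
exact = sum(f(j) for j in (1, 2, 3, 4))     # 2 + 5 + 10 + 17 = 34
assert exact == 34
assert abs(rs - exact) < 1e-3
```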
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2094459",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
}
|
Connection between rank and matrix product I have a problem understanding the following:
Let $A$ be an $m \times n$ matrix and let t $\in \mathbb{N}$. Prove that
$\operatorname{rank}(A)\leq t$ if and only if there exists an $m \times t$ matrix $B$ and a $t \times n$ matrix $C$ so that $A = BC$.
I know what a rank is but I can't make a connection between the rank and the existence of two matrices such that $A = BC$.
|
Think of matrices as linear transformations: $A:F^n\to F^m$, $C:F^n\to F^t$, $B:F^t\to F^m$. If $\mathrm{rk}(A)\le t$, then $\dim\mathrm{Im}(A)\le t$, so there exists a $t$-dimensional subspace $T$ of $F^m$ such that $\mathrm{Im}(A)\subset T$. Consequently, if we corestrict $A$ to $C:F^n\to T$ (identifying $T$ with $F^t$), and let $B:T\to F^m$ be the canonical inclusion, we have $A=BC$. The converse is obvious: If $A=BC$, then
\begin{equation}
\mathrm{rk}(A)=\mathrm{rk}(BC)\le\mathrm{dim}(\mathrm{Im}(B))\le t,
\end{equation}
because the domain of $B$ is $t$-dimensional.
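Both directions can be illustrated numerically; the SVD here is just one convenient way to produce the factors $B$ and $C$ (a sketch, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, t = 6, 5, 3

# direction 1: A = BC with B (m x t) and C (t x n) forces rank(A) <= t
B = rng.standard_normal((m, t))
C = rng.standard_normal((t, n))
A = B @ C
assert np.linalg.matrix_rank(A) <= t

# direction 2: a matrix of rank <= t factors through F^t, e.g. via truncated SVD
U, s, Vt = np.linalg.svd(A)
B2 = U[:, :t] * s[:t]        # m x t: first t left singular vectors, scaled
C2 = Vt[:t, :]               # t x n: first t right singular vectors
assert np.allclose(A, B2 @ C2)
```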
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2094588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Two circles touch each other externally at the point O... I am stuck on the following elementary problem that says:
Two circles touch each other externally at the point O. If PQ and RS are two diameters of these two circles respectively and $PQ \parallel RS$, then prove that $P,O,S$ are collinear.
My Try:
I joined the points $P,O,S$ and also the points $Q,O$ and $R,O$.
From the given condition , $\angle POQ=90^{\circ}=\angle ROS$. [since,Any angle inscribed in a semi-circle is a right angle.]
Now,from the figure we see $\angle POR=180^{\circ}-90^{\circ}=\angle QOS$.
And hence,$\angle POQ+\angle QOS=90^{\circ}+90^{\circ}=180^{\circ}=\angle POS$.
Hence ,we can conclude P,O,S are collinear.
Can someone verify it ? Am I right?
Thanks in advance for your time.
|
I do not think that your approach is valid, since you have assumed that $QOR$ and $POS$ are collinear and then calculated
$\angle POR=180^{\circ}-90^{\circ}=\angle QOS$
What you can do instead, is to draw the line between centers of circles. The center of the left circle is $C_1$ and the right one is $C_2$. Also, draw a line from $P$ to $S$. Note that we do not know if $PS$ passes $O$. The line that passes through the centers meets $PS$ at $X$. Now, you can easily see that triangles $PC_1X$ and $SC_2X$ are similar. Therefore, the ratio of their edges is
$\frac{|C_1P|}{|C_2S|}=\frac{|C_1X|}{|C_2X|}=\frac{r_1}{r_2}$
where $r_1$ and $r_2$ are the radii of the circles.
Therefore, we know that $X$ is on the line $C_1C_2$ and it also has to satisfy the ratio. So, it must be the same point as $O$.
To be more specific, we have two equations and two unknowns
$|C_1X|+|C_2X|=r_1+r_2$
$\frac{|C_1X|}{|C_2X|}=\frac{r_1}{r_2}$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2094692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Automorphisms of order 2 Let $G$ be a finite group with no element of order $p^2$ for each prime $p$. Does there always exist an automorphism $\phi$ of order 2 such that for at least one subgroup of $G$ say $H$, we have $\phi(H)\neq H$?
Update: What if we add the supposition that $G$ is not cyclic of prime order?
|
No, take for example the cyclic group $C_2$. It certainly has no element of order $p^2$. Nevertheless, every automorphism is the identity, i.e., $\phi(H)=H$. So there exists no automorphism of order $2$ with $\phi(H)\neq H$ for some $H$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2094903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Equivalence classes in $\mathbb{R}^{3}$ such that $Mv_{1} = v_{2}$ for orthogonal $M$
Let $ A = \mathbb{R}^{3}$, let $R$ be an equivalence relation such that $\left(v_{1},v_{2}\right) \in R$ if and only if $\exists P \in M_{3 \times 3}\left(\mathbb{R}\right)$ such that $P$ is orthogonal and $Pv_{1} = v_{2}$
Describe and sketch the equivalence classes of $R$ in $\mathbb{R}^{3}$
I am having difficulty attempting this question. I attempted to find the equivalence class, $C$, of $\left(1,0,0\right)^{T}$ for which I used: $$P = \begin{bmatrix}1 & 0 & 0 \\ 0 & \cos\left(\theta\right) & \sin\left(\theta\right)\\ 0 & -\sin\left(\theta\right) & \cos\left(\theta\right)\\ \end{bmatrix}$$ and said that $\displaystyle v = \left(\alpha,\beta,\gamma\right)^{T} \in C \implies \alpha = 1, \tan\left(\theta\right) = \frac{\gamma - \beta}{\gamma + \beta}$
But I feel that I am not following the right route with this approach. Thank you in advance.
|
An orthogonal matrix is an isometry when viewed as an operator. So for any orthogonal matrix $P$ this holds: $|v|=|Pv|$. In particular if $(v_1, v_2)\in R$ then $|v_1|=|v_2|$.
The other implication also holds, i.e. if $|v_1|=|v_2|$ then you can find an orthogonal matrix $P$ such that $Pv_1=v_2$ (for explicit construction in dimension 3 see this).
All in all, an equivalence class of $v$ in $R$ is a set of all vectors with the same norm, i.e. a sphere.
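For the explicit construction, one concrete choice is a Householder reflection: if $|v_1| = |v_2|$, then $P = I - 2ww^T/(w^Tw)$ with $w = v_1 - v_2$ is orthogonal and maps $v_1$ to $v_2$. A numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
v1 = rng.standard_normal(3)
# pick v2 with the same norm: same sphere, hence same equivalence class
v2 = rng.standard_normal(3)
v2 *= np.linalg.norm(v1) / np.linalg.norm(v2)

# Householder reflection about w = v1 - v2 (valid since |v1| = |v2|)
w = v1 - v2
P = np.eye(3) - 2 * np.outer(w, w) / (w @ w)

assert np.allclose(P.T @ P, np.eye(3))   # P is orthogonal
assert np.allclose(P @ v1, v2)           # and maps v1 to v2
```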
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2095026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Prove that $\int_0^u\frac{\sin(mt)}{2\sin(t/2)}dt$ is bounded While reading Fourier series theory, I came across a property stated in the book without the details.
That property is:
$\displaystyle\int_0^u\frac{\sin(mt)}{2\sin(t/2)}dt$ is bounded for
all $m$ and $0\leq u\leq\pi$
I want to ask how to prove it. Also, in what situation do we need this theorem? Maybe it can simply be found in other books on this topic, but I don't have them.
|
Notice that, in general, for $n \ge 0$
$$\cos nx + \Bbb i \sin nx = \Bbb e ^{\Bbb i n x} = (\Bbb e ^{\Bbb i x})^n = (\cos x + \Bbb i \sin x)^n = \sum _{k=0} ^n \binom n k \Bbb i ^k \cos ^{n-k} x \sin ^k x$$
whence equating the imaginary parts in both sides gives us
$$\sin nx = \binom n 1 \cos ^{n-1} x \sin x - \binom n 3 \cos ^{n-3} x \sin ^3 x + \binom n 5 \cos ^{n-5} x \sin ^5 x - \\ \binom n 7 \cos ^{n-7} x \sin ^7 x + \dots = (\sin x) P_{n-1} (\cos x)$$
with $P_{n-1} = a_{n-1,n-1} x^{n-1} + a_{n-1,n-2} x^{n-2} + \dots + a_{n-1,0}$ a polynomial function of degree $n-1$.
It follows that
$$\sin mt = \sin \left( 2m \frac t 2 \right) = \left( \sin \frac t 2 \right) P_{2m-1} \left( \cos \frac t 2 \right) ,$$
whence, using the obvious fact that $\int _a ^b f \le \int _a ^b |f|$,
$$\left| \int \limits _0 ^u \frac {\sin mt} {\sin \frac t 2} \ \Bbb d t \right| = \left| \int \limits _0 ^u \frac {\left( \sin \frac t 2 \right) P_{2m-1} \left( \cos \frac t 2 \right)} {\sin \frac t 2} \ \Bbb d t \right| = \left| \int \limits _0 ^u P_{2m-1} \left( \cos \frac t 2 \right) \ \Bbb d t \right| \le \int \limits _0 ^u \left| P_{2m-1} \left( \cos \frac t 2 \right) \right| \ \Bbb d t \le \\
\sum _{k=0} ^{2m-1} |a_{2m-1,k}| \int \limits _0 ^u \left| \cos ^k \frac t 2 \right| \ \Bbb d t \le \sum _{k=0} ^{2m-1} |a_{2m-1,k}| \int \limits _0 ^u 1 \ \Bbb d t = \sum _{k=0} ^{2m-1} |a_{2m-1,k}| u \le \sum _{k=0} ^{2m-1} |a_{2m-1,k}| \pi$$
which is a constant independent of $u$.
Of course, the much simpler approach would be to notice that the only point where the integrand might raise problems is $t=0$, but there
$$\lim _{t \to 0} \frac {\sin mt} {\sin \frac t 2} = 2m ,$$
so using the comparison test for improper integrals your integral must converge, i.e. be finite for all $0 \le u \le \pi$. Since the result of the integration is continuous with respect to $u$, and since $[0, \pi]$ is compact, Weierstrass's extreme value theorem tells us that your integral is bounded and attains its bounds.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2095120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
consistency of numerical methods for ODE I was reading the Wikipedia article on numerical methods for ODEs
https://en.wikipedia.org/wiki/Numerical_methods_for_ordinary_differential_equations#Consistency_and_order
and I saw that when it discusses "consistency and order", the consistency is defined as
$$\lim_{h\to 0} \frac{\delta^h_{n+k}}{h} = 0$$
where $\delta^h_{n+k}$ is the local truncation error with mesh size $h$.
Why is consistency defined this way? My guess for the definition of consistency was ${\delta^h_{n+k}} \to 0$ as $h\to 0$.
|
Suppose you have $y(0)$ and your goal is to obtain $y(t)$ where $t$ is some small positive number. If $t$ is small enough and the method is consistent, you should be able to run your numerical method with $h=t/N$ for large $N$, and then as $N \to \infty$ you should get convergence to $y(t)$. This means that you will incur the local truncation error $N=O(1/h)$ times in the course of the calculation. Thus the local truncation error had better be $o(h)$ so that the overall error is $o(1)$.
(If $t$ is no longer required to be small then you need stability as well, as you probably are aware.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2095242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
If $k$ is a field of characteristic $p>0$ and $G$ is a finite group of order divisible by $p$ then $k[G]$ is not a semi-simple ring. If $k$ is a field of characteristic $p>0$ and $G$ is a finite group of order divisible by $p$ then $k[G]$ is not a semi-simple ring.
My failed attempt:
Since $G$ is finite and $p$ divides $|G|$, $G$ has an element of order $p$. I wanted to use the cyclic subgroup generated by this element to create an ideal of $k[G]$ and show that it could not be generated by an idempotent element, but I got stuck.
|
Hint:
Square $\sum_{g\in G}g$, and observe that it is central.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2095370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Factoring an expression with three variables The expression is $(a-b)c^3+(b-c)a^3-(a-c)b^3$.
I've tried factoring by grouping, but it has been a trial-and-error situation. I don't know any strategies for factoring expressions like that, so I'm basically just guessing.
There are a ton of questions like this one in the book I'm studying right now and I can't solve any of them; it just feels like I have to guess, since I don't know any method for this.
|
One may first set the expression equal to $0$.
$$0=(a-b)c^3+(b-c)a^3+(c-a)b^3$$
One can then see that the expression vanishes whenever $a=b$, $a=c$, or $b=c$. Turning these into factors that we can use, we get, as a polynomial in $a$,
$$(a-b)(a-c)(b-c)P(a)=(a-b)c^3+(b-c)a^3+(c-a)b^3$$
whereupon one will find that
$$P(a)=a+b+c$$
Thus,
$$(a-b)c^3+(b-c)a^3+(c-a)b^3=(a-b)(a-c)(b-c)(a+b+c)$$
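A quick sanity check of the identity (since both sides have degree 4, agreement on a large integer grid confirms they coincide as polynomials):

```python
from itertools import product

def lhs(a, b, c):
    return (a - b)*c**3 + (b - c)*a**3 + (c - a)*b**3

def rhs(a, b, c):
    return (a - b)*(a - c)*(b - c)*(a + b + c)

# both sides are degree-4 polynomials in (a, b, c); exact agreement on an
# integer grid this large forces them to be identical
for a, b, c in product(range(-5, 6), repeat=3):
    assert lhs(a, b, c) == rhs(a, b, c)
```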
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2095461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Eigenvectors of complex matrix I'm working on a problem where I am trying to find the eigenvectors of a pretty complicated matrix, and I am in need of some assistance. The matrix in question is:
$$A =\begin{bmatrix}
\sin(x) & \cos(x)\cos(y) - i\cos(x)\sin(y)\\
\cos(x)\cos(y) + i\cos(x)\sin(y) & -\sin(x)\\
\end{bmatrix}$$
I know that the matrix is Hermitian, so that it is equal to its own conjugate transpose. Moreover, the eigenvalues are $\lambda = \pm 1$, as $A^2 = I$. However, I'm not sure how to use these properties to find the possible eigenvectors (if that would even help), and I would like to avoid doing it by brute force if possible, as it seems unwieldy.
Thus far, I have tried to separate the matrix into real and imaginary parts, but that didn't seem to help. I also had the thought to assume diagonalization in an attempt to find the diagonalizing unitary matrix (and, in turn, the eigenvectors), but I don't see that making things much nicer either. Any help would be greatly appreciated.
|
Note that
$$
A=DQD^{-1},\ D=\pmatrix{1\\ &e^{iy}},\ Q=\pmatrix{\sin x&\cos x\\ \cos x&-\sin x}.
$$
It follows that if $v$ is an eigenvector of $Q$, then $Dv$ is an eigenvector of $A$.
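A numerical check of the similarity and the eigenvector correspondence, with illustrative values for $x$ and $y$:

```python
import numpy as np

x, y = 0.7, 1.3
A = np.array([[np.sin(x), np.cos(x)*np.cos(y) - 1j*np.cos(x)*np.sin(y)],
              [np.cos(x)*np.cos(y) + 1j*np.cos(x)*np.sin(y), -np.sin(x)]])
D = np.diag([1, np.exp(1j*y)])
Q = np.array([[np.sin(x), np.cos(x)], [np.cos(x), -np.sin(x)]])

assert np.allclose(A, D @ Q @ np.linalg.inv(D))   # A = D Q D^{-1}

# eigenvectors v of the real symmetric Q give eigenvectors Dv of A
w, V = np.linalg.eigh(Q)
for lam, v in zip(w, V.T):
    assert np.allclose(A @ (D @ v), lam * (D @ v))
```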
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2095575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
A Ramanujan infinite series $$ 1-5\left(\frac{1}{2}\right)^3+9\left(\frac{1 \cdot 3}{2 \cdot 4}\right)^3-13\left(\frac{1 \cdot 3 \cdot 5}{2 \cdot 4 \cdot 6}\right)^3+\cdots $$
I went on evaluating the above series and encountered that solving $\displaystyle \sum_{n\ge 0}\left(\binom{2n}{n}\right)^3x^n$ would suffice.
But how do we make a generating function for the third power of a central binomial coefficient using the fact $\displaystyle \sum_{n\ge 0}\binom{2n}{n}x^n=\frac{1}{\sqrt{1-4x}}$
|
Modifying @Claude Leibovici's answer a little. We have
$$\frac{1}{\left(1-z\right)^{a}}=\,_{1}F_{0}\left(a;;z\right)={\displaystyle \sum_{n=0}^{\infty}\frac{(a)_{n}}{n!}z^{n}}$$
and
$$\frac{1}{\sqrt{1-4x}}={\displaystyle \sum_{n=0}^{\infty}\binom{2n}{n}x^{n}}.$$
Let $z=4x$, i.e. $x=\frac{z}{4}$; then
$$\frac{1}{\left(1-4x\right)^{a}}=\,_{1}F_{0}\left(a;;4x\right)={\displaystyle \sum_{n=0}^{\infty}\frac{(a)_{n}}{n!}\,\left(4x\right)^{n}},$$
and comparing coefficients with $a=\frac12$ gives
$$4^{n}\,\frac{\left(\frac{1}{2}\right)_{n}}{n!}=\binom{2n}{n}.$$
This yields the generating function for $\binom{2n}{n}^{k}$, namely
$$\,_{k}F_{k-1}\left(\underbrace{\tfrac{1}{2},\dots,\tfrac{1}{2}}_{k};\underbrace{1,\dots,1}_{k-1};4^{k}x\right)={\displaystyle \sum_{n=0}^{\infty}}\binom{2n}{n}^{k}x^{n},$$
where we “borrowed” the $n!=\left(1\right)_{n}$ in the denominator.
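As a sanity check on the key coefficient identity $4^n\left(\frac12\right)_n/n! = \binom{2n}{n}$, it can be verified exactly with rational arithmetic (illustrative sketch):

```python
from fractions import Fraction
from math import comb, factorial

def pochhammer(a, n):
    # rising factorial (a)_n = a (a+1) ... (a+n-1)
    p = Fraction(1)
    for i in range(n):
        p *= a + i
    return p

# verify 4^n * (1/2)_n / n! = C(2n, n) exactly for the first few n
for n in range(25):
    lhs = Fraction(4)**n * pochhammer(Fraction(1, 2), n) / factorial(n)
    assert lhs == comb(2*n, n)
```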
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2095780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
}
|
Proving a limit equals zero I have to prove, without L'Hôpital's rule, the following limit:
$$\lim_{x \to \infty}\sqrt{x} \sin \frac{1}{x} =0$$
I tried doing a variable change, setting $t=\frac{1}{x}$ and reaching the following: $$\lim_{t\to 0} \sqrt{\frac{1}{t}} \sin t $$
But I can't prove either version. I tried the second one with the squeeze theorem, but I couldn't finish the proof.
Thanks!
|
$$\lim_{t \to 0} \frac{1}{\sqrt{t}} \sin(t) = \lim_{t \to 0} \frac{1}{t} \sin(t^2) = \lim_{t \to 0} \frac{\sin t^2 - \sin 0^2}{t} = f'(0)$$
where $f(x) = \sin x^2$.
By the chain rule, $f'(x) = 2x \cos x^2$, which is $0$ at $x=0$.
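The squeeze bound $|\sqrt{x}\sin(1/x)| \le \sqrt{x}\cdot\frac{1}{x} = \frac{1}{\sqrt{x}}$ is also easy to observe numerically (a small check):

```python
import math

# squeeze: |sqrt(x) sin(1/x)| <= sqrt(x) * (1/x) = 1/sqrt(x) -> 0
for x in (1e2, 1e4, 1e6, 1e8):
    v = math.sqrt(x) * math.sin(1 / x)
    assert abs(v) <= (1 + 1e-12) / math.sqrt(x)   # tiny slack for rounding
assert abs(math.sqrt(1e8) * math.sin(1e-8)) < 1e-3
```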
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2095879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
}
|
suggestion for implicit equation problem Hey guys,
Given $$a = b\ e^{mt}-c\ e^{kt},$$
where $a,b,c,m,k$ are constants.
I am trying to solve this implicit equation for $t$. Can anyone suggest a theorem/method which can help me solve this?
Thanks
|
By setting $x=e^{mt}$, you can put the equation in the form
$$x^\alpha=px+q,$$
which is the intersection of a power law and a straight line.
In a few particular cases $\alpha=2,3,4,\frac12,\frac13,-1,-2,\cdots$ you can use the formulas for the polynomials up to quartic. But in general, there is no closed form, and you will need to resort to numerical methods.
You can discuss the number of real roots by changing the value of $q$ until the line is tangent to the curve, which occurs when
$$\alpha x^{\alpha-1}=p,$$ i.e.
$$x^*=\left(\frac p\alpha\right)^{1/(\alpha-1)}$$ and
$$q^*=\left(\frac p\alpha\right)^{\alpha/(\alpha-1)}-p\left(\frac p\alpha\right)^{1/(\alpha-1)}.$$
From one side to the other of $q^*$, the number of roots changes by two units.
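In general one resorts to numerical root-finding on the original equation. Below is a bisection sketch with hypothetical constants (chosen with $k = 2m$, so the substitution $u = e^{mt}$ gives a quadratic and a closed form to compare against):

```python
import math

# hypothetical sample constants; with k = 2m, u = e^{mt} satisfies the
# quadratic b u - c u^2 = a, giving a closed-form root for comparison
a, b, c, m, k = 1.0, 4.0, 2.0, -1.0, -2.0

def g(t):
    return b * math.exp(m * t) - c * math.exp(k * t) - a

lo, hi = 0.0, 10.0                    # bracketing interval: g(0) > 0 > g(10)
assert g(lo) > 0 > g(hi)
for _ in range(100):                  # plain bisection (g is decreasing here)
    mid = (lo + hi) / 2
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
t = (lo + hi) / 2

assert abs(g(t)) < 1e-9
# closed form: e^{-t} = 1 - sqrt(2)/2, i.e. t = -ln(1 - sqrt(2)/2)
assert abs(t + math.log(1 - math.sqrt(2) / 2)) < 1e-9
```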
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2095991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How do I solve an ellipse with three chord lengths and angles? I have a plane on which is a circle; there are three arbitrary points on the circle ($A$, $B$ and $C$) of which the relative angles are known. The plane the circle is on is then rotated about an arbitrary line on the plane (through the centre of the circle) to create an ellipse with the three points on it.
I have the equation for the length of the chords from a previous question answered by @coffeemath:
$$l = \sqrt{a^2(\cos(t)-\cos(t1))^2 + b^2(\sin(t)-\sin(t_1))^2}$$
This is complicated by the fact that the points are not going to be in line with the ellipse axes and will be off by an angle $R$:
$$l = \sqrt{a^2(\cos(t+R)-\cos(t_1+R))^2 + b^2(\sin(t+R)-\sin(t_1+R))^2}$$
If I have the three original theta angles ($t_A$, $t_B$ and $t_C$) from the circle, and the three chord lengths ($d_{AB}$, $d_{AC}$ and $d_{BC}$), how do I go about working out $a$, $b$ and $R$ (where $a$ and $b$ are the ellipse major/minor axes and $R$ is the rotation of the circle on the plane which would map the ellipse axes to the circle axes)?
In theory the system should produce four answers: two values of $R$ (within a $2\pi$ rotation), and the major/minor ellipse axes can be switched. The four solutions would be based on the angle of the line of rotation on the plane and the sign of the rotation itself.
Thank you in advance!
Lee
|
If I understand the question properly,
You have 3 points on a circle given by the angles $(t, t_1, t_2) $ which the points make with (say) the x-axis.
Now the circle has undergone a stretch:
$$ (r\cos t,r\sin t) \mapsto \begin{bmatrix} a/r &&0 \\ 0 && b/r \end{bmatrix} (r \cos t,r \sin t)$$
Which makes it an ellipse. It is also rotated by an angle $R$.
Now you are given the lengths of the 3 chords connecting the original 3 points.
First, since you are only given lengths in the ellipse, I don't think there is a way of finding the angle $R$ since that leaves the lengths unchanged.
Second, note that the transformation above does not change the parametric angle of the point in the ellipse. So your angles remain unchanged (except for a $+R$ shift). These are different from the geometric angles of the points which indeed transform to $ \tan^{-1} (\frac{b}{a} \tan t). $
But you can use the chord lengths and the formula you mentioned to form 3 equations in 3 variables: $a, b, r$ where $r$ is the radius of the circle.
Hope this helps.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2096067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Why isn't the golden ratio defined as the points where $f(x)=x^3-x^2-x$ is zero? I was messing around on Desmos, and when I plugged in $f(x) = x^2 - x - 1$, I got two points where $f(x)$ is zero, which give the golden ratio. Why is this not used in the definition? It seems so much clearer to me.
Link: https://www.desmos.com/calculator/qmmlhbtwog
|
It is clearer to you, maybe. There are LOTS of equations whose solutions include the golden ratio.
But its definition comes from geometry, like many other mathematical constants such as $\pi$ and $\sqrt{2}$.
The golden ratio is defined in this way: it's the ratio of two numbers which is also equal to the ratio between their sum and the larger of the two; that is, naming them $a$ and $b$ with $a > b$,
$$\frac{a+b}{a} = \frac{a}{b} = \phi$$
This definition comes in handy because it shows many interesting properties of the golden ratio such as:
*
*$\phi^{-1} = 1 - \phi$
*$ \phi^2 = \phi + 1$
It's also straightforward to derive the golden ratio from this definition, since it is... the definition!
Your equation cannot be solved that easily by hands, whereas the definition for $\phi$ is immediate.
$$\frac{a+b}{a} = \frac{a}{b} = \phi$$
hence
$$\frac{a+b}{a} = 1 + \frac{b}{a} = 1 + \frac{1}{\phi} \qquad \left(\text{since } \frac{a}{b} = \phi\right)$$
That is
$$1 + \frac{1}{\phi} = \phi$$
That is
$$\phi^2 - \phi - 1 = 0$$
From which the golden ratio can be easily calculated.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2096171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Quotient Modules of a polynomial ring Let $R = K[x]$, $K$ a field. Define for $a \in K$ the ideal $I_a := (x-a)$ in $K[x]$ and see $I_a$ as an $R$-module. Using that for $a \in K$ and $b \in K$, $I_a$ and $I_b$ are isomorphic, prove that for $a \neq b$ the quotient modules $R/I_a$ and $R/I_b$ are not isomorphic as $R$-modules.
How should I prove this? I have tried doing it with the first isomorphism theorem for modules, constructing a function $ \theta $ with $ ker(\theta) = I_a$ but I don't seem to get anywhere. Does anyone know a proper solution to this question? Thanks.
|
Suppose you have an isomorphism $f:R/I_b \to R/I_a$ of $R$-modules. Observe that $x=b$ in $R/I_b$ and $x=a$ in $R/I_a$. Then you have
$$ bf(1)=f(b)=f(x)=xf(1)=af(1) $$
Now recall that the module structure in $R/I_a$ comes from the projection map $\pi$. Now by assumption $a\neq b$. And this means $(a-b)f(1)=0$ implies that $f(1)\in I_a$ since $a-b \notin (x-a)=ker(\pi)$ but this means $f(1)=0 $ in $R/I_a$ i.e. $f(1)\in ker(f)$ contradicting $f$ being an isomorphism.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2096253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Solving equation over $\mathbb{Z}$ involving squares
Solve the equation $$a^2 - 10b^2 = 2$$ for $a,b \in \mathbb{Z}$.
I tried to consider the equation as a polynomial in the indeterminate $a$ and what I get is $$a_{1,2} = \pm \sqrt{10b^2 + 2}$$ which does not really help (I would have to find $b \in \mathbb{Z}$ such that $10b^2 + 2 = n^2$ for some natural number $n$). So I looked at the solutions where it is written: obviously, this equation is not solvable over $\mathbb{Z}$. For me, it is not that obvious. Could anyone please show me how to see, that this equation is not solvable over $\mathbb{Z}$?
|
If you consider the equation modulo $5$, you get $$a^2\equiv2\pmod5.$$ Checking the squares modulo $5$, we see that $2$ isn't a square, hence this equation has no solutions in the integers.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2096378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to approximate 1/3 by only add/subtracting powers of 2 How approximate $\frac{1}{3}$ up to four significant digits by using only $\pm2^n$ where $n$ is a negative integer.
Preliminary attempt/example:
$$0.33\approx0.25+0.0625=0.3125$$
$$=2^{-2}+2^{-4}$$
|
Hint: The series
$$\frac12 - \frac1{2^2} + \frac1{2^3} - \cdots + \frac{(-1)^{n-1}}{2^n} + \cdots$$
converges to $\dfrac13$.
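As a sanity check (my sketch, not part of the original hint), one can sum the alternating series until the alternating-series error bound guarantees four significant digits of $1/3$:

```python
def approx_third(tol=5e-5):
    # Alternating series 1/2 - 1/4 + 1/8 - ... sums to 1/3; the truncation
    # error is at most the first omitted term, so stop once 2^-(n+1) < tol.
    s, n = 0.0, 0
    while 2.0 ** -(n + 1) >= tol:
        n += 1
        s += (-1) ** (n - 1) / 2 ** n
    return s, n

value, terms = approx_third()
print(terms, value)  # 14 terms give 0.33331..., i.e. 0.3333 to four significant digits
```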
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2096466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
What does the following statement in the definition of right inverse mean? ("For $b\in B$, $b\neq a\alpha$ for any $a$, define $b \beta=a_{1}\in A$") Question:
Let $A$ and $B$ be arbitrary sets, with $\alpha:A\rightarrow B$ an injection. Show how to define $\beta:B\rightarrow A$ such that $\alpha \beta$ is the identity function on $A$.
Solution:
For $a\in A$ define $(a\alpha)\beta=a$. For $b\in B$, $b\neq a\alpha$ for any $a$, define $b \beta=a_{1}\in A$.
Source: Groups: A Path To Geometry by R. P. Burn. Chapter: 1 Question: 24
The injection has the property $x\alpha=y\alpha \Rightarrow x=y$.
My problem lies in understanding this statement "For $b\in B$, $b\neq a\alpha$ for any $a$, define $b \beta=a_{1}\in A$".
Does "$b\neq a\alpha$ for any $a$" mean that no image of any $a\in A$ can be equal to itself? Why must this be true?
|
I’ll rephrase the solution in what I hope is a more understandable way.
We need to define $b\beta$ for each $b\in B$. There are two kinds of elements of $B$: those that are in the range of $\alpha$, and those that are not. If $b=a\alpha$ for some $a\in A$, we define $b\beta=a$. Now fix a particular $a_1\in A$. If there is no $a\in A$ such that $b=a\alpha$, we define $b\beta=a_1$.
The last two sentences correspond to the statement about which you’re asking.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2096658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Is this a trick question? Balls into urns probability There are two urns. Urn 1 contains 3 white and 2 red ball, urn 2 one white and two
red. First, a ball from urn 1 is randomly chosen and placed into urn 2. Finally, a ball from
urn 2 is picked. This ball be red: What is the probability that the ball transferred from
urn 1 to urn 2 was white?
My answer is 3/5 - 3W over 5 balls in urn 1 as the second event does not tell me anything to influence which ball was transferred since urn 2 already has a red ball anyways. Hope to know if my reasoning is valid!
|
Another way to think about it: if $1W$ is transferred, urn 2 $= (1W + 2R)$ becomes $(2W + 2R)$. Now the number of white balls equals the number of red balls, so $P(W) = P(R) = 1/2$ for urn 2.
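The exact conditional probability works out to $\tfrac12$, and a quick Monte Carlo simulation (my addition, not part of the original answer) agrees:

```python
import random

def p_white_given_red(trials=200_000, seed=1):
    # Transfer a random ball from urn 1 (3W, 2R) to urn 2 (1W, 2R),
    # draw from urn 2, and estimate P(transferred ball was white | draw is red).
    rng = random.Random(seed)
    red = white_and_red = 0
    for _ in range(trials):
        transferred = rng.choice("WWWRR")      # urn 1: 3 white, 2 red
        urn2 = list("WRR") + [transferred]     # urn 2 after the transfer
        if rng.choice(urn2) == "R":
            red += 1
            white_and_red += transferred == "W"
    return white_and_red / red

print(p_white_given_red())  # close to the exact value 1/2, not 3/5
```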
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2096757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Do the Laurent polynomials over $\mathbb{Z}$ form a principal ideal domain?
I'm trying to prove whether or not the Laurent polynomials $\mathbb{Z}[x, x^{-1}]$ with coefficients in $\mathbb{Z}$ form a principal ideal domain.
I know that $\mathbb{F}[x, x^{-1}]$ is a PID when $\mathbb{F}$ is a field, but clearly $\mathbb{Z}$ is not a field so I cannot appeal to this result. And my intuitions are not serving me very well at the moment. Can anyone provide a hint or direction to take?
|
Hint: Consider the ideal $I = (2, 1+x)$. Can you find a single generator for $I$?
Full solution:
Consider the ideal $(2, 1+x)$; I claim that it is not principal. Note that
$$(0) \subsetneq (2) \subsetneq (2, 1+x)$$ is a chain of prime ideals of length $2$, so $(2, 1+x)$ has height $\geq 2$. A principal ideal has height at most $1$ by Krull's Hauptidealsatz, which shows that $(2, 1+x)$ is not principal.
In terms of intuition: the ring $\mathbb{Z}[x]$ is "too big" to be a PID. A PID that is not a field has Krull dimension $1$, and $\mathbb{Z}[x]$, much like $k[x,y]$, is $2$-dimensional. Localizing at $x$ gets rid of the single irreducible element $x$ (making it into a unit), but there are plenty of other irreducible polynomials one can use instead to build a chain of primes of length $2$.
EDIT: To respond to your question: since
$$
\frac{\mathbb{Z}[x,x^{-1}]}{(2, 1+x)} \cong \frac{\mathbb{F}_2[x,x^{-1}]}{(1+x)} \cong \mathbb{F}_2[-1, (-1)^{-1}] = \mathbb{F}_2
$$
which is a field, $(2, 1+x)$ is maximal, hence prime.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2096860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Derivative of Function with Cases: $f(x)=x^2\sin x^{-1}$ for $x\ne0$ So far I assumed the derivative of a function with cases such as this one:
$$f(x) = \begin{cases}x^2\sin x^{-1} & \text{ if } x \neq 0\\ 0 & \text{ else }\end{cases}$$ would be the cases of the derivatives.
So, for $f'$ I would get:
$$f'(x) = \begin{cases}
2x \sin{\left( \frac{1}{x} \right)} -\cos{\left( \frac{1}{x} \right)} & \text{if } x \neq 0\\
0 & \text{else}
\end{cases}$$
And for $f''$
$$f''(x) = \begin{cases}
\frac{(2x^2 - 1)\sin{\left( \frac{1}{x} \right)} - 2x\cos{\left( \frac{1}{x} \right)}}{x^2} & \text{if } x \neq 0\\
0 & \text{else}
\end{cases}$$
However, supposedly $f$ is only differentiable once and not twice. Therefore I must have made a mistake here, but I am at a loss as to what that would be. Is the derivative of cases not the cases of the derivatives?
|
You made no attempt to explain how you concluded $f'(0)=0$. Remember that
$$
f'(0) = \lim_{h\to0} \frac{ f(0+h) - f(0) } h.
$$
You'll probably need to squeeze in order to find the limit. Next you have
$$
f''(0) = \lim_{h\to0} \frac{f'(0+h) - f'(0)} h.
$$
And again you'll probably have to squeeze. However, it's not hard to see without doing that, that $f'$ is not continuous at $0$ since it approaches no limit at $0$ because of the way it oscillates. Therefore it cannot be differentiable at $0.$
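To make the oscillation concrete (my addition): along $x_k = \frac{1}{2k\pi}$ one finds $f'(x_k) \to -1$, while along $y_k = \frac{1}{(2k+1)\pi}$ one finds $f'(y_k) \to 1$, so $f'$ has no limit at $0$:

```python
import math

def fprime(x):
    # f'(x) = 2x sin(1/x) - cos(1/x) for x != 0
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

for k in (10, 100, 1000):
    xk = 1 / (2 * k * math.pi)        # 1/xk = 2k*pi: sin = 0, cos = 1
    yk = 1 / ((2 * k + 1) * math.pi)  # 1/yk = (2k+1)*pi: sin = 0, cos = -1
    print(k, fprime(xk), fprime(yk))  # columns tend to -1 and +1 respectively
```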
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2096965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
}
|
If $f'(a)$ exists, does $f'(a^+)$ and $f'(a^-)$ exist? Is it true that
If $f(x)$ is differentiable at $a$, then both $f'(a^+)$ and $f'(a^-)$ exist and $f'(a^+)=f'(a^-)=f'(a)$.
My answer is NO.
Consider the function
$$
f(x)=\begin{cases}
x^2\sin\dfrac{1}{x}&\text{for $x\ne0$}\\[1ex]
0&\text{for $x=0$}
\end{cases}
$$
$f'(0)$ can be found by
\begin{align} \lim_{x \to 0} \dfrac{f(x) - f(0)}{x-0} & = \lim_{x \to 0} \dfrac{f(x) - 0}{x} & \textrm{ as } f(0) = 0 \\ & = \lim_{x \to 0} \dfrac{x^2 \sin\left(\frac{1}{x}\right)}{x} & \\
& = \lim_{x \to 0} x \sin\left(\frac{1}{x}\right) & \end{align}
Now we can use the Squeeze Theorem. As $-1 \leq \sin\left(\frac{1}{x}\right) \leq 1$, we have that $$0 = \lim_{x \to 0} x \cdot -1 \leq \lim_{x \to 0} x \sin\left(\frac{1}{x}\right) \leq \lim_{x \to 0} x \cdot 1 = 0$$
Therefore, $\lim_{x \to 0} x \sin\left(\frac{1}{x}\right) = 0$ and we have $f'(0)=0$.
However,
$$
f'(x)=\begin{cases}
-\cos\dfrac{1}{x}+2x\sin\dfrac{1}{x}&\text{for $x\ne0$}\\[1ex]
0&\text{for $x=0$}
\end{cases}
$$
$f'(0^+)$ nor $f'(0^-)$ exists as $x\to 0$.
Is my answer correct?
I have found some pages related to this question.
Is $f'$ continuous at $0$ if $f(x)=x^2\sin(1/x)$
Calculating derivative by definition vs not by definition
Differentiability of $f(x) = x^2 \sin{\frac{1}{x}}$ and $f'$
$f'$ exists, but $\lim \frac{f(x)-f(y)}{x-y}$ does not exist
Thanks.
|
Yes, your answer is correct. The existence of the derivative of a function at a point does not always mean that the derivative will be continuous at that point. The condition $f′(a+)=f′(a−)=f′(a)$ implies continuity of the derivative at $x=a$ which is clearly not true for the function you mentioned at $x=0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2097051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
$\left[\begin{array}{cc} P & A^T\\ A & 0\end{array}\right]$ is non-singular if and only if $\mathcal N(P) \cap \mathcal N(A)=\{0\}?$ Suppose $P\succeq 0$ and $A$ is of full row rank. I want to show that $\left[\begin{array}{cc} P & A^T\\ A & 0\end{array}\right]$ is nonsingular if and only if $\mathcal N(P) \cap \mathcal N(A)=\{0\},$ where $\mathcal N(\bullet)$ denotes the null space of a matrix.
The only-if part is straightforward, because $\left[\begin{array}{c} P \\ A \end{array}\right]$ would not even be full-rank otherwise.
I think I managed to show the if part, but would appreciate it if someone can confirm it or point out where I'm mistaken. I'd also appreciate an alternative, simpler or more intuitive proof/comments. Here's my attempted proof: ($0$ denotes a scalar or a matrix below, depending on the context.)
Suppose $\left[\begin{array}{cc} P & A^T \\ A & 0\end{array}\right] \left[\begin{array}{c} x \\ y\end{array}\right]=0.$ Then we have $Ax=0$ and $Px+A^Ty=0.$ Therefore, $x^TPx+x^TA^Ty=0$. Since $Ax=0$, this implies that $x^TPx=0$, hence $P^{1/2}x=0$ and $Px=0.$ It then follows that $x=0$, since $\mathcal N(P) \cap \mathcal N(A)=\{0\}.$ As a result, $y=0$ since $A^T$ is of full column rank. Q.E.D.
|
Let $A$ be an $n \times n$ matrix and the rank of $A$ is $r$ where $r < n$.
There is an $n \times n$ matrix $U=[U_1|U_2]$ with orthogonal columns such that
$$
AU = [AU_1| AU_2] = [AU_1 | 0]
$$
where $AU_1$ and $U_1$ are $n \times r$ matrices and $AU_1$ is full rank. The matrix $U$ may be computed using the singular value decomposition (SVD) or the QR factorisation.
We now consider the orthogonal transformation
$$
\left[ \begin{array}{cc}
U^t & 0 \\
0 & I
\end{array} \right]
\left[ \begin{array}{cc}
P & A^t \\
A & 0
\end{array} \right]
\left[ \begin{array}{cc}
U & 0 \\
0 & I
\end{array} \right]
= \left[ \begin{array}{cc}
R & U^tA^t \\
AU & 0
\end{array} \right]
=: M
$$
which does not change the rank of the composite matrix. Also the rank of $R$ is the same as the rank of $P$. We partition the matrix $R$,
$$
R = \left[ \begin{array}{cc}
R_{11} & R_{12} \\
R_{21} & R_{22}
\end{array} \right]
= \left[ \begin{array}{cc}
U_1^t P U_1 & U_1^tPU_2 \\
U_2^t P U_1 & U_2^t P U_2
\end{array} \right]
$$
where $R_{11}$ is an $r \times r$ matrix.
The first $n$ columns of the matrix $M$ is then given by
$$
\left[ \begin{array}{cc}
R_{11} & R_{12} \\
R_{21} & R_{22} \\
AU_1 & 0
\end{array}
\right]
$$
If some column of $U_2$ lies in the null space of $P$ (that is, if $\mathcal N(P)\cap\mathcal N(A)\neq\{0\}$), then the matrix
$$
\left[ \begin{array}{c}
R_{12} \\
R_{22}
\end{array}
\right]
$$
will have a zero column, which makes $M$ (and hence the original composite matrix) singular.
This completes the missing part of the proof.
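A tiny numerical illustration of the equivalence (my addition; the matrices below are made-up examples, not from the answer): take $P = \operatorname{diag}(1,1,0)$, so $\mathcal N(P) = \operatorname{span}(e_3)$, and compare two choices of a full-row-rank $A$:

```python
def det(M):
    # determinant by cofactor expansion along the first row (fine for tiny matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def kkt(P, A):
    # assemble the block matrix [[P, A^T], [A, 0]]
    n, m = len(P), len(A)
    top = [P[i] + [A[k][i] for k in range(m)] for i in range(n)]
    bottom = [A[k] + [0] * m for k in range(m)]
    return top + bottom

P = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]  # PSD with N(P) = span(e3)
A_good = [[0, 0, 1]]                   # N(A) = {x : x3 = 0}, so N(P) ∩ N(A) = {0}
A_bad = [[1, 0, 0]]                    # e3 ∈ N(A), so N(P) ∩ N(A) ≠ {0}

print(det(kkt(P, A_good)))  # nonzero: KKT matrix nonsingular
print(det(kkt(P, A_bad)))   # 0: KKT matrix singular
```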
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2097117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
}
|
$\lim b_n=\frac{n^n}{(n+1)(n+2)\dots(n+n)}.$ $$b_n=\frac{n^n}{(n+1)(n+2)\dots(n+n)}.$$
Now, there is this theorem for sequences that if $\lim_{n\to ∞} a_{n+1} /a_n =l$, $|l|<1$ then $\lim_{n\to ∞} a_n=0$.
so, $\lim_{n\to ∞} b_{n+1} /b_n =e/4$ which is less than $1$, so $\lim_{n\to ∞} b_n$ should be equal to zero.
But if I calculate the limit of $b_n$ as, $b_n=(n\cdot n\cdot n\cdots n)/((n+1)(n+2)...(n+n))$ I get $\lim_{n\to ∞} b_n=1/2$.
Something is definitely going wrong. Can someone point out my mistake, please?
|
One approach:
$$b_n=\frac{n^n}{(n+1)(n+2)\dots(n+n)}=\prod_{i=1}^n\frac{n}{n+i}$$
$$\implies \log b_n = n \cdot\left[ \frac{1}{n}\sum_{i=1}^n f\left(\frac{i}{n}\right)\right] \text{, where $f(x)=-\log(1+x)$}$$
So, the bracketed expression ought to tend to $\int_0^1f(x)\,dx=1-2\log2<0$
As such, $\log b_n = (1-2\log2)\,n+\mathcal{O}(1)\implies b_n=\left(\frac{e}{4}\right)^{n+\mathcal{O}(1)}\to 0$
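Both conclusions are easy to see numerically (my addition): $b_n \to 0$ geometrically, and the ratio $b_{n+1}/b_n$ approaches $e/4 \approx 0.68$, never the naive $1/2$:

```python
import math

def b(n):
    # b_n = n^n / ((n+1)(n+2)...(n+n)), computed as a product of factors n/(n+i)
    return math.prod(n / (n + i) for i in range(1, n + 1))

for n in (5, 20, 50):
    print(n, b(n), b(n + 1) / b(n))  # ratio column approaches e/4 ~ 0.6796
```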
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2097232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Which of the following value(s) of $t^3$ are not possible? Let $t$ be a real number such that $t^2 = at + b$ for some positive integers $a$ and $b$. Then for any choice of positive integers $a$ and $b$, $t^3$ is not equal to -
1) $4t+3$
2) $8t+5$
3) $10t+3$
4) $6t+5$
This question is from a prestigious Indian Scholarship exam, and I'm solving it for fun. But I can't get at the bottom of the problem. I'm also a bit shaky on which concept to use in this problem.
My try on this question
$t^2 =at +b$
$t = \sqrt(at+b)$
For $t \in R$
$at+b \ge 0$
Since $a$ and $b$ are positive integers
$at \ge -b$ and
$t \ge \frac{-b}{a}$
I also tried differentiating the function $t^2 = at + b$ to get
$\frac{d}{dt}(t^2) = a\frac{d}{dt}(t) + 0$
$2t = a$
Alas it leads nowhere. But
$\frac{a}{2} \ge \frac{-b}{a}$
Since $a$ and $2$ are positive integers
$a^2 \ge -2b$
What has to done next? Please help
|
Assume that $t^3$ does equal $ut+v$ (where $u=4$, $v=3$ for the first part, etc.).
Then $t$ is a root of the cubic polynomial
$$f(X)=X^3-uX-v \in\Bbb Z[X]$$
as well as of the quadratic polynomial
$$g(X)=X^2-aX-b \in\Bbb Z[X].$$
But then $t$ is also a root of
$$f(X)-Xg(X) =aX^2+(b-u)X-v$$
and also of
$$h(X)=(f(X)-Xg(X))-ag(X)=(a^2+b-u)X+ab-v. $$
There are two possibilities:
*
*If $a^2+b=u$ then we conclude $ab=v$. Three of the problem parts admit positive integers $a,b$ with these properties: $(a,b)=(1,3)$ for $(u,v)=(4,3)$, $(a,b)=(3,1)$ for $(u,v)=(10,3)$, and $(a,b)=(1,5)$ for $(u,v)=(6,5)$. As the corresponding $g(X)$ has two real roots (the discriminant $a^2+4b$ is certainly positive), in these cases it is possible that $t^3=ut+v$.
*If $a^2+b\ne u$, $h(X)$ has a unique root $t$ and it is rational. By the rational root theorem, $t$ must in fact be an integer and among the signed divisors of $v$. Conveniently, $v$ is prime in all problem parts, so you need only check for $t\in\{v,1,-1,-v\}$ whether or not $t^3=ut+v$. That is, check if $v^2=u+1$ or $u+v=1$ or $v=u-1$ or $v^2=u-1$. Only $(u,v)=(8,5)$ fails every test, so $t^3$ can never equal $8t+5$.
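The case analysis can be brute-forced (my addition): multiplying $t^2=at+b$ by $t$ gives $t^3 = at^2+bt = (a^2+b)t+ab$, so an option $t^3=ut+v$ is attainable exactly when either $a^2+b=u,\ ab=v$ has a positive-integer solution, or $t^3-ut-v$ has a suitable integer root:

```python
options = {"4t+3": (4, 3), "8t+5": (8, 5), "10t+3": (10, 3), "6t+5": (6, 5)}

def attainable(u, v):
    # irrational t: need positive integers a, b with a^2 + b = u and ab = v
    algebraic = any(a * a + b == u and a * b == v
                    for a in range(1, u) for b in range(1, u))
    # rational t: by the rational root theorem t is a signed divisor of v
    integer_root = any(t ** 3 - u * t - v == 0
                       for d in range(1, v + 1) if v % d == 0
                       for t in (d, -d))
    return algebraic or integer_root

for name, (u, v) in options.items():
    print(name, attainable(u, v))  # only 8t+5 comes out False
```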
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2097303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Find $xyz$ given that $x + z + y = 5$, $x^2 + z^2 + y^2 = 21$, $x^3 + z^3 + y^3 = 80$ I was looking back in my junk, then I found this:
$$x + z + y = 5$$
$$x^2 + z^2 + y^2 = 21$$
$$x^3 + z^3 + y^3 = 80$$
What is the value of $xyz$?
A) $5$
B) $4$
C) $1$
D) $-4$
E) $-5$
It's pretty easy, any chances of solving this question? I already have the
answer for this, but I didn't fully understand.
Thanks for the attention.
|
Consider the polynomial
$$p(t) = (1-x t)(1-y t)(1-z t)$$
Let's consider the series expansion of $\log\left[p(t)\right]$:
$$\log\left[p(t)\right] =-\sum_{k=1}^{\infty}\frac{S_k}{k} t^k$$
where
$$S_k = x^k + y^k + z^k$$
Since we're given the $S_k$ for $k$ up to $3$ we can write down the series expansion of $\log\left[p(t)\right]$ up to third order in $t$, but that's sufficient to calculate $p(t)$, as it's a third degree polynomial. The coefficient of $t^3$ equals $-xyz$, so we only need to focus on that term. We have:
$$\log\left[p(t)\right] = -\left(5 t +\frac{21}{2} t^2 + \frac{80}{3} t^3+\cdots\right)$$
Exponentiating yields:
$$p(t) = \exp(-5t)\exp\left(-\frac{21}{2}t^2\right)\exp\left(-\frac{80}{3}t^3\right)\times\exp\left[\mathcal{O}(t^4)\right]$$
The $t^3$ term can come in its entirety from the first factor, or we can pick the linear term in $t$ from there and then multiply that by the $t^2$ term from the second factor or we can take the $t^3$ term from the last factor. Adding up the 3 possibilities yields:
$$xyz = \frac{5^3}{3!} -5\times\frac{21}{2} + \frac{80}{3} = -5$$
It is also easy to show that $x^4 + y^4 + z^4 = 333$ by using that the coefficient of the $t^4$ term is zero.
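The same value drops out of Newton's identities (my addition): with power sums $p_1=5$, $p_2=21$, $p_3=80$, the elementary symmetric polynomials satisfy $p_1=e_1$, $p_2=e_1p_1-2e_2$, $p_3=e_1p_2-e_2p_1+3e_3$, and $e_3=xyz$:

```python
p1, p2, p3 = 5, 21, 80

e1 = p1                                # x + y + z
e2 = (e1 * p1 - p2) // 2               # xy + yz + zx
e3 = (p3 - e1 * p2 + e2 * p1) // 3     # xyz

p4 = e1 * p3 - e2 * p2 + e3 * p1       # Newton again (e4 = 0 with three variables)
print(e1, e2, e3, p4)  # 5 2 -5 333: xyz = -5 and x^4 + y^4 + z^4 = 333
```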
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2097444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
}
|
Does the series $\sum_{n\ge1}\int_0^1\left(1-(1-t^n)^{1/n}\right)\,dt$ converge? Here is a question that I've been working on, a few years ago. I do know how to solve it but I am convinced that it deserves at least another point of view ...
I will post my own solution soon (within a week, at most) and I hope that - meanwhile - other people will suggest various approaches.
Consider, for all $n\in\mathbb{N^\star}$ :
$$u_n=\int_0^1\left(1-(1-t^n)^{1/n}\right)\,dt$$
Does the series $\sum_{n\ge 1}u_n$ converge ?
|
Here is a rather elementary solution.
Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,
*
*$ D_1 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } x \leq a_n \} $
*$ D_2 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } y \leq a_n \} $
*$ D_3 = \{(x, y) \in [0,1]^2 : x, y \geq a_n \} $
(Figure: the three regions $D_1$, $D_2$, $D_3$ inside the unit square.)
Then it is easy to check that $D = D_1 \cup D_2 \cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that
\begin{align*}
u_n
&= 2\text{[Area of $D_1$]} + \text{[Area of $D_3$]} \\
&= 2\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \, dx + (1 - a_n)^2.
\end{align*}
Since $a_n = 1 - \mathcal{O}(\frac{1}{n})$, the term $(1-a_n)^2$ is good. For the integral term, notice that for $x \in [0, a_n]$,
\begin{align*}
1 - (1-x^n)^{1/n}
&= 1 - e^{\frac{1}{n}\log(1-x^n)}
\stackrel{\text{(1)}}{\leq} -\frac{1}{n}\log(1-x^n) \\
&= \int_{0}^{x} \frac{t^{n-1}}{1-t^n} \, dt
\stackrel{\text{(2)}}{\leq} \int_{0}^{x} 2t^{n-1} \, dt \\
&= \frac{2}{n}x^n.
\end{align*}
For $\text{(1)}$, we used the inequality $e^t \geq 1 + t$ which holds for all real $t$. The second inequality $\text{(2)}$ follows from the fact that $1-t^n \geq \frac{1}{2}$ for $t \in [0,a_n]$. Thus
$$ u_n \leq \frac{4}{n}\int_{0}^{a_n} x^n \, dx + (1-a_n)^2 \leq \frac{4}{n(n+1)} + (1-a_n)^2 = \mathcal{O}\left(\frac{1}{n^2}\right). $$
This proves the convergence of $\sum_{n=1}^{\infty} u_n$.
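The $\mathcal{O}(1/n^2)$ decay is visible numerically (my addition), estimating $u_n$ with a simple midpoint rule:

```python
def u(n, steps=20000):
    # midpoint-rule estimate of the integral of 1 - (1 - t^n)^(1/n) over [0, 1]
    h = 1.0 / steps
    return h * sum(1 - (1 - ((k + 0.5) * h) ** n) ** (1.0 / n)
                   for k in range(steps))

for n in (2, 5, 10, 20):
    print(n, u(n), n * n * u(n))  # n^2 u_n stays bounded, so the series converges
```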
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2097522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
}
|
Exercise on a fixed end Lagrange's MVT Given a function f with derivative on all $[a;b]$ with $f'(a) = f'(b)$, show that there exists $c \in (a;b)$ such that $f'(c) = \frac{f(c)-f(a)}{c-a}$.
This is some kind of MVT with constraint. I have a proof but it uses Darboux's theorem. Can you prove it without using it?
Sketch of the proof I have using Darboux
I suppose first that $f'(a) = 0$ (it's easy to get back to this case with an affine transform) and I set $g(x) = \frac{f(x)-f(a)}{x-a}$. $g'(b) = - \frac{g(b)-g(a)}{b-a} = - g'(d)$ for some $d \in (a;b)$ by Lagrange's MVT. Then $g'(b)$ and $g'(d)$ are either null or have different signs in which case we can find $e \in (d;b)$ such that $g'(e) = 0$. In all cases $g'$ has a zero which is the $c$ we need to find.
|
This paper https://arxiv.org/pdf/1309.5715.pdf contains two nice proofs of Flett's MVT, which is a statement of your question. Of course, Riedel and Sahoo book I pointed in the comment is also good.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2097629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Is $a>b\iff \neg(b\geq a)$ true in constructive maths?
*
*Is $a>b\iff \neg(b\geq a)\;$ true in constructive maths? Why (not)?
*Also: is $\neg(a > b) \iff b\geq a\;$ true in constructive maths? Why (not)?
|
(Disclaimer: This is based on what I remember about counterexamples involving creative subjects in intuitionism, and my memory is a bit rusty. I may have made a mistake.)
I'm pretty sure that (1) is not true in intuitionism. Here is why:
Let $P$ be a mathematical proposition which has not yet been proved or disproved. Let $a$ be the decimal number $0.a_1a_2a_3\ldots$ where $a_i$ is 1 if $P$ has been proved or disproved at day number $i$ counting from today, while $a_i$ is $0$ if $P$ has still not been proved or disproved on that day. Let $b$ be $0$.
$b\geq a$ is absurd, as proving that would amount to proving that $P$ cannot be proved or disproved, which cannot be done in intuitionism (it's very different from an axiomatic system). Hence $\neg(b\geq a)$.
On the other hand, $a>b$ is not true (at present) because we cannot indicate a day in the future when $P$ will be proved.
Therefore, (1) is not true (at present).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2097719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Show that $(1-\rho(x,F)/\varepsilon)^+$ is uniformly continuous. Let $(M,\rho)$ be a metric space, $F\subset M$ a closed subset, $\varepsilon>0$.
On page 8 of the book "Convergence of Probability measures" by Patrick Billingsley one says that the function
$$f:M\to [0,1]\\ x\mapsto(1-\rho(x,F)/\varepsilon)^+$$
is uniformly continuous since it holds $|f(x)-f(y)|\leq \rho(x,y)/\varepsilon$.
Actually I'm not able to prove this inequality: I tried to solve all the possible cases ($x,y\in F, \not\in F, \in F^\varepsilon, \not\in F^\varepsilon$) but it doesn't looks elegant at all and I even could't prove all the possibilities.
$(.)^+:=max(0,.)$
Thanks
|
If $x$ is in the metric space and $z \in \mathrm{F}$ then $d(x, \mathrm{F}) \leq d(x, z) \leq d(x, y) + d(y ,z)$, so taking the infimum over $z$ on the left-most and right-most sides gives $d(x, \mathrm{F}) - d(y, \mathrm{F}) \leq d(x,y)$, and by symmetry you can put an absolute value. Now, whenever neither value is clipped at $0$, $f(x) - f(y) = \dfrac{\rho(y, \mathrm{F}) - \rho(x, \mathrm{F})}{\varepsilon}$, and since $t \mapsto \max(0,t)$ is $1$-Lipschitz the bound $|f(x)-f(y)| \leq \rho(x,y)/\varepsilon$ holds in general. Is the rest clear from here?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2097869",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Question about reading proof notation When reading the following problem, do you assume that each premise is true? So since number 2 states ¬ B am I to assume that ¬ B is true? Which would mean B is false?
*
*A ∨ C → D Premise
*¬ B Premise
*A ∨ B Premise
*A 2, 3, Disjunctive Syllogism
*A ∨ C 4, Addition
*D 1, 5, Modus Ponens
QED.
Thanks for the help.
|
dictionary.com defines:
'Premise': a statement which is assumed to be true for the purpose of an argument from which a conclusion is drawn
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2097955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Can you explain me this problem of combinatorics? Yesterday was my final test of Discrete Mathematics and the second question was:
What is the cardinality of these sets:
a) Integers with 4 different digits and its digits increase, decrease and then increase again.
Eg. 1308 is valid
1300 is not valid
1320 is not valid
b) Integers $4000\le n<7000$ even and all its digits are different.
c) Positive integers $n<10000$ such that the sum of its digits is 15.
Could you explain me this question?
|
For (b), which is the easiest of the three problems:
Let the digits from left to right be $D(1),D(2),D(3),D(4).$
In each of the $3$ cases $D(4)=0$ or $D(4)=2$ or $D(4)=8,$ there are $3$ choices ($4$, $5$, or $6$) for $D(1)$. This gives $(3)(3)=9$ choices for the pair $(D(1),D(4)).$ For each pair there are $8$ choices of $D(2)$. There are $7$ choices of $D(3)$ for each triplet $(D(1),D(2),D(4)).$ This gives $(9)(8)(7)=504$ values when $D(4)$ is $0,2$, or $8$.
In each of the $2$ cases $D(4)=4$ or $D(4)=6$ there are only $2$ choices for $D(1).$ (That is, $D(1)=5$ or $6$ when $D(4)=4$; and $D(1)=4$ or $5$ when $D(4)=6$). This gives $(2)(2)=4$ choices for the pair $(D(1),D(4)).$ There are $8$ choices of $D(2)$ for each pair $(D(1),D(4))$ and 7 choices of $D(3)$ for each triplet $(D(1),D(2),D(4)).$ This gives $(4)(8)(7)=224$ values when $D(4)$ is $4$ or $6.$
Altogether we have a total of $504+224=728$ values.
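Part (b) is small enough to verify by exhaustive search (my addition):

```python
# even n with 4000 <= n < 7000 and four pairwise-distinct digits
count = sum(1 for n in range(4000, 7000, 2) if len(set(str(n))) == 4)
print(count)  # 728, matching the case analysis above
```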
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2098060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Proving that $\cos(\arcsin(x))=\sqrt{1-x^2}$ I am asked to prove that $\cos(\arcsin(x)) = \sqrt{1-x^2}$
I have used the trig identity to show that $\cos^2(x) = 1 - x^2$
Therefore why isn't the answer denoted with the plus-or-minus sign?
as in $\pm \sqrt{1-x^2}$.
Thank you!
|
Let $\arcsin x = \theta$. Then, by definition of the arcsine function, $\sin\theta = x$, where $-\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$, and
$\cos(\arcsin x) = \cos\theta$. Using the Pythagorean Identity $\sin^2\theta + \cos^2\theta = 1$, we obtain
\begin{align*}
\sin^2\theta + \cos^2\theta & = 1\\
\cos^2\theta & = 1 - \sin^2\theta\\
\cos^2\theta & = 1 - x^2\\
|\cos\theta| & = \sqrt{1 - x^2}
\end{align*}
Since $-\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$, $\cos\theta \geq 0$. Thus, $|\cos\theta| = \cos\theta$, whence
\begin{align*}
\cos\theta & = \sqrt{1 - x^2}\\
\cos(\arcsin x) & = \sqrt{1 - x^2}
\end{align*}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2098163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 1
}
|
Find all complex numbers $z$ satisfying the equation $z^{4} = -1+\sqrt{3}i$ Since any complex number can be of polar form. We set that $z = r\cos \theta +ir\sin \theta$. Now by de Moivre's Theorem, we easily see that $$z^{4} = r^{4}\cos4 \theta + ir^{4}\sin4 \theta$$
Since $$z^{4} = -1+\sqrt{3}i$$
We equip accordingly and see that $$r^{4}\cos4 \theta = -1$$
$$r^{4}\sin4 \theta = \sqrt{3}$$
Solving the above 2 equations we have $$\tan 4\theta = -\sqrt{3} \Rightarrow 4\theta = -\dfrac{\pi}{3} \Rightarrow \theta = -\dfrac{\pi}{12}$$
Hence we have $$\text{arg}z = -\dfrac{\pi}{12} + k\pi$$
Furthermore, $$\text{Arg}z = -\dfrac{\pi}{12} \text{ or } \dfrac{11\pi}{12}$$
However, i cannot find out what $r$ is, as when i substitute in to solve, $r$ become a complex number?? Is my answer correct, how can i make my steps better? Thanks!
|
Observe that $-1+\sqrt{3}i=2e^{\frac{2\pi i}{3}}$. Then the roots of the equation are
$$
z=\sqrt[4]{2}\,e^{i\left(\frac{\pi}{6}+\frac{k\pi}{2}\right)},
$$
where $k=0,1,2,3$. Thus $r=\sqrt[4]{2}$. Substitute $k$, then we get $z=\sqrt[4]{2}e^{\frac{\pi i}{6}}$, $z=\sqrt[4]{2}e^{\frac{2\pi i}{3}}$, $z=\sqrt[4]{2}e^{\frac{7\pi i}{6}}$, and $z=\sqrt[4]{2}e^{\frac{5\pi i}{3}}$. There is no root whose argument is $\frac{11\pi}{12}$. What happened? Because your attempt has a flaw: Since $r^4 \cos 4\theta=-1$ and $r^4\sin 4\theta=\sqrt{3}$, we get $\tan 4\theta=-\sqrt{3}$, but it doesn't imply that $4\theta = -\frac{\pi}{3}$. If it is true, then $r^4 \cos4\theta$ cannot be negative!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2098241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Conditions for when a line in $\mathbb{C}$ is tangent to a point on a circle I am working on the following problem from Chapter 1, Section 5 of Conway's "Functions of One Complex Variable":
Let $C$ be the circle {z:|z-c|=r}, r>0; let $a=c+r\text{ cis }\alpha$ and put
$$
L_\beta=\left\lbrace z:\text{Im}\left(\frac{z-a}{b}\right)=0\right\rbrace
$$
where $b=\text{cis }\beta$. Find necessary and sufficient conditions in terms of $\beta$ that $L_\beta$ be tangent to $C$ at $a$.
Here, Conway uses the notation $\text{cis }\alpha$ to denote $(\cos\alpha+i\sin\alpha)$; also, he shows in the section mentioned that $L_\beta$ is simply the line in $\mathbb{C}$ containing $a$ in the direction of $b$.
I've been able to convince myself in pictures that the necessary and sufficient condition is that $\beta=\alpha\pm \pi/2$. One direction is:
Define
$$
L_\alpha=\left\lbrace z:\text{Im}\left(\frac{z-c}{d}\right)=0\right\rbrace
$$
where $d=\text{cis }\alpha$. Then $L_\alpha$ is the line containing $c$ in the direction of $a$. Assuming $\beta=\alpha\pm\pi/2$, define unit vectors $z_1=\text{cis }\alpha$ and $z_2=\text{cis }\beta$. If we were to add $a$ to each of these, then $z_1\in L_\alpha$ and $z_2\in L_\beta$. So, it suffices to show that $z_1$ and $z_2$ are perpendicular when considered as vectors in $\mathbb{R}^2$. Since $z_1=(\cos\alpha,\sin\alpha)$ and $z_2=(\cos\beta,\sin\beta)$, then $$z_1\cdot z_2=\cos\alpha\cos\beta+\sin\alpha\sin\beta=\cos\alpha-\beta=\cos(\pm \pi/2)=0.$$
My questions are:
*
*Is this enough to show one direction, or would I have to show something more to show that $L_\beta$ is tangent to $C$ at $a$? Or maybe I've missed the mark completely?
*How would I begin the other direction? Should I use the definition of "tangent to $C$ at $a$" that says $L_\beta$ contains only the point $a$ in the circle $C$? Or something else?
|
Here is a "computational" proof.
This issue is translation-invariant and enlargement (i.e., homothety)-invariant; you may thus assume $c=0$ and $r=1$.
Therefore,
*
*you can parametrize the circle as the set of $z$ such that $z=e^{i \theta}.$
*the "equation" of the straight line becomes
$$\Im(\dfrac{z-e^{i \alpha}}{e^{i \beta}})=0 \ \iff \ \Im((z-e^{i \alpha})e^{-i \beta})=0.$$
A straight line is tangent to a circle if it has only one common point with it. Thus, it remains to see under which condition the following equation in $\theta$ has a unique solution:
$$\Im((e^{i \theta}-e^{i \alpha})e^{-i \beta})=0 \ \iff \ \Im(e^{i (\theta-\beta)}-e^{i (\alpha-\beta)})=0$$
$$ \ \iff \ \sin(\theta-\beta)-\sin(\alpha-\beta)=0.$$
Two angles have the same sine if they are either equal or supplementary (modulo $2 \pi$).
We can therefore infer two cases, delivering two solutions in $\theta$:
$$\begin{cases}\theta-\beta=\alpha-\beta & \iff \ \theta=\alpha \\ \theta-\beta=\pi - (\alpha-\beta) & \iff \ \theta = \pi - \alpha + 2\beta.\end{cases}$$
The solution is thus unique if and only if the two previous solutions are equal:
$\alpha=\pi - \alpha + 2\beta \ $ modulo $ \ 2 \pi\ \iff \ 2 \alpha=2\beta+\pi \ $ mod. $2\pi \iff \exists k \in \mathbb{Z}$ s.t. $2 \alpha=2\beta+\pi+2k\pi$
Dividing by 2, we find the condition $ \alpha=\beta+\dfrac{\pi}{2}+k\pi$, i.e.
$$\alpha=\beta+\dfrac{\pi}{2} \ \text{modulo} \ \pi.$$
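As a sanity check, the tangency condition can be tested numerically: with the normalization $c=0$, $r=1$, the distance from the centre to $L_\beta$ is $|\Im((0-a)/b)|=|\sin(\alpha-\beta)|$, which equals the radius exactly when $\beta=\alpha\pm\pi/2$. A sketch (the helper name is mine):

```python
import cmath
import math

def dist_centre_to_line(alpha, beta):
    """Distance from the centre 0 to the line {e^{i alpha} + t e^{i beta} : t real}."""
    a = cmath.exp(1j * alpha)  # the point a on the unit circle
    b = cmath.exp(1j * beta)   # unit direction of the line
    # Im((p - a)/b) = 0 is the line's equation, so |Im((0 - a)/b)| is the distance.
    return abs(((0 - a) / b).imag)

alpha = 0.7
# beta = alpha + pi/2: the line should be tangent (distance equals r = 1).
tangent_dist = dist_centre_to_line(alpha, alpha + math.pi / 2)
# A generic beta: the line meets the circle in a second point (distance < 1).
secant_dist = dist_centre_to_line(alpha, alpha + 0.3)
```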
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2098321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Four numbers are chosen from 1 to 20. If $1\leq k \leq 17$, in how many ways is the difference between the smallest and the largest number equal to k?
Four numbers are chosen from 1 to 20. If $1\leq k \leq 17$, in how many ways is the difference between the smallest and the largest number equal to k?
My Working:
Case 1:
The greatest number is 20.
As $1\leq k \leq 17$, the smallest number must be $\geq 3$.
If it is 3, the number of ways of choosing the other 2 numbers is $16\cdot 15$
Similarly, for smallest number $4, 5, 6,\cdots$ no. of possibilities are $15\cdot 14$, $14\cdot 13$, $13\cdot 12$,$\cdots$
Hence, total combinations $=\sum_{n=1}^{16}{n(n+1)}$
Case 2:
The greatest number is 19 or 18. We can proceed similarly to get the same result, i.e. total combinations $=\sum_{n=1}^{16}{n(n+1)}$
Case 3:
The greatest number $\leq 17$.
Let us call it $l$. Total combinations $=\sum_{n=1}^{l-2}{n(n+1)}$
Problem:
This solution is very long. Is there an easier way of solving it?
|
I will assume that numbers may not be repeated and that order of selection of the numbers does not matter, i.e. we are counting how many subsets, $A$, of $\{1,2,\dots,20\}$ have the property that $max(A)-min(A)=k$
First, recognize that $max(A)-1\geq max(A)-min(A)=k$ implies $max(A)\geq k+1$; for example, if the distance between max and min is six, you cannot have the largest number be $6$ or less: it must be at least $7$.
Step 1: Pick the largest number.
We first need to count how many ways in which we may pick the largest number for our set for a specific $k$. As $20\geq max(A)\geq k+1$ there are $20-k$ different possibilities for $max(A)$.
(E.g. for $k=19$ our only choice is for $max(A)=20$ and $min(A)=1$ for a total of $20-19=1$ choices while for $k=17$ we could have $max(A)=18~min(A)=1,~~max(A)=19~min(A)=2,$ or $max(A)=20~min(A)=3$ for a total of $20-17=3$ choices)
In having picked the largest number, the smallest number is forced to ensure that the desired difference is achieved.
Step 2: Pick the locations of the remaining two numbers in relation to the smallest number.
There will be $k-1$ available numbers between $min(A)$ and $max(A)$ to choose from and we wish to select two of these without regard to their order. There are $\binom{k-1}{2}$ ways to accomplish this.
There are then $(20-k)\binom{k-1}{2}$ subsets of $\{1,2,\dots,20\}$ with the property that $max(A)-min(A)=k$
Note: for $k=1$ and $k=2$ the above formula correctly gives a total of zero possibilities without need to add a special case since $\binom{k-1}{2}=0$ in both of those cases.
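A brute-force check of the closed form $(20-k)\binom{k-1}{2}$, enumerating all $\binom{20}{4}=4845$ subsets (a sketch; `counts` and `formula` are my names):

```python
from itertools import combinations
from math import comb

# Count subsets of {1,...,20} of size 4 by the value of max(A) - min(A).
counts = {k: 0 for k in range(1, 20)}
for A in combinations(range(1, 21), 4):
    counts[max(A) - min(A)] += 1

# The closed form derived above, for every possible k.
formula = {k: (20 - k) * comb(k - 1, 2) for k in range(1, 20)}
```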
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2098433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Showing $\sum_{d\mid n} \mu(d)\tau(n/d)=1$ and $\sum_{d\mid n} \mu(d)\tau(d)=(-1)^r$ Need some help on this question from Victor Shoup
Let $\tau(n)$ be the number of positive divisors of $n$. Show that:
*
*$\sum_{d\mid n} \mu(d)\tau(n/d)=1$;
*$\sum_{d\mid n} \mu(d)\tau(d)=(-1)^r$, where $n=p_1^{e_1}\cdots p_r^{e_r}$ is the prime factorization of $n$.
I have tried both of them but can't find any solution! We have to use Mobius function properties to prove this question.
|
Using the fact that both $\mu$ and $\tau$ are multiplicative for the first one we get from first principles the value
$$\tau(n) \prod_{q=1}^r \left(1+(-1)\times\frac{e_q}{e_q+1}\right)$$
which simplifies to
$$\tau(n) \prod_{q=1}^r \frac{1}{e_q+1} = 1.$$
For the second one we may write
$$\prod_{q=1}^r (1+(-1)\times 2) = (-1)^r.$$
Here we have used that for a subset $S$ of the set of prime factors $P$ corresponding to a squarefree divisor $d$ of $n$ we get $\mu(d) = (-1)^{|S|}$ and $\tau(n/d) = \tau(n) \prod_{p_q\in S} \frac{e_q}{e_q+1}.$
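Both identities can also be checked numerically with hand-rolled $\mu$, $\tau$, and $\omega$ (a sketch; in practice one would reach for a library such as sympy):

```python
def mu(n):
    """Mobius function, by trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # p^2 divides the original n: not squarefree
            result = -result
        p += 1
    return -result if n > 1 else result

def tau(n):
    """Number of positive divisors."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def omega(n):
    """Number of distinct prime factors (the r in the statement)."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    return count + (1 if n > 1 else 0)

def check(n):
    """Return the two sums over the divisors of n."""
    divs = [d for d in range(1, n + 1) if n % d == 0]
    return (sum(mu(d) * tau(n // d) for d in divs),
            sum(mu(d) * tau(d) for d in divs))
```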
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2098546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Solve matrix exponential power series The matrix exponential is defined as
$$ e^A = \sum_{n=0}^\infty \frac{1}{n!} A^n $$
However I would like to solve something similar:
$$ B = \sum_{n=1}^\infty \frac{1}{n!} A^{n-1} $$
(NOTE: starting index and the power)
I can transform that into
$$ B = A^{-1}\left(\sum_{n=0}^\infty \frac{1}{n!} A^{n} -I\right) = A^{-1} \left(e^A - I \right) $$
*
*That works. But when $A^{-1}$ does not exist ($A$ is not invertible) it does not work.
*Is it possible to make it work for general $A$? Has anyone thoughts on that? Thanks.
|
Basically, your question asks how to apply the function $(e^x-1)/x$ to a matrix. One way to do it is using the integral representation
$$\frac{e^x-1}{x}=\int_0^1 e^{tx}\ dt.$$
Thus we can define $B$ to be
$$
B:=\int_0^1 e^{tA}\ dt.
$$
Since you already know what the matrix exponential is, this expression makes sense: the integral of a matrix is computed by taking the integral of each of its entries.
Note also that this coincides with the power series definition you have written, since $$\int_0^1 \frac{(tA)^{n}}{n!}\ dt=\frac{A^{n}}{(n+1)!}.$$
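A quick numerical sanity check that the series defines $B$ even for singular $A$, and that it agrees with $\int_0^1 e^{tA}\,dt$ (a sketch with numpy; the nilpotent test matrix is my choice, for which $e^{tA}=I+tA$ makes the integral exact):

```python
import numpy as np

def B_series(A, terms=30):
    """Truncation of B = sum_{n>=1} A^(n-1) / n! -- no inverse of A needed."""
    out = np.zeros_like(A, dtype=float)
    power = np.eye(A.shape[0])   # holds A^(n-1), starting at A^0 = I
    fact = 1.0                   # holds n!
    for n in range(1, terms + 1):
        fact *= n
        out += power / fact
        power = power @ A
    return out

# Singular (even nilpotent) A, so A^(-1) does not exist:
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = B_series(A)
# Here e^{tA} = I + tA, hence the integral is exactly I + A/2.
B_exact = np.eye(2) + A / 2
```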
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2098675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Which of the integers cannot be formed with $x^2+y^5$ So, I was asked by my teacher in school to solve this problem it really had me stumped.The problem is as follows:Given that $x$ and $y$ are integers, which of the following cannot be expressed in the form $x^2+y^5$?
$1.)\ 59170$
$2.)\ 59012$
$3.)\ 59121$
$4.)\ 59149$
$5.)\ 59130$
Is there an elegant solution, rather than tedious trial and error?
|
I have found
$$59170=9^5+11^2$$
$$59012=8^5+162^2$$
$$59149=9^5+10^2$$
$$59130=9^5+9^2$$
and $59121$ can't be expressed in this form.
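The impossibility of $59121$ can be made rigorous without an unbounded search, by a congruence argument (this observation is mine, not part of the answer above): modulo $11$ the squares lie in $\{0,1,3,4,5,9\}$ and, since the multiplicative group mod $11$ has order $10$, the fifth powers lie in $\{0,1,10\}$; negative $x$ and $y$ give the same residue sets. No combination hits $59121 \bmod 11 = 7$. A sketch:

```python
# Modulo 11: squares and fifth powers land in small residue sets.
squares = {x * x % 11 for x in range(11)}   # squares mod 11
fifths = {y**5 % 11 for y in range(11)}     # fifth powers mod 11
target = 59121 % 11
# x^2 + y^5 = 59121 would force (target - fifth) mod 11 to be a square mod 11.
impossible = all((target - f) % 11 not in squares for f in fifths)
```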
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2098741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 1
}
|
Finding the limit of the area of a Koch Snowflake this is my first question for this site and I made this account specifically for help with the following topic.
I am doing a research presentation on the Koch Snowflake, specifically, the area.
So far, I have been attempting to generalize a formula for finding the area of the snowflake at n iterations, and I am now trying to find the limit as n tends toward infinity.
So, basically, what is the limit for the following?:
$$\lim_{n\to\infty} \sum_{r=2}^{n} \frac{3 \cdot 4^{r-2}}{9^{r-1}} \cdot \frac{s^2 \sqrt{3}}{4} $$
|
The following forms a GP with $a=\frac{1}{9}$ and $r=\frac{4}{9}$.
$$\begin{align}
&\lim_{n\to\infty} \sum_{r=2}^{n} \frac{3 \cdot 4^{r-2}}{9^{r-1}} \cdot \frac{s^2 \sqrt{3}}{4}\\
=\ &\frac{ 3\sqrt{3}\cdot s^2}{4}\lim_{n\to\infty} \sum_{r=2}^{n} \frac{ 4^{r-2}}{9^{r-1}} \\
=\ &\frac{ 3\sqrt{3}\cdot s^2}{4}\cdot\frac{\frac{1}{9}}{1-\frac{4}{9}}\\
=\ &\frac{ 3\sqrt{3}\cdot s^2}{4}\cdot\frac{1}{5}\\
=\ &\frac{ 3\sqrt{3}\cdot s^2}{20}
\end{align}\\
$$
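A quick numerical confirmation of the closed form, taking $s=1$ (a sketch; $60$ terms is far more than needed since the ratio is $4/9$):

```python
import math

s = 1.0  # side length of the original triangle
partial = sum(3 * 4**(r - 2) / 9**(r - 1) * s**2 * math.sqrt(3) / 4
              for r in range(2, 60))
closed_form = 3 * math.sqrt(3) * s**2 / 20
```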
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2098947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Relation between Diagonals in a pentagon I do need another pair of eyes to look at this stupid question:
Take a regular pentagon and draw diagonals from each vertex/point. By doing that, you create another pentagon in the middle of the old one. Regarding the n-Pentagon with sides $s_n$ and diagonals $d_n$, show the following formula:
$d_{n+1} = d_n - s_n$
$s_{n+1} = d_n - 2d_{n+1} = 2s_n - d_n$
As mentioned, I'm not too interested in any kind of full solutions, since this is pretty basic stuff; I just need someone to make sense of this question and explain to me what I have to do here.
|
Given a regular pentagon $ABCDE$, draw diagonals $AC$ and $BE$ which intersect at $F$. $BCDF$ is a parallelogram because each diagonal is parallel to the side it does not meet at the vertices, thus $AF=BF=$(diagonal minus side).
Next draw all five diagonals of the pentagon. Let $F$ be the intersection of $AC$ and $BE$ as above; $G$ be the intersection of $AD$ and $BE$; $H$ be the intersection of $BD$ and $CE$. You should be able to prove triangles $AFG$ and $HFG$ congruent by $ASA$, thus the diagonals of the inner pentagon equal (diagonal minus side) of the outer one.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2099076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
how to solve $\min\{x \in \mathbb N_0 \mid x \cdot 714 \bmod 1972 = 1292 \bmod 1972 \}$ (modulo equation) Question: How can I solve: $\min\{x \in \mathbb N_0 \mid x \cdot 714 \equiv 1292 \mod 1972 \}$?
I only know about:
$x \cdot a \equiv _m b \Rightarrow m|x \cdot a - b$
different way of notation:
$\min\{x \in \mathbb N_0 \mid x \cdot 714 \equiv_{1972} 1292\}$
How to go on?
I appreciate every hint.
|
As $714=34\cdot21$, $1292=34\cdot 38$, $1972=34\cdot 58$, this is equivalent to solving $21 x\equiv 38\mod 58$.
We have to find the inverse of $21$ modulo $58$. The tool for this is the Extended Euclidean algorithm:
$$\begin{array}{rrrl}
\hline
r_i&u_i&v_i&q_i\\
\hline
58&0&1\\
21&1&0&2\\
\hline
16&-2&1&1\\
5&3&-1&3\\
1&-11&4\\
\hline
\end{array}$$
Thus the inverse of $21$ mod $58$ is $-11\equiv47$. The solution is
$$x\equiv47\cdot 38\equiv (-11)(-20)\equiv 46\mod 58. $$
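The same computation in code (a sketch; `xgcd` is the extended Euclidean algorithm from the table above, and the final line mirrors the reduction to modulus $58$):

```python
def xgcd(a, b):
    """Extended Euclid: return (g, u, v) with u*a + v*b = g = gcd(a, b)."""
    u0, v0, u1, v1 = 1, 0, 0, 1
    while b:
        q, a, b = a // b, b, a % b
        u0, u1 = u1, u0 - q * u1
        v0, v1 = v1, v0 - q * v1
    return a, u0, v0

# 714 x = 1292 (mod 1972): divide everything by gcd(714, 1972) = 34 first.
g, _, _ = xgcd(714, 1972)
_, inv21, _ = xgcd(21, 58)       # inverse of 714/34 = 21 modulo 1972/34 = 58
x = (inv21 * (1292 // 34)) % 58  # smallest non-negative solution
```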
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2099165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Proof regarding logic and negation of quantifiers I've read about the following task, but don't know how to prove it:
Prove that $\neg(\forall x ( V(x)\rightarrow F(x)))\iff \exists x( V(x) \land \lnot F(x)) $.
Maybe we start by proving the "$\Rightarrow$" direction by proving its contraposition:
$\neg\exists x (V(x) \land \lnot F(x)) \Rightarrow \forall x ( V(x)\rightarrow F(x))$
Now we can make a contradiction by assuming:
$\exists x (V(x)\land \lnot F(x))$ but how do I move on now?
|
Here is an approach to proving the biconditional using a Fitch-style proof checker. Using a proof checker makes sure that I am following the rules.
I used the following rules: change of quantifiers (CQ), universal elimination (∀E), universal introduction (∀I), De Morgan's laws (DeM), negation elimination (¬E), negation introduction (¬I), disjunctive syllogism (DS), double negative elimination (DNE), conditional introduction (→I), conditional elimination (¬E), indirect proof (IP), existential elimination (∃E) and biconditional introduction (↔I).
Kevin Klement's JavaScript/PHP Fitch-style natural deduction proof editor and checker http://proofs.openlogicproject.org/
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2099256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
How are these two different general equations both represent a hyperboloid of 2 sheet? A. $\frac{x^2}{a^2}+\frac{y^2}{b^2}-z^2=k<0$
B.$\frac{x^2}{a^2}-\frac{y^2}{b^2}-\frac{z^2}{c^2}=1$
I don’t know how I can turn one into the other.
EDIT: Thanks to Bernard, I have somewhat of a clue as to where to begin. But dividing both sides by $k$ (treating $k$ as a positive number carrying a negative sign) and distributing the negative sign, I get $\frac{-x^2}{ka^2}-\frac{y^2}{kb^2}+\frac{z^2}{k}=1$. But this equation is still not identical to the second equation: two of the three signs are different.
|
Rewrite the first equation as
$$\frac{z^2}{(\sqrt{-k})^2}-\frac{x^2}{(\sqrt{-k}\, a)^2}-\frac{y^2}{(\sqrt{-k}\, b)^2}=1.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2099330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to calculate the number of states of a 7-segment LCD? A 7-segment LCD display can display 128 different states.
The following image shows the 16x8-grid with all the possible states:
How can you calculate the number of states?
|
The combinations of segments can be calculated with the binomial coefficient:
$$
_nC_k=\binom nk=\frac{n!}{k!(n-k)!}
$$
I arrived at this by evaluating the number of states for each possible count of lit segments:
No segment $= 1$
One segment $= \binom 71 = \frac{7!}{1!(6)!} = 7$
Two segments $= \binom 72 = \frac{7!}{2!(5)!} = 21$
Three segments $= \binom 73 = \frac{7!}{3!(4)!} = 35$
Four segments $= \binom 74 = \frac{7!}{4!(3)!} = 35$
Five segments $= \binom 75 = \frac{7!}{5!(2)!} = 21$
Six segments $= \binom 76 = \frac{7!}{6!(1)!} = 7$
Seven segments $= 1$
Number of states = 1+7+21+35+35+21+7+1 = 128
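The same count in two lines; as a cross-check it equals $2^7$, since each of the $7$ segments is independently on or off:

```python
from math import comb

# One term per row of the table above.
states = sum(comb(7, k) for k in range(8))
```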
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2099422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How to calculate the limit $\displaystyle \lim_{x\rightarrow 0} (\sin(2\phi x)-2\phi x)\cot(\phi x)\csc^2(\phi x)$ How to calculate the limit?
$$\lim_{x\rightarrow 0} (\sin(2\phi x)-2\phi x)\cot(\phi x)\csc^2(\phi x)=-\frac{4}{3}$$
where $\displaystyle \phi$ is a real number.
|
First see that
$$\sin(x)=x-\frac16x^3+\mathcal O(x^5)$$
So,
$$\sin(2\phi x)-2\phi x=\color{#4488dd}{-\frac43}\phi^3x^3+\mathcal O(x^5)$$
Similarly,
$$\cot(\phi x)\csc^2(\phi x)=\frac{\cos(\phi x)}{\sin^3(\phi x)}=\frac{\cos(\phi x)}{\phi^3x^3-\frac12\phi^5x^5+\mathcal O(x^7)}$$
And combining all of this,
$$\begin{align}(\sin(2\phi x)-2\phi x)\cot(\phi x)\csc^2(\phi x)&=\frac{\cos(\phi x)(-\frac43\phi^3x^3+\mathcal O(x^5))}{\phi^3x^3-\frac12\phi^5x^5+\mathcal O(x^7)}\\&=\frac{\cos(\phi x)(-\frac43+\mathcal O(x^2))}{1-\frac12\phi^2x^2+\mathcal O(x^4)}\\&\to\color{#4488dd}{-\frac43}\end{align}$$
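A numerical spot check of the limit (the value $\phi=1.3$ is an arbitrary choice of mine):

```python
import math

def f(x, phi=1.3):
    # cot(u) csc^2(u) = 1 / (tan(u) sin^2(u))
    return ((math.sin(2 * phi * x) - 2 * phi * x)
            / (math.tan(phi * x) * math.sin(phi * x) ** 2))
```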
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2099542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Find a subsequence whose limit exists
Let $\{a_n\}$ be a sequence of non-zero real numbers.
Show that it has a subsequence $\{a_{n_k}\}$ such that $\lim \dfrac{a_{n_{k+1}}}{ {a_{n_k}}}$ exists and belongs to $\{0,1,\infty\}$.
I think the above problem is false.
If I take $(a_n)_n=(e^{-n})_n$ then any sub-sequence of $a_n$ is $e^{-n_k}$ but $\lim \dfrac{a_{n_{k+1}}}{ {a_{n_k}}}=\dfrac{e^{-n-1}}{e^{-n}}=\dfrac{1}{e}\notin \{0,1,\infty\}$.
Edit: By @Henry's comment I am now sure the problem is true. But how should I find the subsequence? Please give some hints.
|
Notice that $n_k+1 = n_{k+1}$ is false most of the time, so your counterexample does not work. Take $n_k = 2^k$, then
$$ \frac{a_{n_{k+1}}}{a_{n_k}} = \frac{e^{-2^{k+1}}}{e^{-2^k}} = \exp(-2^k) \to 0
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2099666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Definition of significant figures Normally when we are taught how to add numbers with regards to significant figures, we are told to round the result to the rightmost place of the least precise digit. $3.55 + 4 = 7.55$, for example, would be rounded off to $8$. But for my argument I will be considering the addition of two numbers $9$ and $2$, both of which are significant to the ones digit.
Following the usual rule for addition, $9 + 2 = 11$ and now we have two significant figures. However, whenever I think about significant figures I would intuitively think of these two definitions:
1 - Digits to the right of the last significant digit, can be any number
between 0 and 9.
2 - Any digit that has the possibility of being more than one numerical
value, is not a significant digit.
Using the first definition, we can put $9$ and $2$ in these forms:
$9.A$
$2.B$
where $A$ and $B$ are arbitrary digits ranging from $0$ to $9$. Thus, when we add
these two numbers, we would end up with:
$$9.A + 2.B$$
$$= 9 + 2 + 0.A + 0.B$$
$$= 11 + 0.A + 0.B$$
Finally when we assign some meaningful values to $A$ and $B$ such as $(A , B) =
(0 , 0)$ or $(A , B) = (9 , 9)$, we can observe something very interesting:
*
*For $(A , B) = (0 , 0)$:
$$11 + 0.0 + 0.0 = 11$$
*
*For $(A , B) = (9 , 9)$:
$$11 + 0.9 + 0.9 = 12.8$$
In the set of all possible numbers that can result from adding $9.A$ and
$2.B$, both of which are significant to the ones digit, the minimum happens
to be $11$ and the maximum happens to be $12.8$ (the set becomes even greater
if we add more uncertain digits to the right of $A$ and $B$, and in that case
the maximum resulting value starts to approach $13$ as you add more and
more 9's to both $9$ and $2$).
Lastly, we can see that the tens digit is always $1$, and using my second definition of significant digits, we can claim that the tens digit is significant. On the other hand, because the ones digit can be either $1$ or $2$, the ones digit is not significant. Thus, using my two definitions of significant figures, we can claim that the addition of $9$ and $2$ results in one significant figure (at the tens place), rather than the two significant figures one would get from the usual rule of thumb.
So my question is, are my definitions of significant digits correct? If yes, then I can say the general rules of thumb for adding numbers with significant figures is wrong and is being blindly taught and learned in high schools and universities. If not, then can you show me why not and if you have any credible sources.
|
There is some inconsistency in usage. Some people count significant digits from the decimal point, which works well enough when the possible range of values is reasonably small. For the rule you cite (and others related to error analysis and propagation) to be applied consistently, one must instead start counting significant digits from the first non-zero digit from the left. This is equivalent to writing all numbers in scientific notation and simply counting digits starting on the left.
In your example, then, you’re actually adding $9\times10^0$ and $2\times10^0$. Both exponents are the same so no adjustment is necessary before performing the addition. Their sum is $1.1\times10^1$, but this has too many significant digits, so it gets rounded to $1\times 10^1$, or $10$.
There is an ambiguity that you have to deal with, though: how many significant digits does an integer that ends with a string of zeros have? A number like $1000$ can cause the precision of a calculation to collapse very quickly if not handled correctly. This is another advantage of scientific notation in this context: there’s no such ambiguity. Trailing zeros, if any, are significant digits.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2099735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Does $\int_{0}^{\infty}{\cos(10x^2\pi)\sin(6x^2\pi)\over \sinh^2(2x\pi)}\mathrm dx={1\over 16}?$ How do we prove these two results?
$$\int_{0}^{\infty}{\cos(10x^2\pi)\sin(6x^2\pi)\over \sinh^2(2x\pi)}\mathrm dx={1\over 16}\tag1$$
$$\int_{0}^{\infty}{\sin(10x^2\pi)\sin(6x^2\pi)\over \sinh^2(2x\pi)}\mathrm dx={1\over 8\pi}\tag2$$
Try an approach to split out the form
$${\cos(10x^2\pi)\sin(6x^2\pi)\over \sinh^2(2x\pi)}={1\over 4}{\sin(16x^2\pi)-\sin(4x^2\pi)\over 2\sinh^2(x\pi)\cosh^2(x\pi)}\tag3$$
$$={1\over 4}{\sin(16x^2\pi)-4\sin(x^2\pi)\cos(x^2\pi)+8\sin^3(x^2\pi)\cos(x^2\pi)\over 2\sinh^2(x\pi)\cosh^2(x\pi)}\tag4$$
Or we could write $(1)$ as
$$\int_{0}^{\infty}{({e^{i10x^2\pi}+e^{-i10x^2\pi}})({e^{i6x^2\pi}-e^{-i6x^2\pi}})\over4i \sinh^2(2x\pi)}\mathrm dx\tag5$$
$$\int_{0}^{\infty}{{e^{i16x^2\pi}-e^{-i16x^2\pi}}-({e^{i4x^2\pi}-e^{-i4x^2\pi}})\over4i \sinh^2(2x\pi)}\mathrm dx\tag6$$
$$\int_{0}^{\infty}{\sinh(i16x^2\pi)-\sinh(i4x^2\pi)\over2i \sinh^2(2x\pi)}\mathrm dx\tag7$$
Surely this is not the correct approach here.
I estimated the closed form using wolfram integrator, not sure it is correct.
|
Maybe the most effective way to calculate these integrals is not to start from scratch and use residue theory, but to apply the general formula proved in Ramanujan's Lost Notebook, part IV, formula 14.4.14. In other words, there is no need to reinvent the wheel. I will demonstrate the method and its effectiveness by calculating $(1)$. Note that
\begin{align}
I_1&=\int_{0}^{\infty}{\cos(10x^2\pi)\sin(6x^2\pi)\over \sinh^2(2x\pi)}\mathrm dx\\
&=\int_{0}^{\infty}{\sin(16x^2\pi)-\sin(4x^2\pi)\over 2\sinh^2(2x\pi)}\mathrm dx\\
&=\int_{0}^{\infty}{32\pi x\cos(16x^2\pi)-8\pi x\cos(4x^2\pi)\over 4\pi}\left(\frac{1}{\tanh(2x\pi)}-1\right)\mathrm dx\\
&=\frac12\int_{0}^{\infty}{\left(4\cos(4x^2\pi)-\cos(x^2\pi)\right)}x\left(\frac{1}{\tanh(x\pi)}-1\right)~\mathrm dx
\end{align}
It is known that the following diverging integral can be regularized through the introduction of a regularization factor $e^{-\delta x^2},~\delta\to+0$
$$
\int_{0}^{\infty}x\cos\pi a x^2~dx=0\tag{*}.
$$
Ramanujan considers the function
$$
F_w(t)=\int_{0}^{\infty}{\sin(\pi t x)\over\tanh(x\pi)}e^{-\pi wx^2}~\mathrm dx
$$
and proves the following proposition
where the prime $'$ on the summation sign indicates that the terms with $j = 0, n$ are to be multiplied by $\frac{1}{2}$.
This formula allows one to calculate $F_w(t)$ when $w=-\eta_2ni/(\eta_1 m)$, because in that case $s=t$ and 14.4.14 can be solved for $F_w(t)$.
For example, for odd $m$ and $n$ 14.4.14 becomes:
Now divide (14.5.3) by $t$ and take the limit $t\to 0$. Of course the integral will be divergent, however it is made convergent by substracting $4\int_{0}^{\infty}x\cos(\pi x^2)~dx=0$. For example for $m=n=1$ one gets
$$
4\int_{0}^{\infty}{x\cos(x^2\pi)\left(\frac{1}{\tanh(2x\pi)}-1\right)}~\mathrm dx=\frac{1}{2}+\frac{-1}{2}(-1)+\left(\frac{-1}{2}\cdot \frac{1}{\sqrt{2}}+\frac{1}{2}\frac{-1}{\sqrt{2}}\right)-0\\
=1-\frac{1}{\sqrt{2}}.\tag{**}
$$
Numerical check confirms that (**) is correct.
When one of $m,n$ is even, then the analog of $14.5.3$ will have $\sinh(mt)$ on the LHS instead of $\cosh(mt)$, and in this case one needs to divide by $t^2$ and apply L'Hôpital's rule to the RHS. Of course there is no need to calculate everything by hand, because formulas $14.4.14$ can be plugged into Mathematica and closed forms evaluated automatically, thus saving a great deal of time and effort.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2099800",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 2,
"answer_id": 0
}
|
solve system of two trigonometric equations I have two equations:
$$f(x) = x^2/k^2$$
and
$$ g(x) = a \sinh(bx)$$
The derivatives are therefore
$$ f'(x) = 2x/k^2$$ and $$g'(x) = ab \cosh(bx)$$
Now for given values of $k$ and $x$ (for example: $k=0.42$ and $x=7$) I want to find values of $a$ and $b$ such that the values and also the slopes of $f$ and $g$ are identical,
which should be doable by solving the set of two equations $f=g$ and $f'=g'$ simultaneously.
From equation $f$ I can get
$$ a= \frac{x^2}{k^2 \sinh(bx)}$$
and insert that in $g$,
or from $g$ I can get
$$ a= \frac{2x}{bk^2 \cosh(bx)}$$
and insert that in equation $f$,
but then I am still not able to solve the resulting single equation.
|
Since you have $a$ in both equation, you can eliminate it and the equation becomes
$$
\frac{x}{\sinh(bx)}=\frac{2}{b\cosh(bx)}
$$
that is,
$$
bx=2\tanh(bx)
$$
Consider the function
$$
f(t)=2\tanh t-t
$$
that we can study for $t\ge0$. The derivative is
$$
f'(t)=\frac{2}{\cosh^2t}-1=\frac{2-\cosh^2t}{\cosh^2t}
$$
which vanishes at $\cosh t=\sqrt{2}$ and is positive in the interval $[0,\operatorname{arcosh}\sqrt{2}]$. Moreover, $\lim_{t\to\infty}f(t)=-\infty$. Thus the equation $f(t)=0$ has a single positive solution $t_0$, that can be determined numerically; an approximate value is $1.9150$.
Once you have found it, you get
$$
b=\frac{t_0}{x}
$$
and you can compute the value of $a$.
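For the example values $k=0.42$, $x=7$ this can be carried out numerically (a sketch with plain bisection; `scipy.optimize.brentq` would do the same job):

```python
import math

def solve_t0(lo=1.0, hi=3.0, iters=200):
    """Bisection for the positive root of f(t) = 2 tanh(t) - t."""
    f = lambda t: 2 * math.tanh(t) - t
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

k, x = 0.42, 7.0
t0 = solve_t0()                        # about 1.9150
b = t0 / x
a = x**2 / (k**2 * math.sinh(b * x))   # from the value equation f(x) = g(x)
```

The slope equation then holds automatically, since $bx=t_0=2\tanh t_0$.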
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2100016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Solutions of $x^ y = y ^ x$, with rational x and irrational y It seems to me that the equation $x^y = y^x$ has no solution in which $x$ is rational and $y$ irrational, or vice versa.
I could not get any counterexample.
|
The equation is equivalent to $y\log x = x\log y$, or ${y\over\log y}={x\over\log x}$. As $t\mapsto t/\log t$ is not injective on $(1,\infty)$ (it decreases on $(1,e)$ and increases on $(e,\infty)$), for any rational $x>1$ with $x\neq e$ there exists a real number $y\neq x$ such that $x^y = y^x$.
Let $y\neq 3$ be such that $3^y=y^3$ (such a $y$ exists by the above). Suppose $y$ is rational, $y=\frac mn$ in lowest terms. Then,
$$n^{3n}3^m=m^{3n}$$
Which implies $n=1$, as $(m,n)=1$. So, $m^3=3^m$ with $m\neq 3$. Contradiction, as $m=3$ is the only positive integer solution of $m^3=3^m$.
Thus, $(3,y)$ is a rational-irrational pair for the equation.
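The irrational partner of $x=3$ can be exhibited numerically (a sketch; bisection works on the bracket $[1.5, 2.6]$ because $h(y)=3\ln y-y\ln 3$ is increasing there, its derivative $3/y-\ln 3$ being positive for $y<3/\ln 3\approx 2.73$):

```python
import math

# h(y) = 3 ln(y) - y ln(3) vanishes at the second solution of 3^y = y^3.
h = lambda y: 3 * math.log(y) - y * math.log(3)
lo, hi = 1.5, 2.6   # h(1.5) < 0 < h(2.6), and h is monotone on this bracket
for _ in range(200):
    mid = (lo + hi) / 2
    if h(lo) * h(mid) <= 0:
        hi = mid
    else:
        lo = mid
y = (lo + hi) / 2    # about 2.478, the irrational partner of x = 3
```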
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2100147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Fundamental group of $S^2\cup\{xyz = 0\}$ We want to compute the fundamental group of $S^2 \cup \{xyz=0\}$. It is easy to see that it retracts to a sphere joined with three disks. How can I show that its fundamental group is trivial?
|
Note that
\begin{align*}
&\ \{(x, y, z) \in \mathbb{R}^3 \mid xyz = 0\}\\
=&\, \underbrace{\{(x, y, z) \in \mathbb{R}^3 \mid x = 0\}}_{yz-\text{plane}}\cup\underbrace{\{(x, y, z) \in \mathbb{R}^3 \mid y = 0\}}_{xz-\text{plane}}\cup\underbrace{\{(x, y, z) \in \mathbb{R}^3 \mid z = 0\}}_{xy-\text{plane}}.
\end{align*}
So the space you are interested in, call it $X$, is the union of the unit sphere and the three coordinate planes. You can use the Seifert-van Kampen Theorem by choosing one set, $U$, to be a connected open neighbourhood of the three planes, and the other, $V$, to be a connected open neighbourhood of the sphere; $U\cap V$ is an open neighbourhood of three great circles on $S^2$ which, if $U$ and $V$ are chosen correctly, is also connected. However, $U$ is contractible (again, if chosen correctly) and $V$ deformation retracts onto the simply connected sphere $S^2$, so both have trivial fundamental group and therefore $X$ has trivial fundamental group.
An explicit choice of suitable $U$ and $V$ is as follows:
\begin{align*}
U &= \{(x, y, z) \in X \mid |x| < 0.1, |y| < 0.1,\ \text{or}\ |z| < 0.1\}\\
V &= \{(x, y, z) \in X \mid 0.9 < \|(x, y, z)\| < 1.1\}.
\end{align*}
Alternatively, by 'shrinking' the planes to the origin, one can see that $X$ is homotopy equivalent to a bouquet of eight spheres (i.e. a wedge sum of eight copies of $S^2$). It follows from the Seifert-van Kampen Theorem that $\pi_1(Y\vee Z) = \pi_1(Y)*\pi_1(Z)$, so $X$ is simply connected (i.e. has trivial fundamental group).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2100383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What's the name of this fractal?
I created it with some simple chaos game restrictions:
*
*You can always move towards the point you moved to on the last step
*When on the top left corner, you can go to the right corners
*When on the top right corner, you can go to the left corners
*When on the bottom corners, you can only go to the corner that is opposite
|
Your figure can be constructed by a graph directed iterated function system. An iterated function system typically has no restrictions on which transforms may follow each other. A graph-directed IFS has restrictions like the one you have imposed: there is a directed graph in which the transformations correspond to edges. In your example the nodes of this directed graph would correspond to the corners of your figure, like this:
For a reference, with fractal dimension calculations, see:
"Hausdorff dimension in graph directed constructions" R. Daniel Mauldin and S. C. Williams (Trans. Amer. Math. Soc. 309 (1988), 811-829) http://www.ams.org/journals/tran/1988-309-02/S0002-9947-1988-0961615-4/
We introduce the notion of geometric constructions in $R^m$ governed by a directed graph $G$ and by similarity ratios which are labelled with the edges of this graph.
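For readers who want to reproduce the figure, here is one possible reading of the four rules as a graph-directed chaos game (the corner coordinates, the adjacency encoding, and the halfway-jump ratio are my assumptions, not stated in the post):

```python
import random

# Corners of the unit square and the allowed jumps read off the four rules;
# repeating the previous target is always permitted (rule 1).
corners = {"TL": (0, 1), "TR": (1, 1), "BL": (0, 0), "BR": (1, 0)}
allowed = {"TL": ["TR", "BR"], "TR": ["TL", "BL"], "BL": ["TR"], "BR": ["TL"]}

def chaos_game(steps=10000, seed=0):
    rng = random.Random(seed)
    px, py = 0.5, 0.5
    last = "TL"
    pts = []
    for _ in range(steps):
        target = rng.choice(allowed[last] + [last])  # rule 1: may repeat
        cx, cy = corners[target]
        px, py = (px + cx) / 2, (py + cy) / 2        # jump halfway to the corner
        pts.append((px, py))
        last = target
    return pts

points = chaos_game()
```

Plotting `points` with any scatter tool should reproduce the attractor.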
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2100526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Find $\lim \limits_{x\to 0}{\sin{42x} \over \sin{6x}-\sin{7x}}$ I want to find
$$\lim \limits_{x\to 0}{\sin{42x} \over \sin{6x}-\sin{7x}}$$
without resorting to L'Hôpital's rule. Numerically, this computes as $-42$. My idea is to examine two cases: $x>0$ and $x<0$ and use ${\sin{42x} \over \sin{6x}}\to 7$ and ${\sin{42x} \over \sin{7x}}\to 6$. I can't find the appropriate inequalities to use the squeeze theorem, though. Do you have suggestions?
|
Hint:
$$\frac{\sin42x}{\sin6x-\sin7x}=\cfrac{\frac{\sin42x}{42x}}{\frac17\frac{\sin6x}{6x}-\frac16\frac{\sin7x}{7x}}$$
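A numerical check of the value $-42$ from both sides of $0$:

```python
import math

g = lambda x: math.sin(42 * x) / (math.sin(6 * x) - math.sin(7 * x))
```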
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2100637",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Given a word, how to tell whether it is a Roman numeral? If I have a word that consists of letters I, V, X, L, C, D, M, how can I tell whether it is a valid roman numeral? For example, how do I tell that IXXL is not valid?
|
To ease the description you can consider the pairs "IV", "IX", "XL", "XC", "CD" and "CM" as single symbols.
That is, the roman numerals are sequences made of these symbols: I, IV, V, IX, X, XL, L, XC, C, CD, D, CM, M.
The sequences must hold these rules:
*
*The symbols I, X, C, M can be repeated up to three consecutive times. Other symbols must not be repeated.
*In each sequence, the symbols appear in decreasing order.
*If a symbol with two letters occurs, none of these two letters may occur after it, with the following exceptions: after XL and XC there can be IX; and after CD and CM there can be XC.
This way, your example is not legal because IX < XL, or because after IX there must not be any letter X.
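The three rules compress into a well-known regular expression, which validates exactly the standard forms for 1 to 3999 (a sketch):

```python
import re

# Each group is one "symbol" in decreasing order, with the two-letter
# symbols CM/CD, XC/XL, IX/IV built into the alternatives.
ROMAN = re.compile(r"M{0,3}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})")

def is_roman(word):
    # The empty string matches the pattern, so reject it explicitly.
    return bool(word) and ROMAN.fullmatch(word) is not None
```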
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2100834",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Uniform Boundedness principle for bounded linear maps from Frechet Space into a Banach Space I am looking for a proof of the uniform boundedness principle where the domain is a Frechet space, instead of the usual setting of a Banach Space.
This is used in proving the space of tempered distributions is complete but I can't find a proof of it anywhere.
When I try to prove it myself I get stuck on the final part(which uses the scaling property of linear maps).
Does anyone have a proof that they could share?
|
Let $X$ be Frechet and $Y$ be locally convex. Let $T_a: X \rightarrow Y$ be a continuous linear map for each $a\in A$, and let $q: Y \rightarrow [0,\infty)$ be a continuous semi-norm.
If $\sup \{q(T_a x):a\in A\} <\infty$ $\forall x\in X$ (pointwise bounded), then $x\mapsto \sup \{q(T_a x):a\in A\}$ is a continuous semi-norm (i.e., the set $\{T_a : a\in A\}$ is uniformly equicontinuous/uniformly bounded).
To prove this, let
$$E_n = \{x\in X: q(T_a x) \le n, \forall a\}$$
By pointwise boundedness, we see that $\cup_n E_n = X$. Since $E_n$ are closed and $X$ is a Baire space, we see that there exists $E_n$ with an interior point $x$. Since $X$ is Frechet, its topology is generated by countable semi-norms $p_k$ and thus there exists $N,r$ such that
$$
x + \bigcap_{k=1}^N \{p_k < r\} \subseteq E_n
$$
Hence,
$$
\bigcap_{k=1}^N \{p_k < 2r\} \subseteq E_n -E_n \subseteq E_{2n},
$$
because if $a\in \bigcap_{k=1}^N \{p_k < 2r\} $, then $a=(x+a/2)-(x-a/2)\in E_n-E_n$.
Let
$$
p(x) = \frac{1}{r} \sum_{k=1}^N p_k (x).
$$
Hence,
$$
0\in \{p < 1\} \subseteq E_{2n}\subseteq \{x: \sup \{q(T_a x):a\in A\} <3n\}.
$$
Since $p$ is continuous, we see that $0$ is an interior point of $\{\sup \{q(T_a x):a\in A\} <3n\} $ and thus $\sup \{q(T_a x):a\in A\}$ is a continuous semi-norm.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2100943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Evaluate $\int^4_0 g(x) \text { d}x$ Evaluate $\int^4_0 g(x) \text { d}x$
Breaking this up into separate integrals, I got:
$$\int^4_0 g(x) \text { d}x = \int^1_0 x \text { d}x + \int^2_1 (x-1) \text { d}x + \int^3_2 (x-2) \text { d}x + \int^4_3 (x-3) \text { d}x$$
$$ = \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} = 2$$
However, is this correct? I ask because we have closed/open intervals and so I'm not sure how integration behaves on closed/open intervals.
|
You are quite right.
As a result of its definition, the Riemann integral does not change its value if you change a function at a finite set of points, because you can just choose a partition that has those troublesome points as boundaries of its subintervals. Likewise, integrating $f$ over the four intervals $[a, b]$, $[a, b)$, $(a, b]$, $(a, b)$ yields the same result.
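As an illustration (a sketch of mine, assuming $g$ is the fractional-part function $g(x)=x-\lfloor x\rfloor$ suggested by the split in the question), a midpoint Riemann sum never evaluates the function at the breakpoints $1,2,3$, so it is blind to how the jumps are defined and converges to $2$ regardless:

```python
import math

def g(x: float) -> float:
    return x - math.floor(x)  # fractional part; values at the integers are irrelevant

n = 4000
h = 4 / n
midpoint_sum = sum(g((i + 0.5) * h) for i in range(n)) * h
print(midpoint_sum)  # -> 2.0 (up to rounding)
```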
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2101058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Identifying left- and right-Riemann sums of $\int_9^{14}e^{-x^4}\ dx$
My attempt:
Relooking at it, I think $L_{20}$ would be the highest, so like
$R_{1200} < L_{1200} < L_{20}$, but I have no way to justify it, any help is appreciated.
|
No matter what $n$ and $m$ are, $R_n<L_m$, based on your knowledge that $R_n<A$ and $A<L_m$. So $R_{1200}$ should be the smallest of the three: $0.33575$
Now both $L_{20}$ and $L_{1200}$ overestimate the value of $A$. Informally, $L_{1200}$ is closer to $A$, because $A=\lim_{n\to\infty}L_{n}$. The function is not weird enough for $L_{1200}$ to break the downward trend of $L_n$ towards $A$ as $n\to\infty$. So this much understanding suggests $L_{1200}$ is the smaller of the remaining two numbers.
A little more formally, $L_{20}$ is the area of a certain $20$ rectangles, and $L_{1200}$ is the area of a certain $1200$ rectangles. Since $20$ divides $1200$, we can in fact place sets of $60$ of the rectangles from $L_{1200}$ inside each rectangle from $L_{20}$. Since the function is decreasing, the $60$ rectangles will fit inside the one rectangle with room to spare. So again, $L_{1200}$ should be less than $L_{20}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2101205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
A tough integral : $\int_{0}^{\infty }\frac{\sin x \text{ or} \cos x}{\sqrt{x^{2}+z^{2}}}\ln\left ( x^{2}+z^{2} \right )\mathrm{d}x$ Recently, I found these two interesting integrals in Handbook of special functions page 141.
$$\mathcal{I}=\int_{0}^{\infty }\frac{\sin(ax)}{\sqrt{x^{2}+z^{2}}}\ln\left ( x^{2}+z^{2} \right )\mathrm{d}x$$
$$\mathcal{J}=\int_{0}^{\infty }\frac{\cos(ax)}{\sqrt{x^{2}+z^{2}}}\ln\left ( x^{2}+z^{2} \right )\mathrm{d}x$$
In this book, it gives the answer below
$$\mathcal{I}=\frac{\pi }{2}\left (\ln\frac{z}{2a}-\gamma \right )\left [ I_0\left ( az \right )- \mathbf{L}_0\left ( az \right )\right ]+\frac{1}{4\pi }G_{24}^{32}\left ( \frac{a^{2}z^{2}}{4}\middle|\begin{matrix}
\dfrac{1}{2},\dfrac{1}{2} \\
0,0,\dfrac{1}{2},\dfrac{1}{2}
\end{matrix} \right )~~~,~~~\left (a,\Re z>0 \right )$$
$$\mathcal{J}=\left ( \ln\frac{z}{2a}-\gamma \right )K_0\left ( az \right )~~~,~~~\left ( a,\Re z>0 \right )$$
where $I_0(\cdot)$ is the modified Bessel function of the first kind, $\mathbf{L}_0(\cdot)$ is the modified Struve function, $G_{pq}^{mn}(\cdot)$ is the Meijer G-function and $K_0(\cdot)$ is the modified Bessel function of the second kind.
So, I tried to figure out how to get the answer.
My attempt:
Let $x=z\tan t$, we have
\begin{align*}
\mathcal{I}&=2\int_{0}^{\frac{\pi }{2}}\sin\left ( az\tan t \right )\ln\left ( z\sec t \right )\sec t\, \mathrm{d}t\\
&=2\int_{0}^{\frac{\pi }{2}}\sin\left ( az\tan t \right )\ln\left ( \sec t \right )\sec t\, \mathrm{d}t+2\ln z\int_{0}^{\frac{\pi }{2}}\sin\left ( az\tan t \right )\sec t\, \mathrm{d}t
\end{align*}
Hence, define
$$\mathcal{I}\left ( m \right )=\int_{0}^{\frac{\pi }{2}}\sin\left ( az\tan t \right )\sec^mt\, \mathrm{d}t$$
then using the taylor series of $\sin x$ we get
\begin{align*}
\mathcal{I}\left ( m \right )&=\sum_{k=0}^{\infty }\left ( -1 \right )^{k}\frac{\left ( az \right )^{2k+1}}{\left ( 2k+1 \right )!}\int_{0}^{\frac{\pi }{2}}\tan^{2k+1}t\sec^mt\, \mathrm{d}t \\
&=\sum_{k=0}^{\infty }\left ( -1 \right )^{k}\frac{\left ( az \right )^{2k+1}}{\left ( 2k+1 \right )!}\int_{0}^{\frac{\pi }{2}}\sin^{2k+1}t\cos^{-2k-m-1}t\, \mathrm{d}t
\end{align*}
By using the same way we get
$$\begin{align*}
\mathcal{J}\left ( m \right )&=\sum_{k=0}^{\infty }\left ( -1 \right )^{k}\frac{\left ( az \right )^{2k}}{\left ( 2k \right )!}\int_{0}^{\frac{\pi }{2}}\tan^{2k}t\sec^mt\, \mathrm{d}t \\
&=\sum_{k=0}^{\infty }\left ( -1 \right )^{k}\frac{\left ( az \right )^{2k}}{\left ( 2k \right )!}\int_{0}^{\frac{\pi }{2}}\sin^{2k}t\cos^{-2k-m}t\, \mathrm{d}t
\end{align*}$$
But how to evaluate the last integral? It seems it can't be expressed in terms of the Beta function.
If I'm doing the wrong way, is there another way to solve the problem.
Any help will be appreciated!
|
For the second integral
Note that
$$ K_\nu(az)=\frac{\Gamma(\nu+1/2)(2z/a)^\nu}{\sqrt{\pi}}\int_0^\infty\frac{\cos at }{(t^2+z^2)^{\nu+1/2}} dt$$
By differentiation with respect to $\nu$
\begin{align}
\frac{\partial K_\nu(az)}{ \partial \nu} &=\left(\psi(\nu+1/2)+\log(2z/a)\right)K_\nu(az)\\&\quad-\frac{\Gamma(\nu+1/2)(2z/a)^\nu}{\sqrt{\pi}}\int_0^\infty\frac{\cos a t }{(t^2+z^2)^{\nu+1/2}} \log(t^2+z^2)\,dt
\end{align}
where $\psi=\Gamma'/\Gamma$ is the digamma function. Note that (see the addendum below)
$$\left.\frac{\partial K_\nu(z)}{ \partial \nu} \right|_{\nu=0} = 0$$
This implies
$$\int_0^\infty\frac{\cos a t }{\sqrt{t^2+z^2}} \log(t^2+z^2)\,dt = \left(\psi(1/2)+\log(2z/a) \right)K_0(az) $$
Note that
$$\psi(1/2)+\log(2z/a) = -\gamma -2\log 2+\log 2+\log(z/a) =\log(z/2a) -\gamma$$
Hence
$$\mathcal{J}=\left ( \log (z/2a)-\gamma \right )K_0\left ( az \right )$$
Addendum
$$K_\nu (z) = \int^\infty_0 e^{-z\cosh t} \cosh(\nu t)\,dt$$
This implies
$$\frac{\partial K_\nu(z)}{ \partial \nu} = \int^\infty_0t e^{-z\cosh t} \sinh(\nu t)\,dt$$
Hence we have
$$\left.\frac{\partial K_\nu(z)}{ \partial \nu} \right|_{\nu=0} = 0$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2101311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Endomorphisms whose matrices relative to every basis are triangular Let $E$ be a vector space and $u\in{\cal L}(E)$.
Given $\beta$ a basis of $E$, we denote by $M_\beta(u)$ the matrix of $u$ relative to $\beta$.
If we suppose that, for every basis $\beta$, the matrix $M_\beta(u)$ is upper triangular, then it is easy to prove that there exists some scalar $\lambda$ such that $u=\lambda\,id_E$.
Indeed, if $x$ is any nonzero vector, we can consider suitable vectors $e_2,\cdots,e_n$ such that $\beta=(x,e_2,\cdots,e_n)$ is a basis of $E$. Since $M_\beta(u)$ is upper triangular, we see that $x$ is an eigenvector.
At this point, a classical lemma shows that the conclusion holds.
My question :
How could it be proved that the same results holds when replacing "upper triangular" with "triangular" ?
|
For any given basis $(e_1,\ldots,e_n)$, we can show that $e_1$ or $e_n$ is an eigenvector (depending on whether the matrix in that basis is upper or lower triangular). So we obtain one eigenvector $f_1$.
Now, choose a basis $(g_1,\ldots,g_n)$ such that $f_1\notin \text{span}\{g_1,g_n\}$. Thus, we obtain a second eigenvector $f_2$ (which is $g_1$ or $g_n$) linearly independent from $f_1$. We can repeat the procedure in order to obtain linearly independent eigenvectors $f_1,f_2,\ldots,f_{n-1}$.
Now, let $f\notin\text{span}\{f_1,f_2,\ldots,f_{n-1}\}$ and consider the basis $(f+f_1,\ldots,f+f_{n-1},f)$. Again, $f+f_1$ or $f$ is an eigenvector. Notice that neither $f$ or $f+f_1$ belongs to $\text{span}\{f_1,f_2,\ldots,f_{n-1}\}$.
Thus, we obtained a basis formed by eigenvectors. Now, I leave to you the proof that the eigenvalues associated to these eigenvectors must be equal in order to satisfy the hypothesis.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2101401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Idea for deriving Green's functions - Helmholtz equation Let $x,y\in\mathbb{R}$. Then the Green's function for the Helmholtz equation is given by
$$\left(\Delta+\frac{\omega^{2}}{c^{2}}\right)G(x,y,\omega)=\delta(x-y).$$
Now what is the idea for deriving the Green's function here? Intuitively, I would take Fourier transforms on both sides, which would give me a convolution on the LHS and an exponential function on the RHS, but this would be very messy and I think I am missing something here. I believe the answer should be
$$G(x,y,\omega)=\frac{ic}{2\omega}e^{i\omega|x-y|/c}.$$
Apologies if this is a duplicate, but I could not find what I was looking for on the search bar.
|
I don't see where you get the convolutions on the LHS. The FT would give you (up to some multiplicative constants from FT normalisation, who cares)
$$(\omega^2/c^2-|\xi|^2) G(\xi,y,\omega ) = \exp(i y \xi),$$
which should be easy to manipulate; I think the residue theorem would do the trick.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2101524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Weak derivative: recursive definition, or confusing notation? According to the Wiki article, if $u$ and $v$ are locally integrable functions on some open subset of $\mathbb{R}^n$, then $v$ is the weak derivative of $u$ if, for any infinitely differentiable function $\varphi$ on $U$ with compact support, we have
$$\int_U u D^{\alpha}\varphi = (-1)^{|\alpha|}\int_U v\varphi,$$
where
$$D^{\alpha}\varphi = \frac{\partial^{|\alpha|}\varphi}{\partial x_1^{\alpha_1}...\partial x_n^{\alpha_n}}.$$
They then go on to say that the weak derivative is often notated $D^{\alpha}u$. Replacing this in the above definition, we get
$$\int_U u D^{\alpha}\varphi = (-1)^{|\alpha|}\int_U D^{\alpha}u\, \varphi.$$
Just checking my understanding here: $D^{\alpha}$ is used to signify two different things here, right? The one on $\varphi$ is an ordinary partial derivative, while the one on $u$ denotes the weak derivative (i.e. a locally integrable function that satisfies that identity). If so, is there no better notation that we can use? This looks terribly confusing.
|
Yes, $D^{\alpha}u$ is a notation for the weak derivative that satisfies the integration-by-parts identity, and $D^{\alpha}\varphi$ denotes the classical derivative, which is of course a weak derivative too, because it satisfies
$\int_U\phi D^{\alpha}\psi=(-1)^{|\alpha|}\int_U D^{\alpha}\phi \psi$
for all smooth $\psi$ with compact support. So the notation is now understood as the weak derivative but is still consistent with that of a classical partial derivative. Even if it is confusing at the beginning, it's better to get used to it because it is used very often (cf. Evans, Partial Differential Equations, and many others).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2101639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is the slope of a line half way between two lines of slope m1 and m2? I have two lines of slope m1 and m2 respectively. What is the slope of the line bisecting these two lines? Clearly there are 2 different lines which can be constructed, one perpendicular to the other. I am looking for the one which bisects the acute angle between the 2 lines.
|
We have:
$$m_1=\frac{y_2-y_1}{x_2-x_1}=\tan{(\alpha)}$$
And:
$$m_2=\frac{y_3-y_1}{x_3-x_1}=\tan{(\beta)}$$
Therefore,
$$m_3=\tan\left(\beta+\frac{\alpha-\beta}{2}\right)=\tan\left(\frac{\alpha+\beta}{2}\right)$$
Where $m_3$ is the slope of the bisector.
Now, we will use this result to find an expression in terms of $m_1$ and $m_2$.
One may show that:
$$\tan\left(\frac{\alpha+\beta}{2}\right)\equiv \frac{1-\cos(\alpha+\beta)}{\sin(\alpha+\beta)}$$
Now, you can use the angle addition formulas for $\sin(\alpha+\beta)$ and $\cos(\alpha+\beta)$, then apply the substitutions $\sin(x)=\frac{m}{\sqrt{1+m^2}}$ and $\cos(x)=\frac{1}{\sqrt{1+m^2}}$ (for each of $\alpha,\beta$ with its own slope) to obtain an expression for $m_3=f(m_1,m_2)$.
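Carrying out that recipe gives (in this sketch of mine, with my own names) the closed form $m_3=\frac{\sqrt{(1+m_1^2)(1+m_2^2)}-(1-m_1m_2)}{m_1+m_2}$, which can be cross-checked against computing the angles directly; it assumes $m_1+m_2\neq 0$ and both angles taken in $(-\pi/2,\pi/2)$:

```python
import math

def bisector_slope(m1: float, m2: float) -> float:
    # tan((alpha+beta)/2) = (1 - cos(alpha+beta)) / sin(alpha+beta),
    # with sin and cos rewritten in terms of the two slopes.
    root = math.sqrt((1 + m1 * m1) * (1 + m2 * m2))
    return (root - (1 - m1 * m2)) / (m1 + m2)

m1, m2 = 0.0, 1.0
direct = math.tan((math.atan(m1) + math.atan(m2)) / 2)
print(bisector_slope(m1, m2), direct)  # both ~0.41421 (= tan(pi/8))
```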
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2101738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Is it possible to simplify the complexity $O(n\log{}n + \frac{n^2}{m}\log{}m)$? The question is in the title: is it possible to simplify the complexity $O(n\log{}n + \frac{n^2}{m}\log{}m)$ ? $n$ and $m$ are two variables, and you know that $n > m$ (by the way, what if we don't know that?).
What I first thought was that it is possible to reduce it by keeping only the second term: $O(\frac{n^2}{m}\log{}m)$, because the term $n^2$ is dominating the term $n$, even if we divide $n^2$ by $m$. But the more I think about it, the less confident I am about this.
Any idea/explanation? That would help a lot.
Thanks!
|
In general, if you have two parameters $m$ and $n$ that can be independently adjusted, you are stuck with two parameters. Sometimes $m$ and $n$ are naturally tied together (maybe they're roughly proportional in most cases, or maybe $m$ is generally $O(1)$ and its typical values don't depend much on $n$) and then you can reduce it.
For instance if you assume $m$ does not change and you are interested in the scaling of complexity with $n,$ then your approach of fixing $m$ and regognizing that the second term scales faster than the first would be valid.
However, all you are told is that $m<n,$ you're not told 'how much less'. In addition, this is an interesting circumstance when the second term actually decreases with $m.$ So the best case runtime is when $m = \Theta(n).$ In this case we can plug $n=m$ into the second term and see that it's $n\log(n)$ just like the first. The worst case is the case I mentioned before where $m$ stays $O(1)$ and then the overall complexity is $O(n^2).$
All of this assumes that $m$ is (loosely) a function of $n$ in the sense that typical values of $m$ depend on $n$ in a predictable way. As I said at the beginning, if there's no such relationship and the $m$ and $n$ are pretty much independent, two parameters means two parameters.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2101835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
$\forall x,y \in \mathbb{R}:\vert\cos x- \cos y\vert\leq\vert x-y\vert$ Prove that $$\forall x,y \in \mathbb{R}:\vert\cos x- \cos y\vert\leq\vert x-y\vert.$$
My try : $ f(x) =\cos x$ and use the mean value theorem.
|
since $\left| \sin x \right| \le \left| x \right| $ and $\left|\sin\frac{x+y}{2}\right|\le 1$,
$$|\cos x-\cos y|=\left| 2\sin\frac { y-x }{ 2 }\,\sin\frac { x+y }{ 2 } \right| = 2\left| \sin \frac { x-y }{ 2 } \right| \left| \sin \frac { x+y }{ 2 } \right| \le 2\left| \frac { x-y }{ 2 } \right| \cdot 1=\left| x-y \right| $$
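A brute-force sanity check of the inequality (a sketch of mine, independent of the proof above; the $10^{-12}$ slack only absorbs floating-point rounding):

```python
import math
import random

random.seed(0)
pairs = [(random.uniform(-100, 100), random.uniform(-100, 100)) for _ in range(10_000)]
violations = [(x, y) for x, y in pairs
              if abs(math.cos(x) - math.cos(y)) > abs(x - y) + 1e-12]
print(len(violations))  # -> 0
```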
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2101931",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
asymptotics on series Define
$$
f(x)=\sum_{k=1}^\infty \frac{x^k}{k\cdot k!}.
$$
Is there a way to find the asymptotics of $f(x)$ as $x\rightarrow \infty$? What I suspect is
$$
f(x)\sim \frac{e^x}{x},
$$
because
\begin{align*}
f(x)&=\sum_{k=1}^\infty \frac{x^k}{k\cdot k!}\\
&\geq \sum_{k=1}^\infty \frac{x^k}{(k+1)\cdot k!}\\
&=\frac{1}{x}\sum_{k=1}^\infty \frac{x^{k+1}}{(k+1)!}\\
&=\frac{1}{x}(e^x-1-x)\sim\frac{e^x}{x},
\end{align*}
but this is only a lower bound, how to get a similar upper bound? Thanks a lot!
|
Here it is a (very!) brute-force approach. Both $\frac{e^t-1}{t}$ and its primitive are entire functions, hence
$$ g(x)\stackrel{\text{def}}{=}\frac{1}{x\,e^x}\sum_{k\geq 1}\frac{x^k}{k\cdot k!} = e^{-x}\sum_{k\geq 0}\frac{x^k}{(k+1)(k+1)!} = \sum_{j,k\geq 0}\frac{(-1)^j x^{k+j}}{(k+1)^2 j!k!}$$
leads to:
$$ g(x) = \sum_{s\geq 0}\frac{x^s}{s!}\sum_{k=0}^{s}\frac{\binom{s}{k}(-1)^{s-k}}{(k+1)^2}=\sum_{s\geq 0}\frac{x^s}{s!}\int_{0}^{1}\sum_{k=0}^{s}\binom{s}{k}(-1)^{s-k}u^k(-\log u)\,du$$
then to:
$$ g(x) = \sum_{s\geq 0}\frac{x^s (-1)^s}{s!}\int_{0}^{1}(1-u)^s(-\log u)\,du = \sum_{s\geq 0}\frac{x^s (-1)^s H_{s+1}}{(s+1)!}$$
and the behaviour in a neighbourhood of the origin has no secrets anymore.
We may deal with the behavior in a left neighbourhood of $+\infty$ by noticing that, by Frullani's theorem, the Laplace transform of $\frac{e^x-1}{x}$ is given by $-\log\frac{s-1}{s}$ for $s>1$. It follows that:
$$ \mathcal{L}\left(\sum_{k\geq 1}\frac{x^k}{k\cdot k!}\right)=-\frac{1}{s}\,\log\left(\frac{s-1}{s}\right)=\mathcal{L}\left(\frac{e^x-1}{x}\right)+\frac{s-1}{s}\,\log\left(\frac{s-1}{s}\right) $$
and:
$$ \mathcal{L}\left(e^{-x}\sum_{k\geq 1}\frac{x^k}{k\cdot k!}\right)=\frac{1}{s+1}\log\left(1+\frac{1}{s}\right)$$
$$ \mathcal{L}\left(x\, e^{-x}\sum_{k\geq 1}\frac{x^k}{k\cdot k!}\right)=\frac{1\color{red}{-s\log\left(\frac{s}{s+1}\right)}}{s\color{blue}{(s+1)^2}}$$
where the red term $\color{red}{\to 0}$ and the blue term $\color{blue}{\to 1}$ as $s\to 0^{+}$. This proves the asymptotic behaviour of $\int_{0}^{x}\frac{e^t-1}{t}\,dt$ as $x\to +\infty$ is given by $\frac{e^x}{x}\mathcal{L}^{-1}\left(\frac{1}{s}\right)=\frac{e^x}{x}$ as wanted.
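One can also check the claimed asymptotics $f(x)\sim e^x/x$ numerically (a rough sketch of mine; the series is summed term by term until the terms become negligible):

```python
import math

def f(x: float) -> float:
    """Partial sum of sum_{k>=1} x^k / (k * k!)."""
    total, term = 0.0, 1.0   # term holds x^k / k!
    for k in range(1, 400):
        term *= x / k
        total += term / k
        if term < 1e-17 * total:
            break
    return total

for x in (10.0, 30.0, 50.0):
    print(x, f(x) * x / math.exp(x))  # ratio tends to 1 as x grows
```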
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2102042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Can someone give an example of an ideal $I \subset R= \Bbb{Z}[x_1,...x_n]$ with $R /I \cong \Bbb{Q}$? Question in the title. I have never seen a quotient of $R$ by a maximal ideal $I$ that is an infinite field, so I would also be interested in the case that $R/I$ is any infinite field. If no such examples exist, I would also like to hear about the case where $R$ is a polynomial ring over $\Bbb{Z}$ in infinitely many variables.
|
You can't find such an example: a maximal ideal $\mathfrak m$ in $\mathbf Z[x_1,\dots,x_n]$ has a non-zero intersection with $\mathbf Z$, which is a prime ideal $p\mathbf Z$, hence the quotient $\mathbf Z[x_1,\dots,x_n]/\mathfrak m$ is a finitely generated algebra over the finite field $\mathbf F_p$, which has characteristic $p$.
The same argument is valid for polynomial rings with an infinite number of indeterminates.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2102128",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Is there a additive non-linear non-complex map?
Let $V$ and $W$ be vector spaces over a field $\mathbb{F}\neq \mathbb{C}$. Give an example of a non-linear map $T:V\to W$ such that \begin{equation}T(x+y) = T(x)+T(y), \forall x,y\in V.\end{equation}
I asked myself this question when I was solving an exercise list on Linear Algebra. This example is pretty easy when $\mathbb{F}=\mathbb{C}$: we take $V=W=\mathbb{C}$ and $T:z\mapsto \bar{z}$, and we have that $T(\lambda z) = \bar{\lambda}T(z)$. However, I couldn't find any examples for non-complex vector spaces.
|
A discontinuous additive function $a\colon\Bbb R\to\Bbb R$ is a non-linear map of the linear space $\Bbb R$ over the field $\Bbb R$ to itself. Observe that any additive map is linear if the field of scalars is $\Bbb Q$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2102196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
A question on Boundary Value Problem asked in CSIR(India)-Dec16 The boundary value problem $x^2y''-2xy'+2y=0$ subject to the boundary conditions $y(1)+\alpha y'(1)=1$; $y(2)+\beta y'(2)=2$ has a unique solution if
$$1.\ \alpha=-1,\beta=2\\2.\ \alpha=-2,\beta=2\\3.\ \alpha=-1,\beta=-2\\4.\ \alpha=-3,\beta=2/3.\\$$ Kindly suggest how I should prepare for such problems and how to solve them. The question seems very nice to me, but I couldn't find any usual ODE technique with which I could solve it.
|
Given the boundary value problem
$$\tag 1 x^2y''-2xy'+2y=0$$
We see that this is an Euler-Cauchy type equation, so we try
$$y = x^m \implies y' = mx^{m-1}, y''=m(m-1)x^{m-2}$$
It is worth noting that as an alternate approach, we can make the substitution $x = e^t$ and then simplify to arrive at the same result.
We substitute this back into $(1)$ and combine like terms
$$x^m(m^2 - 3 m + 2) = 0 \implies m_{1,2} = 1, 2$$
This gives us the solution
$$y(x) = c_1 x^{m_1} + c_2 x^{m_2} = c_1 x + c_2 x^2$$
Now for the BCs.
Case 1:
$$y(1)+\alpha y'(1)=1 , y(2)+\beta y'(2)=2, \alpha=-1,\beta=2$$
This gives
$$y(x) = c_1 x + c_2 x^2 \implies y'(x) = c_1 + 2 c_2 x$$
$$y(1) - y'(1) = c_1 + c_2 - (c_1 + 2 c_2) = -c_2 = 1 \implies c_2 = -1\\y(2) + 2 y'(2) = 2 c_1 + 4 c_2 +2 (c_1 + 4c_2) = 4 c_1 + 12 c_2 = 2 \implies c_1 = \dfrac{7}{2}$$
Thus
$$y(x) = \dfrac{7 }{2}x - x^2$$
You should do the other three cases.
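Rather than solving each case by hand, note that with the general solution $y=c_1x+c_2x^2$ each boundary condition is linear in $(c_1,c_2)$, so the BVP has a unique solution exactly when the $2\times2$ determinant of the resulting system is nonzero. A quick check of the four options (a sketch of mine; according to this check, only option 1 gives a nonzero determinant):

```python
def det(alpha: float, beta: float) -> float:
    # y(1) + a y'(1) = (1+a) c1 + (1+2a) c2
    # y(2) + b y'(2) = (2+b) c1 + (4+4b) c2
    return (1 + alpha) * (4 + 4 * beta) - (1 + 2 * alpha) * (2 + beta)

for alpha, beta in [(-1, 2), (-2, 2), (-1, -2), (-3, 2 / 3)]:
    print(alpha, beta, det(alpha, beta))  # nonzero only for (-1, 2)
```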
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2102306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to determine what components make up a complex mixture based on a summary of that mixture? Any wine can be made up of many components. Each component is basically a bunch of grapes from a specific place and time, so each component has three attributes: vintage, varietal, and appellation.
While grapes come into a winery as single components (e.g. 2016 Russian River Valley Cabernet Sauvignon), a wine may end up composed of many components (after blending).
Given a summary of a finished wine's properties, how can you find a set of components — and their percentage contribution to the finished wine — that add up to that wine?
Here's what we have:
VINTAGE % VARIETY % APPELLATION %
2014 98.00 Cab Sauv. 78.95 Russian River Valley 37.02
2015 1.50 Petit Sirah 9.12 Lake County 35.81
2016 0.50 Syrah 7.34 Sonoma Coast 22.79
Petit Verdot 4.42 Sonoma 4.07
Sauv. Blanc 0.14 Napa 0.31
Concentrate 0.03
Ideally we are able to find the smallest set of components, but any set of components that could solve this would be acceptable.
What is the answer(s) for this specific data set, and what is the right approach for this problem in general?
|
One solution is to take all $3\times 6 \times 5 = 90$ possible combinations and simply multiply the relevant percentages together to give an overall percentage for each particular combination. For your example of a 2016 Cabernet Sauvignon from Russian River Valley, that would give $0.50\%\times 78.95\% \times 37.02\% = 0.14613645\%$
Another solution would be to draw from choices until they are used up to give $3+ 6 + 5 - 2=12$ possible combinations, for example using $37.02\%$ as 2014 Cabernet Sauvignon from Russian River Valley, which uses all the Russian River Valley, so the next choice is Lake County, and so on to give something like
%
2014 Cab Sauv. Russian River Valley 37.02 Russian River Valley used up
2014 Cab Sauv. Lake County 35.81 Lake County used up
2014 Cab Sauv. Sonoma Coast 6.12 Cab Sauv. used up
2014 Petit Sirah Sonoma Coast 9.12 Petit Sirah used up
2014 Syrah Sonoma Coast 7.34 Syrah used up
2014 Petit Verdot Sonoma Coast 0.21 Sonoma Coast used up
2014 Petit Verdot Sonoma 2.38 2014 used up
2015 Petit Verdot Sonoma 1.50 2015 used up
2016 Petit Verdot Sonoma 0.19 Sonoma used up
2016 Petit Verdot Napa 0.14 Petit Verdot used up
2016 Sauv. Blanc Napa 0.14 Sauv. Blanc used up
2016 Concentrate Napa 0.03 all used up
If the percentages are random real numbers, you are unlikely to be able to produce a shorter list than this, though many other lists of $12$ are possible and can be generated by reordering the input data. The question of whether there are shorter lists from a particular set of data may be computationally complicated (my guess is it could be an NP problem)
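The "draw from choices until they are used up" procedure above can be sketched in code as follows (function name and data layout are my own; it assumes each attribute's percentages sum to the same total, and rounds to 2 decimals at each step so the floating-point arithmetic stays aligned with the percentages):

```python
def greedy_blend(*attributes):
    """Each attribute is a list of (label, percent) pairs summing to 100."""
    pools = [dict(a) for a in attributes]   # insertion order = drawing order
    combos = []
    while all(pools):
        keys = [next(iter(p)) for p in pools]           # current front label of each attribute
        amount = min(p[k] for p, k in zip(pools, keys))
        combos.append((*keys, round(amount, 2)))
        for p, k in zip(pools, keys):                   # subtract and drop exhausted labels
            p[k] = round(p[k] - amount, 2)
            if p[k] <= 0:
                del p[k]
    return combos

vintages = [("2014", 98.00), ("2015", 1.50), ("2016", 0.50)]
varieties = [("Cab Sauv.", 78.95), ("Petit Sirah", 9.12), ("Syrah", 7.34),
             ("Petit Verdot", 4.42), ("Sauv. Blanc", 0.14), ("Concentrate", 0.03)]
appellations = [("Russian River Valley", 37.02), ("Lake County", 35.81),
                ("Sonoma Coast", 22.79), ("Sonoma", 4.07), ("Napa", 0.31)]

result = greedy_blend(vintages, varieties, appellations)
for row in result:
    print(row)  # reproduces the 12-row list above
```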
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2102405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Using an Increasing Differentiable Function to Show A Sequence is Increasing I am taking my first real analysis course, and we are talking about sequences of real numbers. I have a question where as part of a proof I want to show that a sequence $a_n=\frac{n}{n+1}$ for $n\in\Bbb Z_{>0}$, is strictly increasing. Note $\Bbb Z_{>0}=\{1,2,3,\ldots\}$. I understand that induction is often used for showing these types of things, since $n$ is a positive integer. Another method that I used was showing $\frac{a_{n+1}}{a_n}> 1$: since $\frac{a_{n+1}}{a_n}=\frac{(n+1)^2}{n(n+2)}$, we have
$$(n+1)^2=n^2+2n+1>n^2+2n=n(n+2)\implies \frac{(n+1)^2}{n(n+2)}>1$$
However, as I am very familiar with real valued functions of real variables, and derivatives, I thought I could show $(a_n)$ is strictly increasing by showing that the function $f:\Bbb R_{>0}\to\Bbb R$ defined as $f(x)=\frac{x}{x+1}$ is strictly increasing. Note $\Bbb R_{>0}=\{x\in\Bbb R:x>0\}$. Surely $f$ is differentiable on $(0,\infty)$, so $f'(x)=\frac{1}{(x+1)^2}>0$ for all $x>0$. Thus $f$ is strictly increasing for $x>0$. Since $f$ is strictly increasing for $x>0$, we can conclude that $a_n=f(n)<f(n+1)=a_{n+1}$ for every $n\in\Bbb Z_{>0}$. Thus, $(a_n)$ is strictly increasing.
$\mathbf{My\,Question:}$ is the method of solving this problem that I showed in the previous paragraph (i.e. using $f(x)$) a valid and accepted method for showing $(a_n)$ is increasing? The question is really about any $(a_n)$ for which we can construct a differentiable function such that $f(n)=a_n$; this is just one example. I question whether it is valid, because we are taught to use induction and the method of showing $\frac{a_{n+1}}{a_n}> 1$ to show that sequences are strictly increasing. Although I find the method of using $f(x)$ much easier in some cases (and quite intuitive), I never see it in lectures.
Thanks in advance for any help.
|
Suppose we manage to construct $f$ such that $f(n)=a_n$ and $ \forall x >0, f'(x) >0$; then we know that $f$ is an increasing function.
That is $x>y$ then $f(x)> f(y)$.
In particular, since $n+1 > n$, we have shown that $a_{n+1} > a_n$ and hence the sequence is increasing.
(remark: if we have further information such as $a_n>0$, we have just proven that $\frac{a_{n+1}}{a_n}>1$)
But I agree with other answers that it depends on what is known to you.
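Both routes discussed in this thread (the ratio test $\frac{a_{n+1}}{a_n}=\frac{(n+1)^2}{n(n+2)}>1$ from the question and the derivative sign $f'(x)=\frac{1}{(x+1)^2}>0$) are easy to sanity-check numerically; this is just an illustrative sketch, not part of either proof:

```python
def f(x: float) -> float:
    return x / (x + 1)

# ratio test: a_{n+1}/a_n > 1 for the first few thousand terms
assert all(f(n + 1) / f(n) > 1 for n in range(1, 5000))

# derivative argument: f'(x) = 1/(x+1)^2, checked by central differences
h = 1e-6
for x in (0.5, 1.0, 3.0, 10.0):
    fd = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(fd - 1 / (x + 1) ** 2) < 1e-6
print("both checks passed")
```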
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2102514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
The general solution of $y''=-\sin y$ When I asked Mathematica to solve the ODE
$$y''=-\sin y \tag{1} $$
I got the solutions
$$y=\pm 2 \text{am}\left(\frac{1}{2} \sqrt{\left(c_1+2\right) \left(t+c_2\right){}^2}|\frac{4}{c_1+2}\right), \tag{2} $$
where $\text{am}(u|m)$ is the Jacobi Amplitude function. I wonder why there is a $\pm$ ambiguity here (I believe it's related to the square root), since the equation $(1)$ is explicit.
P.S.
Using the rules $c_1 \mapsto -2+4c_1^2,c_2 \mapsto c_2/c_1$ one gets the equivalent form
$$y= \pm2 \text{am}\left(\sqrt{\left(t c_1+c_2\right){}^2}|\frac{1}{c_1^2}\right)$$
and if one cancels the square root with the square, noticing that am is odd in the first argument one gets an even nicer form
$$y=\pm2 \text{am}\left(c_1t +c_2|\frac{1}{c_1^2}\right).$$
However, my question still remains: Why is there an ugly $\pm$ in the solution if the ODE is written explicitly in the form $y''=f(x,y,y')$? Is there a nicer form for the general solution?
Thank you!
|
$$y=\pm2 \text{am}\left(c_1t +c_2\bigg|\frac{1}{c_1^2}\right).$$
The Jacobi amplitude function is symmetrical : $\quad\text{am}(-x|k)=-\text{am}(x|k)$
$$2\text{am}\left(c_1t +c_2\bigg|\frac{1}{c_1^2}\right)=-2\text{am}\left(-(c_1t +c_2)\bigg|\frac{1}{c_1^2}\right) = -2\text{am}\left(C_1t +C_2\bigg|\frac{1}{C_1^2}\right)$$
with $C_1=-c_1$ and $C_2=-c_2 .$
This means that, considering the general solution of the ODE one can forget the $\pm\:$:
$$y=\pm2 \text{am}\left(c_1t +c_2\bigg|\frac{1}{c_1^2}\right) \equiv 2\text{am}\left(c_1t +c_2\bigg|\frac{1}{c_1^2}\right) \text{insofar } c_1,c_2 \text{ are any constants.}$$
Of course, this isn't true if we don't consider the whole set of solutions, but one particular solution, according to initial/boundary conditions : The conditions determine the constants and the sign. That is why it is better to write :
$$y=\pm2 \text{am}\left(c_1t +c_2\bigg|\frac{1}{c_1^2}\right)\:,$$
knowing that, for the particular solution, $\pm$ means $+$ or $-$ (i.e.: only one solution, not both).
NOTE :
The question itself is somehow ambiguous : " I wonder why there is a $\pm$ ambiguity here , since the equation $(1)$ is explicit ".
It isn't specified if the subject of the question concerns an ODE alone, or an ODE with initial/boundary conditions.
Not only for equation $(1)$, but for ODEs in general: if the question concerns an ODE without initial/boundary conditions, "explicit" doesn't mean that only one solution exists. There are infinitely many solutions, since the constants such as $c_1$, $c_2$ (and the $\pm$, if present) can each be chosen among many values. One cannot say that there is an ambiguity due to the presence of arbitrary constants and $\pm$ in the general solution.
If the question concerns an ODE with initial/boundary conditions, then insofar as the conditions are consistent with a unique solution, they determine the values of the constants and the sign chosen for $\pm$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2102582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Existence of $\alpha\in \Bbb C$ such that $f(\alpha)=f(\alpha+1)=0$.
Let $f(x)\in \Bbb Q[x]$ be an irreducible polynomial over $\Bbb Q$.
Show that there exists no complex number $\alpha$ such that $f(\alpha)=f(\alpha+1)=0$.
Following @quasi;
Let $f(x)=a_0+a_1x+a_2x^2+\dots +a_nx^n$.
Define $g(x)=f(x+1)-f(x)$
Then $g(\alpha)=0$. Also $g(\bar \alpha)=0\implies g(x)=(x-\alpha)^a(x-\bar \alpha)^b h(x)$, where $h(x)\in \Bbb C[x]$.
What to do now? Please help.
|
This is a conceptual proof: if $f(a)=f(a+1)=0$, then $a$ is a root of both $f(x)$ and $f(x+1)$. Both polynomials are irreducible, hence minimal polynomials of $a$. By uniqueness, we obtain $f(x)=f(x+1)$ as polynomials, hence by plugging in $x=a+1$ we get $0=f(a+1)=f(a+2)$. Continue with $a+2,a+3, \dotsc$ and you will obtain infinitely many roots. Contradiction!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2102688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
Cofinite topology on $X \times X$ with $X$ an infinite set Let $X$ be an infinite set. And consider $(X \times X)_{cof}$ and $X_{cof} \times X_{cof}$. I can see that $(X \times X)_{cof}$ is not finer than $X_{cof} \times X_{cof}$.
MY QUESTION: But is $X_{cof} \times X_{cof}$ finer (hence strictly finer) than $(X \times X)_{cof}$ ?
Specifically, an open set $U$ in $(X \times X)_{cof}$ will be such that $(X\times X)-U$ is finite, meaning here that $(X\times X)-U$ is some finite set of ordered pairs. But I'm struggling to formally show that such a set can be written as an open set in $X_{cof} \times X_{cof}$, i.e., as a union of products of two sets whose complements are finite subsets of $X$. I'd appreciate any help.
|
Maybe it is slightly simpler to think about closed sets. For any pair of points $x,y\in X$, both $\{x\}$ and $\{y\}$ are closed in $X_{cof}$. By definition of the product topology, both $\{x\}\times X$ and $X\times \{y\}$ are closed in $X_{cof}\times X_{cof}$. Therefore their intersection
$$(\{x\}\times X)\cap (X\times \{y\})=\{(x,y)\} $$
is closed. In particular, since finite unions of closed sets are closed, every finite set in $X\times X$ is closed in $X_{cof}\times X_{cof}$. This proves that $X_{cof}\times X_{cof}$ is finer than $(X\times X)_{cof}$.
It is also strictly finer, since for instance $\{x\}\times X$ is closed in $X_{cof}\times X_{cof}$, but since it is infinite it cannot be closed in $(X\times X)_{cof}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2102799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Similarities between triangles that share the same incircle I know this question may be based on programming, but its core is geometry, hence I am posting it here.
Okay, what I am trying to do is write a small program to find all the possible triangles that have the same incircle radius.
One important condition is that all the quantities must be integers. That is to say, the area must be exactly divisible by the semi-perimeter, the quantity under the square root in Heron's formula must be a perfect square (so the area is an integer), and the perimeter must be divisible by 2. No rounding off; values with floating points are discarded entirely.
Following the above conditions, the two scalene triangles below share the same incircle, i.e. of radius 2:
5 12 13
6 8 10
Do they have any similarities? What is the maximum area of a triangle that can contain an incircle of radius $r$?
What I am doing right now is checking all triangles with sides from $2r$ up to $2r\cdot 100$. It's basically brute force. But is there a better way of determining where I should stop?
EDIT:
Just found this algorithm on CodeChef for this problem. Can someone explain to me how it works, or what the logic behind it is?
for x = 1 to 2*r-1 do:
for y=x to (3*(r*r))/x do:
p = (r*r)*(x+y)
q = x*y-(r*r)
if q <= 0
perform next iteration of inner loop
else
z = p/q
if (z < y or p mod q ! = 0)
perform next iteration of inner loop
else
points for the triangle are
x+y
x+z
z+y
end inner loop
end outer loop
Here $r$ is the radius of the incircle, and mod means the remainder on division (it's a programming operator). For example, 4 mod 2 = 0 and 4 mod 3 = 1.
Please note I come from a computer science background, so I am not that familiar with mathematical language; if the above algorithm seems difficult to understand in the form I have given it, please let me know how I can present it better.
|
The triangles you are looking for are either Pythagorean triangles or a sum of two (possibly scaled) Pythagorean triangles.
To see that, one can show first of all that any Pythagorean triangle has an integer inradius. Any primitive Pythagorean triple $(a,b,c)$ can be obtained from two coprime integers $m$ and $n$ (not both odd and with $m>n$) as:
$$
a=m^2-n^2,\quad b=2mn,\quad c=m^2+n^2,
$$
and the Pythagorean triangle derived from that triple has inradius
$$
r={ab\over a+b+c}=n(m-n),
$$
which is indeed an integer.
Of course, a multiple $(ka,kb,kc)$ of that Pythagorean triple will still give a triangle with integer inradius $r'=kr$.
In general, a triangle with integer sides and area also has rational altitudes. The altitude relative to the largest side falls inside the base and divides then the triangle into two right-angled triangles. Those triangles have integer hypotenuses, a rational cathetus common to both, and the other catheti whose sum is an integer, so they must be (rational multiples of) Pythagorean triangles.
EDIT.
For example, a triangle with sides $AC=9$, $BC=10$, $AB=17$, has an inradius of $2$ but is not Pythagorean. Let $CH$ be the altitude with respect to base $AB$: its length is $72/17$, which is a rational number. By Pythagoras' Theorem we also get: $AH=135/17$ and $BH=154/17$, so that triangles $ACH$ and $BCH$ are rescaled Pythagorean triangles, with sides:
$$
\left(9,{135\over17},{72\over17}\right)={9\over17}\left(17,15,8\right)
\quad\hbox{and}\quad
\left(10,{154\over17},{72\over17}\right)={2\over17}\left(85,77,36\right),
$$
and inradii $27/17$ and $28/17$ respectively.
I don't know if the above is relevant or not for your search: I tried to find a simple relation between the inradius of the composite triangle and the inradii of its two constituents, but without success. At the moment, I think a brute-force approach is still preferable to find all triangles with a given inradius.
EDIT 2.
That piece of code makes sense.
Tangency points divide triangle sides into $6$
parts (equal pairwise), whose lengths are $x$, $y$ and $z$
(see diagram below).
Let $2\alpha$, $2\beta$ and $2\gamma$ be the angles formed by the radii joining the incenter to the tangency points, so that $x=r\tan\alpha$, $y=r\tan\beta$, $z=r\tan\gamma$, and suppose $x\le y\le z$. The largest value of $x$ is attained when $\alpha=\beta=\gamma=60°$, so we have a first bound:
$$x\le r\tan 60°=\sqrt3 r.$$
Once $x$ is given, the largest value of $y$ is reached when $\beta=\gamma=90°-\alpha/2$, whence a second bound:
$$
y\le r\tan(90°-\alpha/2)={r^2+r\sqrt{x^2+r^2}\over x}\le{3r^2\over x}.
$$
Finally, once $x$ and $y$ are given one can compute:
$$
z=r\tan(180°-\alpha-\beta)={r^2(x+y)\over xy-r^2}.
$$
Notice that in the code the first and second bound are used in a weaker form, possibly for the sake of simplicity. But for large values of $r$, using the optimal bounds given above:
$$
1\le x\le \sqrt3 r,
\quad
x\le y \le {r^2+r\sqrt{x^2+r^2}\over x},
$$
could result in a faster execution.
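The pseudocode in the question can be rewritten as a short Python function (names and structure are mine, not from the original post). It enumerates all integer-sided triangles with inradius $r$ using the relation $z = r^2(x+y)/(xy-r^2)$ and the two bounds derived above:

```python
def triangles_with_inradius(r):
    """All integer-sided triangles with inradius r, via tangent lengths.

    Sides are x+y, x+z, y+z; the inradius condition r^2 (x+y+z) = x y z
    gives z = r^2 (x+y) / (x y - r^2).
    """
    found = set()
    x = 1
    while x * x <= 3 * r * r:                   # first bound: x <= sqrt(3) r
        for y in range(x, 3 * r * r // x + 1):  # second (weaker) bound
            p = r * r * (x + y)
            q = x * y - r * r
            if q <= 0 or p % q != 0:
                continue
            z = p // q
            if z >= y:                          # enforce x <= y <= z
                found.add(tuple(sorted((x + y, x + z, y + z))))
        x += 1
    return sorted(found)
```

For $r=2$ this returns exactly five triangles, including the two from the question and the $(9,10,17)$ example above.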
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2102886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Doubt about Morrey's inequality in Sobolev spaces. I am going through the statement of Morrey's inequality given in chapter $5$ of Evans' PDE book, on page $266$.
Statement is: Assume $p > n$. Then there exist a constant $C$, depending on $p, n$ such that $$\|u\|_{C^{0, \gamma}(\mathbb{R}^n)} \leq C \|u\|_{W^{1,p}(\mathbb{R}^n)}$$ for all $u \in C^1(\mathbb{R}^n) $ where $\gamma = 1-\frac{n}{p}$.
Now I am confused about the following:
$1)$
How can we say that any function in $C^1(\mathbb{R}^n)$ is also in $C^{0, \gamma}(\mathbb{R}^n)$?
$2)$ How is any function in $C^1(\mathbb{R}^n)$ also in $W^{1,p}(\mathbb{R}^n)$?
I am confused about this statement: for $(1)$ to hold, $u$ should have compact support, and for $(2)$ to hold, $u$ and $Du$ (the derivative) should be in $L^p(\mathbb{R}^n)$. Am I right or wrong?
|
The theorem states that if $u\in C^1(\mathbb{R}^n)$ is not an element of $C^{0,\gamma}(\mathbb{R}^n)$ (which might be the case), then it is not in $W^{1,p}(\mathbb{R}^n)$ (under the given circumstances). So
$\|u\|_{C^{0,\gamma}(\mathbb{R}^n)}=\infty$ implies $\|u\|_{W^{1,p}(\mathbb{R}^n)}=\infty$; this settles (1). As for (2), you are right again: $C^1(\mathbb{R}^n)$ is not contained in $W^{1,p}(\mathbb{R}^n)$, but then the right-hand side of the inequality is $\infty$ and the inequality still holds in a sense. The only point where you are wrong: Hölder continuous functions need not have compact support (take a constant function as an example). (See the next theorem in Evans for an application of Theorem 4.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2102982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Equivalence of two smooth curves in the plane having the same image The book I am reading is Complex Analysis by Stein; he first defines what the equivalence of two parametrized curves is, and then defines the complex integral on a smooth curve, which is independent of our choice of parametrization.
And, my question is about a basic fundamental definition as the following:
If two smooth complex curves $z(t):[a,b]\to C$ and $z_1(s):[c,d]\to C$ have the same image, does there exist a real function $f:[a,b]\to[c,d]$ such that $z(t)=z_1(f(t))$ for all $t$ in $[a,b]$, with $f$ continuously differentiable on $[a,b]$ and $f'(x)>0$?
The author's approach is: if there exists a continuously differentiable real function $f:[a,b]\to[c,d]$ with $f'(x)>0$ and $z(t)=z_1(f(t))=z_1(s)$, then we call the two curves $z(t)$ and $z_1(s)$ equivalent. I want to try to show that it suffices for the two curves to have the same image, i.e. $z(t)=z_1(f(t))=z_1(s)$, and that the other two requirements then follow.
My attempt: since $z(t)=z_1(s)=z_1(f(t))$ and $z(t)$ is smooth on $[a,b]$, $z_1(f(t))$ is continuously differentiable. Next, I guess that we can write $z_1^{-1}$ since $z_1$ is a one-to-one map, but I don't know how to show that $z_1^{-1}$ is differentiable; if $z_1^{-1}$ is continuously differentiable, then $z_1^{-1}(z_1(f(t)))=f(t)$ is continuously differentiable.
Thanks for any comments and help!
|
Hint: Consider the following cases; in each case the images of the curves are the same.
*
*On $ [0,2\pi]:$ $z(t) = e^{it}, z_1(t)= e^{-it}.$
*On $ [0,1]:$ $z(t) = t, z_1(t) = t^2.$
*On $ [0,2\pi]:$ $z(t) = e^{it}, z_1(t)= e^{2it}.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2103067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
An Elementary Inequality Problem Given $n+1$ positive real numbers $x_1, x_2,...,x_{n+1}$, suppose: $$\frac{1}{1+x_1}+\frac{1}{1+x_2}+...+\frac{1}{1+x_{n+1}}=1$$ Prove:$$x_1x_2...x_{n+1}\ge{n^{n+1}}$$
I have tried the substitution $$t_i=\frac{1}{1+x_i}$$
The problem thus becomes:
Given $$t_1+t_2+...+t_{n+1}=1, t_i\gt0$$
Prove:$$(\frac{1}{t_1}-1)(\frac{1}{t_2}-1)...(\frac{1}{t_{n+1}}-1)\ge{n^{n+1}}$$
which is equivalent to the following:
$$(\frac{t_2+t_3+...+t_{n+1}}{t_1})(\frac{t_1+t_3+...+t_{n+1}}{t_2})...(\frac{t_1+t_2+...+t_{n}}{t_{n+1}})\ge{n^{n+1}}$$
From now on, I think basic equality (which says arithmetic mean is greater than geometric mean) can be applied but I cannot quite get there. Any hints? Thanks in advance.
|
Starting from the last form, let $A=\sqrt[n]{t_1t_2...t_{n+1}}$. By AM–GM:
$$\frac{t_2+t_3+...+t_{n+1}}{n}\geq\sqrt[n]{t_2t_3...t_{n+1}}=\frac{A}{\sqrt[n]{t_1}}$$
so, for the first factor,
$$t_2+t_3+...+t_{n+1}\geq\frac{nA}{\sqrt[n]{t_1}},$$
and the same bound holds for each of the other factors. Multiplying the $n+1$ inequalities and dividing by $t_1t_2\cdots t_{n+1}=A^n$ gives exactly $n^{n+1}$.
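As a sanity check (not part of the proof), one can verify the substituted inequality $\prod_i\left(\frac{1}{t_i}-1\right)\ge n^{n+1}$ numerically on random positive weights summing to $1$; the helper below is my own, not from the answer:

```python
import random
from math import prod

def spot_check(n, trials=200, seed=0):
    """Check prod(1/t_i - 1) >= n^(n+1) for random positive t_i summing to 1."""
    rng = random.Random(seed)
    for _ in range(trials):
        raw = [rng.random() + 1e-9 for _ in range(n + 1)]
        s = sum(raw)
        t = [v / s for v in raw]           # n+1 positive weights, sum 1
        lhs = prod(1 / ti - 1 for ti in t)
        # small multiplicative slack for floating-point error
        assert lhs >= n ** (n + 1) * (1 - 1e-9), (t, lhs)
    return True
```

Equality holds exactly when every $t_i = 1/(n+1)$, i.e. every $x_i = n$.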
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2103191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
Probability of getting the first red ball before the first blue ball Consider an urn containing $N = N_1 + N_2 + N_3$ balls, more precisely, an urn containing $N_i \gt 0$ balls of colour $i$, $i = 1, 2, 3$. We draw the balls without replacement. Show that the probability of getting a ball of colour $1$ before a ball of colour $2$ is $\frac{N_1}{N_1 + N_2}$.
I've done a similar problem before, where we assumed we performed a sequence of independent trials with results $R_1$, $R_2$ or $R_3$, each occurring with probability $p_i \gt 0$, $i = 1, 2, 3$, and $p_1 + p_2 + p_3 = 1$. The probability that result $R_1$ happens before result $R_2$ is $\frac{p_1}{p_1 + p_2}$.
It looks like the above problem. In that case, the hint was to consider first, for $0 \lt n_1 \lt n_2$, the probability of getting the first $R_1$ result at the $n_1$-th trial and the first $R_2$ result at the $n_2$-th trial.
I've tried to compute something similar for this problem, but it led to something quite complicated....
|
Disregarding the irrelevant colour-3 balls, an outcome of the experiment gives you a random permutation of the $N_1+N_2$ balls of colours 1 and 2. The event in question is that the first of these balls is colour $1$. Since each of the $N_1 + N_2$ balls is equally likely to be first, and $N_1$ of them are of colour $1$, the probability is $N_1/(N_1 + N_2)$.
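The argument is easy to confirm by simulation (illustrative code, not from the answer): drawing without replacement is the same as reading off a random permutation, so we shuffle the urn and record which of colours 1 and 2 appears first.

```python
import random

def prob_colour1_first(n1, n2, n3, trials=100_000, seed=0):
    """Estimate P(a colour-1 ball is drawn before any colour-2 ball)."""
    rng = random.Random(seed)
    urn = [1] * n1 + [2] * n2 + [3] * n3
    hits = 0
    for _ in range(trials):
        rng.shuffle(urn)                 # a draw order = a random permutation
        for ball in urn:
            if ball != 3:                # colour-3 balls are irrelevant
                hits += (ball == 1)
                break
    return hits / trials
```

With $N_1=2$, $N_2=3$, $N_3=5$ the estimate comes out close to $2/5$, as the answer predicts.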
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2103290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Comparison of sum of the minimum distances from the vertices between two triangles
*
*Can I say that the sum of the minimum distances from the vertices in triangle ABC is less than the sum of the minimum distances from the vertices in triangle DEF, if given that the perimeter of triangle ABC is less than the perimeter of triangle DEF?
*Can I say that the sum of the minimum distances from the vertices in triangle ABC is equal to the sum of the minimum distances from the vertices in triangle DEF, if given that the perimeter of triangle ABC is equal to the perimeter of triangle DEF?
Thanks ahead!
|
No and no.
In the depicted configuration, the triangles $F_1 F_2 A_1$ and $F_1 F_2 A_2$ have the same perimeter, but the lengths of their Steiner nets (i.e. the sum of distances from the vertices to the Fermat point of such triangles), given by $F_2 V_1$ and $F_1 V_2$, are not the same.$^{(*)}$
In general, if $ABC$ is a triangle with angles $\leq 120^\circ$ and $P$ is its Fermat point,
$$(PA+PB+PC)^2 = a^2+b^2-2ab\cos(C+60^\circ) = \color{red}{\frac{a^2+b^2+c^2}{2}+2\sqrt{3}\Delta}.$$
Given such an identity, it is interesting to prove that in the vast majority of cases the length of the Steiner net is around $55\%$ of the perimeter. However, we cannot state that if the perimeter of $T_1$ is greater than the perimeter of $T_2$, the same holds for the lengths of their Steiner nets.
$^{(*)}$ We are just stating that an arc of an ellipse is never an arc of a circle, i.e. an ellipse is not a curve of constant curvature, which is kind of obvious.
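To make the counterexample reproducible, here is a small numeric check of the identity above (the helper name is mine; it assumes all angles are $\le 120^\circ$, so the Fermat point is interior):

```python
from math import sqrt

def steiner_length(a, b, c):
    """PA+PB+PC at the Fermat point, for a triangle with all angles <= 120 deg."""
    s = (a + b + c) / 2
    area = sqrt(s * (s - a) * (s - b) * (s - c))      # Heron's formula
    return sqrt((a * a + b * b + c * c) / 2 + 2 * sqrt(3) * area)
```

For an equilateral triangle of side $1$ this gives $\sqrt3$ (as expected), while the triangles $(5,5,6)$ and $(4,6,6)$ have equal perimeter $16$ but different Steiner lengths ($\approx 9.196$ vs $\approx 9.121$).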
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2103388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to use Triangle inequality to prove $|(x+y)-5| < 0.05$ when $|x-2| < 0.01$ and $|y-3| < 0.04$ It's the first day of calculus, and it's been almost a year since I've been in college algebra, and really stuck on the following homework question:
"Suppose that $| x - 2| < 0.01$ and $| y - 3 | < 0.04$. Use the
Triangle Inequality to show that $| (x + y) — 5 | < 0.05$."
I don't even know how to start this one, and I can't find anything remotely similar on google. I know that the triangle inequality is $|x+y| = |x| + |y|$, but I don't know how it relates to this question.
I would actually prefer if an alternative answer could be given, and allowed to work it out myself if possible, as I feel like I need to learn this
EDIT: So I am writing this the next day, apparently the reason I was so confused was that we hadn't gone over it in class yet -_- Thanks to everyone who helped though
|
One thing about the problem you were given that could confuse you
is that it uses the same symbols $x$ and $y$ that appear in the
Triangle Inequality (at least in the version of that fact that you've seen).
So rename the variables in the Triangle Inequality.
They're just arbitrary names for anything you could plug into the formula,
after all.
You can just as easily write
$$
\lvert a + b \rvert \leq \lvert a \rvert + \lvert b \rvert
$$
(but do remember it's an inequality, specifically $\leq$, not $=$).
Now you have a known fact with three things in absolute values, and
you have three other things in absolute values that you've been asked to
say something about. So let's try matching up the three things you
were asked about with the three things you know about.
For example, you could try $a = (x + y) - 5$ and $b = x -2$.
Then $a + b = \ldots$ ?
As you can see if you try that, it wasn't very useful.
So try a different way to match $a$ and $b$ with $x - 2,$
$y - 3,$ or $(x + y) - 5,$ and see what you get for $a+b.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2103534",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
If X and Y are two independent standard normal r.v.s, calculate $P(X+Y \in [0,1] \mid X \in [0,1])$ I'm not sure how to solve this. This is my attempt so far.
So if X,Y two independent standard normal r.v.s, we have:
$$\mathbb{P}(X+Y\in [0,1] \mid X \in [0,1])=\frac{\mathbb{P}(\{X+Y\in [0,1]\} \cap \{X \in [0,1]\})}{\mathbb{P}(X \in [0,1])}.$$
Moreover, we have:
\begin{split} \mathbb{P}(\{X+Y\in [0,1]\} \cap \{X \in [0,1]\}) = {} & \int_0^{1}dx\frac{1}{\sqrt{2 \pi}}e^{-\frac{x^2}{2}}\int_{-x}^{1-x} dy\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}}.
\end{split}
To calculate the integral, it looks like it might be better to switch to polar coordinates (?). Then we have:
\begin{split} \int_0^{1}dx\frac{1}{\sqrt{2 \pi}}e^{-\frac{x^2}2}\int_{-x}^{1-x} dy \frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}} = & \frac{1}{2\pi}\int_{-\frac\pi4}^{0} \, d \varphi \int_0^{\frac{1} { \cos \varphi}}r e^{-\frac{r^2}2} \, dr \\
& + \frac{1}{2\pi}\int_{0}^{\frac\pi2} \, d \varphi \int_0^{\frac1 { \sin \varphi + \cos \varphi}}r e^{-\frac{r^2}2} \, dr \\
= &- \frac{1}{2\pi}\int_{-\frac\pi4}^{0} \, d \varphi \int_0^{-\frac1 { 2\cos^2 \varphi}} e^t \, dt \\
& - \frac{1}{2\pi}\int_{0}^{\frac{\pi}{2}}d \varphi \int_{0}^{-\frac{1}{2 ( \sin \varphi + \cos \varphi) ^2}} e^t\,dt, \\
\end{split}
and I don't know what to do from here. Although, I'm not even sure if my procedure doesn't have any mistakes. Thanks for any insights.
|
Write
$$ \{ X +Y \in [0, 1], X \in [0, 1]\} = \{ (X, Y) \in D_1 \} \cup \{ (X, Y) \in D_2 \}, $$
where
\begin{align*}
D_1 &= \{(x, y) : 0 \leq x \leq 1 \text{ and } 0 \leq y \leq 1 \text{ and } 0 \leq x+y \leq 1 \} \\
D_2 &= \{(x, y) : 0 \leq x \leq 1 \text{ and } -1 \leq y \leq 0 \text{ and } 0 \leq x+y \leq 1 \}.
\end{align*}
$\hspace{14.5em}$
Then by symmetry,
\begin{align*}
\Bbb{P}(X +Y \in [0, 1], X \in [0, 1])
&= \Bbb{P}((X, Y) \in D_1) + \Bbb{P}((X, Y) \in D_2) \\
&= \tfrac{1}{4}\Bbb{P}((X, Y) \in \tilde{D}_1) + \tfrac{1}{8}\Bbb{P}((X, Y) \in [-1,1]^2)
\end{align*}
where
$$ \tilde{D}_1 = \{(x, y) : -1 \leq x + y \leq 1 \text{ and } -1 \leq x - y \leq 1 \} $$
$\hspace{11em}$
is the square with corners $(\pm 1, 0)$ and $(0, \pm1)$. Finally, using the fact that the law of $(X, Y)$ is rotation invariant, we can replace $\tilde{D}_1$ by its $45^{\circ}$ rotation without affecting the probability:
\begin{align*}
\Bbb{P}(X +Y \in [0, 1], X \in [0, 1])
&= \tfrac{1}{4}\Bbb{P}((X, Y) \in [-\tfrac{1}{\sqrt{2}},\tfrac{1}{\sqrt{2}}]^2) + \tfrac{1}{8}\Bbb{P}((X, Y) \in [-1,1]^2) \\
&= \Bbb{P}(X \in [0, \tfrac{1}{\sqrt{2}}])^2 + \frac{1}{2}\Bbb{P}(X \in [0, 1])^2 \\
&= \left( \Phi\left(\tfrac{1}{\sqrt{2}}\right) - \tfrac{1}{2}\right)^2 + \tfrac{1}{2}\left( \Phi(1)- \tfrac{1}{2}\right)^2.
\end{align*}
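A Monte Carlo check of this closed form (illustrative only, using the standard library's normal CDF):

```python
import random
from statistics import NormalDist

Phi = NormalDist().cdf

# closed form derived above, divided by P(X in [0,1]) = Phi(1) - 1/2
closed = ((Phi(1 / 2 ** 0.5) - 0.5) ** 2
          + 0.5 * (Phi(1) - 0.5) ** 2) / (Phi(1) - 0.5)

rng = random.Random(0)
hits = total = 0
for _ in range(400_000):
    x, y = rng.gauss(0, 1), rng.gauss(0, 1)
    if 0 <= x <= 1:
        total += 1
        hits += (0 <= x + y <= 1)
estimate = hits / total
```

Both values come out near $0.369$.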
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2103671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Trouble calculating a simple limit I'm not sure if my solution is correct. The limit is:
$\lim_{x\to0}\cot(x)-\frac{1}{x}$. Here is how I tried to solve it:
1. $\lim_{x\to0}\cot(x)-\frac{1}{x}$ = $\lim_{x\to0}\cot(x) - x^{-1}$
2. Since $\cot(0)$ is not valid, apply the de l'Hôpital rule $(\cot (x) )' = -\frac{1}{\sin^2 x}=\sin^{-2} x $ and $(x^{-1})'$ = $x^{-2}$
3. $\lim_{x\to0}\sin^{-2} (x)-x^{-2} = 0 - 0 = 0$
However I'm not sure that my logic is correct
|
For the first one, $\tan x \sim x$ for small $x$ by Taylor's yields $\cot x\sim 1/x$ which should make the limit pretty easy.
To apply L'Hôpital's I always convert it to the indeterminate form $\frac{\infty}{\infty}$ to avoid mistakes. Although you are justified in applying L'Hôpital's in this case as $\infty-\infty$ is an indeterminate form.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2103763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Group acting by isometries on a tree
I've read the above in a book called "Translation equivalence in free groups" by Ilya Kapovich, Gilbert Levitt, Paul Schupp, and Vladimir Shpilrain.
Why can the infimum be replaced by a minimum? How can I actually show that?
|
Here is a proof basically from Culler and Morgan's paper Group actions on $\mathbb{R}$-trees.
Say $g$ fixes no element of $X$, and consider the arc $[x,gx]$ with midpoint $m$, and the arcs $g[x,gx], g^{-1}[x,gx]$. Note that the translated arcs do not contain $m$, since then $m$ would be the midpoint of those arcs, so $g$ would fix $m$.
Now consider $a=g[x,gx] \cap [x,gx]$ and $b=g^{-1}[x,gx] \cap [x,gx]$, and let $c$ be an arc joining $a$ and $b$, and note $c$ touches both $a$ and $b$ at exactly one point. Let $A=\bigcup \{ g^n c \mid n \in \mathbb{Z} \}$, which is isometric to $\mathbb{R}$, and is an invariant set under the action of $g$. Any $x$ is translated at least the length of $c$, since if we draw a geodesic arc from $x$ to $A$, and translate by $g$, we get a disjoint arc joining $gx$ to $A$, separated by some $g^nc$.
So $\ell_X(g)=$"length of $c$", which is the minimal translation length.
If $g$ fixes a point then it is obvious the minimal is $0$, which will be the translation length.
(For the above it is helpful to run through the proof using an example in $F_2$ acting on the standard tree.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2103861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Let $G$ be a finite abelian group and $H$ be a subgroup of $G$. Then there is an epimorphism from $G$ to $H$. Let $G$ be a finite abelian group and $H$ be a subgroup of $G$. Then there is an epimorphism from $G$ to $H$.
(That is, an epimorphism from a finite abelian group to its subgroup.)
|
By Exercise II.5.8 in Hungerford's Algebra,
$G\cong \prod_{i=1}^{n}P_i$ is the direct product of its Sylow $p$-subgroups.
By Exercise 24.56 in Gallian's Contemporary Abstract Algebra,
$H\cong \prod_{i=1}^{n}(H\cap P_i)$ is the direct product of the Sylow $p$-subgroups of $H$.
So it is sufficient to consider the case $G$ is a $p$-group.
By this,
we can assume $G\cong \Bbb{Z}_{p^{r_1}}\oplus\Bbb{Z}_{p^{r_2}}\oplus\cdots \oplus \Bbb{Z}_{p^{r_s}}$
and $H\cong \Bbb{Z}_{p^{t_1}}\oplus\Bbb{Z}_{p^{t_2}}\oplus\cdots \oplus \Bbb{Z}_{p^{t_u}}$,
where $s\geq u$ and $r_i\geq t_i$ for each $i=1, 2, ..., u$.
Recall that for $\Bbb{Z}_n$,
if $d\mid n$,
then consider the onto natural homomorphism $\theta:\Bbb{Z}\to \Bbb{Z}_d$.
Note that $\ker{\theta}=\langle d\rangle$ and $\langle n\rangle\subseteq \ker{\theta}$.
By Hungerford, p.43, thm.I.5.6,
there exists an onto homomorphism $f:\Bbb{Z}_n\cong \Bbb{Z}/\langle n\rangle\to \Bbb{Z}_d$.
Apply this argument to each $i=1, 2, ..., s$.
That is,
there is an onto homomorphism $f_i$ from $\Bbb{Z}_{p^{r_i}}$ to $\Bbb{Z}_{p^{t_i}}$ for each $i=1, 2, ..., s$.
($f_i$ maps $\Bbb{Z}_{p^{r_i}}$ to the trivial group if $i>u$.)
Then $(f_1, f_2, ..., f_s)$ is an onto homomorphism from $G$ to $H$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2103974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Algebraic proof of the chain rule? I would like to prove the chain rule: given $f$ and $g$ polynomial functions, $h = f \circ g$, and $a \in \mathbb{R}$, that $h'(a) = f'(g(a)) \cdot g'(a)$. However, I would like to do so without using the limit definition of the derivative or any sort of differentiation rules.
So far, the only lead I've got is that given $P(x)$ a polynomial function, by the division algorithm, $P(x) = (x-a)^2Q(x) + R(x)$, and $R(x)$ is the equation of the line tangent to $P(x)$ at $x = a$.
|
This is not a full solution, but it does describe another approach to defining the derivative of a polynomial that does not involve limits, and that is related the division algorithm.
Let $P(x)$ be any polynomial. Choose some real number $a$. Then if we divide $P(x)$ by $x-a$ we get a quotient, $Q(x)$, and a constant remainder, $R$. In fact by the Remainder Theorem $R=P(a)$, so we have $P(x) = (x-a)Q(x) + P(a)$. Rearranging,
$$Q(x) = \frac{P(x)-P(a)}{x-a}$$
This equation has a natural interpretation: the quotient polynomial $Q(x)$ tells us the slope of the secant line passing through the graph of $P(x)$ at the points $(a,P(a))$ and $(x,P(x))$. With this interpretation, we can recognize that the slope of the tangent line to $P(x)$ at $x=a$ is just given by $Q(a)$. So we define $P'(a) = Q(a)$. (Since $Q(x)$ is a polynomial, there is no need to take a limit here.)
With this as background, let's set out to answer the question in the OP.
To compute the derivative of $f(g(x))$ at $x=a$ we divide $f(g(x))$ by $x-a$, obtaining a quotient $q_1(x)$, with
$$f(g(x))=(x-a)q_1(x) + f(g(a))$$
and then the derivative is $q_1(a)$.
To compute $f'(g(a))$ we divide $f(x)$ by $x-g(a)$, obtaining a quotient $q_2(x)$, with $$f(x) = (x-g(a))q_2(x) + f(g(a))$$
and then $f'(g(a)) = q_2(g(a))$.
To compute $g'(a)$ we divide $g(x)$ by $x-a$, obtaining a quotient $q_3(x)$, with $$g(x) = (x-a)q_3(x) + g(a)$$
and then $g'(a) = q_3(a)$.
The chain rule is then expressed by the identity $$q_1(a) = q_2(g(a))\cdot q_3(a).$$ This is what we need to prove. Can you take it from here?
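A concrete verification of that identity, using synthetic division to produce the quotients (the example polynomials $f(x)=x^2$, $g(x)=x^3+x$ and the point $a=2$ are my own choices, not from the answer):

```python
def divide_by_linear(coeffs, a):
    """Divide P(x) by (x - a) via synthetic division (Horner's scheme).
    coeffs is highest-degree-first; returns (quotient, remainder = P(a))."""
    acc, out = 0, []
    for c in coeffs:
        acc = acc * a + c
        out.append(acc)
    return out[:-1], out[-1]

def poly_eval(coeffs, x):
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc

# f(x) = x^2, g(x) = x^3 + x, a = 2, so g(a) = 10 and f(g(x)) = x^6 + 2x^4 + x^2
a = 2
f = [1, 0, 0]
g = [1, 0, 1, 0]
fg = [1, 0, 2, 0, 1, 0, 0]
q1, _ = divide_by_linear(fg, a)               # (f(g(x)) - f(g(a))) / (x - a)
q2, _ = divide_by_linear(f, poly_eval(g, a))  # (f(x) - f(g(a))) / (x - g(a))
q3, _ = divide_by_linear(g, a)                # (g(x) - g(a)) / (x - a)
```

Here $q_1(a) = 260 = q_2(g(a))\cdot q_3(a) = 20\cdot 13$, exactly as the chain rule predicts for the derivative of $(x^3+x)^2$ at $x=2$.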
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2104068",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
}
|
Find $y'$ given $y\,\sin\,x^3=x\,\sin\,y^3$? The problem is
$$y\,\sin\,x^3=x\,\sin\,y^3$$
Find the $y'$
The answer is given in the original post as an image (not reproduced here).
Can someone explain how to do this? Please help.
|
I have found that it helps students to understand implicit differentiation if first they think of both $x$ and $y$ as functions of some third variable such as $t$ and take the derivative of both sides with respect to $t$, being careful to use the product rule, chain rule, etc when needed. Then as a final step, multiply both sides by $dt$. Since the object usually is to solve for $y^\prime=\frac{dy}{dx}$ you can then divide both sides by $dx$ and solve.
\begin{eqnarray}
\frac{d}{dt}\left[y\sin(x^3)\right]&=&\frac{d}{dt}\left[x\sin(y^3)\right]\\
\sin(x^3)\frac{dy}{dt}+y\frac{d\sin(x^3)}{dt}&=&\sin(y^3)\frac{dx}{dt}+x\frac{d\sin(y^3)}{dt}\\
\sin(x^3)\frac{dy}{dt}+3x^2y\cos(x^3)\frac{dx}{dt}&=&\sin(y^3)\frac{dx}{dt}+3xy^2\cos(y^3)\frac{dy}{dt}\\
\sin(x^3)dy+3x^2y\cos(x^3)dx&=&\sin(y^3)dx+3xy^2\cos(y^3)dy\\
\sin(x^3)\frac{dy}{dx}+3x^2y\cos(x^3)&=&\sin(y^3)+3xy^2\cos(y^3)\frac{dy}{dx}\\
\left[\sin(x^3)-3xy^2\cos(y^3)\right]\frac{dy}{dx}&=&\sin(y^3)-3x^2y\cos(x^3)\\
\frac{dy}{dx}&=&\frac{\sin(y^3)-3x^2y\cos(x^3)}{\sin(x^3)-3xy^2\cos(y^3)}
\end{eqnarray}
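A quick sanity check of the resulting formula (code is mine, not from the answer): points with $y=x$ satisfy the original equation by symmetry, and along that line the numerator and denominator of the formula coincide, so the slope should be exactly $1$ wherever the denominator is nonzero.

```python
from math import sin, cos

def dydx(x, y):
    """Implicit derivative dy/dx from the computation above."""
    num = sin(y ** 3) - 3 * x ** 2 * y * cos(x ** 3)
    den = sin(x ** 3) - 3 * x * y ** 2 * cos(y ** 3)
    return num / den
```

For instance, at $(1,1)$ and $(0.5,0.5)$ the formula returns $1$.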
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2104203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Coincidence of standard derivative and weak derivative Let $f:\mathbb{R}^n \to \mathbb{R}$ be in $W^{1,p}(\mathbb{R}^n)$ and differentiable (in the classical sense) almost everywhere.
Is it true that the standard derivative and the weak derivative conicide?
When $p>n$ this is a corollary of Theorem 4.9 in "Lectures on Lipschitz Analysis" by Heinonen (in fact, in that case $f \in W^{1,p}$ implies that $f$ is differentiable almost everywhere).
What happens for other values of $p$?
|
This follows from the ACL characterization of Sobolev spaces, see https://en.wikipedia.org/wiki/Sobolev_space#Absolutely_continuous_on_lines_.28ACL.29_characterization_of_Sobolev_functions.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2104350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to express the cardinality of $∏_{1≤i≤n} A_i$ in terms of cardinalities $|A_1|, |A_2|, . . . , |A_n|$ I was given the problem:
For finite sets $A_1, A_2,\dotsc , A_n$ define their Cartesian product $\prod_{i=1}^n A_i$ as the
set of all $n$-sequences $(x_1, x_2,\dotsc, x_n)$, where $x_i \in A_i$ for every $i = 1, 2, \dotsc, n$.
Find a formula expressing the cardinality of $\prod_{i=1}^n A_i$ in terms of cardinalities
$|A_1|, |A_2|,\dotsc , |A_n|$.
And I am struggling to understand what it is actually asking for, could someone explain it to me please, thanks. :)
|
We know that $$|A\times B|=|A|\times|B|.\qquad(1)$$
We want to show $$\left|\prod_{i=1}^nA_i\right|=\prod_{i=1}^n|A_i|$$ is true for any natural number $n$, where $\prod_{i=1}^n|A_i|=|A_1|\times|A_2|\times\dotsc\times|A_n|$. So, we have use induction.
The base case $n=1$ ($|A_1| = |A_1|$) is trivial. Now suppose inductively that $\left|\prod_{i=1}^nA_i\right|=\prod_{i=1}^n|A_i|$. We want to show $$\left|\prod_{i=1}^{n+1}A_i\right|=\prod_{i=1}^{n+1}|A_i|.$$ Now we need to show $$\left|\prod_{i=1}^{n+1}A_i\right|=\left|\left(\prod_{i=1}^nA_i\right)\times A_{n+1}\right|\qquad(2),$$ i.e., the cardinality of the set $\prod_{i=1}^{n+1}A_i$ is equal to the cardinality of the set $\left(\prod_{i=1}^nA_i\right)\times A_{n+1}$. So $$\begin{aligned}\left|\prod_{i=1}^{n+1}A_i\right|&=&\left|\left(\prod_{i=1}^nA_i\right)\times A_{n+1}\right|&\qquad\text{by }(2)\\&=&\left|\prod_{i=1}^nA_i\right|\times |A_{n+1}|&\qquad\text{by }(1)\\&=&|A_1|\times|A_2|\times\dotsc\times|A_n|\times|A_{n+1}|&\qquad\text{by induction hypothesis}\\&=&\prod_{i=1}^{n+1}|A_i|.\end{aligned}$$
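The formula is easy to sanity-check with `itertools.product` (an illustration, not part of the proof):

```python
from itertools import product
from math import prod

# three small finite sets A1, A2, A3
sets = [{1, 2}, {"a", "b", "c"}, {True, False}]

n_tuples = len(list(product(*sets)))      # |A1 x A2 x A3|
n_formula = prod(len(A) for A in sets)    # |A1| * |A2| * |A3| = 2 * 3 * 2
```

Both counts agree, matching $\left|\prod_i A_i\right|=\prod_i |A_i|$.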
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2104483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
If $ \sin\theta + \cos\theta = \frac 1 2$, what does $\tan\theta + \cot\theta$ equal? A SAT II question asks:
If $ \sin\theta + \cos\theta = \dfrac 1 2$, what does $\tan\theta + \cot\theta$ equal?
Which identity would I need to solve this?
|
Hint
$$\sin\theta+\cos\theta=\frac{1}{2} \implies \left( \sin\theta+\cos\theta \right)^2 = \frac{1}{4} \iff \color{blue}{\cos\theta\sin\theta} = \cdots$$
and
$$\tan\theta+\cot\theta = \frac{\sin\theta}{\cos\theta}+\frac{\cos\theta}{\sin\theta} = \frac{\cos^2\theta+\sin^2\theta}{\cos\theta\sin\theta}= \frac{1}{\color{blue}{\cos\theta\sin\theta}} = \cdots$$
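As a quick numerical sanity check of the two displayed steps (the particular solution $\theta$ below, obtained from $\sin\theta+\cos\theta=\sqrt{2}\sin(\theta+\pi/4)$, is just one convenient branch choice):

```python
import math

# One solution of sin(t) + cos(t) = 1/2 (branch choice is arbitrary).
t = math.asin(1 / (2 * math.sqrt(2))) - math.pi / 4
assert abs(math.sin(t) + math.cos(t) - 0.5) < 1e-12

# Squaring gives 1 + 2*sin(t)*cos(t) = 1/4, so sin(t)*cos(t) = (1/4 - 1)/2.
sc = math.sin(t) * math.cos(t)
assert abs(sc - (0.25 - 1) / 2) < 1e-12

# tan + cot reduces to 1/(sin*cos), as in the second display.
tan_plus_cot = math.tan(t) + 1 / math.tan(t)
assert abs(tan_plus_cot - 1 / sc) < 1e-9
```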
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2104575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
Prove that $f$ is increasing if and only if a given inequality holds Let $f: [0, \infty) \to \mathbb{R}$ be a continuous function. Prove that $f$ is increasing if and only if:
$$\int_a^b f(x) dx \leq bf(b) - af(a), \, \forall \, \, 0 \leq a \leq b.$$
I have no difficulties in proving that if $f$ is increasing then the inequality holds. But I haven't figured out yet how to prove it the other way around, that is knowing the inequality and proving that $f$ is increasing.
Thank you!
|
Note that if $f(a) > f(b)$ then by continuity we can choose $c\in(a, b]$ such that $f(x) > f(c)$ for all $x\in [a, c)$, and hence $$\int_{a} ^{c} f(t) \, dt>(c-a) f(c).$$ On the other hand, the given condition on $f$ says that this integral is at most $cf(c) - af(a)$. Combining the two bounds gives $(c-a)f(c) < cf(c) - af(a)$, i.e. $af(a) < af(c)$. For $a > 0$ this yields $f(a) < f(c)$, contrary to $f(a) > f(c)$; for $a = 0$ the two bounds on the integral already contradict each other. Thus we must have $f(a) \leq f(b)$.
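For intuition, the criterion can be probed numerically; this rough sketch (the test functions, interval, and midpoint-rule integrator are illustrative assumptions, not part of the proof) shows the inequality holding for an increasing function and failing for a decreasing one:

```python
# Midpoint-rule approximation of the integral of f over [a, b].
def integral(f, a, b, n=100_000):
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# Does f satisfy the integral inequality on [a, b] (up to numeric tolerance)?
def criterion(f, a, b, tol=1e-6):
    return integral(f, a, b) <= b * f(b) - a * f(a) + tol

assert criterion(lambda x: x**2, 1.0, 3.0)      # increasing: inequality holds
assert not criterion(lambda x: -x, 1.0, 3.0)    # decreasing: inequality fails
```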
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2104674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
}
|
If $f(1)$ and $f(i)$ real, then find minimum value of $|a|+|b|$ A function $f$ is defined by $$f(z)=(4+i)z^2+a z+b$$ $(i=\sqrt{-1})$for all complex number $z$ where $a$ and $b$ are complex numbers. If $f(1)$ and $f(i)$ are both purely real, then find minimum value of $|a|+|b|$
Now
$f(1)=4+i+a+b$, which means the imaginary part of $a+b$ is $-1$, and $f(i)=-4-i+a \cdot i+b$, which gives the imaginary part of $a \cdot i+b$ as $1$; but how do I proceed further to find the desired value?
|
If $a=a_1+a_2i$ and $b=b_1+b_2i$ then you have that $b_2+a_2=-1, b_2+a_1=1$. So you can pick any $b_1$, which, since you are seeking a minimum, means you can choose $b_1=0$. You get $b=b_2i$ and $a_2=-(b_2+1)$ and $a_1=1-b_2$. So you are trying to minimize:
$$\sqrt{a_1^2+a_2^2}+\sqrt{b_1^2+b_2^2}=\sqrt{(-1-b_2)^2+(1-b_2)^2}+|b_2|=\sqrt{2+2b_2^2}+|b_2|$$
Apply the usual calculus tricks to minimize $\sqrt{2+2x^2}+x$ with $x\geq 0$. You should find that the minimum occurs at $x=0$, so the minimum value of $|a|+|b|$ is $\sqrt{2}$.
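A brute-force grid search confirms the calculus conclusion numerically; this sketch uses the answer's parametrisation with $b_1=0$, so $a=(1-b_2)-(1+b_2)i$ and $b=b_2 i$ (the grid bounds and step are arbitrary choices):

```python
import math

# |a| + |b| as a function of b2, i.e. sqrt(2 + 2*b2^2) + |b2|.
def total(b2):
    return math.hypot(1 - b2, 1 + b2) + abs(b2)

# Scan b2 over [-3, 3] in steps of 0.001.
best = min((k / 1000 for k in range(-3000, 3001)), key=total)
assert abs(best) < 1e-12                         # minimiser is b2 = 0
assert abs(total(best) - math.sqrt(2)) < 1e-12   # minimum value is sqrt(2)
```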
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2104800",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}